Look into the history of object brokering in object-oriented environments. I was around when KDE went from CORBA to DCOP to DBUS, but not involved in the decisions. Basically: object sharing between processes with security, type translation, and a few other things. In the Microsoft world this was called the "Component Object Model", if my memory is correct.
Based on the various other descriptions of the DBUS features, I kept thinking "this sounds like a message passing model with a bit of CORBA hiding in there". It's got a bit of SLP and AMQP/MQTT to it, just on a local machine instead of a distributed network. It solves a lot of problems around service discovery, message passing structure, and separating transmission-layer details from service API design. Raw sockets/pipes can always be used to pass data (it's how DBUS does it!), but there are the additional problems of where to send the data and how to ensure data formatting, which sockets/pipes by design cannot solve, since they're simple and foundational to how interprocess communication works in the kernel.
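The service-discovery side is easy to poke at from a shell, if you want to see it in practice. dbus-send ships with most desktop distros; this asks the bus daemon itself which bus names (services) are currently registered, assuming a running session bus:

```shell
# List every name currently registered on the session bus, both
# well-known service names and unique connection names.
dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
  --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames
```

The reply comes back as a typed array of strings, which is the type-translation part of the story: the wire format is marshalled for you, unlike a raw socket.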
We all know that won’t happen because most users don’t give a shit about things like conserving hardware or the resources that went into making them, and will just use this as an excuse to splurge on the latest shiny device.
Herein lies the rub where the discourse online always fails. It’s easy to blame the rich, corporations, politicians, etc. but the end of the day they’re simply doing what the masses want. We are the problem.
Now that said, I understand the appeal of blaming a smaller segment of the population, because it's easier to shift blame and it's easier to force change that way. But rest assured: if Apple stops making a new phone every year, their brand dies, unless everyone else stops doing it too. They're doing it because we are conditioned to want it.
fuck me harder ms yes daddy yes please mmmm thats good keep going harder harder HARDER YES
fr tho, that 'smaller segment of the population' owns the means of production/computation. so yes, I do blame them. and so should you and anyone with more than a couple folds in their brain.
If you're saying everyone is proving your point, I think you disproved your own point.
They’re doing it because we are conditioned to want it.
Conditioning a behavior is basic Psych 101. It's a controlled external stimulus which elicits a desirable action from the subject. So a conditioned consumer is subject to external stimuli which elicit them to spend. I wouldn't blame the subject here, I'd blame the one doing the conditioning.
Regardless of all that, you are right but also so wrong. We are all to blame, but by that I mean corporations and us. Politicians and the oligarchs aren't responsible for what you or I do. What they are responsible for is manipulating systems to benefit themselves over the interests of the general public.
Since politicians and the business elite wield so much influence, that makes them more responsible than you or I, who really can't make a big difference on our own. You're blaming the proletariat for being the proletariat, but we don't choose whether or not we are. You can work as hard as you can your entire life and you'll never amount to the level of power and influence Elon, Jeff, Mark, Bill, or Steve had/have.
Nah, that’s shit. We are not the problem. The people in this thread don’t seem to be the kind of people to go out and buy the latest device every 6 months. I keep my phones for years until either the performance or battery becomes nearly unusable for me. I install Linux on older hardware (and newer hardware) and buy new hardware when necessary, not every time it comes out.
You can blame the average person, sure, but saying all of us is just incorrect.
Nah, I haven't bought a new machine in years, sticking to the old school, bro. But I get it: a lot of people who are gamers or streamers would definitely buy in, and I get why too, because as long as internet speeds keep increasing, there will be more streamers for the next big game or influencer chit-chat, etc.
Let me raise a side concern here since the core question already seems to be answered: reconsider the Nvidia GPUs. Depending on what you do, you might encounter significant shortcomings at the moment, since they do not play perfectly with Wayland yet (which is clearly the future). So do some research in this direction first before you pull the trigger.
I second this. For a while my Nvidia card worked great on pop os for gaming. Then one day a kernel upgrade hosed it all. I tried for days to get it working again and finally realized I was swimming upstream and always would be. Bought an amd card and now I hardly have any issues.
Nvidia usually worked fine for me outside of gaming, but I literally found some distros that had severe issues, sometimes even in the installer. Nvidia intentionally fucks over Linux support, so I decided it was best not to support them anymore anyhow.
Sidenote, my games all look great and run better than they did on the Nvidia card. I always thought amd graphics cards were second rate, but I don’t think that’s at all true these days.
If all you want to do is play games, then AMD is the way to go. Better price to performance ratio.
But if you want to play around with AI, Nvidia is the only game in town. AMD still does not properly support GPGPU on their consumer cards. It’s infuriating and embarrassing really.
The performance I’m seeing on AMD makes me tend to agree with your first point. Looks fantastic and framerates are really nice.
I'm not really interested in AI except for the realtime background noise removal I'd heard about like 2 years ago, which apparently Nvidia RTX cards can do. But in all this time I never set it up, so I obviously wasn't that interested in it!
Surely amd will get gpgpu support working at some point!
The sense of entitlement in some of the replies on that post is absolutely awful.
As for me personally, I want to love Wayland. It has great performance on ALL my devices (except one with an Nvidia GPU) and is super smooth compared to X11!
However… the security model of Wayland makes it very difficult, if not impossible, to easily get a remote desktop going. Wayvnc doesn't support the most popular desktop environments, depending on how Wayland was compiled, and the built-in desktop sharing on distros that have switched over to Wayland often requires very specific Linux-only VNC and RDP clients; otherwise you run into odd errors.
I really hope the desktop sharing situation improves, because it's a pretty big showstopper for me. On X11 you just install and run x11vnc from a remote SSH session and you have immediate session access with VNC from Linux, Android, and Windows. If you want lockscreen access too, you run it as root and provide the greeter's Xauth credentials. But Wayland's not so simple, sadly, AFAICT…
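For reference, the X11 workflow in question is roughly this (user@host and display :0 are placeholders for your own setup; -localhost keeps the VNC port off the network so it's only reachable through an SSH tunnel):

```shell
# On the remote box, attach x11vnc to the running X session:
ssh -t user@host 'x11vnc -display :0 -localhost -once'

# In another terminal, forward the default VNC port and connect:
ssh -L 5900:localhost:5900 user@host
vncviewer localhost:5900
```

Three commands and you're in, which is exactly what Wayland currently has no general-purpose equivalent for.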
Waypipe is something I’ve found out about recently though, so need to check that out and see how well it works at the moment. If anyone has any helpful info or pointers please share, I’m completely new to Wayland and would appreciate it!
Looked at it. It seems GNOME are doing dirty hacks, since AFAIK they don’t have tearing control in their compositor.
And game fps doesn't directly translate to perceived performance on Wayland. For example, in Xonotic, 90-95 fps on Wayland feels laggy, but if I enable glFinish I get 80 fps in the same area and the game feels much better. However, that causes the game to run at 20-30 fps in places where it would otherwise run at 30-60, and causes more lag there. For context, running the game on X11 without glFinish gets me 110 fps in the same area, which feels like 110 fps. Running the game on kmsdrm gets me 120 fps in the same area.
I know this is not useful for most use cases, but if you log in to the desktop on the remote Wayland machine locally first, then RD will work as expected. So if you can change the behaviour of the remote desktop to stay unlocked (i.e. it's in a secure place where others cannot just access the device), then RD will work with Wayland.
I use NoMachine (since I manage all sorts of devices, and it's nice that there is a client and server for everything, including phones/ARM) and it works for me because many of the machines are actually VMs and I can keep the desktops unlocked and logged in. NoMachine's solution for Wayland is to disable it and use X11!
But I wish many of the RD developers would just embrace Wayland and add/rewrite code to support it (if it is in their scope, I don't know). It might not be, since I am aware of Waypipe and PipeWire, but I'd assume that RD devs would still need to include support for those.
Before you change anything, it would be good to use dd and save the whole drive to a bigger drive, or maybe compress it with gzip while using dd to save it to a slightly smaller one. That takes a very long time, but gives you the ability to start over with your recovery. Only do that if it's worth waiting several hours.
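A sketch of that approach, hedged: /dev/sdXN below is a placeholder for the real partition, you'd run this as root, and the device should be unmounted. Wrapping the two directions in functions also means they work on plain files, which is handy for a dry run before touching real disks.

```shell
# Compressed partition backup with dd + gzip, and its inverse.

backup_partition() {
    # Stream the raw partition through gzip so the image file can end
    # up smaller than the partition itself.
    dd if="$1" bs=4M status=none | gzip > "$2"
}

restore_partition() {
    # Undo a failed recovery attempt by writing the image back.
    gunzip -c "$2" | dd of="$1" bs=4M status=none
}

# Usage (placeholders):
#   backup_partition  /dev/sdXN partition.img.gz
#   restore_partition /dev/sdXN partition.img.gz
```

A larger bs= than dd's 512-byte default matters a lot for throughput here, as mentioned further down the thread.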
photorec can also recover some files by looking at the raw data still there, if all else fails.
Your "of" can also just be a regular file, if that's easier to work with than needing to create a new partition for the copy.
I'll also say you might want to use the block size parameter "bs=" on "dd" to speed things up, especially if you are using fast storage. Using "dd" with "bs=1G" will speed things up tremendously if you have at least 1GB of free RAM.
This clones one partition to another, which is fine if you have a free partition with enough space lying around. My code was for taking a compressed backup of the partition, to be restored later if needed. It's less convenient, but it doesn't require as much space, nor does it require an entire partition.
Depends on what you mean by "broken". If broken means it has bad sectors or another hardware issue, then yes, OP should transfer the data to a healthy partition and work from there. Though it certainly won't hurt if he attempted to recover data from the broken partition (worst case, the HD dies and OP restores the backup on a healthy HD). However, he said "i broke my partition", which makes me think it's a software issue, not hardware, in which case it would be faster to recover data directly after taking a backup.
That’s what people always underestimate about data recovery jobs: you need lots of space. One copy for safekeeping. One to work on. One disk of the same size you store recovered files on.
Whenever friends or family ask me to look at a disk I always tell them to give me the disk and two new ones of the same or greater capacity and I’ll give it a shot. Usually they discover the data isn’t that important after all. If it is I have all I need.
I thought a more advanced distro, that is still stable, would be good overall. However, not getting new software for a long time sounds quite annoying.
Arch is actually not as bad as many say. It’s pretty stable nowadays, I even run Arch on some servers and I never had any issues. It gives you the benefits that you can basically find any package in the AUR and everything is up-to-date. Try it out, if you don’t like it, you can still switch to something else.
I’m wanting to challenge myself to get much better with Linux, partitioning, CLI, CLI tools
The best way to learn the CLI is to use it. Try not to use your graphical file manager for a while and only interact with the file system through the terminal; that teaches you a lot.
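For anyone unsure where to start, these are the everyday file-manager tasks done from the terminal instead (file and folder names here are just made up for the demo):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"
echo "draft" > report.txt

mkdir -p projects/notes        # create nested folders in one go
cp report.txt projects/notes/  # copy a file
mv projects/notes/report.txt projects/notes/report-v2.txt  # rename/move
ls -lah projects/notes         # inspect the result
```

Once these are muscle memory, the man pages (`man cp`, `man mv`, …) are the next step.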
I’ve been considering installing Arch the traditional way, on my X220, as a way to force myself to improve. Is this a good way to learn more about Linux and a Linux system in general?
Yes.
I always hear good things about the Arch Wiki.
It is truly fantastic.
Is there any other tips someone can give me, to sharpen my Linux skills?
Use the system, don’t be shy, try different things out. If you are scared that you might break something, try it out in a VM. Break your VM and try to fix it. That teaches you a lot.
I was even considering trying out Gentoo on my X220, but the compiling times sound painful.
I would not recommend that, updating packages will take ages, it’s not a great experience.
Thanks for this, I think I will give Arch Linux a go, and avoid GUI file management. My plan is to daily drive my X220 for more lightweight tasks as it’s a nice laptop to use. So using Arch on it may just force me to have to use Arch as a daily driver if I want to use a nice laptop keyboard.
Another thing that helped me: get comfortable in the terminal. Obviously you have to learn some commands and how they work, but also configure your shell and commonly used CLI tools. It makes the experience so much more pleasant. Install a nice shell prompt, set up some aliases for frequently used commands, learn the basics of shell scripting and write your own useful little scripts for things you often have to do, maybe start using Vim and configure it the way you like it. Also explore other shells. Bash is the default shell, but there are better options like zsh or fish. You can watch this video to get some inspiration: youtube.com/watch?v=KKxhf50FIPI. This is actually pretty close to my shell setup. If you want to start customizing bash, check this out: youtube.com/watch?v=b3W7Ky_aaaY.
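To make the alias/scripting suggestion concrete, here are a few starter lines for ~/.bashrc or ~/.zshrc (the names are just personal suggestions, not any standard):

```shell
alias ll='ls -lah'      # detailed listing with human-readable sizes
alias gs='git status'   # quick status check

# A tiny convenience function: make a directory and cd into it.
mkcd() { mkdir -p "$1" && cd "$1"; }
```

Open a new terminal (or `source` the file) and the shortcuts are available everywhere; growing a file like this over time is most of what "configuring your shell" means in practice.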
To continue learning, maybe subscribe to some Linux-oriented YouTube channels. DistroTube makes great videos about Linux, tiling window managers, how to use various commands, how to configure your shell environment, etc. He also reviews many Linux distros or explains why free software is important. If you search any Linux-related topic on YouTube, chances are that DT already made a great video about it.
Arch is actually not as bad as many say. It’s pretty stable nowadays, I even run Arch on some servers and I never had any issues.
Not even just nowadays. My desktop is running a nearly 10-year-old install. It's so old, it not only predates the installer, it predates the "traditional" way and used the old TUI installer. It even predates the sysvinit-to-systemd switch! The physical computer has been ship-of-Theseus'd twice.
Arch is surprisingly reliable. It's not "stable" in the sense of never changing: things do change, and you sometimes have to update configs or even your own software. But it's been so reliable I never even felt the need to go look elsewhere. It just works.
Even my Arch servers have been considerably more reliable and maintenance-free than the thousands I manage at work with lolbuntu on them. Arch does so little on its own, there's less to go wrong. Meanwhile, the work boxes can't even update GRUB noninteractively: every now and then a GRUB update pops a debconf screen, hangs unattended-upgrades until manually fixed, and hoses up apt as a whole.
Similar experience. My current install is not as old due to hardware failure, but I've been using Arch since 2007-ish and it's been stable enough all along. Combined with sort of losing interest in being an admin for a hobby over the last few years, I've honestly gotten kind of bad at administrating the thing, haha. But it hardly matters, because issues are rare.
Third, same. Before my current PC, my old PC was a 6-year-old install. I never had problems that weren't caused by me (although I could count on one hand the number of times I had issues in those 6 years).
My current install is more than 2yo trouble free.
I've DE hopped and fiddled with heaps of stuff in that time too.
Same here. Been going on 8 years with the same install and it has never let me down. The only time something “breaks” is when I’m the one who caused it, and it’s always been easily fixable with arch-chroot from the iso. Although I haven’t even had to resort to that in 3 or 4 years now.
While this is cool, I am interested in a comparison with a fresh Windows install. This article says it's out of the box from HP; I wouldn't be surprised if they have some dumb processes running, tanking performance… I'm confident Linux would still outperform, but this is quite an insane gap on display.
That's a fair comment. But on the other hand, if you are spending a fortune on a CPU the size of your hand (look at that thing in the article!) then there's a good chance you're using it for business purposes, and either you or your IT department will be very keen to have a completely vendor-supported stack. Enthusiasts with fresh OS installs will not be representative of users of this tech; AMD haven't really been targeting it at gamer desktops.
Of course, comparing both would be even better, see whether it is an HP crapware issue…
Totally agree, it’s two different tests and use cases. Most people will run it how it comes out of the box and that’s probably more representative of the real world.
I just think it's not entirely fair to say "Windows is 20% slower" when we have no idea which trash HP loaded it up with. If I managed an IT dept and learned my $$$$ hardware lost 1/5 of its performance, I'd certainly be pushing HP for solutions. Or maybe they'd prefer to take 20% off the price?
Don't most businesses cut the bloat out and put their own builds on it? Sure, they put their own software on that will hurt performance, but it seems fresh vs. fresh would give better metrics.
Nouveau is stable and runs, but don't expect the best performance. The official NVIDIA driver is unstable and lacks proper Wayland support, but has decent performance. I'd go with anything but an NVIDIA GPU.
You have a list of systems you’ve connected to in known_hosts, though. And the config file is easy enough to parse - throwing away the stuff you don’t care about - to expand on that list.
I can (and do) just read the ~/.ssh/config file if needed; it's quite legible. In most cases, however, zsh autocompletion does all the heavy lifting for me (ssh ser(tab) -> ssh servername).
Still a cool idea for a script, and if it works well for you, more power to you. I'm just saying there are more ergonomic and universally applicable solutions. (Only mentioning this since you said "I couldn't find a decent solution to this problem".)
Great attempt at making a tool; I just think your use case might not be as appealing to others. If I need to list the hosts I have config for, I would use: grep Host ~/.ssh/config. If your list of servers is too long to remember, you might want to look at Ansible for configuration. But whatever works for you :)
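Combining the two ideas from this subthread (parsing the config and mining known_hosts), a small sketch could look like this. The file paths are the usual defaults but overridable so it can be tried on test files; wildcard Host patterns are skipped, and hashed known_hosts entries (HashKnownHosts yes) won't yield readable names.

```shell
# Print a deduplicated list of SSH host candidates from
# ~/.ssh/config and ~/.ssh/known_hosts.
ssh_hosts() {
    {
        # Host lines from the config, dropping wildcard patterns.
        awk '/^Host / && $2 !~ /[*?]/ {print $2}' \
            "${1:-$HOME/.ssh/config}" 2>/dev/null
        # First field of known_hosts, trimming "host,ip" pairs.
        cut -d' ' -f1 "${2:-$HOME/.ssh/known_hosts}" 2>/dev/null |
            cut -d, -f1
    } | sort -u
}
```

That output feeds naturally into completion or a fuzzy picker, which is roughly what the zsh completion mentioned above already does for you.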
You’re trying to use Gnome the way you’re used to using a desktop.
If you try and learn the Gnome way, you’ll have a better time.
To be honest I had the same problem when I first went from Windows to OSX, I was struggling, trying to make OSX familiar, but when I decided to learn the Apple way, everything became easier.
Well I think that’s the issue here. It’s not geared towards a group of people, but towards an ideal workflow which is the Gnome Way.
If you're someone who likes to have masses of applications or windows open, you can certainly use Gnome, but Gnome is more focused on one or two windows per desktop/workspace, and I encourage you to embrace that way of working too.
Again, it’s not about people, but the intended user experience.
I remember when Windows first introduced My Documents folder and subfolders for images, music, video. To begin with I rejected this folder because I wanted my folders in the root C: as I had always done. Eventually I decided to use these folders and I learned to appreciate the convenience of this, including all the additional thumbnails and meta data that the OS provided automatically for those folders.
It makes sense to use the words that people are most used to, and bluescreen/BSOD has been the go-to term for describing a crash/error screen, even if it's not blue, for a while now.
I’d just like to interject for a moment. What you’re referring to as Linux,
is in fact, systemd/Linux, or as I’ve recently taken to calling it, systemd plus Linux.
Linux is not an operating system unto itself, but rather another free component
of a fully functioning systemd system made useful by the systemd corelibs, shell
utilities and vital system components comprising a full OS as defined by IBM.
Many computer users run a modified version of the systemd system every day,
without realizing it. Through a peculiar turn of events, the version of systemd
which is widely used today is often called “Linux”, and many of its users are
not aware that it is basically the systemd system, developed by Lennart Poettering.
There really is a Linux, and these people are using it, but it is just a
part of the system they use. Linux is the kernel: the program in the system
that allocates the machine’s resources to the other programs that you run.
The kernel is an essential part of an operating system, but useless by itself;
it can only function in the context of a complete operating system. Linux is
normally used in combination with the systemd operating system: the whole system
is basically systemd with Linux added, or systemd/Linux. All the so-called “Linux”
distributions are really distributions of systemd/Linux.