Kent here talks about how Linux-related companies need to focus on putting their resources towards collaborating with and helping big companies port their software, and THEN introducing open source software to new users, instead of remaking desktop environments, pushing companies away, and overall doing the same thing over and...
Oh, just invest in Adobe and get it developed for Linux - easy, why didn't anyone think of this before. And better yet, if they do invest they could make it a PopOS exclusive!!?!?!! \s
It won't work because Adobe does not care, and there is not enough market share on Linux for them to bother with it. No amount of money PopOS has will convince Adobe to develop for Linux, and there is no way in hell Adobe would give them access to the source to do the port themselves. That whole argument is a non-starter.
This may not be a Linux-specific problem, as I had the exact same issue earlier with Windows 7 and it’s one of the reasons I installed Linux in the first place....
Have you tried updating and rebooting your system? I have had this happen a few times, and that almost always fixes it for me (the reboot more than anything, though it is generally good to have your system up to date). Otherwise it is typically something missing on your host system (like properly installed drivers) - but if the game was running before, that is less likely to be the issue and a reboot is usually enough. So start with that.
This isn’t Linux, but Linux-like. It’s a microkernel OS written in the Rust programming language. It’s still experimental, but I think it has great potential. It has a GUI desktop, but the compiler isn’t quite fully working yet....
I am not sure that is quite right. I don't think Rust supports just enabling dynamic linking of its dependencies. It can talk to dynamically linked libraries - which is how FFI works - and you can compile Rust crates to be dynamically linked. But when you go down this route you are talking over the C ABI. That requires some effort from the code author to make their APIs exportable as C types, and it means you lose all of Rust's safety guarantees at that boundary.
I also don't think that Rust inlines across a crate boundary unless the function is marked #[inline] or LTO is enabled - inlining across crate boundaries is expensive, so it is only done when explicitly asked for. The bigger loss is features like generics and traits, which simply cannot be expressed over the C ABI.
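A minimal sketch of what that boundary looks like, assuming a library crate built with crate-type = ["cdylib"] in its Cargo.toml (the names here are illustrative):

```rust
// Exported with an unmangled symbol name and the C calling convention,
// so other programs can find and call it in the compiled shared
// library. Only C-compatible types can cross this boundary.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// By contrast, a generic function like this has no single concrete
// symbol to export, so it cannot be offered over the C ABI at all.
pub fn largest<T: Ord>(a: T, b: T) -> T {
    if a > b { a } else { b }
}
```

The caller only ever sees a plain C function - no lifetimes, no Result, no trait bounds - so all of that has to be re-checked or trusted by hand on both sides.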
Most are done with the latter. But the nice thing is that once you have done it once, it is much easier to keep things up to date and in sync from then onwards. You can also do it piecemeal - set up one application at a time and migrate things over one by one.
painstakingly manually code every unique facet
That makes it sound a lot worse than it actually is. It is only a bit more effort than setting something up manually for the first time, and it pays for itself many times over the next time you need to reinstall or set up a new system - assuming you keep making changes in the code rather than directly on your system each time.
You don't even need a separate partition - just don't format, and don't delete the /home folder. You can even keep the /etc folder as well to preserve system-wide settings.
I have done something similar following this post - loads of others have created scripted installers for Arch for their specific use cases, and this guide takes it one step further with custom Arch meta-packages that hold the dependencies and system-wide config.
You can also do similar things with tools like Ansible or SaltStack. These take the approach of defining your configs up front and automating the setup of a system, rather than backing up or cloning an existing one. So they are more effort initially, but they can keep multiple systems' configs in sync with far less effort than trying to build a backup/restore process around organically grown configs.
that wouldn’t work (I think) because my laptop has vastly different hardware
It should not matter: you can install all the packages all of your systems need - for example the NVIDIA, AMD and Intel graphics drivers together - and the kernel will only load the ones for the hardware it booted with. And if you really do need different configs or packages on different systems, the various approaches all have ways to do that.
Heya folks, some people online told me I was doing partitions wrong, but I’ve been doing it this way for years. Since I’ve been doing it for years, I could be doing it in an outdated way, so I thought I should ask....
I set it up this way so that if I need to reinstall Linux, I can just overwrite / while preserving /home and just keep working after a new install with very few hiccups.
Even with a single partition for / and /home, you can keep the contents of /home during a reinstall by simply not formatting the partition again. When I tried this with Ubuntu years ago, the installer asked whether I wanted it to remove the old system folders for me. But even if an installer does not offer that, you can delete them manually beforehand. Installers won't touch the contents of /home (or any files outside the system folders they care about) if you don't format the drive.
Though I would still back up everything inside /home before any reinstall attempt, as mistakes do happen no matter which process you go with.
There was no option per se, at least in the Ubuntu installer I tried many years ago - just a popup that appeared after the manual partitioning but before the install, if the root partition contained folders like /etc, /usr, /var that the installer needed. I am not sure all installers do this, but if they don't, you can delete those folders manually before launching the installer, pick the manual partitioning option, and opt not to format any partitions.
Applications need some coordination between each other in order to behave the way you would expect - things like only one window at a time having focus and thus receiving all keyboard and mouse input, as well as positioning on the screen, which screen to render to, the clipboard, and various other things.
X is a server plus a set of protocols that applications implement to get all of this behaviour. X11 is the 11th version of the protocol. X was first created in 1984, and X11 has been around since roughly 1987. Small changes have been made to X11 over the years, but the last release was in 2012.
That makes it a very old protocol - and one that is showing its age. Advances in hardware and in the way we use devices have left a lot to be desired in the protocol, and while it has adapted a bit to keep up with modern tech, it has not done so in the best of ways. Its codebase is also, as I understand it, quite complex and hard to work with, so changes are hard to make.
Thus it has quite a lot of limitations that modern systems rub up against. For instance, it does not really support multiple cursors, or input that is not a mouse or keyboard - so things like touchscreens and pens/tablets tend to emulate a mouse, moving the only pointer X has. It is also not great with touchpads and touchpad gestures: they work, but are often clunky or not as flexible as some applications need.
It is also very insecure, with no real security measures in place - any GUI application gets far more access to the system and its input than it really requires. For instance, any application can grab the contents of the screen at any point in time - not something you want while a banking web page is open.
Wayland is basically a new set of protocols designed with modern hardware and security practices in mind. It does the same fundamental job as X11, but without X11's limitations, and it fixes a lot of X's security issues.
One big difference from X, though, is that Wayland is just a protocol - not a protocol plus a server. Instead, it shifts the responsibilities of the X server into the window manager/compositor (which under X only managed window placement and borders, plus global effects such as animations or transparency). It also has much tighter control over things like screen grabs, so not every application can simply take a screenshot, register global shortcut keys, and so on. For a while that was a problem - screen-sharing applications and even screenshot tools did not work - but over time those capabilities have been added back in more secure ways than how X11 did them.
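To make "just a protocol" a bit more concrete, here is a minimal sketch of a Wayland client using the community wayland-client Rust crate (roughly its 0.31-era API - treat the details as illustrative). All it does is connect to the compositor and print the global interfaces the compositor advertises, which is the first thing every Wayland client does to discover what it can use:

```rust
use wayland_client::{protocol::wl_registry, Connection, Dispatch, QueueHandle};

struct App;

impl Dispatch<wl_registry::WlRegistry, ()> for App {
    fn event(
        _state: &mut Self,
        _registry: &wl_registry::WlRegistry,
        event: wl_registry::Event,
        _data: &(),
        _conn: &Connection,
        _qh: &QueueHandle<Self>,
    ) {
        // The compositor announces each global it supports: wl_seat for
        // input, wl_output for screens, xdg_wm_base for windows, etc.
        if let wl_registry::Event::Global { name, interface, version } = event {
            println!("[{name}] {interface} v{version}");
        }
    }
}

fn main() {
    // Connect to the compositor named by $WAYLAND_DISPLAY.
    let conn = Connection::connect_to_env().expect("no Wayland compositor found");
    let display = conn.display();
    let mut queue = conn.new_event_queue::<App>();
    let qh = queue.handle();
    // Request the registry; Global events then arrive on our queue.
    display.get_registry(&qh, ());
    queue.roundtrip(&mut App).expect("roundtrip failed");
}
```

Note there is no separate display server to start or talk around here - the compositor itself is the other end of the socket.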
Additionally, any application using a GUI toolkit (like Qt or GTK) only needs to update to a toolkit version with native Wayland support - which means most applications already support it, at least if they don't use any X11 APIs directly (which is not that common).
6 of the top 10 games are Verified or Playable, as are 43% of the top 1000. But Verified and Playable are only a subset of the games that work; quite a few unsupported games run as well. If you go by ProtonDB medals, 7 of the top 10 are rated Silver or better (minor issues but generally playable), along with 88% of the top 1000. So there are a lot of games that are playable but still listed as unsupported on the Deck.
You can see the numbers broken down in various ways at www.protondb.com, along with per-game reports (including tips on how to get things working, or working better).
Linux has standards where interoperability is important. The more things need to talk to each other, the more they need a common standard to talk over. Things like X11/Wayland don't have many alternatives because so many things need to talk over them. The only reason there are two standards there is that X11 has massive limitations that cannot easily be worked around.
Package managers are different: applications don't care about them, and interoperability only matters within a single distro. So people are freer to create whatever they want for their own distro. And when people can choose, people have opinions, and those opinions evolve over time - which results in multiple competing products that effectively do the same thing.
And here is my hypothesis: if the GNU project had come up with a good and easy-to-work-with package manager in the early days of Linux
Probably, but creating a good, easy-to-use, fast and reliable package manager that meets everyone's needs, while you are still discovering how you want it to work, is extremely hard. And even if you had created a perfect one at the start, requirements change. This happened with X11, and it is happening with package managers through the rise of Flatpak, Snap and AppImage, which all work fundamentally differently from the traditional ones.
The package format is almost irrelevant, TBH. Most packages are not interoperable between distros because of the versions and names of their dependencies, and that is not something a standard package format fixes. Packages don't even work well between different versions of the same distro, largely due to libc: anything that depends on it is built against a specific version, and when you upgrade libc you need to rebuild and reinstall everything that depends on it. Similar problems exist for every compiled dependency on a distro.
And while some packages of the same format can be installed on multiple distros (mostly ones built on the same foundation), most cannot. That is what the newer package formats (like Flatpak) are trying to solve - by including all dependencies inside the package.
So a standard format does not really solve those issues, and there is little advantage to having one - at least not one of the old-school formats. And the wars are not really over the format anyway; they are over the tooling around the format. At the end of the day RPMs, DEBs, and Arch packages are just tarballs of files plus some metadata (there is even a tool that can convert between them - though anything with dependencies quickly becomes a complete mess). It is the build and install tooling that makes all the difference.
I shouldn’t expect remote accessing some random server will allow me to use Helix, right? Is there any other way to make this work? Or…, should I just learn both Vim and Helix’ Vim + Kakoune amalgamation?
That all depends on the server in question, and on whether you can install things onto it. Some points to consider, though:
If the server is restrictive about what you can install, then you are likely stuck with basic vim, or worse, only vi - and without all your configs it is a very different beast of an editor anyway, something you will have to re-adjust to every time you jump onto the server.
If you can install things into your home directory, it is quite easy to get Helix running - it is a single binary plus some language assets, needing just one environment variable (HELIX_RUNTIME) pointed at those assets. So it is trivial to get working from your home dir without a package manager.
IMO you should not be editing things on a server often enough to worry too much about which editors it has. Ideally, with tools like Ansible, you should not need an editor on it at all.
Vim is literally ubiquitous and plugins that enable its features can be found on almost any ‘platform’. It’s unrealistic to expect Helix’ adoption to be at that rate (yet). However, would you happen to know if at least the likes of VS Code and/or Jetbrains’ IDEs support it? And if so, how good their support/implementation is?
Do you mean like the vi input modes found in other editors? That is one downside - you won't find a Helix equivalent anywhere yet. Though since learning it I have not needed to go back to IDEs like VS Code or JetBrains: with its built-in LSP support, its language integration is just as good as VS Code's, since it works off the same underlying language servers. That said, VS Code at least does seem to have a plugin for Kakoune keybindings, which are more similar to Helix's.
What you will find, though, is that a lot of the keys are very similar between vi and Helix. So apart from the big action-then-movement vs selection-then-action difference and a few other things, they don't feel too dissimilar - basic movement (w for word, e for end of word) and text objects are essentially the same.
I have used vim/neovim for years and cannot go back to a non-modal editor. But TBH I got sick of the configuration. You need far too many plugins and too much config to get things into a sane working order for day-to-day development. It takes ages to learn and become as productive as you were before, and a lifetime to refine.
For the past year or so I have been on Helix, and I don't plan on going back to vim/neovim as my main editor ever again. It is a modal editor that is a mix between Neovim and Kakoune. It comes batteries-included and offers an IDE-like experience out of the box, with tree-sitter syntax highlighting and LSP language integration. My whole config is about 6 lines long, yet it works far better than my old neovim setup with its multitude of plugins and hundreds of lines of config. It is what AstroNvim, LazyVim, LunarVim, NvChad and the like are trying to turn vim/neovim into, except everything they rely on plugins for is built in. That means there is nothing to constantly keep up to date, and no weird edge cases where one plugin doesn't quite integrate with another - it is all built in, so things are designed to work well together.
But it currently lacks any plugin support. So if something you want is not built in, you have to make do without it (well, except language support - adding new LSPs is not too hard). Plugin support is being worked on, though, so hopefully even that will be a non-issue in a year or two.
Interesting. Though I can definitely see where you’re coming from. Uhmm…, have you used any of the Neovim distributions to make maintenance easier?
I have, but I don't like them. They all have weird install processes and need to manage their own set of configs on top of vim's in your home dir. That makes them very hard to package properly or to integrate with config-management tools, and it requires a different workflow to keep them up to date from the rest of your system. They combine sometimes hundreds of plugins, of which only a few are designed to work together, and while most try not to step on each other's toes, with that many I often hit issues in niche use cases. And when you do find an issue, or something you want to tweak, you have hundreds of plugin configurations to dig through to figure out what is doing what and which options to change.
It is all just far more hassle than I want out of my editor these days. Helix just works out of the box and has basically everything I want from an editor nicely integrated into it.
As you’ve touched upon it; Helix’ keybindings and ‘sentence-structures’ are different to those found on Vi(m).
They are a little different and take a bit of getting used to, but IMO they are a far nicer way to work. It is very nice to see what your action is going to affect before you do it - unlike in vim, where you just hope you hit the right movement keys. Helix also pops up a small window for leader keys (like space) showing what you can do with them, which makes it far more discoverable than vim/neovim - no poring through hundreds of pages of manuals just to glimpse what it can do, or going back to them to remember something you rarely use. It is not trying to be a 100% vim-compatible layer; it is trying to give you the best experience it can out of the box. And I think it does that quite well - at least once you get used to the new way of working, which does not take long.
Furthermore, neither of the two have existed long enough to be able to profess any statement regarding their longevity. Like, there’s no guarantee that I can keep using either of the two 20 years into the future.
20 years is a long time. I can see it existing for the next 5 years at least, and it looks to be on the trajectory of a long-lasting project - though no one can say for sure. The more people use it, the more likely it is to stick around for the long term. Just about everyone I have seen pick it up over vim has highly praised it, and it already has quite a few contributors (700+ on GitHub), which is very impressive compared to vim (about 300) and neovim (more than 1100).
And keep in mind that vim stuck around so long thanks largely to a single maintainer, Bram Moolenaar, who passed away this year - which is not a great sign for vim's future over the next 20 years.
I appreciate the input, but I simply don’t want to invest in a program whose future is very unclear to me at this point in time.
The investment Helix asks for is far less than what you need to put into vim/neovim, given all the configuration those require. Well worth it for how active the project currently is and how many people are putting effort into it.
Does anybody know why dbus exists? I’ve been wracking my brain trying to come up with a use case for dbus that isn’t already covered by Unix sockets....
Anything is possible with sockets… and that is a meaningless statement. It is like saying you can build anything with bricks: technically true, but missing all the important details of how.
In an alternative universe we could have done many things differently to solve the same problems. But we don't live there, and in our universe dbus was the attempt to solve this problem, among others. And yes, you could create a standard for music players easily enough - but what about notifications, and everything else? Dbus tries to be a generic interface anything can talk over at a logical level, rather than just the basic mechanism by which two processes physically send bytes to each other.
Sockets are just streams of bytes, with no defined structure at all. Dbus is about defining a common interface that everything can speak. That means when writing a program you don't need to learn how every program you want to talk to lays out its own socket protocol - you can just use a dbus library and query what is available on the system.
At least that is the idea. IMO the implementation leaves a lot to be desired, but a central event bus is a good idea - it just needs to be easy to integrate with, which I think is where dbus falls short.
A great example is music player software. Rather than every music player creating its own socket with its own API to do basically the same operations - which would force anything that just wants to play/pause some music to understand the differences between all the various music applications - each music app integrates with the central bus, and anything that wants to talk to a music app only needs to talk to the bus, not understand every single music app out there.
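That is essentially what the MPRIS dbus spec does for media players today. As a minimal sketch (using the zbus Rust crate; the Spotify bus name below is just one example of a compliant player), toggling playback looks like this:

```rust
// Toggle playback on an MPRIS-compliant media player over D-Bus using
// the zbus crate. Every compliant player exposes the same interface at
// the same object path; only the bus name differs per application.
use zbus::blocking::Connection;

fn main() -> zbus::Result<()> {
    let conn = Connection::session()?;
    conn.call_method(
        Some("org.mpris.MediaPlayer2.spotify"), // example destination
        "/org/mpris/MediaPlayer2",              // standard MPRIS object path
        Some("org.mpris.MediaPlayer2.Player"),  // standard interface
        "PlayPause",                            // method defined by the spec
        &(),                                    // no arguments
    )?;
    Ok(())
}
```

Swap the destination for any other player's bus name and the exact same code works - which is the whole point of a shared bus with shared interfaces.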