Unity started with pretty awful performance (much like GNOME 3) and coincided with some infamous decisions on the part of Canonical, namely that whole business with the Amazon integration, so it’s permanently tainted in the minds of many. It also meant that the largest distro in town was suddenly using a desktop that was much less inviting to newcomers than the familiar GNOME 2.
I’m glad it’s being kept alive as it does have a unique vibe to it, but I always found the workflow a bit awkward and much prefer GNOME for something modern and xfce or MATE for when I want something traditional.
At first I was like WTF, but actually it makes sense. A screen showing an error code is much better than the hard reset, blinking cursor, kernel panic, or plain black screen you usually get when something bad happens on Linux.
Is anything keeping you from just reinstalling the system and mounting your home into it again (maybe the majority of your customisations live in /home too)? I feel that is a lot less of a hassle than copying files around.
In principle you should be able to restore your system by just copying all of the relevant files from the backup to their correct partitions - it can’t really get any worse if it doesn’t work.
For the future: A backup is only any good if you know how to restore it and have tested that it actually works.
Regarding the permissions: If you do a cp fileA.txt fileB.txt, fileB.txt will normally be owned by the creating user. So a sudo cp … will create the files as root.
I would personally use rsync with a few additional options, --archive among them. That way the filesystem is restored exactly as it was. But that doesn’t help much if the files weren’t copied that way in the first place.
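To make the ownership/permissions point above concrete, here’s a small sketch you can run as a regular user (filenames are made up). A plain cp creates the destination file with your umask applied, while cp -a (like rsync -a/--archive) preserves the mode, timestamps and, when run as root, ownership:

```shell
# Demo: plain cp vs. metadata-preserving cp -a (equivalent to rsync -a for this purpose)
cd "$(mktemp -d)"
umask 022
touch original.txt
chmod 666 original.txt

cp original.txt plain-copy.txt        # new file gets source mode filtered through umask
cp -a original.txt faithful-copy.txt  # -a preserves the mode bits exactly

stat -c '%a' plain-copy.txt     # 644 — group/other write bits stripped by umask
stat -c '%a' faithful-copy.txt  # 666 — same as the original
```

Same idea scaled up: a restore done with sudo rsync -a backup/ /mnt/target/ keeps owners, groups, modes and symlinks intact, whereas a plain cp run as root leaves everything owned by root with umask-mangled permissions.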
That can be done, but as others mentioned, if you don’t have the permissions and other attributes for the files, it’s going to be a real PITA to get everything working. If I had to do it, I’d copy over the files, chown everything to root, and then use the package manager to reinstall everything. Even that will most likely need manual fixes, and figuring out what to change, and to what value, will take quite a bit of time. The complexity depends heavily on what you had running on the host, especially things under /var.
The only one I can find is TILVids, which has a few of the bigger Linux content creators but not much more than that. Content worth watching is really the one thing PeerTube is lacking, and that has to come from users, but that’s really a catch-22. You need more quality content to bring in more users, but you need more users to provide that quality content.
On top of that, not many users are going to be drawn to a platform that can’t provide avenues for monetization and that costs money to run on top of that. Even with all the YouTube policies these creators whine about in every other video (mostly because those policies affect their monetization), the choice is either living with YouTube’s policies reducing your potential income or living with a negative income to set up or join a PeerTube instance: slightly reduced profit vs. guaranteed loss. They’ll pick the slightly reduced profit every time.
Even further, the ones who get kicked off of YouTube and need to find an alternative or care enough about “free speech” to branch out are… mostly niche creators, to put it politely, and the unique content they provide to these alternative platforms tends to discourage other creators who still have YouTube channels from syncing their channels from YouTube to PeerTube in order to not be associated with those more niche creators. Other platforms such as Rumble and Odysee have similar issues. That said, PeerTube does have an advantage over Rumble/Odysee in this regard, in that instances that want to avoid that type of content can moderate and set up their federation to limit that association, but at that point they may just find it too much effort to put into bringing in too small an audience to be worth it.
The Fediverse appears to work well enough for user generated content that doesn’t take much effort or expense to provide, such as Twitter, Facebook or Reddit-type content, as the rise of Mastodon and Lemmy are showing, but when users have to put in the work and expense of publishing a video, the return on investment of PeerTube (in both money and views) compared to just staying on YouTube may just be too small to work.
But theoretically, if someone set up an instance to host their videos, they could serve banner ads right? Wouldn’t that at least partly diminish the monetization problem?
I’d suggest Void. It has Ardour in the repo (not Reaper though), and PipeWire with JACK should work out of the box. If you want Reaper, you’d have to install it manually though. I’m working on a template for Reaper, but it’s not finished yet. If you’re willing to wait a month or so, you’ll probably have Reaper in xbps-src as well.
I thought most of the FOSS ones were like libraries, just drop them in the appropriate shared directory and that’s it 🤔. You could check dependencies with ldd and look for the right package with xtools.
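For anyone who hasn’t done this before, the check looks roughly like this (using /bin/ls as a stand-in for the plugin binary, since the actual plugin path would depend on what you downloaded):

```shell
# List every shared library the binary wants
ldd /bin/ls

# Show only the ones that are missing on this system (if any)
ldd /bin/ls | awk '/not found/ {print $1}'

# On Void, xtools' xlocate can map a missing library back to the package
# that ships it (library name here is just an example):
#   xlocate -S              # sync the file index once
#   xlocate libfreetype.so  # -> package(s) containing that file
```

Then it’s just xbps-install the packages that provide whatever came up as “not found”.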
They can be dropped into local directories, assuming the devs provide binaries; otherwise they need building. I know it’s not difficult to git clone and build, but it’s not really a beginner-friendly process, and it’s not ideal to have to remember to check for updates (or to not have them packaged for updates even when the devs do provide binaries). I was considering using it 10 years ago or so, and this was one of the main reasons I decided against it in the end (along with other audio tools not having packages). It’s a shame it hasn’t really moved forward in this respect, because it looked really nice otherwise.
Well, at least there is one thing that makes building on Void easier. xbps-src works with templates, so you could just write the template, or ask on GitHub for help from someone in the Void community. I’ve asked for help many times and people are usually very helpful ☺️. Once you have the template, updating the VST is a matter of just changing a few things in it (version, hash, etc.), since things like UI dependencies or libraries don’t change that often in releases; those are major changes and usually come with a prior warning from the developer. Meaning, you could just make the template, change the version numbers and hashes, recompile it, and most of the time that will be just that, bam, you’re up to date ☺️. Sure, there are major updates, but let’s face it, those are very rare. And you can share the template with others on the official Void xbps-src repo or your own repo, however you like 😉. Hell, you could even share the binaries so that other people don’t have to go through the trouble of compiling them manually 😉.
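For reference, an xbps-src template is just a file of shell variable assignments. A hypothetical one for a VST plugin might look like this (every name, URL, and checksum below is made up; the “bump the version” workflow is just editing version= and checksum=):

```shell
# srcpkgs/some-vst-plugin/template — hypothetical example of the xbps-src format
pkgname=some-vst-plugin
version=1.2.3
revision=1
build_style=cmake
hostmakedepends="pkg-config"
makedepends="libX11-devel freetype-devel"
short_desc="Example VST plugin (placeholder)"
maintainer="you <you@example.org>"
license="GPL-3.0-or-later"
homepage="https://example.org/some-vst-plugin"
distfiles="https://example.org/some-vst-plugin-${version}.tar.gz"
checksum=0000000000000000000000000000000000000000000000000000000000000000
```

After editing, ./xbps-src pkg some-vst-plugin from a void-packages checkout builds it, and xbps-src will complain if the checksum no longer matches the new tarball, which is exactly the safety net you want for updates.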
If even the file owner is not preserved (it is not always root, especially in /var), you have likely also lost the files’ extended attributes and, maybe, permissions too. Without them your system won’t work normally.
Then, the contents of these directories must be consistent with one another. E.g. /var contains the package manager’s data about the packages you installed. If you installed or removed anything after creating the backup, that information will be lost.
If you created the backup while the system was running, some files (especially under /var, again) could have changed during the process, and this also makes such a backup unusable. Every sysadmin knows that to create a database backup by copying files, the DBMS must be stopped.
In the future, think about restoration before planning a backup, and test whether restoring is actually possible immediately after the backup is done.