The main thing you are doing wrong is reading howto articles on the web. Most of them are written by newbies who did the thing they describe for the first time, got something that mostly works, and want to write it up for themselves and for the rest of the world. That does not mean they did everything right, and howtos usually contain numerous mistakes. Better to read the official documentation. It will take longer, but you will understand what you are doing. I don't know if Mint has GUI tools to configure a Samba server; I would rather edit the config file manually, it is fairly simple.
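For reference, a hand-edited share can be this small. A rough sketch, where the [media] share name, the path, and the user "alice" are made-up placeholders:

```
# Append a minimal share definition to /etc/samba/smb.conf
sudo tee -a /etc/samba/smb.conf <<'EOF'
[media]
    path = /srv/media
    read only = no
    valid users = alice
EOF

sudo smbpasswd -a alice          # Samba keeps its own password database
sudo systemctl restart smbd      # service name on Debian/Mint-family systems
```

Running `testparm` afterwards will have Samba sanity-check the file for you.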
Hey, just to let you know, software RAID nowadays is quite a bit better for a home NAS than hardware RAID. I would suggest using ZFS and zpools as software RAID.
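As a rough sketch of how simple that is (the device names and the pool name "tank" are examples; use /dev/disk/by-id paths for anything you intend to keep):

```
# Two disks mirrored into one pool, plus a dataset to put shares on
sudo zpool create tank mirror /dev/sda /dev/sdb
sudo zfs create tank/media
zpool status tank                # check pool health
```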
If you are already past that point, though: as far as sharing goes, if you are just using it as a small home server or NAS and want things simple, you could just use TrueNAS. It would make things much easier.
If you are running your main computer and sharing the files, I would suggest trying NFS instead of Samba. Samba shares are notoriously unreliable and buggy. Windows has had NFS support for a while now for your other machines: blog.netwrix.com/…/mounting-nfs-client-windows/
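Assuming a standard nfs-kernel-server install, the server side is roughly this (the path and subnet are placeholders):

```
# Export /srv/media read-write to the local subnet
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra                # apply the export table

# Windows client, after enabling the "Client for NFS" feature:
#   mount \\192.168.1.10\srv\media Z:
```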
100% agree. Software RAID is the thing you want as a consumer. It doesn't need to be ZFS; mdraid is another good, well-tested option for the traditional way of doing RAID.
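For the mdraid route, a minimal RAID1 sketch (device names are examples):

```
# Mirror two disks into /dev/md0, then format it like any block device
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
cat /proc/mdstat                 # watch the initial resync
```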
Kdenlive is great; I've been editing a lot of my videos on there, plus some shorts for YouTube. It's got a pretty unappealing UI, but once you get to know it and figure out where everything is, you can get some content out :)
I've found Shotcut to be more stable than Kdenlive, though I haven't tried the latest Kdenlive yet. Both have Glaxnimate support, so motion graphics are possible with either.
The Linux Foundation and kernel devs don't really deal with the OS layer much. This is something that would need to be implemented at the desktop environment level, like GNOME or KDE. Neither the LF nor Linus Torvalds has any say over that.
IMO it's not that Debian isn't good for gaming. It's that it's not good for gaming IF you want to just install Debian and start gaming right away. There's going to be a bit of downloading, installing, and configuring first (sketched below).
If Debian is too far back of a starting point for you, then I'd go with a gaming distro where many things will already come installed and possibly configured (idk for sure because I've not used any gaming distros), so that you mostly just need to sign in and download your games.
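For concreteness, the one-time setup meant above looks roughly like this on Debian 12. A sketch, assuming stock sources.list entries ending in "main" (adjust the sed to your layout); steam-installer is the contrib package name there:

```
# Enable contrib/non-free components and 32-bit packages (Steam needs both)
sudo sed -i 's/ main$/ main contrib non-free non-free-firmware/' /etc/apt/sources.list
sudo dpkg --add-architecture i386
sudo apt update
sudo apt install steam-installer
```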
Real question: I have a Steam Deck and am incredibly pleased with the playability. I also have a desktop with a newer Nvidia card. Does Linux have support for DLSS yet? It makes a huge difference in performance, and honestly it's the only thing holding me back.
That depends on which DLSS. In my testing, DLSS 1 and 2 work fine in the games I tried, and with recent Proton, enabling them as well as ray tracing shouldn't require extra steps anymore (it used to be experimental and opt-in via environment variables). DLSS 3 with frame generation is a known no-go for now, and it's unfortunately on NVIDIA to provide support for it, as it's very much locked-down, guarded proprietary stuff.
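For anyone still on one of those older Proton builds, the opt-in was per-game Steam launch options along these lines (unneeded on current Proton, as said above):

```
# Steam > game Properties > Launch Options
PROTON_ENABLE_NVAPI=1 VKD3D_CONFIG=dxr %command%
```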
It should support DLSS unless you have an older video card that the drivers don't work well with. I heard the newer Nvidia cards work better, though. Of course, it's all up to you whether you like it or not, so just try out Linux and see. If you don't like it, just reinstall Windows. Make a Windows recovery USB beforehand, though; it makes reinstalling easier.
Linux and Nvidia don't mix well, at least not until Nvidia's official open source kernel module has been upstreamed into the Linux kernel, which will take years.
Breakages, workarounds for breakages, etc. are common occurrences, especially when you want to use a modern desktop using Wayland.
Other than being completely unable to run Wayland or secure boot, and being forced to use a proprietary driver, what kinds of things are specifically wrong with Nvidia on Linux? Maybe it's because I switched to Linux fairly recently, but I haven't noticed many Nvidia-specific issues yet.
After using ext4 for yyyeeeaaaarrrrrsss, when I upgraded my MX21 to MX23 I used btrfs, with subvolumes, especially for easy backup/snapshot/timeshift.
Just at install time, it's super easy: create a small ext4 boot partition on the SSD, then a big LUKS partition, format it with btrfs, and create subvolumes for /, /home, /var, and /swap. That's it; no hassle with sizing things correctly.
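From a live environment the manual version is roughly this (partition names are examples; p1 stays a plain ext4 /boot):

```
# Encrypt the big partition, put btrfs on it, carve out subvolumes
sudo cryptsetup luksFormat /dev/nvme0n1p2
sudo cryptsetup open /dev/nvme0n1p2 cryptroot
sudo mkfs.btrfs /dev/mapper/cryptroot

sudo mount /dev/mapper/cryptroot /mnt
for sv in @ @home @var @swap; do sudo btrfs subvolume create /mnt/$sv; done
sudo umount /mnt
```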
btrfs seems pretty stable. I see no difference in performance compared to ext4, since my applications are not that dependent on FS speed, and with an SSD it hardly matters anyway.
Not sure about the other ones, but I use Btrfs because of subvolumes and backups.
Subvolumes are like special folders inside of your partition that mount separately. E.g., in my btrfs partition, I have an @home subvolume that is mounted at /home.
This makes it easier to choose what you are backing up, because you can say, “just copy everything in @home to the backup location”
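In practice the moving parts are just this (the UUID is a placeholder):

```
# Create the subvolume on the mounted top level of the btrfs partition
sudo btrfs subvolume create /mnt/@home

# /etc/fstab line mounting it at /home
# UUID=xxxx-xxxx  /home  btrfs  subvol=@home,compress=zstd  0  0
```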
If I got any of that wrong, feel free to correct me!
I mean, is it actually easier to copy everything in @home than it is to copy everything in /home? Btrfs has always kinda felt like it’s a bunch of extra steps to solve problems I don’t have.
The real power of btrfs for me is incremental backups: you can take a snapshot of your home subvolume and send it to a backup device, then take a second snapshot a week later and just send the differences between them. I do my weekly backups like this. You can keep many snapshots to roll back to if need be, since only the differences between snapshots take up space. This is the tutorial that got me started.
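The weekly cycle boils down to this (paths are examples; snapshots have to be read-only to be sent):

```
# Week 1: full send of a read-only snapshot
sudo btrfs subvolume snapshot -r /home /home/.snapshots/w1
sudo btrfs send /home/.snapshots/w1 | sudo btrfs receive /mnt/backup

# Week 2: send only the differences relative to the parent
sudo btrfs subvolume snapshot -r /home /home/.snapshots/w2
sudo btrfs send -p /home/.snapshots/w1 /home/.snapshots/w2 | sudo btrfs receive /mnt/backup
```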
Yeah, alright, I see how that could be useful for someone who isn’t me. I don’t have much that’s important on my computer, and for what little there is I just have a second ssd I drag and drop it onto. That one has Mint installed on it in case I do something stupid to my main drive, because I routinely do stupid things to my main drive.
I suppose it depends on how much stuff you have; doing a full backup of my home every week is too time-consuming to be practical, but it takes a couple of minutes with this method.
Keeping multiple past snapshots is overkill for me but I do it because I can, more-or-less. It would be useful if I accidentally delete a file and only remember it months later.
Kinda. You can copy your snapshots from @home too, meaning a restore from backup also restores your local file version history. There are also tools to push snapshots around as a large archive instead of dealing with smaller files directly.
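For example, btrfs send can write the stream to a single archive file instead of piping it straight into a receive (paths are placeholders):

```
# Dump a snapshot into one file on any filesystem...
sudo btrfs send -f /mnt/usb/home-w1.stream /home/.snapshots/w1

# ...and replay it later onto a btrfs target
sudo btrfs receive -f /mnt/usb/home-w1.stream /mnt/restore
```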
The CoW can also reduce the chances of running rsync on a large file that is currently being written to and getting a partial file in your backup. Or I suck at rsync 🤷♂️
You’re right, atomic snapshots are a big advantage of CoW fs.
Rsync backups done while the system is running have a chance of being broken, while CoW fs snapshots are instant and seem basically as if the system suddenly lost power.
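You can also combine the two: rsync from a read-only snapshot, so the source can't change mid-transfer (paths and the host are placeholders):

```
# Freeze a consistent view, copy it, drop the temporary snapshot
sudo btrfs subvolume snapshot -r /home /home/.snap-rsync
rsync -a /home/.snap-rsync/ backuphost:/backups/home/
sudo btrfs subvolume delete /home/.snap-rsync
```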
For me the appeal is potentially being able to verify that my code at least compiles and has basic functionality on Darwin. No idea if this can be useful for anyone other than developers.
This is mostly down to the desktop environment rather than the distro, but I had the same experience as you with GNOME on Ubuntu, and after switching to Fedora it was a lot better. There are still some problems with the on-screen keyboard, but it at least works. I guess having a more up-to-date version of GNOME helps, or Ubuntu's additions to GNOME mess with it.
I haven’t tried KDE but I’ve heard it’s been getting better touch support, so it might be worth trying out too.
As a KDE fanboy I will agree: I installed regular Ubuntu on an old Surface tablet, and the touch interface is better than on most Android tablets I've used.