As soon as gaming is mostly flawless, with performance similar to or better than Windows, I’ll be 100% over. Gaming has come so far - well into the 2010s the only games on Linux were basically Portal, Half-Life, Minecraft, and KSP. But it’s still got a little ways to go.
It’s always worth remembering that Linux is not a product, it is free software. So if you are switching you can’t go into it with the mindset of “somebody better fix this or I’m leaving” because there is nobody that will feel that pressure or care. You have to use Linux because it’s something you want to do.
If that’s the only barrier, you should try again. It’s further along than you think. Thanks in large part to the Steam Deck, compatibility is miles better. I have run into 2 games since I switched 1.5 years ago that won’t run - both are EA titles (shocked Pikachu face). That was my reason not to switch too.
I’m well aware of how far it has come, I was a second-batch pre-order for the Steam Deck. And yes, just in the time it’s been out, Linux gaming has come sooo far. For me, though, not all of my games run seamlessly and as well; some do still just shit themselves, so I still keep a Win10 boot drive for gaming. Once major support for Win10 ends I think Linux gaming will be even better and my gaming will finally be all Linux.
You don’t play many competitive multiplayer titles then. Anticheat is always a pain.
BattlEye and Easy Anti-Cheat are Linux native, but just because that’s the case doesn’t mean they will work. Half of the games using them either never had an official Linux version or are currently broken again.
A few games using Xigncode and nProtect work too, but the number there is even lower.
PunkBuster has worked under Wine for five years, but it often needs to be installed manually.
As for the more aggressive ones like Ricochet and Vanguard, you can’t even run them in a VM environment.
Kernel exploits. Containers logically isolate resources but they’re still effectively running as processes on the same kernel sharing the same hardware. There was one of those just last year: blog.aquasec.com/cve-2022-0185-linux-kernel-conta…
Virtual machines are a whole other beast because the isolation is enforced at the hardware level, so you have to exploit hardware vulnerabilities like Spectre, or a vulnerable virtual device - a couple of years ago, for example, people found a breakout bug in the old floppy-emulation driver that QEMU still assigns to VMs by default.
Security comes in layers, so if you’re serious about security you do in fact plan for things like that. You always want to limit the blast radius if your security measures fail. And most of the big cloud providers do that for their container/kubernetes offerings.
If you run Portainer, for example, and that gets breached, that’s essentially a free container escape, because you can trick Docker into mounting and exposing whatever you need from the host to escape. It’s not uncommon for people to give a container more permissions than it really needs.
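To make that concrete, here’s a rough illustration (obviously don’t do this anywhere you shouldn’t) of why access to the Docker API is effectively root on the host - the attacker just asks the daemon to mount the host filesystem into a fresh container:

# anything that can talk to the Docker daemon can request this:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# that's a root shell on the host - the container was never a security boundary here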
It’s not like making a VM dedicated to running your containers costs anything. It’s basically free. I don’t do it all the time, but if it’s exposed to the Internet and there’s other stuff on the box I want to be hard to get into - like if it runs on my home server or desktop - then it definitely gets a VM.
Otherwise, why even bother putting your apps in containers? You could also just make the apps themselves fully secure and unbreachable. Why do we need a container for isolation? One should assume the app’s security measures are working, right?
If they can find a kernel exploit they might find a hardware exploit too. There’s no rational reason to assume containers are more likely to fail than VMs, just bias.
Oh and you can fix a kernel exploit with an update, good luck fixing a hardware exploit.
Now you’re probably going to tell me how a hardware exploit is so unlikely, but since we’re playing make-believe I can make it as likely as suits my argument, right?
Could it be that the Manjaro repos have older versions of the HIP runtimes than what Blender 4.0 is built for? Just a thought. Either way I would report it to the Blender devs so they can fix it if it’s a bug.
For data science, it depends on what GPU you plan to use. If it’s an Nvidia GPU, go with Ubuntu or Fedora: from personal experience it’s easier to get the Nvidia drivers working there than on most other distros I have tried. If it’s a Radeon GPU, it will work fine on pretty much any distro, since Radeon does a good job of following the standard Linux APIs for graphics drivers, so for Radeon products I would also recommend Debian or Mint (alongside Fedora and Ubuntu).
Isolate them from your main network. If possible, have them on a different public IP, either using a VLAN or better yet an entire physical network just for that - this avoids VLAN-hopping attacks and keeps a DDoS against the server from also taking your internet down;
If you’re using VLANs, configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this makes sure that if your server is hacked, the attacker won’t be able to access the switch’s UI and reconfigure their own port to reach the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
Only expose required services (nginx, game server, program x) to the Internet. Everything else such as SSH, configuration interfaces and whatnot can be moved to another private network and/or a WireGuard VPN you can connect to when you want to manage the server;
Use custom 5-digit ports for everything - something like 23901 (up to 65535) - to make your service(s) harder to find;
Disable IPv6? Might be easier than dealing with a dual stack firewall and/or other complexities;
Use nftables / iptables / another firewall and set it to drop everything but the ports you need for your services and management VPN access to work (see the sketch after this list) - 10 minute guide;
Use your firewall to restrict which countries are allowed to access your server. If you’re just doing it for a few friends, only allow incoming connections from your country (wiki.nftables.org/wiki-nftables/…/GeoIP_matching)
Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. This is a decent setup guide digitalocean.com/…/how-to-set-up-wireguard-on-deb… and you might use this GUI to add/remove clients easily github.com/ngoduykhanh/wireguard-ui
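To give an idea of the firewall item above, here’s a minimal nftables sketch - the table name and ports are just examples, swap in whatever your services actually use:

# run these from a local console or apply them all at once from a file,
# otherwise the drop policy can cut off your own remote session
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept   # replies to outbound traffic
nft add rule inet filter input iif lo accept                         # loopback
nft add rule inet filter input udp dport 51820 accept                # WireGuard management VPN
nft add rule inet filter input tcp dport 23901 accept                # the one service you expose publicly
nft list ruleset > /etc/nftables.conf                                # persist (on distros that load this file at boot)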
I found KDE Simon, and Numen … I’ve only ever used a commercial product, Dragon NaturallySpeaking, many years ago. It was used so I could speak instead of typing text, but it did have functions to assign commands as well - I don’t think it worked on Linux though.
Both Docker and Podman pretty much handle all of those, so I think you’re good. The last aspect about networking can easily be fixed with a few iptables/nftables/firewalld rules. One final addition could be NGINX in front of web services, or something dedicated to handling web requests on the open Internet, to reduce the impact of potential exploits in the embedded web servers in your apps. But other than that, you’ve got it all covered yourself.
There are also options in docker/podman-compose files to limit CPU usage, memory usage, or generally prevent a container from using up all the system’s resources (rough CLI equivalents below).
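For reference, a rough sketch of the equivalent knobs on the CLI (the compose keys map onto these; the container name, image and values here are made up):

# hard caps so one runaway container can't starve the host:
# --memory/--memory-swap = RAM cap with no extra swap, --cpus = core limit, --pids-limit = stops fork bombs
docker run -d --name myapp \
  --memory=512m --memory-swap=512m \
  --cpus=1.5 --pids-limit=200 \
  --restart=unless-stopped \
  myimage:latest
# podman run accepts the same flags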
If you want an additional layer of security, you could also run it all in a VM, so a container escape leads to a VM that does nothing else but run containers. So another major layer to break.
Might want to check out Ubuntu Unity. It was made more for netbooks (when those were a thing) and touchscreens. But as another poster pointed out, Bliss looks really nice for this use case.
I’ll look at that, thanks! I put Bliss on one and I’m not really happy with it yet. Just trying to type my wifi password had the UI wigging out on me; I had to use a USB keyboard just to type the password. I’ll look into Ubuntu Unity tho, thanks!
Completely tangential tip, but in the very limited video editing I’ve done recently: I’ve used DaVinci Resolve, rendered as .mov, and then used ffmpeg to render to my actual desired format, e.g. H.264 with AAC audio so I can upload to YouTube:
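Something along these lines (the profile and audio bitrate values are illustrative, not necessarily the exact flags):

# illustrative example - profile/bitrate values are placeholders to tune
ffmpeg -i render.mov -c:v libopenh264 -profile:v high -pix_fmt yuv420p -c:a aac -b:a 192k upload.mp4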
I do think that finding the right flags to pass to ffmpeg is a cursed art. Do I need to specify the video profile and the pix_fmt? I don’t know; I thought I did when I adventured to collect these flags. Though maybe it’s just a reflection of the video-codec horrors lurking within all video rendering pipelines.
edit: there may also be nvidia-accelerated encoders, like h264_nvenc, see ffmpeg -codecs 2>/dev/null | grep -i 'h.264'. I’m not sure if the profile:v and pix_fmt options apply to other encoders or just libopenh264.
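If the nvenc encoder is available, a hedged sketch of swapping it in might look like this (the rate-control and preset values are guesses to tune, not tested here):

# h264_nvenc: -cq is the constant-quality target, p1..p7 presets (p7 = slowest/best, needs a reasonably recent ffmpeg)
ffmpeg -i render.mov -c:v h264_nvenc -rc vbr -cq 23 -b:v 0 -preset p5 -pix_fmt yuv420p -c:a aac upload.mp4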
Using openh264… well, that’s a choice. I would recommend to everyone that they use x264 whenever possible, and make sure to specify an output CRF and probably a preset when you do.
A couple other things: you generally want to specify the pixel format conversion before the codec. You should be able to get satisfactory results with:
ffmpeg -i input.mov -pix_fmt yuv420p -c:v libx264 -preset medium -crf 24 -c:a aac output.mp4
Play with the preset a bit, since that’s where your quality/compression vs. speed trade-off comes in; CRF handles the quality. So set CRF for the ballpark quality you want, then change the preset: slower = higher compression, faster = lower compression.
haha, yeah figuring out those ffmpeg flags is an absolute nightmare. My problem there isn’t so much the output format from Resolve, but the source format I’m using. My camera only has the option to record in H.264/H.265 (consumer grade, what can you expect?) which Resolve can’t properly import on Linux. I could take the time to transcode them with ffmpeg before editing, but I’m usually working with ~2 hours worth of video per project and I don’t really want to wait all day for a transcode job to finish before I can even begin editing. On top of that my camera (rather neatly) generates its own proxy files while recording, and I’ve found leveraging these is necessary for getting good timeline performance on my aging rig. Now I could let Resolve generate its own proxy clips like I have in the past, but that’s more time waiting around before editing. I was SUPER stoked to see Kdenlive can natively utilize the proxy clips my camera generates.
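(For reference, if anyone does want the transcode route: the usual target for Resolve on Linux is an intermediate codec like DNxHR, since it won’t take the camera’s H.264/H.265. A rough sketch, filenames being placeholders:)

# camera H.264/H.265 -> DNxHR HQ in a .mov container, which Resolve on Linux imports fine
ffmpeg -i clip.mp4 -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p -c:a pcm_s16le clip_dnxhr.mov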
I have a stack of old phones in my drawer of tech I need to go through and check which ones work on postmarketOS. I think I have an old Pixel or two as well as Nexus phones.
Just a note: there are a few on-screen software keyboards for X out there that aren’t tied to a specific DE, like xvkbd and svkbd. They might be worth trying if you find some distro that works well except that the default on-screen keyboard sucks. (No idea if there’s any equivalent for Wayland.)
It sounds like you have a few devices, so I would recommend trying out this testing distro with MauiShell. There is a testing iso under the heading “Downloads and Sources” in this most recent blog post about its development progress.
After the bug with Pop!_OS that happened to Linus, I stopped using it. I’d like a reliable system, and clearly the Pop!_OS team doesn’t know how to package their software if a dependency error that bad can happen.
They commented on their video that it was their fault. There was never a packaging issue. The issue was that we pushed a systemd source package update to Launchpad, which silently didn’t build or publish the 32-bit systemd library packages, because Ubuntu had systemd on a blacklist for 32-bit package builds. We noticed this minutes after the packages were published and had it fixed within the hour.
This didn’t actually affect any systems in the wild because apt held back the update until we had worked around the restriction on Launchpad (there was an invisible ceiling to the package version number). They were only affected during that time period because they manually entered that sentence from the prompt in a terminal. We stopped using Launchpad with 21.10, so all packages released since then are the same packages that are built and tested by our packaging server, and used by our QA team internally.
The drama and reputational damage that LTT caused was unnecessary. Especially given that they uploaded this video a week later, and never attempted to reach out. They still have yet to properly edit the video.
That vid is actually good; it exposes lots of issues that regular users run into when switching to Linux. In fact, Debian changed apt to make it harder to remove essential packages like Linus did.
On Arch you won’t even get a confirmation prompt for removing an essential package - it just refuses until you pass --nodeps twice (pacman -Rdd). No idea how long this has been the case on Arch, or if it was implemented after Linus’s vid as well, but that is something that should have been that way decades ago; I still see posts on Reddit from people who accidentally delete GRUB or remove important directories from their system.
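To illustrate the Arch side (please don’t actually run the second one):

# pacman refuses outright when other packages depend on what you're removing:
sudo pacman -R glibc      # aborts with unresolved dependency errors
# only skipping dependency checks twice will force it through:
sudo pacman -Rdd glibc    # -Rdd is --nodeps given twice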
I used Pop on my main computer for almost a year before switching back to Mint last year. There were a lot of good things about it - for instance, it had the best out-of-the-box compatibility with my hardware of everything I tried. But I also saw some stability issues, and I personally dislike its aesthetic, and I’m not really interested in trying Cosmic. I still recommend it to people but it’s not for me.