It’s funny how conservative Windows is; it still has components dating back to the NT days.
That’s called ensuring things stay compatible with old software and not fucking your users over. Just for fun I tried to install Photoshop 6 from 2000 on Windows 11 and it works just fine. Same goes for MS Office 2003.
Why bother with Windows? Mostly for the same reasons that make moving from Windows to a Mac a pain, except that on macOS you get better professional software support and fewer reasons to virtualize Windows from time to time. To be fair, what’s the point of using operating system X if some of the tools you need require a virtual machine, or you have to use sub-par alternatives that waste your time and give you a worse experience? And even under macOS, with Microsoft’s own MS Office for Mac, things sometimes aren’t as compatible as they should be.
The Linux desktop is great, I love it, but I don’t sugar coat it nor am I delusional like most people posting about it. Here is a list of cases that aren’t easy to deal with on Linux:
People who need the real MS Office, because once you have to collaborate with others Open/Libre/OnlyOffice won’t cut it;
Designers who use Adobe apps, which won’t run properly without a dedicated GPU, passthrough and some hacky way to get the image back into your main system that will cause noticeable delays;
People who run old software / games, because not even those will run properly on Wine;
Electrical engineers: Circuit Design Suite (Multisim and Ultiboard) is primarily designed for Windows. Alternatives such as KiCad and EasyEDA may work in some cases, but they aren’t great if you have to collaborate with others who use Circuit Design Suite;
Labs that require data acquisition from specialized hardware, because the companies making that hardware won’t make drivers and software for Linux;
Architects: AutoCAD isn’t available (not even the limited web version works) and Libre/FreeCAD don’t cut it if you have to collaborate with AutoCAD users;
Developers and sysadmins, because not everyone is using Docker and GitHub Actions to deploy applications to some proprietary cloud solution. Finding a properly working FTP/SFTP/FTPS desktop client (similar to WinSCP or Cyberduck) is an impossible task, as the ones that exist fail even at basic tasks like dragging and dropping a file.
If one lives in a bubble and doesn’t have to collaborate with others, then native Linux apps might work and might even deliver a decent workflow. Once collaboration with Windows/Mac users is required, it’s game over – the “alternatives” just aren’t up to it.
Windows licenses are cheap and things work out of the box. Software runs fine, all vendors support whatever you’re trying to do and you’re productive from day zero. Sure, there are annoyances from time to time, but they’re way fewer and simpler to deal with than the hoops you have to jump through to get a minimally viable/productive Linux desktop experience. It all comes down to how much time (days? months?) you want to spend fixing things on Linux that simply work out of the box under Windows for a minimal fee. Buy a Windows license and spend the time you would’ve spent dealing with Linux issues doing your actual job, and you’ll most likely get a better ROI.
Also, the guy’s take that “what you go for it’s entirely your choice” when it comes to the DE is total BS. What usually happens is that you’ll eventually find out that while you can use any DE, GNOME will in fact provide a better experience, because most applications on Linux are designed around / depend on its components; installing them on KDE will simply give you small issues here and there, windows that don’t pick up your theme, or simply a frankenstein of a system composed of KDE plus a bunch of GTK components.
Isolate them from your main network. If possible have them on a different public IP, either using a VLAN or, better yet, an entire physical network just for that - this avoids VLAN hopping attacks and DDoS attacks on the server that would also take your internet down;
If you’re using VLANs then configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this makes sure that if your server is hacked, the attacker won’t be able to access the switch’s UI and reconfigure their own port to access the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
Only expose required services (nginx, game server, program x) to the Internet. Everything else such as SSH, configuration interfaces and whatnot can be moved to another private network and/or a WireGuard VPN you can connect to when you want to manage the server;
Use custom 5-digit ports for everything - something like 23901 (ports go up to 65535) - to make your service(s) harder to find;
Disable IPv6? Might be easier than dealing with a dual stack firewall and/or other complexities;
Use nftables / iptables / another firewall and set it to drop everything except the ports you need for your services and the management VPN to work - 10 minute guide (there’s also a minimal sketch after this list);
Use your firewall to restrict which countries are allowed to access your server. If you’re just doing this for a few friends, only allow incoming connections from your country (wiki.nftables.org/wiki-nftables/…/GeoIP_matching)
Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. This is a decent setup guide digitalocean.com/…/how-to-set-up-wireguard-on-deb… and you might use this GUI to add/remove clients easily: github.com/ngoduykhanh/wireguard-ui (see the sketches below)
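To make the firewall point concrete, here’s a minimal nftables sketch of a drop-by-default ruleset. The ports are just examples matching the advice above (a service on a custom high port like 23901, WireGuard on its default UDP 51820); adjust them to whatever you actually run and load the file with nft -f:

    #!/usr/sbin/nft -f
    flush ruleset

    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;

            ct state established,related accept   # replies to traffic the server started
            iif "lo" accept                       # loopback
            iifname "wg0" accept                  # anything coming in over the management VPN
            tcp dport 23901 accept                # example: game server on a custom high port
            udp dport 51820 accept                # WireGuard itself
        }
    }

Everything not explicitly accepted, including SSH from the Internet, gets dropped; you’d reach SSH and the configuration interfaces over the WireGuard interface instead.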
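And if you go the WireGuard-only route, the server side is essentially one small wg0.conf like the sketch below - keys and addresses are placeholders, and the linked guide plus wireguard-ui handle generating them for each friend:

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server private key>

    [Peer]
    # one [Peer] block per friend
    PublicKey = <friend public key>
    AllowedIPs = 10.8.0.2/32

Then bind the game server (or whatever you’re hosting) to the 10.8.0.0/24 side only, so the single thing exposed on the public IP is UDP 51820.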
I’m more worried about what’s going to happen to all the self-hosters out there whenever Cloudflare changes their policy on DNS or their beloved free tunnels. People trust those companies too much. I also did at some point, until I got burned by DynDNS.
PCIe 2.0 x4 > 2.000 GB/s
PCIe 3.0 x2 > 1.969 GB/s
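For context, those figures follow from the standard per-lane rates: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, giving 500 MB/s per lane, so 4 × 500 MB/s = 2.000 GB/s; PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, giving roughly 984.6 MB/s per lane, so 2 × 984.6 MB/s ≈ 1.969 GB/s.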
But we also have to consider that the suggested ARM CPU does PCIe 2.1, and we have to add this detail:
PCIe 2.1 provides higher performance than the PCIe 2.0 by facilitating a transparent upgrade from a 32-bit data path to a 64-bit data path at 33MHZ and 66MHz.
It shouldn’t have a large impact either, but maybe we should think about it a bit more.
Anyways, I do believe this really depends on your use case, on whether you plan to bifurcate it or not, and on what devices you’re going to have on the other end. For instance, for a NAS I would prefer the PCIe 2.1 x4, as you could have more SATA controllers with their own lanes instead of sharing lanes on PCIe 3.0 through a MUX.
Conclusion: your mileage may vary depending on the use case. But I was expecting more PCIe lanes to be exposed, be it via more M.2 slots or some other solution. I guess that when a CPU comes with everything baked in and the board maker “only has” to run wires around, they’d better do it properly and expose everything. Why not all the SATAs, for instance?
Some systems (Debian) may require sudo usermod -a -G video www-data to make sure it will work, because ffmpeg will be launched as the www-data user, which doesn’t have access to the camera devices (there’s a config sketch below this list).
It will even turn off the camera if nobody is connected;
Use ffmpeg -f v4l2 -list_formats all -i /dev/video0 to find what formats your camera supports;
Watch the stream from VLC with the url: rtmp://device-ip/live/stream
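For reference, here’s a minimal sketch of how that can be wired up, assuming nginx was built with the nginx-rtmp-module and the camera is /dev/video0; the encoder settings and paths are illustrative, not a definitive setup. The exec_pull directive is one way to have nginx launch ffmpeg as its own user (www-data on Debian, hence the video group trick above) and stop it again when the last viewer leaves:

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                # started when the first client asks for rtmp://device-ip/live/<name>,
                # stopped when nobody is watching anymore
                exec_pull /usr/bin/ffmpeg -f v4l2 -i /dev/video0
                    -c:v libx264 -preset veryfast -tune zerolatency
                    -f flv rtmp://127.0.0.1/live/$name;
            }
        }
    }

If you don’t care about the on-demand part, you can also just run that same ffmpeg command by hand (or from a systemd unit) to publish to rtmp://127.0.0.1/live/stream permanently.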
however the packages for nginx-rtmp are quite abandoned in arch linux.
Maybe you should switch to Debian? I’ve been doing it that way for a long time and playing it in VLC without issues. What repositories are you using, btw? The official ones at nginx.org/en/linux_packages.html or some 3rd party garbage?
Both the ROCKPro64 and the NanoPi M4 from 2018 have an x4 PCIe 2.1 interface. Same goes for almost all RK3399 boards that care to expose the PCIe interface.
Update: there’s also the more recent NanoPC-T6 with the RK3588 that has PCIe 3.0 x4.
They could’ve exposed more SATA ports and / or PCIe lanes and decided not to.
And… let’s not even talk about the SFF-8087 connector, which isn’t rated for use as an external plug - you’ll likely ruin it quickly with repeated insertions and/or some light accident.
DNA also sounds interesting, though it doesn’t seem like a good way of preserving data long term. DNA is very fragile, and it seems like an odd route to take for long-term archiving.
Yeah the 5D quartz disk is very cool.
Anyways, if you think about storage density, DNA isn’t that “odd”. With DNA you can store dozens of copies of the data plus parity checks in a very small space, so even if some of it gets corrupted you can still recover the data. I get that organic material has its limits, but the density is just mind-blowing.
Yes, and I do. While it is great for infrastructure - magnitudes better than anything Microsoft ever offered - as a reasonable desktop it’s a fucking joke.
It gets worse - because ISPs are choosing NATs over IPv6,
Yes, because they’re mostly pieces of shit, technically inept and unable to properly deploy IPv6 at a large scale.
Either way, IPv6 doesn’t fix everything, as you’ll still need a real IPv4 to access a large part of the internet, or some translation (MAP-T/MAP-E). Even if your ISP provided dual stack with a real public IPv6 + CGNAT / MAP-T IPv4, it would still be annoying, as you wouldn’t be able to do port forwarding on the IPv4 side and wouldn’t be able to access your self-hosted services from a LOT of networks that are IPv4-only.
There are two versions of MAP – translated (MAP-T) and encapsulated (MAP-E). In MAP-E IPv4 traffic is encapsulated into IPv6 using a v6 header before it is sent over the v6 network. At the network operator’s boundary router, the IPv6 header is then stripped, and the IPv4 traffic is forwarded to the v4 Internet. In MAP-T, the IPv4 packet header is mapped to the IPv6 header and back. The difference between the two options is evident in their names. MAP-E uses IPv6 to encapsulate and de-encapsulate IPv4 traffic, whereas MAP-T uses NAT64 to translate IPv4 to IPv6 and back.