But this is by design: snap containers aren’t allowed to read data outside of their confinement. The same goes for Flatpak and OCI containers.
I don’t use snap myself, but it does have its uses. Bashing it just because it’s popular to hate on snap won’t yield a healthy discussion on how it could be improved.
Snap sucks, but not for the reason OP stated. There are a decillion reasons why Snaps suck; why make up one that applies to other formats that are actually good?
Ok then don’t publish an application that clearly needs access to files outside of the /home directory. Or at least be upfront about how limited it is when run as a snap.
I have a 20TB RAID array that I use for a number of services, mounted at /data. I would like Nextcloud to have access to more than the 128GB available to /home. I’m not willing to move my data mount into /home and reconfigure the ~5 other services that use it just to work around some stupid Snap limitation. Who knows whether Snap can even access data across filesystems if they’re mounted inside /home? I wouldn’t put it past the Snap devs to fall down on that point either.
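For what it’s worth, the usual workaround people point at (assuming the Nextcloud snap exposes a removable-media plug; this is only a sketch) is to bind-mount the data somewhere the interface actually covers:

    # sketch: the removable-media interface covers /media and /mnt, not /data
    sudo mkdir -p /mnt/data
    sudo mount --bind /data /mnt/data    # or the equivalent /etc/fstab entry
    sudo snap connect nextcloud:removable-media

Which, of course, still means repointing everything at the new path, so it doesn’t really solve my problem.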
Yes, Docker clearly needs access to all files. It is meant for running server software, and server software is supposed to be flexible in its setup. To me, this limitation makes it completely unusable. Nextcloud is only the first service that needed access to that directory. I’ll also be running MinIO there for blob storage for a Mastodon server. I’ll probably move Jellyfin into a Docker container, and it’ll need access too.
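To illustrate what I mean by flexible, this is the kind of thing a stock (apt) Docker install does without blinking; the paths here are just my hypothetical layout:

    # bind-mount arbitrary host paths into containers -- exactly what the
    # confined snap build chokes on for anything outside /home
    docker run -d --name jellyfin -v /data/media:/media jellyfin/jellyfin
    docker run -d --name minio -v /data/minio:/data minio/minio server /data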
The fact that this giant issue with Snap is not made clear is my biggest problem with it. I had to figure it out myself over the course of two hours when there are zero warnings or error messages explaining it. What an absolutely unnecessary waste of time, when it could have warned me at install that if I wanted a completely functional version of Docker, I should use the apt package.
I will never use any Snap package again. This was such a bad experience that I probably won’t even be using Ubuntu Server going forward. I already use Fedora for desktop. And the fact that a few people here are basically saying it’s my fault for not already knowing the limitations imposed on Snap packages is just making it more obvious that Ubuntu has become a toxic distro. It’s sad, because Ubuntu got me into Linux back with Hardy Heron 8.04. I’ve been running Ubuntu servers since 9.10. I used to be excited every six months for the new Ubuntu release. It’s sad to see something you loved become awful.
The issue here is that Canonical pushed the snap install without warning about its reduced functionality. I don’t think highlighting the wildly different experience between a snap install and the Docker experience people are used to from the standard package is “bashing it just because it’s popular to hate on snap.” For example, if you take a fresh Ubuntu Server 22 install and use the snap package, not realizing that snaps have serious limitations that are never explicitly called out when the snap is offered during installation, you’re going to be confused unless you already have that knowledge. It also very helpfully masks everything, so debugging is incredibly difficult if you aren’t already aware of the snap limitations.
One big shared media volume has multiple benefits: each server just has to deal with its own user management, and there’s no server switching or trying to remember whether that one movie is on this or that server…
I have three Intel S3700s: one for the OS and two 400GB ones for a mirror pool (I might do a raidz1 as well). But getting anything with serious capacity (8-12TB usable) out of datacenter SSDs is really expensive. :(
Just rob a few banks, go to prison, meet a coke dealer, get out of prison and start selling coke, rise up the ranks until you can kill the current leader and become a drug kingpin, and finally realize that you still don’t have enough money for it because they are expensive as shit.
This. Also, yt-dlp and/or youtube-dl used to have an issue where, if the URL started with the video ID instead of the playlist ID, it downloaded just that video and not the whole playlist. Not sure if that’s still around, so just be aware.
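If you run into it, --yes-playlist should force the whole playlist even when the video ID comes first (the IDs below are placeholders):

    # download the playlist even though the URL leads with a single video
    yt-dlp --yes-playlist "https://www.youtube.com/watch?v=VIDEO_ID&list=PLAYLIST_ID"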
I’ve been doing Linux server administration for 20 years now. You’ll always have to duckduckgo things; you’ll never keep it all in your head, even for just a single server with a handful of services. Docker and containers really aren’t too hard. Just start small and build from there. If you can learn how the chroot command works, you’ve pretty much learned Docker. It’s just chroot with more features.
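Very roughly, and glossing over namespaces, cgroups, and images (the rootfs path here is made up):

    # chroot: run a shell with / remapped to a directory tree
    sudo chroot /srv/alpine-rootfs /bin/sh

    # docker: the same core idea, plus isolation, images, and networking
    docker run -it --rm alpine /bin/sh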
Yep same here. Professional IT for over 25 years. Nobody knows everything. It’s ok to fail. Just keep swimming. And when you do get something working…. that high is unbelievable. It’s like a drug addiction and will drive you to do more and more. Good luck!!!
I’ve been working on this on and off for a few months now. The more I learn, the deeper the hole gets. Ports and VPNs and UPNP and TCP and UDP and hosts and containers and firewalls and on and on. It’s a lot.
Many times I can’t get things working properly, if at all. Other times it works perfectly one day, and then several days later, after I’ve changed absolutely nothing, it no longer works.
My current goal is to get a Mobilizon instance and a Jitsi server running, to hopefully get a community started up there that meets up regularly to help each other, and to make onboarding easier.
I tried to ask for help around here and, while a few kind people did offer to help (and disappeared shortly thereafter), I was overwhelmingly lambasted for daring to ask for personal help.
Thank you, I managed to get it working with MediaMTX and DockoVPN. I still don’t know how I’d manage dynamic IP changes during the days I’m away; that would break the VPN.
For the dynamic IP address, you can get a free domain name from afraid.org or No-IP (or maybe others) and point your VPN at the domain name instead of the direct IP address. Then you can run a cron job script to make sure the IP address the domain points to stays up to date.
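With afraid.org, for example, the update is just a URL hit, so one crontab line covers it (YOUR_UPDATE_TOKEN is a placeholder for the token they give you):

    # refresh the DDNS record every 5 minutes
    */5 * * * * curl -s "https://freedns.afraid.org/dynamic/update.php?YOUR_UPDATE_TOKEN" >/dev/null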
Some systems (Debian) may require sudo usermod -a -G video www-data to make sure it works, because ffmpeg will be launched as the www-data user, which doesn’t have access to the video cameras.
It will even turn off the camera if nobody is connected;
Use ffmpeg -f v4l2 -list_formats all -i /dev/video0 to find what formats your camera supports;
Watch the stream from VLC with the URL: rtmp://device-ip/live/stream
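For reference, a hand-rolled version of that pipeline might look something like this; it’s only a sketch, assuming an RTMP server (MediaMTX or similar) is listening at device-ip and your camera supports these settings:

    # see what the camera can produce (as above)
    ffmpeg -f v4l2 -list_formats all -i /dev/video0

    # encode the webcam to H.264 and publish it to the RTMP endpoint
    ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset veryfast -f flv rtmp://device-ip/live/stream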
Not aware of any such project. I’d assume you’ll need some hardware anyway, since you need it for that level of access (ATX etc.). Not sure how that would be preferable to this.
I was thinking more about the basics, like USB input and getting the image+sound. For that you could get away with a special USB cable and a capture card. I’m just not aware of any software for it; I don’t think the original PiKVM stuff was ever ported to PC.
Maybe add one of those dummy HDMI or DisplayPort dongles so you don’t need to connect a monitor and can set the display resolution to whatever you want.
Yes. When loading small images, there’s no noticeable difference between local and NAS. When loading videos or large pics, there’s about a 2-second lag, then the video plays normally. I have 500/500 Mbps internet at home, and on the VPS side I think it’s a few Gbps; I consistently pull at least 200 Mbps between the two. I set the nofail mount option so that my OS boots up when the NAS is down/unreachable, and my container also starts up fine with the NAS down, though obviously it won’t play its content.
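For reference, the relevant fstab line looks something like this; NFS and the paths are just the shape of my setup, so treat it as a sketch:

    # /etc/fstab -- nofail lets the OS boot even if the NAS is unreachable
    nas:/export/media  /mnt/media  nfs  defaults,nofail,_netdev  0  0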