It’s actually a suggested configuration / best practice to NOT have container user IDs matching the host user IDs.
Ditch the idea of root and user in a docker container. For your containerized application, use 10000:10001. When you’re doing it right, you’ll only have one application and one “user” in the container anyway.
To be even more on the secure side, use a different random user ID and group ID for every container.
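A minimal Dockerfile sketch of that idea (the base image, binary name, and paths here are placeholders, not anything from the thread):

```dockerfile
FROM alpine:3.19

# Fixed, unprivileged IDs as suggested above (10000:10001).
RUN addgroup -g 10001 app && adduser -D -u 10000 -G app app

# Everything from here on runs unprivileged.
USER 10000:10001

COPY --chown=10000:10001 myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```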
This really depends on whether you want to interact with mounted volumes. In a production setting, containers are ephemeral and should essentially never be touched. Data is abstracted into stores like a database or object storage, and if you’re interacting with mounted volumes, it’s usually through another layer of abstraction, like Kibana reading Elastic indices. In a self-hosted setting, you might be containerizing to sidestep dependency hell on the local system, and the data is often tightly coupled to the local filesystem. It is much easier to match the container user to the desired local user to avoid constant sudo calls.
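For that self-hosted case, one way to match users is simply to pass your own IDs at run time (a sketch; the image and paths are placeholders):

```sh
# Run the container as the invoking host user, so files written into the
# bind mount come out owned by you rather than root or a high random UID.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/data:/data" \
  alpine:3.19 sh -c 'touch /data/hello && ls -ln /data'
```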
I had to check the community before responding. Since we’re talking self-hosted, your advice is largely overkill.
… basically anything. Yes. You will always find yourself in situations where the best practice isn’t the best solution.
In your described use case, one option would be to run the application inside the container as 10000:10001 but have it write its data into another directory that is set up for 1000:1001 (or whatever user you want to access the data with from your host), and just mount the volume there. This takes a bit more configuration effort than just running the application as 1000:1001 … but still :)
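A sketch of that setup, assuming the host user is 1000:1001 (substitute your own IDs); the group juggling makes the shared directory writable by both sides:

```dockerfile
FROM alpine:3.19

# App runs as 10000:10001; /data is owned by the host-side user 1000:1001.
RUN addgroup -g 10001 app \
    && adduser -D -u 10000 -G app app \
    && addgroup -g 1001 hostdata \
    && addgroup app hostdata \
    && mkdir /data \
    && chown 1000:1001 /data \
    && chmod 2770 /data    # setgid: new files inherit group 1001

# Use the name, not uid:gid, so the supplementary hostdata group applies.
USER app
VOLUME /data
```

Note that with a bind mount the host directory’s ownership wins, so give the directory the same 1000:1001 plus setgid treatment on the host side.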
Yep! The names are basically just a convenient way of referencing a user or group ID.
Under normal circumstances you should let the system decide what IDs to use, but in the confined environment of a docker container you can do pretty much what you want.
If you really, really, really want to create a user and group just set the IDs manually:
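Presumably something like this (shadow-utils syntax; BusyBox-based images such as Alpine use addgroup/adduser instead):

```sh
# Fixed IDs instead of letting the system pick the next free ones.
groupadd --gid 10001 app
useradd --uid 10000 --gid 10001 --no-create-home app
```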
So add your user to the docker group created when that package is installed and you’ll be able to run docker without sudo. You may need to log in again or newgrp docker before it works, though.
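In full, something like this (the group normally already exists after installing the package; --force makes the first step a no-op if so):

```sh
sudo groupadd --force docker
sudo usermod -aG docker "$USER"

# Takes effect on next login, or immediately in the current shell via:
newgrp docker
docker ps    # should now work without sudo
```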
Installed Nextcloud-AIO using the docker script, took about 4-5 terminal commands. Practically zero issues! Hopefully someone else can provide some help in the thread!
I have it set up. Try the AIO docker image. Once you get it set up, it pretty much just works. You just pick which office suite you want, check a few optional features if you want 'em, and it handles the rest for you. Most importantly, the AIO image is from Nextcloud themselves. They test it, and it always works because it is the blessed version from them. If you’re not a Linux guy, don’t try the other installation methods; they’re much, much more difficult.
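For reference, starting the AIO master container looks roughly like this (paraphrased from the AIO README; check the current README before copying, since ports and flags change between releases):

```sh
sudo docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker_aio_config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```

After that, the AIO interface on port 8080 walks you through the rest.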
I’ll give it a shot. I’ve tried so many different approaches already. I think I maybe tried to install AIO straight onto a linux vm; don’t recall how it got derailed. I did build a Lubuntu VM for experimentation. I really wanted to get an Ollama chatbot running to assist me in my future digital endeavors, but it just wouldn’t come together.
Make sure your backups are solid and can’t be deleted or altered.
In addition to normal backups, something like zfs snapshots also helps and makes it easier to restore if needed.
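For example (the dataset name is an assumption; use whatever backs your Nextcloud data):

```sh
sudo zfs snapshot tank/nextcloud@$(date +%F)

# See what you have, and roll back if something trashes the files
# (rolling back past newer snapshots needs -r, which destroys them):
zfs list -t snapshot tank/nextcloud
sudo zfs rollback tank/nextcloud@2025-01-01
```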
I think I remember seeing a nextcloud plugin that detects mass changes to a lot of files (like ransomware would cause). Maybe something like that would help?
Also enforce good passwords.
Do you have anything exposed to the internet that also has access to either nextcloud or the server it’s running on? If so, lock that down as much as possible too.
Fail2ban or similar would help against brute force attacks.
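A minimal jail sketch for that (paths and thresholds here are assumptions; pair it with the nextcloud filter definition from the Nextcloud admin manual):

```ini
# /etc/fail2ban/jail.d/nextcloud.local
[nextcloud]
enabled  = true
port     = 80,443
filter   = nextcloud
# Point this at wherever your instance writes nextcloud.log:
logpath  = /var/lib/nextcloud/data/nextcloud.log
maxretry = 3
bantime  = 86400
findtime = 43200
```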
The VM you’re running nextcloud on should be as isolated as you can comfortably make it. E.g., if you have a camera/IoT VLAN, don’t let the VM talk to it. Don’t let it initiate outbound connections to any of your devices, etc.
You can’t entirely protect against zero day vulnerabilities, but you can do a lot to limit the risk and blast radius.
For me it’s the opposite. I tried to use nextcloud for years, installing the normal way, and it always broke for no reason. I just started using it on docker and it has been perfect, fingers crossed.
Interesting, when I used docker on a proxmox build, it would give me trouble. Once I installed it the normal way on an Ubuntu build, it was good to go.
I wonder why that is?
Fingers crossed that it continues to work for you in the current configuration!
Yeah, the Docker version hated me, mainly due to it sometimes getting a bit behind on updates and then having schema mismatches if I ran an update that skipped the previous one. No issues with the Snap thus far.
I used to have this problem. I started pulling a version number (like 27) instead of “latest” so that I’d only pull minor releases when I did updates, and then I manually step the version up in the docker-compose file for major versions when I’m ready for them. (I don’t like to pull a major release until there have been 1 or 2 maintenance releases, since my nextcloud is fairly critical for my family.)
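The relevant fragment of that approach in compose terms (service and volume names are placeholders):

```yaml
services:
  nextcloud:
    # Pinning the major means "docker compose pull" only brings in 27.x
    # maintenance releases; bump the tag by hand for major upgrades.
    image: nextcloud:27
    restart: unless-stopped
    volumes:
      - nextcloud_data:/var/www/html

volumes:
  nextcloud_data:
```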
The solution for me is that I run Nextcloud on a Kubernetes cluster and pin a container version. Then every few months I update that version in my deployment yaml to the latest one I want to run, and run kubectl apply -f nextcloud.yml and it just does its thing. Never given me any real trouble.
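A skeleton of what that deployment could look like (names and the exact pinned tag are assumptions; only the pin-then-apply workflow comes from the comment above):

```yaml
# nextcloud.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:27.1.4   # edit this pin, then kubectl apply -f nextcloud.yml
          ports:
            - containerPort: 80
```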
Well… no… I have been self hosting it for several years over multiple major versions now. Only for Files, Calendar and Deck though. It was a bit hard to set up, but reading the general Apache and PHP documentation helped a lot.