It’s actually a suggested configuration / best practice to NOT have container user IDs matching the host user IDs.
Ditch the idea of root and user in a Docker container. For your containerized application, use 10000:10001. You’ll have only one application and one “user” in the container anyway when doing it right.
To be even more on the secure side use a different random user ID and group ID for every container.
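A minimal sketch of what that looks like at run time (the container and image names here are just placeholders):

```sh
# Run the app as a fixed, unprivileged UID:GID that intentionally
# doesn't correspond to any user on the host.
docker run -d --name myapp --user 10000:10001 myimage
```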
This is really dependent on whether or not you want to interact with mounted volumes. In a production setting, containers are ephemeral and should essentially never be touched. Data is abstracted into stores like a database or object storage. If you’re interacting with mounted volumes, it’s usually through a different layer of abstraction like Kibana reading Elastic indices. In a self-hosted setting, you might be sidestepping dependency hell on a local system by containerizing. Data is often tightly coupled to the local filesystem. It is much easier to match the container user to the desired local user to avoid constant sudo calls.
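For the self-hosted bind-mount case, a minimal sketch of that approach (paths and image name are made up) is to just run the container as your own host user:

```sh
# Run the container as the invoking host user so files written to the
# bind mount stay owned by you -- no sudo needed to touch them later.
docker run -d \
  --user "$(id -u):$(id -g)" \
  -v "$HOME/appdata:/data" \
  myimage
```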
I had to check the community before responding. Since we’re talking self-hosted, your advice is largely overkill.
… basically anything, yes. You will always find yourself in situations where the best practice isn’t the best solution.
In your described use case an option would be having the application inside the container running with 10000:10001 but writing the data into another directory that is configured to use 1000:1001 (or whatever the user is you want to access the data with from your host) and just mount the volume there. This takes a bit more configuration effort than just running the application with 1000:1001 … but still :)
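One way to wire that up, assuming the host user you care about is 1000:1001 (the directory path and image name are placeholders, and the exact permissions will depend on the app’s umask):

```sh
# Host side: data dir owned by the app's UID, group-owned by your host
# group; setgid bit (the 2) so new files inherit group 1001; group-writable.
sudo install -d -o 10000 -g 1001 -m 2775 /srv/myapp-data

# Run the app as 10000:10001 with 1001 as a supplementary group so it
# can write there; your host user (1000:1001) can then access the data
# without sudo.
docker run -d \
  --user 10000:10001 \
  --group-add 1001 \
  -v /srv/myapp-data:/data \
  myimage
```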
Yep! The names are basically just a convenient way of referring to a user or group ID.
Under normal circumstances you should let the system decide what IDs to use, but in the confined environment of a docker container you can do pretty much what you want.
If you really, really, really want to create a user and group just set the IDs manually:
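```sh
# e.g. in a Dockerfile RUN step -- "app" is just a placeholder name;
# -M skips creating a home dir, nologin blocks interactive logins
groupadd -g 10001 app
useradd -u 10000 -g 10001 -M -s /usr/sbin/nologin app
```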
I don’t have practical experience with ZFS, but my understanding is that it uses a lot of RAM… if that RAM is new, it might be worth checking it by booting memtest (for example) and just ruling it out.
Maybe also worth watching the system with nmon or htop (running in another tmux / screen pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
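Something like this gets you the split view (assuming tmux and htop are installed):

```sh
# One pane for the copy job, one with htop to watch memory pressure;
# `dmesg -w` in a third pane will show kernel errors as they happen.
tmux new-session \; split-window -h 'htop'
```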
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64 GB of RAM and then extend with an L2ARC SSD, assuming no other hardware errors.
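For reference, attaching a cache device later is a one-liner, assuming OpenZFS’s arc_summary tool is available; “tank” and the device path below are placeholders:

```sh
# Check how the ARC is doing first, then attach the SSD as L2ARC.
arc_summary | head -n 40
zpool add tank cache /dev/disk/by-id/nvme-YOUR_SSD-part1
```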
Just another thought… Maybe just format the drives as a massive EXT4 JBOD (just for a temp test) and copy the data again - just to see if ZFS is the problem… maybe it’s something else altogether? Maybe - and I hope not - the USB source drive is failing after long reads?
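A throwaway version of that test could look like this; device names are placeholders, so double-check them before formatting anything:

```sh
# Put plain ext4 on one target drive and repeat the copy -- if it still
# stalls, the problem probably isn't ZFS.
mkfs.ext4 /dev/sdX1
mount /dev/sdX1 /mnt/test
rsync -a --info=progress2 /mnt/usb-source/ /mnt/test/

# And check the USB source drive's health while you're at it
# (USB bridges sometimes need '-d sat' for smartctl to see the drive):
smartctl -a -d sat /dev/sdY
```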
I believe there’s another issue. ZFS has been using nearly all RAM (which is fine, I only need RAM for system and ZFS anyway, there’s nothing else running on this box), but I was pretty convinced while I was looking that I don’t have dedup turned on. Thanks for your suggestions and links!
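For anyone checking the same things, dedup status and current ARC usage can be verified like this (“tank” is a placeholder pool name):

```sh
# Confirm dedup is really off for the pool and all datasets:
zfs get -r dedup tank

# Current ARC size vs. its configured ceiling (Linux OpenZFS):
awk '$1=="size" || $1=="c_max"' /proc/spl/kstat/zfs/arcstats
```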
Thank you! I ended up connecting them directly to the main board and had the same result with rsync: eventually the zpool becomes inaccessible until reboot (of course there may be other ways to recover it without a reboot).
So add your user to the docker group created when that package is installed and you’ll be able to run docker without sudo. You may need to log in again or run `newgrp docker` before it takes effect, though.
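Concretely, something like:

```sh
# Add yourself to the docker group, then refresh group membership in
# the current shell (or just log out and back in):
sudo usermod -aG docker "$USER"
newgrp docker
docker ps   # should now run without sudo
```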
Yea I’ve been using nextcloud for a while and it’s fine.
I remember when I used owncloud before nextcloud was even a thing and the upgrade experience was absolute shit.
These days it’s just fine.
It’s not both, it’s Akamai Connected Cloud. The Linode brand has been retired. The vestiges of the Linode brand that are still visible are merely due to Akamai’s sloppy and disordered integration effort. They’ll likely retain the Linode.com domain so as not to break existing API calls, but the switch away from Linode.com as the primary domain, and removal of the name Linode from technical documents and design elements is ongoing.
I’m absolutely at that point with Nextcloud. I kind of didn’t want to go the Syncthing route, but I’ll probably give it a shot anyway since none of the NC alternatives seem any better.
Any guidance on this? I looked into Syncthing at one time to back up Android phones and got overwhelmed very quickly. I’d love to use it in a similar fashion to NextCloud for syncing between various computers too.
Well, it works in a different way than NextCloud. You don’t have a server, instead you just make a share between your computers and they are all peers.
It takes some getting used to the idea, but it’s actually much simpler than NextCloud.
@squidspinachfootball @marcos Syncthing syncs. It can do one-way syncs (send-only/receive-only folders), but if your workflow is complex and depends on one-way syncs, it’s probably not what you want.
Sync things between operational systems, then replicate to non-operational systems, and back up to off-site, segregated systems.
I was very intimidated as well, I’ll try to simplify it, but as always check the documentation ;)
This is the process I used to sync RetroArch saves between my Windows PC and Android phone (works well, would recommend; Pokémon is awesome). I’ve never done it on Linux, though I assume it’s not too different.
I downloaded the SyncTrayzor program so that it would run in the tray; again, I’m not sure what the equivalent is on Linux, or whether it would even be necessary there.
No shade to the writers, but the documentation isn’t super noob-friendly, as I figured out. I’d recommend cutting out all the fluff and boiling it down to the bare essentials: download the program (whichever one seems right for your device; there’s an app for Android) and follow the process for syncing (I believe I used a video guide, but it’s not actually as complicated as it seems).
If you need specific help I’d be happy to answer questions, though I only understand a certain amount myself XD
It really wasn’t all that complicated for me:
1. Install the client on two devices.
2. Set a share up on one device.
3. On the other device, hit “Add Device” and put the share ID in.
4. Go back to the first device’s admin page and allow the share.
I have been running the new ownCloud (oCIS) for 2+ years now; with some quirks and very basic functionality, it has survived multiple updates without major complications.
I really don’t understand all those posts: I use nginx, AppArmor, even partially ModSecurity; I use the official Collabora Office Debian package, face recognition, email; I update regularly (holding off on major upgrades until every app I use has been updated), etc., and I have literally never had a problem in the last 5 years except for my own experiments. True, only 5 people use my instance, but Nextcloud is rock solid for me.
Likewise. I have been running it for years with almost no problems that I can think of. My setup is pretty vanilla: Apache, MySQL, running in a container behind a reverse proxy. I keep it as up to date as possible. Only 3 people use mine, and I don’t use very many apps: files, notes, bookmarks, calendar, email.
I was trying for the third time to install the Collabora Office app in Nextcloud. I think it’s hilarious that they know it’s going to time out and they give you a bogus command to run to fix it. So unnecessarily irritating.