selfhosted


Dirk, in Uid/gid in docker containers don't match the uid/gid on the server?

It’s actually a suggested configuration / best practice to NOT have container user IDs matching the host user IDs.

Ditch the idea of root and user in a docker container. For your containerized application, use 10000:10001. You’ll have only one application and one “user” in the container anyway when doing it right.

To be even more on the secure side use a different random user ID and group ID for every container.

thesmokingman,

This is really dependent on whether or not you want to interact with mounted volumes. In a production setting, containers are ephemeral and should essentially never be touched. Data is abstracted into stores like a database or object storage. If you’re interacting with mounted volumes, it’s usually through a different layer of abstraction like Kibana reading Elastic indices. In a self-hosted setting, you might be sidestepping dependency hell on a local system by containerizing. Data is often tightly coupled to the local filesystem. It is much easier to match the container user to the desired local user to avoid constant sudo calls.
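A minimal sketch of that pattern, assuming your host user is 1000:1000 (the image, path, and IDs here are just placeholders):

mkdir -p "$HOME/data"
# Run the container as the host user so anything written to the bind
# mount comes out owned by 1000:1000 and never needs sudo to touch.
docker run --rm \
  --user 1000:1000 \
  -v "$HOME/data:/data" \
  alpine:latest sh -c 'touch /data/hello && ls -ln /data'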

I had to check the community before responding. Since we’re talking self-hosted, your advice is largely overkill.

Dirk,

This is really dependent on […]

… basically anything. Yes, you will always run into cases where the best practice isn’t the best solution.

In your described use case, one option would be having the application inside the container run as 10000:10001 but write its data into another directory that is configured to use 1000:1001 (or whatever user you want to access the data with from your host), and just mount the volume there. This takes a bit more configuration effort than just running the application as 1000:1001 … but still :)
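As a rough sketch (the image name and paths are made up): one wrinkle is that 10000:10001 can’t write into a 1000:1001 directory on its own, so giving the container a supplementary group and making the directory group-writable with setgid ties it together.

# On the host: data directory owned by your login user, group-writable,
# setgid so files created inside keep the group:
sudo install -d -o 1000 -g 1001 -m 2775 /srv/myapp-data

# The app still runs as 10000:10001 but also gets supplementary
# group 1001 so it can write into the mount:
docker run -d \
  --user 10000:10001 \
  --group-add 1001 \
  -v /srv/myapp-data:/data \
  myapp:latest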

Appoxo,

Do I need to actually create the user in advance or can I just choose a string as I see fit?

Dirk,

You don’t need to create the user first. Here’s the simplest I can come up with:


FROM alpine:latest
COPY myscript.sh /app/myscript.sh
USER 10000:10001
CMD ["sh", "/app/myscript.sh"]

This simply runs /app/myscript.sh with UID 10000 and GID 10001.
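To sanity-check it (the tag name is arbitrary):

docker build -t user-demo .
# Override CMD with `id` to confirm the IDs; this should print
# something like: uid=10000 gid=10001 groups=10001
docker run --rm user-demo id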

Appoxo,

Wasn’t aware that you can just pull IDs out of thin air.
Thought you had to create the user and ID manually and then be able to use it.

Dirk,

Yep! The names are basically just a convenient way of referencing a user or group ID.

Under normal circumstances you should let the system decide what IDs to use, but in the confined environment of a docker container you can do pretty much what you want.

If you really, really, really want to create a user and group just set the IDs manually:


FROM alpine:latest
COPY myscript.sh /app/myscript.sh
RUN addgroup -g 10001 mycoolgroup && adduser -D -u 10000 -G mycoolgroup mycooluser
USER mycooluser:mycoolgroup
CMD ["sh", "/app/myscript.sh"]

Just make sure to stay at or above 10000 so you won’t accidentally re-use IDs that are already defined on the host.
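If you want to double-check what the host has already handed out:

# List any accounts on the host with a UID of 10000 or above:
awk -F: '$3 >= 10000 {print $3, $1}' /etc/passwd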

scottmeme, (edited )

My go-to for user and group IDs is 1234:1234

Cyber, in Question - ZFS and rsync

I don’t have practical experience with ZFS, but my understanding is that it uses a lot of RAM… if that RAM is new, it might be worth checking it by booting up memtest (for example) and just ruling that out.

Maybe also worth watching the system with nmon or htop (running in another tmux / screen pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
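On Linux with OpenZFS you can also watch the ARC directly (the kstat path is Linux-specific, and arc_summary may need the ZFS utilities installed):

# Current ARC size and its configured bounds:
grep -E '^(size|c_max|c_min) ' /proc/spl/kstat/zfs/arcstats
# Or a full report, if available:
arc_summary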

isles, (edited )

Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM; I think it’s only 32GB. I’ll try this out.

Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to go up to 64GB of RAM and then extend with an L2ARC SSD, assuming no other hardware errors.
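For reference, adding an L2ARC device later doesn’t require rebuilding the pool (pool name and device path below are placeholders):

# Attach an SSD to an existing pool as a cache (L2ARC) device:
sudo zpool add tank cache /dev/disk/by-id/ata-YOUR-SSD

One caveat: the L2ARC’s headers themselves live in RAM, so upgrading the RAM first is the right order.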

sonstwas, (edited )

Based on this thread it’s the deduplication that requires a lot of RAM.

See also: wiki.freebsd.org/ZFSTuningGuide

Edit: from my understanding the pool shouldn’t become inaccessible, though, only get slow. So there might be another issue.

Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: github.com/openzfs/zfs/issues/10251
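Whether dedup is actually in play is quick to confirm (the pool name is a placeholder):

# Dedup is off by default; check the property on every dataset:
zfs get -r dedup tank
# If a dedup table exists, -D prints its histogram:
sudo zpool status -D tank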

Cyber,

Just another thought… Maybe just format the drives as a massive EXT4 JBOD (just for a temp test) and copy the data again - just to see if ZFS is the problem… maybe it’s something else altogether? Maybe - and I hope not - the USB source drive is failing after long reads?
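Something like this for the temp test (the device name is a placeholder, and mkfs wipes the disk!):

sudo mkfs.ext4 /dev/sdX1
sudo mkdir -p /mnt/ext4test
sudo mount /dev/sdX1 /mnt/ext4test
# Same copy as before, with progress, to see if it stalls the same way:
rsync -a --info=progress2 /path/to/source/ /mnt/ext4test/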

isles,

I believe there’s another issue. ZFS has been using nearly all the RAM (which is fine, I only need RAM for the system and ZFS anyway; there’s nothing else running on this box), but I was pretty convinced while I was looking that I don’t have dedup turned on. Thanks for your suggestions and links!

possiblylinux127, in Question - ZFS and rsync

Have you tried running it overnight to make sure it’s not just a performance thing?

isles,

I did, great suggestion. It never recovered.

LufyCZ, in Question - ZFS and rsync

One thing I haven’t seen mentioned here: ZFS can be quite finicky with some SATA cards, especially RAID cards.

I suggest you connect the hard drives to the motherboard directly and test again.

isles,

Thank you! I ended up connecting them directly to the main board and had the same result with rsync: eventually the zpool becomes inaccessible until reboot (of course there may be other ways to recover it without rebooting).
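If anyone hits the same thing: next time it wedges, these are worth capturing before rebooting (assuming they still respond):

sudo zpool status -v                # pool state plus read/write/checksum errors
sudo zpool events -v | tail -n 40   # recent ZFS error events
sudo dmesg | tail -n 50             # kernel messages around the hang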

Starbuck, in Miro/Figjam alternative?

I wish I could fully endorse Excalidraw, but it only partially works in self-hosted mode. For a single user it’s fine, but not much works beyond that.

scottrepreneur,

Thoughts on the Obsidian plugin as a partial self-host solution here? Can’t quite tell how much it relies on their instance.

Moonrise2473, in Uid/gid in docker containers don't match the uid/gid on the server?

Checked .bash_history; looks like I installed Docker in the new rootless mode:


wget get.docker.com
ls
mv index.html docker.sh
chmod +x docker.sh
./docker.sh
dockerd-rootless-setuptool.sh install
sudo dockerd-rootless-setuptool.sh install
sudo apt install uidmap
dockerd-rootless-setuptool.sh install

Now I need to see how to restore it to work the traditional way, or I will go crazy with the permissions…

Moonrise2473,

I fixed it:

for future reference:

  • from docs.docker.com/engine/security/rootless/…, run dockerd-rootless-setuptool.sh uninstall
  • delete the user data (warning: I wasn’t using any Docker volumes and I had no data to lose!!!) using the command that the previous script tells you
  • add your user to the docker group and use the traditional “run docker as root” way: docs.docker.com/engine/…/linux-postinstall/
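The post-install bits from the linked Docker docs boil down to:

sudo groupadd docker             # the group usually already exists
sudo usermod -aG docker "$USER"  # let your user talk to the daemon
newgrp docker                    # or log out and back in
docker run hello-world           # sanity check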
Atemu,

Why go through all of that complexity when you could just sudo apt install docker?

Moonrise2473,

I don’t want to type sudo before every single docker command.

cheet,

So add your user to the new docker group made on install of that package and you’ll be able to run docker without sudo. You may need to log in again or run newgrp docker before it works, though.

Voroxpete, (edited )

You can do that with regular docker. Just add your user to the docker group.

(don’t forget to log out and log in again after adding new groups to your user)

twiked,

Niche use case, but you can also use newgrp to run commands with a group recently added to your user, without having to log out and back in yet.

throwafoxtrot,

Or start a new session by typing bash when you’re already in bash.

aadil, in Miro/Figjam alternative?

tldraw.com is great

TCB13,

Yeah, that’s a cool thing.

suzune, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

I’ve been updating Nextcloud in-place (manually) for multiple major versions without any flaws. What is the problem?

InEnduringGrowStrong,

Yea, I’ve been using Nextcloud for a while and it’s fine.
I remember when I used ownCloud before Nextcloud was even a thing, and the upgrade experience was absolute shit.
These days it’s just fine.

possiblylinux127, in Linode Alternative Suggestions for Small Projects

That’s odd. I personally like Linode and its intuitive interface.

promitheas,

I never even got to see the interface haha

EncryptKeeper,

Akamai Connected Cloud*

It’s no longer Linode.

possiblylinux127,

It’s both.

EncryptKeeper, (edited )

It’s not both, it’s Akamai Connected Cloud. The Linode brand has been retired. The vestiges of the Linode brand that are still visible are merely due to Akamai’s sloppy and disordered integration effort. They’ll likely retain the Linode.com domain so as not to break existing API calls, but the switch away from Linode.com as the primary domain, and removal of the name Linode from technical documents and design elements is ongoing.

JoeKrogan, (edited ) in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

No. If I have to keep fixing it, it is not worth my time.

I installed ownCloud years ago, came to the same conclusion, and just got rid of it. I use Syncthing nowadays, though it’s not the same thing.

atmur,

I’m absolutely at that point with Nextcloud. I kind of didn’t want to go the Syncthing route, but I’ll probably give it a shot anyway since none of the NC alternatives seem any better.

linearchaos,

I tried NC for a while; it would have taken me till the end of days to import all of my files.

I suspect I could keep it running by doing lockstep backups and updates. But it was just so incredibly slow.

I just want something that would give me remote access to my files with meta information about my files and a good search index.

Cupcake1972,

Pydio Cells/Seafile?

linearchaos,

I’ll look at those ASAP, super hopeful

marcos,

Yep, I’ve adapted all of my setup to syncthing, and never looked back.

0110010001100010,

Any guidance on this? I looked into Syncthing at one time to back up Android phones and got overwhelmed very quickly. I’d love to use it in a similar fashion to Nextcloud for syncing between various computers too.

marcos,

Well, it works in a different way than Nextcloud. You don’t have a server; instead you just make a share between your computers, and they are all peers.

It takes some getting used to the idea, but it’s actually much simpler than Nextcloud.

squidspinachfootball, (edited )

So if I wanted to sync photos from my phone to the computer, then delete the local copies on my phone to save space, that would not work?

E: But keep the copies on the computer, of course

marcos,

You would have to move them into some folder you are not syncing.

rhys,

@squidspinachfootball @marcos Syncthing syncs. It does do one-way syncs, but if your workflow is complex and depends on one-way syncs, that’s probably not what you want.

Sync things between operational systems, then replicate to non-operational systems, and back up to off-site, segregated systems.

FrostKing,

I was very intimidated as well. I’ll try to simplify it, but as always, check the documentation ;)

This is the process I used to sync RetroArch saves between my Windows PC and Android phone (works well, would recommend; Pokémon is awesome). I’ve never done it on Linux, though I assume it’s not too different.

docs.syncthing.net/intro/getting-started.html

I downloaded the SyncTrayzor program so that it would run in the tray; again, I’m not sure what the equivalent is, or whether this would be necessary, on Linux.

No shade to the writers, but the documentation isn’t super noob-friendly, as I found out. I’d recommend cutting out all the fluff and boiling it down to the bare essentials: download the program (whichever one seems right for your device; there’s an app for Android) and follow the process for syncing things (I believe I used a video guide, but it’s not actually as complicated as it seems).

If you need specific help I’d be happy to answer questions, though I only understand a certain amount myself XD

linearchaos,

It really wasn’t all that complicated for me: install the client on two devices, set a share up on one device, go to the other device, hit “Add device”, and put the share ID in. Then go back to the first device’s admin page and allow the share.

Discover5164,

I have been running the new ownCloud (oCIS) and, with some quirks and very basic functionality, it’s been running for 2+ years and survived multiple updates without major complications.

flatpandisk,

Came to the same conclusion too.

danhab99, in Linode Alternative Suggestions for Small Projects

Vultr has some pretty cheap prices… I like them

EncryptKeeper,

Seconding Vultr. The usability is pretty close to Linode’s, with more convenient DC locations.

I have noticed some issues with network throughput, though I don’t use mine for high bandwidth applications and I am on the cheapest tier.

nullpotential, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

The simple fix is to not use Nextcloud.

TBi,

What’s the alternative?

jkjustjoshing, (edited ) in Self-hosted media tracker recommendations?

This is for music, not watched content, but Maloja and Multi Scrobbler are a pretty nice setup.

No recommendations for your actual post, but I thought this might be useful for someone else.

savedbythezsh,

I was looking for watched content, but only because I didn’t think about music before! Will play around with it, thanks for the rec

tswerts, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

This got me googling Nextcloud and I think I’m going to give it a try 😱

butt_mountain_69420,

Seriously homie, unless you’re a fucking linux docker nerdshit wizard, you should find another way.

tswerts,

Thanks for the warning 🙂 Sometimes I still think I have as much spare time as 10 years ago 😉

butt_mountain_69420,

You could be a legless NEET and not have enough time to get this fucking bullshit to work correctly.

Vega, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

I really don’t understand all these posts. I use nginx, AppArmor, partially even ModSecurity; I use the official Collabora Office Debian package, face recognition, email; I update regularly (holding off on major upgrades until every app I use has been updated), etc., and I have literally never had a problem in the last 5 years except from my own experiments. True, only 5 people use my instance, but Nextcloud is rock solid for me.

multicolorKnight,

Likewise. I have been running it for years with almost no problems that I can think of. My setup is pretty vanilla: Apache, MySQL, running in a container behind a reverse proxy. I keep it as up to date as possible. Only 3 people use mine, and I don’t use very many apps: files, notes, bookmarks, calendar, email.

butt_mountain_69420,

I was trying for the 3rd time to install the Collabora Office app in Nextcloud. I think it’s hilarious that they know it’s going to time out and they give you a bogus command to run to fix it. So unnecessarily irritating.
