selfhosted


Moonrise2473, in Public DNS server with gui

I use Technitium, but it’s like Pi-hole: designed for a few concurrent users on a local network. Do you instead want anyone in the world to be able to use your DNS?

But you would only attract bad actors; normal users won’t use a random DNS server, since it could redirect specific sites to phishing pages.

possiblylinux127,

I’m going to use it to resolve my domain.

Moonrise2473,

Ah, you want to host a name server.

That’s about the hardest thing to self-host. Can’t you just use the free name server service from your registrar or Cloudflare?

IMHO even the most dedicated sysadmin wouldn’t think to self-host that.

possiblylinux127,

I’m starting to realize it would be a massive headache.

terminhell, in RaspberryPi becoming unresponsive at random intervals

Not sure if the rpi3 can use the 64-bit version, or if it’s possible for it to use an SSD like the 4 can?

un_ax, in Radarr: Path: Folder '/data/' is not writable by user 'abc'

Try running the chown outside of the container: chown -R 1000:1000 /home/privatenoob/media/storage1/Filmek

PrivateNoob,

Doesn’t work either, whether I run this before starting/building the container or while it’s running. Thanks for the help tho!

jores, in RaspberryPi becoming unresponsive at random intervals

@AverageGoob I have this issue with one of my hosts as well. It appears to be a problem with the micro SD card. Same card, different pi = same problem. I'm currently working around it with a watchdog but will need to replace the card soon.

Are you running your OS from USB or from a micro SD card?

a_fancy_kiwi,

I’d bet $1 it’s the SD card. My 3B+ used to have the same problem. Been running pis off some sort of SSD ever since, no issues.

AverageGoob,

I’d be willing to try this. How do you have it connected? Just using an external USB-attached one?

a_fancy_kiwi, (edited )

I upgraded to the Pi 4, but I use this case. It has a daughter board that lets me use an M.2 SATA SSD over USB. But any USB-to-SATA adapter should work fine.

jores, (edited )

@a_fancy_kiwi I agree, same here. This is the last pi that's running off an SD card with services that do "significant" disk I/O. I have a few zeros that only really write to the card for OS updates. Their job is to collect data and send it via the network. I haven't had issues with that kind of workload using micro SD cards.

Edit: For Pis with write workloads I'm using basic USB3 SSDs. Didn't have good results with USB sticks though.

notfromhere,

The Pi 3B has a dedicated bus for the SD card, but Ethernet and USB share bandwidth. Enable zram, disable all swap, and keep using the SD card.
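On Raspberry Pi OS / Debian that's roughly the following (just a sketch; package and service names are assumed, adjust the size to taste):

```
# Stop swapping to the SD card
sudo systemctl disable --now dphys-swapfile

# Compressed swap in RAM instead (zram-tools on Debian/Raspberry Pi OS)
sudo apt install -y zram-tools
printf 'ALGO=zstd\nPERCENT=50\n' | sudo tee /etc/default/zramswap
sudo systemctl restart zramswap
```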

AverageGoob,

I am running it from an SD card. Did setting up the watchdog ultimately work for you? I did come across a watchdog as a possible workaround.

jores,

@AverageGoob The watchdog saves me from rebooting the host manually, but at the risk of data loss (though no more than from a locked-up SD card). I configured a custom script that writes to a file; when the card has problems, the watchdog kicks in. To keep the script from stressing the card even more, it only writes to the file every few minutes.
As you said it's only a workaround. I'll move the stuff on the problematic host to a VM with SSD shortly.
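A minimal sketch of that kind of file-based test with the Linux watchdog daemon (paths and intervals here are only illustrative, not the exact setup):

```
# Heartbeat file, touched by cron every few minutes; if the card locks up,
# the writes stop and the file's mtime no longer changes.
sudo touch /var/heartbeat
echo '*/5 * * * * root touch /var/heartbeat' | sudo tee /etc/cron.d/heartbeat

# Tell the watchdog daemon to reboot if the file hasn't changed in 10 minutes
sudo tee -a /etc/watchdog.conf <<'EOF'
watchdog-device = /dev/watchdog
file   = /var/heartbeat
change = 600
EOF
sudo systemctl enable --now watchdog
```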

MajorHavoc, (edited ) in RaspberryPi becoming unresponsive at random intervals

I’ve had this happen when I had too many USB devices plugged into it. It was suffering power undervoltage and acting unresponsive while trying to compensate. I solved it with a powered USB hub.

Edit: I’ve had pairing it with an off-brand power brick cause the same problem, too. Apparently the 3 and later Pis really want tight power regulation, and some of the cheapo bricks I had lying around, while providing the right volts and amps, didn’t control the variation well enough for the modern Pi.

AverageGoob,

That’s the weird part: I don’t have any USB devices attached. I have Ethernet, the power cable, and the fan on the case has pins going to some headers.

The case did come with another power supply so maybe I’ll try that and see if anything changes.

Shjosan, (edited ) in Radarr: Path: Folder '/data/' is not writable by user 'abc'

Drop the / in “/data” for the chown command. Now it is looking for a data folder in root, and not the one in “Filmek”.

Don’t know if it will help with your issue though.
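In other words, the difference is just this (the UID/GID 1000 is taken from the earlier suggestion):

```
chown -R 1000:1000 /data   # absolute path: the "data" folder at the filesystem root
chown -R 1000:1000 data    # relative path: the "data" folder inside the current directory (Filmek)
```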

ShitpostCentral, in Public DNS server with gui

Be sure not to create an open resolver, something commonly used in DDoS attacks. serverfault.com/…/what-is-an-open-dns-resolver-an…

Shdwdrgn,

This right here. As a member of the OpenNIC project, I used to run an open resolver and this required a lot of hands-on maintenance. Basically what happens is someone sends a very small packet requesting the lookup of something which returns a huge amount of data (like DNSSEC records). They can make thousands of these requests in a short period, attempting to flood out the target domain’s DNS servers and effectively take them offline, by using your open server as the attacker.

At the very least, you need to have strict rate-limiting controls on DNS lookups. And since the requests come in through UDP, they can spoof their IP address so you can’t simply block an attacker. When I ran into this issue, I wrote up scripts to monitor for a lot of requests to the same domain name and outright block those until the attack stopped. It wasn’t a great solution, but it did at least make sure my system wasn’t contributing to an attack.

Your best bet is to only respond to DNS requests for your own domain(s). If you really want an open resolver, think about limiting it by creating some sort of sign-up method (for instance, ddns servers use a specific URL to register the changing IP of known users), but still keep the rate-limiting in place.
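For example, an authoritative-only setup with rate limiting looks roughly like this in BIND 9 (a sketch with Debian default paths; if you stay on Technitium, look for its equivalent settings in the web admin UI):

```
sudo tee /etc/bind/named.conf.options <<'EOF'
options {
    directory "/var/cache/bind";
    recursion no;                  // only answer for zones you host
    allow-transfer { none; };
    rate-limit {
        responses-per-second 10;   // response rate limiting (RRL)
        window 5;
    };
};
EOF
sudo systemctl reload bind9
```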

BearOfaTime, (edited ) in Help me get started with VPN

Tailscale can meet each of your bullet points.

Don’t bother rolling your own VPN; just use Tailscale and install the client on your other devices (they have clients for every OS).

This creates an encrypted virtual network between your devices. It can even enable access to hardware, like printers (or anything with an IP address) by enabling Subnet Routing.
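Enabling a subnet route is roughly this (a sketch assuming your LAN is 192.168.1.0/24; the route still has to be approved in the admin console):

```
# On the machine acting as the subnet router
sudo sysctl -w net.ipv4.ip_forward=1
sudo tailscale up --advertise-routes=192.168.1.0/24

# On other Linux devices, accept the advertised routes
sudo tailscale up --accept-routes
```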

To provide access to specific resources for other people, you can use the Funnel feature, which provides an entrance into your Tailscale Network for the specified resources, fully encrypted, from anywhere. No Tailscale client required.

And if you have friends who use Tailscale, you can use the sharing feature to invite them to connect to specific devices on your tailnet (again, for specified resources) from their own tailnet.

willya, (edited ) in Ubergeek77 Lemmy instance problem?

As in you upgraded from a previous Lemmy? More than likely your database is migrating and it can take a while. ~30 minutes or more depending on your server.
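If it is the migration, you can watch it happen in the Lemmy container's logs (service name assumed from the standard docker compose setup):

```
docker compose logs -f lemmy
```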

lemmyselfhosted,

It’s a brand new deployment.

qjkxbmwvz, in VPN to home network options

As others have said, I’d play with routing/IP forwarding such that being VPN’d to one machine gives you access to everything — basically I would set it up as a “road warrior” VPN (but possibly split tunnel on the client [yes I know, WireGuard doesn’t have servers or clients but you know what I mean]).
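A split-tunnel client config would look roughly like this (all keys, addresses and the endpoint are placeholders; 10.8.0.0/24 is an assumed tunnel subnet and 192.168.1.0/24 an assumed home LAN):

```
cat <<'EOF' | sudo tee /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.2/24
PrivateKey = <client-private-key>
DNS = 192.168.1.53        # optional, e.g. a DNS server on the LAN

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Split tunnel: only the VPN subnet and the home LAN go through the tunnel
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
EOF

sudo wg-quick up wg0
```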

Alternately, I think you could do some reverse proxy magic such that everything goes through the WireGuard box — a.lan goes to service A, b.lan to service B, etc., but if you have non-http services this may be a little more cumbersome.

Fizz, in Ubergeek77 Lemmy instance problem?

After the update a lot of users had to clear their browser cache. Possibly this is the issue?

angelsomething, in Help me get started with VPN

Check out Twingate. It’s super easy and has granular controls.

N0x0n, in Adding services to an existing Docker nginx container

This is how I do it; not saying it’s the best way, but it serves me well :).

For each type of application, one docker-compose.yml. This keeps all the linked containers in one file, but all your different applications stay separate!

Every application lives in its respective folder:

  • home/user/docker/app1/docker-compose.yml
  • home/user/docker/app2/docker-compose.yml
  • home/user/docker/app3/docker-compose.yml

Everything is behind an application proxy (Traefik in my case) and served with a self-signed certificate.

I access all my apps through their domain name on my LAN with wireguard.
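A minimal sketch of one of those per-app files (the image, hostname and shared "proxy" network are placeholders, and it assumes Traefik is already running and attached to that network):

```
mkdir -p ~/docker/app1
cat > ~/docker/app1/docker-compose.yml <<'EOF'
services:
  app1:
    image: nginx:alpine
    networks: [proxy]
    labels:
      - traefik.enable=true
      - traefik.http.routers.app1.rule=Host(`app1.home.lan`)
      - traefik.http.routers.app1.tls=true

networks:
  proxy:
    external: true
EOF

docker compose -f ~/docker/app1/docker-compose.yml up -d
```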

mudeth,

Yes this is what I want to do. My question is how docker manages shared processes between these apps (for example, if app1 uses mysql and app2 also uses mysql).

Does it take up the RAM of 2 mysql processes? It seems wasteful if that’s the case, especially since I’m on a low-RAM VPS. I’m getting conflicting answers, so it looks like I’ll have to try it out and see.

N0x0n,

Nah, that’s not how it works! I have over 10 applications and half of them have databases; that’s the prime objective of containers! Less resource-intensive and easier to deploy on low-end machines. If I had to deploy 10 VMs for my 10 applications, my computer would not be able to handle it!

I have no idea how it works underneath; that’s a more technical question about how container engines work. But if you SearX it or ask ChatGPT (if you use that kind of tool) I’m sure you will find out how it works :).

mudeth,

This is promising, thanks!

rambos, in Help me get started with VPN

Not an expert, but basically you should port forward WireGuard’s port 51820 to your server, install the WireGuard server, create client(s), load the QR code (or config) on your android/laptop, and you are set. Pi-hole DNS and everything else should work just like when you are on home wifi.
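The key generation and QR step on the server side is roughly this (a sketch; assumes wireguard-tools and qrencode are installed, file names are arbitrary):

```
umask 077
wg genkey | tee server.key | wg pubkey > server.pub
wg genkey | tee client.key | wg pubkey > client.pub

# After writing the client's .conf, render it as a QR code for the Android app to scan
qrencode -t ansiutf8 < client.conf
```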

You can keep your CF for public access, but do you really need to port forward 80 and 443 if you are using CF tunnels? (I thought you don’t, but I never used CF. It feels safer to have CF tunnels if you don’t need to port forward, but then you have a middleman you have to trust.)

PlutoniumAcid,

Thank you for providing specific steps that I can take! I will look into this.

No I do not use cloudflare tunnels, just regular cloudflare to publish my services to the whole world - which is a concern of course.

Going with a connection from my device via wireguard sounds like just the right thing to do.

ShortN0te, in Adding services to an existing Docker nginx container

So from what I get reading your question, I would recommend reading more about containers, compose files, and how they work.

To your question: I assume that when you talk about adding to a container, you are actually referring to compose files (often called ‘stacks’)? Containers have basically no computational overhead.

I keep my services in separate compose files. Every service that needs a DB gets its own. This helps keep things simple and modular.

Need to upgrade the DB of one service? I do just that and can leave everything else untouched.

Also, compose typically creates a network automatically in which all the services of that stack communicate. Keeping the compose files separate helps isolate them a little bit with the default settings.

mudeth,

Aren’t containers the product of compose files? i.e. the compose files spin up containers. I understand the architecture, I’m just not sure about how docker streamlines separate containers running the same process (eg, mysql).

I’m getting some answers saying that it deduplicates, and others saying that it doesn’t. It looks more likely that it’s the former though.

ShortN0te,

A compose file is just the configuration of one or many containers. The container image is downloaded from the chosen registry and pretty much does not get touched.

A compose file ‘composes’ multiple containers together. That’s where the name comes from.

When you run multiple databases, they run in parallel, so every database has its own processes. You can even see them on the host system by running something like top or htop. Container images themselves can get deduplicated: images that contain the same layer simply reuse the already-downloaded files from that layer. A layer is nothing more than a bundle of files. For example, you can choose an ‘ubuntu’ layer as the base of your container image, and every image you download that uses that same layer will simply reuse those files at creation time. But that basically does not matter; we are talking about a few tens or hundreds of MB in extreme cases.

But importantly, those files are only shared statically; changing a file in one container does not affect the others. Every container has its own isolated filesystem.

I understand the architecture, I’m just not sure about how docker streamlines separate containers running the same process (eg, mysql).

Quite simple, actually: it gives every container its own environment thanks to namespacing. Every process thinks (more or less) that it is running on its own machine.

There are quite simple Docker implementations with just a couple hundred lines of code.
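You can see both effects on the host (assuming a couple of MySQL/MariaDB containers are running):

```
# Each database container shows up as its own mysqld process with its own RAM
ps -eo pid,rss,args | grep [m]ysqld

# ...while shared image layers are only stored once on disk
docker system df
docker image ls
```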
