selfhosted

icanwatermyplants, in Alternative to Home Assistant for ESPHome Devices

Consider running HA in a light weight systemd-nspawn container with minimal debian. No docker, only install the repositories you need. HACS if needed. Run your own database on the side somewhere and let HA use it.

By itself HA is fairly lightweight already.
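A rough sketch of that route, under stated assumptions: run as root, with debootstrap and the systemd-container package installed; the machine name havm is a placeholder.

```shell
# Build a minimal Debian tree for the container:
debootstrap stable /var/lib/machines/havm http://deb.debian.org/debian

# Get an interactive shell inside it to install HA Core and its dependencies:
systemd-nspawn -D /var/lib/machines/havm

# Afterwards, boot it as a managed machine:
machinectl start havm
machinectl shell havm
```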

TCB13,
@TCB13@lemmy.world avatar

I was trying to go that route with LXC actually and while it seems great what about the ESPHome addon? I’m not even sure if that thing is required to use ESPHome devices or not.

indigomirage, in Question: Best UI to manage VMs and containers?

Give Portainer a try. It’s actually pretty good for getting a bird’s-eye view, and lets you manage more than one Docker server.

It’s not perfect of course.

indigomirage,

Note that if you want actual virtualization then perhaps Proxmox (not sure if it manages multiple hypervisors - I haven’t obtained something to test it on yet). Portainer is best for Docker management (it, and its client agents, run as Docker containers themselves). Don’t forget to enable web sockets if proxying.
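If you do put Portainer behind nginx, a minimal sketch of the websocket bits (the server name and port 9000 are assumptions; adjust to your setup):

```nginx
server {
    listen 443 ssl;
    server_name portainer.example.com;      # hypothetical name

    location / {
        proxy_pass http://127.0.0.1:9000;   # assumed Portainer HTTP port
        # Websocket upgrade headers - without these, console/exec
        # features tend to fail silently behind a proxy:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```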

Moonrise2473, in Question: Best UI to manage VMs and containers?

I tried portainer and it was overkill for my usage, too much overhead and too many features that I don’t need.

Right now I’m using ajenti 2, which shows memory and CPU usage for the docker containers in the web page

Krafting, in Question: Best UI to manage VMs and containers?
@Krafting@lemmy.world avatar

Portainer, and Cockpit if you want to run VMs (it also manages containers, but only with Podman)

hperrin,

Cockpit looks interesting. It’s got a lot of features I normally do with terminal commands, but the VM manager stuff looks like what I’m looking for.

brygphilomena, in Proxmox HA, Docker Swarm, Kubrenetes, or what?

Consider power line adapters instead of wifi.

Yantantethera, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@Yantantethera@lemmy.world avatar

I use them as coffee mats…

vikingtons,
@vikingtons@lemmy.world avatar

The 3.5s make for excellent coasters lol

variants,

How do you keep them from sticking onto cups

vikingtons, (edited )
@vikingtons@lemmy.world avatar

Good question, but I’ve not had that issue so far

I typically use Yeti Ramblers with a metal base on them, though I’ve set ceramic mugs down on them too and they’ve not stuck. Might depend on the drink a little?

variants,

Oh, it’s probably vacuum sealed then, so it doesn’t form condensation

vikingtons, (edited )
@vikingtons@lemmy.world avatar

Maybe but I do spill a bit every now and then. Can’t speak for the regular ceramic mugs, though that’s a bit of a rarity and they just have herbal tea

NegativeInf,

I do that with save icons!

PerogiBoi, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@PerogiBoi@lemmy.ca avatar

No but now I know what to do with my old hard drive that failed :)

lemann, in Alternative to Home Assistant for ESPHome Devices

I went with the virtual appliance when I installed Home Assistant several years ago; it turned out to be a great decision looking at how it’s architected. I only self-host the database separately, which I’ve found easier to manage.

the fact that the storage usage keeps growing

There should be a setting to reduce how long Home Assistant retains data - I removed the limit on mine, however it’s possible that on newer versions they’ve changed the default
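For reference, the retention setting lives under recorder: in configuration.yaml - a sketch with assumed values (purge_keep_days and auto_purge are the relevant options):

```yaml
# configuration.yaml
recorder:
  purge_keep_days: 10   # days of history to retain
  auto_purge: true      # nightly purge of anything older
```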

Hope you find a solution though - I think Node-RED (capable of doing dashboards on its own) combined with something else will get you part of the way there.

TCB13, (edited )
@TCB13@lemmy.world avatar

I’ve been doing this. I’m running HA under LXD (VM) and it works.


$ lxc info havm
Name: havm
Status: RUNNING
Type: virtual-machine
Architecture: x86_64
PID: 541921
Created: 2023/12/05 14:14 WET
Last Used: 2024/01/28 13:35 WET

While it works great and it was very easy to get the VM running, I would rather move to something lighter like a container. As for the storage, I just see it growing every day; from what I read it should only keep data for 10 days, yet it keeps growing. Almost 10GB for a web interface and logs from a couple of sensors, wtf?

I would be very happy with HA, really no need to move other stuff as long as things were a bit less opaque than a ready to go VM that runs 32434 daemons and containers inside it.

icanwatermyplants,

Curious, you might want to look into what’s generating your data first. It’s easy to generate data, it’s harder to only keep the data that’s useful.

TCB13,
@TCB13@lemmy.world avatar

And how do I go about that?

icanwatermyplants,

One logs into the VM and starts checking the files of course. Go from there.
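A sketch of how to do that from a shell inside the VM - the /config path is an assumption; point HA_CONFIG at wherever your installation actually lives:

```shell
# Assumption: the Home Assistant data directory is /config inside the VM.
HA_CONFIG="${HA_CONFIG:-/config}"

# Top 20 largest files/directories; the recorder database
# (home-assistant_v2.db) is the usual culprit.
du -ah "$HA_CONFIG" 2>/dev/null | sort -rh | head -n 20
```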

Oisteink, in [solved] Nginx proxy server - strange behavior

Since you can forward by IP but not by name, it sounds like a resolver issue.

tubbadu,

how can I find out more about this?

Oisteink, (edited )

On the host of the nginx reverse proxy, or in the nginx config files. Something seems to block the lookup from name to IP; since forwarding by IP works, you know the proxy itself works. Check the DNS config and the nginx config on that host.
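A sketch of those checks (myserverhostname is the name from this thread; substitute your own backend name):

```shell
# Which resolver is this machine using?
cat /etc/resolv.conf

# Does a name lookup work the way most programs do it?
# (getent uses the full nsswitch lookup path, unlike dig/host
# which query DNS servers directly)
getent hosts myserverhostname || echo "name does not resolve from here"
```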

tubbadu,

here’s the configuration file for jellyfin:


# ------------------------------------------------------------
# jellyfin.tubbadu.duckdns.org
# ------------------------------------------------------------

map $scheme $hsts_header {
    https "max-age=63072000; preload";
}

server {
    set $forward_scheme http;
    set $server         "192.168.1.13";
    set $port           8096;

    listen 80;
    listen [::]:80;

    listen 443 ssl;
    listen [::]:443 ssl;

    server_name jellyfin.tubbadu.duckdns.org;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-18/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-18/privkey.pem;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    access_log /data/logs/proxy-host-5_access.log proxy;
    error_log /data/logs/proxy-host-5_error.log warn;

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_http_version 1.1;

        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
tubbadu,

on the server host, myserverhostname correctly resolves, but if I enter the container (docker exec -it nginx-app-1 bash) it no longer works:


[root@docker-11e3869f946f:/app]# host tserver
Host tserver not found: 3(NXDOMAIN)

(I had to install dnsutils before)

it seems a nginx issue then

Oisteink,

Could also be Docker network config. Docker should by default use the host’s resolver config if there’s nothing in /etc/resolv.conf

You can also supply dns server on the docker command or in your compose file if you’re using compose.

As a last resort you can enter the server name and IP in the container’s /etc/hosts file if the IP is static. But that’s gone once you rebuild the image.

Or maybe there’s an env var on the container you use for DNS
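For the compose route mentioned above, a minimal sketch - the resolver IP and search domain are assumptions; point them at whatever serves your LAN names (often your router):

```yaml
services:
  nginx-app:
    image: nginx
    dns:
      - 192.168.1.1   # LAN resolver that knows your hostnames (assumption)
    dns_search:
      - lan           # search domain appended to bare names (assumption)
```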

tubbadu,

I found a solution: use myserverhostname.station instead of just the hostname. I really have no idea why, on the previous installation it worked well with just the hostname… ahh, whatever.

thank you very much for the help!

CalicoJack, in I want to get started with *arr apps - here are all the things I don't understand about (reverse-/)proxies and networking in order to get it set up.

If you’re only trying to use Jellyfin at home, you don’t need any reverse proxy or domain. All you need is for both devices to be on the same network, and for the Raspberry Pi to have a fixed internal IP address (through your router settings).

On the Shield, you just give the Jellyfin app that IP address and port number (10.0.0.X:8096) to connect and you’re good to go.

funkless_eck,

Even if they are in separate rooms, they just have to be on the same network?

CalicoJack,

Exactly. Doesn’t matter if they’re wired or wifi, or where they are, as long as they’re on the same network you’re fine.

Chewy7324,

Whether a device is wired or on wifi matters on some routers, because some routers have wifi and wired devices on different subnets by default. It’s unlikely, so I wouldn’t worry, unless you notice accessing it only works wired.

@CalicoJack

funkless_eck,

yes, wlan vs eth, right? And then in some providers, tun for the vpn?

jaykay,
@jaykay@lemmy.zip avatar

wlan and eth are network adapters in your Raspberry Pi, probably - not subnets. A subnet is a range of IP addresses the router can use to give out to devices. Basically, let’s assume the router/the local network has only one subnet, 192.168.1.0/24. That means the router can give out IP addresses from 192.168.1.1 to 192.168.1.254. If the router had two subnets, say A: 192.168.1.0/24 and B: 192.168.2.0/24, a device on subnet A would not be able to talk to a device on subnet B unless the router routes between them.
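The single-subnet idea can be checked with Python’s ipaddress module (a sketch; the addresses are the example values from above):

```python
import ipaddress

# One home subnet, as in the example: 192.168.1.0/24.
net = ipaddress.ip_network("192.168.1.0/24")

print(net.netmask)         # 255.255.255.0
print(net.num_addresses)   # 256 addresses in total (.0 is the network,
                           # .255 the broadcast, .1-.254 usable for hosts)

# A device on the same subnet vs. a different subnet:
print(ipaddress.ip_address("192.168.1.42") in net)   # True
print(ipaddress.ip_address("192.168.2.42") in net)   # False
```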

Either way, in my opinion you’re overcomplicating things a lot for yourself. If you only wish to watch from home, on your couch, you don’t need reverse proxies, cloudflare and all that jazz. Docker and raspberry pi is enough. I can walk you through it if you want :)

funkless_eck, (edited )

that’s a helpful explanation of subnets thank you

In the paradigm of

111.222.3.4:5/22

if “3” is subnet and “5” is port - what are the names of “4”, “222”, “111”, and “22”?

And is there ever a 000.111.222.3.4:5/22 or another add on?

jaykay, (edited )
@jaykay@lemmy.zip avatar

Oh boy we’re going deep I guess haha.

So an IP address is divided into four sections separated by dots: 123.123.123.123. Each of those sections can go from 0 to 255, so 0.0.0.0 to 255.255.255.255. Why that number? There are 256 values from 0 to 255, and 255 is the biggest number you can make out of 8 bits. (If you’re interested in binary, please look it up, this is already long haha.) If every number between the dots is made out of 8 bits, that means the whole IP address is 32 bits. It’s 32 bits cos that’s what was convenient when it was decided, basically. Makes sense?

Now, the subnets. Each network can be divided into sub networks or subnets. Subnets fall into 5 classes: ABCDE. D and E aren’t used as much so I don’t know much about them.

Class A: subnet mask 255.0.0.0. Class B: subnet mask 255.255.0.0. Class C: subnet mask 255.255.255.0.

A subnet mask determines how many bits are reserved for the network, and how many bits are used for hosts (devices). Basically, each IP address is divided into a network part and a host part. Network part is used for identifying networks and how many you can make, while host part is used for identifying hosts/devices like your phone or PC or whatever and how many can be connected.

In class A, with 255.0.0.0, the first number is reserved for the network, and the other 3 for the devices for example.

In class A you have a small amount of possible subnets but a big number of devices, and the opposite in class C.

The 24 after the slash is just a different way of writing 255.255.255.0, called CIDR notation. 255.0.0.0 is /8, 255.255.0.0 is /16, and 255.255.255.0 is /24.

So depending on the subnet class, what the numbers mean differs. Well except the port and CIDR subnet mask.

All in all, all you need to know is that your router most likely has one subnet lol
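The mask-to-CIDR correspondence can be sanity-checked with Python’s ipaddress module (a sketch; 10.0.0.0 is just a convenient base address):

```python
import ipaddress

# CIDR prefix <-> dotted subnet mask, for the three classful defaults:
for mask in ("255.0.0.0", "255.255.0.0", "255.255.255.0"):
    net = ipaddress.ip_network(f"10.0.0.0/{mask}")
    print(mask, "->", f"/{net.prefixlen}")
# 255.0.0.0 -> /8, 255.255.0.0 -> /16, 255.255.255.0 -> /24
```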

funkless_eck,

ok. I would still like to learn this stuff, so hopefully someone can come in and answer some of the questions - but it seems like, then, the challenge is just gluetun for now.
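For what it’s worth, a Gluetun compose sketch under stated assumptions - qmcgaw/gluetun is the usual image, and the Mullvad WireGuard variables shown are what its documentation describes; the key and address are placeholders you get from Mullvad:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      VPN_SERVICE_PROVIDER: mullvad
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: "<your key>"   # placeholder
      WIREGUARD_ADDRESSES: "10.64.0.2/32"   # placeholder

  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic leaves via the VPN
```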

user224, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@user224@lemmy.sdf.org avatar

I will keep the magnets if I ever get into this in the future, but not the platters. I’ll just safely destroy them and dispose of them.

So far I only had 3 laptops and no desktops. I had 0 HDD failures, since I only ever had 3 of them so far.
The oldest one is more than 17 years old 80GB 2.5" Fujitsu HDD.

SzethFriendOfNimi,
@SzethFriendOfNimi@lemmy.world avatar

The magnets are fantastic for tool mounts since they’re so strong

tburkhol,

Back in the day, I’d go through HDDs faster than systems - always needed to add storage before I could replace the CPU. I didn’t start disassembling them until they got up to the 500 MB range, but you’d often get 3 platters back then. OP must be harvesting from a whole workgroup - I’ve only got a 3cm stack and 7 drives waiting for the screwdriver.

Delphiantares, in I want to get started with *arr apps - here are all the things I don't understand about (reverse-/)proxies and networking in order to get it set up.

If you get a reverse proxy set up, all you need is ports 80 and 443; once configured, it’ll expose the services you want exposed through subdomains. Personally I’ve got a Traefik service sitting on my media server and anything I want to expose goes through it. It has the details for the connection to Cloudflare, and so long as I direct things properly both on the container side and in Traefik, it’ll run as expected. The idea is: if you go to, say, jellyfin.example.com, Cloudflare will direct that at your reverse proxy (nginx in this case), which then redirects to the right machine/container because you entered from “jellyfin”.

The VPN, Gluetun, is another container that will have the login details for your provider.

I’m still working my way through the selfhosted rabbit hole myself; I used a combination of Google and this: https://www.smarthomebeginner.com/traefik-docker-compose-guide-2022/ (the entire site, not just the specific article linked), as well as https://trash-guides.info/

funkless_eck,

Traefik

I will look into this, thank you.

XTL, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?

Yes. Got to admit mine just isn’t as big as yours, though.

Gooey0210, in Question: Best UI to manage VMs and containers?

The answer is proxmox, not portainer

hperrin,

So first, I’m not really looking to change operating systems. I’ve got my system set up the way I like it, where it closely matches the production systems I run for my company.

Second, why do you say the answer is Proxmox? What benefit does that have over other solutions that can be more easily integrated into my existing operating system?

Gooey0210,

Not many UIs can do containers and VMs

[Sorry for my not really well written reply, you really need to try different options, and in my opinion proxmox is like the only choice because of how many cool things you can do there]

Proxmox is just really good, and if you want to spin up VMs easily you will need to reshape your setup anyway

With proxmox you can do like everything with VMs, containers, etc. Not just managing only containers, or just showing status of the VMs

Also, proxmox is not really an operating system, it’s a service on top of Debian (in many cases you start installing proxmox by installing Debian)

Scrath,

Can proxmox do docker containers? Last I checked it could only do LXC

Gooey0210,

Yes it can, but not out of the box, and yeah, if you want the ui it will be that portainer again 😂

thirdBreakfast,
@thirdBreakfast@lemmy.world avatar

Yo dawg, I put most of my services in a Docker container inside their own LXC container. It used to bug me that this seems like a less than optimal use of resources, but I love the management - all the VM and containers on one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.

I see other people doing, and I’m interested in, an even more generic system (maybe Cockpit or something), but I’ve been really happy with this. If OP’s dream is managing all the containers and VMs together, I’d back having a look at Proxmox.

OminousOrange,
@OminousOrange@lemmy.ca avatar

I use Docker LXCs. Really just a Debian LXC with Docker and then Portainer as a UI. I have separate LXCs for common services. Arrs on one LXC, Nextcloud, Immich and SearXNG on another, Invidious on a third. I just separate them so I don’t need to kill all services if I need to restart or take down the LXC for whatever reason.

hperrin,

Thanks. I did check it out and it looks like it’s got some really cool benefits, like being able to cluster across two machines and take one down if it needs servicing, with zero down time.

I’m thinking about buying some rack mount servers and bringing everything I’m currently doing in the cloud for my business to on-premises servers. The one thing I was wary about was how I was going to handle hardware maintenance, and this looks like it would solve that issue nicely.

Gooey0210, (edited )

For the system itself I would recommend nixos

Some people like it, some people are against progress and they think work should be manual 🤣

I’m using nixos on all my machines, even integrating my phone into it

You can automate and replicate unbelievable stuff with it. You solve a bunch of problems by using nixos

But it’s a whole big rabbit hole, and it would take a lot of time to learn how to use it, then a lot of time to set everything up

But you could do zero downtime hardware maintenance without VMs or containers, just by using bare metal

Edit: or with VMs, containers, or k8s. Everything would be just cleaner and cooler

MangoPenguin,
@MangoPenguin@lemmy.blahaj.zone avatar

Proxmox doesn’t manage docker, it wouldn’t do anything for OP.

Gooey0210, (edited )

Portainer doesn’t manage VMs either

But at least you can do docker inside proxmox, and kinda manage it, or put something else on top of it

rsolva,
@rsolva@lemmy.world avatar

Proxmox does VMs and containers (LXC). You can run any docker / podman manager you want in a container.

Benefits of having Proxmox as the base are ZFS / snapshotting and easy setup of multiple boot drives, which is really nice when one drive inevitably fails 😏

MangoPenguin, (edited )
@MangoPenguin@lemmy.blahaj.zone avatar

Yes but Proxmox doesn’t manage docker, OP wants a webUI to see all their docker containers.

I agree running Proxmox as a base OS is the way to go, but you’ll still need Dockge, Portainer, etc to have a webUI for docker stuff.

maynarkh, in I want to get started with *arr apps - here are all the things I don't understand about (reverse-/)proxies and networking in order to get it set up.

Look, this is a large puzzle you’re trying to solve all at once. I’ll try to answer at least some of it. I’d advise you take these things step by step. DM me if you need some more help, I may have time to help you figure things out.

I paid for and installed mullvad (app) but it crashes a lot (for over a minute every 20 seconds), so it looks like I need to configure something like gluetun to do it instead.

Check the error logs and see what’s wrong with it instead. How is it crashing? Did you check stdout and stderr (use docker attach or check the compose logs)?

If I want to watch them on my TV I need to connect something to my TV that talks to the raspberry pi, so I have an NVIDIA shield with Jellyfin installed on it - but in order for the NVIDIA-Jellyfin to connect to the RaspberryPi-Jellyfin it needs to go through the internet (if this is not the case, how does one point the NVIDIA-Jellyfin at the Raspberry Pi jellyfin?)

Technically not. You can use the Jellyfin web UI to stream directly from the RPi. You may need the Shield if the RPi does not have enough resources for streaming, but I’d try it out first. Get the IP the Raspberry is listening on on your local network and put that in a web browser on a computer first. If you get the web UI and can watch stuff, then try a web browser on your TV, or cast your computer to the TV or something. As long as you have a web browser you should be fine.

First of all, is that all correct or have I misunderstood something?

You should look a bit into how the internet, DNS and IP addresses work on the public internet and private networks. You can absolutely set it up so that traffic from your local network hitting your domain never leaves your home, while if you try the same from somewhere else, you get an encrypted connection to your home. You’re a bit all over the place with these terms so it’s hard to give you a straight answer.

How does mysubdomain.mydomain.com know it’s me and not some random or bot?

If the question is how the domain routes to your IP, look up how DNS works. If you are asking how to make sure you can access your domain while others can’t, look up the topic of authentication (basically anything from a username/password to a VPN and network rules).

How do I tell Cloudflare to switch from web:443 to local:443 (assuming I’ve understood this correctly)

If I remember correctly, Cloudflare forwards HTTP/S traffic only, so don’t worry about the ports, that’s all it will do. About the domains, you need a fixed public IP address for that, and you have to give it to Cloudflare by setting a DNS A record for an IPv4 address and/or an AAAA record for an IPv6 address.

So something like this: A myhost.mydomain.com 123.234.212.45

Is this step “port forwarding” or “opening ports” or “exposing ports” or either or both?

Nope. Port forwarding is making sure that your router knows what machine should answer when something on the Internet comes knocking. So if the RPi port 8096 is “forwarded” to the router, then if something from the internet connects to the router’s 8096 port, it will get to your RPi instead of something else. Opening ports has to deal with firewalls. Firewalls drop all connections on all ports that are not open, for security reasons. By opening a port you are telling the firewall what entities outside your device can connect to a service like Jellyfin listening on that port. Exposing ports is Docker terminology, it is the same as port forwarding except instead of “moving” a port from your machine to your router you “move” a port from a container to your machine.
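In compose terms, the “exposing” step described above looks like this (the image name and port are the Jellyfin defaults assumed throughout the thread):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"   # host port : container port - traffic hitting the
                      # host's 8096 is handed to the container's 8096
```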

If my browser when accessing mysubdomain.mydomain.com is always going to port 80/443, does it need to be told it’s going to talk to cloudflare - if so how? - and does cloudflare need to be told it’s going to talk to NGINX on my local machine - if so how?

The DNS server you are hosting the domain from will propagate that info through the DNS network. Look up how DNS works for more info. If your domain is managed by Cloudflare, it should “just work”. Cloudflare knows it talks to your router by you setting up a DNS record in their UI that points to your router, where your RPi’s port should be forwarded, which directs traffic to your RPi, on which your NGINX should be listening and directing traffic to your services.

How do I tell NGINX to switch from local:443 to local:8096 (assuming I’ve understood this correctly)

Look up NGINX virtual servers and config file syntax. You need to configure a virtual server listening on 443 with a proxy_pass block to 8096.
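A minimal sketch of such a virtual server (the server name and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.mydomain.com;           # placeholder

    ssl_certificate     /etc/ssl/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;        # Jellyfin
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```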

Is there a difference between an SSL cert and a public and private key - are they three things, two things or one thing?

Yes - SSL certs are the “public keys” of an X.509 pair, while what you know as “public and private keys” are RSA or ED25519 key pairs. The former is usually used to make sure that the server you are accessing is indeed who it claims to be and not a fake copy; it’s what drives HTTPS and the little lock icon in your browser. RSA or ED25519 keys are used for authentication: instead of a username and password, you give a public key to a service, then use the private key to sign a challenge to prove it’s you. One service you might know that uses them is SSH.
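A concrete sketch of the two kinds of key material (filenames and the CN are placeholders):

```shell
# A self-signed X.509 certificate (the "SSL cert") is generated together
# with its private key; the cert is the public half you serve over HTTPS.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=myhost.mydomain.com"

# Inspect the public certificate:
openssl x509 -in server.crt -noout -subject

# By contrast, a bare ED25519 key pair as used for SSH authentication:
ssh-keygen -t ed25519 -N "" -f ./id_ed25519_demo
```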

Doesn’t a VPN add an extra step of fuckery to this and how do I tell the VPN to allow all this traffic switching without blocking it and without showing the world what I’m doing?

A VPN like Mullvad is used for your outgoing traffic. All traffic is encrypted, the reason you want a VPN is not so that others can’t see your messages, it’s so that your ISP and the other people forwarding your messages don’t know who you’re talking to (they’ll only know you’re talking to your VPN), and so that the people you’re talking to don’t know who you are (they are talking to your VPN). You need this so your ISP doesn’t see you going to pirate sites, and so that other pirates, and copyright trolls acting as pirates don’t know who you are when you talk to them and exchange files using torrents.

Gluetun just looks like a text document to me (compose.yml) - how do I know it’s actually protecting me?

I don’t know shit about Gluetun, sorry.

From nginxproxymanager.com : "Add port forwarding for port 80 and 443 to the server hosting this project. I assume this means to tell NGINX that traffic is coming in on port 80 and 443 and it should take that traffic and send it to 8096 (Jellyfin) and 5000 (ombi) - but how?

Again, look up virtual servers in NGINX configuration. You need a virtual server listening on 80 and 443 proxying traffic to 8096 and 5000, separating on hostnames I guess.

Also from that site: “Configure your domain name details to point to your home, either with a static ip or a service like DuckDNS or Amazon Route53” - I assume this is what Cloudflare is for instead of Duck or Amazon? I also assume it means "tell Cloudflare to take traffic on port 80 and 443 and send it to NGINX’s 80 and 443 as per the previous bullet) - but how?

Add a DNS A record.

funkless_eck,

thank you so much for this considered reply. I’m just stepping out now, but will check in later to go through this in depth

funkless_eck,

Check the error logs and see what’s wrong with it instead. How is it crashing? Did you check stdout and stderr (use docker attach or check the compose logs)?

“Crash” is the wrong word. The app is running, it says “Connected” for about 15-20 seconds, then it says “Internet blocked” for about 20 seconds, then it says “Reconnecting” for 30-90 seconds, repeat indefinitely.

Using the CLI for logging, it says something along the lines of “Timeout… Hyper time out”

You should look a bit into how the internet, DNS and IP addresses work on the public internet and private networks.

Do you have any recommendations on how to learn this?

Also, thank you for explaining that “configuring a domain name” is adding an A record. I’ve added TXT records and similar for Google analytics and I’ve added mail records to set up my own domain’s email before - but this is helpful, thanks.
