selfhosted


haui_lemmy, in Suddenly getting a server error on my instance

I would start at the github repo and check if that issue has been documented.

If yes, follow the instructions. If not, check your DNS, since a 502 often came from DNS in my case.

If neither reveals anything, you could open an issue in the repo, post your (sanitized) logs, and wait for answers.

The error suggests a problem with the lemmy-lemmy-ui-1 container. Maybe it needs an update or has pulled a broken update. When did you last update? Did you try restarting the stack?

Good luck.

Dave,
@Dave@lemmy.nz

If the error is with the UI, then trying a mobile app is a good place to start (since they should connect to the API directly).

solidgrue, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)
@solidgrue@lemmy.world

I’ve got HA with Frigate + USB Coral w/4 cams, FlightRadar24 receiver/feeder, ESPHome, NodeRed, InfluxDB, Mosquitto, and Zwave-JS on a refurbished Lenovo ThinkCenter M92p Tiny, rigged with an i5 3.6GHz, 8GB RAM and 500GB spindle drive. It’s almost overkill.

Frigate monitors 2 RTSP and 2 MJPEG cams (sometimes up to 3 RTSP and 5 MJPEG, depending on whether I’m away for the weekend) with hardware video conversion. FR24 monitors a USB SDR dongle tracking several hundred aircraft per hour. I live under one of the main approaches to a major US hub.

Processor sits at 10% or less most of the time, and really only spikes when I compile new binaries for the ESP32 widgets I have around the house. It uses virtually none of the available disk. It’s an awesome platform for HA for the price.

sylverstream,

Thanks for your reply! So that’s a 3rd gen Intel chip, if I kagi’d correctly? I was planning to get an 8th gen or later. Not sure though if it’s worth it; I’m not too familiar with the differences between the generations.

solidgrue,
@solidgrue@lemmy.world

I think the i5 is Ivy Bridge, but I couldn’t tell you what gen that is. My main use of HA aside from the automation is Frigate, which apparently needs the hardware AVX flags. This chip supports AVX, where my older AMD did not, so that’s why I went with it. It’s an i5-3470T, if that helps.

For an older SFF unit, it’s a beast for HA.

sylverstream,

3470 means 3rd gen. The first number is the generation. Good to know that also works.
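
As an illustrative sketch of that naming scheme (the helper function here is hypothetical, and assumes the usual Intel Core model format where everything before the last three digits of the model number is the generation):

```python
def core_generation(model: str) -> int:
    """Return the generation encoded in an Intel Core model string,
    e.g. 'i5-3470T' -> 3, 'i9-10900K' -> 10.
    The digits before the last three are the generation number."""
    digits = "".join(ch for ch in model.split("-")[-1] if ch.isdigit())
    return int(digits[:-3])  # drop the final three digits

print(core_generation("i5-3470T"))   # 3
print(core_generation("i7-8700"))    # 8
print(core_generation("i9-10900K"))  # 10
```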

lemmyvore, in Self-hosted or personal email solutions?

GoDaddy is notorious for terrible service and NameCheap has started doing some shady stuff too lately. Luckily there are other decent registrars out there. I can recommend Netim.com or INWX.de in the EU – they also provide EU-specific TLDs which American registrars don’t.

If you need more than one mailbox you can’t beat the offers from providers like PurelyMail/MXRoute/Migadu, where you pay for the storage instead of per-mailbox. I’m using Migadu because, again, they work under EU/Swiss privacy laws.

Here are some more providers if you’re interested in taking advantage of EU privacy: european-alternatives.eu/…/email-providers

You do not need to spin up your own mail service and should not. Email and DNS hosting are the most abuse-prone and easy to mess up services; always go to an established provider for these.

Are there concerns tying my accounts to a service that might go under or are some “too big to fail”?

Look into their history. Generally speaking, a provider that’s been around for a decade or more probably won’t disappear overnight; they probably have a sustainable income model and have been around the block.

That being said nothing saves even long-established providers from being acquired. This happened for example to a French service (Gandi) with over 20 years of history.

The only answer to that is to pick providers that don’t lock you into proprietary technologies and offer standard services like IMAP, and also to keep your domain+DNS and your email providers separate. This way if the email service starts hiking prices or does anything funny you can copy your email, switch your domain(s), and be with another provider the very next day.

overcast5348,

What did namecheap do? I’ve got a bunch of domains with them. 🤦‍♂️

lemmyvore,

A general reduction in service quality, increasing domain prices (double check your renewals) and there are reports of domain name sniping (where they grab names that people are looking up).

Mubelotix,
@Mubelotix@jlai.lu

Still much less bullshit than other providers. It has fewer dark patterns than OVH. I would also recommend their VPN service for being so cheap the first year.

scarilog,

.com domains recently got more expensive. Almost double the price compared to Cloudflare (which sells domains at cost).

rar,

Gandi’s case hurts me. I had been paying for years but they kept raising their prices like dragonball z power levels.

syd, in Self-hosted or personal email solutions?
@syd@lemy.lol

Yes, you need a domain for sure. But you don’t need a server for it; in fact, I don’t recommend trying to self-host a mail server.

You can use Tuta, Proton Mail, Gmail or iCloud Mail. You just need to add some DNS records to the domain to point it at the mail provider.
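
For example, with Proton Mail the records look roughly like this (a sketch only; use the exact host names and values from your provider’s setup page, including the DKIM and DMARC records omitted here):

```
; example.com zone -- illustrative values
@   MX    10  mail.protonmail.ch.
@   MX    20  mailsec.protonmail.ch.
@   TXT   "v=spf1 include:_spf.protonmail.ch ~all"
```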

SupraMario,

Cloudflare + protonmail is my setup. Works great and if you buy like 2 years it’s pretty cheap.

syd,
@syd@lemy.lol

Yeah I’m also using Proton but I will switch to Tuta because it has more features I think.

SupraMario,

I just wanted mail and privacy directed.

grepe, in Self-hosted or personal email solutions?

I tried both hosting my own mail server and using a paid mail hosting with my own domain and I advise against the former.

The reason not to roll your own mail server is that your email might go to spam at many, many common mail services. Servers and domains that don’t usually send out large amounts of email are considered suspicious by spam filters, and the process of letting other mail servers know they’re legitimate by gradually sending out email is called warming them up. It’s hard and it takes time… Also, why would you think you can do hosting better than a professional who is paid for it? Let someone else handle that.

With your own domain you are also not bound to one provider - you can change both domain registrar and your email hosting later without changing your email address.

Also, avoid using something too unusual. I went with firstname@lastname.email cause I thought it couldn’t be simpler than that. Bad idea… I can’t count how many times people have sent mail to the wrong address because such a TLD is unfamiliar. I get told by web forms regularly that my email is not a valid address, and even people who got my email written on a piece of paper have replaced the .email with .gmail.com cause “that couldn’t be right”…

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org

I get told by web forms regularly that my email is not a valid address, and even people who got my email written on a piece of paper have replaced the .email with .gmail.com cause “that couldn’t be right”…

That’s the thing that holds me back from a non-standard TLD, as much as I’d love to get a vanity domain.

I’ve got a .org I’ve had for over 20 years now. My primary email address has been on that domain for almost as long. While I don’t have problems with web-based forms, telling people my email address is a chore at best since it’s not gmail, outlook, yahoo, etc…

CosmicTurtle,

More and more services are REQUIRING a gmail/outlook/etc. account simply because bots/scammers bombard their services. It’s their cheap captcha.

I’m seeing it more and more and it infuriates me to no end.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org

I keep seeing people say this but I’ve yet to encounter it even once. I fully believe it happens with non-com/net/org TLDs but I’ve been using my .org as my daily driver for 2 decades and have never had it rejected or denied.

CosmicTurtle,

The last one I encountered was one of the AI tools. I can’t remember which one. They are popping up like fucking Starbucks now.

They required using your Gmail, Outlook, or Discord credentials.

lemmyvore,

As if a scammer can’t get a Gmail address. 😄 What does that even prove?

CosmicTurtle,

I think the point is that a scammer may have one or two. But not millions of Gmail addresses.

rar,

You mean those websites where, instead of an email input field, there are multiple horizontal stripes saying “Login with Google” and such?

I hate them, too… but I suppose it’s for the mobile crowd that doesn’t distinguish between SMS, FB/WhatsApp messages, and email at all.

I wonder if all those gmail accounts will be seen like yahoo addresses one day.

douglasg14b,
@douglasg14b@lemmy.world

Yeah, I use firstname@thelastnames.co

And EVERY DAMN PERSON corrects .co to .com

Unfortunately the .com and .net are both taken.

shrugal,

You can avoid the warmup by using an SMTP relay, and you can just use the one from your DNS provider if you’re not planning to send hundreds of mails per day.
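
For scripted mail, using a relay just means handing the message to the relay’s authenticated submission port instead of delivering it directly, so its reputation does the warmup for you. A minimal Python sketch (the host and credentials are placeholders, not a real provider):

```python
import smtplib
from email.message import EmailMessage

def send_via_relay(msg: EmailMessage,
                   host: str = "smtp.example-dns-provider.com",  # hypothetical relay
                   port: int = 587,
                   user: str = "relay-user",
                   password: str = "relay-pass") -> None:
    """Submit the message to an authenticated SMTP relay, which
    forwards it to the destination under its own reputation."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()            # encrypt before sending credentials
        smtp.login(user, password)
        smtp.send_message(msg)

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.org"
msg["Subject"] = "hello"
msg.set_content("Sent through the relay, not directly.")
# send_via_relay(msg)  # requires real relay credentials
```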

Moonrise2473, in Why docker

About the root problem: as of now, new installs encourage running everything as a limited user. The program runs as root only inside the container, so in order to escape from it an attacker would need a double zero-day exploit (one to get RCE in the container, one to escape the container).

The alternative to “don’t really know what’s in the image” usually is: “just download this minified and incomprehensible trustmeimtotallynotavirus.sh script and run it as root”. That requires much more trust than a container you can delete without a trace in literally seconds.

If the program that you want to run requires Python modules or Node modules, it will make much more of a mess on the system than a container.

Downgrading to a previous version (or a beta preview) of the app you’re running due to bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.
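
In compose terms, the downgrade is a one-line change (the image name and tag here are made up for illustration):

```yaml
services:
  app:
    # pin an explicit version instead of :latest;
    # to downgrade, change this tag and run `docker compose up -d` again
    image: ghcr.io/example/app:1.4.2
```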

Finally, migrating to a fresh server is just docker compose down, then rsync to the new server, then docker compose up -d. No praying to ten different gods because after three years you forgot how you installed the app on bare metal.

Docker is perfect for ordinary people like us self-hosting at home; the professionals at work use Kubernetes.

itsnotits,

the program is run* as root

NaibofTabr, in In search for free domain I got one but some questions
  1. Assuming that you mean that you are using the domain name to point to services which are at a residential, dynamic IP address, you will need to set up a Dynamic DNS service.
  2. If a product is free, you’re the product.
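
Most Dynamic DNS services expose a simple HTTP update endpoint you hit whenever your public IP changes. A sketch of the idea (the base URL and parameter names are hypothetical; real providers document their own):

```python
from urllib.parse import urlencode

def build_update_url(hostname: str, ip: str, token: str,
                     base: str = "https://ddns.example.net/update") -> str:
    """Build the GET request most DDNS services expect."""
    return base + "?" + urlencode({"hostname": hostname, "myip": ip, "token": token})

url = build_update_url("home.example.com", "203.0.113.7", "secret-token")
print(url)
# A cron job would fetch this URL (e.g. with urllib.request.urlopen)
# whenever the detected public IP differs from the last one sent.
```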
tapdattl,

While I normally agree on #2, it doesn’t really apply to Tailscale. Tailscale isn’t completely free; they have a free tier to generate business, but it’s limited to 3 users per tailnet. Also, it’s cryptographically impossible for them to snoop on your traffic.

NaibofTabr,

I was referring to OP’s use of IPQuick. This isn’t a service I’m familiar with and it doesn’t seem to be affiliated with any organization that I’m familiar with either.

scrchngwsl, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)

Not sure exactly what you’re asking but I have a Coral mini pcie with frigate and it works great. Hardly any cpu and tiny power consumption.

sylverstream,

Okay thanks!

Vendetta9076, in Running immich, HA and Frigate on a RPi4 with Coral or on a HP Prodesk 700 g4 (Intel 8th gen)
@Vendetta9076@sh.itjust.works

Migrating is your best move. Hardware go brr

sylverstream,

Okay thanks!

RootBeerGuy, in Pinry, the open-source tiling image board
@RootBeerGuy@discuss.tchncs.de

Maybe that’s what you mean in your post, but development seems to have stopped 2 years ago. Are there any open issues? Or maybe an active fork?

perishthethought,

Hmmm, I hadn’t noticed that before but you’re right. There are open issues and also pull requests which were never merged.

lemann, in How do you monitor your servers / VPS:es?

I used to pass all the data through to Home Assistant and show it on some dashboards, but I decided to move over to Zabbix.

Works well, but it’s quite full-featured, maybe more so than necessary for a self-hoster. I made a media-type integration for my annunciator system so I hear about issues with the servers, as well as updates on things, so I don’t really need to check manually. Also a custom SMART template that populates each disk’s physical location/bay (as the built-in one only reports SMART data).

It’s notified me of a few hardware issues that would have gone unnoticed on my previous system, and helped with diagnosing others. A lot of the sensors may seem useless, but trust me, once they flag up you should 100% check on your hardware. Hard drives losing power during high activity because of loose connections, and a CPU fan failure to name two.

It has a really steep learning curve though, so I’m not sure how much I can recommend it over something like Grafana+Prometheus – a combo I haven’t used, but it looks equally comprehensive, as long as you check your dashboard regularly.

Just wish there were more android apps

SeeJayEmm, in What is your favourite selfhosted wiki software and why?
@SeeJayEmm@lemmy.procrastinati.org

I’ve been using Wiki.js since I asked this question a few months ago. I’ve been pretty happy with it. It stores data in text files using Markdown and can synchronize with a number of backends. I’ve got mine syncing to a private GitHub repo.

CowsLookLikeMaps,

Thanks! I used the search function but it wasn’t showing up for some reason. I’ll link to it in the OP.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org

Np. I wasn’t trying to imply anything. I asked a different question, it just had overlap so I thought you might find it useful.

CowsLookLikeMaps,

Likewise :)

DeltaTangoLima, in Why docker
@DeltaTangoLima@reddrefuge.com

To answer each question:

  • You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
  • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
  • It’s the opposite - you don’t really need to care about docker networks, unless you have an explicit need to contain a given container’s traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.
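
The bind-mount part of that, in compose syntax, is just this (the paths are illustrative):

```yaml
services:
  app:
    image: ghcr.io/example/app:latest
    volumes:
      # host path : container path : options
      - /srv/app/config:/config:ro   # read-only where required
      - /srv/app/data:/data          # read-write
```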

I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

Why? I like to play.

Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos… hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

lemmyvore, (edited )

Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.

There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it’s a good thing, but the main motivator was not “what if someone breaks out of the container” (which doesn’t necessarily mean they’d get all the privileges of the running user on the host and anyway it would require a kernel exploit, which is a pretty tall order). There are many benefits to making running containers as easy as running any kind of process on a Linux host. And it also enabled some cool new features like the ability to run only partial layers of a container, or nested containers.

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com

Yep, all true. I was oversimplifying in my explanation, but you’re right. There’s a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

TCB13, in Why docker
@TCB13@lemmy.world

Why docker?

It’s all about companies re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer.

We now have a generation of developers that doesn’t understand the basics of their tech stack: networking, DNS, how to deploy a simple thing onto a server that doesn’t use Docker or some 3rd-party cloud deploy-from-GitHub service.

oh but the underlying technologies aren’t proprietary

True, but this Docker hype invariably and inevitably leads people down a path that will then require some proprietary solution or dependency somewhere, one that is only required because the “new” technology alone doesn’t deliver as others did in the past. In this particular case it’s the Docker Hub / Kubernetes BS and all the cloud garbage around it.

oh but there are alternatives like podman

It doesn’t really matter if there are truly open-source, open ecosystems of containerization technologies, because in the end people/companies will pick the proprietary/closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/rkt/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

lots of mess in the system (mounts, fake networks, rules…)

Yes, a total mess of devices that’s hard to audit, constant RAM waste, and worst of all it isn’t as easy to change a Docker image / develop things as it used to be.

Shimitar,

Is all this true? It’s a perspective I hadn’t considered, but it feels true; I don’t know if it is, though.

lemmyvore,

It’s not true. I mean sure there are companies that try to lock you into their platforms but there’s no grand conspiracy of the lizard people the way OP makes it sound.

Different people want different things from software. Professionals may prefer rootless podman or whatever but a home user probably doesn’t have the same requirements and the same high bar. They can make do with regular docker or with running things on the metal. It’s up to each person to evaluate what’s best for them. There’s no “One True Way” of hosting software services.

scrubbles,
@scrubbles@poptalk.scrubbles.tech

This is a really bad take. I’m all for OSS, but that doesn’t mean that there isn’t value with things like Docker.

Yes, developers know less about infra. I’d argue that can be a good thing. I don’t need my devs to understand VLANs, the nuances of DNS, or any of that. I need them to code, and code well. That’s why we have devops/infra people. If my devs do know it? Awesome, but Docker and containerization allow them to focus on code and let my ops teams figure out how they want to put it in production.

As for OSS - sure, someone can come along and make an OSS solution. Until then - I don’t really care. Same thing with cloud providers. It’s all well and good to have opinions about OSS, but when it comes to companies being able to push code quickly and scalably, then yeah I’m hiring the ops team who knows kubernetes and containerization vs someone who’s going to spend weeks trying to spin up bare iron machines.

MartianSands, in Why docker

I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

As for your user & permissions concern, are you aware that docker these days can be configured to map “root” in the container to a different user? Personally I prefer to use podman though, which doesn’t have that problem to begin with
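
That mapping is Docker’s userns-remap setting; roughly like this (the “default” value makes Docker create and use a dockremap user; the ID range is illustrative):

```
# /etc/docker/daemon.json
{ "userns-remap": "default" }

# Docker then creates a "dockremap" user; /etc/subuid and /etc/subgid
# need a subordinate ID range for it, e.g.:
#   dockremap:100000:65536
```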

micka190, (edited )

I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. I didn’t have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services in my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables, and ran docker compose up -d.

If my server ever crashes, I can just copy it over and start from scratch.

aniki,

I really need to get into Woodpecker.
