You need to change the Heimdall URLs to the Tailscale URLs. I’ll update this post soon.
My old setup has OpenMediaVault as the base system.
I installed Tailscale directly on that base system (the OS).
My old IP links in Heimdall stopped working.
From memory… you need to go to the Tailscale website dashboard. IIRC, by default each device just has a Tailscale IP (some random-looking numbers) as its URL. The other option is to use their MagicDNS, which gives you random words as a URL instead. Either way, you will need to edit your Heimdall links. So if a link is currently 192.168.1.1:8096, you need to change it to buffalo-cow.tailscale:8096 (or something to that effect).
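If you have shell access on the host, the Tailscale CLI will also tell you the name and IP directly (both are standard tailscale commands):

```
tailscale status   # lists the devices in your tailnet with their MagicDNS names and IPs
tailscale ip -4    # prints this machine's Tailscale IPv4 address
```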
What I did was just duplicate my current Heimdall container and use a different port number… then change all the URLs in that copy to the Tailscale URLs.
Your current containers should remain untouched, aside from the duplicated Heimdall one with the corrected app URLs.
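If it helps, here’s a rough sketch of what that duplicated instance could look like with the linuxserver image (the container name, host ports, IDs, and config path are just examples, adjust to your setup):

```
docker run -d \
  --name heimdall-tailscale \
  -e PUID=1000 -e PGID=1000 \
  -p 8081:80 -p 8444:443 \
  -v /srv/appdata/heimdall-tailscale:/config \
  lscr.io/linuxserver/heimdall:latest
```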
Except that the services show as “unable to open” and “other”, even from the Tailscale admin panel. The top two services, Heimdall and Portainer, are the only ones with an “open” link.
Edit: if I stop Heimdall in Docker, the situation is the same, except there’s no start page.
Hmm… I’m not sure. If you’re making it to Heimdall and Portainer, I don’t see why the other containers wouldn’t work. I just remember having to redo my Heimdall links.
Is Tailscale installed on the base operating system?
OP, here’s a troubleshooting approach i would take (quick command sketch after the list):
ensure services can be reached locally, thus eliminating tailscale as a variable. test on the host itself as well as another device on the same network.
attempt connecting, with tailscale enabled, to the services directly. meaning, go to the host’s tailscale IP:port in a browser and NOT through heimdall
if the above works, then it’s an issue with heimdall. edit the config as previously mentioned to link the services to the host’s tailscale IP:port, or have two instances of heimdall - one for local and one for remote
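something like this, run from the relevant machines (ports and addresses are just examples, swap in your own):

```
# 1. on the host itself
curl -I http://localhost:8096
# 2. from another device on the same LAN
curl -I http://192.168.1.50:8096
# 3. from a tailscale-connected device, hitting the host's tailscale IP or MagicDNS name directly
curl -I http://<tailscale-ip-or-magicdns-name>:8096
```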
I think I figured it out, just have to implement the fix. I think the problem is the lack of 443 ports published by the containers. Looks like I may be able to modify the ports easily in Portainer.
one of the benefits of things like docker is creating a very lightweight configuration, and keeping it separate from your data.
i’ve set things up so i only need to rsync my data and configs (rough sketch below). everything else can be rebuilt. i would classify this as ‘disaster recovery’.
some people reeeeally want that old school, bare-metal restore, which i have to admit i stopped attempting years ago. i don’t need the ‘high availability’ of entire system imaging for my personal shit.
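for reference, a minimal sketch of the kind of script i mean (paths and destination are hypothetical, adjust to whatever you actually need backed up):

```
#!/bin/sh
# back up /etc, /home and docker configs to a remote host in one rsync call;
# run it from cron, e.g.  0 3 * * * /usr/local/bin/backup.sh
DEST="backupuser@backuphost:/backups/$(hostname)"
rsync -aR --delete /etc /home /srv/docker/configs "$DEST"
```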
Do you have tips for backing up multiple locations? I also have non-Docker configs to back up in /etc and /home. How do you do it? Just multiple rsync commands in a shell script that cron executes periodically, or is there a way to back up multiple folders with one command?
Docker can be run rootless. Podman is rootless by default.
I build certain containers from scratch. Very popular FOSS software can be trusted, but if you’re that paranoid, you should probably be running the bare minimum of software in the first place.
It’s a mess if you’re not used to it. But yes, normal Unix networking is somewhat simpler (like someone mentioned, LXC containers can be a decent idea). Well, you’ll realise that Docker is not really top dog in terms of complexity once you start playing with the big boys like full-fledged k8s.
I don’t know, maybe?
But I strongly recommend having your own domain name.
As long as you do nothing illegal, when you own a domain name you have legal recourse to keep it. That’s not the case for an email service like Gmail, which could ban you for no reason tomorrow, and you would have no recourse to get your email address back.
It’s a few euros per year, and you can share the cost with your family: take a domain name with your last name, which will let your whole family have firstname@lastname.yourcountrytld.
I just looked for my lastname, it’s around $10 per year.
I’ll repeat this again: it means you will own this domain name, you have legal recourse, and big companies won’t be able to take your email address from you.
Otherwise, use DuckDNS if you really don’t want to pay anything.
You just made the mistake of saying which domain name you’re going to take; someone may buy it before you do in order to extort money from you.
It probably won’t happen but…
That reminds me of my Linux server teacher in university. We were to buy a domain name from Namecheap or Gandi during class with some free credits, and the teacher was recommending lastname[dot]com if that was available.
I happened to say aloud “yep, mylastname[dot]com is available” and he quickly shushed me as if I had named Voldemort aloud in Hogwarts, telling me that saying it out loud is a really bad move… lol
I don’t want to be rude but if you can’t afford a domain you probably shouldn’t be hosting a fediverse server.
Honestly do you even need to expose services to the internet? Internet exposure is dangerous and is not necessary for 95% of things. You can use a mesh VPN like netbird or Tailscale if you need remote access.
This isn’t me trying to offend you; I just think it would be wise to reduce the scope of your projects.
Do you have any particular way of organizing the links themselves? I’ve moved to hosting all my bookmarks in Obsidian as well and am curious as to how others go about it
I treat links like atomic notes. I add as much detail as I feel like to each link, and sometimes I go back and add tags and notes. Then I have an exceptionally poor process that attempts to go back to each link, get the ArchiveBox archive, and use Python to grab the article text (I tried newspaper3k at first, but it’s unmaintained, so I moved to readability). It then sticks the resulting article text into the note.
Honestly, it’s a mess, and I really haven’t figured out how to link things together very well, but for now it’s my little disaster of a solution.
Use the Nextcloud AIO mastercontainer and set up Joplin with Nextcloud sync (which is WebDAV). Use the built-in backup function in the Nextcloud AIO container to back up Nextcloud and the files it contains, which include your Joplin notes (and anything else you use Nextcloud for).
I even use Nextcloud’s GPodderSync app to keep my podcast subs/progress in sync from AntennaPod.
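For anyone who hasn’t set it up yet, the mastercontainer is started with roughly the command from the AIO README (double-check the README for the current flags; the backup settings then live in the AIO web interface):

```
sudo docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 --publish 8080:8080 --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```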
I went to the queue and nothing was there; only one out of my 15 trackers was down.
I saw somewhere you can make the software look for seasons by navigating to the show and clicking the magnifying glass next to it, and now it’s added a bunch of episodes to the queue.
I’ll have to dig through the log file because now it’s downloading hundreds of episodes so the log got all thicc on me
Any way to make it prefer whole seasons, though? I’ve got 146 torrents running now, lol
It depends on whether a whole-season torrent exists or not. If Sonarr can identify one that’s a whole season, it should download that when you search at the season level. If you’ve searched one episode at a time, you’ll get single episodes.
You can do an interactive search and, IIRC, specify a full season during that search.
You may need to play around with quality settings (or trackers) if you notice that it never downloads season packs.
Also when you add a new show, at the bottom of the window, there should be a checkbox asking you whether or not you want it to automatically search for missing episodes, so be sure that’s checked.
The magnifying glass next to each season header will automatically search for season packs and pick a download for you. The person icon will do it interactively, where you see the results and select which one(s) you want to download.
This is the case across Sonarr. The magnifying glass at the top of a series will auto-search for all missing, monitored episodes. The same applies at the individual episode level, but the person icon does it interactively, in case you want to select the specific release you want to download.
Start small. Find good used hardware first before thinking what services to run. I would start with an old desktop.
Self-hosting is a journey, not a destination. No matter what you buy you'll probably need to buy new hard drives. Used hard drives are a bit of a gamble.
Where do people buy used systems in Denmark? Show us a few things you're interested in and people can give you recommendations.
Also, instead of PhotoPrism, I would suggest Immich. I was a huge supporter of PhotoPrism for years (I even donated money) but their development is too slow. Immich is way faster and has an Android app. Anyway, give it a look.
I think 8 GB of RAM is sufficient for all those services. I run them all with Yunohost and I rarely get over 4 GB RAM used.
dba.dk is a pretty popular site for buying used stuff in Denmark, but for electronics I usually go on eBay and sort by EU only (IIRC they removed that option so now the results are tainted with lots of UK gear that’ll be hit with import taxes).
Both deals sound amazing to me, but get 8GB or prepare for RAM upgrade. 4GB could be enough for what you listed there, but you might find more services to run in the near future 🤪
I think those tiny PCs are perfect if you don’t need more SATA ports. It’s hard to beat them at that low price.
I’ve been doing Linux server administration for 20 years now. You’ll always have to duckduckgo things; you’ll never keep it all in your head, even for a single server with a handful of services. Docker and containers really aren’t too hard. Just start small and build from there. If you can learn how the chroot command works, you’ve pretty much learned Docker. It’s just chroot with more features.
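To make that analogy concrete, roughly (the image and paths here are just examples):

```
# build a minimal root filesystem and "contain" a shell inside it
sudo debootstrap stable /srv/rootfs
sudo chroot /srv/rootfs /bin/bash

# docker run is the same basic idea, plus namespaces, cgroups,
# layered images, and its own networking
docker run -it --rm debian:stable bash
```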
Yep same here. Professional IT for over 25 years. Nobody knows everything. It’s ok to fail. Just keep swimming. And when you do get something working…. that high is unbelievable. It’s like a drug addiction and will drive you to do more and more. Good luck!!!
Dude, it’s like you’re reading my mind. I’ve installed Nextcloud 4 different times, the most recent being on Docker Desktop in Win11. I’ve resorted to using ChatGPT to help me with the commands. LITERALLY EVERY STEP RESULTS IN AN ERROR. The Collabora office suite (necessary to view or edit cloud docs without downloading them) WILL NOT DOWNLOAD. The “php -d memory_limit=512M occ app:install richdocumentscode” command that ChatGPT and Nextcloud suggest is not recognized by the terminal. You can’t just download Collabora, cuz fuck you, I guess, and you can’t access Docker’s actual file system from Windows Explorer.
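(The only hint I’ve found is that occ is apparently supposed to be run inside the Nextcloud container rather than in a plain terminal, something like the line below, where “nextcloud” would be whatever the container is actually named. Haven’t gotten it to behave yet.)

```
docker exec -u www-data nextcloud php -d memory_limit=512M occ app:install richdocumentscode
```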
I’ve typed nonsense into various black screens for upward of 20 hours now, and nextcloud is “working” locally. I can access my giant hard drive from my android nextcloud app, but it’s SLOW AS FUCK.
I can’t imagine how many man-hours it would take to open the server to the internet. Makes me want to fucking barf just thinking about it.
I’ve been fucking with Linux since 2005 and have yet to get a single thing to work correctly. I guess I’m the only one who thinks a (mostly) invisible file system in incomprehensible, repetitive folders, driven by complete-nonsense commands, might not be the best way to operate a computer system.
I’m really frustrated if you can’t tell.
On another topic, trying to get Ollama to run on my Lubuntu VM was also impossible. I guess if everyone knew it was going to force you to somehow retroactively configure every motherfucking aspect of the install nobody would bother. You can sudo all day and it still denies me permission to do things LISTED IN THE MOTHERFUCKING DOCUMENTATION.
Is this all just low-effort poorf** bullshit that doesn’t actually work?
The problem child for me right now is a game built in node.js that I’m trying to host/fix. It’s lagging at random with very little reason, crashing in new and interesting ways every day, and resisting almost all attempts at instrumentation & debugging. To the point most things in DevTools just lock it up full stop. And it’s not compatible with most APMs because most of the traffic occurs over websockets. (I had Datadog working, but all it was saying was most of the CPU time is being spent on garbage collection at the time things go wonky–couldn’t get it narrowed down, and I’ve tried many different GC settings that ultimately didn’t help)
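For reference, these are the kinds of GC/inspection flags I’ve been poking at (server.js stands in for the actual entry point):

```
node --trace-gc --max-old-space-size=4096 server.js   # log every GC pause and give the heap more room
node --inspect server.js                              # expose the DevTools/inspector protocol
```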
I haven’t had any major problems with Nextcloud lately, despite the fragile way in which I’ve installed it at work (Nextcloud and MariaDB both in Kubernetes). It occasionally gets stuck in maintenance mode after an update, because I’m not giving it enough time to run the update and it restarts the container and I haven’t given enough thought to what it’d take to increase that time. That’s about it. Early on I did have a little trouble maintaining it because of some problems with the storage, or the database container deciding to start over and wipe the volume, but nothing my backups couldn’t handle.
I have a hell of a time getting the email to stay working, but that’s not necessarily a Nextcloud problem; that’s a Microsoft-being-weird-about-email problem (according to them, it’s time to let go of ancient apps that can’t handle OAuth2; the Nextcloud mailer doesn’t support it, same with several other applications we’re running, so we have to do some weird email proxy stuff).
I am not surprised to hear some of the stories in this thread, though. Nextcloud’s doing a lot of stuff. Lots of failure points.