As in you upgraded from a previous Lemmy version? More than likely your database is migrating, and that can take a while: ~30 minutes or more depending on your server.
You need to change the Heimdall URLs to the Tailscale URLs. I’ll update this post soon.
My old setup has OpenMediaVault as the base system.
I installed Tailscale directly to that base system (the OS).
My old IP links in Heimdall stopped working.
From memory… you need to go to the Tailscale website dashboard. IIRC, by default you have some random numbers as your Tailscale URL. The other option is to use their MagicDNS, which gives you random words as a URL. Either way you will need to edit your Heimdall links. So if it’s currently 192.168.1.1:8096, you need to change it to buffalo-cow.tailscale:8096. (Or something to that effect.)
What I did was just duplicate my current Heimdall and use a different port number… then change all the URLs to the Tailscale URLs.
Your current containers should remain untouched, aside from the Heimdall one with the corrected app URLs.
Except that the services are “unable to open” and “other”, even from the Tailscale admin panel. The top two services, Heimdall and Portainer, are the only ones with an “open” link.
Edit: if I stop Heimdall in Docker, the situation is the same, except there’s no start page.
Hmm… I’m not sure. If you’re making it to Heimdall and Portainer, I don’t see why the other containers wouldn’t work. I just remember having to redo my Heimdall links.
Is tailscale installed on the base operating system?
OP, here’s a troubleshooting approach I would take:
Ensure services can be reached locally, thus eliminating Tailscale as a variable. Test on the host itself as well as on another device on the same network.
Attempt connecting, with Tailscale enabled, to the services directly. Meaning, go to the host’s Tailscale IP:port in a browser and NOT through Heimdall.
If the above works, then it’s an issue with Heimdall. Edit the config as previously mentioned to link the services to the host’s Tailscale IP:port, or run two instances of Heimdall: one for local and one for remote.
I think I figured it out, just have to implement the fix. I think the problem is that the containers don’t publish port 443. Looks like I may be able to modify the ports easily in Portainer.
I’ve not thought about nor worried about wear and tear. I did a search but didn’t find anything. Are you just being cautious? Or perhaps you only access files occasionally?
Either way, you may want to create a bash alias in your .bashrc file so that you can type a simple command like mountnas or ‘nas’, and you might have another to run the umount command to unmount it.
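Something like this in your ~/.bashrc would do it. A minimal sketch, assuming a CIFS/SMB share; the share path, mount point, and credentials file are placeholders for your setup:

```bash
# Requires cifs-utils; //192.168.1.50/share and /mnt/nas are hypothetical
alias mountnas='sudo mount -t cifs //192.168.1.50/share /mnt/nas -o credentials=/home/me/.smbcredentials,uid=$(id -u),gid=$(id -g)'
alias umountnas='sudo umount /mnt/nas'
```

Then `mountnas` attaches the share on demand and `umountnas` detaches it when you’re done.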
Since my NAS runs my camera recordings and backups and some containers, I figure wear from mounting conveniently shouldn’t be an issue…
You guessed right: I indeed use those files on my computer very occasionally, and I’d rather make a shortcut/alias (like you rightly suggested) than mount the share at every boot. True, if you have quality disks (which are getting more difficult to find nowadays) you shouldn’t be worried about wear.
On a side note I could do my tag editing just fine, thanks again for your help!
Have you looked at using the Funnel feature in Tailscale, instead of port mapping? This gets external traffic onto your Tailscale network (for anyone who doesn’t have Tailscale) for specific resources, courtesy of Tailscale servers.
If you’re just going to open ports to the world, Tailscale isn’t really necessary (it’s useful for you and anyone on TS, since you can use the Serve feature to permit other Tailscale networks to have access to specific resources).
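For example, something along these lines; the exact CLI syntax has changed between Tailscale releases, so check `tailscale serve --help` and `tailscale funnel --help` on your version (port 8096 here is just an example):

```bash
# Share a local service with devices on your own tailnet only
sudo tailscale serve --bg 8096

# Expose the same local service to the public internet via Tailscale's relays
sudo tailscale funnel --bg 8096
```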
This sounds like exactly what I need. If I wanted to share my Linux Distros share with my dad, he wouldn’t need to install tailscale and feck with all that?
If you want cheap new drives check out shucks.top.
You can get used enterprise drives on eBay if you want to go that way. Look for a seller with lots of sales, a good rating, and a reasonable return policy.
I have been looking at that as an option, just feel a little hesitant to buy used drives. But if the wise gentlemen of Lemmy recommend it, how bad can it be?
Just weigh your risks. Old drives can fail early, and enterprise drives consume more power. Old drives are probably not a good fit for mirrors or RAID5. RAID6 and a spare HDD on the shelf may save your data one day. It is a lottery.
Don’t buy used drives if you don’t know how to check them, can’t afford to waste the money, and/or aren’t buying from somewhere with an excellent return policy.
I would suggest running nginx as a reverse proxy (I prefer avoiding a container for it, as it’s easier to manage) and then having your services in whatever medium you prefer.
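As a minimal sketch of what that looks like on the host (the hostname `myapp.example.com` and backend port 8096 are placeholders, and this assumes a Debian-style sites-available layout):

```bash
# Write a reverse-proxy vhost, then enable it and reload nginx
sudo tee /etc/nginx/sites-available/myapp.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:8096;  # the backing service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/myapp.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```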
Not an expert, but basically: you port forward WireGuard’s port 51820 to your server, install the WireGuard server, create client(s), load the QR code (or config) on your Android device/laptop, and you are set. Pi-hole DNS and everything else should work just like when you are on your home wifi.
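Roughly, the server side looks like this. A sketch only, assuming a Debian-ish system with wg-quick; all keys and addresses are placeholders:

```bash
# Install WireGuard and a QR tool, then generate the server keypair
sudo apt install wireguard qrencode
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal server config; one [Peer] block per client
sudo tee /etc/wireguard/wg0.conf > /dev/null <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.0.0.2/32
EOF

sudo systemctl enable --now wg-quick@wg0

# Render a client config (assembled separately with the client's keys)
# as a QR code for the Android app to scan
qrencode -t ansiutf8 < client.conf
```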
You can leave your CF for public access, but do you really need to port forward 80 and 443 if you are using CF tunnels? (I thought you don’t, but I’ve never used CF. It feels safer to have CF tunnels if you don’t need to port forward, but then you have a middleman you have to trust.)
I went with a Pi running Pi-hole. I got it as a project where the tool is the project. But it’s essential infrastructure now, and I don’t want to mess with it in case I break it. I’m an idiot with a poor history with Pi guides so far, so I will break it. It’s running the ad blocking fine, and I assume it’s doing the tracking and malware blocking fine too.
Sadly, that’s where I leave the project for now, I had intended to give it a HDD and some… other… software but I really don’t want to break it. I tried convincing the better half that I obviously need to N+1 but she wisely did not see reason.
If you want to try setting it up in high availability with failover, give me a poke. And until then - go to Teleporter in the settings, and download the backup. You can restore from there.
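You can also script that backup. On Pi-hole v5 the Teleporter archive can be created from the CLI; the flag may differ on newer versions, so check your version’s docs:

```bash
# Creates pi-hole-<hostname>-teleporter_<date>.tar.gz in the current directory
pihole -a teleporter

# Copy it off-box somewhere safe (hypothetical destination)
scp pi-hole-*-teleporter_*.tar.gz user@backup-host:~/pihole-backups/
```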
One thing worth saying is this: you can grab a cheap refurbished SSD (the smaller, the better), check its SMART data for any red flags, and attach it to the Pi as the OS disk. It will be much more reliable than an SD card, but overkill if you only run Pi-hole on the box. Alternatively, look into log2ram; it keeps your SD card alive for longer :D but back up first!
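Checking a drive’s SMART data is quick with smartmontools; `/dev/sda` below is whatever the SSD shows up as on your Pi:

```bash
sudo apt install smartmontools

# Full report: look at reallocated/pending sectors, power-on hours,
# and the overall health self-assessment
sudo smartctl -a /dev/sda

# Optionally run an extended self-test and check the results later
sudo smartctl -t long /dev/sda
```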
Thanks. I already have log2ram running to prolong the life of the SD. My planned disaster recovery is a spare SD, already set up and taped to the box, ready to swap and reboot in case of emergency. SD cards are cheap, so chucking <£10 at the setup once in a while is no big thing. A fresh install on the new SD lets me improve on what I’ve already done (for example, on the new SD I’ll run DietPi instead of Raspbian) and reinforce skills. Less time efficient, but that’s no matter when the box is working and it’s a hobby. I can then keep the old SD card taped inside the case as a physical backup. Perhaps more expensive in the long run, but an SD card taped to the inside of the case with simple instructions is an easy sell to the fiancée.
My experience with guides has shaken my confidence quite a bit. Which is fine, I’ll get over myself, and the point is to learn, so me hitting snags is a good thing. But until I have a functioning backup I’m not going to be fucking with it. Facebook cannot go down on account of my education.
But if I may, I have one question: a bunch of recommendations have the setup “segregated” (I dunno the word) in Docker and Portainer, but I don’t understand the rationale. I wasn’t intending on doing this, instead opting to install Pi-hole, log2ram, UFW, and the… other… software directly on the OS for simplicity. Why would one set up Pi-hole et al. in containers instead of directly?
My current setup is Raspbian running Pi-hole as the ad, tracker, and malware blocker plus DHCP server (the ISP router is a Sky2 box, so no IP or DNS customisation), log2ram, and Uncomplicated Firewall (UFW).
So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.
The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again, “What did I need to do to install this again? Run some random install.sh script off a GitHub URL?”
Networking with Docker is a bit hit and miss, but the big win is that you can have whatever software running on any port inside the container and expose it on another port on the host. E.g. two apps both run on port 8080 natively, so one of them would fail to start because the port is already taken. You can keep them running on their preferred ports inside their containers, but expose one on 18080 and the other on 19080 instead.
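A sketch with two hypothetical images, both listening on 8080 inside their containers but mapped to different host ports:

```bash
# -p HOST:CONTAINER remaps each app onto a distinct host port;
# image names and (pinned) tags are illustrative
docker run -d --name app-one -p 18080:8080 example/app-one:1.4.2
docker run -d --name app-two -p 19080:8080 example/app-two:2.0.1
```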
You keep your host simple and free of installed software and packages. This is less of a problem with apps that ship as native executables, but there are languages out there that require you to install a runtime to be able to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I’m going into too much detail already).
Basically, I have a very simple host setup with only a few packages installed. Then I remotely configure and start up my containers, expose ports, etc. I can cleanly define where my configuration lives, back up only that particular folder, and keep the rest of the setup easy to redeploy.
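As a sketch of that layout (paths and the pinned tag are placeholders), keeping all container state under one host folder makes the backup a one-liner:

```bash
# All state lives under /srv/appdata; the container itself is disposable
docker run -d --name pihole \
  -v /srv/appdata/pihole:/etc/pihole \
  -p 53:53/udp -p 53:53/tcp -p 8053:80 \
  pihole/pihole:2024.07.0

# Backing up "everything that matters" is then just one directory
tar czf appdata-$(date +%F).tar.gz /srv/appdata
```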
I have nothing to add, and an upvote isn’t enough. Truly, thank you for your time, there’s a lot to think about.
I think for this initial iteration I’m going to direct install in the name of keeping it simple. Next go around I’ll try containerising, just to learn if nothing else. If I outgrow the Pi 4, they’ll be good skills to have.
Any reason the VPN can’t stay as-is? Unless you don’t want it on the unraid box at all anymore. But going to unraid over VPN then out the rest of the network from there is a perfectly valid use case.
Well, I didn’t realize that was an option, to be honest, lol. I am having some issues with that box at the moment, though, so having a Pi or my router acting as the gateway appealed to me with its longer uptime.
You’re absolutely right! I’m not super tech-savvy and I was convinced that those file sharing protocols were more or less equivalent (I only tried to compare them in terms of speed). I never paid much attention to it because my other computers were doing fine with one or the other.
Adding a WireGuard system that has iptables adjusted to include forwarding and masquerading will allow your single WireGuard connection to see the rest of your LAN: www.stavros.io/posts/how-to-configure-wireguard/
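The relevant bits on the WireGuard host look something like this, assuming `eth0` is the LAN-facing interface (adjust to yours):

```bash
# Enable IPv4 forwarding so the box will route for wg clients
sudo sysctl -w net.ipv4.ip_forward=1

# In /etc/wireguard/wg0.conf, tie the NAT rules to the tunnel lifecycle:
# PostUp   = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
```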
If you are totally new to WireGuard setup, I found that reviewing all of these links gave me a better understanding of how the configuration worked. No one site seemed to cover it all, and each one had some good tips or explanation about a certain part of WireGuard.
That’s great, thanks for the info. I was able to get WireGuard set up in Unraid, but they make it pretty easy, so I didn’t have a problem. I just didn’t think about connecting to the entire network, not just the server.
I’ve set my secondary DNS to Cloudflare, and my Pi-hole still blocked ads for me. I assumed the secondary DNS server is used only if the primary can’t be reached, but I haven’t actually looked into it.
AFAIK there is no primary and secondary; you can’t tell the client which one to use. It’s best to have two Pi-holes, but having the second DNS set to NextDNS or something like that should be fine if you can’t run two instances. Probably not the best setup, though.
Putting Cloudflare as my secondary would allow some requests to get through and then often the device whose requests went to Cloudflare would continue using Cloudflare for a while.
The best solution I found was to run a second Pihole and use it as the secondary.
You can use something like Orbital Sync to keep them synchronized.
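Orbital Sync itself runs as a container. A sketch along these lines; the hostnames and passwords are placeholders, and the environment variable names should be double-checked against github.com/mattwebbio/orbital-sync for your version:

```bash
docker run -d --name orbital-sync \
  -e PRIMARY_HOST_BASE_URL='http://pihole-primary.local' \
  -e PRIMARY_HOST_PASSWORD='changeme' \
  -e SECONDARY_HOST_1_BASE_URL='http://pihole-secondary.local' \
  -e SECONDARY_HOST_1_PASSWORD='changeme' \
  -e INTERVAL_MINUTES=30 \
  mattwebbio/orbital-sync:1
```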
Pretty much. Not sure how the router determines which DNS to use, but mine seems to latch onto whichever one serves up results the fastest, which would inevitably be cloudflare direct after the pihole returns enough blocks.
So I use a Raspberry Pi Zero W as a dedicated pihole, and my Pi 4 seedbox acts as its own pihole and as a redundant backup. Then use gravity-sync from the Zero to the 4 to mirror the settings.
Another cool trick is using Tailscale to ensure your portable devices can always access your Pi-hole(s) from anywhere, and then setting those servers’ Tailscale addresses as your DNS servers in Tailscale.
This way you can always use your DNS from anywhere, even on cell data or on public networks
I keep a third instance of Pihole running on a VPS and use it as the first DNS server in tailscale so it will resolve a bit faster than my local DNS servers when I’m away from home
Huh, I’ll definitely look into that. Both times I tried to route external pihole access, somehow other mystery services found it and it slowed to a crawl from getting absolutely pounded by requests not from me. Thanks for that tip!
PiHole and similar services just use DNS blocking, which only works if the ads are served via a third-party ad server. Sites with their own ad inventory (YouTube, Facebook, Twitter, etc) can’t be blocked this way since they can just serve the ads from the same domain as their regular content.
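You can see the mechanism with dig. The ad domain and addresses here are illustrative, with 192.168.1.2 standing in for your Pi-hole:

```bash
# A domain that exists only to serve ads gets a null answer
dig +short ads.example-adnetwork.com @192.168.1.2
# -> 0.0.0.0

# A first-party domain resolves normally; blocking it would break the site itself
dig +short youtube.com @192.168.1.2
# -> a real IP
```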
Not sure of any downsides yet, but setting your country to Albania via VPN removes all YouTube ads on Apple TV. I was just informed of this yesterday, and as mentioned, there may be reasons not to do this.
If you’re comfortable self hosting you can use isponsorblocktv to block ads/sponsorship on YouTube on AppleTv and various smart TVs. I use this + Pi-Hole github.com/dmunozv04/iSponsorBlockTV
Primary DNS Server: Clients will first attempt to use the primary DNS server specified in their network settings. […]
What’s the point, though? If your Pi-hole fails, you need to know; otherwise you could be sending days or months of web traffic through the fallback DNS server without even noticing it.
As for a reply: there’s no RFC that specifies a particular order for DNS servers. So in short, you can’t have a fallback that is reliable, and most operating systems will just load balance or opportunistically pick between the two.
Thank you, this is what I was worrying about. As for the “why”: even if my server is quite stable, a shutdown may sometimes be necessary, and slowdowns with Pi-hole have happened. Some redundancy would have been better.
Well, I’m not sure you read the other comments, but there is confirmation from RFC 2182 that for clients there isn’t an order for DNS servers:
The distinction between primary and secondary servers is relevant only to the servers for the zone concerned, to the rest of the DNS there are simply multiple servers.
All are treated equally at first instance, even by the parent server that delegates the zone. Resolvers often measure the performance of the various servers, choose the “best”, for some definition of best, and prefer that one for most queries.