selfhosted


OminousOrange, in Starting over and doing it "right"
@OminousOrange@lemmy.ca avatar

For ease of setup and use, I’ve found Twingate to be great for outside access to my network.

Malice,

I’ll take a look at that one as well, thank you!

BearOfaTime, (edited ) in Starting over and doing it "right"

Not sure why you need a new router for PiHole. If your machines all point to the Pihole for DNS, it works. The router has almost nothing to do with what provides DNS, other than maybe having its DHCP config hand out the Pihole as the DNS server.

Even then, you can set up the Pihole to be both DHCP and DNS (which helps for local name resolution anyway), and then just turn off DHCP in your router.
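
In practice you’d just flip the switch in Pi-hole’s web UI (Settings → DHCP), but under the hood it’s plain dnsmasq config. A rough sketch, assuming a 192.168.1.0/24 LAN with the router at .1 and the Pi-hole at .2 (all addresses here are assumptions):

    # Hand out addresses, point clients at the router for their gateway,
    # and at the Pi-hole itself for DNS
    dhcp-range=192.168.1.100,192.168.1.250,24h
    dhcp-option=option:router,192.168.1.1
    dhcp-option=option:dns-server,192.168.1.2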

As I understand it, Tailscale and Nginx fulfill the same requirements. I lean toward TS myself; I like how administration works, and how it’s a virtual network instead of an in-bound VPN. This means devices just see each other on this network, regardless of the physical network to which they’re connected, which makes it easy to use the same local-network tools you normally use. For example, you can use just one sync tool, rather than one inside the LAN and one that can span the internet. You can map shares right across the virtual network as if it were a LAN. TS also lets you reach devices that can’t run TS, such as printers, routers, access points, etc., by enabling its Subnet Router.
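
For example, turning one machine into a Subnet Router is roughly this (a sketch, assuming your LAN is 192.168.1.0/24; the route still has to be approved in the Tailscale admin console):

    # On the machine that will advertise the LAN to your tailnet
    sudo tailscale up --advertise-routes=192.168.1.0/24
    # On clients that should use the advertised route
    sudo tailscale up --accept-routes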

Tailscale also has a couple of features, Funnel and Share, which let you (respectively) provide internet access to specific resources for anyone, or let foreign Tailscale networks access specific resources.

I see Proxmox and TrueNAS as essentially the same kind of thing: they’re both hypervisors (virtualization hosts), with TrueNAS adding NAS capability. So I can’t think of a use-case for running one on the other. (TrueNAS has some docs around virtualizing it; I assume the use-case is a test lab. I wouldn’t think running TrueNAS, or any NAS, virtualized is an optimal choice, but hey, what do I know?)

While I haven’t explored both deeply, I lean toward TrueNAS, but that’s because I need a NAS solution and a hypervisor, and I’ve seen similar solutions spec’d many times for businesses - I’ve seen it work well. Plus TrueNAS as a company seems to know what they’re doing; they have a strong commercial arm with an array of hardware options. This tells me they’re very invested in making TrueNAS work well, and that they do a lot of testing to ensure it works, at least on their hardware. Having multiple hardware products requires both an extensive test group and a support organization.

Proxmox seems equivalent, except they do just the software part, as far as I’ve seen.

Two similar products for different, but similar/overlapping use-cases.

Best advice I have is to make a list of Functional Requirements, abstract/high-level needs, such as “need external access to network for management”. Don’t think about specific solutions, just make the list of requirements. Then map those Functional requirements to System requirements. This is often a one-to-many mapping, as it often takes multiple System requirements to address a single functional requirement.

For example, that “external access” requirement could map out to a VPN system requirement, but also to an access control requirement like SSO, and then also to user management definitions.

You don’t have to be that detailed, but it’s good to at least have the Functional-to-System mapping so you always know why you did something.

Malice,

You make a very good argument for Tailscale, and I think I’ll definitely be looking deeper into that.

I like your suggestion to map out functional requirements, and then go from there. I think I’ll go ahead and start working on a decent map for that.

As far as the new router for pi-hole… my super-great, wonderful, most awesome ISP (I hope the sarcasm is evident, haha; the provider is AT&T) dictates that I use their specific modem/router (not optional), and it doesn’t let me change the DHCP settings on that mandated hardware. So my best option, as far as I’ve seen, is to put the ISP’s box in passthrough mode with a better router behind it that I can actually set up to use pi-hole.

Thank you for your thoughts and suggestions! I’m going to take a deeper look at Tailscale and get started properly mapping high-level needs/wants out, with options for each.

terminhell,

Ya don’t need ATT’s modem. Some copy pasta I’ve put together:

If it’s fiber, you don’t need the modem in the loop full-time; you’ll only need it once every few months to re-sync.

Things you’ll need:

  1. your own router
  2. cheap 4 port switch (1gig pref)

Setup: Connect the GPON box (the little fiber converter they installed on the wall near the modem) to any port on the 4-port switch. Then run a cable from the switch to the GPON port of the modem (usually the red or green port). Make sure the modem fully syncs. Once this happens, you can move that cable from the modem to your own router’s WAN port. Done! Allow the router a few moments to sync as well.

Now, every once in a while they’ll send a line refresh signal that will break this, and a power outage will do the same. In that case, just plug their modem back in, move the cable back to the modem’s GPON port, wait for it to sync, then move the cable back to your router.

Bonus: Hook up all this to a battery backup and you’ll have Internet even during power outages, at least for a while.

Malice,

Huh, this is interesting, I’ll have to take another look into this. Thanks for the lead!
And I do have a UPS, and it is, indeed, pretty glorious that my internet, security cameras, and server all stay online for a good bit of time after an outage, and don’t even flinch when the power is only out briefly. Convenience and peace of mind. Well worth a UPS.

BearOfaTime,

Since their modem is handing out DHCP addresses, is there any reason why you couldn’t just connect that cable to your router’s internet port, and configure it for DHCP on that interface? Then the provider would always see their modem, and you’d still have functional routing that you control.

Since consumer routers have a dedicated interface for this, you don’t have to make routing tables to tell it which way to the internet, it already knows it’s all out that interface.

Just make sure your router uses a different private address range for your network than the one handed out by the modem.

So your router should get DHCP and DNS settings from the modem, and will know it’s the first hop to the internet.
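
As a purely hypothetical example of keeping the two ranges distinct:

    # AT&T gateway LAN/DHCP:  192.168.1.0/24   (hands your router's WAN port a 192.168.1.x address)
    # Your router's LAN/DHCP: 192.168.50.0/24  (hands your actual devices 192.168.50.x addresses)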

I do this to create test networks at home (my cable modem has multiple ethernet ports), using cheap consumer wifi routers. By using the internet port to connect, I can do some minimal isolation just by using different address ranges, not configuring DNS on those boxes, and disabling DNS on my router.

Malice,

Their modem is my router; it’s both. That’s why I need a new one, to do exactly as you’re describing (is my understanding, although another post here suggests otherwise).

BearOfaTime,

You should still be able to run your own router with it treating their router as the next hop.

BearOfaTime,

Lol, sarcasm received, loud n clear!

Yea, they all suck that way. I still use my own router for wifi. It’s just routing, and your own router will know which way to the internet, unless there’s something I don’t understand about your internet connection. See my other comment below.

Yea, requirements mapping like this is standard stuff in the business world, usually handled by people like Technical Business/Systems Analysts. Typically they start with Business/Functional Requirements, hammered out in conversations with the organization that needs those functions. Those are mapped into System Requirements. This is the stage where you can start looking at solutions, vendor systems, etc, for systems that meet those requirements.

System Requirements get mapped into Technical Requirements - these are very specific: CPU, memory, networking, access control, monitor size, every nitpicky detail you can imagine, including every firewall rule, IP address, and interface config. The System and Technical docs tend to run 100+ and several hundred lines in Excel, respectively, as the Technical Requirements turn into your change-management submissions. They’re the actual changes required to make a system functional.

paf, in Starting over and doing it "right"

If z2m, zwavejs, … are installed from the add-on store of HA, all you have to do is create a full backup of HA, and all your automations will be saved and restored automatically.

Malice,

I am running HA in a container, so that’s not an option, unfortunately. If I’m being honest, though, it’s probably not a bad idea to start fresh with HA and re-import individual automations one-by-one, because HA has a lot of “slop” leftover from when I was first learning it and playing around with it.
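
For anyone else in the same boat, backing up a containerized HA install is mostly just archiving the bind-mounted config directory. A rough sketch, assuming the container is named homeassistant and the config lives at /opt/homeassistant/config (both names are assumptions; yours will differ):

    # Stop the container so the recorder database isn't written mid-copy
    docker stop homeassistant
    # Archive configuration.yaml, automations.yaml, .storage, etc.
    tar -czf ha-config-backup.tar.gz -C /opt/homeassistant config
    docker start homeassistant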

nmhforlife, in SquareSpace dropping the ball.

Yeah I don’t get it. How can a domain registrar not offer DDNS? I’m looking at Cloudflare.

Certainly_No_Brit,

Cloudflare offers an API, which can be used to update the records. It’s as good as a DDNS.
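
Something like this, roughly (the zone ID, record ID, hostname, and token are placeholders from your own account, and the record has to exist already since this overwrites it):

    # Discover the current public IP, then overwrite the existing A record via the API
    IP=$(curl -s https://ifconfig.me)
    curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/dns_records/<RECORD_ID>" \
      -H "Authorization: Bearer <API_TOKEN>" \
      -H "Content-Type: application/json" \
      --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300,\"proxied\":false}"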

dave,
@dave@feddit.uk avatar

I’ve been using AWS R53 for this for ages and it works well. Not specifically recommending AWS but using dynamic updates rather than a DDNS service (or running your own name server which I’ve also done).

Darkassassin07,
@Darkassassin07@lemmy.ca avatar

Idk, but it seems really stupid.

Having not actually looked into it at all:

I’m wondering if they have an api for updating records instead of traditional DDNS. Not the same thing AFAIK.

Either way, I’m already using cloudflare as a nameserver so this shouldn’t matter as much as I thought.

something15525, in SquareSpace dropping the ball.

porkbun.com is what I’m planning on switching to!

azl,

Just want to clarify - after looking at Porkbun’s DNS offerings, it does not appear they do DDNS either. Is that correct? So they are not any better than SquareSpace for that service. Porkbun does have an API interface.

It looks like Namecheap has DDNS support (at least I get valid-looking results when I search for that on their website).

I haven’t changed registrars in 10+ years. I am in the same boat re. Google -> SquareSpace. Is DDNS deprecated in favor of APIs across the board? It looks more complicated to set up.

i_am_not_a_robot,

You don’t actually need DDNS. If your provider has an API you can update your addresses using the API. kb.porkbun.com/…/190-getting-started-with-the-por…

darkfarmer,

I use porkbun and ddclient (ddclient.net). Not sure if that helps you or not but there it is

FrostyCaveman, (edited )

Pro tip: If you use Porkbun, don’t leave your domain’s authoritative DNS with Porkbun nameservers.

Over the year or so I had my stuff configured this way, on at least one occasion (that I know about… I was still setting up my observability stack during that year) their nameservers were flapping hard for over a day, causing my records to intermittently vanish from existence.

I tried contacting them every way I could, hell I even descended into the quagmire of Twitter and created an account so I could tweet at them… and got silence.

Pretty disappointing. I ended up moving all my DNS to AWS Route 53 after a few hours of pulling out my hair. They did eventually respond to my email like a day later, after I’d already moved everything over.

But idk maybe I’m wrong expecting an indie domain registrar to have super high availability on their nameservers… oh well

ShortN0te, in SquareSpace dropping the ball.

Change your nameservers to Cloudflare or something, then use their API to set up DDNS yourself, dynamically updating the DNS entries.

Darkassassin07, (edited )
@Darkassassin07@lemmy.ca avatar

I’m an idiot.

I already do this. The swap to Squarespace won’t actually affect me.

🤦

foggy, in SquareSpace dropping the ball.

Namecheap has always been my favorite registrar to work with.

Great service, responsive support, normal prices. 🤷‍♂️

It’s not hard to not piss off your clients in their industry but somehow GoDaddy and BlueHost tend to rank high on everyone’s shit list.

thejevans, in SquareSpace dropping the ball.
@thejevans@lemmy.ml avatar

As others have stated, porkbun + cloudflare + ddclient will do everything you need.
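
For what it’s worth, the ddclient side of that combo looks roughly like this. This is a sketch only; option names and token handling vary between ddclient versions, so check the docs for the version you install:

    # /etc/ddclient.conf (illustrative values only)
    protocol=cloudflare
    use=web, web=ifconfig.me/ip        # discover the public IP from an external service
    zone=example.com
    login=token                        # literal "token" when authenticating with an API token
    password=<cloudflare_api_token>
    home.example.com                   # the record ddclient keeps updated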

rammjet, in SquareSpace dropping the ball.

Namecheap and Cloudflare.

I use a bash script and cron to update Cloudflare using the following:

github.com/NChaves/Cloudflare_DNS_API_bash
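
The cron half of a setup like that is just a scheduled run of whatever update script you use; a hypothetical example (the script path and interval are made up):

    # Run the updater every 15 minutes and keep a log for troubleshooting
    */15 * * * * /usr/local/bin/update-cloudflare-dns.sh >> /var/log/ddns-update.log 2>&1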

Darkassassin07,
@Darkassassin07@lemmy.ca avatar

Oh fuck.

I just remembered I use cloudflare as my name servers, google (well, Squarespace now) only handles the registration.

I probably don’t have to do anything then.

Kinda feel like a moron now…

rjc,
@rjc@lemmy.world avatar

Why not switch your registration to Cloudflare? They’re awesome, as long as you want to use their DNS.

Darkassassin07,
@Darkassassin07@lemmy.ca avatar

Tbh, laziness and lack of need.

I’ll probably reconsider once renewal comes around, but that’s ~4 years away. Until then, as long as things continue functioning: meh. Doesn’t really make a difference.

Father_Redbeard, (edited ) in Termius alternative ?
@Father_Redbeard@lemmy.ml avatar

Not self-hosted, but Tabby is the closest I’ve found. But I still don’t like it as much as Termius. And from what other, more experienced people have said, Tabby is bloated, requiring way more system resources than a terminal emulator app should.

Also, I asked a related question here if you want to read some other suggestions.

snekerpimp,

Tabby has a server that lets you sync your profiles.

poVoq, in Termius alternative ?
@poVoq@slrpnk.net avatar
jgkawell, in Termius alternative ?
@jgkawell@lemmy.world avatar

I’ve been running Teleport for a while now and it’s been great. It can even manage access to things like Kubernetes clusters which is fantastic in my use case. I’ve been using their free community edition and no complaints so far.
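
For a sense of the day-to-day workflow, access through the proxy looks roughly like this with the tsh CLI (the hostname and usernames are placeholders):

    # Authenticate against the Teleport cluster (browser-based SSO if configured)
    tsh login --proxy=teleport.example.com --user=alice
    # List reachable nodes, then SSH to one through the proxy
    tsh ls
    tsh ssh alice@homelab-node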

Voroxpete, (edited ) in SquareSpace dropping the ball.

Namecheap does dynamic DNS. I’ve been using them for years; really solid.

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

Second for Namecheap. It’s reliable, easy… and cheap.

teawrecks, in Starting over and doing it "right"

I need everything to be fully but securely accessible from outside the network

I wouldn’t be able to sleep at night. Who is going to need to access it from outside the network? Is it good enough for you to set up a VPN?

The more stuff visible on the internet, the more you have to play IT to keep it safe. Personally, I don’t have time for that. The safest and easiest system to maintain is one where possible connections are minimized.

Malice,

I sometimes travel for work, as an example, and need to be able to take care of things while I’m away and the girlfriend is home, or when she’s with me and someone else is watching the place (I have a dog that needs to be pet-sat). I definitely have the time to tinker with it. Patience may be another thing, though, lol.

Linuturk,
@Linuturk@lemmy.world avatar

Tailscale would allow you access to everything inside your network without having it publicly accessible. I highly recommend that since you are new to security.

Malice,

Heavily leaning this way, thank you for another vote!

teawrecks,

It’s not clear to me how tailscale does this without being a VPN of some kind. Is it just masking your IP and otherwise just forwarding packets to your open ports? Maybe also auto blocking suspicious behavior if they’re clearly scanning or probing for vulnerabilities?

lowdude,

That’s exactly what it is. I haven’t looked into it too much, but as far as I know its main advantage is simplifying the setup process, which in turn reduces the chances of a misconfigured VPN.

LufyCZ, in Starting over and doing it "right"

Just fyi - running TrueNAS with ZFS as a VM under Proxmox is a recipe for disaster, ask me how I know.

ZFS needs direct drive access; with VMs, the hypervisor virtualizes the storage adapter that gets passed through, which can mess things up.

What you’d need to do is buy a SATA/SAS HBA card and pass the whole card through; then you can use a VM.
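
Roughly, whole-card passthrough in Proxmox looks like this (the VM ID and PCI address are placeholders, and IOMMU has to be enabled in the BIOS and boot parameters first):

    # Find the PCI address of the HBA
    lspci | grep -i -e sas -e sata
    # Hand the whole card (example address 01:00.0) to VM 100
    qm set 100 --hostpci0 0000:01:00.0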

Malice,

The more replies like this I get, the more I’m inclined to set up a second computer with just TrueNAS and let it do nothing but handle that. I assume that, then, would be usable by the server running proxmox with all its containers and whatnots.

Thank you for the input!

LufyCZ,

If you want to learn ZFS a bit better though, you can just stick with Proxmox. It supports it; you just don’t get the nice UI that TrueNAS provides, meaning you’ve got to configure everything manually through config files and the terminal.
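
To give a feel for what “manually” means, creating a mirrored pool and a dataset from the Proxmox shell is roughly this (disk IDs and names are placeholders):

    # Build a mirrored pool from two disks, using stable by-id paths
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    # Create a compressed dataset and check pool health
    zfs create -o compression=lz4 tank/media
    zpool status tank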

Lakuz,

You can run Virtual Machines and containers in TrueNAS Scale directly. The “Apps” in TrueNAS run in K3s (a lightweight Kubernetes) and you can run plain Docker containers as well if you need to.

TrueCharts provides additional apps and services on top of the official TrueNAS supported selection.

I have used Proxmox a lot before TrueNAS. At work and in my homelab. It’s great, but the lack of Docker/containerd support made me switch eventually. It is possible to run Docker on the same host as Proxmox, but in the end everything I had was running in Docker. This made most of what Proxmox offers redundant.

TrueNAS has been a better fit for me at least. The web interface is nice and container based services are easier to maintain through it. I only miss the ability to use BTRFS instead of ZFS. I’ve had some annoying issues with TrueCharts breaking applications on upgrades, but I can live with the occasional troubleshooting session.
