I use Backblaze B2 for my backups. Storing about 2 TB comes out to roughly $10/mo, which is on par with Google One pricing. However, I get the benefit of controlling my data, and I use it for a lot more than just photos (movies, shows, etc.).
If you want a cheaper solution and have somewhere else you can store off-site (e.g. family/friend’s house), you can probably use a raspberry pi to make a super cheap backup solution.
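For anyone wondering what the B2 side of this looks like in practice, here's a minimal sketch using restic (just one of several tools that speak B2; the bucket name and credentials are placeholders):

```sh
# restic's native B2 backend; credentials and bucket name are placeholders
export B2_ACCOUNT_ID=your-key-id
export B2_ACCOUNT_KEY=your-application-key

restic -r b2:my-backup-bucket:backups init                   # one-time repository setup
restic -r b2:my-backup-bucket:backups backup ~/photos ~/media
```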
I'm still too container-stupid to understand the right way to do this. I'm running it in Docker under Kubernetes, and sometimes I don't update Nextcloud for a long time; then I do a container update and it's all fucked because of incompatible PHP versions or some shit.
I don't remember much about how to use Kubernetes, but if you can specify a tag like nextcloud:28 instead of nextcloud:latest you should have a safer time with upgrades. Then make sure you always step through every major version in order before moving to a newer one; Nextcloud doesn't support skipping majors, and this is crucial.
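For example, here's a minimal sketch of what pinning the tag looks like in a plain Kubernetes Deployment (names are placeholders, and a real manifest would also have volumes, env, etc.):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:28   # pinned major; bump 28 -> 29 -> 30 one step at a time
          ports:
            - containerPort: 80
```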
This is ultimately why I ditched Nextcloud. I had it set up as recommended: Docker, MariaDB, yadda yadda. And I swear, if I farted near the server, Nextcloud would shit the bed.
I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.
I have had various sticks and Roku's highest-end models, and then got the latest Apple TV with the wired Ethernet port, which adds Dolby Vision and high-frame-rate HDR. I have a 2022 high-end TV.
The video quality is noticeably better. I'm not sure about older Apple TVs, but this is clearly better than the top-end Roku. I'm also not sure whether it's the same on older TVs.
The other thing is that you want to hardwire if at all possible. Even the best Wi-Fi can't touch the reliability of a wire.
I had the same problem as OP. My solution was to port forward to my server but then block connections from all IP addresses except my work's, which I added to an allowlist.
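For reference, the allowlist part can be as simple as a couple of firewall rules; a sketch with ufw (the port and work IP are placeholders):

```sh
# default-deny, then allow only the work IP to reach the forwarded port
# (remember to also allow whatever else you need, e.g. LAN SSH, before enabling)
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.10 to any port 443 proto tcp
sudo ufw enable
```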
It’s working well so far, but I think the Cloudflare tunnel is the better option.
Cloudflare is a good place for beginners to start; setting up a reverse proxy can be daunting the first time, and it's certainly better than having no reverse proxy at all.
That being said, running your own reverse proxy is nice: security is better since the certificates are controlled by your own server, and more complex setups become possible.
My Traefik setup uses Let's Encrypt wildcard certificates to provide HTTPS for internal, LAN-only applications (Vaultwarden) while providing external access for other things like Seafile.
I also use Traefik with authentik for single sign-on. Traefik lets me secure apps like Sonarr behind single sign-on from my authentik setup, so I log in once in my browser and can access many of my apps without any further passwords.
authentik also supports OAuth, so I use that for Seafile, FreshRSS and Immich, and it handles Jellyfin login via LDAP. (This last part could be set up with Cloudflare as well.)
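For reference, the Traefik side of that is essentially a forward-auth middleware pointing at authentik's outpost. A rough sketch as docker-compose labels (hostnames, router names and the cert resolver are placeholders; check authentik's docs for the exact outpost path in your version):

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.sonarr.rule=Host(`sonarr.example.com`)"
  - "traefik.http.routers.sonarr.entrypoints=websecure"
  - "traefik.http.routers.sonarr.tls.certresolver=letsencrypt"
  # every request is sent to authentik first; only authenticated users get through
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
  - "traefik.http.routers.sonarr.middlewares=authentik"
```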
This is the way. My setup is very similar, except I only use authentik for Nextcloud. I don't expose my "arr" services to the internet, so I don't feel it's necessary to put them behind authentik, although I could if I wanted to.
Duo's 10 free personal licenses are also great, since Duo can plug into authentik to provide MFA for the whole setup.
The primary reason to put authentik in front of the arrs is so I don't have to keep typing a different password for each one when logging in. I disable authentication in each app itself and then stop exposing its Docker port, so the only way to access it is via Traefik + authentik. It's local-access only, so it isn't directly exposed to the internet.
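Concretely, "disable the exposed docker port" just means not publishing it and letting the app share a network with Traefik; a minimal compose sketch (image and network name are placeholders):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    networks:
      - proxy              # the network Traefik is also attached to
    # no "ports:" section, so the web UI is only reachable through Traefik

networks:
  proxy:
    external: true
```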
Duo's 10 free accounts are very nice, but I hate being locked into things that aren't self-hosted. An open-source or self-hosted alternative to Duo would be great.
I use Caddy and it does everything for me, but my limited understanding is that the DNS entry for which the certs are requested must point to the IP address Caddy is listening on. So if I have a DNS entry like internal.domain.com that resolves to 10.0.0.123, and Caddy is listening on that address, I can get an HTTP connection but not an HTTPS one, because Let's Encrypt can't verify that 10.0.0.123 is actually under my control.
*.local.domain.com gets its own cert, but the * can be anything: the same wildcard cert covers whatever you put in place of the star, as many names as you want, so none of those hosts needs to be internet-accessible to verify. That way vaultwarden.local.domain.com stays local-only.
There is an alternate verification method (the DNS challenge) that uses an API key for your DNS provider, if yours is supported. That method doesn't need any IP to be assigned; it doesn't care whether there are A/AAAA records or where they point, because it verifies control of the domain directly.
deSEC.io is a good example of a reputable, free DNS provider that also lets you manage API keys. The catch is that they require you to enable DNSSEC (their mission is similar to Let's Encrypt's, but for DNS).
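For the Caddy case above, that's the DNS challenge; a minimal Caddyfile sketch assuming Caddy was built with a DNS provider plugin (Cloudflare here purely as an example; domain, backend and token are placeholders):

```
*.local.domain.com {
    tls {
        # DNS challenge: proves domain control via a TXT record, no public IP needed
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 10.0.0.123:8080
}
```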
I see that you want to use the cert for intranet apps btw.
What I did was get two LE wildcard certs, one for *.my.dom and one for *.local.my.dom. Both of them can be obtained and renewed with the API approach without any further care to what they actually point at.
Also, by using wildcards, you don't give away any of your subdomains. LE certificates are publicly logged, so if you get a cert for a specific subdomain everybody can see it. local.my.dom will be known, but since that's only used on my LAN it doesn't matter.
Then, for externally exposed apps, I point my.dom at an IP (A record) and either make a wildcard CNAME sending everything under *.my.dom to my.dom, or explicit per-subdomain CNAMEs, also pointing at my.dom.
This way you only have one record to update for the IP and everything else will pick it up. I prefer the second approach, and I use cryptic subdomain names (i.e. don't use jellyfin.my.dom) to cut down on brute-force guessing.
The IP points at my router, which forwards 443 (or a different port if you prefer) to a reverse proxy that uses the *.my.dom LE cert. If whatever hits the port doesn't provide the correct full domain name, it just gets an error from the proxy.
For the internal stuff I use dnsmasq, which has a feature that resolves anything ending in .local.my.dom to the LAN IP of the reverse proxy. That proxy uses the *.local.my.dom LE cert for these names but otherwise works the same.
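Put together, the public side is just a couple of records and the LAN side is one dnsmasq line; a sketch with placeholder names and IPs:

```
# Public DNS at the provider:
#   my.dom.          A      203.0.113.10    ; home/router IP
#   jf8x2.my.dom.    CNAME  my.dom.         ; cryptic name instead of jellyfin.my.dom
#
# dnsmasq.conf on the LAN: resolve everything under local.my.dom to the reverse proxy
address=/local.my.dom/192.168.1.20
```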
No. The video card is only wired to send video out through its own ports (which don't exist), and the ports on the motherboard are wired to go to the nonexistent iGPU on the CPU.
In Windows you're not sending the signal directly through another port; you're routing the dGPU's signal through the iGPU to reach that port.
On a laptop with Nvidia Optimus or AMD's equivalent you can see the increased iGPU usage even though the dGPU is doing the heavy lifting. It's about 30% usage on my 11th-gen i9's iGPU routing the 3080's video out to my 4K display.
Will check it out. Setting up Postfix + Dovecot with DMARC and Postgres was a fun experience, but how I did it is starting to slip out of my memory and I don't want to have to go through it all again.
I looked at this, and it looks pretty rudimentary compared to something like Mailcow-dockerized, which is a full Docker stack with ClamAV, Sieve, etc. that you can add Roundcube onto, and which has worked very well for me for years. There are precious few JMAP clients out there, so that's not much of a consideration really. I'd also rather have rspamd itself than their fork of it, because then I can rely on the upstream documentation; theirs doesn't seem very comprehensive by comparison.
Plus, I'd rather have a stack of separate Docker containers than a single container that munges it all together, but maybe that's not a big deal. I like to let the official Postgres image manage Postgres rather than put another layer in there.
I don't think it's you; it generally is bad practice to run multiple processes inside one container. It defeats most of the isolation, introduces problems with handling zombie processes (so you need an init) and with restarting tools when they crash (so you need something like supervisord, which I guess this image might use; I didn't check). Each piece of software adds dependencies, which can conflict (again defeating the idea of containers), and of course CVEs. Then you have problems with users, etc.
So yeah, containers are generally not meant to be used this way. The project might be cool but I would be very uncomfortable running it like this, especially if that’s going to be my primary email, with all the password resetting capabilities etc.
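For contrast, the "one process per container" shape being described looks roughly like this in compose (the images are hypothetical placeholders):

```yaml
services:
  smtp:
    image: example/smtp:1.0      # hypothetical image, one process per container
    init: true                   # lets Docker's init reap zombie processes
  imap:
    image: example/imap:1.0
    init: true
  antispam:
    image: example/antispam:1.0
    init: true
```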
Reading the Dockerfile in their repo, it's simply a clean debian:slim with four compiled Rust binaries placed into it. There are no services, no supervisord, nothing except the mail server binaries themselves.
How's performance on that setup? I own the case and am looking to do the exact same vdev setup next year, but I'm wondering if the wider vdevs negatively impact performance in any noticeable way. I'm also wondering if 128 GB of RAM is too little for that kind of setup with 20 TB drives; I feel like I might have to find out the hard way…
Performance is great IMO. I store all my Plex media on this setup as a network share and never have any issues or slowdowns. I only use the machine as a strict NAS, nothing else.
I started with 9 drives at 12 TB each; about 3 years later (mid this year) I added the second vdev to my main pool: 9 drives, 20 TB each.
vdevs don't have to be the same size as each other, but you generally want the same number of drives and the same layout in each one.
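For anyone curious what adding that second vdev boils down to under the hood (TrueNAS does this from the UI), it's roughly this, assuming raidz2 and placeholder pool/disk names:

```sh
# extend pool "tank" with a second 9-wide raidz2 vdev
zpool add tank raidz2 \
  da10 da11 da12 da13 da14 da15 da16 da17 da18
```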
I have Unraid and Proxmox setups on other machines, running independently; Plex and other software access my TrueNAS over the network.
For the TrueNAS box itself, IMO you don't need much horsepower. I run it on a 12-year-old motherboard with 12 GB of RAM and a 60 GB SSD to boot from. Nothing special at all. Unraid and Proxmox, on the other hand, are where I spend the money on RAM and processing power.
My network is gigabit and I get full speed on network transfers. I'm looking to go 10Gb in the future, but that would require 10Gb NICs in all my PCs and new network switches. I don't see it affecting my TrueNAS setup, though; besides, your network transfer is only as fast as the read/write speed of the drives.
SSH may be installed on the Pi but may need to be enabled; that was the second-to-last bullet point in the requirements, the final one being to install Ansible. If you didn't get the requirements taken care of, the installation will not be successful.
Please first try to SSH into your Pi. Once you have that done, you should install Ansible. After that, you should be able to run the playbook from step 7 and we can proceed from there.
I'm not trying to be mean, but I think you might be trying to jump straight into the deep end before learning to swim. While the commands have been included in the guide so you can install this, it really does help to understand what those commands do and what they mean. I suggest first getting to know your Pi a little better, learning how to get SSH going on it, and then moving on to installing Ansible. There's information on the Raspberry Pi website on how to enable SSH on your Pi.
No, not really. You first enable it on the Raspberry Pi. Then you access the Pi from your normal computer by running this command in your command line or shell: ssh user@1.2.3.4 where 'user' is your Raspberry Pi user (pi by default) and '1.2.3.4' is the IP address of the Pi.
It should already be there if you're on Windows or Linux; you just need to enable SSH on the Pi, then you can remote into it by running this from a command line / shell:
ssh pi@1.2.3.4
Where ‘pi’ is your user on your pi, and ‘1.2.3.4’ is the IP address or hostname for the pi.
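Putting the pieces together, the whole sequence is roughly this (the IP and playbook name are placeholders; use whatever the guide actually ships):

```sh
# on the Pi itself (or via raspi-config): make sure the SSH service is running
sudo systemctl enable --now ssh

# from your own computer: log in to the Pi
ssh pi@1.2.3.4

# on whichever machine will run the playbook: install Ansible
sudo apt update && sudo apt install -y ansible

# then run the guide's playbook from step 7 (filename is a placeholder)
ansible-playbook playbook.yml
```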
Just want to add, too, that installing and hosting something like Lemmy is not really a beginner task. I'm not trying to discourage you, quite the opposite. You should just know this will be a challenging endeavor, but it will be rewarding once you complete it, and you'll learn a lot in the process.
If I'm supposed to be reading that top comment, I don't see where you state what your results were. You apparently "had errors" but neglected to note any down, and now you "don't" have errors.