With a NAS I tend to go with a commercial product, and only for that purpose. It stores the data, maybe serves it up as a file server. That's the NAS's one job.
My processing happens on another box, like your Pi. I want the NAS focused on its single purpose.
So my suggestion would be to pick up a Netgear/Synology/whatever, but only use it as a NAS.
If you want to expand beyond that Pi, just use a real machine and upgrade yourself to maybe a nice Docker setup.
This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits in the model number are the year). Then, if your budget is tight, grab a couple of 2nd-hand disks from different batches (or three if your budget stretches to it).
I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or Lenovos are a great step up.
You mentioned that your CPU is getting maxed out on WireGuard. That makes a lot of sense since it’s generally not hardware accelerated; older low-end CPUs can struggle here.
What choices do you have for protocols with your VPN software?
I want to use the Gluetun container, but I’m flexible about everything else. I can try an OpenVPN server, but I’m not sure what AES128 means (I know it’s some kind of encryption, but I don’t know how to apply it in my case). There are many different servers to choose from; I’ll try a few with OpenVPN over UDP. Thanks
Ok, in that case: the goal is to use a cipher suite that works well on your device while still being secure. AES is accelerated on most processors these days, but you’ll want to confirm that by looking up your specific CPU (on both the host and client machines!) and checking for AES acceleration.
AES-128-GCM would be my suggestion.
UDP mode provides less overhead, so it should be faster for you.
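To make that concrete, here is a minimal sketch of the relevant config lines, assuming OpenVPN 2.5 or newer (where the `data-ciphers` option superseded the older `cipher` directive); everything else (certs, keys, routes) stays as in your existing config:

```
# Excerpt from server.conf / client.ovpn (OpenVPN 2.5+) — a sketch, not a full config
proto udp                    # UDP transport: less overhead than TCP
data-ciphers AES-128-GCM     # data-channel cipher; fast where AES is hardware accelerated
auth SHA256                  # HMAC digest for the control channel
```

Both ends negotiate a cipher from `data-ciphers`, so the server and client lists need to overlap for AES-128-GCM to be used.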
Alternatively, you could use IPsec instead of OpenVPN, but that’s a chore to configure. It has the benefit of being free and natively supported by many devices.
You would still want to configure an appropriate cipher suite that’s fast and secure.
Imo this is not enshittification yet, but I’m concerned it could pave the way! It all depends on whether they make using your own content harder in order to promote this, or if it’s just a side hustle to add another revenue stream.
VPN limiting your bandwidth? Sounds like a CPU issue. You’d be surprised how much CPU overhead it takes to encrypt and decrypt traffic at such high speeds.
I don’t need it, but I wasn’t sure whether I was using the full capacity of what I’m paying for, and I want to learn more. This is my hobby; I enjoy setting up things more than using the server lol. Edit: and yeah, it feels like it’s CPU capped.
I enjoy setting up things more than using the server lol
Also me in life, in games… I like min-maxing, making things as efficient as they’ll allow.
So I asked ChatGPT what professions would be best for a person like that, and of the 10 answers it spat out, surprise surprise, I’ve worked in 2 of the top 3.
I stand with you on the subdomain and bare-metal thing. There are many great applications that I have trouble deploying because I don’t have control over the domain’s DNS A records in my setup. Setting up mysite.xyz/something is trivial, and it’s something I have full control over. The Docker thing I can understand to some extent, but I wish it were as simple as a Python venv kind of thing.
I’m sure people will come after me saying this or that is very easy, but your post proves that I’m not alone. Maybe someone will come to the rescue of us novices too.
No, it’s not that. The point is not whether using a subdomain is easy: you might not have access to one, maybe your setup is just “ugly” with one, or you just don’t want to use one.
It’s standard practice in web-based software to allow base URLs. Maybe the choice of framework wasn’t the best one from this point of view.
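For anyone stuck on this: serving an app under a path usually means the reverse proxy strips or forwards the prefix while the app generates links relative to it. A hypothetical nginx fragment (the app’s port and the forwarded header are illustrative assumptions, not any specific app’s requirements):

```
# Hypothetical nginx sketch: expose a backend app at mysite.xyz/something
location /something/ {
    proxy_pass http://127.0.0.1:8080/;          # trailing slash strips the /something prefix
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Prefix /something;  # some apps read this to build correct links
}
```

This only works cleanly when the app itself supports a base URL; otherwise absolute links in its HTML will escape the prefix, which is exactly the complaint here.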
As for Docker: deploying Immich on bare metal should be fairly easy, if they provided binaries to download. The complex part is building it, not deploying it.
But you gave me an idea: get the binaries from the Docker image… Maybe I will try.
Once you have the bins, deploying will be simple. Updating, on the other hand, will be more complex, since you’d have to download a new Docker image and extract it again each time.
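Pulling files out of an image without running it can be sketched with `docker create` plus `docker cp` (the image name and in-image path below are illustrative assumptions, not Immich’s actual layout):

```
# Sketch: copy files out of an image without starting the app.
# Image name and path are assumptions for illustration only.
docker create --name extract ghcr.io/immich-app/immich-server:latest
docker cp extract:/usr/src/app ./immich-bins
docker rm extract
```

`docker create` materializes the container filesystem without executing anything, so this is safe to run even for images whose entrypoint you don’t want started.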
Another application I wish had easy support for what you call base URLs is Apache Superset. Such a great application, and I’m unable to use it in my setup.
You may need to check your server’s DNS configuration, or make sure that the hostname “lemmy-ui” is correctly defined and reachable on your network. It looks like it’s expecting lemmy-ui to be on the .57 machine; if you’re expecting it on .62, then something is misconfigured in the script.
It just looks like it can’t find that host.
Sorry I can’t be of more help. I don’t run a Lemmy instance and I’m not familiar with the Ansible config you’re using.
I don’t know about you, but I want companies to take self-hosted and FOSS solutions seriously. The fact that they want to work with him is a major step in the right direction. It would be dumb to discourage companies from supporting FOSS.
Are they supporting FOSS, or looking to buy out the project to make it a closed in-house solution and avoid the bad publicity they created this last week?
I think you’re asking for alternative front ends to git, rather than GitHub?
I’m not sure if you want to retain access to Issues, Actions, Discussions and everything else on GitHub, but through another interface. Or if you’re asking to make a clean break from that data and ecosystem.
If it’s the former, then I think it’s either the web app (which you don’t like), or the CLI (gh). If it’s the latter, then I think any of the other options mentioned by others will do.