selfhosted

savedbythezsh, in worth selfhosting immich or similar? what about backups?

I use Backblaze B2 for my backups. Storing about 2TB comes out to about $10/mo, which is on par with Google One pricing. However, I get the benefit of controlling my data, and I use it for tons more than just photos (movies/shows, etc.).

If you want a cheaper solution and have somewhere else you can store off-site (e.g. family/friend’s house), you can probably use a raspberry pi to make a super cheap backup solution.

Gooey0210, (edited )

If you have 1TB+ of data you can get a cheaper option just by moving to Hetzner (also, even Storj is cheaper than Backblaze)

governorkeagan,

This is exactly what I was looking for last night, thank you!

recapitated, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

Always works great for me.

I just run it (behind haproxy on a separate public host) in docker compose w/ a redis container and a hosted postgres instance.

Automatically upgrade minor versions daily by pulling new images. Manually upgrade major versions by updating the compose file.

Literally never had a problem in 4 years.
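
For reference, a compose sketch along those lines (a minimal illustration, not my exact setup; the pinned tag, the hosted Postgres hostname and the password are placeholders):

```yaml
# docker-compose.yml - minimal sketch
services:
  nextcloud:
    image: nextcloud:28        # pin the major version; bump it deliberately
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: db.example.internal   # externally hosted Postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me
      REDIS_HOST: redis
    volumes:
      - nextcloud_data:/var/www/html
    depends_on:
      - redis

  redis:
    image: redis:7
    restart: unless-stopped

volumes:
  nextcloud_data:
```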

cyberpunk007,

I’m still too container stupid to understand the right way to do this. I’m running it in docker under kubernetes, and sometimes I don’t update Nextcloud for a long time, then I do a container update and it’s all fucked because of incompatible PHP versions or some shit.

recapitated,

I don’t remember much about how to use kubernetes but if you can specify a tag like nextcloud:28 instead of nextcloud:latest you should have a safer time with upgrades. Then make sure you always upgrade all the way before moving to a newer major version, this is crucial.

There are varying degrees of version specificity available: hub.docker.com/_/nextcloud/tags
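
In practice that means stepping the pinned tag through one major at a time, for example:

```yaml
services:
  nextcloud:
    image: nextcloud:27   # then 28, then 29; let each major finish its migration before the next
```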

Make sure you’re periodically evaluating your site with scan.nextcloud.com and following all of the recommended best practices.

madnificent,

Kubernetes is crazy complex compared to docker-compose. It is built to solve scaling problems we self-hosters don’t have.

First learn a few docker commands, set some environment variables, mount some volumes, publish a port. Then learn docker-compose.

Tutorials are plentiful; if the ones from docker.com still exist, they’re likely still sufficient.

cyberpunk007,

Yeah, I’m only running it because TrueNAS SCALE uses it

mosiacmango,

They have an "all in one" docker installer for the above because you are far from alone here.

harsh3466, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

This is ultimately why I ditched Nextcloud. I had it set up, as recommended, docker, mariadb, yadda yadda. And I swear, if I farted near the server Nextcloud would shit the bed.

I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.

LordKitsuna,

If all you want is files and sharing try Seafile

harsh3466,

That’s what I’ve got running now, and for me Seafile has been rock solid.

geekworking, in AppleTV complete replacement opinions

I have had various sticks and Roku’s highest-end models, and then got the latest ATV with the hardwired Ethernet port, which adds Dolby Vision and high-frame-rate HDR. I have a 2022 high-end TV.

The video quality is noticeably better. Not sure about older ATVs, but this is clearly better than the top-end Roku. Also, I’m not sure if it is the same on older TVs.

The other thing is that you want to hardwire if at all possible. Even the best Wi-Fi can’t touch the reliability of a wire.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

Got it, and yes, current ATV is hardwired. Wi-Fi in my home wasn’t too bad, but wired is definitely better. Appreciate the response / thoughts.

valkyre09, in Self-hosted VPN that can be accessed via browser extension

I use Cloudflare tunnels for this very reason; you can protect access to the page behind a login (I use Azure AD).

It basically acts like a reverse proxy allowing me access to those local resources without anything being installed on the client computer.
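
The tunnel side is just a small config file; a sketch (the tunnel ID, hostname and port are placeholders, and the Azure AD login itself is configured as an Access policy in the Cloudflare dashboard rather than here):

```yaml
# ~/.cloudflared/config.yml - sketch only
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # the local resource being exposed
  - service: http_status:404         # required catch-all rule
```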

thefactremains,

This is the right answer.

The only other solution I can think of would be to put a device in the middle (such as this router).

lemmyvore,

Or you can use the CF Tunnel equivalent from Tailscale, called Funnel.

tailscale.com/blog/reintroducing-serve-funnel

k4j8,

I had the same problem as OP. My solution was to port forward to my server but then block connections from all IP addresses except my work’s, which I added to an allowlist.

It’s working well so far, but I think the Cloudflare tunnel is the better option.
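
If you end up putting a reverse proxy in front anyway, the same allowlist idea can be expressed there instead of (or on top of) firewall rules; a traefik sketch with a placeholder source range, not necessarily what I run:

```yaml
# traefik dynamic configuration - IP allowlist middleware sketch
http:
  middlewares:
    work-only:
      ipAllowList:             # called ipWhiteList on traefik v2
        sourceRange:
          - "203.0.113.0/24"   # your work network's public range
```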

Lem453, in What's the point of a reverse proxy and does cloudflare give all the benefits of one?

CloudFlare is a good place for beginners to start. Setting up a reverse proxy can be daunting the first time. Certainly better than no reverse proxy.

That being said, having your own reverse proxy is nice. Better security since the certificates are controlled by your server. Also complex stuff becomes possible.

My traefik setup uses Let’s Encrypt wildcard domains to provide HTTPS for internal, LAN-only applications (Vaultwarden) while providing external access for other things like Seafile.

I also use traefik with authentik for single sign-on. Traefik allows me to secure apps like sonarr with single sign-on from my authentik setup, so I log in once in my browser and can access many of my apps without any further passwords.

Authentik also supports OAuth, so I can use that for Seafile, FreshRSS and Immich. Authentik allows Jellyfin login with LDAP. (This last paragraph could be set up with CloudFlare as well.)
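
The traefik side of that SSO setup boils down to one forwardAuth middleware attached to each router; a rough sketch following authentik’s documented traefik integration (the container name and port are assumptions about the compose setup):

```yaml
# traefik dynamic configuration - authentik forward-auth middleware sketch
http:
  middlewares:
    authentik:
      forwardAuth:
        address: http://authentik-server:9000/outpost.goauthentik.io/auth/traefik
        trustForwardHeader: true
        authResponseHeaders:
          - X-authentik-username
          - X-authentik-email
          - X-authentik-groups
# then attach "middlewares: [authentik]" (or authentik@file when using docker labels)
# to each router, e.g. the sonarr one, to put that app behind the SSO login
```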

Maximilious, (edited )
@Maximilious@kbin.social avatar

This is the way. My setup is very similar except I only use authentik for Nextcloud. I don't expose my "arr" services to the Internet so I don't feel it necessary to put them behind authentik, although I could if I wanted.

Using Duo’s 10 free personal licenses is also great, as Duo can plug into authentik to provide MFA for the whole setup.

Lem453, (edited )

The primary reason to put authentik in front of the arrs is so I don’t have to keep putting in a different password for each one when logging in. I disable authentication in each app itself and then disable the exposed docker port as well, so the only way to access them is via traefik + authentik. It’s local-access only, so it isn’t directly exposed to the internet.

10 free accounts on duo is very nice but I hate being locked into things (not self hosted). An open source or self hosted alternative to duo would be great.

throwafoxtrot,

How do you get certs for internal applications?

I use caddy and it does everything for me, but my limited understanding is that the DNS entry for which the certs are requested must point to the IP address at which caddy is listening. So if I have a DNS entry like internal.domain.com which resolves to 10.0.0.123 and caddy is listening on that address, I can get an HTTP connection but not an HTTPS connection, because Let’s Encrypt can’t verify that 10.0.0.123 is actually under my control.

Lem453, (edited )

You are completely correct…for normal certs. Internal domains require a wildcard cert with a DNS challenge.

This video explains how to set it up with traefik

youtu.be/liV3c9m_OX8

I’d bet caddy can do something similar.

Basically you have:

  1. seafile.domain.com -> has its own cert
  2. *.local.domain.com -> has its own cert, but the * can be anything: the same cert covers whatever you put in place of the star, as many times as you want, and the host never needs to be internet-accessible to verify. That way vaultwarden.local.domain.com remains local only.
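
For the traefik case, the wildcard comes from a DNS-challenge certificate resolver plus an explicit domains block on the router; a sketch with placeholder names (static and dynamic config are shown together for brevity, and the DNS provider’s API token is passed via environment variables):

```yaml
# traefik static configuration - DNS-challenge resolver for wildcard certs
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@domain.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare   # any DNS provider traefik's ACME client supports

# traefik dynamic configuration - a router that requests the wildcard explicitly
http:
  routers:
    vaultwarden:
      rule: Host(`vaultwarden.local.domain.com`)
      service: vaultwarden
      tls:
        certResolver: letsencrypt
        domains:
          - main: "local.domain.com"
            sans:
              - "*.local.domain.com"
```
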
lemmyvore,

There is an alternate verification method using an API key for your DNS provider, if it’s a supported one. That method doesn’t need any IP to be assigned (it doesn’t care if there are A/AAAA records or where they point, because it can verify the domain directly).

deSEC.io is a good example of a reputable, free DNS provider that additionally allows you to manage API keys. The catch is that they require you to enable DNSSEC (their mission is similar to Let’s Encrypt’s, but for DNS).

throwafoxtrot,

Thanks, good to know. I’ll see if can set that up.

lemmyvore,

I see that you want to use the cert for intranet apps btw.

What I did was get two LE wildcard certs, one for *.my.dom and one for *.local.my.dom. Both of them can be obtained and renewed with the API approach regardless of what they actually point at.

Also, by using wildcards, you don’t give away any of your subdomains. LE requests are public so if you get a cert for a specific subdomain everybody will know about it. local.my.dom will be known but since that’s only used on my LAN it doesn’t matter.

Then what I do for externally exposed apps is point my.dom to an IP (A record) and either make a wildcard CNAME for everything *.my.dom to my.dom, or explicit subdomain CNAMEs as needed, also to my.dom.

This way you only have one record to update for the IP and everything else will pick it up. I prefer the second approach, and I use a cryptic subdomain name (i.e. don’t use jellyfin.my.dom) so I cut down on brute-force guessing.

The IP points at my router, which forwards 443 (or a different port if you prefer) to a reverse proxy that uses the *.my.dom LE cert. If whatever tries to access the port doesn’t provide the correct full domain name, it gets an error from the proxy.

For the internal stuff I use dnsmasq, which has a feature that overrides DNS resolution for anything ending in .local.my.dom to the LAN IP of the reverse proxy. That proxy uses the *.local.my.dom LE cert for these but otherwise works the same.

fuckwit_mcbumcrumble, in Hardware question

No. The video card is only wired to send video out through its ports (which don’t exist), and the ports on the motherboard are wired to go to the nonexistent iGPU on the CPU.

Appoxo,
@Appoxo@lemmy.dbzer0.com avatar

Depends. You can send the signal in Windows through another port.
But if it works without an iGPU…

fuckwit_mcbumcrumble,

In Windows you’re not sending the signal directly through another port. You’re sending the dGPU’s signal through the iGPU to get to the port.

On a laptop with Nvidia Optimus or AMD’s equivalent you can see the increased iGPU usage even though the dGPU is doing the heavy lifting. It’s about 30% usage on my 11th-gen i9’s iGPU routing the 3080’s video out to my 4K display.

Appoxo,
@Appoxo@lemmy.dbzer0.com avatar

In that case nevermind.
Carry on.

EdgeRunner, (edited ) in Problem while trying to setup an instance

At first glance, I would say you need to add Jakob as a sudo user first:

askubuntu.com/questions/7477/ddg#7484

And then install ansible-playbook.

Infinitus,

Thanks!

EdgeRunner,

You’re welcome mate,
I hope that’s working for you now. Don’t hesitate to ask about the next steps if you run into other issues.

GL, and have fun!

eskuero, (edited ) in Stalwart v0.5.0
@eskuero@lemmy.fromshado.ws avatar

This looks nice, even has a clean docker image.

Will check it out. Setting up postfix + dovecot with DMARC and postgres was a funny experience, but it’s starting to slip out of my memory how I did it and I don’t want to have to go through it all again.

ikidd,
@ikidd@lemmy.world avatar

I looked at this; it looks pretty rudimentary compared to something like mailcow-dockerized, which has a full docker stack with ClamAV, Sieve, etc. that you can add Roundcube onto, and which has worked very well for me for years. There are precious few JMAP clients out there, so that’s not much of a consideration really. I’d also rather have rspamd itself than their fork of it, because then I can depend on the original’s documentation; their documentation doesn’t seem very comprehensive by comparison.

Plus, I’d rather have a stack of separate docker containers than a single container that munges it all together, but maybe that’s not a big deal. I like to let Postgres manage the postgres container image and not put another layer in there.

sudneo,

I don’t think it’s you; it’s generally bad practice to have multiple processes inside a container. It usually defeats most of the isolation, introduces problems with handling zombie processes (therefore you need an init) and with restarting tools when they crash (then you need something like supervisord, which I guess this image might use - I didn’t check). Each piece of software adds dependencies, which can conflict (again defeating the idea of containers), and of course CVEs. Then you have problems with users, etc.

So yeah, containers are generally not meant to be used this way. The project might be cool but I would be very uncomfortable running it like this, especially if that’s going to be my primary email, with all the password resetting capabilities etc.
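
For what it’s worth, if a single container really does have to run more than one process, compose at least lets you hand it a proper init so zombies get reaped; a tiny sketch (the image name is a placeholder):

```yaml
services:
  mail:
    image: example/all-in-one-mail:latest   # placeholder image
    init: true   # runs docker's bundled init (tini) as PID 1 to reap orphaned children
```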

eskuero,
@eskuero@lemmy.fromshado.ws avatar

Does it run multiple processes inside the container? It looks like the entrypoint only launches one.

ace,
@ace@lemmy.ananace.dev avatar

Reading the Dockerfile in their repo, it’s simply a clean debian:slim with four compiled Rust binaries placed into it. There are no services, no supervisord, nothing except the mail server binaries themselves.

NarrativeBear, in Those who are self hosting at home, what case are you using? (Looking for recommendations)

Fractal Design Define 7 XL

You can fit 18 HDDs into it plus 5 SSDs at the same time without custom mount points.

Though you do need to buy the extra brackets and trays.

Not my build exactly, but an example.

…unraid.net/…/97612-fractal-design-define-7-build…

Gormadt,
@Gormadt@lemmy.blahaj.zone avatar

18 HDDs into it plus 5 SSDs

Sweating intensifies

That’s a lot of drives, I’ll have to look into that one for sure

NarrativeBear,

I run TrueNAS myself in this case. I have two vdevs of 8 drives each in RAIDZ2.

Both vdevs have an extra spare each. 4 SSDs are used for quick read and write, and the 5th SSD is for OS boot.

JGrffn, (edited )

How’s performance on that setup? I own the case and am looking to do the exact same vdev setup next year, but I’m wondering if the wider vdevs negatively impact performance in any noticeable way. I’m also wondering if 128GB of RAM is too little for that kind of setup with 20TB drives; I feel like I might have to find out the hard way…

NarrativeBear,

Performance is great IMO. I store all my Plex media on this setup as a network share and never have any issues or slowdowns. I only use the setup as a strict NAS, nothing else.

I started with 9 drives at 12TB first; about 3 years later (mid this year) I added the second vdev to my main pool: 9 drives of 20TB each.

Vdevs don’t have to be the same size as each other, but they do need the same number of drives in each.

I have unraid and proxmox setups on other machines running independently. Plex and other software all access my TrueNAS over the network.

For the TrueNAS system, IMO you don’t need much “horsepower”. I run it on a 12-year-old motherboard, 12GB of RAM and a 60GB boot SSD. Nothing special at all. Unraid and proxmox, on the other hand, are where I spend the money on RAM and processing power.

My network is gigabit and I get full speed on network transfers. I’m looking to do 10Gb in the future, but that would require 10Gb NICs in all my PCs and new network switches; I don’t see it affecting my TrueNAS setup. Besides, your network transfer is only as fast as the read/write speed of the drives.

themachine, in Could someone explain how to set up a lemmy instance with ansible for an absolute beginner

You tried what exactly earlier today?

Sheeple, (edited )
@Sheeple@lemmy.world avatar

deleted_by_author

    arudesalad,

    No I didn’t

    WhiteOakBayou,

    Needlessly dismissive for someone who needs help. Yes, he’s probably in over his head, but who hasn’t been?

    Sheeple, (edited )
    @Sheeple@lemmy.world avatar

    …that was meant to be a joke. I had a gut feeling I should have used a tone indicator. My bad

    arudesalad,

    I was following the steps on the Lemmy-ansible github page

    RCTreeFiddy,

    And at which step in this process did you get stuck, and what were the errors, if any?

    You gotta give us some more info here.

    arudesalad, (edited )

    Step 7. I don’t have the errors now, but I don’t think I had Ansible or SSH set up correctly.

    I don’t really understand it, as this is the first thing I’m trying to self-host other than a Minecraft server.

    RCTreeFiddy, (edited )

    SSH may be installed on the Pi but may need to be enabled. That was the second-to-last bullet point in the requirements, the final one being to install Ansible. If you didn’t get the requirements taken care of, the installation will not be successful.

    Please first try to SSH into your Pi. Once you have that done, you should install Ansible. After that, you should be able to run the playbook from step 7 and we can proceed from there.
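
    Once SSH works, Ansible just reuses that same connection; as a rough illustration of how the pieces tie together, a generic inventory entry looks like this (lemmy-ansible ships its own inventory template, so treat this purely as a sketch of the idea, not the exact file the playbook expects):

    ```yaml
    # Generic Ansible inventory sketch (illustrative only; values are placeholders)
    all:
      hosts:
        my-lemmy-pi:
          ansible_host: 1.2.3.4     # the Pi's IP, the same one you SSH to
          ansible_user: pi          # the user SSH is enabled for
          ansible_become: true      # the playbook needs sudo on the target
    ```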

    arudesalad,

    Do I do that from my normal PC? I’ve never used SSH before.

    RCTreeFiddy,

    I’m not trying to be mean, but I think you might be trying to jump straight into the deep end before learning to swim. While the commands have been included in the guide so that you can install this, it really does help to understand what those commands do and what they mean. I suggest first getting to know your Pi a little better, learning how to get SSH going on it, and then moving on to installing Ansible. There’s information on the Raspberry Pi website on how to enable SSH on your Pi.

    arudesalad,

    Alright, thanks for trying to help. Will I need SSH on my main PC to get it to work on my Pi?

    RCTreeFiddy, (edited )

    No, not really. You first enable it on the Raspberry Pi. Then you access the Pi from your normal computer by running this command in your command line or shell: ssh user@1.2.3.4, where ‘user’ is your Raspberry Pi user (pi by default) and ‘1.2.3.4’ is the IP address of the Pi.

    muntedcrocodile,
    @muntedcrocodile@lemmy.world avatar

    Bold of you to assume they’re using Linux as their main PC OS. If they’re using Windows, I believe it doesn’t come with an SSH client.

    PeachMan,
    @PeachMan@lemmy.world avatar

    You can SSH using the command line. I do have a Windows Pro license, but I THINK it’s not exclusive to Pro…

    muntedcrocodile,
    @muntedcrocodile@lemmy.world avatar

    Huh, I vaguely remember needing PuTTY, but I haven’t used Windows in almost 5 years now.

    PeachMan,
    @PeachMan@lemmy.world avatar

    Yeah, I also installed PuTTY a long time ago; I forget if it was actually necessary or if I was just afraid of the command line back then.

    southsamurai,
    @southsamurai@sh.itjust.works avatar

    Yeah, legit, I’ve messed around with this kind of thing before, and I wouldn’t attempt to run lemmy myself. Major pain in the ass.

    RCTreeFiddy,

    It should already be there if it’s Windows or Linux; you just need to enable SSH on the Pi, then you can remote into it by running this from a command line / shell:

    ssh pi@1.2.3.4

    Where ‘pi’ is your user on your pi, and ‘1.2.3.4’ is the IP address or hostname for the pi.

    Just want to add too that installing and hosting something like Lemmy is not really a beginner task. I’m not trying to discourage you, quite the opposite. You should just know this will be a challenging endeavor, but it will be rewarding once you do complete it, and you will learn a lot in the process.

    arudesalad,

    Also in the comment this one is replying to, I meant to say set up correctly

    themachine,

    And what exactly happens?

    arudesalad,

    I’ve replied to a different comment in this thread about what happened already

    themachine,

    Then it should be no issue for you to copy and paste that answer into our conversation.

    PeachMan,
    @PeachMan@lemmy.world avatar

    Lol. It should also be no issue for you to find the comment and read their answer

    CosmicApe,
    @CosmicApe@kbin.social avatar

    They’re asking for quite detailed help with a reasonably difficult project; the least they can do is supply all the info to the people trying to help.

    arudesalad,
    themachine,

    If I’m supposed to be reading that top comment, I don’t see where you state what your results were. You apparently “had errors” but neglected to note any down, and now “you don’t” have errors.

    scrubbles, in First Nas Build
    @scrubbles@poptalk.scrubbles.tech avatar

    Unraid is great, I use it daily. I grew past it in some aspects, but it’s a great starter OS.

    Agree with other commenter. Don’t discount backups. Unraid is not a backup. Plan to lose all of your data someday.

    ptrckstr, in Should I use Restic, Borg, or Kopia for container backups?

    I’m using borgmatic, a wrapper around Borg that adds some extra functionality.

    Very happy with it, does exactly as advertised.
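
    For anyone wanting a starting point, a borgmatic config is just a short YAML file; a minimal sketch for recent borgmatic versions (paths and the repo URL are placeholders, and borgmatic can generate a fully commented config for you):

    ```yaml
    # /etc/borgmatic/config.yaml - minimal sketch; adjust paths and repository
    source_directories:
        - /home
        - /var/lib/docker/volumes

    repositories:
        - path: ssh://user@backup.example.com/./backups.borg
          label: offsite

    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
    ```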

    loganb, in Should I use Restic, Borg, or Kopia for container backups?

    Highly recommend restic. Simple and flexible. Plus I’ve actually used it on two occasions to recover from dead boot drives.

    ponchow8NC, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?

    Curious about the age of the oldest one

    surfrock66,
    @surfrock66@lemmy.world avatar

    I started collecting in probably 2007, so manufactured before that for sure.
