selfhosted


mateomaui, in Noob question about PiHole

I would avoid it, as it may use the alternate instead of the pihole at any time. If you want redundancy, it’s best to have a second pihole.

TCB13, in Planning on setting up Proxmox and moving most services there. Some questions
@TCB13@lemmy.world avatar

It’s 2024; avoid Proxmox and save yourself a LOT of headaches down the line.

You most likely don’t need Proxmox and its pseudo-open-source bullshit. My suggestion is to simply go with Debian 12 + LXD/LXC; it runs VMs and containers very well. Proxmox ships with an old kernel that is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.

What I would suggest you use instead is LXD/Incus.

LXD/Incus provides a management and automation layer that really makes things work smoothly - essentially what Proxmox does but properly done. With Incus you can create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).

Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

I draw your attention to LXC containers (not Docker), because for most people full virtualization isn’t even required. In a small homelab, if you can have containers that behave like full operating systems (minus the kernel), including persistence, VMs might not be required. Either way, LXD/Incus will allow for both and you can easily mix and match, using what you require for each use case.
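
In practice, mixing the two looks roughly like this with LXD’s CLI (instance names made up; with Incus the command is incus and the subcommands are very similar):

    lxc launch images:debian/12 files                  # system container, boots in seconds
    lxc launch images:debian/12 homeassistant --vm     # same command, but a full VM
    lxc exec files -- bash                             # shell into the container
    lxc snapshot files before-upgrade                  # snapshot either kind the same way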

E.g. I virtualize the official HomeAssistant image with LXD because we all know how hard it is to get that thing running, however my NAS / Samba shares are just an LXD Debian 12 container with Samba4, Nginx and FileBrowser. Same goes for my torrent client, which has its own container. Some other service I’ve exposed to the internet also runs in a full VM for isolation.

Like Proxmox, LXD/Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it is about augmenting them so they become easier to manage at scale and overall more efficient. I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no BS licenses and way less overhead.

Yes, there’s a WebUI for LXD as well!

https://lemmy.world/pictrs/image/9caa6ea8-17b1-48f6-a8c2-ff3f606f3482.png
https://lemmy.world/pictrs/image/a5a110b2-ed6f-431f-a767-0a21fb337a6b.png

MangoPenguin,
@MangoPenguin@lemmy.blahaj.zone avatar

How well does it handle backups, and are they deduplicated incremental ones like the ones Proxmox Backup Server makes?

TCB13,
@TCB13@lemmy.world avatar

I do regular snapshots of my containers live and sometimes restore them, no issues there. De-duplication and incremental features are (mostly) provided by the storage backend: if you use BTRFS or ZFS for your storage pool, every container will be a volume that you can snapshot, roll back and export at any time. LXD also provides tools for those operations: documentation.ubuntu.com/lxd/…/instances_backup/
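
For example, with LXD’s CLI (container name made up; a second LXD server would need to be added as a remote first):

    lxc snapshot mycontainer nightly                     # instant snapshot (ZFS/BTRFS snapshot underneath)
    lxc restore mycontainer nightly                      # roll back to it
    lxc export mycontainer /backups/mycontainer.tar.gz   # full exportable backup you can copy off-box
    lxc copy mycontainer othernode:mycontainer           # or push the whole instance to another server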

MangoPenguin,
@MangoPenguin@lemmy.blahaj.zone avatar

That makes sense, but no remote backups over the network? Local snapshots I don’t really count as backups.

lazynooblet,
@lazynooblet@lazysoci.al avatar

Can someone explain the benefits of LXD without the opinionated crap?

TCB13, (edited )
@TCB13@lemmy.world avatar

create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).

provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

What else do you need?

possiblylinux127, (edited )

Your comment is wrong in a few ways and suggests using LXC, which is way slower than Docker or Podman and lacks the easy setup.

Proxmox is good because it makes it easy to create VMs and set up least-privilege access. It also ships as new a kernel as stable Debian, so no, it’s not terribly out of date.

If you want to suggest that someone install Debian + Docker Compose, that would make more sense. This isn’t a good fit for more advanced setups, though, and it doesn’t allow for a lot of flexibility.

TCB13,
@TCB13@lemmy.world avatar

This was a discussion about management solutions such as Proxmox and LXD, NOT about containerization technologies like Docker or LXC. Also, Proxmox uses the Proxmox VE kernel, which is derived from Ubuntu’s.

Your comment makes no sense whatsoever. I’m not even sure you know the difference between LXD and LXC…

node815,

Since you didn’t include a link to the source for your recommendation:

github.com/canonical/lxd

I’ve been on Proxmox for 6 or so months with very few issues and have found it to work well for my use case, but I do appreciate seeing another alternative and learning about it too! I very specifically like Proxmox as it gives me an actual IP on my router’s subnet for my machines, such as Home Assistant. So instead of the 192.168.122.1 it gets a nice 192.168.1.X/24 IP which fits my range, making it easier for me to direct my outside traffic to it. Does this also do that? Based on your screenshots, maybe not, IDK.

TCB13,
@TCB13@lemmy.world avatar

it gives me an actual IP on my router’s subnet for my machines

Yes, you configure LXD/Incus’ networking to use a bridge and it will simply delegate the task to your router instead of providing IPs itself. One of my nodes actually runs the two setups at the same time: I’ve got a bunch of containers on an internal range and then my Home Assistant VM getting an IP from my router.
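
The bridged part is just a profile, roughly like this (assuming the host already has a br0 bridge on the LAN):

    lxc profile create lan
    lxc profile device add lan eth0 nic nictype=bridged parent=br0 name=eth0
    lxc launch images:debian/12 homeassistant --vm -p default -p lan
    # the instance now gets its address straight from the router, e.g. 192.168.1.x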

jgkawell,
@jgkawell@lemmy.world avatar

Thanks for the link! I’ve been running Proxmox for years now without any of the issues like the previous commenter mentioned. Not that they don’t exist, just that I haven’t hit them. I really like Proxmox but love hearing about alternatives. One day I might get bored and want to set things up new with a different stack and anything that’s more free/open is better in my book.

Assman, in Have you tried LocalGPT PrivateGPT or other similar alternatives to ChatGPT?
@Assman@sh.itjust.works avatar

deleted_by_author

  • SoleInvictus, (edited )
    @SoleInvictus@lemmy.world avatar

    It’s good for me because I’m piss-poor at programming. In my defense, I’m not a programmer or even programmer-adjacent. I do see how it wouldn’t be useful to a pro. It also has occasionally given me garbage advice that an expert would spot right away, while I had to figure out on my own that it was ‘hallucinating’ again. There’s nothing better for learning than troubleshooting, though!

    bogo,

    I can absolutely see it being useful for a pro. It’s already a better version of IDE templates. If you have to write boilerplate code, this can already do that. It’s a huge time saver for the things you’d otherwise have to go look up and piece together yourself.

    Example: today I wanted a quick way to serve my current working directory over HTTP so I could do some quick web work. I asked ChatGPT to write me a bash function I could stick in my profile to do this, and I told it to pick a random unused port. That would have taken me much longer had I gone to look up how to do all of that. The only hint I gave it was to use the Python built-in module for serving HTTP.
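
    Something along these lines works (my own reconstruction, not the exact function ChatGPT produced):

        # serve the current directory over HTTP on a random free port
        serve() {
            local port
            # ask the OS for a free port by binding to port 0
            port=$(python3 -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
            echo "Serving $PWD on http://localhost:${port}"
            python3 -m http.server "${port}"
        }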

    exu,

    I’ve found it’s pretty good for translating between languages, so to speak.

    Converted some bash to python relatively quickly by giving it snippets and fixing errors as it made them.

    I also had success generating an ansible playbook based on my own previously written install instructions for SillyTavern and llama.cpp.

    I could do both of those tasks myself, but that would be more difficult than having a mostly correct translation and fixing some errors.

    scarilog,

    There’s a project called Tabby that you can host as a server on a machine that has a GPU, and it has a VSCode extension that connects to the server.

    The default model is called StarCoder, and it’s the small version, 1B parameters. The downside is that it’s not super smart (but still an improvement over built-in tools), but since it’s such a small model, I’m getting sub-second processing times.
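
    The server itself is a single Docker container, roughly like this (model name and flags from memory, check the Tabby README for the current ones):

        docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
            tabbyml/tabby serve --model StarCoder-1B --device cuda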

    thisfro, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

    I’ve had Nextcloud running for nearly 5 years and it has never failed once. The only downtime is when the backup fails and somehow maintenance mode is still enabled (technically not a crash).

    For those interested: running in Docker with MariaDB in a stack, checking for updates with Watchtower every day and pulling from stable, backups with borg(matic).

    sv1sjp,
    @sv1sjp@lemmy.world avatar

    ++same

    Docker:nextcloud+mariadb+caddy

    UndercoverUlrikHD, in Comparing compression in AV1, x264, and x265
    @UndercoverUlrikHD@programming.dev avatar

    Feels like certain information is missing. You get very different results, both in encoding time and file size, depending on what preset you use.

    CRF values also can’t be translated 1:1 between codecs, so comparing e.g. h265 CRF 21 to h264 CRF 21 doesn’t mean much.
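
    For concreteness, the knobs in question look like this with ffmpeg (filenames made up; the same CRF number means a different quality target to each encoder):

        ffmpeg -i input.mkv -c:v libx264   -preset slow -crf 21 -c:a copy out_x264.mkv
        ffmpeg -i input.mkv -c:v libx265   -preset slow -crf 21 -c:a copy out_x265.mkv
        ffmpeg -i input.mkv -c:v libsvtav1 -preset 6    -crf 21 -c:a copy out_av1.mkv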

    DaGeek247,
    @DaGeek247@kbin.social avatar

    I consider the 'good enough' level to be: if I didn't pixel peep, I couldn't tell the difference. The visually lossless levels were the first CRF levels where I couldn't tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which say that the quality-loss levels are all the same at a pixel level.

    I know that av1, x264, and x265 all have different ways of compressing video. Obviously, the whole point of this was to get a better idea of what that actually looked like. Everything in the visually lossless section is completely indistinguishable to my eyes, and everything in the good enough section has very minor bits of compression only noticed when I'm looking for it in a still image. Comparing and contrasting that doesn't require using the same codec.

    Frankly, for anything other than real-time encoding, I don't actually consider encoding time to be a huge deal. None of my encodes were slower than 3fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyway.

    UndercoverUlrikHD,
    @UndercoverUlrikHD@programming.dev avatar

    Frankly, for anything other than real-time encoding, I don’t actually consider encoding time to be a huge deal. None of my encodes were slower than 3fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyway.

    Encoding speed heavily depends on your preset. Veryslow will give you better compression than medium or fast, but at a heavy expense of encoding speed. You’re not gonna re-encode a movie overnight on the slow preset. GPU encoding will also give you worse results than a CPU encode, so that’s something one would have to take into consideration. It’s not a big deal when you’re streaming, but if it’s for video files, I’d much prefer using the CPU.

    I consider the ‘good enough’ level to be: if I didn’t pixel peep, I couldn’t tell the difference. The visually lossless levels were the first CRF levels where I couldn’t tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which say that the quality-loss levels are all the same at a pixel level.

    I was mostly talking about how you organised your table by using CRF values as the rows. It implies that one should compare the results in each row, but that wouldn’t be a comparison that makes much sense. E.g. looking at row “24” one might think that AV1 is less effective than h264/h265 due to the greater file size, but the video quality is vastly different. A more informative way to present the data might have been to organise each row by its VMAF score.

    Hopefully I don’t come across as too cross or argumentative, just want to give some feedback on how to present the data in a clearer way for people who aren’t familiar with how encoding works.
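
    For reference, VMAF scores like the ones mentioned above can be computed with ffmpeg’s libvmaf filter (assuming your ffmpeg build includes it; distorted input first, reference second):

        ffmpeg -i encoded.mkv -i reference.mkv -lavfi libvmaf -f null -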

    GekkoState,

    Why is GPU encoding worse than CPU encoding?

    glizzyguzzler,
    @glizzyguzzler@lemmy.blahaj.zone avatar

    GPU encoding uses (relatively) simpler fixed-function encoders that do it much faster than the CPU, which uses its general-purpose transistors to run an encoding algorithm. The end result is that GPU encoding is speedy at the cost of visual quality per bitrate; the file size is bigger for the same visual quality as a CPU encode. Importantly for storing your videos: CPU encoding, while much slower, will get your file size smaller at the same visual quality threshold you desire, so you can save more videos per drive!
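
    As a rough illustration (flags are indicative, not tuned): a hardware HEVC encode via NVENC versus a software x265 encode of the same file; the first finishes far faster but needs more bitrate for the same look.

        ffmpeg -i input.mkv -c:v hevc_nvenc -preset p7 -rc vbr -cq 24 -b:v 0 -c:a copy out_gpu.mkv   # fixed-function encoder on the GPU
        ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 24 -c:a copy out_cpu.mkv                  # software encoder on the CPU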

    Moonrise2473, (edited ) in Started to move off Google (not strictly self-hosted)

    I moved off to zoho

    Much cheaper than proton and offers much more.

    They’re not doing what Proton does, locking down basic stuff like IMAP and SMTP as a way to force you onto the official apps.

    I especially love the feature where you can bounce emails based on domains, keywords or TLDs. My spam folder is finally empty. IMHO bounce back spam is much better, as the spammers get a response that the address is invalid and hopefully stop wasting their limited computing resources on that address.

    Zoho is not open source, but Proton is “fake” open source that is mostly used for marketing: they opened only the UI, which communicates via a proprietary protocol with a proprietary server - useless. They also reject or ignore any pull requests on GitHub.

    AcornCarnage,
    @AcornCarnage@lemmy.world avatar

    What Zoho plan are you using? I can’t quite tell what the difference between the free and lite tiers is except for IMAP/POP support.

    I moved over to Proton earlier this year and have had a good experience so far, but I’m not married to it or anything.

    Moonrise2473,

    I started with Mail Basic (10 euros yearly for 10 GB), but then, because I switched from “secondary email that forwards to Gmail” to “primary email that imports from Gmail”, I had to move to the more expensive plan.

    lemmyvore,

    Proton has been gradually closing down access to anything but their proprietary apps. After they’re done you won’t be able to take your email anywhere else.

    If you have your own domain you’ll be able to host it elsewhere, but you would leave behind email, calendar, aliases etc. and be restarting from scratch.

    At that point “encrypted” starts smelling more like “hostage”. It’s generally a bad idea to be tied down to a specific email provider.

    You could wake up tomorrow to find out Proton has been acquired and the new owners can charge anything they want for continued service.

    AcornCarnage,
    @AcornCarnage@lemmy.world avatar

    I mean, that’s going to be a risk you take with any hosted service. I currently self-host my contacts and calendar, but I have no interest in hosting my own email again.

    lemmyvore,

    I don’t self-host my email either. I got my registrar, DNS and email separate from each other, so if any of them goes bad I can switch with minimal fuss.

    But that makes it all the more important to be able to download all your mail from your provider.

    Proton currently has two proprietary things you can use to download your mail: a “bridge” PC app that pretends to speak IMAP, and a download tool. The bridge will be discontinued after they launch their proprietary PC mail app, so that leaves just the proprietary download tool, which only does .eml format.

    AcornCarnage,
    @AcornCarnage@lemmy.world avatar

    Okay, I’m following. So who would you recommend as an email provider?

    lemmyvore,

    That’s a very broad question that depends a lot on your usage. My needs may be different from yours.

    I’m currently using Migadu because:

    • Unlimited domains, mailboxes, accounts and aliases for a flat fee. I’m managing accounts for myself as well as family and extended family members and it comes out much cheaper this way than services that ask $5-10/account.
    • Very nice management interface with all the bells and whistles but with reasonable defaults and easy to use.
    • The company is based in Switzerland and the mail hosted in EU (France).
    • Standard email service with everything you’d expect (the regular protocols, spam protection, webmail, full compatibility with clients etc.)
    Atemu,
    @Atemu@lemmy.ml avatar

    They’re not doing what Proton does, locking down basic stuff like IMAP and SMTP as a way to force you onto the official apps

    The reason Proton cannot do IMAP/SMTP is that both require the server to be able to read your emails, which Proton can’t. That’s a feature, not a bug.

    PM works with any app as long as the app implements their custom protocol, for which there are at least two FOSS implementations to serve as a reference.

    Proton is “fake” open source that is mostly used for marketing: they opened only the UI, which communicates via a proprietary protocol with a proprietary server - useless

    While I’d also prefer their back-end to be OSS, it’s not nearly as critical as the clients.
    As a user, it doesn’t make a difference. I’m paying for an opaque service either way.

    All the interesting stuff (E2EE, zero-access storage) happens in the clients anyway. The BE is fairly uninteresting; it’s a mail server + zero-access encryption + Proton account handling. If you really wanted to build a mail service similar to Proton, you could build that yourself and probably would have to anyway.

    Moonrise2473, (edited )

    I think the opposite, actually. The backend is the real interesting part, and the only way we could be sure that “they cannot read the emails” (emails arrive in the clear, are saved with reversible encryption, and they hold a key for it - if you use their services to commit crimes they will cooperate with law enforcement agencies like everyone else).

    IMAP/SMTP could be offered behind a toggle with a warning, if that’s really their concern. As of now I have the feeling it’s instead blocked to keep users inside (no IMAP = no easy migration to somewhere else) or to limit usage (no SMTP = no sending mass email).

    Atemu,
    @Atemu@lemmy.ml avatar

    The backend is the real interesting part, and the only way we could be sure that “they cannot read the emails”

    While I’d still prefer it, OSS can’t really help with that because what’s really required here is remote attestation.
    That is an unsolved problem to my knowledge; there is no way to know which software they’re actually running. Even if they published the source code, they could trivially apply a patch in their deployment that stores all incoming email somewhere and you’d be none the wiser.

    Even if they published source code and could somehow prove to you that they’re running a version derived from it, you would still not be safe from surveillance as one could simply MITM all connections. See i.e. notes.valdikss.org.ru/jabber.ru-mitm/.

    That’s likely one of the reasons they do everything they can to make PGP accessible to every user.

    imap/smtp can be toggled with a warning, if that’s really their concern

    It’s plain and simply not how their service works. They’d have to build most of their service a second time but unencrypted.

    It’s like asking Signal to build in support for IRC; it simply does not make sense for them to do that, no malicious intent needed.

    no IMAP = no easy migration to somewhere else

    You have IMAP access via the bridge. That’s what it’s for.

    ikidd,
    @ikidd@lemmy.world avatar

    Zoho and PM have two entirely different reasons for existing. If you don’t want E2EE (assuming the other sender is on PM), then by all means use Zoho. And IMAP isn’t E2EE-compatible in the slightest; what they’re charging for is the decryption bridge that makes it work with an IMAP client. They had to come up with that, it’s not just a switch you flip at PM’s end that makes IMAP work.

    clmbmb, in Anybody Using Nebula?

    What is nebula?

    goatsarah,

    @clmbmb @brownmustardminion Looks like some sort of Tailscale clone.

    Sanyanov,

    But it’s also self-hosted (you run the central server, i.e. the “lighthouse”, yourself) and open source.

    EncryptKeeper,

    Given that Nebula is older than Tailscale, and was inspired by tinc, it’d be more accurate to say that Tailscale is the clone.

    PeachMan,
    @PeachMan@lemmy.world avatar

    I wouldn’t call it a clone; Tailscale didn’t invent mesh VPNs. I believe Nebula is fully self-hosted, while Tailscale makes initial connections through their servers. That means Nebula is more secure and private if you’re paranoid, but also harder to set up. They’re also based on different VPN protocols.

    Tailscale actually published a surprisingly unbiased comparison: tailscale.com/compare/nebula
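
    Harder, but not by a huge margin; the bootstrap looks roughly like this (per the slackhq/nebula quickstart, names and IPs made up):

        nebula-cert ca -name "My Homelab"
        nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
        nebula-cert sign -name "laptop" -ip "192.168.100.5/24" -groups "home"
        nebula -config /etc/nebula/config.yml    # run on each node, lighthouse included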

    daed,

    Should probably be pointed out (and I assume the tailscale link does), but Tailscale offers a fully self-hosted option called Headscale also

    pluja, (edited )

    Tailscale does not offer this. It is a community project. Headscale is not official.

    daed,

    My mistake! I saw it referenced on the official site and assumed.

    Lem453, (edited ) in Should I use Restic, Borg, or Kopia for container backups?

    Borg (specifically borgmatic) has been working very well for me. I run it on my main server, and on my NAS I have a Borg server Docker container as the repository location.

    I also have another repository location on my friend’s NAS. Super easy to set up multiple targets for the same data.

    I will probably also set up a BorgBase account for yet another backup.

    What I liked a lot here was how easy it is to set up automatic backups, retention policies and multiple backup locations.

    Open source was a requirement so you can never get locked out of your data. Self-hosted. Finally, the ability to mount the backup as a volume/drive: if I want a specific file, I mount that snapshot and just copy that one file over.
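
    Day to day it looks roughly like this with the plain borg CLI (repo paths and archive names made up; borgmatic wraps the same operations in a config file):

        borg init --encryption=repokey ssh://borg@nas/./backups/main                  # one-time repo setup on the NAS
        borg create --stats ssh://borg@nas/./backups/main::'{hostname}-{now}' /srv/data
        borg mount ssh://borg@nas/./backups/main::myserver-2024-01-02 /mnt/restore    # browse a snapshot, copy one file out
        borg umount /mnt/restore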

    ShortN0te, in SquareSpace dropping the ball.

    Change your nameservers to Cloudflare or something, then use their API to set up DDNS yourself, dynamically updating the DNS entries.
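
    A minimal sketch of that, using Cloudflare’s v4 API (token, zone/record IDs and hostname are placeholders you look up once; run it from cron):

        #!/usr/bin/env sh
        TOKEN="your_cloudflare_api_token"
        ZONE_ID="your_zone_id"
        RECORD_ID="your_record_id"
        IP="$(curl -s https://api.ipify.org)"    # current public IP
        curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
            -H "Authorization: Bearer ${TOKEN}" \
            -H "Content-Type: application/json" \
            --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":120,\"proxied\":false}"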

    Darkassassin07, (edited )
    @Darkassassin07@lemmy.ca avatar

    I’m an idiot.

    I already do this. The swap to Squarespace won’t actually affect me.

    🤦

    spez_, in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

    I have 1 RPI 4 (8GB RAM) running:

    • OpenMediaVault
    • Transmission
    • ArchiveBox & LinkWarden (testing between the two)
    • Gitea
    • Audiobookshelf
    • FileBrowser
    • Vaultwarden
    • Jellyfin
    • Atuin
    • Joplin
    • Paperless-NGX
    • Immich

    On another RPI (4GB) I have Home Assistant

    cashews_best_nut,

    Your Pi runs all that?! I’ve set up Home Assistant on a Tinker Board and it’s slow as shit with nothing else running. :(

    owenfromcanada,
    @owenfromcanada@lemmy.world avatar

    Not sure what kind of Tinker Board you’re working with, but the power of Pis has increased dramatically across generations. There are tasks that would run slowly on a dedicated Pi 2 that run easily in parallel with a half dozen other things on a Pi 4.

    The older ones can still be useful, just for less intensive tasks.

    RootBeerGuy,
    @RootBeerGuy@discuss.tchncs.de avatar

    Out of interest, from someone with an RPi 4 and Immich: did you deactivate the machine learning? I did, since I was worried it would be too much for the Pi; just curious to hear if it’s doable after all.

    Matt, in What is your prefered way to get audiobooks/podcasts/ebooks for your audiobookshelf?

    I use Downpour for audiobooks. It is similar to Audible in that audiobooks can be purchased individually, or there is a subscription that provides credits for purchasing audiobooks. The audiobooks are DRM-free and can be downloaded. I have not found a way to automate the download and transfer to my Audiobookshelf server, but I don’t mind doing it manually considering I average around two or three audiobooks a month.

    wreckedcarzz, in Plex To Launch a Store For Movies and TV Shows
    @wreckedcarzz@lemmy.world avatar

    The Slashdot post points to this article www.theverge.com/…/plex-store-movies-tv-shows

    Clubbing4198, in Haier hits Home Assistant plugin dev with takedown notice

    Just downloaded the zips. Fuk haier

    atzanteol, in I broke nextcloud and i cant fix it

    Have you tried reading the docs?

    docs.nextcloud.com/…/reset_admin_password.html
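
    If it’s the admin password that’s broken and Nextcloud runs in Docker, the short version is (container name and admin user are assumptions, adjust to yours):

        docker exec -u www-data nextcloud php occ user:resetpassword admin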

    Gooey0210, in Pi-Hole or something else for network ad blocking?

    AdGuard Home is way better than Pi-hole IMO

    dan,
    @dan@upvote.au avatar

    Plus it’s easy to run multiple AdGuard Home servers and keep them in sync using github.com/bakito/adguardhome-sync
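
    Roughly like this with Docker (environment variable names from memory, check the project README for the exact ones):

        docker run -d --name adguardhome-sync \
            -e ORIGIN_URL=http://192.168.1.2:3000 \
            -e ORIGIN_USERNAME=admin -e ORIGIN_PASSWORD=changeme \
            -e REPLICA1_URL=http://192.168.1.3:3000 \
            -e REPLICA1_USERNAME=admin -e REPLICA1_PASSWORD=changeme \
            -e CRON="*/10 * * * *" \
            ghcr.io/bakito/adguardhome-sync:latest run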

    Gooey0210,

    Oh, oh, oh, gimme that!!

    First time I’ve heard of something like that; I’m going to install it ASAP.

    dan,
    @dan@upvote.au avatar

    It works well! I have one AdGuardHome instance running on my home server and one running on a Raspberry Pi, both using Docker. Having two prevents the internet from breaking in case I have to shut down one of them for some reason.

    Guajojo,

    Pi-hole user for more than 5 years, can confirm that it is indeed better; made the switch a few months ago.

    EncryptKeeper,

    As an AdGuard home user for more than a few years, I switched back to Pihole because it wasn’t really any better. It was also easier to pair pihole with Unbound.

    DreadPotato,
    @DreadPotato@sopuli.xyz avatar

    What makes adguard home better than pihole? Genuinely curious, I’m running pihole now and have been for a couple of years without issues.

    Gooey0210,