selfhosted


butitsnotme, in worth selfhosting immich or similar? what about backups?

I back up to an external hard disk that I keep in a fireproof, water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with borg, all into one repository. The backup is triggered by a udev rule, so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it is finished so I can put the drive back in the safe. I can share the script later, if anyone is interested.
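In the meantime, a minimal sketch of the udev half (the UUID, unit name, and paths here are placeholders, not my actual setup). udev kills long-running RUN+= children, so the rule should only kick off a systemd unit rather than run the backup itself:

    # /etc/udev/rules.d/99-backup.rules (hypothetical example)
    # Match the backup drive by filesystem UUID, then hand off to systemd;
    # --no-block returns immediately so udev doesn't reap the job.
    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
      RUN+="/usr/bin/systemctl start --no-block backup.service"

backup.service then mounts the drive, runs the LVM snapshot and borg steps, and finishes with a curl to the local ntfy.sh instance to send the notification.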

roofuskit,
@roofuskit@lemmy.world avatar

Fireproof safes only keep the interior below the temperature at which paper combusts. In a fire, the inside of a regular fireproof safe will probably still get hot enough to destroy a drive.

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

I am super curious about the udev triggering, didn’t know that’s possible!

StrawberryPigtails,

Please! That sounds like a slick setup.

Nilz,

This sounds really interesting, please share.

governorkeagan,

I would love to see your script! I’m in desperate need of a better backup strategy for my video projects

PainInTheAES, in Pi-Hole or something else for network ad blocking?

AdGuard Home and blocky are other popular options. I switched over to AdGuard Home a while back because it supported DNS over HTTPS, although I’m not sure if that’s still a distinguishing reason. I run AGH as a Docker container, but it is easy to run in an LXC or VM. There’s also a tool to sync configs if you need multiple instances. Notice: AGH block lists are formatted like uBlock Origin lists, so you will not be able to use Pi-hole-style lists.
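For reference, a minimal compose sketch for AGH might look like this (adguard/adguardhome is the official image; the ports and volume paths are just examples):

    # docker-compose.yml -- hypothetical minimal AdGuard Home setup
    services:
      adguardhome:
        image: adguard/adguardhome
        restart: unless-stopped
        ports:
          - "53:53/tcp"   # DNS
          - "53:53/udp"
          - "3000:3000"   # first-run setup wizard
        volumes:
          - ./work:/opt/adguardhome/work
          - ./conf:/opt/adguardhome/conf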

DNS-based ad blockers won’t work when ads are served from the same place as the content, which is why they don’t work against Twitch or YouTube. So YMMV.

If you’re looking to block interface ads and select streaming-service ads, there are block lists available, like this one. The catch with smart TVs is that blocking the ads breaks the TV a little, because it sometimes calls back to the same servers for updates and misc info like the weather.

shalva97, in Why docker

Life is too short to install everything on baremetal.

purplemonkeymad,

For real, at the minimum use a virtual machine.

spookedbyroaches,

Use lxc/lxd to get all of the performance benefits of docker and all the freedom of a vm

umbrella, in Why docker
@umbrella@lemmy.ml avatar

people are rebuffing the criticism already.

heres the main advantage imo:

no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.

docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.

much easier to maintain once you get the hang of things.
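for example, updating a compose-managed service usually comes down to (a sketch, assuming docker compose):

    docker compose pull    # grab newer images
    docker compose up -d   # recreate only the containers that changed
    docker image prune     # optionally clean up old image layers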

million,
@million@lemmy.world avatar

Quick addition: for the messiness argument, the way I would articulate it for folks running servers is that it helps you move from pets to cattle.

bjoern_tantau, in Nextcloud zero day security
@bjoern_tantau@swg-empire.de avatar

For protection against ransomware you need backups. Ideally ones that are append-only where the history is preserved.

thisisawayoflife,

Good call. I do some backups now, but I should formalize that process. Any recommendations on self-hosted packages that can handle the append-only functionality?

bjoern_tantau,
@bjoern_tantau@swg-empire.de avatar

No, I’d actually be interested in that myself. I currently just rsync to another server.

baccaratrevivify,
@baccaratrevivify@lemmy.world avatar

Borg backup has an append-only mode.
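A sketch of one common way to enforce it, per SSH key on the machine holding the repository (the key and repo path are placeholders):

    # ~/.ssh/authorized_keys on the backup host (hypothetical example)
    # A compromised client can add backups but not delete history.
    command="borg serve --append-only --restrict-to-repository /srv/borg/repo",restrict ssh-ed25519 AAAA... client@host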

Rootiest, (edited )
@Rootiest@lemmy.world avatar

I use and love Kopia for all my backups: local, LAN, and cloud.

Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.

Kopia uses content-addressable storage for snapshots, which has many benefits:

Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.

Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).

After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.

Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.

There’s a ton of other great features but that’s most relevant to what you asked.
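In day-to-day use that boils down to a handful of commands; a hypothetical sketch (the paths are examples):

    # create an encrypted repository on local/removable storage
    kopia repository create filesystem --path /mnt/backup/kopia-repo

    # snapshot a directory; dedup and incremental upload are automatic
    kopia snapshot create /home/user/documents

    # list the historical point-in-time snapshots
    kopia snapshot list /home/user/documents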

tuhriel,

Restic can do append-only when you use their rest server (easily deployed in a docker container)
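A sketch of that deployment (the official image takes extra flags via the OPTIONS variable; the port and path are examples):

    # run restic's rest-server in append-only mode (hypothetical example)
    docker run -d --name rest-server \
      -p 8000:8000 \
      -v /srv/restic-data:/data \
      -e OPTIONS="--append-only" \
      restic/rest-server

Clients then back up to a repository URL like rest:http://backup-host:8000/myrepo.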

patchexempt,

I’ve used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies, so I can easily manage (per storage pool) how long deleted/changed files should be retained. Works well.

Also, once you get something set up, make sure to test-run a restore! A backup solution is only good if you make sure it works :)
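A sketch of both halves, assuming a crypt remote named b2-crypt layered over a B2 remote (the remote names and paths are placeholders):

    # push: files are encrypted locally, then uploaded to B2
    rclone sync /srv/photos b2-crypt:photos --progress

    # test restore: pull a file back and make sure it's intact
    rclone copy b2-crypt:photos/2023/img0001.jpg /tmp/restore-test/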

thisisawayoflife,

As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups otherwise it’s an exercise in futility.

PerogiBoi, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@PerogiBoi@lemmy.ca avatar

No but now I know what to do with my old hard drive that failed :)

something15525, in SquareSpace dropping the ball.

porkbun.com is what I’m planning on switching to!

azl,

Just want to clarify - after looking at Porkbun’s DNS offerings, it does not appear they do DDNS either. Is that correct? So they are not any better than SquareSpace for that service. Porkbun does have an API interface.

It looks like Namecheap has DDNS support (at least I get valid-looking results when I search for that on their website).

I haven’t changed registrars in 10+ years. I am in the same boat re: Google -> SquareSpace. Is DDNS deprecated in favor of APIs across the board? It looks more complicated to set up.

i_am_not_a_robot,

You don’t actually need DDNS. If your provider has an API you can update your addresses using the API. kb.porkbun.com/…/190-getting-started-with-the-por…
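As a rough sketch of what that looks like with curl (endpoint shapes per Porkbun’s v3 JSON API docs; the keys, domain, and subdomain are placeholders):

    # ask the API for our current public IP
    IP=$(curl -s -H "Content-Type: application/json" \
      https://api.porkbun.com/api/json/v3/ping \
      -d '{"apikey":"pk1_xxx","secretapikey":"sk1_xxx"}' | jq -r .yourIp)

    # point the A record for home.example.com at that IP
    curl -s -H "Content-Type: application/json" \
      "https://api.porkbun.com/api/json/v3/dns/editByNameType/example.com/A/home" \
      -d "{\"apikey\":\"pk1_xxx\",\"secretapikey\":\"sk1_xxx\",\"content\":\"$IP\"}"

Run that from cron and you’ve effectively rebuilt DDNS.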

darkfarmer,

I use porkbun and ddclient (ddclient.net). Not sure if that helps you or not but there it is

FrostyCaveman, (edited )

Pro tip: If you use Porkbun, don’t leave your domain’s authoritative DNS with Porkbun nameservers.

Over the year or so that I had my stuff configured this way, on at least one occasion (that I know about… I was still setting up my observability stack during that year) the nameservers were flapping hard for over a day, causing my records to intermittently vanish from existence.

I tried contacting them every way I could, hell I even descended into the quagmire of Twitter and created an account so I could tweet at them… and got silence.

Pretty disappointing. I ended up moving all my DNS to AWS Route 53 after a few hours of pulling out my hair. They did eventually respond to my email like a day later, after I’d already moved everything over.

But idk maybe I’m wrong expecting an indie domain registrar to have super high availability on their nameservers… oh well

fenndev, in Joplin alternative needed
@fenndev@leminal.space avatar

Have you looked into either Obsidian or Logseq?

Obsidian is not open source, but uses Markdown for notes just like Logseq. Very popular overall.

krash,

I second Obsidian. I was on the verge of jumping to Logseq, but found its way of handling notes to be… different. I also disliked Anytype, where I don’t really have control over my notes. Obsidian clicked with me from the start and felt right. So I went with it, even though it’s not FOSS (which is usually a hard requirement for me).

helenslunch,
@helenslunch@feddit.nl avatar

How do you self-host Obsidian?

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

That’s the neat thing about it… you don’t.

But jokes aside, the self-hosting part is mostly about syncing the notes. You either go with the official offering (not self-hosted, costs money), use a community plug-in (self-hosted), or use a third-party program like Syncthing (self-hosted).

bbuez,

Syncthing is the way. I had tried setting it up on Nextcloud but never could get it to store things how I wanted, but Syncthing was ridiculously easy and should work for anything that uses a folder.

helenslunch,
@helenslunch@feddit.nl avatar

I’ll have to look into that. It doesn’t work like Joplin where I can just connect it to the same remote backup within the app, across devices?

bbuez,

There is a plugin for Obsidian to work with Syncthing, but it seems to still be in development. Setting it up through the Syncthing app and selecting the folders also gave me a reason to sync my camera as well, and was super easy: no port forwarding or anything required.

Opisek,

I also switched from Joplin to Obsidian after about half a year. There’s an open-source plugin that lets you self-host a syncing server.

What I found paradoxical is how easy it is to mod and write plugins for Obsidian compared to Joplin. I would’ve thought that modifying the open-source candidate would’ve been easier, but nope.
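(Presumably the plugin meant here is Self-hosted LiveSync, which syncs vaults through a CouchDB instance; a minimal sketch of that backend, with placeholder credentials:)

    # docker-compose.yml -- hypothetical CouchDB backend for the
    # Self-hosted LiveSync community plugin
    services:
      couchdb:
        image: couchdb:3
        restart: unless-stopped
        environment:
          - COUCHDB_USER=obsidian
          - COUCHDB_PASSWORD=change-me
        ports:
          - "5984:5984"
        volumes:
          - ./data:/opt/couchdb/data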

jaykay,
@jaykay@lemmy.zip avatar

Yeah, I’ve been on Obsidian before, but self-hosted syncing on iOS is a bit finicky.

I’ve heard good things about Logseq, but it’s certainly a waaay different approach to notes. I’ll have to read more about it. Thanks :)

ikidd,
@ikidd@lemmy.world avatar

Literally every note app uses markdown. I’m not sure why people point at that for Obsidian like it’s a unique feature.

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

It is not a unique feature. But as a non-FOSS program, its notes are not hidden behind a proprietary file format, so any time you want you can still switch if they go in a direction the user does not like.

Father_Redbeard,
@Father_Redbeard@lemmy.ml avatar

Not every one stores the files as plain-text Markdown like Obsidian does. Logseq does, I believe, but Joplin stores it all in database files which require an export should you decide to leave that app in favor of another. With Obsidian you just point the new app at the folders full of .md files and away you go. That was the main selling point for me.

ikidd, (edited )
@ikidd@lemmy.world avatar

I don’t know where you’re getting that from. Here is my Joplin folder on my NC server, stuffed with md files from my notes. There are some database driven references in them if you do things like add pictures, and obviously the filename is a UID format, but it’s markdown all the way, baby.

Father_Redbeard,
@Father_Redbeard@lemmy.ml avatar

Have you looked at the contents of those md files? In addition to creating its own hexadecimal file name, it appends a bunch of metadata to the text. If you were to take that folder of notes to any other Markdown editor like Obsidian, it would be a mess to organize. That is why I’m a stickler for file-format agnosticism: no vendor lock-in and, more importantly, no manipulation of the filenames or contents.

Screenshot of my phone copy of the Obsidian vault directory as an example: [image: Obsidian md]

Crow, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.
@Crow@lemmy.world avatar

What do I use this for? Do I install it on my NAS or my gaming pc?

My best guess is that this is self-hosted network storage for games, and other computers run the games from there? Or do they download the games from there? Is it a way to store game saves? Does it have any use for emulators like yuzu?

Sorry for all the questions, I’m only asking because the software looks really interesting but I just can’t figure out its uses.

chandz05,

Seems like the intro clears some things up: gamevau.lt/docs/intro. It looks like you install the server component on your NAS/server etc. and store your game files/binaries/installers there. Then you can download client applications and download games from that location to install on your gaming PC or whatever.

steal_your_face,
@steal_your_face@lemmy.ml avatar

So basically Plex/Jellyfin for non-DRM games, it sounds like.

WarmSoda,

In case you’re wondering what GameVault is, it’s like having your own video gaming platform for games on your server – think Plex, but for video games

victorz,

Thanks for clearing this up. I definitely have no use for this. I wish I did, but alas.

Vendetta9076,
@Vendetta9076@sh.itjust.works avatar

I second all these questions.

Matt, in What is your favourite selfhosted wiki software and why?

DokuWiki for simplicity. Everything is a text file that can just be copied to a web server. It doesn’t even require a database. And since all the wiki pages are plain text files, they can still be easily accessed and read even when the server is down. This is great, and it’s why I use DokuWiki for my server documentation as well.

dlundh,

This. For exactly this reason.

nimmo,
@nimmo@lem.nimmog.uk avatar

I was going to say that the big downside to that would be a lack of any kind of version control, but I guess if you need that you can always use git and just commit changes there and (optionally) push them to a repository somewhere.

Matt,

Doku still has the typical wiki-style version control. It uses other text files to keep a changelog without cluttering the page file itself.
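The default on-disk layout makes this easy to see (a sketch; the page name and timestamp are illustrative):

    data/pages/wiki/mypage.txt                # current page content
    data/attic/wiki/mypage.1712345678.txt.gz  # old revisions, one per save
    data/meta/wiki/mypage.changes             # the changelog itself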

SnotFlickerman, (edited ) in I'm new to networking and self-hosting and have no idea where to start.
@SnotFlickerman@lemmy.blahaj.zone avatar

Not necessarily in this order:


  1. Learn the OSI and TCP/IP layer models.
  2. Learn the fundamentals of IPv4 and IPv6. (Absolutely learn to count bits for IPv4; see the worked example after this list)
  3. Learn and understand the use-cases for routers, switches, and firewalls.
  4. Learn about DNS. (Domain Name System)
  5. Learn about DHCP. (Dynamic Host Configuration Protocol)
  6. Learn important Port Numbers for important Services. (SSH is Port 22, for example. The range of port numbers from 1024 to 49151 are “registered ports” that are generally always the same)
  7. Learn about address classes. (A, B, C are the main ones)
  8. Learn about hardware addresses (MAC address) and how to use ARP to find them.

And more! This is just off the top of my head. Until you’ve studied a lot more, please, for your own sake, don’t open your self-hosted services to the wider internet and just keep them local.
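Since counting bits (item 2) trips people up the most, here’s a worked example:

    192.168.10.0/26  (hypothetical network)
      network bits: 26           host bits: 32 - 26 = 6
      addresses:    2^6 = 64     usable hosts: 62
      netmask:      255.255.255.192
      range:        192.168.10.0 - 192.168.10.63 (broadcast: .63)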


And just for fun, a poem:

The inventor of the spanning tree protocol, Radia Perlman, wrote a poem to describe how it works. When reading the poem it helps to know that in math terms, a network can be represented as a type of graph called a mesh, and that the goal of the spanning tree protocol is to turn any given network mesh into a tree structure with no loops that spans the entire set of network segments.

I think that I shall never see

A graph more lovely than a tree.

A tree whose crucial property

Is loop-free connectivity.

A tree that must be sure to span

So packets can reach every LAN.

First, the root must be selected.

By ID, it is elected.

Least cost paths from root are traced.

In the tree, these paths are placed.

A mesh is made by folks like me,

Then bridges find a spanning tree.

— Radia Perlman, “Algorhyme”

gramathy,

Classful networking is well past dead, that’s kinda pointless. Learn VLSM and general subnetting basics instead.

SnotFlickerman, (edited )
@SnotFlickerman@lemmy.blahaj.zone avatar

I mean, isn’t it important to understand the fundamentals so you can understand VLSM better?

Like math, a lot of this knowledge works better when you know the fundamentals and basics, which help you conceptualize the bigger ideas.

On a personal level, I would have had a lot harder time understanding VLSM if I hadn’t had the basic fundamentals of traditional subnetting and classful networking under my belt.

gramathy,

There’s nothing inherently important to classful networking you learn that’s necessary for VLSM. They amount to common convention based on subnet size, and even then nearly nobody actually uses A or B sized subnets except as summary routes, which again, is not inherent to classful networking.

Classful networking has been obsolete for thirty years for good reason, you gain nothing from restricting yourself in that way.

SnotFlickerman,
@SnotFlickerman@lemmy.blahaj.zone avatar

How are you “restricting” yourself by learning that it exists? Nobody is saying “learn about it and use it and never consider anything else.” They asked what fundamentals they should know for networking, and I dumped what I considered the “fundamentals.”

gramathy,

Nothing actually uses classful networking anymore. Any situation where classful network concepts are implemented is necessarily limiting the capabilities of the network. As such it’s completely useless to bother spending time learning it.

CountVon, in File size preference for Radarr?
@CountVon@sh.itjust.works avatar

You can do this with custom formats. You’d want to create a custom format that gives a score if the file is below a certain size threshold (say 1.5 GB per hour), then set minimum custom format scores in the quality profiles you use (e.g. Bluray-1080p). You can also add custom formats for release groups that prioritise file size. YTS, for example, keeps their releases as small as possible.
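As an illustration, a size-based custom format exports to JSON roughly like this; the name and threshold are placeholders and the exact schema may vary by Radarr version:

    {
      "name": "Prefer Small",
      "specifications": [
        {
          "name": "Under ~3GB",
          "implementation": "SizeSpecification",
          "negate": false,
          "required": true,
          "fields": { "min": 0, "max": 3 }
        }
      ]
    }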

capital, in Sounds like Haier is opening the door!

Just set a rate limit? This could have been a code change and a blog post.

Mugmoor,
@Mugmoor@lemmy.dbzer0.com avatar

But then how would they get all the tech blogs to write about them?

possiblylinux127, in Sounds like Haier is opening the door!

Honestly they should find a way to make it work with HA instead of the company’s servers.

BearOfaTime,

Yep.

Fuck Haier, especially at this point.

Had they tried working with him first, they’d have a little moral ground to stand on.

Now the gloves are off. How many forks of his git repo are there now? It was a thousand yesterday.

possiblylinux127,

I don’t know about you, but I want companies to take self-hosted and FOSS solutions seriously. The fact that they are willing to work with him is a major step in the right direction. It would be dumb to discourage companies from supporting FOSS.

Darkassassin07, (edited )
@Darkassassin07@lemmy.ca avatar

Are they supporting FOSS, or looking to buy out the project to make it a closed in-house solution and avoid the bad publicity they created this last week?

NegativeInf,

If they buy it, it’s still FOSS, bro. Fork it. But until that point, diplomatic approaches may be more effective.

possiblylinux127,

Well I think the worst thing that could happen is we just fork it and go on with our lives.

Why would they want a new in-house solution? They already have one, but Home Assistant is probably going to be easier for them.

Auli,

Not really self-hosted. It uses their online service to pull it into Home Assistant.

taaz, in I love my Gitea. Any tips and tricks?
praise_idleness,

I was aware of Forgejo back when I first started hosting Gitea. Didn’t see much of a diff back then, so I just went with the arguably more popular option at the time.

A few months on, it’s mostly just that I’m too lazy of a person to switch.

rufus, (edited )

Forgejo is a fork of Gitea. As of now I don’t think they have diverged much, so they’re (still) about the same. It was mainly created because of the takeover of the Gitea domain and trademark by a for-profit company, not because of different functionality.

forgejo.org/compare/-was-forgejo-created
