selfhosted


MonitorZero, (edited ) in TrueNAS shares help.

There’s an integrity check they implemented last year.

  • Log in to your TrueNAS
  • Go to the Shares section in the left-hand menu
  • Turn off the share
  • Launch the app you need
  • Once the app is fully deployed, turn the file share back on

That should do it. If the system goes down or the app updates and redeploys you’ll need to turn off the share again to pass the integrity check.

k0mprssd,

this seems to have worked, but whenever I go into Filebrowser everything is read-only. any way to change this?

Dhs92,

Or mount it as an NFS share if it gives you the option

jubilationtcornpone, in entire system backups onto the server - how?

I use Veeam Backup & Recovery Community Edition. If you’re running VMs you have to be on VMware or Hyper-V. You can also use agents on the individual VM/server. It also requires a pretty hefty Windows host, at least if you want your backups to complete fairly quickly.

Those are understandably downsides for some people. But, Veeam is in a class by itself. It has no serious competitors and as far as ease of use and reliability, it’s top tier.

I’m lazy. I don’t want to spend a bunch of time configuring finicky backups only to find out I needed one and it failed. I honestly wish there were a comparable open source backup system. I have yet to find anything that works as well.

ZeDoTelhado,

I was reading about it and I actually like this solution’s principle a lot. It reminds me of Puppet, which I have seen before (for other kinds of tasks) to orchestrate several computers. Big shame it only runs on Windows, though, since I have a server with Docker on Ubuntu Server at this point and wasn’t really looking forward to changing that. But thanks for the suggestion, it is for sure very interesting

Mautobu,

Another vote for Veeam. I use it at home and professionally. It’s a solid product and has saved my ass countless times.

RegalPotoo, in Lighter weight replacements for Sentry bug logging
@RegalPotoo@lemmy.world avatar

glitchtip.com

API compatible, but lower resource consumption - it’s missing some of the newer features (the big one for me is tracing, but you can just install Tempo).

Not actually tried it, but looks promising

dan,
@dan@upvote.au avatar

Thanks! I’ll try it out. I don’t see anything on their site about JavaScript source mapping, so I assume they don’t do it. With Sentry, you upload the source map to the server as part of your JS build process, and their backend automatically maps minified stack traces to unminified ones using the uploaded source map. Maybe I’d be fine losing that in exchange for something lighter weight.
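For reference, a hedged sketch of the build-time step dan describes, using Sentry’s official sentry-cli (which GlitchTip can also accept, since it’s API-compatible). The release name and dist folder here are hypothetical, and the CLI needs SENTRY_URL and SENTRY_AUTH_TOKEN pointed at your instance:

```
# Create a release, upload the source maps produced by the JS build,
# then finalize - the backend uses these to unminify stack traces.
sentry-cli releases new "myapp@1.2.3"
sentry-cli releases files "myapp@1.2.3" upload-sourcemaps ./dist
sentry-cli releases finalize "myapp@1.2.3"
```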

justcallmelarry,

glitchtip.com/blog/2022-04-25-glitchtip-1-12

This blog post mentions that it should be possible at least!

I’m currently using their free tier for a hobby project and have been happy with it. I’ve considered moving over to self-hosting the solution, but have been holding off due to resource constraints - might make the leap soon, though! Would be nice to get use of the uptime pings, which currently would fill the event quota way too quickly on the free tier.

dan,
@dan@upvote.au avatar

Perfect, thanks. Strange that it’s not in their docs, but it does seem like their docs are very minimal.

bufke,

Hello, I’m the lead dev of GlitchTip. Fun to see it mentioned here. Source maps are supported. I wish I had time to make the feature easier to use and write better docs. Contributions are welcome. It’s very much a hobby project for the little time I have after work and family. Right now all of my attention is on an event ingest rewrite to work with fewer resources.

dan,
@dan@upvote.au avatar

Nice to see you on here! I understand the lack of time - I’ve got some projects I’ve had on hold for years because of time constraints. I’m definitely going to try Glitchtip.

If I get some free time, I’ll see if I can write some docs about using source maps for JS apps. Sounds like it works in the same way as Sentry’s does.

It was a great idea for GlitchTip to reuse the Sentry SDKs and CLI, because their SDKs are solid. They’ve got the best .NET SDK out of all of the error logging systems I evaluated two years ago which is why I was using Sentry. Unfortunately, Sentry has become significantly heavier over those two years.

phanto, in Planning on setting up Proxmox and moving most services there. Some questions

Do two NICs. I have a bigger setup, and it’s all running on one LAN, and it is starting to run into problems. Changing to a two network setup from the outset probably would have saved me a lot of grief.

Edgarallenpwn,
@Edgarallenpwn@midwest.social avatar

So dual NIC on each device and set up another lan on my router? Sorry it seems like a dumb question but just want to make sure.

fuckwit_mcbumcrumble,

Why would you need two nics unless you’re planning on having a proxmox Vm being your router?

FiduciaryOne,

I think two NICs are required to do VLANs properly? Not 100% sure.

DeltaTangoLima, (edited )
@DeltaTangoLima@reddrefuge.com avatar

Nope - Proxmox lets you create VLAN trunks, just like a physical switch.

Edit: here’s one of my Proxmox server network configs.

FiduciaryOne,

Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it’ll help.

FiduciaryOne,

Thanks for the kind offer! I won’t get to this for a while, but I may take you up on it if I get stuck.

monkinto,

Is there a reason to do this over just giving the nic for the vm/container a vlan tag?

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.

So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (Physical infrastructure VLAN).

My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.

The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

  • switch trunk port
    • enp2s0f0 (physical)
      • vmbr1 (Linux bridge)
        • vmbr1.60 (Proxmox server interface)
        • vmbr1.100 (Proxmox VLAN interface)
          • virtual guest nic (w/ vlan tag and IP address)
        • vtnet1 (OPNsense “physical” nic, but actually virtual)
          • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.

Like I said, it’s a headfuck when you first set it up. Interface-ception.

The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.
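The layering described above can be written down as a hedged /etc/network/interfaces sketch (Proxmox uses ifupdown2). The interface names match the post; the addresses are made up:

```
auto enp2s0f0
iface enp2s0f0 inet manual

# VLAN-aware bridge carrying the tagged traffic from the switch trunk port
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

# Proxmox server's own address on the infrastructure VLAN
auto vmbr1.60
iface vmbr1.60 inet static
    address 192.0.2.10/24

# Guest VLAN interface (guests also set the vlan100 tag on their virtual NICs)
auto vmbr1.100
iface vmbr1.100 inet static
    address 198.51.100.10/24
```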

Live2day,

No, you can do more than 1 VLAN per port. It’s called a trunk

atzanteol,

I haven’t done it - but I believe Proxmox allows for creating a “backplane” network which the servers can use to talk directly to each other. This would be used for ceph and server migrations so that the large amount of network traffic doesn’t interfere with other traffic being used by the VMs and the rest of your network.

You’d just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.

fuckwit_mcbumcrumble,

In Proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you’d create a bridge and assign it to nothing. If you assign it to a NIC, then since it wants to use SR-IOV it would only go as fast as the NIC can go.
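A NIC-less bridge like that is just a bridge with no ports - a hedged sketch, with a hypothetical name and address:

```
# Host-internal bridge: guests attached to vmbr2 talk to each other
# entirely in software, never touching a physical NIC.
auto vmbr2
iface vmbr2 inet static
    address 10.99.99.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```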

DeltaTangoLima,
@DeltaTangoLima@reddrefuge.com avatar

This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.

possiblylinux127,

Can you explain what benefit that would bring?

Lodra, in entire system backups onto the server - how?
@Lodra@programming.dev avatar

If you’re up for it, it’s generally better not to back up everything. Only back up the data that you need - like a database, or photos, music, movies, etc. for personal data. For everything else, it’s best to automate the install and maintenance of your server.

Disclaimer: this does take more effort!

ZeDoTelhado,

Nowadays I sort of do this with Seafile: select folders to sync, open the app every now and then to resync stuff, carry on with your day. The only thing I wanted to figure out was whether there’s a better way to avoid a massive hassle reinstalling everything in case something happens (and in case I forget to select a folder to sync, too).

But your suggestion is very valid as well. At least for Mint, there should be a way to make a more automated installer or similar to get the stuff I usually use. Yet another rabbit hole to go down…

SnotFlickerman, (edited ) in entire system backups onto the server - how?
@SnotFlickerman@lemmy.blahaj.zone avatar

github.com/teejee2008/timeshift

I think TimeShift could maybe work for you, but you might need a script to offload the backups it creates.

restic.net

Restic is another option, but it’s a little less user-friendly and is all CLI, if I recall correctly. However, I’m pretty sure you can send backups straight to a server via Restic.

ZeDoTelhado, (edited )

I’ve checked out Timeshift a couple of times, but it’s a shame that not even FTP is allowed as a backup destination.

As for Restic, I’ll give it a look later

EDIT: just read about restic, and I think this can be the solution I was looking for. Docker image is available and all, so for me that is a big plus. Once I have the chance I will test drive it and see where it goes. Thanks!
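For anyone else weighing Restic, its basic flow is short - a hedged sketch, where the repository location, host, and folders are all hypothetical:

```
# Initialize the repository once, then back up and inspect snapshots.
export RESTIC_PASSWORD_FILE=~/.restic-pass
restic -r sftp:backup@server:/srv/restic init
restic -r sftp:backup@server:/srv/restic backup ~/photos ~/documents
restic -r sftp:backup@server:/srv/restic snapshots

# Optional retention: keep a week of dailies and a month of weeklies.
restic -r sftp:backup@server:/srv/restic forget --keep-daily 7 --keep-weekly 4 --prune
```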

erre, in This Week in Self-Hosted (12 January 2024)
@erre@programming.dev avatar

This is easily my favorite regular post here :D

EncryptKeeper,

It used to be the Noted.lol posts, then when selfh.st showed up there was drama between the two and then noted kinda disappeared.

Dremor, in Help me build a home server
@Dremor@lemmy.world avatar

The budget is way too low imo.

You could repurpose an old workstation, bought dirt cheap on eBay if you are lucky, but even then you’ll have to get yourself an HDD, maybe multiple of them if you want to have data redundancy.

For anything new your best bet is a 2 bay ready made NAS, but you’ll have to invest around 300€ for the cheapest one.

astraeus, (edited )
@astraeus@programming.dev avatar

It is entirely possible to start with a 2-bay drive rack (not a caddy, we want something without the connections) and then run the SATA out the back of the computer to the drives. It’s a compromise for this low a budget, but it’s not a major sacrifice.

www.ebay.com/itm/115485675524

tootbrute, in Help me build a home server

Start small. Find good used hardware first before thinking what services to run. I would start with an old desktop.

Self-hosting is a journey, not a destination. No matter what you buy you'll probably need to buy new hard drives. Used hard drives are a bit of a gamble.

Where do people buy used systems in Denmark? Show us a few things you're interested in and people can give you recommendations.

Also, instead of Photoprism, I would suggest Immich. I was a huge supporter of Photoprism for years (even donated money) but their development is too slow. Immich is way faster and has an Android app. Anyways, give it a look.

I think 8 GB of RAM is sufficient for all those services. I run them all with Yunohost and I rarely get over 4 GB RAM used.

VonReposti,

dba.dk is a pretty popular site for buying used stuff in Denmark, but for electronics I usually go on eBay and sort by EU only (IIRC they removed that option so now the results are tainted with lots of UK gear that’ll be hit with import taxes).

pinguinebee,

I have looked at:

1. Lenovo ThinkCentre for sale: i5-3470, 8 GB RAM, 256 GB HDD
   Price: 40 Euros (44 US Dollars)

2. i5-4570 vPro, 3.2 GHz, 4 GB RAM, 500 GB HDD (3.5")
   Price: 46 Euros (51 US Dollars)
   (It’s a bit more expensive and has less RAM, but I’m considering it in case the first seller has already sold.)

rambos,

Both deals sound amazing to me, but get 8GB or prepare for RAM upgrade. 4GB could be enough for what you listed there, but you might find more services to run in the near future 🤪

I think those tiny PCs are perfect if you don’t need more SATA ports. It’s hard to beat them at that low price

pinguinebee,

I am glad the systems are good enough. Thanks a lot for the reply

astraeus,
@astraeus@programming.dev avatar

That ThinkCentre looks perfect for this use case, especially if it’s running Debian or Arch.

thayer, (edited ) in Help me build a home server

I don’t think you’ll be able to build anything with €100, but you might be able to buy an old PC or laptop locally and use it as-is. I’ve never run Nextcloud myself, but from what I’ve read it’ll be the most taxing service on your list. Everything else seems pretty minimal, though I don’t know anything about Photoprism.

Dremor,
@Dremor@lemmy.world avatar

Yeah, for that price you won’t find anything new. For illustration, when I bought a new Athlon 3000G, which was the very lower CPU on their AM4 offering, it was at 55€ without anything else.

ShepherdPie,

Even a Raspberry Pi kit will blow that budget, but they may find some used SFF office PC for around that price.

ripe_banana,
@ripe_banana@lemmy.world avatar

From experience, older thinkpads usually sell for cheap, come with an inbuilt monitor, and are built sturdy. Highly recommend.

astraeus,
@astraeus@programming.dev avatar

Older thinkpads in this price range will not perform well as servers. They will be pretty limited in specs. Better to go with a used SFF or other form-factor business model desktop.

savedbythezsh, in This Week in Self-Hosted (12 January 2024)

Anyone used NocoDb recently? Last time I tried to get it running, it was too buggy to be useful

krash,

I’ve been using it for a while without any noticeable problems. What issues did you run into?

machinin,

Out of curiosity, what do you use it for?

krash,

All kinds of stuff. I use it when I need a way to structure my data:

  • I use it to keep track of software / libs that are of interest and what they are an alternative to. See example here: ibb.co/ncsdt0W
  • I’ve also tried to recreate the functionality of a personal relationship manager (à la MonicaHQ, or per this post: medium.com/…/my-homegrown-personal-crm-87dffbcf54…) but found it to be an overengineered solution.
  • I also use it to interact with and store data from my Python apps, to avoid dealing with it directly in Python.
  • You can also use it as a Kanban board.
  • Also, I’ve been trying to use it as an Excel replacement - which is an overengineered solution, but you get impeccable data quality.

Nocodb is a bit wonky, but it is quite easy to work with (front- and backend) and since everything is in the database format you choose - you’re in control of how you want your data.

machinin,

Thanks, this is very helpful.

So, is it kind of a replacement for MS Access, except with a non-relational DB?

savedbythezsh,

I don’t remember exactly, it was around 2 years ago. It was easy to set up, but I found the feature set to be lacking some essentials, and I ran into a couple of big bugs. Couldn’t really replace my airtable setup yet. Happy to give it another try though!

redcalcium, in PSA: The Docker Snap package on Ubuntu sucks.

I also like to run my container platform as a containerized application in another container platform.

Contend6248,

Double-NAT anyone? 3 times the fun, 2 times the work

thanksforallthefish,

Lol. Yeah that was my reaction to the headline as well. “You did what ?”

Turbo,

:)

redcalcium, (edited )

Why does Docker have a snap version in the first place, anyway? Did Canonical pester them to do it?

Edit:

Nope, it’s just that Canonical went ahead and published it there themselves.

This snap is built by Canonical based on source code published by Docker, Inc. It is not endorsed or published by Docker, Inc.

thesmokingman,

It’s also offered as part of the installation process at least for Ubuntu server. If you don’t know better it bites you real quick.

hperrin,

Now I know better. No more Ubuntu Server.

GenderNeutralBro,

It’s insane how many things they push as Snaps when they are entirely incompatible with the Snap model.

I think everyone first learns what Snaps are by googling “why doesn’t ____ work on Ubuntu?” For me, it was Filebot. Spent an hour or two trying to figure out how the hell to get it to actually, you know, access my files. (This was a few years ago, so maybe things are better now. Not sure. I don’t live that Snap life anymore, and I’m not going back.)

taaz, (edited ) in Thumbnail cache saved to object storage?

Are you absolutely sure that object storage is used? (e.g. having the proper config file or env vars)

I am running pictrs 5.1 in a container with Postgres and Backblaze B2 (so there are no volumes that can actually balloon out).

Might it be possible that you are talking about the sled/state files (the part that can be replaced with postgres)?

glowie,
@glowie@h4x0r.host avatar

I am using the from scratch install and on 0.18.5 and using pictrs 0.4.0 beta (or whichever build comes with the embed version)

Initially, when I set my object storage creds in the docker-compose.yml file, it seemed to work and I see in my bucket it populated some files.

But now I don’t think it is writing anything to the bucket. Any ideas? Thanks!

Lem453, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages

Thank you for including oAuth options for sign on. Makes a big difference being able to use the same account for all the things like freshRSS, seafile, immich etc.

Kir,

I’m intrigued. How does it work? Do you have a link or an article to point me to?

Lem453, (edited )

The general principle is called single sign on (sso).

The idea is that instead of each app keeping track of users itself, there is another app (sometimes called an identity provider, or IdP) that does this. Then when you try to log into an app, it takes you to your IdP’s login instead. When the IdP confirms you are the correct user, it sends a token to the app saying to let you access your account.

The huge benefit is that if you are already logged into the IdP in a browser, for example, the other apps will log you in automatically without you having to put in your password again.

Also, for me the biggest benefit is not having to manage passwords for a large number of apps, so family members that use my server have one account which gives them access to Jellyfin, Seafile, Immich, FreshRSS, etc. If they change that password, it changes for everything. You can enforce minimum password requirements. You can also add 2FA to any app immediately.

I use Authentik as my identity provider: goauthentik.io

There are good guides to setting it up with Traefik so that you get Let’s Encrypt certificates and can use Traefik for proxy authentication on web-based apps like Sonarr. There are many different authentication methods an app can choose to use, and Authentik supports essentially everything.

youtu.be/CPURnYaW3Zk

SSO should really be the standard for self hosted apps because this way they don’t have to worry about ensuring they have the latest security for user management etc. The app just allows a dedicated identity provider to worry about user management security so the app devs can focus on just the app.

Kir,

Thank you for the detailed answer! It seems really interesting and I will definitely give a try on my server!

dan,
@dan@upvote.au avatar

Authentik is pretty good. Authelia is good too, and lighter weight.

You can combine Authelia with LLDAP to get a web UI for user management and LDAP for apps that don’t support OpenID Connect (like Home Assistant).

Lem453,

If you have to add a whole other app to match what Authentik can do, is Authelia really lighter weight?

I’m joking, because Authentik does take a decent chunk of RAM, but having all the protocols together is nice. You can actually make LDAP authentication use 2FA if you want.

dan,
@dan@upvote.au avatar

Interesting… How does Authentik do 2FA for LDAP?

I’m going to try it out and see how it compares to Authelia. My home server has 64GB RAM and I have VPSes with 16GB and 48GB RAM so RAM isn’t much of an issue :D

Lem453,

Because authentik uses flows, you can insert the 2FA part into any login flow (proxy, oauth, ldap etc)

youtu.be/whSBD8YbVlc

dan, (edited )
@dan@upvote.au avatar

LDAP sends username and password over the network though… It doesn’t use regular web-based authentication. How would it add 2FA to that?

Lem453,

The above YouTube video shows that you can get authentik to send a 2fa push authentication that requires the phone to hit a button in order to complete the authentication flow.

dan,
@dan@upvote.au avatar

Ohhhh, interesting. Sorry, I didn’t watch the video yet. Thank you!!

subtext,

Although in the subscription version, SSO is not available unless you purchase the “Contact Us” version. sso.tax would like a word.

Lem453,

Free for self hosted which is probably what matters to most here

subtext,

Definitely a fair point, always good to see that in a project

aniki, (edited ) in Why docker

1.) No one runs rooted docker in prod. Everything is run rootless.

2.) That’s just patently not true. docker inspect is your friend. Also, you can build your own containers trusting no one: FROM scratch hub.docker.com/_/scratch/

3.) I think mess here is subjective. Docker folders makes way more sense than Snap mounts.
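Point 2’s trust-no-one approach looks like this in a Dockerfile - a hedged sketch, where “myapp” stands in for a hypothetical statically linked binary you built yourself:

```
# No base image at all: the final image contains only what you copy in.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
```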

eluvatar,

1 is just not true, sorry. There’s loads of stuff that only works as root, and people use it.
