selfhosted


BOFH666, in Alternative github frontends?

I run a personal instance of Forgejo, love it.

Everything I want regarding version control and workers. And it’s more lightweight on the frontend side.

forgejo.org

Dirk,
@Dirk@lemmy.ml avatar

+1 for Forgejo. Runs butter smooth even on not so high-end machines. You can even mirror your GitHub repos.

Plus: It is not owned by a for-profit organization.

itmosi,

I already self host my git server, I’m looking for an alternative front-end to browse github (because a lot of open source stuff still lives on it).

TCB13, in Joplin alternative needed
@TCB13@lemmy.world avatar

why is it Postgres db…

Why on earth are you using that? Just use WebDAV: all you need is some WebDAV server such as Nginx, and it will sync GBs of notes without issues. joplinapp.org/help/apps/sync/webdav/ medium.com/…/build-a-webdav-server-with-nginx-866…

I would’ve NEVER ever moved to Joplin if it wasn’t able to sync with WebDAV. I’m not into having a special daemon running on a server for that task, makes zero sense.
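For anyone setting this up, a minimal sketch of an Nginx WebDAV server block (the hostname and paths are made up, and PROPFIND support needs the separately packaged ngx_http_dav_ext_module):

```nginx
server {
    listen 443 ssl;
    server_name dav.example.com;              # hypothetical hostname

    location /joplin/ {
        root /srv/webdav;                     # notes land under /srv/webdav/joplin/
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_ext_methods PROPFIND OPTIONS;     # from ngx_http_dav_ext_module
        create_full_put_path on;              # let clients create nested dirs
        client_max_body_size 0;               # don't cap attachment uploads
        auth_basic "WebDAV";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```

Point Joplin’s WebDAV sync target at the /joplin/ URL and it handles the rest.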

colebrodine,
@colebrodine@midwest.social avatar

It works great with my self-hosted NextCloud!

jaykay,
@jaykay@lemmy.zip avatar

I need to look into webDAV then :D

TCB13,
@TCB13@lemmy.world avatar

Yes you do ahaha

azron,

This is the way.

AtariDump, (edited ) in Self-hosted VPN that can be accessed via browser extension

I’d be very wary about trying to bypass any workplace restrictions (which includes using a non-company VPN etc. etc.) to access self-hosted services.

Remember, your work computer belongs to the business (unless you’re self employed).

Depending on your line of work this could range from a slap on the wrist to immediate termination and fines.

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

Yeah, that is the unfortunate reality. The better way is going through your IT department to get those extra things you need for work. If you are found out, and I am sure IT eventually will find out, you will be in trouble.

BearOfaTime,

Also very good advice

cybersandwich, in Help with Audiobookshelf Port Number

You’re using network_mode: “host” which makes the container use the host’s networking directly. When you use host mode, the port mappings are ignored because the container doesn’t have its own IP address, it’s sharing the host’s IP. Remove or change the network mode to see if that fixes it.
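For reference, the relevant compose change looks roughly like this (the host port is illustrative; Audiobookshelf listens on port 80 inside the container by default):

```yaml
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf
    # network_mode: host        # <- remove this; it makes `ports:` a no-op
    ports:
      - "13378:80"              # host port 13378 -> container port 80
```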

OneShotLido,

Perfect. Thanks!

roofuskit, in Stalwart v0.5.0
@roofuskit@lemmy.world avatar

Very interested in this as Gmail is one of my last Google cords to cut. But it doesn’t solve the issue of trying to host it from a non-commercial Internet connection. Last I remember most ISPs won’t let you open the ports required to run an email service on a home connection. Anyone have modern experience with that?

AtariDump,

Most non-business Internet service in the US has email ports blocked. They don’t open them unless you switch to business-class Internet, and that’s $$$.

roofuskit,
@roofuskit@lemmy.world avatar

Thanks for confirming. So pay for a vps to run this on, or just pay an email provider.

AtariDump, (edited )

If the VPS allows email ports to be open.

Then deal with your email going to spam most of the time because your domain/IP is so new and not “warmed up” that email systems think it’s all spam.

roofuskit,
@roofuskit@lemmy.world avatar

Yeah, it seems like the latter option is the obvious answer. It’s an awful lot of work you still have to pay for. I’d rather just pay someone to offer me secure email and not harvest my information.

Lichtblitz, (edited )

In my experience, this is nothing more than an urban legend at this point. There are great standards, like DMARC, DKIM, SPF, and proper reverse DNS, that are much more reliable and are actually used by major mail servers. Pick a free service that scans the publicly visible parts of your email server, and one that accepts an email you send to it and generates a report. Make sure all checks are green. After an initial day or two of getting it right, I’ve never had trouble with any provider accepting mail, and the ongoing maintenance is very low.

Mileage may vary with an unknown domain and large email volumes or suspicious contents, though.

taladar,

There are literally RBLs in use by many major mail providers that just contain all dynamic IPs. There are others that block entire subnets used by VPSes at certain hosters. In neither of those can you remove your IP yourself (unlike the ones that list individual IPs because of that IP’s reputation).

Lichtblitz, (edited )

Weird, I’ve never had problems over the past 15 years or so and I’ve been using VPS servers exclusively. Maybe my providers were reputable enough.

I realize my evidence is only anecdotal, but that’s why I started with “in my experience”. Also, common blacklists are checked by the services I mentioned.

Chobbes,

For what it’s worth I also haven’t had any problems. Maybe we’re just lucky, though.

victorz,

That’s insane to me. How is that a free and open Internet? Should be illegal.

AtariDump,

Too many people get malware that sets up an email server and starts sending out spam/phishing emails.

victorz,

That’s interesting. Is it easily preventable?

AtariDump,

Yes.

ISPs block email ports on residential connections to prevent this.

victorz,

I meant on the part of the host. Would it be easily preventable on the server if the ports weren’t blocked by the ISP?

AtariDump,

Not for the average person who pays for a home (vs business) internet connection.

victorz,

That’s a shame.

AtariDump,

Why?

I can count on no hands the amount of people I know who want to host their own email server on a residential connection (and that includes myself).

victorz,

Very anecdotal. 🤷‍♂️

AtariDump,
victorz, (edited )

It’s not a shame because of the number of people we know, or how many people there are in total, that want to self-host email. It’s about the fact that it’s so difficult to set up, and hard to secure. I just wish it were simpler and more secure by default so that more people could roll their own and break free from ad-ridden and privacy-invading email services. 👍

AtariDump,

Makes sense.

jagoan,

I moved from Gmail to MXroute when Google threatened to pull the grandfathered free Gmail custom domain thing. Got their lifetime plan; easy enough to configure so outgoing mails don’t get marked as spam. However, the major downside is that it still uses SpamAssassin as its spam filter.

nutbutter,

I moved from Gmail to ProtonMail, then to Mailbox.org. You can set up a mail server on your home server, but you would need a VPS that forwards the traffic to and from your home server without you needing to open any ports. This guide can help you with TLS passthrough.

But setting up your own mail server is a big hassle. Just pay a trusted provider and keep your inbox, and preferably all emails, encrypted with GPG.

victorz,

What made you switch from Proton to Mailbox, if you don’t mind sharing?

nutbutter,

I was paying $7/month for their mail, VPN, and drive services. One of my major reasons to switch was their lack of Linux support; they claim that it is hard to find Linux developers. The second reason was that their drive’s download and upload speeds were terrible from where I am sitting. Their VPN service is great, I always got great speeds, but their Linux apps have always been terrible.

Their mail service is also great, but I would like more control over it, like Mailbox.org offers. On Mailbox, I can encrypt my inbox using a different key while also having the SMTP submission feature; I really need that to integrate email with my websites and services. Mailbox can also encrypt their cloud drive with your own key, while also providing WebDAV support (how cool is that). Their mail app on Android is open source but is not available on F-Droid, and the APK they provide on their website has no notification functionality, nor does it auto-update. Another reason was that I was limited to 3 custom domains unless I bought their business plan. Mailbox has no such limit.

One final reason was that I did not want to keep all my eggs in one basket. So, for mail, I am using Mailbox; for storage, I am using a personal Nextcloud and a Hetzner managed Nextcloud; for VPN, I started using Mullvad, but their speeds are terrible and connections are unreliable. For passwords I am using self-hosted Vaultwarden.

There are a few more reasons that I do not remember, now. Proton is great, I still trust them. But these small things really go a long way.

victorz,

Thank you for that detailed reply. You have far greater needs than I do. 😊

It would be cool to do all these things and self-host. One day I’ll get there, in life.

ssdfsdf3488sd,

That’s pretty much exactly my story, except I went with fastmail.com, and Mullvad for VPN (you really need to test with some script to find your best exit nodes; I forget which one I used ages ago, but it found me a couple of nodes about 1000 km away from my location, in a different country, that I can routinely do nearly a gig through. Maybe it was this script? github.com/bastiandoetsch/mullvad-best-server). I went with pCloud for a bit, but Tailscale and now NetBird make it kind of irrelevant, since it’s so easy to get all my devices communicating back to my house file server. I want to like Hetzner so badly, but every time I try it the latency to North America just kills me, and the North American offering was really far away and undeveloped last time I tried it.

nutbutter,

For me the issue with Mullvad is this: I connect to a server, I get good speeds, but after an hour or two I get stuck at 2-3 Mbps. The issue gets resolved when I reconnect, even to the same server. Also, I like using OpenVPN over TCP, but Mullvad’s speeds with it are terrible for all exit nodes.

It also may be the case that my ISP is deliberately ruining the IPv4 routes because I am connecting to a VPN for privacy.

ssdfsdf3488sd,

Never saw that on WireGuard once I found the better connections for my location, weird.

hemmes, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@hemmes@lemmy.world avatar

Dude’s the Predator of the IT world

ssdfsdf3488sd,

Pretty sure that title is firmly held by McAfee, even now.

kittykittycatboys, in What software does the Internet Archive run?
@kittykittycatboys@lemmy.blahaj.zone avatar

afaik, archive.org isnt open source. id recommend something like archivebox.io

possiblylinux127,

ArchiveBox is a piece of software, and the Internet Archive is an organization that is focused on predicting the content on the internet.

The Internet Archive has PBs worth of data. I doubt any home user could manage that.

z00s, (edited )

archive

predicting

?

recapitated,

They’re beating the algorithm

mosiacmango,

Protecting

kittykittycatboys,
@kittykittycatboys@lemmy.blahaj.zone avatar

i dont think op is looking to mirror archive.org, my take was that they wanted something like archive.org but selfhosted and for personal / small-scale use

avidamoeba,
@avidamoeba@lemmy.ca avatar

Exactly. I’m already running a local wiki, but I don’t want stuff I link to in my wiki to result in 404 in a few years. Or worse, to some AI-ridden ad-infested dumpster fire.

laserjet,

You can use something as simple as a browser extension like SingleFile that can automatically download complete, contained copies of anything bookmarked or only certain URLs.

avidamoeba, (edited )
@avidamoeba@lemmy.ca avatar

Oh yes, this looks like a winner. Thanks!

It seems like it’s written in Python too, which means I can maintain it if need be.

Oh boy I wish I had set this up many years ago. I wouldn’t have to resort to scouring !antiquememesroadshow for the top quality memes of the past when I need them…

On a far side of the moon note, I wonder if ActivityPub could be used to federate multiple archiveboxes to create a more resilient Internet Archive alternative. 🤔 Then integrate that with Lemmy to autoarchive links from posts. Aaand lemmy.world ran out of disk space. 🤣

Dehydrated,

+1 for ArchiveBox

False, (edited ) in ELI5: What is OpenStack? How to get started?

Openstack is like self-hosting your own cloud provider. My 2 cents is that it’s probably way overkill for personal use. You’d probably be interested in it if you had a lot of physical servers you wanted to present as a single pooled resource for utilization.

How does one install it?

From what I heard from a former coworker - with great difficulty.

What is the difference between a hypervisor/openstack/a container service (podman,docker)?

A hypervisor runs virtual machines. A container service runs containers which are like virtual machines that share the host’s kernel (more to it than that but that’s the simplest explanation). Openstack is a large ecosystem of pieces of software that runs the aforementioned components and coordinates it between a horizontally scaling number of physical servers. Here’s a chart showing all the potential components: …wikimedia.org/…/Openstack-map-v20221001.jpg

If you’re asking what the difference between a container service and a hypervisor are then I’d really recommend against pursuing this until you get more experience.

Chocrates,

To add, a hypervisor is very low level, often below the operating system. Hypervisors allow you to run multiple operating systems on the same hardware.

Containers are isolated processes running within an operating system using stuff like cgroups.

PropaGandalf, (edited )

It’s for getting acquainted with the whole software stack. Also I have enough free time for it :) I’m also very well aware what the difference between a container service and a hypervisor is, I’m just a little overwhelmed by what OpenStack can do.

redcalcium,

Deploying OpenStack seems like a very fun and frustrating experience. If you succeed, you should consider graduating from selfhosting and entering the hosting business. Then, maybe post your offering on LowEndTalk. Not many providers there use OpenStack, so you might be able to lead the pack.

False, (edited )

Fair enough. Personally I’d start with their documentation then: docs.openstack.org/install-guide/

For OS it looks like they support RHEL/CentOS, Ubuntu, Debian, and SUSE so I’d stick with one of those.

Starbuck,

I used to be a certified OpenStack Administrator and I’ll say that K8s has eaten its lunch in many companies and in mindshare.

But if you do it, look at TripleO instead of installing from docs.

PropaGandalf,

Would you mind explaining why this shift happened? Isn’t OpenStack more capable than any k8s setup?

Starbuck, (edited )

There is a lot of complexity and overhead involved in either system. But the benefits of containerizing and using Kubernetes allow you to standardize a lot of other things with your applications. With Kubernetes, you can standardize your central logging, network monitoring, and much more. And from the developers’ perspective, they usually don’t even want to deal with VMs. You can run something like Docker Desktop or Rancher Desktop on the developer system, and that allows them to dev against a real, compliant k8s distro. Kubernetes is also explicitly declarative, something that OpenStack was having trouble being.

So there are two swim lanes, as I see it: places that need to use VMs because they are using commercial software, which may or may not explicitly support OpenStack, and companies trying to support developers, in which case the developers probably want a system that affords a faster path to production while meeting compliance requirements. OpenStack offered a path towards that latter case, but Kubernetes came in and created an even better path.

PS: I didn’t really answer your “capable” question though. Technically, you can run a Kubernetes cluster on top of OpenStack, so by definition Kubernetes offers a subset of the capabilities of OpenStack. But it encapsulates the best subset for deploying and managing modern applications. Go look at some demos of ArgoCD, for example. Go look at Cilium and Tetragon for network and workload monitoring. Look at what Grafana and Loki are doing for logging/monitoring/instrumentation.

Because OpenStack lets you deploy nearly anything (and believe me, I was slinging OVAs for anything back in the day) you will never get to that level of standardization of workloads that allows you to do those kind of things. By limiting what the platform can do, you can build really robust tooling around the things you need to do.

aniki, (edited ) in Why docker

1.) No one runs rootful Docker in prod. Everything is run rootless.

2.) That’s just patently not true. docker inspect is your friend. Also, you can build your own containers trusting no one: FROM scratch. hub.docker.com/_/scratch/

3.) I think “mess” here is subjective. Docker folders make way more sense than Snap mounts.
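On point 2, the trust-no-one route looks roughly like this (myapp is a hypothetical statically linked binary you compiled yourself):

```dockerfile
# Image built from an empty base: no third-party layers to trust.
FROM scratch
COPY myapp /myapp            # statically linked binary, built locally
ENTRYPOINT ["/myapp"]
```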

eluvatar,

1 is just not true, sorry. There’s loads of stuff that only works as root, and people use it.

Lem453, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages

Thank you for including OAuth options for sign-on. Makes a big difference being able to use the same account for all the things like FreshRSS, Seafile, Immich, etc.

Kir,

I’m intrigued. How does it work? Do you have a link or an article to point me to?

Lem453, (edited )

The general principle is called single sign-on (SSO).

The idea is that instead of each app keeping track of users itself, there is another app (often called an identity provider, or IdP) that does this. Then when you try to log into an app, it takes you to your identity provider’s login instead. When the IdP confirms you are the correct user, it sends a token to the app saying to let you access your account.

The huge benefit is that if you are already logged into the IdP in a browser, for example, the other apps will log you in automatically without you having to put in your password again.

Also, for me the biggest benefit is not having to manage passwords for a large number of apps, so family that uses my server has one account which gives them access to Jellyfin, Seafile, Immich, FreshRSS, etc. If they change that password, it changes for everything. You can enforce minimum password requirements. You can also add 2FA to any app immediately.

I use Authentik as my identity provider: goauthentik.io

There are good guides to setting it up with Traefik so that you get Let’s Encrypt certificates and can use Traefik for proxy authentication on web-based apps like Sonarr. There are many different authentication methods an app can choose to use, and Authentik supports essentially all of them.

youtu.be/CPURnYaW3Zk

SSO should really be the standard for self-hosted apps, because then they don’t have to worry about keeping user management secure themselves. The app lets a dedicated identity provider worry about user management security, so the app devs can focus on just the app.
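The redirect step described above can be sketched like this (the endpoint and client values are made up; a real flow also exchanges the returned code for tokens afterwards):

```python
from urllib.parse import urlencode

# Hypothetical identity-provider endpoint, for illustration only.
IDP_AUTHORIZE_URL = "https://auth.example.com/application/o/authorize/"

def build_login_redirect(client_id: str, redirect_uri: str, state: str) -> str:
    """Step 1 of the OpenID Connect authorization-code flow: the app
    sends the browser to the identity provider's login page."""
    params = {
        "response_type": "code",          # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,     # where the IdP sends the user back
        "scope": "openid profile email",
        "state": state,                   # CSRF token, echoed back by the IdP
    }
    return f"{IDP_AUTHORIZE_URL}?{urlencode(params)}"

print(build_login_redirect("freshrss", "https://rss.example.com/cb", "xyz"))
```

When the IdP redirects back with ?code=…&state=…, the app verifies the state and trades the code for tokens server-side.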

Kir,

Thank you for the detailed answer! It seems really interesting and I will definitely give a try on my server!

dan,
@dan@upvote.au avatar

Authentik is pretty good. Authelia is good too, and lighter weight.

You can combine Authelia with LLDAP to get a web UI for user management and LDAP for apps that don’t support OpenID Connect (like Home Assistant).

Lem453,

If you have to add a whole other app to match what Authentik can do, is Authelia really lighter weight?

I’m joking, because Authentik does take a decent chunk of RAM, but having all the protocols together is nice. You can actually make LDAP authentication 2FA if you want.

dan,
@dan@upvote.au avatar

Interesting… How does Authentik do 2FA for LDAP?

I’m going to try it out and see how it compares to Authelia. My home server has 64GB RAM and I have VPSes with 16GB and 48GB RAM so RAM isn’t much of an issue :D

Lem453,

Because Authentik uses flows, you can insert the 2FA part into any login flow (proxy, OAuth, LDAP, etc.).

youtu.be/whSBD8YbVlc

dan, (edited )
@dan@upvote.au avatar

LDAP sends username and password over the network though… It doesn’t use regular web-based authentication. How would it add 2FA to that?

Lem453,

The above YouTube video shows that you can get Authentik to send a 2FA push notification that requires the user to hit a button on their phone in order to complete the authentication flow.

dan,
@dan@upvote.au avatar

Ohhhh, interesting. Sorry, I didn’t watch the video yet. Thank you!!

subtext,

Although in the subscription version, SSO is not available unless you purchase the “Contact Us” version. sso.tax would like a word.

Lem453,

Free for self hosted which is probably what matters to most here

subtext,

Definitely a fair point, always good to see that in a project

MangoPenguin, in what if your cloud=provider gets hacked ?
@MangoPenguin@lemmy.blahaj.zone avatar

So … conclussion ???

Have backups.

Only 2 copies of your data stored in the same place isn’t enough, you want 3 at minimum and at least 1 should be somewhere else.

SchizoDenji,

What if the data is leaked/compromised?

pearsaltchocolatebar,

That’s why you use encryption.

MangoPenguin, (edited )
@MangoPenguin@lemmy.blahaj.zone avatar

Backups are usually encrypted by most popular backup programs, either by default or as an option (restic, Borg, Duplicati, Veeam, etc.). So that takes care of someone else getting their hands on your backup data.

I never store my actual files on a cloud service, only encrypted backups.

For local data on my devices, my laptop is encrypted with bitlocker, and my Android phone is by default. My desktop at home is not though.

Treczoks,

Indeed. Whatever you put in a cloud needs backups. Not only at the cloud provider, but also “at home”.

There has been a case of a cloud provider shutting down a few months ago. The provider informed their customers, but only the accounting departments that were responsible for the payments. And several of those companies’ accounting departments did not really understand the message beyond “no longer needs to be paid”.

So for the rest of the company, the service went down hard after a grace period, when the provider deleted all customer files, including the backups…

roofuskit, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@roofuskit@lemmy.world avatar

I don’t have the space to hoard garbage.

TheBat, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?
@TheBat@lemmy.world avatar

What

einlander, in Does anyone else harvest the magnets and platters from old drives as a monument to selfhosting history?

That’s a funny looking Stanley cup.

atzanteol, in Raspberry as NAS, multiple HDDs and an enclosure

It seems weirdly difficult to find a good solution to attach HDDs to my pi.

Being a NAS is not at all what a Pi is made for, so it’s not surprising at all.

JonhhyWanker,

The Raspberry Pi would be a great low power device to have always on with some storage attached to backup to, store family photos, etc.

So not a high performance NAS, but good enough for this use case.

atzanteol,

not a high performance NAS

That is an understatement.
