selfhosted

SteefLem, in what if your cloud-provider gets hacked ?
@SteefLem@lemmy.world avatar

It’s just someone else’s computer. I’ve said this since the beginning.

kristoff,

The issue is not cloud vs self-hosted. The question is “who has technical control over all the servers involved?”. If you home-host a server and keep a backup of it on a friend’s network, and your username/password pops up on an infostealer website, you will be equally in trouble!

ferngully, in authentik .. how to backup ?

You should only have to back up the Postgres database. But it won’t hurt to have a copy of your compose file as well.

This GitHub issue has the steps you should use. And answers all your other questions too.
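
For reference, a manual dump is one command. This is a sketch, not the exact steps from that issue; the service name “postgresql” and the db/user “authentik” are assumptions based on a typical authentik compose file, so check yours:

# dump the authentik database from the compose-managed Postgres container
docker compose exec -T postgresql pg_dump -U authentik -d authentik > authentik-backup.sql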

kristoff,

Great, thanks! (also thanks to Mike … you have some valid points)

thelittleblackbird, in Planning build: Power efficient headless steam machine, and later upgrade for AI tasks

Some tips here:

  • get a platinum rated power supply; if you can afford it, go for a titanium. The power supply’s efficiency is half the battle for the efficiency of the whole rig
  • reduce the number of RAM modules to the minimum
  • get a platinum rated power supply ;)
  • get big passive coolers, you want to idle the fans
  • reduce the number of USB devices and connectors to the minimum. Their converters are not the most efficient, so try not to connect anything to them
  • NO mechanical parts (including fans or water coolers)
  • set the schedulers to conservative or power efficient (see the sketch after this list). You don’t want to spike the power just because a task runs 2ms longer than expected
  • pick a power efficient CPU/GPU (I think we can discard this one based on your choices)
  • use the latest AMD adaptive undervolting technology to reduce the wattage of the cores
  • try to reduce the number of background tasks/services running to the bare minimum
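
For the scheduler point, on Linux that usually means the cpufreq governor. A quick sketch; which governor names are available depends on your cpufreq driver (amd-pstate/intel_pstate expose different sets), so check the first command’s output:

# list the governors your driver offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# set every core to a power-saving governor (cpupower comes from the linux-cpupower / kernel-tools package)
sudo cpupower frequency-set -g conservative
# or write it directly via sysfs
echo conservative | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor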

And that’s all. Sometimes there is a component of trial and error, because the performance/power curve is not entirely linear and you don’t want to end up in the zone where power grows much faster than performance.

Good luck, and if you can post your build with numbers and some lessons learnt, that would be great.

rambos,

Just to add my experience with PSU efficiency: for low power consumption (20-50W) you want a PSU rated close to what your system actually needs. If you are idling at 30W on a 700W PSU, your efficiency will be terrible, because that PSU was designed for higher loads and you are using less than 10% of it. Whatever PSU class you choose, efficiency is best when your usage sits at roughly 40-70% of the PSU’s max power. This is based on testing multiple desktop ATX PSUs for my small homelab.
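
To put rough numbers on that (illustrative efficiency figures, not measurements of any specific unit): wall draw is DC load divided by efficiency, so the same 30W load costs quite different amounts at the wall:

# ~30W load at ~65% efficiency (oversized PSU at <10% load)
echo "scale=1; 30 / 0.65" | bc   # ~46.1W from the wall
# the same load at ~90% efficiency (right-sized PSU in its sweet spot)
echo "scale=1; 30 / 0.90" | bc   # ~33.3W from the wall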

thelittleblackbird,

Definitely.

I forgot to add that it’s important not to overdimension the setup: any extra capacity is overhead that still draws power.

But with the chosen CPU and GPU there is not a lot of room here.

Lettuceeatlettuce, in Owncast Community
@Lettuceeatlettuce@lemmy.ml avatar

Subbed!

ozoned,

Awesome! TY! Who couldn’t use more lettuce eating lettuce in their life?

Now all we need is some fruit cannibalism and we’ll have a well rounded meal! :-D

Lettuceeatlettuce,
@Lettuceeatlettuce@lemmy.ml avatar

Haha totally!

butt_mountain_69420, in Nextcloud Performance Improvements

Is there a way to self-host nextcloud by downloading one file, docker container, .nzb, .jpg, ANYTHING that includes all these parts and can just plug in and run? Is that a thing, or do all self-hosters spend every waking hour sudo updating?

tofubl,

You mean like the AIO image, the one officially supported way to install Nextcloud?

But if you want to tune it, I’m afraid you’ll have to run sudo tune once per waking hour.

azron,

Nextcloud AIO, or All-in-One. It works relatively well. I run both my own container and an AIO instance and I’ve been pretty happy with it; I’ll likely migrate my docker-only setup to AIO in the near future. Nextcloud AIO
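
For reference, the whole install is close to a one-liner. This is roughly the launch command from the AIO README at the time of writing, so check the README for the current flags before copying it:

# start the AIO master container, which then manages the other containers
sudo docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
# (to put the data directory on an existing drive, AIO documents an env var:
#  add --env NEXTCLOUD_DATADIR="/mnt/ncdata" before the image name)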

butt_mountain_69420, (edited )

I didn’t think Nextcloud AIO would actually work with existing files on a separate drive. I know it says it will … but … I’m not interested in buying a gigantic new harddrive to clone all my data to just to run one program.

Also, if it’s running in WSL or a VirtualBox VM it would be fucking hell to get it to play nice with the network.

thatsnothowyoudoit, (edited ) in Nextcloud zero day security
@thatsnothowyoudoit@lemmy.ca avatar

Nextcloud isn’t exposed, only a WireGuard connection allows for remote access to Nextcloud on my network.

The whole family has WireGuard on their laptops and phones.

They love it, because using WireGuard also means they get a by-default ad-free/tracker-free browsing experience.

Yes, this means I can’t share files securely with outsiders. It’s not a huge problem.
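
For anyone curious, each device just gets a tiny config. A minimal sketch (keys, addresses and endpoint are placeholders; the DNS line pointing at a filtering resolver at home, e.g. Pi-hole or AdGuard Home, is what makes browsing ad-free by default):

# /etc/wireguard/wg0.conf on a family device; every value here is a placeholder
[Interface]
PrivateKey = <device-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1                 # home resolver that blocks ads/trackers

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # full tunnel: route all traffic via home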

Chewy7324,

Wireguard is awesome and doesn’t even show up on the battery usage statistics of my phone.

With such a small attack surface I don’t have to worry about zero days for vaultwarden and immich.

BearOfaTime,

Tailscale has a feature called Funnel that lets you share a resource over Tailscale with users who don’t have Tailscale.

I wonder if plain WireGuard has something similar (Tailscale uses WireGuard under the hood)

thatsnothowyoudoit,
@thatsnothowyoudoit@lemmy.ca avatar

Neat, I’ll have to look it up. Thanks for sharing!

gerowen, (edited ) in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

I’ve hosted mine for years on my own bare metal Debian/Apache install and 28 is the first update that has been a major pain. I’ve had the occasional need to install a new package to enable a new feature, or needed to add new/missing indices to the database, but the web interface literally tells you how to do those things, so they’re not hard.

28 though broke several of the “featured” apps that I use regularly, like “Retention”. It also introduced some questionable UI changes that they had to fix with the recent .1 update. I still get occasional errors when trying to move or delete files in the web interface. 28 really feels like beta software, even though we’re a point release in and I got it from the “stable” update channel.
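
(For anyone who hasn’t seen it: the missing-indices fix the web interface suggests boils down to one occ command. The path and the www-data user are the usual defaults; adjust for your install:)

# run as the web server user from the Nextcloud install directory
sudo -u www-data php /var/www/nextcloud/occ db:add-missing-indices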

mhzawadi,

I’ve not moved to 28 yet; I might wait a bit longer after reading your post. My 27 is rock solid, and I don’t understand why so many have issues with Nextcloud.

Maybe the docker installs are pants

unique_hemp,

I have run nextcloud:latest on Docker for the last 2 years and have had 0 problems. Maybe upgrading continuously works better than jumping between point releases.

gerowen,

I’m on my laptop, so I thought I would elaborate on my first comment to give you things to watch out for if/when you update. I’ve been hosting mine from the zip file, manually installed on my own Apache/PHP/MySQL/MariaDB setup, for ages now without issue. It’s been rock solid except for, like I said, the occasional changes required to take advantage of new features, such as adding new indices to the database or installing an additional PHP addon. Here are the things that I noticed when updating to 28.

  • The 3 dot/ellipses menu was missing in the web interface and was replaced with dedicated buttons for “Download”, “Add to Favorites” and “Delete”. Shift clicking was also broken. This meant that when I, for example, take a lot of photos for a holiday, I can’t use the web interface to select a large range of multiple files and then move them all from “InstantUpload” into a more permanent album. I either had to use the mobile app, or do them one at a time. The ellipses menu, along with the options to bulk “move/copy” have been added back since then with the *.1 update, but shift clicking in the web interface to select a range of files is still broken.
  • The “Retention” app, which is listed as a “Featured” app doesn’t function any more. I used it to automatically delete backups of my Signal messenger, files in the “InstantUpload” folder that were over a year old, etc. You can enable it, but it doesn’t actually work and just throws errors in the log file, which is now reported in the “Overview” portion of the “Administration” page with a note of “X number of errors since somedate”, and prevents you getting the green checkmark. It’s probably safe to assume that other apps will also have issues because I had half a dozen get automatically disabled with the update.
  • Occasionally when I use the web interface to move or copy a file, I’ll get an error message that the operation failed. Sometimes this is true, sometimes it’s not and the operation actually succeeded. If it ends up being true and the move did actually fail, doing it again results in a successful move.

It seems like they’ve made some substantial under-the-hood changes to the user interface that shouldn’t have been shipped to the “stable” channel. It’s not completely broken, it “is” usable, especially after they restored my bulk move/copy button, but I still can’t use the Retention app, at least last time I looked, so I’ve literally got daily cron scripts to check those folders for old files and delete them, then trigger an occ files:scan of the affected directories to keep the Nextcloud database in sync with the changes. This however, bypasses the built-in trash bin so I can’t recover the files in the event of an issue. I actually considered rolling back to 27 for a bit, but decided against it, so if I were you, I would stick with 27 for a while and keep an ear to the ground regarding any issues people are having that are or aren’t getting fixed in 28.
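
For what it’s worth, the cron job is nothing fancy. A sketch of the kind of script I mean (the paths, the one-year cutoff and the www-data user are assumptions for a typical bare-metal install, and remember it bypasses the trash bin):

#!/bin/sh
# delete InstantUpload files older than a year, then re-sync Nextcloud's database
DATA_DIR="/var/www/nextcloud/data/myuser/files/InstantUpload"
find "$DATA_DIR" -type f -mtime +365 -delete
sudo -u www-data php /var/www/nextcloud/occ files:scan --path="myuser/files/InstantUpload"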

mhzawadi,

Thanks for the heads up, I’ll wait for 28.0.2 as that is currently cooking.

On the Retention app thing, I got into tagging to remove old backups. I’ll post in the morning how I set it up

SecurityPro, in Nextcloud Performance Improvements
@SecurityPro@lemmy.ml avatar

I had been running Nextcloud on an old laptop using Ubuntu, but that machine died. I have a Windows PC originally built for gaming that I am considering using for Nextcloud. Anyone have any experience with NC on Windows? Any thoughts on the DB setup on Windows?

tofubl,

I don’t think you’ll do yourself any favours setting it up on Windows directly. How about docker+wsl2?

SecurityPro, (edited )
@SecurityPro@lemmy.ml avatar

I have docker on the machine now and thought I’d try that type of install first. Sorry, I’m not familiar with the abbreviation “wsl2”

blasterx, (edited )

It stands for Windows Subsystem for Linux. Here is a link on how to install it.
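
(On recent Windows 10/11, the short version of that guide is a single command from an elevated PowerShell or Command Prompt, followed by a reboot:)

# installs WSL2 together with a default Ubuntu distro
wsl --install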

ikidd,
@ikidd@lemmy.world avatar

100% agree with tofubl. Docker on Windows is a form of self-abuse, like cutting yourself. It’s a train wreck for anything other than a little bit of testing for development work. You will come away with a bad taste in your mouth about Docker; I avoided containers for years because I started with them on Windows Docker.

I’ve run a lot of different scenarios with docker, what I’ve come down to as the cleanest and easiest to maintain is Debian 12 with the Docker convenience script. It’s fast, hassle free, and doesn’t have a bunch of layers of weirdness like using Ubuntu Server with a docker snap that makes troubleshooting a nightmare.
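
(The convenience script install amounts to this; as with anything piped into a shell, it’s worth reading the script before running it:)

# fetch and run Docker's official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh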

dan,
@dan@upvote.au avatar

for anything other than a little bit of testing for development work.

It’s really awesome for development work, though. Visual Studio has built-in Docker support, so I can run my app and its unit tests on both Windows and Linux (via Docker) at the same time on the same system during development.

tofubl,

This sounds interesting.

I use docker in vscode for latex. It saves me the trouble of having to install texlive on my system. I have a task defined that mounts my sources in and runs the compilation in the container.
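
The task boils down to a container run along these lines (a sketch: the texlive/texlive image and main.tex are examples, adjust to taste):

# compile the mounted sources inside a TeX Live container, no local texlive needed
docker run --rm -v "$PWD":/work -w /work texlive/texlive:latest latexmk -pdf main.tex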

Would love to hear about your work flow.

abominable_panda, (edited ) in Streaming local Webcam in a Linux machine, and acessing it when on vacations - which protocol to choose?

MediaMTX can sort a lot of this out for you. Then it’s just a matter of accessing your feed in VLC.

A VPN is the safer option for accessing your network.

Personally, I use it as a camera proxy, but it can record too. I use ZoneMinder otherwise
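
A minimal sketch of the MediaMTX side (the path name, device and encoder settings are assumptions; check the MediaMTX README for the options your version supports):

# mediamtx.yml: publish a local webcam on startup, restart the feed if it dies
paths:
  cam:
    runOnInit: ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH
    runOnInitRestart: yes

VLC can then open it at rtsp://device-ip:8554/cam (8554 being the default RTSP port).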

shadowintheday2,

Thank you, I managed to get it working with MediaMTX and DockoVPN. I still don’t know how I would manage dynamic IP changes during the days I’m away; that would break the VPN

tapdattl,

I just set up a security camera for my dad’s office: zoneminder running the webcam and tailscale for access anywhere.

abominable_panda,

Amazing! Congrats :)

For the dynamic IP address, you can get a free domain name from afraid.org, No-IP, or others, and point your VPN client at the domain name instead of the raw IP address. Then run a cron job to make sure the IP address the domain points to stays up to date
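
A sketch of the cron side (the update URL is provider-specific; for afraid.org you copy a tokenized URL from your account page, and the token below is a placeholder):

# crontab entry: refresh the DDNS record every 5 minutes
*/5 * * * * curl -fsS "https://freedns.afraid.org/dynamic/update.php?<your-token>" >/dev/null 2>&1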

Administrator,

This is the way. Not sure if you can watch WebRTC streams with VLC though, but you can always use RTMP or HLS

TCB13, (edited )
@TCB13@lemmy.world avatar

MediaMTX

Going to Mars seems easier and less resource intensive than that thing.

MediaMTX can sort a lot of this out for you. Then it’s just a matter of accessing your feed in VLC.

Here is how you really “just access your feed from VLC”, in three easy steps:

Step 1. Configure nginx repositories (nginx.org/en/linux_packages.html)

Step 2. Install nginx / nginx-rtmp

Step 3. Edit nginx config to add:


rtmp {
        server {
                listen 1935;
                chunk_size 4096;
                allow publish 127.0.0.1;
                deny publish all;

                application live {
                        live on;
                        exec_pull /usr/bin/ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -i /dev/video4 -copyinkf -codec copy -f flv rtmp://127.0.0.1/live/stream;
                        record off;
                }
        }
}

A few notes:

  • /dev/video4 is your camera;
  • Some systems (Debian) may require sudo usermod -a -G video www-data, because ffmpeg will be launched as the www-data user, which doesn’t have access to the video cameras by default;
  • It will even turn off the camera if nobody is connected;
  • Use ffmpeg -f v4l2 -list_formats all -i /dev/video0 to find out what formats your camera supports;
  • Watch the stream in VLC with the URL rtmp://device-ip/live/stream

Enjoy.

oranki, in Jellyfin on a vps

Most likely, a Hetzner storage box is going to be so slow you will regret it. I would just bite the bullet and upgrade the storage on Contabo.

Storage in the cloud is expensive, there’s just no way around it.

crony,
@crony@lemmy.cronyakatsuki.xyz avatar

I will most likely just do that in the end.

Really hope god will have mercy on me and allow me to move out soon to a bigger place.

electric_nan,

Why do you say that? I use it for my 12+ TB library and it works fine. I’m on the west coast USA, and my vps and storage box are on the east coast.

originalucifer, in Am I in over my head? Need some encouragement!
@originalucifer@moist.catsweat.com avatar

start small, and you should be able to do it no problem.

first off, ignore the wd. its storage. you dont want your storage and your processing mixing (i wouldnt anyway)

  • find yourself an old, shitty pc with >=4gb of ram, processor irrelevant.
  • slap a small ssd in, or dont. install linux
  • install docker
  • start installin containers

lots of available, preconfigured containers with instruction over at:
https://hub.docker.com

when you get your containers functional you can connect your media software (jellyfin) to the wd storage
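
as a sketch of that last step, the jellyfin container with the wd share mounted in looks something like this (paths are examples; the image and port are the standard ones):

# docker-compose.yml: jellyfin with the wd storage mounted read-only
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"               # default web UI port
    volumes:
      - ./config:/config
      - /mnt/wd/media:/media:ro   # your wd drive, adjust the path
    restart: unless-stopped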

andrew,
@andrew@lemmy.stuart.fun avatar

Mixing storage and processing is now cool again. It’s just called hyperconverged infrastructure.

wreckedcarzz,
@wreckedcarzz@lemmy.world avatar

old, shitty pc

processor irrelevant

I knew this day would come! blows the dust off my gateway machine with a P4 @ 1.6GHz Look, it’s even got a fdd, perfect for backup duty! If I could only find that Zip drive though…

originalucifer,
@originalucifer@moist.catsweat.com avatar

id be shocked if that p4 had 4gb of ram though

wreckedcarzz, (edited )
@wreckedcarzz@lemmy.world avatar

It can take 2 sticks of 2GB, though it’s not 64-bit capable

BeatTakeshi,
@BeatTakeshi@lemmy.world avatar

My Pentium III had a turbo switch… Nostalgia

quizno50, (edited ) in Am I in over my head? Need some encouragement!

I’ve been doing Linux server administration for 20 years now. You’ll always have to duckduckgo things; you’ll never keep it all in your head, even for a single server with a handful of services. Docker and containers really aren’t too hard. Just start small and build from there. If you can learn how the chroot command works, you’ve pretty much learned docker: it’s just chroot with more features (see the toy example below).
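
A toy illustration of that idea (assumes a glibc-based distro; containers add namespaces, cgroups and images on top of this):

# build a minimal root filesystem containing just a shell
mkdir -p /tmp/jail/bin
cp /bin/sh /tmp/jail/bin/
# copy the shell's shared libraries (and the loader) into the jail
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    cp --parents "$lib" /tmp/jail/
done
# inside, "/" is now /tmp/jail: the shell's own little world
sudo chroot /tmp/jail /bin/sh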

billwashere,

Yep same here. Professional IT for over 25 years. Nobody knows everything. It’s ok to fail. Just keep swimming. And when you do get something working…. that high is unbelievable. It’s like a drug addiction and will drive you to do more and more. Good luck!!!

KLISHDFSDF, in 13 Feet Ladder
@KLISHDFSDF@lemmy.ml avatar

If you’re on Firefox on desktop/laptop, check out Bypass Paywall [0]. It was removed from the firefox add-on store due to a DMCA claim [1], but can be manually installed (and auto updates) from gitlab. The dev even provides instructions on how to add custom filters to uBlock Origin [2], so you don’t have to add another extension but still get some benefit.

[0] gitlab.com/…/bypass-paywalls-firefox-clean

[1] winaero.com/mozilla-has-silently-removed-the-bypa…

[2] gitlab.com/…/bypass-paywalls-clean-filters

hdnsmbt,

Your correct indexing is highly appreciated!

desmosthenes,
@desmosthenes@lemmy.world avatar

took the words right out my mouth

AtariDump,

It must have been while he was kissing you.

ASeriesOfPoorChoices,

also, Bypass Paywalls Clean works on not-Firefox too, like Chrome, or Kiwi (Android).

ssdfsdf3488sd,

That’s the dude who was butt hurt about something this dude did: github.com/iamadamdev/bypass-paywalls-chrome

and so forked it and arguably does a better job, lol.

krash, in PSA: The Docker Snap package on Ubuntu sucks.

But this is by design, snap containers aren’t allowed to read data outside of their confinements. Same goes for flatpak and OCI-containers.

I don’t use snap myself, but it does have its uses. Bashing it just because it’s popular to hate on snap won’t yield a healthy discussion on how it could be improved.

aniki, (edited )

Snap can be improved with this one simple step

sudo apt-get purge snapd -y

There’s no improving snap. It sucks, full stop. Just the mount clutter alone makes it garbage.

The solution already exists and it’s called Flatpak, and it works MUCH BETTER than Canonical-only schlock.

Limitless_screaming,
@Limitless_screaming@kbin.social avatar

Snap sucks, but not for the reason OP stated. There's a decillion reasons for why Snaps suck, why make up a reason that applies to other formats that are actually good?

hperrin,

Ok then don’t publish an application that clearly needs access to files outside of the /home directory. Or at least be upfront about how limited it is when run as a snap.

peter,
@peter@feddit.uk avatar

The Linux community loves to put the responsibility on the user to understand every facet of what they’re trying to do without explaining it

MangoPenguin,
@MangoPenguin@lemmy.blahaj.zone avatar

Agreed, it’s not user friendly at all.

throwafoxtrot,

Does it clearly need access to files outside the /home directory though?

You said your volume mount failed. How about mounting something inside your home folder into the docker container?

hperrin, (edited )

I have a 20TB RAID array that I use for a number of services mounted at /data. I would like Nextcloud to have access to more than the 128GB available to /home. I’m not willing to move my data mount into /home and reconfigure the ~5 other services that use it just to work around some stupid Snap limitation. Who knows whether Snap even can access data across filesystems if they’re mounted in home. I wouldn’t put it past the Snap devs to fall down on that point either.

Yes, Docker clearly needs access to all files. It is meant for running server software, and server software is supposed to be flexible in its setup. To me, this limitation makes it completely unusable. Nextcloud is only the first service that needed access to that directory. I’ll also be running MinIO there for blob storage for a Mastodon server. I’ll probably move Jellyfin into a Docker container, and it’ll need access too.

The fact that this giant issue with Snap is not made clear is my biggest problem with it. I had to figure it out myself over the course of two hours when there are zero warnings or error messages explaining it. What an absolutely unnecessary waste of time, when it could have warned me at install that if I wanted a completely functional version of Docker, I should use the apt package.

I will never use any Snap package again. This was such a bad experience that I probably won’t even be using Ubuntu Server going forward. I already use Fedora for desktop. And the fact that a few people here are basically saying it’s my fault for not already knowing the limitations imposed on Snap packages is just making it more obvious that Ubuntu has become a toxic distro. It’s sad, because Ubuntu got me into Linux back with Hardy Heron 8.04. I’ve been running Ubuntu servers since 9.10. I used to be excited every six months for the new Ubuntu release. It’s sad to see something you loved become awful.

thesmokingman,

The issue here is that Canonical pushed the snap install without warning about its reduced functionality. I don’t think highlighting a wildly different experience between a snap install and the Docker experience people are used to from the standard package install is “bashing it just because it’s popular to hate on snap.” For example, if you take a fresh Ubuntu server 22 install and use the snap package, not realizing that snaps have serious limitations which are not explicitly called out when the snap is offered in the installation process, you’re going to be confused unless you already have that knowledge. It also very helpfully masks everything so debugging is incredibly difficult if you are not already aware of the snap limitations.

hperrin, (edited )

This exactly. Because some poor shmuck might spend two hours trying to get Nextcloud to work with it.

elfio, in Tempo – An open source music client for Subsonic built natively for Android, now with Android Auto support

I saw you did some steps to bring this to F-Droid. Is it still on your roadmap?

antoniocappiello,

Yes, it’s been on my roadmap for a while. I also created a pull request several months ago to get it into the repo, but it was never accepted (partly my fault, because I didn’t follow the verification process properly).

elfio,

Thanks for the info!
