selfhosted


haui_lemmy, (edited ) in Why docker

Imo, yes.

  • only run containers from trusted sources (btw. google, ms, apple have proven they can't be trusted either)
  • run apps without dependency hell
  • even if someone breaks in, they’re not in your system but in a container
  • have everything web-facing separate from the rest
  • get per-app resource statistics

Those are just what was in my head. Probably more to be said.
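As a sketch of what those points look like in practice, here is a minimal hypothetical compose file (the service name, image, and limits are all placeholders) that pins a specific image version, caps resources per app, and exposes only one port to the host:

```yaml
services:
  myapp:                  # placeholder service name
    image: myapp:1.2.3    # pin a specific, trusted image version
    mem_limit: 512m       # per-app memory cap
    cpus: "1.0"           # per-app CPU cap
    ports:
      - "8080:8080"       # only this port reaches the host
```

`docker stats` then reports live per-container CPU and memory usage, which is where the per-app resource statistics come from.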

Gooey0210,
  1. Even if someone breaks in, they are not a user, but root 🤝
haui_lemmy,

*in that container, not in the system

invertedspear,

Also the ability to snapshot an image, goof around with changes, and if you don’t like them restore the snapshot makes it much easier to experiment than trying to unwind all the changes you make.

haui_lemmy,

I haven't actually tried that. Might need to check it out. :)

Moonrise2473, in Why docker

About the root problem: as of now, new installs try to let the user run everything as a limited user. And the program is ran as root inside the container, so in order to escape from it an attacker would need a double zero-day exploit (one to get RCE inside the container, one to escape the container).
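As a sketch (the image name is a placeholder, and not every image supports it), you can also drop root inside the container itself, so even a successful in-container RCE lands in an unprivileged account:

```shell
# Run the container's main process as an unprivileged UID:GID
# instead of the image's default (often root).
docker run -d --name myapp --user 1000:1000 myapp:latest
```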

The alternative to “don’t really know what’s in the image” is usually: “just download this easy, minified and incomprehensible trustmeimtotallynotavirus.sh script and run it as root”. That requires much more trust than a container that you can delete, with no traces, in literally seconds.

If the program that you want to run requires python modules or node modules then it will make much more mess on the system than a container.

Downgrading to a previous version (or a beta preview) of the app you’re running due to bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.
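A hypothetical example of that rollback (image name and tags are placeholders):

```shell
# In docker-compose.yml, change the tag back to the known-good version:
#   image: myapp:1.5.0   ->   image: myapp:1.4.2
# then recreate the container on the older image:
docker compose up -d
```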

Finally, migrating to a fresh new server is just docker compose down, rsync to the new server, then docker compose up -d. No praying to ten different gods because after three years you forgot how you installed the app on bare metal.
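The whole migration, sketched (paths and hostname are placeholders; this assumes your data lives beside the compose file as bind mounts):

```shell
docker compose down                                 # stop the stack cleanly
rsync -avz ./myapp/ user@newserver:~/myapp/         # copy compose file + data
ssh user@newserver 'cd ~/myapp && docker compose up -d'   # bring it back up
```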

Docker is perfect for common people like us self-hosting at home; the professionals at work use Kubernetes.

itsnotits,

the program is run* as root

Samsy, (edited ) in PSA: The Docker Snap package on Ubuntu sucks.

TIL docker has a snap package, and I can’t stop laughing. What’s next? A flatpak or AppImage?

andrew,
@andrew@radiation.party avatar

A flatpak of the snap, running in a docker container inside a vm for maximum security.

Bransonb3, in AppleTV complete replacement opinions

I have tried Roku, Fire TV, Chromecast (not the new models with an interface), and AppleTV. So far Apple TV is the cleanest without ads or sponsored content on the home screen.

If you find something better please let me know.

radix,
@radix@lemmy.world avatar

I like my Roku, but it would be much more annoying without a pihole to block the ads.

AtariDump,

And telemetry.

wreckedcarzz,
@wreckedcarzz@lemmy.world avatar

When I switched my family from predatory directv, this was obviously a question I had, and I ended up going with chromecasts (gen 2 and 3/ultra). Once I showed them how to use their phone as the controller, it immediately clicked, which was fantastic. I thought about an atv or an android box, but that would involve multiple profiles and remembering to switch when someone else wanted to use it (android TV boxes have this buried in the system settings; and I’m the only one with an apple account). Ads were a showstopper for me too, so the pictures/art on the cc when idle was great.

Curious why you went the other way :o

AtariDump,

Because Google is collecting data on EVERYTHING you do.

lemmy.world/comment/6326127

wreckedcarzz,
@wreckedcarzz@lemmy.world avatar

But as a person who doesn’t use G services (well, Grayjay)… the question still stands

AtariDump,

You use Google services whether you know it or not.

www.forbes.com/sites/jasonevangelho/2019/…/amp/

randomcruft,
@randomcruft@lemmy.sdf.org avatar

Understood about the ads / sponsored content. I’ve not used anything but an ATV, but I’ve heard similar (ads, interface, etc.). If I come up with a different solution, I will revive the post and let folks know. Thanks.

Gutless2615, in Plex To Launch a Store For Movies and TV Shows

I’m old enough to have not trusted Plex since the original XBMC split.

cyberpunk007,

I still have an original Xbox with Xbmc on it lol

billwashere,

Yeah, I think I do too, in the attic somewhere. Mod chips on those things were a bitch back then when it all first started. I think they got better though.

Gutless2615,

007 Nightfire softmod all the way.

billwashere,

They made a software jailbreak?!?! So I just looked this up and I have Splinter Cell and MechAssault. I may have to dig that thing out and give this a try.

Gutless2615,

My man.

eager_eagle, (edited ) in rsync speed goes down over time
@eager_eagle@lemmy.world avatar

Bandwidth (disk and network) is just one metric. Could it be an increase in number of IOPS due to syncing several small files?

Tangent5280,

Yeah, this is what I thought too: proliferation of small files.
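One way to check that theory while the sync runs (assuming the sysstat package's iostat is available) is to watch per-device IOPS alongside throughput:

```shell
# r/s and w/s columns are read/write IOPS; rMB/s and wMB/s are throughput.
# Many small files show high IOPS with low MB/s. 5 samples, 2 s apart.
iostat -dxm 2 5
```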

CalicoJack, in File size preference for Radarr?

You got a remux, which is uncompressed. You can turn those off in Radarr to avoid those surprises.

If you want to fine-tune your file sizes (and quality) further, you can set up custom formats and quality profiles. The Trash Guides explain it well, the “HD Blu-ray + Web” profile on that page is a solid starting point. It’ll usually grab 6-12GB movies, but you can tweak it if you want them smaller.

relaymoth,

Trash Guides FTW. I’ve used them for all my *arr setups and it’s been flawless.

TwiddleTwaddle,

Doesn’t Trash Guides prefer larger files though? Iirc if you just do everything as they recommend you’ll always be grabbing the highest quality stuff available, which is the opposite of what this person wants.

relaymoth,

The guide doesn’t set an upper bound on the UHD quality profiles, but that doesn’t mean you have to set up yours exactly the same.

I have mine set with reasonable limits and have never run into a problem with file size, just have to make sure you’re setting the values to something that’s a) realistic and b) that you can live with.

One thing to note: if you set your threshold cutoffs properly you don’t have to worry about downloading files that are always at the upper end of the limit. Once the service downloads a file that meets the threshold it stops downloading for that episode/movie. If it grabs a file that’s below the threshold, it will keep trying to upgrade the file until the threshold is met.

phanto, in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

I have an x86 proxmox setup. I stuck a kill-o-watt on it. Keep your pi setup if it does what you want, and realize that there’s someone out there who is jealous of your power bill.

chunkystyles,

My x86 Proxmox consumes about 0.3 kWh a day at around 15% average load. I’ve only had the Kill A Watt on it for a day, so I don’t know how accurate that is, but it shouldn’t be too far off.

BearOfaTime,

How bad is it?

My current file server, an old gaming rig, consumes 100w at idle.

I’m considering a TrueNAS box running either 2.5" ssd’s or NVME sticks (My storage target is under 8TB, and that’s including 3 years projected growth).

krash,

Holy crap! I have an N100 SFF that consumes 5-6 W idle (with WiFi on) and I have an old i5 (gen 6, I think) that consumes 30 at idle. Your rig is definitely not meant to act as a server (unless you want to mine bitcoins or run BOINC…)

BearOfaTime,

Lol, yea, it’s old, was built for performance, and hasn’t run right in a while.

I’m looking to set up a NAS and turn that thing off.

helenslunch,
@helenslunch@feddit.nl avatar

How bad is it? My current file server, an old gaming rig, consumes 100w at idle.

That’s very bad haha. Most home servers for personal use are using 7-10w.

Although you’ll have to do the math with your local energy prices to determine how important that is. It’s probably not.

BearOfaTime,

It’s $1/day. I’ve done the math a few times

helenslunch,
@helenslunch@feddit.nl avatar

Yeah so you’d make your money back pretty quickly picking up a dedicated PC for that.

saiarcot895, (edited )

$1/day? At 100W average power usage, that’s 2.4kWh per day, suggesting that where you live, the price is 41.67 cents per kWh, roughly double that of California.

Is electricity that expensive where you live?

Edit: it’s been a while since I lived in the Bay area, I hadn’t realized that the electricity price now ranges from 38-62 cents per kWh, depending on rate plan and time.
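The arithmetic above, as a quick scriptable sanity check (wattage and rate hard-coded to the figures in this thread):

```shell
#!/bin/sh
# Back-of-the-envelope check: 100 W continuous draw for 24 h,
# at the 41.67 cents/kWh rate implied by $1/day.
watts=100
hours=24
rate_cents=41.67

kwh_per_day=$(awk -v w="$watts" -v h="$hours" 'BEGIN { printf "%.1f", w * h / 1000 }')
cost_per_day=$(awk -v k="$kwh_per_day" -v r="$rate_cents" 'BEGIN { printf "%.2f", k * r / 100 }')

echo "$kwh_per_day kWh/day -> \$$cost_per_day/day"
```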

stevehobbes, (edited )

Go tweak your power and fan settings. 100w at idle is way too much unless it’s 15 years old.

Fans, especially small ones are very sneaky energy hogs. Turn them waaay down.

BearOfaTime,

Nothing to be done. It’s old. Only fan to adjust is cpu, and I can tell when the cooler is getting dirty because the fan stays at higher speeds.

Otherwise there’s one large, slow rpm fan in the case, always on low speed.

nezbyte,

Depends on what your server is running. Multiple GPUs, HDDs, and other fun items start to add up to well over 100W. I justify it by using it to keep my 3d printer filament dry.

stevehobbes,

If you have multiple GPUs in your home server you’re probably doing it wrong. But even then, at idle, with no displays connected, the draw will be surprisingly low.

Most systems with some ssd/NVMe, 2-4 DIMMs and maybe a drive or two should idle closer to 50w-60w.

DarkDarkHouse,
@DarkDarkHouse@lemmy.sdf.org avatar

If you’re getting two gaming PCs out of one hypervisor, you might be doing it right.

nezbyte,

Agreed, don’t do what I do if you value your power bill. To be fair, my network switch pulls more power than my cobbled together server anyhow.

fuckwit_mcbumcrumble,

Newer CPUs tend to use a good chunk more power under low loads than some older ones. Going from 1st-gen Ryzen to 2nd-gen got me about 20 watts higher total system power draw with my use case. And 3rd-gen is even worse.

Intel is MUCH worse at it than AMD, but every generation AMD keeps cranking up those boost clocks and power draw, and it really can make a difference at low to mid-range loads.

My Ryzen 3000 based system uses about 90 watts at “idle” with all my stuff running and the hard drives on.

stevehobbes,

It’s probably more about aggressive default bios speeds. Tweak your c states / bios overclocking / pcie power management / windows power management features. Idle power has gone down on most chips.

The Ryzen 3000 should truly idle closer to 20-30w.

fuckwit_mcbumcrumble,

That is after tweaking bios settings. Originally I was at around 100 watts, now I’m closer to 80.

Keep in mind that’s with a bunch of hard drives, and it’s not a 100% idle, more of a 90% idle which is where modern “race to idle” CPUs struggle the most.

originalucifer, in Stalwart v0.5.2
@originalucifer@moist.catsweat.com avatar

Dunno, mail has kinda been off my list.

Like Taco Bell... it always sounds good until you’re just about finished and you realize what you’ve done to yourself.

redcalcium, (edited ) in 13 Feet Ladder

It amazes me that all it takes is changing the user agent to Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) and it can bypass paywalls on many sites? I thought those sites would try harder (e.g. checking if the IP address truly belongs to Google), but apparently not.
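For the curious, the whole trick fits in one command (the URL is a placeholder; whether it works depends entirely on the site checking only the user-agent header):

```shell
# Present a Googlebot user-agent string; many paywalls special-case it.
curl -sL \
  -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  "https://example.com/some-paywalled-article"
```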

aniki,

Same. I thought there would be more stuff happening in the background, but when I saw it’s just hijacking the Googlebot headers to display the HTML, I was a bit disappointed it’s so stupidly easy.

andrew,
@andrew@radiation.party avatar

Checking ip ownership is a moving target more likely to result in outcomes these sites don’t want (accidentally blocking google bots and preventing results from appearing on google).

Checking useragent is cheap, easier, unlikely to break (for this purpose, anyway) and the percentage of folks who know how to bypass this check is relatively slim, with a pretty small financial impact.

douglasg14b,
@douglasg14b@lemmy.world avatar

It’s not necessarily a moving target when entire blocks can be associated with Google.

andrew,
@andrew@radiation.party avatar

Unless they are permanently only using specific addresses or blocks and will never change that up, I’d consider it a moving target.

efstajas,

Google literally has an official list of IP ranges for their crawlers, complete with an API that returns the current ranges, which you can use to automate the check. Hardly a moving target; and even if it were, a moving target doesn’t matter when you can know exactly where it is at all times.
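That list is published as JSON; here is a sketch of pulling the current IPv4 ranges (assumes curl and jq are installed; the endpoint is Google's documented Googlebot IP-range feed):

```shell
# Print the IPv4 CIDR blocks Google currently uses for Googlebot.
curl -s "https://developers.google.com/static/search/apis/ipranges/googlebot.json" \
  | jq -r '.prefixes[].ipv4Prefix // empty'
```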

jabathekek, in Sounds like Haier is opening the door!
@jabathekek@sopuli.xyz avatar

The spacing in the email screwed up the formatting:

Dear Andre,

I’m Gianpiero Morbello, serving as the Head of IOT and Ecosystem at Haier Europe.

It’s a pleasure to hear from you. We just received your email, and coincidentally, I was in the process of sending you a mail with a similar suggestion.

I want to emphasize Haier Europe’s enthusiasm for supporting initiatives in the open world. Please note that our IOT vision revolves around a three-pillar strategy:

  • achieving 100% connectivity for our appliances,
  • opening our IOT infrastructure (we are aligned with Matter and extensively integrating third-party connections through APIs, and looking for any other opportunity it might be interesting),
  • and the third pillar involves enhancing consumer value through the integration of various appliances and services, as an example we are pretty active in the energy management opening our platform to solution which are coming from energy providers.

Our strategy’s cornerstone is the IOT platform and the HON app, introduced on AWS in 2020 with a focus on Privacy and Security by Design principles. We’re delighted that our HON connected appliances and solutions have been well-received so the number of connected active consumers is growing day after day, with high level of satisfaction proven by the high rates we receive in the App stores.

Prioritizing the efficiency of HON functions when making AWS calls has been crucial, particularly in light of the notable increase in active users mentioned above. This focus enables us to effectively control costs.

Recently, we’ve observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives, but also to cooperate in better serving your community.

I propose scheduling a call involving our IOT Technology department to address the issue comprehensively and respond to any questions both parties may have.

Hope to hear back from you soon.

Best regards

Gianpiero Morbello Head of Brand & IOT Haier Europe

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

Thanks, on my phone and can’t edit it well right now

ashok36, (edited ) in Plex To Launch a Store For Movies and TV Shows

I was trying to think how Plex thinks this is going to play out, knowing that this move will piss off their customer base. Then I realized, this isn’t a play for Plex’s existing customer base. This is a play for their customer’s “friends and family” that are enjoying shared libraries already.

Their ‘customers’ have, for many years, been building up a large user base of technologically naive people with Plex apps installed who could never run their own server. If Plex knows, for example, that for every paying customer there are three other users pulling from someone’s library, that’s a huge opportunity to convert those users to paying customers.

Everyone who set up a Plex server and then shared it with their tech-phobic parents, cousins, friends, etc.: we made this possible.

I don’t like it but I can’t argue with the logic from Plex here.

-edit- Tightened up the grammar.

skozzii,

I like this theory and I hope your logic checks out. I am a little worried that I will have to make a change soon.

cyberpunk007,

Yikes this is a good theory. Eventually we will be snuffed out. Sharing will stop. Etc.

billwashere,

Well maybe not. Without the shared libraries I doubt the tech-phobic users will stick around for movies they can likely find other places, especially since I doubt Plex gets very good deals for content.

ashok36,

I wouldn’t be so sure of that. It’s possible, yeah, but if my theory is right, they see library sharing as the carrot to get normies to download the Plex app onto their Roku or Apple TV.

Pivoting to a streaming only app would close off that avenue for user acquisition permanently.

avidamoeba, (edited )
@avidamoeba@lemmy.ca avatar

Is that bad though? I don’t mind renting a movie I really like even if my friend has it on their Plex. Especially if it’s from a small studio. Currently I do that via Google TV. Plex Inc being a small private company might use the money better than a publicly traded giant. I wouldn’t mind my friends and family spending a few bucks on it either.

Of course, if Plex starts enshittifying existing private streaming features to push this, that’ll be another matter altogether. Which would not be unexpected.

lambda,
@lambda@programming.dev avatar

Yeah, I’ve been switched to using Jellyfin anyways. I hope they can do this successfully. They haven’t been the best for a while anyways…

avidamoeba,
@avidamoeba@lemmy.ca avatar

I’ll probably trial Jellyfin too in preparation to migrate the family should push come to shove.

lambda,
@lambda@programming.dev avatar

It’s worth donating if you have the means to. I paid for a lifetime Plex subscription. So, I felt uncomfortable not donating to Jellyfin. They take donations on open collective.

princessnorah, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages
@princessnorah@lemmy.blahaj.zone avatar

Is there the potential for SingleFile html archives rather than pdf & screenshots? I’d imagine it’d be a fair bit smaller file.

cmhe,

Or other standard archiving formats like WARC.

There also is github.com/ArchiveBox/ArchiveBox which looks a bit similar.

krash, in PSA: The Docker Snap package on Ubuntu sucks.

But this is by design: snap containers aren’t allowed to read data outside of their confinement. Same goes for Flatpak and OCI containers.

I don’t use snap myself, but it does have its uses. Bashing it just because it’s popular to hate on snap won’t yield a healthy discussion on how it could be improved.

aniki, (edited )

Snap can be improved with this one simple step

sudo apt-get purge snapd -y

There’s no improving snap. It sucks – full stop. Just the mount clutter alone makes it garbage.

The solution exists and it’s called Flatpak, and it works MUCH BETTER than Canonical-only schlock.

Limitless_screaming,
@Limitless_screaming@kbin.social avatar

Snap sucks, but not for the reason OP stated. There's a decillion reasons for why Snaps suck, why make up a reason that applies to other formats that are actually good?

hperrin,

Ok then don’t publish an application that clearly needs access to files outside of the /home directory. Or at least be upfront about how limited it is when run as a snap.

peter,
@peter@feddit.uk avatar

The Linux community loves to put the responsibility on the user to understand every facet of what they’re trying to do without explaining it

MangoPenguin,
@MangoPenguin@lemmy.blahaj.zone avatar

Agreed, it’s not user friendly at all.

throwafoxtrot,

Does it clearly need access to files outside the /home directory though?

You said your volume mount failed. How about mounting something inside your home folder into the docker container?
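A sketch of that workaround (the paths and the Nextcloud data directory are illustrative; the point is only that the bind-mount source lives under $HOME, which strictly-confined snaps can reach):

```shell
mkdir -p "$HOME/nextcloud-data"
docker run -d --name nextcloud \
  -v "$HOME/nextcloud-data:/var/www/html/data" \
  nextcloud
```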

hperrin, (edited )

I have a 20TB RAID array that I use for a number of services mounted at /data. I would like Nextcloud to have access to more than the 128GB available to /home. I’m not willing to move my data mount into /home and reconfigure the ~5 other services that use it just to work around some stupid Snap limitation. Who knows whether Snap even can access data across filesystems if they’re mounted in home. I wouldn’t put it past the Snap devs to fall down on that point either.

Yes, Docker clearly needs access to all files. It is meant for running server software, and server software is supposed to be flexible in its setup. To me, this limitation makes it completely unusable. Nextcloud is only the first service that needed access to that directory. I’ll also be running MinIO there for blob storage for a Mastodon server. I’ll probably move Jellyfin into a Docker container, and it’ll need access too.

The fact that this giant issue with Snap is not made clear is my biggest problem with it. I had to figure it out myself over the course of two hours when there are zero warnings or error messages explaining it. What an absolutely unnecessary waste of time, when it could have warned me at install that if I wanted a completely functional version of Docker, I should use the apt package.

I will never use any Snap package again. This was such a bad experience that I probably won’t even be using Ubuntu Server going forward. I already use Fedora for desktop. And the fact that a few people here are basically saying it’s my fault for not already knowing the limitations imposed on Snap packages is just making it more obvious that Ubuntu has become a toxic distro. It’s sad, because Ubuntu got me into Linux back with Hardy Heron 8.04. I’ve been running Ubuntu servers since 9.10. I used to be excited every six months for the new Ubuntu release. It’s sad to see something you loved become awful.

thesmokingman,

The issue here is that Canonical pushed the snap install without warning about its reduced functionality. I don’t think highlighting a wildly different experience between a snap install and the Docker experience people are used to from the standard package install is “bashing it just because it’s popular to hate on snap.” For example, if you take a fresh Ubuntu server 22 install and use the snap package, not realizing that snaps have serious limitations which are not explicitly called out when the snap is offered in the installation process, you’re going to be confused unless you already have that knowledge. It also very helpfully masks everything so debugging is incredibly difficult if you are not already aware of the snap limitations.

hperrin, (edited )

This exactly. Because some poor shmuck might spend two hours trying to get Nextcloud to work with it.

SirMaple_, in Started to move off Google (not strictly self-hosted)
@SirMaple_@lemmy.sirmaple.ca avatar

If you have Proton Premium, point your domain to SimpleLogin and use it. It’s included with Proton Premium. It’s helped me root out 2 places so far that have sold my email address or were compromised and failed to disclose it.

AlecSadler,

Serious question, why SimpleLogin vs Proton aliases?

originalucifer,
@originalucifer@moist.catsweat.com avatar

If you’re running a full domain, you don’t even need to manually create aliases unless you need to reply/send as.

I’ve found I rarely need to do that, so you can literally just use an email address off the top of your head, have it all forwarded to a catch-all, and you’re done. None of this extra-service stuff. Again, unless you require send-as/aliasing.

AlecSadler,

Yeah, my bad, that’s what I do - so I just wasn’t sure what the benefit of SimpleLogin was…fully open to admit maybe I’m missing something though.

I basically create an email alias for every service I use and when leaks happen I know exactly who the offender is - which is nice…I guess.

kontox,

You cannot turn off the Proton aliases. One of my aliases (those with +) got compromised and I’m still getting phishing emails on that one. You can create a rule for that mail, but you cannot completely disable it. There is also Proton Pass, which does the same as SimpleLogin and also stores passwords. You should check it out as well.

AlecSadler,

Ahh, okay, that makes some sense. Thanks!

helenslunch,
@helenslunch@feddit.nl avatar

You cannot turn off the proton aliases

What do you mean? Of course you can.

helenslunch,
@helenslunch@feddit.nl avatar

I’ve caught a couple but they weren’t subtle about it at all. I got an email from Norton antivirus that referenced the seller directly. No shame.
