selfhosted


cosmic_slate, in What happens to my instance if my domain expires?
@cosmic_slate@dmv.social avatar

If someone else buys the domain, then your instance likely won’t exist anymore and you’ll have to get a new domain.

Spend the $12/yr on a .com, it’s a lot less of a headache in the long-run.

accidentalloris,

Yes! I got a cheap .tech domain, and it kept increasing in price year after year. Eventually it became a lot cheaper to just grab a .com.

Cloudflare has the best prices for domain names; they sell them at cost.

HotChickenFeet,

Is .com fixed in some way to prevent the same scaling? I thought it was basically the domain sellers increasing the prices year over year

axzxc1236,

Pick an address between 000000.xyz and 999999999.xyz; they are sold and renewed at dirt-cheap prices.

originalucifer, (edited ) in Started to move off Google (not strictly self-hosted)
@originalucifer@moist.catsweat.com avatar
PerogiBoi,
@PerogiBoi@lemmy.ca avatar

Thank u for using a transparent gif. It’s refreshing and delicious.

XTL,

Wait till you get transparent pngs. It’ll be like it should have always been.

PerogiBoi,
@PerogiBoi@lemmy.ca avatar

Aaaaaa I meant png this whole time. My life is ogre.

InTheEnd2021, in AppleTV complete replacement opinions

I host a Plex server for streaming, and my Apple TV 4K (2021) would refuse to play high-bitrate media, repeatedly displaying an error message telling me I’d exceeded the limit. I started searching online, and everyone consistently called the Nvidia Shield Pro the best one can buy. Bought it, love it, now have three. But all I use it for is Plex. I’ve basically made my server all the streaming services combined into one.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

If I may ask, what are you using to host the Plex server? I’ve read about people using NAS devices (Synology, etc. which has Plex available natively) and running a PC with a lot of storage. Appreciate the comment!

chiisana, in AppleTV complete replacement opinions

If you have Apple users at home, the integrated experience and the video quality are going to be very hard to match on other platforms. My parents use a Chromecast, and it takes so many more steps to send content to their media system. The video quality when casting also suffers a little, though that may be because they’re using a cheap ISP router/AP combo box while I’m using Ubiquiti APs. Having said that, I do think the A15 processor in the most recent model is overkill in the graphics-performance department, so I wouldn’t completely rule out device capability as the cause of the video quality difference.

Based on my reading, I think the most recent high-end Nvidia Shield TV Pro is the closest in terms of raw performance, and even then it may be a bit behind. The Tegra X1+ found in the Shield Pro is on the Maxwell architecture, which is older than the GeForce 1080 series’ Pascal architecture, if I’m not mistaken. That would date it to around 2015; whereas the previously mentioned A15 processor in the most recent AppleTV 4K was introduced in 2021 with the iPhone 13 series.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

And with my luck, the day I buy a Shield is the day they announce a new one :) Luckily it’s just me, so I’m the only one to complain if I do something dumb, ha! I’ll start keeping an eye on the Shield, as I’m not in a rush to buy / change.

Appreciate the device info and response!

bjoern_tantau, in Nextcloud zero day security
@bjoern_tantau@swg-empire.de avatar

For protection against ransomware you need backups. Ideally ones that are append-only where the history is preserved.

thisisawayoflife,

Good call. I do some backups now but I should formalize that process. Any recommendations on selfhost packages that can handle the append only functionality?

bjoern_tantau,
@bjoern_tantau@swg-empire.de avatar

No, I’d actually be interested in that myself. I currently just rsync to another server.
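For what it’s worth, plain rsync can get close to preserved history with --link-dest: each run hard-links unchanged files against the previous snapshot, so old runs stay browsable without duplicating data. A rough sketch (hostnames and paths are placeholders, not a tested setup):

```shell
# Snapshot-style rsync: unchanged files are hard-linked against the
# previous run, so each dated directory is a full browsable snapshot
# that only costs the space of the changed files.
today=$(date +%F)
rsync -a --delete \
  --link-dest=/backups/latest \
  /data/ backupserver:/backups/"$today"/
# Point "latest" at the run that just finished:
ssh backupserver "ln -sfn /backups/$today /backups/latest"
```

Note this still isn’t append-only: an attacker with the same SSH access could delete old snapshots, so it protects against accidents more than ransomware.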

baccaratrevivify,
@baccaratrevivify@lemmy.world avatar

Borg backup has append only
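To expand on that: Borg’s append-only mode is enforced server-side via borg serve, typically by pinning the client’s SSH key to a restricted command. A rough sketch (paths, user names, and the key are placeholders):

```shell
# On the BACKUP SERVER, in ~backup/.ssh/authorized_keys — the client's
# key may only run borg in append-only mode, confined to its own path:
command="borg serve --append-only --restrict-to-path /backups/client1" ssh-ed25519 AAAA... client-key

# On the CLIENT, back up as usual; prune/delete requests are refused
# by the server, so ransomware on the client can't destroy history:
borg create ssh://backup@server/backups/client1::{hostname}-{now} /home /etc
```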

Rootiest, (edited )
@Rootiest@lemmy.world avatar

I use and love Kopia for all my backups: local, LAN, and cloud.

Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.

Kopia uses content-addressable storage for snapshots, which has many benefits:

Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.

Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).

After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.

Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.

There’s a ton of other great features but that’s most relevant to what you asked.

tuhriel,

Restic can do append-only when you use their rest server (easily deployed in a docker container)
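A rough sketch of that deployment, assuming the official restic/rest-server image (which, per its README, takes extra flags through an OPTIONS environment variable); ports, paths, and hostnames are placeholders:

```shell
# Server side: rest-server in append-only mode refuses delete and
# overwrite requests, so a compromised client can't purge its history.
docker run -d --name rest_server \
  -p 8000:8000 \
  -v /srv/restic-repos:/data \
  -e OPTIONS="--append-only --private-repos" \
  restic/rest-server

# Client side: back up over the REST backend as usual.
restic -r rest:http://backupserver:8000/myrepo init
restic -r rest:http://backupserver:8000/myrepo backup /home
```

Pruning old snapshots then has to happen out-of-band on the server, with credentials the clients never hold.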

patchexempt,

I’ve used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies, so I can easily manage (per storage pool) how long deleted/changed files should be retained. Works well.

Also, once you get something set up, make sure to test-run a restore! A backup solution is only good if you make sure it works :)
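A sketch of that rclone-plus-B2 configuration: a plain B2 remote wrapped in a crypt remote, so data is encrypted locally before upload. Remote names, bucket, and keys are placeholders, and password handling varies by rclone version:

```shell
# ~/.config/rclone/rclone.conf (illustrative fragment)
# [b2raw]
# type = b2
# account = YOUR_KEY_ID
# key = YOUR_APPLICATION_KEY
#
# [b2crypt]
# type = crypt
# remote = b2raw:my-bucket/backups
# password = OBSCURED    # generate with: rclone obscure 'passphrase'

# Back up through the encrypted remote; B2 only ever sees ciphertext:
rclone sync /data b2crypt:data

# And the test restore mentioned above:
rclone copy b2crypt:data /tmp/restore-test
```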

thisisawayoflife,

As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups otherwise it’s an exercise in futility.

geekworking, in AppleTV complete replacement opinions

I have had various sticks and Roku’s highest-end models, and then got the latest ATV with a hardwired port, which adds Dolby Vision and high-frame-rate HDR. I have a 2022 high-end TV.

The video quality is noticeably better. Not sure about older ATVs, but this is clearly better than the top-end Roku. Also, I’m not sure if it’s the same on older TVs.

The other thing is that you want to hardwire if at all possible. Even the best Wi-Fi can’t touch the reliability of a wire.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

Got it, and yes, current ATV is hardwired. Wi-Fi in my home wasn’t too bad, but wired is definitely better. Appreciate the response / thoughts.

jackoneill, in AppleTV complete replacement opinions

I ran an Apple TV in the living room for a long time to access my Plex server and whatever subscription my wife had that month. As time went on it got more and more glitchy, until it got to the point where I had to power cycle the thing every few days. I replaced it with a cheap Fire Stick, which annoyed the crap out of me. Replaced that with a cheap Roku; it was only slightly better than the shitty Fire Stick.

My wife got me the NVIDIA shield pro for Christmas this year, and I picked up the p2920 controller for it. My god this thing is awesome - not only is it the best tv box I’ve ever used, I can use moonlight to play games on my rig or GeForce now to stream games. I highly recommend this thing

NightAuthor,

Roku really should not sell most of their cheapest options, they’re very bad, while the top of the line Rokus are very solid.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

I’m seeing a few comments on the Nvidia. I know of them, but had not really given them a serious look. Thank you so much!

Bransonb3, in AppleTV complete replacement opinions

I have tried Roku, Fire TV, Chromecast (not the new models with an interface), and AppleTV. So far Apple TV is the cleanest without ads or sponsored content on the home screen.

If you find something better please let me know.

radix,
@radix@lemmy.world avatar

I like my Roku, but it would be much more annoying without a pihole to block the ads.

AtariDump,

And telemetry.

wreckedcarzz,
@wreckedcarzz@lemmy.world avatar

When I switched my family from predatory directv, this was obviously a question I had, and I ended up going with chromecasts (gen 2 and 3/ultra). Once I showed them how to use their phone as the controller, it immediately clicked, which was fantastic. I thought about an atv or an android box, but that would involve multiple profiles and remembering to switch when someone else wanted to use it (android TV boxes have this buried in the system settings; and I’m the only one with an apple account). Ads were a showstopper for me too, so the pictures/art on the cc when idle was great.

Curious why you went the other way :o

AtariDump,

Because Google is collecting data on EVERYTHING you do.

lemmy.world/comment/6326127

wreckedcarzz,
@wreckedcarzz@lemmy.world avatar

But as a person who doesn’t use G services (well, Grayjay)… the question still stands

AtariDump,

You use Google services whether you know it or not.

www.forbes.com/sites/jasonevangelho/2019/…/amp/

randomcruft,
@randomcruft@lemmy.sdf.org avatar

Understood about the ads / sponsored content. I’ve not used anything but an ATV, but I’ve heard similar (ads, interface, etc.). If I come up with a different solution, I will revive the post and let folks know. Thanks.

MrJameGumb, in AppleTV complete replacement opinions
@MrJameGumb@lemmy.world avatar

I’ve never used an Apple TV, but my smart TV is a Roku and it does most of the things you’ve described. I use Crunchyroll and Tubi and a few other streaming apps including Apple’s. I use Prime Music and it has like 99% of the albums I want to listen to. Obviously it doesn’t have Apple Arcade, but I mostly just play games on my phone anyway. I even put a Roku box on an old CRT TV that I use sometimes for watching older shows in SD format lol! I don’t know if this is the type of answer you were looking for but I hope it’s helpful.

randomcruft,
@randomcruft@lemmy.sdf.org avatar

Appreciate your insights on how you use the Roku devices. Understood about gaming, my eyes can’t handle mobile gaming :)

AA5B,

As does my fire stick, and even my Vizio smart TV … all except the Apple Arcade

I’ve been thinking about moving in the other direction. I try to avoid the privacy abuse of the smart TV, and the Fire Stick is being enshittified, so what should I use? AppleTV seems interesting to try, plus the games may be fun.

ninjan, in Is this Seagate Exos drive too good to be true?

It’s just the cheapest type of drive there is. The use case is in large scale RAIDs where one disk failing isn’t a big issue. They tend to have decent warranty but under heavy load they’re not expected to last multiple years. Personally I use drives like this but I make sure to have them in a RAID and with backup, anything else would be foolish. Do also note that expensive NAS drives aren’t guaranteed to last either so a RAID is always recommended.

rosa666parks,

Ok cool, I plan on using them in RAID Z1

RunningInRVA,

Make that RAID Z2 my friend. One disk of redundancy is simply not enough. If a disk fails while resilvering, which can and does happen, then your entire array is lost.
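For illustration, the difference is one word at pool-creation time; a sketch with placeholder device names (use stable /dev/disk/by-id paths in practice):

```shell
# A 6-disk raidz2 pool survives any two simultaneous disk failures,
# including a second failure during a resilver.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

zpool status tank   # shows health, and resilver progress after a swap
```

The trade-off is two disks of capacity given to parity instead of one.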

SexyVetra,

Hard agree. I regret only using Z1 for my own NAS. Nothing’s gone wrong yet 🤞 but we’ve had to replace all the drives once so far, which has led to some buttock clenching.

When I upgrade, I will not be making the same mistake. (Instead I’ll find shiny new mistakes to make)

Archer,

Instead I’ll find shiny new mistakes to make

This should be the community slogan

Atemu,
@Atemu@lemmy.ml avatar

You must be running an incredible HA software stack for uptime increases so far behind the decimal to matter.

RunningInRVA,

That was uncalled for.

Randelung,

To support this: Backblaze consistently reports much higher failure rates for Seagate drives than all others. I personally don’t trust them. All my failed drives are Seagate, but that’s anecdotal. See www.backblaze.com/…/hard-drive-test-data and backblaze.com/…/backblaze-drive-stats-for-2022/ (the by-manufacturer graph).

vithigar, (edited )

That tracks with my experience as well. Literally every single Seagate drive I’ve owned has died, while I have decade old WDs that are still trucking along with zero errors. I decided a while back that I was never touching Seagate again.

Passerby6497,

I actually had my first WD failure this past month, a 10tb drive I shucked from an easystore years ago (and a couple moves ago). My Synology dropped the disk and I’ve replaced it, and the other 3 in the NAS bought around the same time are chugging away like champs.

ninjan,

For sure higher, but still not high; we’re talking single-digit percentages of failed drives per year with a massive sample size. TCO (total cost of ownership) might still come out ahead for Seagate, given that they’re often quite a bit cheaper. Still, drive failures are part of the bargain when you’re running your own NAS, so plan for them no matter which drive you buy. That means having cash on hand for a replacement so you can get back to full integrity as fast as possible. (Best is of course to always have a spare on hand, but that isn’t feasible for a lot of us.)

leraje, (edited ) in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times
@leraje@lemmy.blahaj.zone avatar

In my own personal experience, Nextcloud:

  • Needs constant attention to prevent falling over
  • Administration is a mess
  • Takes far too long to get used to its ‘little ways’
  • Basics like E2EE don’t work
  • Sync works when it feels like it
  • Updating feels like Russian roulette
cyberpunk007, (edited )

From my experience, updating is not Russian roulette; it always requires manual intervention, and that drives me mad. Half the time I just wget the new zip, copy my config file over, and restart nginx lol.
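That manual-update dance looks roughly like this. A hedged sketch only: paths and service names are assumptions, it presumes your data directory lives outside the web root, and the official updater (or occ upgrade after an in-place swap) is the supported route:

```shell
cd /var/www
wget https://download.nextcloud.com/server/releases/latest.zip
unzip latest.zip -d nextcloud-new          # extracts to nextcloud-new/nextcloud

# Carry over the old config, then swap the directories:
cp nextcloud/config/config.php nextcloud-new/nextcloud/config/
mv nextcloud nextcloud-old && mv nextcloud-new/nextcloud nextcloud

# Let Nextcloud run its migrations, then restart the web stack:
sudo -u www-data php /var/www/nextcloud/occ upgrade
systemctl restart nginx php-fpm            # service names vary by distro
```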

Camera upload has been fantastic for Android, but once in a while it shits its brains out thinking there are conflicts when there are none and I have to tell it to keep local AND keep server side to make them go away.

viking,
@viking@infosec.pub avatar

The update without fail tells me it doesn’t work due to non-standard folders being present. So, I delete ‘temp’. After the upgrade is done, it tells me that ‘temp’ is missing and required.

Other than that it’s quite stable though… Unless you dare to have long file names or folder depths.

cyberpunk007,

This could be it, but I also remember reading once it might be something to do with php.ini timeout settings too
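For reference, the php.ini knobs usually mentioned in that context look like this; the values are illustrative, not official recommendations, and the right numbers depend on your instance:

```ini
; Limits commonly raised for Nextcloud installs
max_execution_time = 3600    ; long-running updater/cron jobs time out at the default 30s
memory_limit = 512M          ; Nextcloud's docs suggest at least 512M
upload_max_filesize = 16G    ; large uploads need both of these raised
post_max_size = 16G
```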

cm0002,

It’s like…having a toddler LMAO my little digital toddler lololol

harsh3466, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

This is ultimately why I ditched Nextcloud. I had it set up, as recommended, docker, mariadb, yadda yadda. And I swear, if I farted near the server Nextcloud would shit the bed.

I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.

LordKitsuna,

If all you want is files and sharing try Seafile

harsh3466,

That’s what I’ve got running now, and for me Seafile has been rock solid.

LordKitsuna, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times

I didn’t realize that Nextcloud was so bad. Might I recommend that people having issues try Seafile? It’s also open source, and I’ve been using it for many years without issues. It doesn’t have as many features and it doesn’t look as shiny, but it’s rock solid.

Have a random meme from my instance

seafile.kitsuna.net/f/074ad17b12ad47e8a958/

sebsch,

Nextcloud is just fine. Been using it for more than 7 years now with zero problems.

Geert,
@Geert@lemmy.world avatar

I’m having a hard time believing that… There is a difference between being able to fix the update issues every time without problems and having no problems at all. But if so, neat.

bruhduh, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times
@bruhduh@lemmy.world avatar

Same with my Arch install. I hadn’t touched it for 2 months, and even though the laptop was turned off the whole time, it decided to die when I booted it and ran pacman -Syu.

FedFer,

I’d say it’s your fault for running a system upgrade after 2 months and not expecting something to break, but it’s not that unreasonable either.

TeaEarlGrayHot,

I disagree: a system (even Arch!) should be able to update after a couple of months without breaking! I recently booted an EndeavourOS image after 6 months and was able to update it properly, although I needed to completely rebuild the keyring first.

ayaya,
@ayaya@lemdro.id avatar

Arch and EndeavourOS are the same thing. There is no functional difference between using one or the other. They both use pacman and have the same repos.

TeaEarlGrayHot,

Very true; the specific EOS repo has given me a bit of trouble in the past, but it takes like 3 commands to remove it, and then you’ve got just Arch (although some purists may disagree 🤣)

FedFer,

I know this is how it’s supposed to work and how it should be, but sadly it doesn’t always go this way, and Arch is notorious for exactly this problem; the wiki itself tells you to check what’s being upgraded before doing so, because it might break. Arch is not stable if you don’t expect it to be unstable.

aard,
@aard@kyu.de avatar

I use openSUSE Tumbleweed a lot. This summer I found an installation that hadn’t been touched for 2 years. I was about to reinstall when I decided to give updating it a try. I needed to manually force in a few packages related to zypper, and make choices for conflicts in a bit over 20 packages, but much to my surprise the rest went smoothly.

Xavier,

I regularly “deep freeze” or make read-only systems from Raspberry Pi, Ubuntu, Linux Mint LMDE and other Linux distros, where I disable automatic updates everywhere (except for some obvious config/network/hardware/subsystem changes I control separately).

I have had systems running 24/7 (no internet or Wi-Fi) for 2-3 years before I got around to updating/upgrading them. Almost never had an issue. I always expected serious problems, but the Linux package management and upgrade system is surprisingly robust. Obviously, I don’t install new software on an old system before updating/upgrading (learned that early on, empirically).

Automatic updates are generally beneficial and help avoid future compatibility/dependency issues on active systems with frequent user interaction.

However, on embedded/single-purpose/long-distance/dedicated or ephemeral applications, (unsupervised) automatic updates may break how the custom/main software interacts with the platform, causing irreversible issues with the purpose it was built for or negatively impacting other parts of closed-circuit systems (for example: longitudinal environmental monitoring, fauna and flora observation studies, climate monitoring stations, etc.).

Generally, any kind of update implies some level of supervision and testing; otherwise things could break silently without anyone noticing, until a critical situation arises, everything breaks loose, and it is too late/too demanding/too costly to try to fix or recover within an impossibly short window of time.
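On Debian-family systems, the "disable automatic updates" step described above might look like this; a sketch that assumes a stock unattended-upgrades setup, with file names as commonly shipped:

```shell
# Stop and mask the timer-driven upgrade service:
sudo systemctl disable --now unattended-upgrades

# Tell APT's periodic machinery not to fetch or apply anything:
sudo tee /etc/apt/apt.conf.d/99-no-auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
EOF
```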

u_tamtam, in Do any of you have that one service that just breaks constantly? I'd love to love Nextcloud, but it sure makes that difficult at times
@u_tamtam@programming.dev avatar

Take that as you want, but the vast majority of the complaints I hear about Nextcloud are from people running it through Docker.

xantoxis,

Does that make it not a substantive complaint about nextcloud, if it can’t run well in docker?

I have a dozen apps all running perfectly happy in Docker, i don’t see why Nextcloud should get a pass for this

recapitated,

I have only ever run nextcloud in docker. No idea what people are complaining about. I guess I’ll have to lurk more and find out.

u_tamtam,
@u_tamtam@programming.dev avatar

See my reply to a sibling post. Nextcloud can do a great many things, are your dozen other containers really comparable? Would throwing in another “heavy” container like Gitlab not also result in the same outcome?

recapitated,

Things should not care or mostly even know if they’re being run in docker.

u_tamtam,
@u_tamtam@programming.dev avatar

Well, that is boldly assuming:

  • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a Web server, … but a docker image doesn’t know, and indeed, doesn’t care about redundancy and wasting storage and memory
  • that the sum of those individual components work as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling it and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process
  • that those images are configured according to your actual end-users needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not
  • that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

And this is even before assuming that docker abstractions are free (which they are not)

bdonvr, (edited )

Most containers don’t package DB servers, precisely so you don’t have to run 10 different database servers. You can have one Postgres container or whatever. And if it’s a shitty container that DOES package the DB, you can always make your own container.

that those images are configured according to your actual end-users needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not

that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

You can typically configure the software in a docker container just as much as you could if you installed it on your host OS… what are you on about? They’re not locked up little boxes. You can edit the config files, environment variables, whatever you want.
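The shared-database point above can be sketched as a compose file: one Postgres container serving the app (and any others you add later). Image tags and credentials are placeholders; the POSTGRES_* variables are the ones documented for the official Nextcloud image:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data

  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: db          # points at the shared DB container
      POSTGRES_DB: nextcloud
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: change-me
    depends_on:
      - db

volumes:
  db-data:
```

Other services can reuse the same `db` container with their own databases, rather than each shipping a private DB server.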

u_tamtam,
@u_tamtam@programming.dev avatar

Most containers don’t package DB programs. Precisely so you don’t have to run 10 different database programs. You can have one Postgres container or whatever.

Well, that’s not the case for the official Nextcloud image: hub.docker.com/_/nextcloud (it defaults to SQLite, which might well be the reason for so many complaints), and the point about service duplication still holds: github.com/docker-library/repo-info/…/nextcloud

You can typically configure the software in a docker container just as much as you could if you installed it on your host OS…

True, but how large do you estimate the intersection of “users using docker by default because it’s convenient” and “users using docker and having the knowledge and putting the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed”?

I’m not saying it’s not feasible, I’m saying that nextcloud’s packaging can be quite tricky due to the breadth of its scope, and by the time you’ve given yourself fair chances for success, you’ve already thrown away most of the convenience docker brings.

bdonvr,

Docker containers should be MORE stable, if anything.

u_tamtam,
@u_tamtam@programming.dev avatar

and why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn’t magically make things more efficient…

bdonvr,

Nothing to do with efficiency; more because the containers come with all dependencies at exactly the right versions, tested together, in an environment configured by the container creator. It provides reproducibility. As long as you have the Docker daemon running fine on the host OS, you shouldn’t have any issues running the container. (You’ll still have to configure some things, of course.)
