I host a Plex server for streaming, and my Apple TV 4K (2021) would refuse to play high-bitrate media. It kept displaying an error message telling me I’d exceeded the limit. I started searching online, and everyone consistently called the NVIDIA Shield Pro the best one can buy. Bought it, love it, now have 3. But all I use it for is Plex. I’ve made my server basically all the streaming services combined into one.
If I may ask, what are you using to host the Plex server? I’ve read about people using NAS devices (Synology, etc., which have Plex available natively) and about running a PC with a lot of storage. Appreciate the comment!
If you have Apple users at home, the integrated experience and the video quality are going to be very hard to match from other platforms. My parents use Chromecast and it takes so many more steps to send content to their media system. The video quality when casting also suffers a little, though that may be because they’re using a cheap ISP router/AP combo box, while I’m using Ubiquiti APs. Having said that, I do think the A15 processor in the most recent model is overkill in the graphics performance department, so I wouldn’t completely rule out device capability as the cause of the video quality difference.
Based on my reading, I think the most recent high-end NVIDIA Shield TV Pro is the closest in terms of raw performance, and even then it may be a bit behind. The Tegra X1+ found in the Shield Pro is on the Maxwell architecture, which is older than the GeForce 10-series’ Pascal architecture, if I’m not mistaken. That would date it to around 2015, whereas the previously mentioned A15 processor in the most recent Apple TV 4K was introduced in 2021 with the iPhone 13 series.
And with my luck, the day I buy a Shield is the day they announce a new one :) Luckily it’s just me, so I’m the only one to complain if I do something dumb, ha! I’ll start keeping an eye on the Shield, as I’m not in a rush to buy / change.
Good call. I do some backups now, but I should formalize that process. Any recommendations for self-hosted packages that can handle the append-only functionality?
I use and love Kopia for all my backups: local, LAN, and cloud.
Kopia creates snapshots of the files and directories you designate, then encrypts these snapshots before they leave your computer, and finally uploads these encrypted snapshots to cloud/network/local storage called a repository. Snapshots are maintained as a set of historical point-in-time records based on policies that you define.
Kopia uses content-addressable storage for snapshots, which has many benefits:
Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.
Multiple copies of the same file will be stored once. This is known as deduplication and saves you a lot of storage space (i.e., saves you money).
After moving or renaming even large files, Kopia can recognize that they have the same content and won’t need to upload them again.
Multiple users or computers can share the same repository: if different users have the same files, the files are uploaded only once as Kopia deduplicates content across the entire repository.
There are a ton of other great features, but those are the most relevant to what you asked.
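If it helps, the day-to-day workflow looks roughly like this (the paths and retention numbers are just examples, and I’m showing a local filesystem repository; the B2/S3/SFTP backends work the same way):

```
# create a repository (encrypted at rest; you'll be asked for a passphrase)
kopia repository create filesystem --path=/mnt/backup/kopia

# define retention once, globally
kopia policy set --global --keep-daily=7 --keep-weekly=4 --keep-monthly=12

# take and inspect snapshots of whatever you care about
kopia snapshot create /home/me/documents
kopia snapshot list /home/me/documents

# and actually test a restore now and then
kopia snapshot restore <snapshot-id> /tmp/restore-test
```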
I’ve used rclone with Backblaze B2 very successfully. rclone is easy to configure and can encrypt everything locally before uploading, and B2 is dirt cheap and has retention policies, so I can easily manage (per bucket) how long deleted/changed files should be retained. Works well.
Also, once you get something set up, make sure to test-run a restore! A backup solution is only good if you make sure it works :)
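Roughly what that looks like with rclone, using placeholder remote and path names (here “b2crypt” would be a crypt remote layered over the plain B2 remote, both defined via rclone config):

```
# push local data to B2, encrypted client-side by the crypt remote
rclone sync /srv/data b2crypt:backups/data --transfers 8

# periodically prove the backup is usable by pulling a sample back out
rclone copy b2crypt:backups/data/some-dir /tmp/restore-test
```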
As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups; otherwise it’s an exercise in futility.
I have had various sticks and Roku’s highest-end models, and then got the latest ATV, the hard-wired (Ethernet) model, which adds Dolby Vision and high-frame-rate HDR. I have a 2022 high-end TV.
The video quality is noticeably better. Not sure about older ATVs, but this is clearly better than the top-end Roku. Also, I’m not sure if it’s the same on older TVs.
The other thing is that you want to hard-wire if at all possible. Even the best Wi-Fi can’t touch the reliability of a wire.
I ran an Apple TV in the living room for a long time to access my Plex server and whatever subscription my wife has this month. As time went on it got more and more glitchy, until it got to the point where I had to power-cycle the thing every few days. Replaced it with a cheap Fire Stick, which annoyed the crap out of me. Replaced that with a cheap Roku; it was only slightly better than the shitty Fire Stick.
My wife got me the NVIDIA Shield Pro for Christmas this year, and I picked up the P2920 controller for it. My god, this thing is awesome: not only is it the best TV box I’ve ever used, I can use Moonlight to play games on my rig or GeForce Now to stream games. I highly recommend this thing.
I have tried Roku, Fire TV, Chromecast (not the new models with an interface), and AppleTV. So far Apple TV is the cleanest without ads or sponsored content on the home screen.
When I switched my family from predatory DirecTV, this was obviously a question I had, and I ended up going with Chromecasts (gen 2 and 3/Ultra). Once I showed them how to use their phone as the controller, it immediately clicked, which was fantastic. I thought about an ATV or an Android box, but that would involve multiple profiles and remembering to switch when someone else wanted to use it (Android TV boxes have this buried in the system settings, and I’m the only one with an Apple account). Ads were a showstopper for me too, so the pictures/art on the Chromecast when idle was great.
Understood about the ads / sponsored content. I’ve not used anything but an ATV, but I’ve heard similar (ads, interface, etc.). If I come up with a different solution, I will revive the post and let folks know. Thanks.
I’ve never used an Apple TV, but my smart TV is a Roku and it does most of the things you’ve described. I use Crunchyroll and Tubi and a few other streaming apps including Apple’s. I use Prime Music and it has like 99% of the albums I want to listen to. Obviously it doesn’t have Apple Arcade, but I mostly just play games on my phone anyway. I even put a Roku box on an old CRT TV that I use sometimes for watching older shows in SD format lol! I don’t know if this is the type of answer you were looking for but I hope it’s helpful.
As does my fire stick, and even my Vizio smart TV … all except the Apple Arcade
I’ve been thinking about moving in the other direction. I try to avoid the privacy abuse of the smart TV, and the Fire Stick is being enshittified, so what should I use? Apple TV seems interesting to try, plus games may be fun.
It’s just the cheapest type of drive there is. The use case is in large scale RAIDs where one disk failing isn’t a big issue. They tend to have decent warranty but under heavy load they’re not expected to last multiple years. Personally I use drives like this but I make sure to have them in a RAID and with backup, anything else would be foolish. Do also note that expensive NAS drives aren’t guaranteed to last either so a RAID is always recommended.
Make that RAID Z2 my friend. One disk of redundancy is simply not enough. If a disk fails while resilvering, which can and does happen, then your entire array is lost.
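For anyone building this with ZFS, a Z2 pool looks something like this (device names below are placeholders; in practice you’d want the stable /dev/disk/by-id paths):

```
# six-disk pool that survives any two simultaneous disk failures
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# check health, resilver progress, and error counters
zpool status tank
```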
Hard agree. I regret only using Z1 for my own NAS. Nothing’s gone wrong yet 🤞 but we’ve had to replace all the drives once so far, which has led to some buttock clenching.
When I upgrade, I will not be making the same mistake. (Instead I’ll find shiny new mistakes to make)
That tracks with my experience as well. Literally every single Seagate drive I’ve owned has died, while I have decade old WDs that are still trucking along with zero errors. I decided a while back that I was never touching Seagate again.
I actually had my first WD failure this past month, a 10tb drive I shucked from an easystore years ago (and a couple moves ago). My Synology dropped the disk and I’ve replaced it, and the other 3 in the NAS bought around the same time are chugging away like champs.
For sure higher, but still not high; we’re talking single-digit percentages of failed drives per year with a massive sample size. TCO (total cost of ownership) might still come out ahead for Seagate, given that they are often quite a bit cheaper. Still, drive failures are part of the bargain when you’re running your own NAS, so plan for it no matter what drive you end up buying. Which means have cash on hand to buy a new one so you can get back to full integrity as fast as possible. (Best, of course, is to always have a spare on hand, but that isn’t feasible for a lot of us.)
Updating, in my experience, is not Russian roulette. It always requires manual intervention, and that drives me mad. Half the time I just wget the new zip, copy my config file over, and restart nginx lol.
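For the curious, that manual ritual looks roughly like this (paths are examples, the config backup location is wherever you keep yours, and this mirrors the lazy routine above rather than the officially documented upgrade path):

```
cd /var/www
wget https://download.nextcloud.com/server/releases/latest.zip
sudo -u www-data php nextcloud/occ maintenance:mode --on
unzip -o latest.zip                                   # unpacks over ./nextcloud
cp ~/backups/nextcloud-config.php nextcloud/config/config.php   # hypothetical backup location
sudo -u www-data php nextcloud/occ upgrade
sudo -u www-data php nextcloud/occ maintenance:mode --off
sudo systemctl restart nginx                          # and php-fpm, if that's your setup
```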
Camera upload has been fantastic for Android, but once in a while it shits its brains out thinking there are conflicts when there are none and I have to tell it to keep local AND keep server side to make them go away.
The update without fail tells me it doesn’t work due to non-standard folders being present. So, I delete ‘temp’. After the upgrade is done, it tells me that ‘temp’ is missing and required.
Other than that it’s quite stable though… unless you dare to have long file names or deep folder structures.
This is ultimately why I ditched Nextcloud. I had it set up as recommended: Docker, MariaDB, yadda yadda. And I swear, if I farted near the server, Nextcloud would shit the bed.
I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.
I didn’t realize that Nextcloud was so bad. Might I recommend that people having issues try Seafile? It’s also open source, and I’ve been using it for many years without issues. It doesn’t have as many features and it doesn’t look as shiny, but it’s rock solid.
I’m having a hard time believing that… There is a difference between being able to fix the update issues every time without problems and having no problems at all. But if so, neat.
I disagree: a system (even Arch!) should be able to update after a couple of months and not break! I recently booted an EndeavourOS image after 6 months and was able to update it properly, although I needed to completely rebuild the keyring first.
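For reference, “rebuild the keyring” on an Arch-based install that’s been sitting for months usually amounts to something like this (EndeavourOS also ships its own keyring package, so the exact package list may vary):

```
# refresh the signing keys before attempting the full upgrade
sudo pacman -Sy archlinux-keyring
sudo pacman -Su

# if the keys are too stale for that to work, reinitialise them entirely
sudo pacman-key --init
sudo pacman-key --populate archlinux
```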
Arch and EndeavourOS are the same thing. There is no functional difference between using one or the other. They both use pacman and have the same repos.
Very true: the specific EOS repo has given me a bit of trouble in the past, but it takes like 3 commands to remove it and then you’ve got just Arch (although some purists may disagree 🤣)
I know this is how it’s supposed to be and how it should be, but sadly it doesn’t always go this way, and Arch is notorious for this exact problem; the wiki itself tells you to check what’s being upgraded before doing it, because it might break. Arch is only stable if you expect it to be unstable.
I use openSUSE Tumbleweed a lot. This summer I found an installation that hadn’t been touched for 2 years. I was about to reinstall when I decided to give updating it a try. I needed to manually force in a few packages related to zypper and make choices for conflicts in a bit over 20 packages, but much to my surprise the rest went smoothly.
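For anyone else attempting this, the baseline on Tumbleweed is a full distribution upgrade, and zypper walks you through each conflict it can’t resolve on its own (the packages you end up forcing in by hand will obviously vary):

```
sudo zypper refresh   # re-sync the repo metadata first
sudo zypper dup       # distribution upgrade; prompts on every unresolved conflict
```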
I regularly “deep freeze” or make read-only systems from Raspberry Pi OS, Ubuntu, Linux Mint LMDE, and other Linux distros, and I disable automatic updates everywhere (except for some obvious config/network/hardware/subsystem changes I control separately).
I have had systems running 24/7 (no internet or Wi-Fi) for 2-3 years before I got around to updating/upgrading them. I almost never had an issue. I always expected some serious problems, but the Linux package management and upgrade system is surprisingly robust. Obviously, I don’t install new software on an old system before updating/upgrading (learned that early on, empirically).
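On the Debian/Ubuntu-based boxes, disabling the automatic updates mostly boils down to something like this (service, timer, and file names can differ between distros and versions):

```
# stop unattended upgrades and the apt timers that trigger them
sudo systemctl disable --now unattended-upgrades.service
sudo systemctl disable --now apt-daily.timer apt-daily-upgrade.timer

# and/or turn the periodic knobs off in apt's config
sudo tee /etc/apt/apt.conf.d/20auto-upgrades >/dev/null <<'EOF'
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
EOF
```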
Automatic updates are generally beneficial and help avoid future compatibility/dependency issues on active systems with frequent user interaction.
However, on embedded, single-purpose, long-distance, dedicated, or ephemeral applications, (unsupervised) automatic updates may break how the custom/main software interacts with the platform, causing irreversible issues with the purpose it was built for or negatively impacting other parts of closed-circuit systems (for example: longitudinal environmental monitoring, fauna and flora observation studies, climate monitoring stations, etc.).
Generally, any kind of update implies some level of supervision and testing; otherwise things can break silently without anyone noticing, until a critical situation arises, everything breaks loose, and it is too late/too demanding/too costly to try to fix or recover within an impossibly short window of time.
See my reply to a sibling post. Nextcloud can do a great many things; are your dozen other containers really comparable? Would throwing in another “heavy” container like GitLab not also result in the same outcome?
that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a Docker image doesn’t know, and indeed doesn’t care, about that redundancy, wasting storage and memory
that the sum of those individual components works as well and as efficiently as a single (highly optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process
that those images are configured according to your actual end users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not
that those images are properly tuned for your hardware, by somehow betting on the packager knowing in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load, and service prioritization
And this is even before assuming that docker abstractions are free (which they are not)
Most containers don’t package DB servers, precisely so you don’t have to run 10 different database servers. You can have one Postgres container or whatever. And if it’s a shitty container that DOES package the DB, you can always make your own container.
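A rough sketch of what that looks like (image versions, names, and passwords below are placeholders):

```
# one shared database container on a user-defined network
docker network create backend
docker run -d --name db --network backend \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# app containers reach it by name instead of bundling their own DB
docker run -d --name nextcloud --network backend \
  -e POSTGRES_HOST=db -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud -e POSTGRES_PASSWORD=changeme \
  -v nextcloud:/var/www/html \
  nextcloud
```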
You can typically configure the software in a Docker container just as much as you could if you installed it on your host OS… What are you on about? They’re not locked-up little boxes. You can edit the config files, environment variables, whatever you want.
True, but how large do you estimate the intersection of “users using Docker by default because it’s convenient” and “users using Docker who have the knowledge and put in the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed” to be?
I’m not saying it’s not feasible; I’m saying that Nextcloud’s packaging can be quite tricky due to the breadth of its scope, and by the time you’ve given yourself a fair chance of success, you’ve already thrown away most of the convenience Docker brings.
Nothing to do with efficiency; it’s more that the containers come with all dependencies at exactly the right versions, tested together, in an environment configured by the container creator. It provides reproducibility. As long as you have the Docker daemon running fine on the host OS, you shouldn’t have any issues running the container. (You’ll still have to configure some things, of course.)