I really don’t understand all those posts: I use nginx, AppArmor, partially even ModSecurity, the official Collabora Office Debian package, face recognition, email, and I update regularly (waiting for every app I use to be updated before doing major upgrades), and I have literally never had a problem in the last 5 years except for my own experiments. True, only 5 people use my instance, but Nextcloud is rock solid for me.
Likewise. I have been running it for years, almost no problem that I can think of. My setup is pretty vanilla, Apache, MySQL. It’s running in a container behind a reverse proxy. I keep it as up to date as possible. Only 3 people use mine, and I don’t use very many apps: files, notes, bookmarks, calendar, email.
I was trying for the 3rd time to install the collabora office app in nextcloud. I think it’s hilarious they know it’s going to time out and they give you a bogus command to run to fix it. So unnecessarily irritating.
I’ve been running Nextcloud since before it was Nextcloud. It was ownCloud, then I moved to Nextcloud.
Another user put it best: it always feels 75% complete. Sync isn’t fast and gives errors that self-correct when restarting the app. Most plugins are even jankier or feel super barren.
I wanted to like it so much, but I stopped being able to trust most plugins, which meant I ended up with dedicated apps for those things and used Nextcloud only for file sync.
If you only want file sync, then Seafile is vastly superior, so that’s what I use now.
Yeah, I wish Nextcloud focused more on the file manager side of their applications. I was using it on my TrueNAS instance and it felt like an unfinished product. E2EE is not enabled by default, and it looks like their implementation is not perfect either.
Sounds like a common software issue: all the features were developed to 80%, and then the developers moved on to the next feature, leaving that last, difficult, time-consuming 20% open and unfinished.
It’s the difference between more corporate or enterprise projects and FOSS projects in a lot of ways. Even once a project matures and becomes a more corporate product, the same attitude toward completeness and correctness tends to persist.
(Not saying FOSS is bad, just that the bar tends to be lower in my experience of building software, for many legitimate reasons.)
It’s “cultural” in a way depending on the project.
LibreOffice will call a release ready even with broken rendering on Windows, while the changelog mentions tasty new features. But FOSS can do it right; Debian can. Those project managers should learn from Debian’s approach, whatever it is.
If you have Apple users at home, the integrated experience and the video quality are going to be very hard to match from other platforms. My parents use Chromecast and it takes so many more steps to send content to their media system. The video quality when casting also suffers a little, though that may be because they’re using a cheap ISP router/AP combo box while I’m using Ubiquiti APs. Having said that, I do think the A15 processor in the most recent model is overkill in the graphics performance department, so I wouldn’t completely rule out a difference in device capability as the cause of the video quality gap.
Based on my reading, I think the most recent high-end NVIDIA Shield TV Pro is the closest in terms of raw performance, and even then it may be a bit behind. The Tegra X1+ found in the Shield Pro is on the Maxwell architecture, which is older than the GTX 1080’s Pascal architecture, if I’m not mistaken. That would date it to around 2015; whereas the previously mentioned A15 processor in the most recent Apple TV 4K was introduced in 2021 with the iPhone 13 series.
And with my luck, the day I buy a Shield is the day they announce a new one :) Luckily it’s just me, so I’m the only one to complain if I do something dumb, ha! I’ll start keeping an eye on the Shield, as I’m not in a rush to buy / change.
I could never get the AIO setup to work well for some reason. It was also a couple versions behind it seemed.
I…uh…know it’s not popular on the fediverse, but I use the Nextcloud snap package and it’s been rock solid. It’s always up to date, and they have a backup/export feature too.
People talk a lot of smack about snap, but I installed the Nextcloud snap 5 years ago to check out Nextcloud and see if I liked it. I did, and the snap was so easy that it stuck around for 5 years. I didn’t do anything except update the underlying OS. It is really well maintained.
I just migrated off of it to get a little more flexibility, but I have nothing but good things to say about it.
I couldn’t make things easy for myself when I migrated, because I wanted to use Postgres while the snap uses MySQL/MariaDB, and I wanted S3 storage instead of the filesystem.
In the end I just pulled down all the user files and exported the calendars and contacts manually, then imported them on the new instance.
There are some blog posts on migrating db types, but my install is very minimal and I just didn’t want the headache.
If you don’t want to change the database type, you can just dump the db from the snap, back up the user file directory, then restore into the new database and rsync up all the files.
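For anyone wanting to script that, this is roughly what the dump-and-rsync route looks like (untested sketch: it uses the snap’s nextcloud.mysqldump command and the snap’s default data path, and “newhost” and the destination paths are placeholders):

```python
# Rough sketch of the "dump the db, rsync the files" migration described above.
# Assumes the Nextcloud snap (which ships a nextcloud.mysqldump command) and
# the snap's default data directory; the destination host/paths are made up.
import subprocess
from pathlib import Path

backup_dir = Path("/tmp/nextcloud-migration")
backup_dir.mkdir(exist_ok=True)

# 1. Dump the database that lives inside the snap.
with open(backup_dir / "nextcloud.sql", "wb") as dump:
    subprocess.run(["nextcloud.mysqldump"], stdout=dump, check=True)

# 2. Copy the user data directory over to the new host.
subprocess.run([
    "rsync", "-a", "--info=progress2",
    "/var/snap/nextcloud/common/nextcloud/data/",
    "newhost:/srv/nextcloud/data/",
], check=True)

# 3. On the new host: restore nextcloud.sql into the new database, point
#    config.php at the restored database and data directory, then run
#    `occ maintenance:data-fingerprint` so clients resync cleanly.
```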
Maybe Jellyfin, where I believe you can force a low bitrate for every remote client. It wouldn’t be “adjust to internet speed” but you could minimise buffering that way.
Of course. YouTube and the like “pre-transcode” it, so that would be one way for Jellyfin to better solve it, at the cost of a significant amount of disk space.
You can get an Intel Arc A310 for ~$90 and it has absolutely insane transcode performance, so depending on how large your library is, it might even end up cheaper to just live-transcode everything than to buy more storage.
I suspect the delay would still be longer than with a YouTube-like implementation, which may need to switch transcodes multiple times, but that’s probably unrealistic at this point anyway.
Transcoding everything to AV1 could be a solution too, since high resolutions can look quite good at low bitrates, so you could limit it to 5 or 10 Mbps for any resolution and be done with it. But I’m not sure Jellyfin supports that, and at least from the UI it doesn’t give you particularly fine-grained control over resolution/bitrates. Perhaps having a secondary library of just AV1 transcodes that you handle manually (perhaps even using a software encoder) could be an option for some.
The client side is also an issue, with not that many devices supporting hardware decoding (although I’ve found it’s fast enough in software with most modern smartphones at least).
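For the secondary AV1 library idea above, a rough batch job could look like this (a sketch only: it assumes an ffmpeg build with the SVT-AV1 encoder, and the paths and the 5 Mbps cap are just examples, nothing Jellyfin-specific):

```python
# Walk a library and build a bitrate-capped AV1 copy of it alongside.
import subprocess
from pathlib import Path

src = Path("/media/library")       # original files
dst = Path("/media/library-av1")   # secondary, low-bitrate AV1 library

for movie in src.rglob("*.mkv"):
    out = dst / movie.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    if out.exists():
        continue  # already transcoded on a previous run
    subprocess.run([
        "ffmpeg", "-i", str(movie),
        "-map", "0",                # keep all streams
        "-c:v", "libsvtav1",        # software AV1 encode (slow, but good quality)
        "-b:v", "5M", "-maxrate", "5M", "-bufsize", "10M",
        "-c:a", "copy", "-c:s", "copy",
        str(out),
    ], check=True)
```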
If you’re switching between formats, yeah, it’s going to need to start over on the transcoding. If you don’t, it’s actually better, because it just caches the transcode on disk. From that point it’s basically native.
Jellyfin does support limiting external network speeds and individual client speeds, so if you set up your transcoding correctly and the clients support those codecs, it’ll work.
I’m a big fan of Jellyfin. I run it at home with a dedicated Nvidia A2000 for hardware transcoding. It’s able to transcode multiple 4k streams with tonemapping faster than they can play.
As much as I’d love to use Jellyfin, there are two major issues: My internet connection is so slow, that I’d be lucky to stream 720p at a low bitrate. I’d spend the money on a faster connection, but I live in an area that doesn’t even get cell phone service. My options are DSL and Starlink, and I have both; the DSL is just slow, and Starlink uplink speed isn’t much better, plus I have plenty of obstructions that make it somewhat unreliable. The second problem is that Jellyfin has too steep of a learning curve. Telling my relatives “oh, if it starts buffering, just lower the bitrate” isn’t an option. Not to mention, I’d have to run it on a VPS, and hosting a VPS with the resources required for this is way too expensive for me.
While this was conclusively pinned down as a “CPU” issue, in case anyone else finds this thread…
While your ISP can’t read the data over the VPN, they CAN see that you’re using a VPN, and they can intentionally slow down your connection with traffic shaping because you’re putting so much data through the VPN.
Most people seem to just want to use RPIs as a very slow Linux server for some reason…
Use it to play around with hardware integration with the GPIO pins. Get a sensor HAT and start recording temperatures, write some code that turns on/off an LED, build a robot controller, etc. There are lots of kits and documentation on the various things you can do!
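For example, a minimal temperature logger plus LED can be as small as this (a sketch assuming a Sense HAT and an LED wired to GPIO 17 via gpiozero; adjust to whatever hardware you actually have):

```python
# Log the Sense HAT temperature once a minute and light an LED when it's warm.
# The pin number, log path and 30C threshold are arbitrary examples.
import time
from gpiozero import LED
from sense_hat import SenseHat

sense = SenseHat()
warning_led = LED(17)  # BCM pin 17, or whichever pin you wired the LED to

with open("/home/pi/temps.csv", "a") as log:
    while True:
        temp = sense.get_temperature()  # reads warm: the Pi heats the HAT
        log.write(f"{time.time():.0f},{temp:.1f}\n")
        log.flush()
        if temp > 30:
            warning_led.on()
        else:
            warning_led.off()
        time.sleep(60)
```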
It is! Especially if you want to write the code yourself. It’s an interesting design problem if you start to consider cases where the Pi may be offline (mobile on a battery in my case). Do you lose that data? Store and forward? In memory or to a local data store? It’s a fun rainy-weekend project.
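To make the store-and-forward idea concrete, this is roughly the shape of it (untested sketch: readings are buffered in SQLite and flushed whenever the network comes back, and send_reading() is a placeholder for whatever transport you use, HTTP, MQTT, etc.):

```python
# Buffer readings locally and forward them when connectivity returns.
import sqlite3
import time

db = sqlite3.connect("/home/pi/readings.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL, sent INTEGER DEFAULT 0)"
)

def send_reading(ts, value):
    # Placeholder: replace with your real upload (HTTP POST, MQTT publish, ...)
    # and raise ConnectionError (or similar) when the Pi is offline.
    raise ConnectionError("offline")

def record(value):
    db.execute("INSERT INTO readings (ts, value) VALUES (?, ?)", (time.time(), value))
    db.commit()

def flush():
    rows = db.execute(
        "SELECT rowid, ts, value FROM readings WHERE sent = 0 ORDER BY ts"
    ).fetchall()
    for rowid, ts, value in rows:
        try:
            send_reading(ts, value)
        except ConnectionError:
            return  # still offline, keep the rest buffered for the next attempt
        db.execute("UPDATE readings SET sent = 1 WHERE rowid = ?", (rowid,))
        db.commit()
```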
Word of caution - HATs can be rather inaccurate in their temperature monitoring. The Pi gets warm. I did my work using a PTC thermistor that was distanced from the Pi itself. I’ve got a friend using a HAT and it’s been way off (up to 10C above ambient!). A Pi Zero may not give off as much heat as, say, a Pi 4, though. YMMV.
Unluckily, the last time I wanted to do sensor stuff, the ~20 euro air quality multi-sensor board (CO2, PM1-10, humidity, VOC?) got lost in transit and I haven’t bothered since :(
The original plan was to use it with my ESP32 dev board (WROOM-32, so WiFi) to have a portable sensor; this RPi was supposed to be the collection server (MQTT, InfluxDB, Grafana).
I should revisit this idea soon, thanks for reminding me!
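If I do get back to it, the collection side on the RPi would look something like this (a sketch only: a paho-mqtt subscriber writing into InfluxDB 2.x for Grafana to graph; the topic, bucket, org, token and JSON payload format are all made-up placeholders):

```python
# Subscribe to the ESP32's MQTT topic and write each reading into InfluxDB.
import json
import paho.mqtt.client as mqtt
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

influx = InfluxDBClient(url="http://localhost:8086", token="my-token", org="home")
write_api = influx.write_api(write_options=SYNCHRONOUS)

def on_message(client, userdata, msg):
    # Expecting the ESP32 to publish JSON like {"co2": 612, "pm25": 4, ...}
    data = json.loads(msg.payload)
    point = Point("air_quality")
    for field, value in data.items():
        point = point.field(field, float(value))
    write_api.write(bucket="sensors", record=point)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("sensors/esp32/#")
client.loop_forever()
```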
SBCs like the RPi are kind of awkwardly in-between a microcontroller like an Arduino or ESP32 that you can actually trust with handling GPIO and data logging, and a real Linux system that can actually do meaningful computational work.
Pretty much the only tasks I’ve found them reliably appropriate for are running OctoPrint, really light computer vision tasks for robotics, or hooking up an RTL-SDR to use as a police/ham scanner. Outside of those, it’s so much easier to use either a cheaper and more reliable MCU or a much more powerful old laptop or desktop.
You can write code that has access to more resources. I had an RPi once that showed code build status on an LED strip (red failed, green passed). It was a Java program that connected to AWS SQS for build event notifications. That would be much harder to do on a microcontroller.
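The original was Java, but the same idea in Python (boto3 + gpiozero) is only a few lines; the queue URL, pins and message format here are made up, and a real addressable LED strip would need a different library:

```python
# Poll an SQS queue for build events and set an RGB LED red/green accordingly.
import json
import boto3
from gpiozero import RGBLED

led = RGBLED(red=17, green=27, blue=22)  # whatever pins your LED/driver uses
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/build-events"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20,
                               MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])  # assumed shape: {"status": "passed"|"failed"}
        led.color = (0, 1, 0) if event.get("status") == "passed" else (1, 0, 0)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```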
This right here. As a member of the OpenNIC project, I used to run an open resolver, and it required a lot of hands-on maintenance. Basically what happens is someone sends a very small packet requesting the lookup of something that returns a huge amount of data (like DNSSEC records). They can make thousands of these requests in a short period, attempting to flood out the target domain’s DNS servers and effectively take them offline, using your open resolver as the unwitting attacker.
At the very least, you need to have strict rate-limiting controls on DNS lookups. And since the requests come in through UDP, they can spoof their IP address so you can’t simply block an attacker. When I ran into this issue, I wrote up scripts to monitor for a lot of requests to the same domain name and outright block those until the attack stopped. It wasn’t a great solution, but it did at least make sure my system wasn’t contributing to an attack.
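My scripts are long gone, but the idea was roughly this (a sketch assuming dnsmasq-style query logging; the log path, regex and threshold are examples, and the “block” is left as a print so you can wire in your own firewall rule or zone change):

```python
# Tail the resolver's query log and flag domains being looked up at a rate
# that suggests they are the target of an amplification/flood attack.
import re
import time
from collections import defaultdict, deque

LOG = "/var/log/dnsmasq.log"
WINDOW = 60        # seconds
THRESHOLD = 500    # queries per domain per window before we react
QUERY_RE = re.compile(r"query\[\w+\] (\S+) from (\S+)")

hits = defaultdict(deque)

with open(LOG) as f:
    f.seek(0, 2)  # start tailing at the end of the file
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.5)
            continue
        m = QUERY_RE.search(line)
        if not m:
            continue
        domain = m.group(1)
        now = time.time()
        q = hits[domain]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()
        if len(q) > THRESHOLD:
            print(f"possible amplification target: {domain} ({len(q)} queries/{WINDOW}s)")
```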
Your best bet is to only respond to DNS requests for your own domain(s). If you really want an open resolver, think about limiting it by creating some sort of sign-up method (for instance, DDNS servers use a specific URL to register the changing IP of known users), but still keep the rate-limiting in place.