I’ve been running Nextcloud since before it was Nextcloud. It was ownCloud, then it moved to Nextcloud.
Another user put it best: it always feels 75% complete. Sync isn’t fast and throws errors that self-correct when restarting the app. Most plugins are even more janky or feel super barren.
I wanted to like it so much, but I stopped being able to trust most plugins, which meant I had dedicated apps for those things and used Nextcloud only for file sync.
If you only want file sync, then Seafile is vastly superior, so that’s what I now have.
Yeah, I wish Nextcloud focused more on the file manager side of their applications. I was using it on my TrueNAS instance and it seems like an unfinished product. E2EE is not enabled by default, and it looks like their implementation isn’t perfect either.
Sounds like a common software issue. All the features were developed to 80%, and then the developers moved on to the next feature, leaving that last, difficult, time-consuming 20% open and unfinished.
It’s the difference between more corporate or enterprise projects and FOSS projects in a lot of ways. Even once a project matures and becomes a more corporate product, the same attitude towards completeness and correctness tends to persist.
(Not saying FOSS is bad, just that the bar tends to be lower in my experience of building software, for many legitimate reasons.)
It’s “cultural” in a way depending on the project.
LibreOffice will ship with broken rendering on Windows, but the changelog mentions tasty new features. But FOSS can do it right: Debian can. Those project managers should learn from their approach, whatever it is.
Weird. I’ve had a Pi-hole + Unbound running on a Pi Zero since 2018 and it’s never had any issues. I expected the Zero to kinda suck, but it has been nothing but smooth sailing. It gets USB power from my router, so even if my router reboots, the Pi automatically reboots itself and comes back up.
I do next to no maintenance on it and it just keeps on chugging along. Maybe once every six months or so I SSH in and do a pihole -up and that’s it.
There is also lowendspirit, but in both cases you have to be very careful what you buy - not everything that is advertised there will work as advertised or will keep working long-term.
I consider the 'good enough' level to be the point where, if I didn't pixel peep, I couldn't tell the difference. The visually lossless levels were the first CRF levels where I couldn't tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which say that the quality loss at those levels is essentially the same at a pixel level.
I know that AV1, x264, and x265 all have different ways of compressing video. Obviously, the whole point of this was to get a better idea of what that actually looked like. Everything in the visually lossless section is completely indistinguishable to my eyes, and everything in the good enough section has only very minor bits of compression, noticed only when I'm looking for them in a still image. This does not require the same codec to compare and contrast with.
Frankly, for anything other than real-time encoding, I don't actually consider encoding time to be a huge deal. None of my encodes were slower than 3fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyway.
Encoding speed heavily depends on your preset. Veryslow will give you better compression than medium or fast, but at a heavy expense of encoding speed. You’re not gonna re-encode a movie overnight on the slow preset. GPU encoding will also give you worse results than a CPU encode, so that’s something one would have to take into consideration. It’s not a big deal when you’re streaming, but for video files I’d much prefer using the CPU.
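To make that preset trade-off concrete, here’s a minimal sketch (assuming ffmpeg with libx265 on your PATH; the input file name and CRF value are just placeholders) that encodes the same source at one CRF across several presets, so you can compare file size against wall-clock time:

```python
# Minimal sketch: same source, same CRF, different x265 presets.
# Assumes ffmpeg with libx265 is on PATH; "input.mkv" and CRF 22 are placeholders.
import subprocess

for preset in ("fast", "medium", "slow", "veryslow"):
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mkv",
         "-c:v", "libx265", "-crf", "22", "-preset", preset,
         "-c:a", "copy", f"out_crf22_{preset}.mkv"],
        check=True,
    )
# Slower presets compress better at the same CRF, at a steep cost in encoding time.
```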
I was mostly talking about how you organised your table, using CRF values as the rows. It implies that one should compare the results in each row; however, that wouldn’t be a comparison that makes much sense. E.g. looking at row “24”, one might think that AV1 is less effective than h264/h265 due to the greater file size, but the video quality is vastly different. A more informative way to present the data might have been to organise the rows by their VMAF score.
Hopefully I don’t come across as too cross or argumentative; I just want to give some feedback on how to present the data in a clearer way for people who aren’t familiar with how encoding works.
GPU encoding uses (relatively) simpler fixed-function encoders that do the work much faster than the CPU, which uses its general-purpose transistors to run an encoding algorithm. The end result is that GPU encoding is speedy at the cost of visual quality per bitrate; the file size is bigger for the same visual quality as a CPU encode. Importantly for storing your videos: CPU encoding, while much slower, will get your file size smaller at the same visual quality threshold you desire, so you can save more videos per drive!
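As a rough illustration of that CPU-vs-GPU trade-off, something like the following produces one software and one fixed-function encode of the same source for a size/quality comparison. The file names, quality values, and the presence of the hevc_qsv encoder in your ffmpeg build are all assumptions:

```python
# Rough sketch: one software (libx265) and one fixed-function (Intel Quick Sync)
# encode of the same source. File names, quality values, and hevc_qsv support in
# your ffmpeg build are assumptions.
import subprocess

SRC = "movie.mkv"  # placeholder

# CPU encode: slow, but smaller file at a given visual quality.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265", "-crf", "22",
                "-preset", "slow", "-c:a", "copy", "cpu_x265.mkv"], check=True)

# GPU encode (e.g. an Intel Arc via QSV): much faster, but expect a larger file
# for comparable visual quality.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "hevc_qsv", "-global_quality", "24",
                "-c:a", "copy", "gpu_qsv.mkv"], check=True)
```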
Domain naming authorities require identification for the registration of domains. You cannot purchase domains anonymously. You can pay Njalla and they own the domain, and they’ll tell you that you can control it, but you have no rights to it in any kind of dispute.
All my <TLD> domains are redacted and I still fully own them. In the .nz space there has to be a real contact person, and I’m OK with that, as I’m a big boy who has been online for over a quarter of a century now.
Never had a single functional problem with Nextcloud, other than the fact that it’s oppressively slow with the number of files I’ve shoved into it. Mind you, I also don’t use MySQL/MariaDB, which I consider a garbage-tier DB. Despite Postgres not being the “Recommended DB” for Nextcloud, it works perfectly for me. Maybe that’s the difference.
Cloudflare tunnels are layer 7, so it’s not unlimited access by any means. This also means that certain things will break, btw: for example, if your website uses websockets to load information, that isn’t supported.
Next, I’d put the computer that’s going to be doing the hosting into an isolated VLAN of its own and access it via the external URL only.
If you’re going to use docker images, make sure to vet that they’re updated often and always spin up the latest.
That document doesn’t say what layer. But it does say it supports Websockets.
Just odd that when I try to set it up using a named tunnel, I don’t get an option to specify the WS service type. However, it does require a service type if you want to connect to it.
Looking at this page, it would seem that it’s layer 7. I could be wrong, but my front-end app has issues finding my backend service for websockets.
Granted, I even tried to connect to my private computer using other protocols and couldn’t get through. Anyway, I’m most likely going to be taking that project offline soon.
No, but I thought I clarified that when I said it’s basically a WireGuard VPN, which operates using TCP/UDP (layer 3). Layer 7 is stuff like HTTPS; CF tunnels are lower level.
The page you linked is missing the layer between CF and the source server, so it doesn’t indicate the layer. You can look up the WireGuard protocol if you want more details.
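On the websocket question above, a quick check along these lines (hypothetical URL; needs the third-party websockets package) can at least confirm whether the upgrade handshake makes it through the tunnel:

```python
# Quick connectivity check for a websocket endpoint exposed through the tunnel.
# The URL is hypothetical; requires the third-party "websockets" package
# (pip install websockets).
import asyncio
import websockets

async def check(url: str) -> None:
    # If the edge refuses the upgrade, connect() raises a handshake error.
    async with websockets.connect(url):
        print("websocket upgrade succeeded")

asyncio.run(check("wss://app.example.com/ws"))
```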
I’m not self hosting an instance, but kbin is super fucking broken lately and it’s getting really frustrating. It’s been about a week. I submitted a ticket in their Git repo, but no response.
If you have Proton Premium, point your domain to SimpleLogin and use it. It’s included with Proton Premium. It’s helped me root out two places so far that sold my email address or were compromised and failed to disclose it.
If you’re running a full domain, you don’t even need to manually create aliases unless you need to reply/send as.
I’ve found I rarely need to do that, so you can literally just use an email address off the top of your head, have it all forwarded to a catch-all, and you’re done. None of this extra service stuff. Again, unless you require ‘send as’/aliasing.
You cannot turn off the Proton aliases; one of my aliases (those with +) got compromised and I’m still getting phishing emails on it. You can create a rule for that mail, but you cannot completely disable it. There is also Proton Pass, which does the same as SimpleLogin and also stores passwords. You should check it out as well.
I’ve had Nextcloud running for nearly 5 years and it has never failed once. The only downtime is when the backup fails and somehow maintenance mode is still enabled (technically not a crash).
For those interested: running in Docker with MariaDB in a stack, checking for updates with Watchtower every day and pulling from stable, backups with borg(matic).
Can you explain what you mean by “visually lossless”? Is this a purely subjective classification, or is there a specific definition or benchmark you used?
Visually lossless means I couldn't tell an image difference even when pixel peeping with imgsli. Good enough means I couldn't tell a difference in video, but could occasionally see a compression artifact in imgsli.
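For anyone wanting to pair the eyeball test with the VMAF numbers mentioned upthread, this is a sketch of how a score can be produced, assuming an ffmpeg build that includes the libvmaf filter. The file names are placeholders; the aggregate score appears in ffmpeg’s log output:

```python
# Sketch of producing a VMAF score with ffmpeg's libvmaf filter (assumes your
# ffmpeg build includes it). File names are placeholders; the aggregate score
# is printed in ffmpeg's log output.
import subprocess

subprocess.run(
    ["ffmpeg",
     "-i", "encode_crf22.mkv",  # distorted encode (first input)
     "-i", "source.mkv",        # reference (second input)
     "-lavfi", "libvmaf",
     "-f", "null", "-"],
    check=True,
)
```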
Very anecdotally, I saw a little speed improvement but not all that much. DB size increased a bit. I’ll be sticking with it for the time being because why not.
Isn’t port 81 usually where the Nginx Proxy Manager web UI is served? I think you should just forward the requests directly to ports 80 and 443 respectively.