My wonderful MongoDB powered, old as fuck mFi vm. It’s running on Ubuntu 14 because that’s the last supported version and Ubiquiti abandoned this shit decades ago. It’s set to restore and reboot once a month. That usually keeps shit working lol
I haven't had any issues with Nextcloud yet, but any torrent client refuses to work. I've tried various qBittorrent containers, Transmission, and briefly Deluge; they all work for a while but eventually refuse to do anything.
I'm still too container-stupid to understand the right way to do this. I'm running it in Docker under Kubernetes, and sometimes I don't update Nextcloud for a long time, then I do a container update and it's all fucked because of incompatible PHP versions or some shit.
I don't remember much about how to use Kubernetes, but if you can specify a tag like nextcloud:28 instead of nextcloud:latest you should have a safer time with upgrades. Then make sure you always upgrade all the way through each major version in order (27 → 28 → 29, never skipping one) before moving on; this is crucial.
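Roughly what pinning looks like in a Deployment manifest. This is just a sketch: the names, labels, and the lone port are assumptions, not a complete production manifest.

```yaml
# minimal Deployment sketch with the image tag pinned to a major version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:28   # pinned; bump to 29 only after the 28 upgrade has finished
          ports:
            - containerPort: 80
```

That way a redeploy only ever pulls 28.x maintenance images until you deliberately edit the tag.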
Yeah, the Docker version hated me, mainly because it sometimes got a bit behind on updates and then hit schema mismatches if I ran an update that had skipped the previous one. No issues with the Snap thus far.
I used to have this problem. I started pulling a version number (like 27) instead of "latest" so that I only pull minor releases when I do updates, and then I manually bump the version in the Docker config (compose) file for major versions when I'm ready for them. (I don't like to pull a major release until there have been one or two maintenance releases, since my Nextcloud is fairly critical for my family.)
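Something like this, in compose terms; the service name, volume path, and the rest are placeholders for whatever your own file looks like:

```yaml
# pin a major in the compose file, bump it by hand when you're ready
services:
  nextcloud:
    image: nextcloud:27          # edit to nextcloud:28 for the next major step
    restart: unless-stopped
    volumes:
      - ./nextcloud:/var/www/html
```

Then `docker compose pull && docker compose up -d` only ever grabs 27.x maintenance releases until you change the tag yourself.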
Only complaints I have with Nextcloud are that it’s slow and updates suck over the web interface. But apart from that it has been reliable. I’m not running it through Docker. In fact, my installation is so old that the database tables still have an oc_ prefix.
You might want to try migrating your Nextcloud instance to Postgres instead of MySQL/MariaDB. Many people say they get a big performance boost. I'm going to try it myself next weekend to see if it's true.
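For anyone curious, the conversion itself is supposed to be a single occ command. A sketch only, assuming a bare-metal install under /var/www/nextcloud and an empty Postgres database/role you've already created (names here are made up):

```sh
# create the empty Postgres database and user first, then let occ copy everything across;
# it will prompt for the database password
sudo -u www-data php /var/www/nextcloud/occ db:convert-type --all-apps \
    pgsql nextcloud 127.0.0.1 nextcloud_db
```

I'll report back whether the performance boost is real.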
Mine is a snap install that started 3 years ago on virtual box and was ported over to proxmox. It has never broken, updates automatically, and generally seems to work just fine.
It doesn’t load instantly, but it doesn’t drag by any means.
Updating, in my experience, is not Russian roulette: it always requires manual intervention, and it drives me mad. Half the time I just wget the new zip, copy my config file over, and restart nginx lol.
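The wget-and-copy routine, more or less. This is a rough sketch from memory, not a tested script; paths, the download URL, and the php-fpm unit name are whatever your own box uses:

```sh
cd /var/www
sudo -u www-data php nextcloud/occ maintenance:mode --on
wget https://download.nextcloud.com/server/releases/latest.zip
unzip -q latest.zip -d nextcloud-new          # extracts to nextcloud-new/nextcloud
cp nextcloud/config/config.php nextcloud-new/nextcloud/config/
mv nextcloud nextcloud-old && mv nextcloud-new/nextcloud nextcloud
# if your data/ directory lives inside the old web root, move it across too
sudo chown -R www-data:www-data nextcloud
sudo -u www-data php nextcloud/occ upgrade
sudo -u www-data php nextcloud/occ maintenance:mode --off
sudo systemctl restart nginx php-fpm          # php-fpm unit name varies by distro
```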
Camera upload has been fantastic for Android, but once in a while it shits its brains out thinking there are conflicts when there are none and I have to tell it to keep local AND keep server side to make them go away.
The update, without fail, tells me it won't run because non-standard folders are present. So I delete 'temp'. After the upgrade is done, it tells me that 'temp' is missing and required.
Other than that it's quite stable though… unless you dare to have long file names or deep folder structures.
This is ultimately why I ditched Nextcloud. I had it set up, as recommended, docker, mariadb, yadda yadda. And I swear, if I farted near the server Nextcloud would shit the bed.
I know some people have a rock solid experience, and that’s great, but as with everything, ymmv. For me Nextcloud is not worth the effort.
I didn't realize that Nextcloud was so bad. Might I recommend that people having issues try Seafile? It's also open source, and I've been using it for many years without issues. It doesn't have as many features and it doesn't look as shiny, but it's rock solid.
I'm having a hard time believing that… There's a difference between being able to fix the update issues every time without problems and having no problems at all. But if so, neat.
I disagree: a system (even Arch!) should be able to update after a couple of months and not break! I recently booted an EndeavourOS image after 6 months and was able to update it properly, although I needed to completely rebuild the keyring first.
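The keyring rebuild was roughly this; treat it as a sketch (on EndeavourOS there's also an endeavouros-keyring package, and the --init step is only needed if the keyring is truly hosed):

```sh
sudo pacman -Sy archlinux-keyring   # refresh the signing keys before anything else
sudo pacman-key --init
sudo pacman-key --populate archlinux
sudo pacman -Syu                    # then the actual upgrade
```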
Arch and EndeavourOS are the same thing. There is no functional difference between using one or the other. They both use pacman and have the same repos.
Very true: the specific EOS repo has given me a bit of trouble in the past, but it takes like 3 commands to remove it and then you've got just Arch (although some purists may disagree 🤣)
I know this is how it's supposed to be and how it should be, but sadly it doesn't always go that way, and Arch is notorious for this exact problem: the wiki itself tells you to check what's being upgraded before doing it, because it might break. Arch is not stable if you don't expect it to be unstable.
I use openSUSE Tumbleweed a lot. This summer I found an installation that hadn't been touched for 2 years. I was about to reinstall when I decided to give updating it a try. I needed to manually force in a few packages related to zypper and make choices for conflicts in a bit over 20 packages, but much to my surprise the rest went smoothly.
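From memory, the sequence was roughly this; take it as a sketch, the exact packages you have to force will differ:

```sh
sudo zypper refresh
sudo zypper install --force zypper libzypp   # get the package manager itself current first
sudo zypper dist-upgrade                      # then resolve the remaining conflicts interactively
```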
I regularly "deep freeze" or make read-only systems from Raspberry Pi, Ubuntu, Linux Mint LMDE, and other Linux distros, where I disable automatic updates everywhere (except for some obvious config/network/hardware/subsystem changes I control separately).
I have had systems running 24/7 (no internet or WiFi) for 2-3 years before I got around to updating/upgrading them. Almost never had an issue. I always expected some serious problems, but the Linux package management and upgrade system is surprisingly robust. Obviously, I don't install new software on an old system before updating/upgrading (learned that early on, empirically).
Automatic updates are generally beneficial and help avoid future compatibility/dependency issues on active systems with frequent user interaction.
However, for embedded, single-purpose, long-distance, dedicated, or ephemeral applications, (unsupervised) automatic updates may break how the custom/main software interacts with the platform, causing irreversible issues with the purpose it was built for or negatively impacting other parts of closed-circuit systems (for example: longitudinal environmental monitoring, fauna and flora observation studies, climate monitoring stations, etc.).
Generally, any kind of update implies some level of supervision and testing; otherwise things can break silently without anyone noticing, until a critical situation arises, everything breaks loose, and it is too late/too demanding/too costly to fix or recover within an impossibly short window of time.
See my reply to a sibling post. Nextcloud can do a great many things, are your dozen other containers really comparable? Would throwing in another “heavy” container like Gitlab not also result in the same outcome?
that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a Docker image doesn't know, and indeed doesn't care, about that redundancy, wasting storage and memory
that the sum of those individual components works as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process
that those images are configured according to your actual end-users needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not
that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization
And this is even before assuming that docker abstractions are free (which they are not)
Most containers don't package DB servers, precisely so you don't have to run 10 different database servers. You can have one Postgres container or whatever. And if it's a shitty container that DOES package the db, you can always make your own container.
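Something like this, where every app service just points at the one db container. Names and credentials are made up for illustration, and per-app roles/databases still have to be created once (init script or psql):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/postgresql/data
  nextcloud:
    image: nextcloud:28
    depends_on:
      - db
    environment:              # the official nextcloud image reads these on first install
      POSTGRES_HOST: db
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: changeme
  # ...any other app containers point at the same "db" host instead of shipping their own server
volumes:
  db-data:
```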
You can typically configure the software in a docker container just as much as you could if you installed it on your host OS… what are you on about? They’re not locked up little boxes. You can edit the config files, environment variables, whatever you want.
True, but how large do you estimate the intersection of “users using docker by default because it’s convenient” and “users using docker and having the knowledge and putting the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed”?
I’m not saying it’s not feasible, I’m saying that nextcloud’s packaging can be quite tricky due to the breadth of its scope, and by the time you’ve given yourself fair chances for success, you’ve already thrown away most of the convenience docker brings.
Nothing to do with efficiency; it's more that the containers come with all dependencies at exactly the right version, tested together, in an environment configured by the container creator. It provides reproducibility. As long as you have the Docker daemon running fine on the host OS, you shouldn't have any issues running the container. (You'll still have to configure some things, of course.)
I've been running Nextcloud since before it was Nextcloud: it was ownCloud, then I moved to Nextcloud.
Another user put it best: it always feels 75% complete. Sync isn't fast, and it gives errors that self-correct when restarting the app. Most plugins are even more janky or feel super barren.
I wanted to like it so much but I stopped being able to trust most plugins which meant I had dedicated apps for those things and used nextcloud only for file sync.
If you only want file sync then seafile is vastly superior so that’s what I now have.
Yeah, I wish Nextcloud focused more on the file manager side of their application. I was using it on my TrueNAS instance and it seems like an unfinished product. E2EE is not enabled by default, and it looks like their implementation is not perfect either.
Sounds like a common software issue: all the features were developed to 80%, and then the devs moved on to the next feature, leaving that last, difficult, time-consuming 20% open and unfinished.
It's the difference between more corporate or enterprise projects and FOSS projects, in a lot of ways. Even once a project matures and becomes a more corporate product, the same attitude towards completeness and correctness tends to persist.
(not saying foss is bad, just that the bar tends to be lower in my experience of building software, for many legitimate reasons).
It’s “cultural” in a way depending on the project.
LibreOffice will happily ship with broken rendering on Windows, but the changelog mentions tasty new features. But FOSS can do it; Debian can. Those project managers should learn from their approach, whatever it is.
Weird. I've had a Pi-hole + Unbound running on a Pi Zero since 2018 and it's never had any issues. I expected the Zero to kinda suck, but it has been nothing but smooth sailing. It gets USB power from my router, so even if my router reboots, the Pi just reboots along with it.
I do next to no maintenance on it and it just keeps on chugging along. Maybe once every six months or so I SSH in and do a pihole -up and that’s it.
Never had a single functional problem with Nextcloud, other than the fact that it’s oppressively slow with the amount of files I’ve shoved into it. Mind you I also don’t use MySQL/MariaDB which I consider a garbage-tier DB. Despite Postgres not being the “Recommended DB” for Nextcloud it works perfectly for me. Maybe that’s the difference.