It depends on what you were raised with. For me, all the relevant reference points are in my head in °C: 25 is nice, under 20 you slowly need to dress in longer clothes, over 30 is hot, over 40 sucks hard, over 50 can quickly become deadly. Body temperature is around 37.
I would love to upgrade to one, but from the tests I've seen, they have exceedingly bad idle power draw. Given that the card would idle most of the time, I don't really want to waste power on it if Nvidia and AMD manage to stay far lower.
If we only ever act on things we think we have 100% nailed down, we will either be as ignorant as the fools who locked Semmelweis away, or we will stop doing anything at all, because realistically there is always a chance we got some seemingly basic understanding wrong.
The only intelligent approach is to work with a good mix of “what you know”, a sane amount of “critical thinking”, and an assessment of the risks involved.
Covid was also an example (at least here in Germany). People fought against the inconvenience of having to wear masks or stay inside (or get vaccinated) because, as they said, we don't know for certain how dangerous the illness really is and/or how effective these measures are.
For me the calculation was simple: taking these measures and being wrong has far, far less severe consequences than being wrong and not taking them.
To run more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script that does some filesystem/permission setup and then runs a single process).
Containers running multiple processes are possible, but harder to get right and therefore rarely used.
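A minimal sketch of what such an orchestrating entrypoint could look like (the binaries my-worker and my-server are made up for illustration; real multi-process images usually reach for a proper supervisor like supervisord or s6 instead):

```bash
#!/bin/bash
# entrypoint.sh -- hypothetical sketch: one container, two processes.
# "my-worker" and "my-server" are placeholder binaries.

my-worker --config /etc/my-worker.conf &   # background helper
my-server --listen 0.0.0.0:8080 &          # main service

# Block until ANY child exits, then exit with its status. Since this
# script is PID 1, the container stops and the remaining child gets
# killed -- exactly the supervision logic a single-binary entrypoint
# never has to hand-roll.
wait -n
exit $?
```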
What you're likely thinking of are the files included in the images. Sure, some images bring more libs and executables along. But those are not started and/or running in the background (unless you explicitly start them as the entrypoint or via, for example, docker exec).
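To make the distinction concrete, a quick illustration (image and container names are arbitrary):

```bash
# Start a container; only the entrypoint process runs.
docker run -d --name web nginx

# Extra binaries shipped in the image just sit on the filesystem.
# Nothing starts them until you do so explicitly, e.g.:
docker exec -it web sh

# Inside that shell, a process listing (if ps is installed in the
# image) typically shows only the nginx processes plus your shell --
# no hidden services running in the background.
```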
Ah, then it's fine. No judgement. I just wanted to make sure you don't underestimate their implications; your wording sounded a bit like you considered them the normal baseline.
So if I put a motion sensor that triggers a light in front of a Jewish household, they couldn't leave on the Sabbath because their movement would kindle a “fire”?
One problem is that they need to put a price tag, and therefore a timeline, on such a project. Due to the complexity and the many unknown unknowns in these decades' worth of accumulated technical debt, no one can properly estimate that. And so these projects never get off the ground and typically die during planning/evaluation, when both numbers (cost and time) climb higher and higher the longer people think about them.
IMO a solution would be to do it iteratively with a small team and just finish whenever it's done. Upside: you always have people at hand who know the system inside out, should something come up. Downside, of course: you have effectively no meaningful reporting on when the whole thing will be finished.
The point about an external drive is fine (I did that on my RPi as well), but the point about performance overhead due to containers is incorrect. The processes in the container run directly on the host; you can even see them in ps. They are simply confined using kernel namespaces and cgroups to isolate them to varying degrees.
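You can verify this yourself; a quick sketch (container name and image are arbitrary):

```bash
# Start a container with a long-running process.
docker run -d --name demo nginx

# On the HOST (not inside the container), the nginx processes show
# up in an ordinary process listing:
ps aux | grep '[n]ginx'

# They are regular host processes, merely placed into their own
# namespaces and cgroups, which you can inspect directly:
cat /proc/"$(pgrep -o nginx)"/cgroup
```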