So let me get this straight: you want other people to work, for free, on a project that you yourself think is a hassle to maintain, while also expecting the same level of professionalism as a 9-to-5 job?
Out of curiosity, what’s preventing someone from making a regulatory db similar to tzdb, other than the lack of maintainers?
This seems like the perfect use case for that kind of approach: ship with a reasonable default, then load a specific profile after init to further tweak PM. If regulations change, you can just update a package instead of having to update the entire kernel.
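For illustration, here’s a rough sketch in Rust of the “built-in default, overridable by a loadable profile” idea. Everything here (the struct, field names, the file path, the key=value format) is made up for the example and is not the kernel’s actual regulatory interface.

```rust
use std::fs;

// Hypothetical profile: a region code plus one tunable limit.
#[derive(Debug)]
struct RegulatoryProfile {
    region: String,
    max_tx_power_dbm: u32,
}

impl RegulatoryProfile {
    // Conservative built-in default, used until a profile package is loaded.
    fn builtin_default() -> Self {
        RegulatoryProfile { region: "WORLD".into(), max_tx_power_dbm: 12 }
    }

    // Parse a tiny "key=value" profile; a real format would be signed and validated.
    fn parse(s: &str) -> Option<Self> {
        let mut region = None;
        let mut power = None;
        for line in s.lines() {
            match line.split_once('=') {
                Some(("region", v)) => region = Some(v.trim().to_string()),
                Some(("max_tx_power_dbm", v)) => power = v.trim().parse().ok(),
                _ => {}
            }
        }
        Some(RegulatoryProfile { region: region?, max_tx_power_dbm: power? })
    }
}

fn main() {
    // Start with the default; upgrade if the userspace package shipped a profile.
    let profile = fs::read_to_string("/usr/share/regdb/profile.conf")
        .ok()
        .and_then(|s| RegulatoryProfile::parse(&s))
        .unwrap_or_else(RegulatoryProfile::builtin_default);

    println!("active profile: {:?}", profile);
}
```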
Having one program (process) talk to another is dangerous. Think of a stranger coming over to deliver a message to me: there’s no way I can guarantee he isn’t planning to stab me as soon as he sees me.
That’s why we have special mechanisms for programs talking to other programs. Instead of having the stranger deliver the message directly to me, our mutual friend Bob (the IPC mechanism, Binder in this case) acts as an intermediary. This way at least I can’t be “directly” stabbed.
What’s preventing the stranger from convincing Bob to stab me? Not much (except for Bob’s own ethics/programming).
To work around this, we have designed programming languages (Rust) that refuse to compile if there’s a possibility of this kind of corruption (I would add “at least superficially”, but that’s not the main topic here). Bob was trained by the CIA in anti-brainwashing techniques. It’s really hard to convince Bob to stab me. That’s why it’s such a big deal: we now have a way of delivering messages between two programs that is much safer than before.
The only problem is that the CIA anti-brainwashing techniques (Rust) tend to make people slow, so we deliver messages less efficiently than before. The good news is that in this case we managed to make Bob almost as fast as before, so we don’t lose much while gaining additional security. The people who checked on Bob even made sure he does the exact same thing as before when delivering messages (using red-black trees), so the evidence is most likely credible.
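To make the Bob analogy a little more concrete, here’s a toy Rust example using plain std channels (not the actual Binder driver or its API): the only way the two sides can talk is through one checked hand-off, and ownership rules mean the sender can’t touch the message after it’s been delivered.

```rust
use std::sync::mpsc;
use std::thread;

// A message with an owned payload. Ownership moves with the message,
// so the sender can no longer touch (or corrupt) the buffer after sending.
struct Message {
    payload: Vec<u8>,
}

fn main() {
    // The channel plays the role of "Bob": the only path between the two sides.
    let (tx, rx) = mpsc::channel::<Message>();

    let sender = thread::spawn(move || {
        let msg = Message { payload: b"hello".to_vec() };
        tx.send(msg).expect("receiver hung up");
        // `msg` has been moved into the channel; touching it here again
        // would be a compile-time error, not a latent memory bug.
    });

    let received = rx.recv().expect("sender hung up");
    println!("got {} bytes", received.payload.len());

    sender.join().unwrap();
}
```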
I think we may be looking at these numbers the wrong way. Yes, there’s a visible throughput/latency improvement here, but what about other factors? Power savings? Cache efficiency? CPU cycles saved for other co-running processes?
These are going to be pretty hard to measure without an x86_64 simulator. So I don’t fault them for not including such benches. But there might be more to the story here.
I grew up in a household where I was taught that when cooking salty-sweet dishes, you should add just enough sugar that the dish tastes different but you can’t tell why. Otherwise you’ve added too much sugar.
You can definitely taste the sweetness in pineapple pizza…
There are more places where bandwidth is a bottleneck now than 10 years ago.
NIC speeds have gone from 100 Gbps to 800 Gbps in the last few years, while PCIe and DRAM speeds haven’t increased anywhere near that much. There’s no way you’re going to push all that data through to the CPU in time. Bandwidth is the bottleneck these days and will continue to be a huge issue for the foreseeable future.
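Back-of-the-envelope, using my own rough figures (not from the comment above; exact numbers depend on PCIe generation, lane count, encoding overhead, and DRAM configuration):

```rust
fn main() {
    // Rough approximations only; real-world figures vary.
    let nic_gbps: f64 = 800.0;            // 800G Ethernet line rate
    let nic_gbytes = nic_gbps / 8.0;      // ~100 GB/s of incoming data

    let pcie5_x16_gbytes: f64 = 64.0;     // PCIe 5.0 x16, ~64 GB/s per direction
    let ddr5_channel_gbytes: f64 = 38.4;  // one DDR5-4800 channel, ~38.4 GB/s

    println!("NIC:          ~{:.0} GB/s", nic_gbytes);
    println!("PCIe 5.0 x16: ~{:.0} GB/s ({:.1}x short)",
             pcie5_x16_gbytes, nic_gbytes / pcie5_x16_gbytes);
    println!("DDR5 channel: ~{:.1} GB/s ({:.1}x short)",
             ddr5_channel_gbytes, nic_gbytes / ddr5_channel_gbytes);
}
```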
Worked in IT; target disk mode is a lifesaver when you have to recover data from a laptop with a broken screen, keyboard, or ribbon cable and you don’t want to take apart something held together with glue.