Afterwards I found a chatroom thread among Cambridge computer scientists, one of whom had also been told that unless he could pin down the moment of theft no one would look at the footage. He said he had tried to explain search algorithms to the police - he was a computer scientist, after all. You don’t watch the whole thing, he said. You use a binary search. You fast-forward to halfway, see if the bike is there and, if it is, zoom to three quarters of the way through. But if it wasn’t there at the halfway mark, you rewind to a quarter of the way through. It’s very quick. In fact, he had pointed out, if the CCTV footage stretched back to the dawn of humanity it would probably have only taken an hour to find the moment of theft. This argument didn’t go down well.
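In code, what he was describing is just a binary search over a yes/no question. As a rough sketch, assuming a hypothetical `bike_present(t)` function that checks the frame at time `t` (it is not anything from the original story, just a placeholder):

```python
def find_theft_time(bike_present, start: float, end: float, precision: float = 1.0) -> float:
    """Binary-search footage for the moment the bike disappears.

    `bike_present(t)` is a hypothetical check that inspects the frame at
    time `t` (in seconds) and returns True if the bike is still there.
    Assumes the bike is present at `start` and already gone at `end`.
    """
    while end - start > precision:
        mid = (start + end) / 2
        if bike_present(mid):
            start = mid   # bike still there: the theft happened later
        else:
            end = mid     # bike already gone: the theft happened earlier
    return end


# Toy example: pretend the bike was stolen 5,000 seconds into a day of footage.
print(find_theft_time(lambda t: t < 5_000, start=0, end=86_400))
```

Halving the window each time is why even footage stretching back to the dawn of humanity (very roughly 10^13 seconds) needs only about 43 checks, which is around an hour if each check takes a minute or so.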
Combine this with Chrome enforcing Manifest V3 starting in June 2024, and YouTube ads will be virtually unblockable on Chrome even with an ad-blocking extension installed, because Google will control the ad-blocking mechanism the blocker has to use. They can arbitrarily reduce the maximum number of blocking rulesets, limit how often the extension can update its rulesets, or even elect to skip running any rulesets that target YouTube or Google domains.
They could instead severely cripple or outright block Firefox users. Since we are a minority, it won’t affect them; they will just blame it on Firefox and wash their hands of it.
Facebook, for all its nastiness, was largely incompetent at influencing the direction of the web. Look at its failed attempts like Free Basics.
Google, on the other hand, has the web tightly in its dirty grip. At this point, they aren’t even pretending to be nice. Even plans that cause them reputational damage are brought back under some other name.
The only way to stop Google is for the regulatory agencies to put their foot down hard. Google should be broken up into at least a couple dozen companies that are not allowed to do business with each other.
That's not been my experience. It'll tend to be agreeable when I suggest architecture changes, or if I insist on some particular suboptimal design element, but if I tell it "this bit here isn't working" when that clearly isn't the real problem, I've had it disagree with me and tell me what it thinks the bug is really caused by.
Models are geared toward producing whatever response a human will rate best, not necessarily the correct answer. The first answer is based on the probability of autocompleting from a huge sample of data, and versions that have a memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, which have components that cycle through hundreds of generated attempts at a problem to try to verify them and pick the best one. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result. Still not AGI, but it's more useful than the first LLMs.
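A rough sketch of that kind of best-of-N subprocessing, where `generate` and `score` are hypothetical stand-ins for the model and a verifier (not any particular vendor's API):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for one sampled LLM completion."""
    return f"candidate answer #{random.randint(0, 999_999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Hypothetical verifier / reward model: higher means 'looks more correct'."""
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    """Instead of returning the first autocompleted answer, sample n candidates
    and keep the one the verifier rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("Why does my build fail?", n=100))
```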
The stalebot is useless most of the time. The only scenario where I can see a use for it is a maintainer waiting for the reporter to add information. But closing issues because no maintainer checked on them? That’s garbage and it discourages bug reports.