I run my own Invidious instance on my local network and it's not bad, but you really aren't able to endlessly doom-scroll YouTube recommendations with it. That sounds like a non-issue, but it's a lot harder to find new content you like without that algorithmic aspect. Technically, Invidious will load playlists, but the UI is designed to maximize the video's presence without the other add-ons, so scrolling is a pain. Also, history entries are unnamed, so each one is just a thumbnail with no other info.
Hah, it still tries with the right-wing conspiracy garbage every so often, though. It's like, "Hey… you wanna watch some hate crimes? No? Uh… uh… ok, here's the 37 min of LOTR facts you asked for…"
I was curious how you implemented this, since it's pretty much the default YT bypass for qutebrowser users. Then I read about the MIME type addition you did and had a good laugh. That's clever. Always nice to see a fellow Go user, too.
No, the way you did it is the only way I can think to do it. Otherwise it opens things up to arbitrary code execution. I'm not exactly sure how qutebrowser gets away with it, but I know it's built on Qt, so maybe it just isn't running sandboxed or has some special method for calling external binaries/scripts. You might take a look at that project and see, but Firefox vs. qutebrowser is probably comparing apples and oranges.
They all have pros and cons. For me, I wanted something that would be accessible from one central point across a ZeroTier network. That way I wasn't having to maintain copies of the FreeTube database via rclone or some other tool and handle merges. That pretty much just meant Invidious. Someone had actually made a tool to automate the Docker container deployment and build out the PostgreSQL tables, so it turned out to be the simplest solution for me.
I don't think they really can. I don't work in that space, but skipping isn't included in YT analytics from what I've read. I would bet they rely on something like average view percentage to make assumptions. For example, if a content creator places the sponsor bit in the first 10% of the video, and the average view percentage for that video is 80%, then it's assumed the sponsor bit was watched. I wouldn't be surprised if sponsors require some form of transparency in analytics reporting for content creators to get paid.
I'd also figure that YouTube, since it has no bearing on their revenue, is probably not going to add analytics features for skips just for the sake of some third party.
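Just to spell out the guess I'm making above, the heuristic would look something like this. Purely illustrative; the function and numbers are mine, not anything YouTube or sponsors actually expose:

```python
# Illustrative sketch of the guess above: infer from average view percentage
# alone whether the average viewer likely saw the sponsor segment.
# The function name and comparison are assumptions, not a real API.

def sponsor_likely_watched(segment_end_pct: float, avg_view_pct: float) -> bool:
    """Assume the segment was seen if the average viewer watches past
    the point in the video where the sponsor segment ends."""
    return avg_view_pct >= segment_end_pct

# The example from above: sponsor bit inside the first 10% of the video,
# average view percentage of 80% -> counted as watched.
print(sponsor_likely_watched(segment_end_pct=10.0, avg_view_pct=80.0))  # True
```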
The BBC could ID a VPN IP address based on usage and concurrent sessions, but honestly, most companies that block VPNs just purchase IP address lists from any number of vendors. Pixalate and DoubleVerify are two I've worked with in the past that both provide that data to clients. They rarely ever block entire IP blocks, though, so you might just try reconnecting from a different location/server within the UK until you land on one that works (if any).
I'm pretty sure ML is how Pixalate and DoubleVerify were building their lists, too. The difference is that they were footing the bill in terms of resources and time spent to develop a solution. Training ML models isn't hard, it's just really time consuming.
I've been relying on yt-dlp and hint links to pipe video from YouTube to mpv. It's not a bad solution, but it isn't quite the doom-scrolling I want. Here's an example: files.catbox.moe/688xbo.png
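For anyone wanting to set up something similar, the qutebrowser side is roughly a one-line binding in config.py. The key and exact command here are just an example of the general pattern, not necessarily what I have bound:

```python
# qutebrowser config.py snippet (example binding, not necessarily my exact one).
# ';m' starts hint mode on links; selecting one spawns mpv with that URL,
# and mpv's ytdl hook hands the YouTube URL to yt-dlp to resolve the stream.
config.bind(';m', 'hint links spawn mpv {hint-url}')
```

You just need yt-dlp somewhere on your PATH so mpv can find it.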
I've checked back since, and they have not. That's not to say they couldn't, though. Essentially all they would need to do is see who made API calls shortly before/after that time and revert the DB changes. Probably more work than they would get in return, though.