Same here. Every time I ditch Google and go to DDG, I just laugh at how irrelevant all the results are, and I go back to feeding Google with personal data and telemetry and whatnot.
It is true that even Google’s results feel much worse than before, but they’re still significantly better than DDG’s.
I mean, not a great source… That’s just a link to a forum post, and the only thing they reference for it “not being secure” was a GitHub PR from 2021… Not saying it’s great that they had teething issues, but that was literally within a year of them starting up, they fixed all those issues right away, and they had an independent audit done. So I don’t feel like using it to say they’re not secure now is very useful. But if you have something showing their current deployment is insecure, please share.
They also did a complete infrastructure overhaul at the start of 2023, moving to their own hardware and such, so I imagine more has changed since 2021 than just those issues.
I paid for Kagi and have been super happy with it. If you don’t mind paying, I highly recommend it. Not having ads or manipulated results is worth it for me.
You can easily spoof your location on most search engines by using a VPN, but I’d much prefer my search engine respond solely to deliberate input from the user.
NixOS supports headless LUKS, which was an improvement for me in my last distro-hop. The NixOS wiki even has an example of running a Tor onion service from initrd to accept a LUKS unlock credential.
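For reference, here’s a minimal sketch of the more common SSH-in-initrd variant (the onion-service example on the wiki builds on the same initrd networking). The UUID, key paths, and NIC kernel module below are placeholders, not values from the wiki:

```nix
{
  # Encrypted root; replace the UUID with your own partition's.
  boot.initrd.luks.devices."root".device = "/dev/disk/by-uuid/xxxx-xxxx";
  # Load your NIC's driver early so initrd has networking (placeholder module).
  boot.initrd.availableKernelModules = [ "e1000e" ];
  boot.kernelParams = [ "ip=dhcp" ];
  boot.initrd.network = {
    enable = true;
    ssh = {
      enable = true;
      port = 2222;  # separate port so clients don't mix it up with the real sshd's host key
      authorizedKeys = [ "ssh-ed25519 AAAA... you@laptop" ];
      hostKeys = [ "/etc/secrets/initrd/ssh_host_ed25519_key" ];
    };
  };
}
```

You SSH in while the machine sits at the passphrase prompt and run `cryptsetup-askpass` to unlock.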
As someone who is a data hoarder/curator and dives into the deep end of the web’s abyss, I use Searx, Startpage and Yandex. I don’t mind Startpage only because I no longer use search engines that much anymore. If something truly needs to be searched, Yandex is the absolute, untouchable king of web, image and reverse-image search, and it is better than Google for privacy (a very low bar, but > Google/Bing).
Searx usually does deliver for the common use cases, and Startpage gives Google results minus SEO and sponsored trash.
If I were to rank them for results based on years of experience, Yandex is easily a 10 (ignoring its unbeatable image search), Searx with “default/all” language results a 7, Startpage a 5 (censors Russian/Chinese sources since it is based on Google), Qwant probably 3.5-4 (unavailable in many regions), Google 3, DDG and Bing 2. I am not sure how Metager, Mojeek and Kagi fare, but they probably perform somewhere between Searx with “default/all” language results and DDG.
The reason Yandex ranks so far above Searx metasearch is that its indexing is much faster than once a day, besides giving you the experience of what Google was like around 2009/10, with no SEO crap. You will find the most obscure personal blogs and websites there, and DMCA bullshit does not work in Russia, whereas it does work on any of these other search engines or metasearch instance owners.
I’ve never seen the point of this search engine or any commercial alternative to Google. It’s all just varying layers of proxying Google. You might as well just find a searx instance and use that, because it’s all the same crap at the end of the day.
One of my admin friends said it’s not really made with desktop users in mind but more for people who need to set up (lots of) computers / servers quite often (= admins). If you’re not planning on distro hopping or reinstalling your system all the time it doesn’t really do anything for you that any other distro plus a good backup strategy already does. Plus you can use the Nix package manager without installing NixOS on the distro you’re on right now, if you wanna check it out.
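If you just want a taste on your current distro, something like this should do it (the official installer one-liner; the package name is just an example):

```
sh <(curl -L https://nixos.org/nix/install) --daemon
# throwaway shell with a package, nothing permanently installed:
nix-shell -p cowsay
```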
Not all, but some will, and that’s good enough. Security and privacy are all about layers, not guaranteed solutions.
That said, if you have “business” with a company, they are probably using your registered home address to figure out which local laws/regulations apply to you. E.g., if you’re using a registered Google account and don’t have an address in a state that offers protection, it’s very unlikely they’ll extend any privacy policies to you just because your IP says you’re in California, for example.
OTOH, if you don’t have a registered address/account/profile and your IP is coming out of California, it’s possible some companies will apply the stricter policies to you.
To your original point though, yes, shady companies will continue to behave in unethical ways.
@mypasswordis1234@fmstrat It is possible to beat fingerprinting with a VPN + deleting all cookies + setting privacy.resistFingerprinting to true in Firefox’s about:config.
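If you want that pref to survive profile resets, you can also set it in a user.js file in your Firefox profile directory; same effect as flipping it in about:config:

```
// persists across restarts; delete the line to revert
user_pref("privacy.resistFingerprinting", true);
```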
Yes, it is not as decentralized as you thought. I thought this was a fairly well-known fact. If you need something truly decentralized, I2P is probably the way.
Traffic flows through volunteers’ computers, and that part is indeed decentralized, but your client needs to find those computers first, and that happens through a centralized service, via the “directory authorities” if I’m not mistaken.
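For the curious, you can actually list those hard-coded authorities with the stem library; a minimal sketch (assuming stem is installed via pip):

```python
# The small, fixed set of directory authorities baked into every Tor
# client is exactly the centralized part described above.
from stem.directory import Authority

for name, auth in Authority.from_cache().items():
    print(f"{name}: {auth.address}:{auth.or_port}")
```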
I2P has a mechanism for banning routers, permanently or temporarily.
It looks like it knows what to block from a local blocklist file and from a “blocklist feed”, but I don’t know what the latter is right now. I hope you can excuse me on that, I’m also quite new to the topic.
So how does I2P work? I vaguely remember something about it slowly building a network as you keep your own connection on, and that the architecture makes it much better for torrenting. Is it worth looking into and learning about, or is it just slow, bad internet?
Well, yeah, about the speed… it’s not fast, and it probably never will be as fast as the plain internet. Just imagine what is happening: each service you connect to is usually 6 hops away, which in the worst case (where each pair of peers is as far from each other as possible) would require traffic to take 3 round trips between e.g. West Asia and the USA. Here’s another explanation with a diagram: geti2p.net/en/faq#slow
But that’s just the latency, and it can be tuned. If you want to play online games with a group of people over I2P, you could for instance use a 1-hop tunnel and ask the others to use 1-hop tunnels too, and then it’s a totally different story. Of course this hurts your and the other players’ anonymity, but it could be acceptable, especially if you make it select a router relatively close to you.
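To make that trade-off concrete, here’s a back-of-the-envelope model (made-up numbers, not an official I2P formula):

```python
def round_trip_ms(hop_latencies_ms):
    # a round trip crosses every hop on the path twice
    return 2 * sum(hop_latencies_ms)

# default-ish: 3 outbound + 3 inbound hops at ~80 ms each (assumed)
print(round_trip_ms([80] * 6))  # -> 960 ms
# both sides on 1-hop tunnels through nearby routers (~20 ms, assumed)
print(round_trip_ms([20] * 2))  # -> 80 ms
```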
Bandwidth is a different topic again. I think that could improve even without sacrificing tunnel length, as more (relatively) high-bandwidth routers join the network, but of course your tunnel’s bandwidth will always be limited by the slowest router in the chain. Fortunately there are ways to get a tunnel through more performant routers.
On how does it work: when you start up your router (a software package, through which other programs can use the network), it asks a bunch of preconfigured servers about known I2P peers, through a process called reseeding. Afaik there are currently 12 preconfigured reseed servers, but you can bring your own, or if you know someone with an I2P router who you trust, they can make a reseed file for you which you can import.
After that, your router will talk to the other routers it now knows about, and ask them too about the routers they know.
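Conceptually it’s an iterative flood-fill over the peer graph. A hypothetical sketch of that idea (ask_for_known_routers is an invented stand-in, not a real I2P API; the real netDb is a Kademlia-style DHT):

```python
def discover_peers(reseed_peers, ask_for_known_routers, rounds=3):
    """Start from reseeded peers; repeatedly ask known routers for more."""
    known = set(reseed_peers)
    frontier = set(reseed_peers)
    for _ in range(rounds):
        learned = set()
        for router in frontier:
            # ask each router we already know for the routers it knows
            learned |= set(ask_for_known_routers(router)) - known
        known |= learned
        frontier = learned  # only query the routers we just learned about
    return known
```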
This means it’s better (though not necessary) to have a dedicated machine on which a router is always running and online, instead of having it run only for the 30 minutes your desktop happens to be powered on. It doesn’t have to be powerful; it can be a low-power-consumption SBC (like a Raspberry Pi or similar), and I think it’s also possible to set up an unused Android phone for this purpose with an app, though you probably don’t want it using your mobile data plan.
On why it’s better for torrenting: I don’t remember the details on that.
What I remember is that it’s often said that the protocol was “built for that”.
But there’s also another thing: bandwidth is naturally less scarce here, compared to Tor. Connecting to the network requires running a “router”, which, besides giving you access, also automatically contributes your internet connection’s bandwidth capacity to the network (unless limited by your ISP’s tech, like CGNAT; it can still contribute some, but usually less), so most users also provide a “relay” to the network. On the Tor network, most users are just users: their clients do not participate in routing other users’ traffic, so they only consume the capacity provided by others.
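On the Java router that sharing is just a couple of knobs in router.config (values here are illustrative; the same settings are exposed in the router console’s bandwidth page):

```
i2np.bandwidth.inboundKBytesPerSecond=1024
i2np.bandwidth.outboundKBytesPerSecond=512
# percentage of the configured bandwidth shared with other routers
router.sharePercentage=80
```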
Also, afaik torrenting over Tor always needs an exit node to reach the tracker and all the peers, while on I2P it all happens inside the network, without placing a huge load on outproxies (I2P’s term for exit nodes).
It may seem that I2P has a bunch of downsides, and it may discourage you from using it, but let me tell you how I think about it.
I don’t use it for everything, just as I don’t use the Tor network on a daily basis, but when I need it, it’s there; it makes it easier for me to search on a few private matters, and it runs in the background, so I’m basically effortlessly helping the other users (not counting the initial setup and the electricity costs, of course; the former was not much, and the latter doesn’t depend on this in my case).
Very interesting, and thank you for the write-up! Might be worth looking into preconfigured reseeds if I were to dabble in it, but generally I just don’t have a use for powerful anonymity tools currently. Always rad to hear about the tech though!