The point of using it is that privacy-invasive sites like Twitch or skribbl.io would still work. Twitch technically works fine on stock Firefox, unless you don’t save your history, how dare you.
They will work on ungoogled-chromium too, though, I guess.
In theory you could even store a chrome://flags override and use it like a user.js. So you could use upstream Chromium and not rely on outdated forks.
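For what it’s worth, on Arch-based systems the packaged Chromium launcher reads persistent command-line switches from a config file, which gets you part of the way to a user.js-style setup. A sketch only: the file path is an Arch packaging convention, chrome://flags entries themselves live in the profile’s Local State, and the switches shown are just examples:

```
# ~/.config/chromium-flags.conf  (Arch packaging convention; other distros differ)
--ozone-platform-hint=auto
--disable-reading-from-canvas
```

Any switch listed there gets appended to the Chromium command line on every launch, so it survives updates the way a user.js does.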
I will try it out after work. Do you know a way to prevent other browsers from opening automatically instead of LibreWolf? I’m currently using Hyprland and was using the AppImage so it doesn’t have any conflicts.
Exactly, the default browser. Yes, I tried native and Flatpak packages, but it would constantly open other browsers instead of LibreWolf, even if I defined it in the mimeapps file.
No, the default browser works normally, but I have no idea how to set that in Hyprland.
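For reference, Hyprland has no settings GUI for this; the usual route is the standard XDG mechanism, which most apps respect regardless of compositor. A sketch, assuming the desktop file is named librewolf.desktop (the actual name can differ between native, Flatpak, and AppImage installs, which may be why the mimeapps entries were being ignored):

```ini
# ~/.config/mimeapps.list
[Default Applications]
x-scheme-handler/http=librewolf.desktop
x-scheme-handler/https=librewolf.desktop
text/html=librewolf.desktop
```

Running `xdg-settings set default-web-browser librewolf.desktop` should write the same associations for you, and `xdg-settings get default-web-browser` lets you check what is actually registered.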
I highly advise against AppImages. Flatpak is only useful if you don’t trust the app, which is a valid stance, but unfortunately then the browser can’t sandbox websites on its own. So native packages are the best option for security, if you trust the browser.
Perfect would be to have the browser isolated and also using its own sandbox to isolate websites from each other. I don’t know if this works though; on Android it does (not with Firefox, unfortunately, as they didn’t implement it).
So, one or two days later, I can say now that I switched from Thorium to ungoogled-chromium on Wayland. Didn’t have issues with defaults, and yeah, it’s pretty much the same.
No, the base browser needs to be hardened. On top of that you can install addons, but Privacy Badger is pretty weak afaik, and canvas is just one vector. There are still the UA, APIs, referrer policies, WebGL etc.
Yea, I can do that. I mean it will take some time, but it should be possible. Tbh I just don’t wanna use Brave. www.deviceinfo.me is a good site for checking how hard your browser is hardened.
I’ve been using Thorium recently with no issues. Before I was using Vivaldi.
Edit: Firefox is my main browser. Thorium is used as an alt for the two websites that don’t work in Firefox.
Edit 2: seems the developer of Thorium has made some, er, questionable choices. Not with the browser itself, but a mild furry NSFW easter egg, and a link to a site discussing their beliefs against a common medical procedure performed on baby boys. I have not seen either for myself, as they have both been removed since the browser gained a sudden spike in popularity.
And it is also outdated and not privacy-optimised (which seems way less documented than with Firefox). Not sure if AppImages even have a sandbox, or if that is broken too.
Well, MS being anti-competitive as usual. Side note: I like Tuta very much, finally an independent provider, but I would never use it as they don’t provide IMAP/SMTP.
You can de-Google an Android phone with a custom ROM and have a phone that you have control over and know nobody is spying on you by running a firewall on the phone.
Actually, you can, with Lockdown for iOS or Lulu for macOS. There are other alternatives available, these are just a pair of FOSS examples. You can totally block *.apple.com if you really want to.
It’s not quite the same though. With a custom android ROM, you can be pretty confident that everything kernel-and-up is not spying on you. On iOS and macOS, you don’t have the same level of verifiability, as the OS could just circumvent any VPN/firewall you might have configured. They might pinky promise not to, but without running another external firewall it’s not really verifiable.
It said that Google put it in their aggregated report. Not that they disclosed it. There is a big difference between ‘we got 100 requests’ and ‘we got 10 requests for X info, 30 for Y info’.
ETA: I just looked at the data again; it’s broken into categories like FISA, NSL, etc., then it just gives a range of requests (0–1000, etc.).
No, do not mistake yourself. They are not proprietary blobs; the whole browser is proprietary. They release a tarball with the Chromium code plus some changes, but the whole UI, which is where the main changes are, is proprietary (after all, like any Chromium browser, it’s mostly a re-skinned Chromium; they don’t make any changes to the engine).
It’s a proprietary browser. They just release a bunch of code for marketing purposes. Don’t believe me? Try compiling it, and tell me if what you get is Vivaldi minus some blobs.
“Note that, of the three layers above, only the UI layer is closed-source. Roughly 92% of the browser’s code is open source coming from Chromium, 3% is open source coming from us, which leaves only 5% for our UI closed-source code.”
Straight from the horse’s mouth. So 92% of it is the same as every other Chromium browser, 3% is their OSS code, and 5% is closed source. That’s 5% more than actual open-source browsers.
Excuse me? I switched to Manjaro with Xfce about 3 months ago, and if I wasn’t high at the time and remember everything correctly, the default web browser was simply absent. Which is an excellent choice, in my opinion.
So, it’s the default for the Cinnamon edition. That’s like saying the Amazon Appstore is default for Android just because some manufacturers install it by default. Consider reading the article before quoting it, please.
Looks like they rate Cromite badly because it has Adblock Plus instead of uBlock Origin, uses Google as the default search engine, and includes the Chrome Web Store.
Thorium is good for privacy and speed but not security; Vivaldi isn’t that private; ungoogled-chromium removes everything Google. Brave also has packages available for manual installation if you want to give it another try.
Its version is outdated and it has no focus on privacy. It’s also important to distinguish privacy from Google versus privacy from the actual sites you visit, i.e. fingerprint prevention.
The repo shows all the patches. It uses some patches from ungoogled-chromium for privacy. It isn’t my recommendation here; I just mentioned it because Brave didn’t work for OP.
Chrome or Chromium? Because that “hardening” is only the switches they allow you to use, so if it’s full of proprietary tracking software it is not hardened at all.
Chrome. I know that might be hard to believe but the switches work. You can absolutely stop Google from prefetching their usual services. Plus I don’t login with a Google account on the browser, that makes a huge difference.
There really isn’t much difference. I used ungoogled-chromium before now. I use Chrome for selfish reasons: the Flatpak for it (dev version) is auto-updated with no human input required, so I get fixes and security patches earlier, and I kinda like that release channel.
Just so you know, Chromium browsers are more secure if you use the native package. But just for privacy reasons I would not run Chrome unrestricted on my system.
Chromium Browsers are more secure if you use the native package.
This conclusion is relative for everyone as we all have different security needs. Plus there’s no easier, better supported way to sandbox Chrome on Linux other than using Flatpak’s permission model.
It’s also ironic for you to be speaking about security when you are installing/updating your browser using random curl bash scripts.
You haven’t looked at the repo, and we are talking about different sandboxes here.
The browser sandboxes websites; this is broken if the entire browser is sandboxed, as you need to remove that capability to do so.
My bash script pulls in the official Brave repo and GPG key, fixes the access permissions, and that is it. Brave has no documentation on how to use their repo without dnf, so this is needed.
The repo has GPG verification enabled, and the system will update the browser.
Please don’t spread misinformation if you haven’t even looked at the “random bash script”, which does not handle the updating.
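For context, setting up a dnf-managed repo without dnf config-manager really does boil down to dropping one repo file in place; after that the package manager handles updates and GPG verification itself. A sketch based on Brave’s published RPM repo URLs (verify them against their current install docs before trusting this):

```ini
# /etc/yum.repos.d/brave-browser.repo
[brave-browser]
name=Brave Browser
baseurl=https://brave-browser-rpm-release.s3.brave.com/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://brave-browser-rpm-release.s3.brave.com/brave-core.asc
```

With `gpgcheck=1`, every package pulled from that baseurl must verify against the listed key, which is exactly the “repo has GPG verification enabled” guarantee described above.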
It’s not about us. It’s about the rest of the world, a large portion of whom uses M365. These blocks mean we can’t communicate with potential employers, family, government institutions, universities, etc.
Here I am, maintaining several block lists (max of 500 entries per list) on our M365 tenant of spam and phishing domains and addresses, and not a one comes from Tuta, Proton, or any other privacy provider. Nearly all are gmail, outlook, and icloud, with a few customs sprinkled in. Their claim that it’s to fight spam is BS.
Gonna repeat myself since iMessage hasn’t improved one bit after four years. I also added some edits since attacks and Signal have improved.
iMessage has several problems:
iMessage uses RSA instead of Diffie-Hellman. This means there is no forward secrecy. If the endpoint is compromised at any point, it allows the adversary who has
a) been collecting messages in transit from the backbone, or
b) in cases where clients talk to server over forward secret connection, who has been collecting messages from the IM server
to retroactively decrypt all messages encrypted with the corresponding RSA private key. With iMessage the RSA key lasts practically forever, so one key can decrypt years’ worth of communication.
I’ve often heard people say “you’re wrong, iMessage uses a unique per-message key and AES, which is unbreakable!” Both of these are true, but the unique AES key is delivered right next to the message, encrypted with the public RSA key. It’s like transporting a safe where the key to that safe sits in a glass box strapped to the side of the safe.
The RSA key strength is only 1280 bits. This is dangerously close to what has been publicly broken. On Feb 28, 2020, Boudot et al. broke an 829-bit key.
A 1280-bit RSA key has 79 bits of symmetric security. An 829-bit RSA key has ~68 bits of symmetric security. So compared to what has publicly been broken, the iMessage RSA key is only 11 bits, or 2048 times, stronger.
The same site estimates that in an optimistic scenario, intelligence agencies can only factor about 1507-bit RSA keys in 2024. The conservative (security-conscious) estimate assumes they can break 1708-bit RSA keys at the moment.
(Sidenote: Even the optimistic scenario is very close to 1536-bit DH-keys OTR-plugin uses, you might want to switch to OMEMO/Signal protocol ASAP).
According to e.g. keylength.com, no recommendation suggests using anything less than 2048 bits for RSA or classical Diffie-Hellman. iMessage is badly, badly outdated in this respect.
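The gap between those key sizes can be eyeballed with the standard GNFS asymptotic, L_N[1/3, (64/9)^(1/3)]. Fair warning: this naive formula ignores the o(1) term and the real-world cost calibration keylength.com applies, so its absolute numbers come out somewhat higher than the figures above; the relative ordering is the point:

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Rough symmetric-security estimate for an n-bit RSA modulus via the
    General Number Field Sieve asymptotic L_N[1/3, (64/9)^(1/3)].
    The o(1) term is dropped, so only relative comparisons are meaningful."""
    ln_n = modulus_bits * math.log(2)  # ln(N) for an n-bit modulus
    work = math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
    return math.log2(work)  # convert the work factor to bits

# 829-bit (publicly factored) vs iMessage's 1280-bit vs the 2048-bit floor
for n in (829, 1280, 2048):
    print(n, round(gnfs_bits(n), 1))
```

Whatever calibration you prefer, 1280 bits lands far closer to the broken 829-bit key than to the 2048-bit minimum every recommendation starts at.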
iMessage uses digital signatures instead of MACs. This means that each sender of a message generates irrefutable proof that they, and only they, could have authored the message. The standard practice since 2004, when OTR was released, has been to use Message Authentication Codes (MACs) that provide deniability by using a symmetric secret shared over Diffie-Hellman.
This means that Alice who talks to Bob can be sure received messages came from Bob, because she knows it wasn’t her. But it also means she can’t show the message from Bob to a third party and prove Bob wrote it, because she also has the symmetric key that in addition to verifying the message, could have been used to sign it. So Bob can deny he wrote the message.
Now, this most likely does not mean anything in court, but that is no reason not to use best practices, always.
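The deniability point is easy to see in code: with a MAC, verifying and forging are literally the same computation, so a valid tag only proves “someone holding the shared key wrote this”. A minimal sketch with Python’s stdlib, the random key standing in for the DH-derived secret:

```python
import hashlib
import hmac
import os

# Symmetric secret both Alice and Bob hold (derived via Diffie-Hellman in practice)
shared_key = os.urandom(32)

# Bob authenticates a message to Alice
msg = b"meet at noon"
bobs_tag = hmac.new(shared_key, msg, hashlib.sha256).digest()

# Alice verifies it -- but note she runs the *same* function Bob did
alices_tag = hmac.new(shared_key, msg, hashlib.sha256).digest()
assert hmac.compare_digest(bobs_tag, alices_tag)

# Since Alice could have produced bobs_tag herself, the tag convinces *her*
# the message came from Bob (she knows she didn't make it), but proves nothing
# to a third party. Either keyholder can equally "forge" any message:
forged_tag = hmac.new(shared_key, b"meet at midnight", hashlib.sha256).digest()
```

A digital signature, made with Bob’s private key alone, has no such symmetry, which is exactly why iMessage’s choice sacrifices deniability.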
The digital signature algorithm is ECDSA, based on NIST P-256 curve, which according to safecurves.cr.yp.to is not cryptographically safe. Most notably, it is not fully rigid, but manipulable: “the coefficients of the curve have been generated by hashing the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90”.
iMessage is proprietary: you can’t be sure it doesn’t contain a backdoor that allows retrieval of messages or private keys with some secret control packet from Apple’s server.
iMessage allows an undetectable man-in-the-middle attack. Even if we assume there is no backdoor that allows private key / plaintext retrieval from the endpoint, it’s impossible to ensure the communication is secure. Yes, the private key never leaves the device, but if you encrypt the message with a wrong public key (which you by definition need to receive over the Internet), you might be encrypting messages to the wrong party.
You can NOT verify this by e.g. sitting on a park bench with your buddy, and seeing that they receive the message seemingly immediately. It’s not like the attack requires that some NSA agent hears their eavesdropping phone 1 beep, and once they have read the message, they type it to eavesdropping phone 2 that then forwards the message to the recipient. The attack can be trivially automated, and is instantaneous.
So with iMessage the problem is, Apple chooses the public key for you. It sends it to your device and says: “Hey Alice, this is Bob’s public key. If you send a message encrypted with this public key, only Bob can read it. Pinky promise!”
Proper messaging applications use what are called public key fingerprints, which allow you to verify out-of-band that the messages your phone outputs are end-to-end encrypted with the correct public key, i.e. the one that matches the private key of your buddy’s device.
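A fingerprint is nothing exotic: just a short digest of the public key that two people can read aloud or compare as a QR code out-of-band. A minimal sketch with placeholder key bytes (real apps like Signal combine both parties’ identity keys into one safety number, but the principle is the same):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short human-comparable digest of a public key. If the digest Alice's
    phone shows for 'Bob' doesn't match what Bob's phone shows for itself,
    a key was substituted somewhere in the middle."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # group the first 32 hex chars into blocks of 4 for easy reading aloud
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

bobs_real_key = b"\x04" + b"\x11" * 64  # placeholder key bytes for illustration
attackers_key = b"\x04" + b"\x22" * 64

print(fingerprint(bobs_real_key))
# a substituted key yields a visibly different fingerprint
assert fingerprint(bobs_real_key) != fingerprint(attackers_key)
```

Because the comparison happens over a channel Apple doesn’t control (in person, over a phone call), a key server that lies about Bob’s key gets caught, and that is precisely the check iMessage historically gave you no way to perform.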
EDIT: This has actually seen some improvements made a month ago! Please see the discussion in the replies.
When your buddy buys a new iDevice, like a laptop, they can use iMessage on that device. You won’t get a notification about this; what happens in the background is that your buddy’s new device generates an RSA key pair and sends the public part to Apple’s key management server. Apple will then forward the public key to your device, and when you send a message to that buddy, your device will first encrypt the message with the AES key, and it will then encrypt the AES key with the public RSA key of each of your buddy’s devices. The encrypted message and the encrypted AES keys are then passed to Apple’s message server, where they sit until the buddy fetches new messages on some device.
Like I said, you will never get a notification like “Hey Alice, looks like Bob has a brand new cool laptop, I’m adding the iMessage public keys for it so they can read iMessages you send them from that device too”.
This means that the government who issues a FISA court national security request (stronger form of NSL), or any attacker who hacks iMessage key management server, or any attacker that breaks the TLS-connection between you and the key management server, can send your device a packet that contains RSA-public key of the attacker, and claim that it belongs to some iDevice Bob has.
You could possibly detect this by asking Bob how many iDevices they have, and by stripping down TLS from iMessage and seeing how many encrypted AES-keys are being output. But it’s also possible Apple can remove keys from your device too to keep iMessage snappy: they can very possibly replace keys in your device. Even if they can’t do that, they can wait until your buddy buys a new iDevice, and only then perform the man-in-the-middle attack against that key.
To sum it up, like Matthew Green said[1]: “Fundamentally the mantra of iMessage is “keep it simple, stupid”. It’s not really designed to be an encryption system as much as it is a text message system that happens to include encryption.”
Apple has great security design in many parts of its ecosystem. However, iMessage is EXTREMELY bad design, and should not be used under any circumstances that require verifiable privacy.
In comparison, Signal
Uses Diffie-Hellman + Kyber, not RSA
Uses Curve25519, which is a safe curve with 128 bits of symmetric security, not 79 bits like iMessage
Uses Kyber key exchange for post quantum security
Uses MACs instead of digital signatures
Is not just free and open source software, but has reproducible builds so you can be sure your binary matches the source code
Features public key fingerprints (called safety numbers) that allow verification that no MITM attack is taking place
Does not allow key insertion attacks under any circumstances: You always get a notification that the encryption key changed. If you’ve verified the safety numbers and marked the safety numbers “verified”, you won’t even be able to accidentally use the inserted key without manually approving the new keys.