Firefox was using self-hosted Mercurial + Git with syncing.
They just dropped Mercurial; they’re still not on GitHub.
Only misc. libraries and the Android frontend are on GitHub, and Firefox/Mozilla has never used GitLab.
Blacklists like these aggressively and unapologetically collect all privacy-focused email domains they find, including simple forwarding and tagging services. With more and more sites using these lists to reject or black-hole email addresses, it has become difficult to protect oneself from spam and cross-site account tracking.
Dear web developers, please don’t use these lists. Well-intended or not, they are privacy-hostile and user-hostile.
Devs can use them to block DISPOSABLE addresses, not legitimate PRIVACY-focused ones. That’s why it is critical to remove privacy-oriented email domains from such lists.
I’m okay with people using burner email addresses to get my free content, I just need to be able to filter them out of my list so it doesn’t drive up bounces and hurt deliverability.
AWS SES, for example, is fucking rabid about bounces. Being able to filter out addresses you know are going to bounce is pretty important.
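To make the deliverability use case concrete, here’s a minimal Python sketch (the file name and addresses are hypothetical; assume one domain per line, exported from a list like this one):

```python
# Prune addresses whose domain appears in a disposable-domains list
# before a campaign send, to protect the bounce rate.

def load_blocklist(path="disposable_domains.txt"):
    # One domain per line; normalize to lowercase for matching.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_disposable(email, blocklist):
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in blocklist

blocklist = load_blocklist()
subscribers = ["alice@example.com", "bob@mailinator.com"]
deliverable = [e for e in subscribers if not is_disposable(e, blocklist)]
```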
Can a list like this be used for anti-privacy measures? Absolutely! Does that mean we should never create lists like this? For me that depends on whether or not you think we should prevent encryption because bad actors can use it for bad purposes.
You’re getting into very sketchy territory by saying a dev who is using a public GitHub repo to solve their problems needs to take it down because of how others are abusing it. Should the original dev be punished by their email provider because they shouldn’t be allowed to use this? Should anything that has potential harm be required to be a private repo? Who gets to decide all of that?
In the interest of specifics, can you point to where this specific list has done harm? I spent a fair amount of time looking around to make sure I wasn’t going out on a limb for someone with neutral views.
You’re getting into very sketchy territory by saying a dev who is using a public GitHub repo to solve their problems needs to take it down
No, I don’t believe I said any such thing. Since you mention it, though, I think taking this list down and removing the false positives before bringing it back up would be the responsible thing to do.
In the interest of specifics, can you point to where this specific list has done harm?
I know from personal experience and investigation (both as a user and on the admin side) that there are now many cases of privacy-focused email addresses being rejected, or even worse, accepted and then silently black-holed, due to the domains being inappropriately added to lists like this one. I don’t know of a place where people report such cases so they can be documented in aggregate, but if I find one, I’ll be sure to bookmark it in case your question comes up again in the future.
So you’re lumping this resource into a bucket with other resources that were malicious, but you have no direct connection from this resource to the harm you claim it causes? You’re saying a dev using this list to let people download free content while pruning emails to save their bounce rate is doing bad things and needs to convert their FOSS use case to yours?
Who gets to decide? You didn’t answer that, and in the interest of good faith I’ll single that one out as the important question, since it follows from the argument I feel you’re making.
You’ve ignored my questions attempting to flesh out your point and refuse to link this specific list to anything bad. I don’t think you understand good or bad faith. Good luck with that!
I feel like having different attributes for each domain might be helpful, so that services using the list can filter for just the things they care about, such as burner emails, anonymous registration, whether it requires any email/phone verification, etc. Right now domains kind of have the problem of just being on the list or not, with no indication of why they might be a problem.
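Something like this, say; a sketch with made-up domains and field names:

```python
# Hypothetical attribute-tagged entries: each domain carries flags so
# consumers can filter for only the things they care about.
domains = [
    {"domain": "burner.example",  "burner": True,  "forwarding": False, "verified_signup": False},
    {"domain": "private.example", "burner": False, "forwarding": True,  "verified_signup": True},
]

# A mailing-list operator worried about bounces would block only burners,
# leaving forwarding/tagging services alone:
blocked = {d["domain"] for d in domains if d["burner"]}
```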
The beauty of open source code is that you can fork this project and add that. The repo maintainer seems to have a simple litmus test for whether or not something should be on the list: is it something that will cause a bounce for email distribution? That’s a really subjective test so you kinda have to talk to the repo maintainer about answering it. I suspect they feed it into a library, perhaps one of the ones linked, for use with their platform, so their problem is most likely solved.
Oh, I’m sure there will be. It will be technically difficult (but not impossible) for them to allow other app stores and sideloading while keeping the hardware and software different enough between the two markets that nothing slips through.
I suspect there will be lots of hacky shit for this.
I remember there was an identifier based on the model number. FaceTime wasn’t allowed in the Middle East for a while, and there was a way to tell whether a model would support it based on the last character after the / in the model number. Middle East models wouldn’t even have the app at all.
Probably they’ll do the same for models sold in the EU.
There are already hardware variants of the same iPhone. I think the US gets an iPhone with all eSIM, and China has two physical SIM slots.
Literally all I had to do to make my phone have usable performance again was set the region to France and the language to English. I should add that it was totally fine before an update.
SSL/TLS (the “S” in HTTPS) and other network encryption protocols such as SSH use a technique called a Diffie-Hellman key exchange. This is a mode of cryptography where each side generates two keys: a public half and a private half. Anything encrypted with the public half is only decryptable by the associated private half (and vice versa).
You and YouTube only ever exchange the public halves of your respective key pairs. If someone snoops on the key exchange, all they can do is insert spoofed messages, not decrypt real ones.
Moreover, the key pairs are generated on the fly for each new session rather than reused. This means that even a future compromise of YouTube won’t unlock old sessions. This concept is called forward secrecy.
Message spoofing is prevented by digital signatures. These also use the principle of public/private key pairs, but with separate, longer-term key pairs than those used for encryption. The public half of YouTube’s signing key, as presented by the server when you connect to it, has to be digitally signed by a well-known public authority whose public signing key was shipped with your web browser.
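If it helps to see the key-exchange part in code, here’s a rough sketch using Python’s cryptography library; a simplified stand-in for what TLS actually negotiates, not the real handshake:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh (ephemeral) key pair per session: forward secrecy.
client_private = X25519PrivateKey.generate()
server_private = X25519PrivateKey.generate()

# Only the public halves cross the wire; a snooper sees just these.
client_public = client_private.public_key()
server_public = server_private.public_key()

# Each side combines its own private half with the other's public half
# and arrives at the same shared secret.
client_shared = client_private.exchange(server_public)
server_shared = server_private.exchange(client_public)
assert client_shared == server_shared

# The shared secret is then run through a KDF to derive the session key
# that actually encrypts the traffic.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"toy handshake").derive(client_shared)
```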
This is a very detailed answer, thank you. However, I face an ambiguity regarding this:
This is a mode of cryptography where each side generates two keys: a public half and a private half. Anything encrypted with the public half is only decryptable by the associated private half (and vice versa).
How can this private half be something that I know and YouTube knows, but that is impossible for someone snooping on our communication to know??
YouTube never knows the private half of your key pair. That never leaves your system.
Anything encrypted with the private half can only be decrypted with the public half, and anything encrypted with the public half can only be decrypted with the private half. These halves are known as the public key and the private key. Each side of the connection generates their own key pairs.
We both generate a set of keys and exchange the public halves with each other. Say I then want to send you a message: I first encrypt it using my private key, then encrypt it again using your public key, and send that to you.
In order to read that message, you first decrypt it using your private key. This ensures the message was intended for you and wasn’t modified in transit, as you are the only one with access to that private key and only its matching public key could have been used to encrypt that layer.
You then decrypt it a second time using my public key. As I’m the only one with access to my own private key, you can be sure the message was sent by me.
As long as that resulted in a readable message, you’ve now verified who sent the message, that it was intended for you, and that the contents have not been modified or read in transit.
All this, including the key exchange, is handled for you by the HTTPS (TLS) protocol every time you connect to a website. Each of the messages sent between you and the site is encrypted in this manner.
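In real implementations the two steps are separate “sign” and “encrypt” operations rather than literally encrypting twice, but the idea is the same. A rough sketch with Python’s cryptography library:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"meet me at noon"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "Encrypt with my private key" = sign with the sender's private key.
signature = sender.sign(message, pss, hashes.SHA256())

# "Encrypt with your public key" = encrypt to the recipient.
ciphertext = recipient.public_key().encrypt(message, oaep)

# The recipient decrypts with their private key, then checks the signature
# against the sender's public key; both steps must succeed.
plaintext = recipient.decrypt(ciphertext, oaep)
sender.public_key().verify(signature, plaintext, pss, hashes.SHA256())
```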
The best way I find to think about it is a padlocked box.
The public key is a box with an open padlock on it. I can give it to anyone. If someone puts a message inside the box they can lock the padlock, but they don’t have the key to open it again.
I keep the key private. If someone sends me a locked box that has my padlock on it, only I have the key to open it and read the message.
Anything encrypted with the private half can only be decrypted with the public half, and anything encrypted with the public half can only be decrypted with the private half.
This is not true. In key-pair cryptography, the public key is used only for encryption and the private key is used only for decryption.
No, it isn’t bidirectional: public = encrypt, private = decrypt, that’s it. You can address a message to multiple recipients, though (when using GPG), so in the case of email a message is often addressed both to yourself and to your recipient, so both of you have access to the message text.
You’re not mistaken; it is definitely possible with at least RSA, though I would guess it may not always be possible. It also sounds like it’s still a bad idea unless you know all of the parameters used to generate the keys and can be sure what information is actually encoded in them.
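Right; with textbook RSA the two exponents are interchangeable at the math level, which is why signing is often described as “encrypting with the private key.” A toy demonstration with the classic (wildly insecure) book-example numbers:

```python
# Toy textbook RSA, for illustration only; never use parameters this small.
p, q = 61, 53
n = p * q                  # 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # 2753, private exponent (Python 3.8+ modular inverse)

m = 65
# Normal direction: encrypt with the public key, decrypt with the private key.
assert pow(pow(m, e, n), d, n) == m
# Reversed direction: "encrypt" with the private key, recover with the public.
# This is the raw math underneath RSA signatures.
assert pow(pow(m, d, n), e, n) == m
```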
Your computer generates two keys. One to encrypt a message. One to decrypt the message. The encrypt key is public. The decrypt key is private. Your computer shares the public key with YouTube. The private key is never shared.
YouTube does the same thing for your computer.
Your computer will have YouTube’s public key and your computer’s private key…
Your computer will be able to encrypt messages to send to YouTube that only YouTube will be able to decrypt. Even your computer will not be able to decrypt these messages after it has encrypted them using YouTube’s public key.
Since the decryption keys are never shared, they can’t be snooped. That is why an attacker can only encrypt new messages, not read messages from either side.
When you have privacy settings, what you really have is a lie.
It starts out with good intentions, like those in this post, but eventually everyone forgets that the platform still sees your posts and does not give a shit about selling them.
I would rather acknowledge from the very beginning that this entire system is not private, so there is never such a misunderstanding.
Everyone should post and comment with caution, just like you use caution with what you say in public places.
The way you use caution saying something in a public place that you don’t want everyone to hear is by keeping your voice down so that only certain people can hear it. Without privacy settings there is no equivalent to that.
Yup. And all this data would still be federating; it has to be. That just means some data-collecting company could make a fake instance and get everything together. Or someone could just fork it back.
How exactly does a jury trial work in a case like this? Aren’t juries supposed to be “peers” of the accused? How can a corporation be tried by a jury of its peers?
GPS, mobile network tracking, IP, the region the device is sold in (US iPhones have a block of plastic where everyone else has a SIM card slot), App Store region.
Also VPN, and a faked App Store region. If detection happens during download/install, also RF shielding to block GPS and the mobile network (for a download, you’d also need a Wi-Fi signal inside the shield to download anything at all).
Lots of workarounds for lots of possible detections.
That doesn’t answer the question. Sure, in isolation, the Android app ecosystem isn’t ideal. But it’s so, so much better at allowing competition than the Apple one.
From what I read about it, Apple has a walled garden but charges a flat fee for everyone and has no special deals. Everyone pays the same, and Apple makes a little money off the store but also off the hardware sold.
Google, by contrast, has been caught treating certain parties differently, such as Spotify, and running something called Project Hug, where they gave extra benefits to parties at risk of leaving the Play Store, among other unequal dealings.
So the crux of the question is not about the monopoly itself, but the fact that Google is treating market players differently and throwing its weight around to influence the market to its advantage.
I believe not. I’m running Firefox with the arkenfox user.js, and when I take this test (www.bromite.org/detect) it shows a new, different fingerprint every time I close and reopen the browser. Feel free to try it for yourself.
And while Brave may be private from outsiders, it is far from private from Brave Software themselves, and to be honest I wouldn’t trust them. If you want an alternative Chromium-based browser, check out Vivaldi. They don’t have aaaas many privacy features built in as Brave does, but you can still get very private, and obviously tack on uBlock Origin and a customized DNS blocklist like you normally would with any other browser. And they are significantly more trustworthy than Brave.
Maybe Cromite (the main Bromite fork) would be better. Vivaldi isn’t great, but it also isn’t Brave. Cromite allows blocklist importing and user scripts, and it’s on desktop Windows as well.
I’d say a normal phone is a lot worse than smartphones in general, unless you don’t care about all your communications being readable by the carrier. With a smartphone you can make actually encrypted calls and texts over trustworthy applications/protocols (Signal, Matrix, SimpleX, etc.); on a normal phone you’re stuck with the carrier service. Another thing that comes to mind is storage: as far as I know there are no normal phones with an encrypted filesystem, while it has long been the default on Android.
On the other hand, if your new smartphone isn’t loaded with a privacy-respecting ROM, you’ll also have at least some data sent to third parties like Google and whatnot. But if you can change the ROM, the potential for better privacy far outweighs the benefit of normal phones doing fewer things with your data by default. If you’re going to use your new smartphone like an old phone, just for carrier calls and SMS, then there will be next to no improvement (except maybe storage security) and, as you say, more data snooping.
A normal phone doesn’t have AGPS downloading ephemeris data (edit: they may today, I haven’t looked into it for a while), doesn’t have Google services tracking everything, and doesn’t have third-party apps phoning home.
I’d say a smartphone is way worse by default; it has far more data collection out of the box, even without an account. Every data point a feature phone has, a smartphone has, plus more.
Voice calls and SMS use the exact same infrastructure in exactly the same way on both types of phones.
But it can be mitigated quite a bit on Android by not using an account on it and disabling GPS, Wi-Fi, and Bluetooth.
They could also debloat it to reduce some of the background nonsense (Universal Android Debloater has a “safe to disable” list). (I’m assuming it’s not an unlocked Pixel or a phone that’s on the Lineage list.)
If they don’t care about apps, I’d even add NoRoot Firewall, configure it to be always on, and set it to block all network access by default. That would be a Global Pre-Filter using an asterisk (*) for both the address and port fields, with both the Wifi and Cell boxes checked (system apps will still have network access; this only affects user apps on a non-rooted phone).
Other than root or flashing a custom OS (like Lineage or Divest, or Graphene if they were lucky enough to get an unlocked Pixel), this is about the best that can be done.
Google and hardware manufacturers aren’t motivated to make open devices. Quite the opposite, really.
They learned their lesson from the BIOS wars of the ’80s, which resulted in a standardized hardware interface so that any compliant OS could be installed. This is what gave Microsoft the ability to beat IBM at their own game, and it prevented strong DRM.
Phones don’t have a standardized BIOS like that, so each brand requires drivers built specifically for it (also partly a result of using Linux as the base, since it has a monolithic kernel). Without those drivers you can’t install an OS, and each device is different.
Google and friends like it this way; their long-term goal is fully locked-down phones that you don’t control and can’t modify, so they can fully implement DRM.
Years ago, I worked for a company that provided phone location for emergency services (fire, police, medical) to the big three cellular companies in the US. It required cell providers to install special hardware; back then GPS was less ubiquitous, and it (still) suffers from accuracy problems in urban environments; it doesn’t take much to block GPS signals. Also, you don’t need access to anything more than the service provider’s logs to do trilateration, whereas it’s harder to get GPS data from a phone without having software on the phone. In any case, Google pioneered getting around that by mapping wifi signals and supplementing poor GPS with trilateration, and it was good enough. Even back then, our lunch was being eaten by the cost of our systems and by workarounds like wifi mapping.
Anyway, fast forward a decade, and I’m working for a company that provides emergency support for customers who are traveling, and we’re looking at ways to locate customers’ business phones to provide relevant notifications. One of the issues was that there are places in the world where data connections are not great, and it was not acceptable for us to just ignore clients without data connections. One of the things we explored was called zero-length SMS. It’s what it sounds like: an SMS message with zero length does not alert the phone, but it does cause the phone to ping the network. It was an idea that didn’t pan out, but that’s not relevant here.
Cell phones have a lot of power-saving algorithms that try to reduce the amount of chatter, both to reduce load on cell towers and because all that cellular traffic is battery-intensive. So, if you’re a government trying to track a phone, and you’re working with a cell provider, and you don’t have a backdoor in the phone, then you will be able to see which cell tower the phone last spoke with, but that probably won’t give you very good location data, and it may not update frequently. This is especially true in rural environments, where density is low and a single cell tower might have a service radius of three miles; that’s a lot of area.
If you’re tracking someone by phone, a normal cell connection may not be granular enough. Sending SMSes to a phone can force it to ping the tower and give you more data points about where the phone may be, how it’s moving, and so on. If you’re lucky, you can get pings from multiple towers, which might allow you to trilaterate to within a dozen meters.
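The trilateration itself is just geometry: with tower positions and range estimates you solve a small least-squares system. A toy sketch with made-up coordinates:

```python
import numpy as np

def trilaterate(towers, dists):
    """Least-squares 2D position from tower coordinates and range estimates."""
    towers = np.asarray(towers, dtype=float)
    dists = np.asarray(dists, dtype=float)
    # Subtract the first range equation from the rest to linearize:
    # 2(xi - x0)x + 2(yi - y0)y = (xi^2 + yi^2) - (x0^2 + y0^2) + d0^2 - di^2
    A = 2 * (towers[1:] - towers[0])
    b = (np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2)
         + dists[0] ** 2 - dists[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three towers (km) and noisy range estimates place the phone near (3, 4).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 8.06, 6.71]))
```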
Push notifications use data, but I wouldn’t be surprised if there’s some of that going on too. It says “through Apple and Google’s servers,” which means they’re talking about the push notification servers, not the phones. Android phones are constantly sending telemetry back to Google anyway, so if that’s what they’re doing, sending push notifications is probably more useful to them for Apple phones.
The article is light on details, but that’d be my guess. Forcing traffic to get more frequent cell tower pings and more data points for trilateration.
I’ve just been reading up on this: they’re basically using the push device ID to see when certain devices are receiving data and from which apps. It sounds like more work than it’s worth, but it’s clearly something that’s being used widely.
It doesn’t seem like a huge stretch. If somebody had a stored collection, and didn’t share the server with anybody, why not point Plex at that folder? There’s even an *arr for it, so it fits right into the usual stack.
I understand your point of view, and I share that philosophy to some degree. However, nothing is a guarantee; a high degree of certainty is achievable. But that doesn’t answer my question: is there a messaging platform with a panic button that deletes the chat and call logs for all users involved and that can be triggered by any member?
Edit: wording and update. This got downvoted because of a misinterpretation of what I was saying when I said “high degree of certainty.” All I meant was that this isn’t supposed to be a foolproof blanket feature, and the world doesn’t run on absolutes, of course. For instance, Signal works with a high degree of certainty that you’ll be secure. I was conveying that it’s highly probable this feature, under the correct parameters, would function correctly. Simply a step in the chain of failsafes. Nonetheless, thanks for your replies.
I wouldn’t agree with that. What’s stopping the other user from screenshotting it? Taking a photo with another device? Or even simply disconnecting from the network so the device can’t even receive the “kill switch” command?
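Exactly; remote deletion is only ever a polite request to the other client. A sketch of why (hypothetical message handler, not any real app’s protocol):

```python
# Hypothetical client-side handler: "delete everything" is just another
# message type that well-behaved clients choose to honor.
def handle_message(msg, store):
    if msg["type"] == "panic_delete":
        store.clear()   # only runs if the client is online, unmodified,
        return          # and willing to comply
    store.append(msg)

# An offline device never receives the command, a modified client can
# simply ignore it, and nothing undoes a screenshot taken beforehand.
```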