The dev in the link gathers address data and inserts it into OBF map files. You download each one you need, then put the OBF file in OsmAnd's Android data directory, where your map files usually go: https://github.com/pnoll1/osmand_map_creation/releases

There's also an app called AddressToGPS that lets you look up an address, converts it into GPS coordinates, and lets you open it in any map app. However, AddressToGPS uses Google as a back end. I've found that dev's address lookup to be so robust that there's almost never a time when I need to use a different one.
I meant the developer in the link I posted. OBF files are just the file type the maps come in. The Android directory is what you see when you go into your Android file manager: there's a list of folders like Android or Downloads. When you open the Android folder, the next folders are data, media, and obb.
Why not just use an app? That's pretty much what AddressToGPS does, and it's very easy. But for some people it's a deal breaker that it uses Google as a back end for data. The other thing is that it requires a data connection to work.
So AddressToGPS is way easier, but it requires a data connection. If you're going hiking or camping and can't get cell service, for example, AddressToGPS wouldn't work, but those map files preloaded with address data (the OBF map files) would. They don't require a connection.
The link I posted: the dev has inserted address data into the map file for every state in the US plus some other countries. You download the OBF file for each state you need. Then, in the Android data subdirectory in your file manager where OsmAnd puts its own OBF map files, you replace theirs with these. You'll have to use either a third-party file manager app or a computer. If your maps are on the SD card, it'll be the same directory but under the SD card.
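As a rough sketch, this is the on-device directory layout described above. The package id and the file name here are assumptions for illustration (the free OsmAnd app uses `net.osmand`, the paid OsmAnd+ uses `net.osmand.plus`; the actual OBF file names come from the release page):

```python
# Sketch of where OsmAnd expects OBF map files on internal storage.
# PACKAGE and obf_name are assumptions, not taken from the linked release.
from pathlib import PurePosixPath

PACKAGE = "net.osmand.plus"  # assumed: OsmAnd+; the free app is "net.osmand"
obf_name = "US_Washington_northamerica.obf"  # hypothetical example file name

dest = PurePosixPath("/sdcard/Android/data") / PACKAGE / "files" / obf_name
print(dest)
```

If your maps live on the SD card, the same `Android/data/<package>/files` path applies, just rooted at the SD card instead of internal storage.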
My guy, you posted like 74 comments, so maybe I missed the one with the link in it.
I see no reason to go through the trouble of dealing with these workarounds when it’s simply not a problem for other apps and when I can already find the data with GW Maps.
1984.hosting is great, I’ve been using their service for a couple of years now. They’re based in Iceland (really strong privacy laws) and have options for crypto payment if you don’t want to reveal yourself through your payment method. As with all registrars, they’ll need an email address (or alias) to reach you at in case there’s a domain dispute, and while they also ask for address and phone number, they’ve never had me actually verify anything beyond the email. If you give a fake address and phone number, then you’ll just need to understand that if someone challenges your domain, it will be very difficult for you to prove ownership with fake details (not as if that’s likely to happen unless you’re allowing the site to be crawled by a search engine though). I only have a domain through them, not a hosted webserver, but they seem to have good options for hosting. I know that they handle Let’s Encrypt certs automatically for hosted sites, and they run off green energy (geothermal) if that matters to you.
The interest might be theirs, but the "legitimate" part absolutely has to incorporate a written justification somewhere within the depths of the mandated records of processing activities that explains why the business/institution couldn't possibly do what they're doing without processing that particular piece of user data. "I want that" is not legitimate interest in the sense of Article 6.
Agreed. But in practice they may claim they use such data to improve their systems, which is a valid legitimate-interest justification. Still, it provides no benefit to the users from whom the data are collected, while at the same time increasing their risks (such as mishandling of their data, which is common, since it's very difficult to handle data 100% correctly).
This is a great idea! I wish more websites did warrant canaries, and those that do often fail to maintain them or plan for the case when a gag order prevents them from updating an existing canary. The only thing I would suggest is making it more clear that being in an alpha stage means that the product should not be relied upon in critical situations.
A failed warrant canary is effectively a triggered warrant canary. If it's triggered, you have to assume the company has been issued a warrant and is therefore vulnerable.
What do you mean by a failed warrant canary? In most cases there is no clear failure because there’s no clear plan in place to maintain them.
For example, say a website has the statement "we have received 0 warrants." When was that published? Yesterday? A year ago? Longer? Even if it has a date, say 6 months ago, what does that mean? That they only update it every year? Maybe they meant to update it and just forgot, or maybe they aren't allowed to update it due to a gag order.
Due to the way each website does things differently with no clear guidelines, there isn’t actually a defined failure case.
While you can find examples of companies doing it correctly, it's also easy to find companies that do not. Also, some update theirs seemingly daily but don't actually state this. Sure, you can check and see that it was updated today, but what if it doesn't get updated and you don't know it's typically updated daily? Again, there's no date for the next update.
These are all examples of companies who do not explicitly specify when the next update will be: kagi.com/privacy nordvpn.com/security-efforts/ cloudflare.com/transparency/
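The "next update" idea discussed above can be sketched as a client-side deadman-switch check. This assumes the canary publishes an explicit "next update by" date, which, as noted, most real canaries do not:

```python
# Sketch: treat a canary that missed its promised update date as triggered.
# The dates here are hypothetical examples.
from datetime import date

def canary_is_stale(next_update_by: date, today: date) -> bool:
    """True if the canary is past its promised update date (assume triggered)."""
    return today > next_update_by

# Promised an update by 2023-01-01, but it's now 2023-02-01: assume triggered.
print(canary_is_stale(date(2023, 1, 1), date(2023, 2, 1)))  # True
```

The point is that the check only works at all when the canary commits to a date up front; without one, "stale" is undefined.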
Someone please correct me if I am wrong, but I was under the impression that warrant canaries were a broken concept. Anyone with the power to submit a warrant to a company also has the power to prevent the company from triggering its canary.
The idea is that there is no such action as “triggering the canary” that the government can stop them from taking. Instead they refrain from updating it, thus alerting people that something has occurred. However, since the point of a canary is that not updating it raises concerns, I’m not sure how this service makes any sense (alerts on new canaries?).
The idea is that there is a big difference between the government saying “don’t tell anyone about this” and saying “you must make a false statement (the canary) every X amount of time indefinitely.” In the past courts in the US have taken a fairly dim view of the government trying to compel speech. There are some example cases at en.m.wikipedia.org/wiki/Compelled_speech#United_S….
That actually could be useful, by having a completely external company send a notification without action by the company receiving the warrant, it may be possible to circumvent the prohibition on alerting users.
None of those compelled speech examples involve national security though, which has its own level of rules and courts. (I am not American or a lawyer, so I may be wrong.)
And if a company can be compelled to hand over customer data, why couldn't they be compelled to hand over access to the systems that update the canaries?
The other issue is that once a canary is triggered, it can't be reset, which means that some agency can trigger the canary with something meaningless, and then it's forever untrustworthy.
You may well be correct and they are sufficient, but I am not convinced that canaries work, especially against higher-level adversaries.
Yes, most of those points are the concerns with warrant canaries. So far as we know the concept is totally untested in court so it’s hard to say what the result would be until it happens.
Updating the canary should require human input (like a password to unlock the GPG key), which is not something the government would generally get access to (they make a request for data about XYZ user, and the company turns it over; they wouldn't get actual access to the production system). The government could seek a ruling to force the company to update the canary, but as such a thing hasn't been granted before (at least as far as we know), it's not a guarantee. So there is a chance that the warrant canary will serve to alert users that something is happening, which is better than nothing. But because of its untested nature, it might be broken by a court.
I’m not sure I understand your point about “once it’s triggered it can’t be reset.” If a company fails to update their canary on schedule it means something happened that they can’t disclose. Once they are released from the NDA they can release a new canary explaining what happened.
Wikipedia does claim that Patriot Act subpoenas can penalise any disclosure of the subpoena. But I am not a lawyer, and AFAIK this is untested (or at least undisclosed :/ )
In September 2014, U.S. security researcher Moxie Marlinspike wrote that “every lawyer I’ve spoken to has indicated that having a ‘canary’ you remove or choose not to update would likely have the same legal consequences as simply posting something that explicitly says you’ve received something.”
I think my point is that a gag order with a long timeout essentially kills the canary, even if it doesn't affect the vast majority of the service's users.
Thanks for your response though, I appreciate the additional information.
I wonder where mandated sonograms and "abortions are bad" disclaimers to patients seeking abortions fall.
That speech is mandated, yet SCOTUS barred California from mandating that crisis pregnancy centers tell patients "you cannot get an abortion here, but you can call these numbers to schedule one."
Lots of controversies outside the topic of the thread, but certainly examples of mandated speech and rulings to prevent mandated speech.
I think that's the purpose of the "next update" part. As long as the ability to refresh that timestamp is gated behind a passphrase (for Fifth Amendment protection), it functions as a deadman switch for the canary.
Passphrases only work in locales with Fifth Amendment or similar protection, and they either have to be managed by a single person or have the potential to be leaked.
Great for small businesses, but unworkable at the enterprise level.
But having a canary mechanism for smaller businesses is crucial, because they can’t afford to put a wall of lawyers between them and potential government overreach.
The canary is triggered through inaction, not action. The government would have to compel the target of the subpoena to keep updating the canary on schedule.
The table in the ACLU report is kind of interesting. I mean, I was confused by the "could be shared with law enforcement" and "could be used to discipline me or my friends" numbers, but then seeing the items about identifying students seeking trans/reproductive health care makes those amounts completely understandable, as does the undocumented-students item.
Table 1: Students' Concerns About School Surveillance (Source: YouGov, School Surveillance, fielded October 20-26, 2022; commissioned by the ACLU)

- I always feel like I'm being watched: 32%
- How it could be used to discipline me or my friends: 27%
- What our school and companies they contract with do with the data (such as sell it, analyze it, etc.): 26%
- How it limits what resources I feel I can access online: 24%
- Could be shared with law enforcement: 22%
- Could be used against me in the future by a college or an employer: 21%
- Could be used to identify students seeking reproductive health care (such as contraception or abortion care): 21%
- Could be used to identify students seeking gender-affirming care (such as transgender students seeking hormones): 18%
- Could be used against immigrant students, especially those who are undocumented: 18%
- How it limits what I say online: 17%
- Could be used to "out" LGBTQIA+ students: 13%
- I have no concerns regarding surveillance in my school: 27%
Hell, I finished school over a decade ago now, but even as an adult, I feel like I'm being constantly watched. This kind of overreaching, omnipresent surveillance is genuinely not good for individuals and by extension, society at large. Human beings do not act naturally when they feel their every move is being watched. Anxiety, distrust, paranoia, depression, etc. can all manifest, and it scares me to know that this kind of "for your safety" surveillance has become so normalized.
It isn't normal. It is affecting the average person's mental health, even if they don't know it. It is affecting society at a very base level as a result. What a world...
Note: Found the one big thing I wanted in the ACLU stuff, but I'm not reading through the Vice News report at this moment: As Vice News reported, "The few published studies looking into the impacts of [student surveillance] tools indicate that they may have the effect [of] breaking down trust relationships within schools and discouraging adolescents from reaching out for help."[83] Ironically, the same tools the EdTech Surveillance industry is promoting as a means for identifying students in need of help may actually be discouraging those students from reaching out to school officials and other adults for help when they need it.
privacyguides