Why Do Android Flashlight Apps Need Dozens of Permissions?

This site may earn affiliate commissions from the links on this page. Terms of use.

No one should be downloading a flashlight app in the Year of Our Lord 2019 — that’s why both Google and Apple have integrated the ability into their devices as part of the base operating system. Avast security researcher Luis Corrons decided to evaluate the security of flashlight apps after the wave of concern around the Russian-owned FaceApp software. According to his work, there are still 937 flashlight applications on Google Play, despite the fact that flashlight capabilities are baked into the Android OS. Many of these applications request far more permissions from end users than they ever need to function.

Instead of being limited to the functions you’d expect a flashlight to need (access to the LED flash itself, internet access to download ads, and lock-screen access so the flashlight can be toggled without unlocking the device), many of these apps request far more. The average number of permissions requested per app is 25. While 408 applications request 10 permissions or fewer, 262 of them require 50 permissions or more. The table below shows the worst offenders:

Now, just because an application is requesting a lot of permissions doesn’t necessarily mean it is requesting them for nefarious purposes. But when Corrons dug deeper, the issues kept getting worse. A massive number of applications request permission to kill background processes, access your fine-grained location data, control Bluetooth connections, record audio, download data without notification, and write to your contacts list. A few even process incoming calls.
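The gap between what a flashlight plausibly needs and what these apps demand boils down to a simple set difference. Here is a rough Python sketch; the permission strings are standard Android manifest permission names, but the baseline set and the sample app are illustrative, not drawn from Avast's data:

```python
# Sketch: flag permissions a flashlight app plausibly doesn't need.
# The baseline and the sample request list are hypothetical examples,
# not figures from the Avast study.

FLASHLIGHT_BASELINE = {
    "android.permission.CAMERA",       # needed to drive the LED flash
    "android.permission.FLASHLIGHT",   # legacy flash permission
    "android.permission.INTERNET",     # ad delivery
    "android.permission.WAKE_LOCK",    # keep the light on with the screen off
}

def excessive_permissions(requested):
    """Return the requested permissions outside the plausible baseline."""
    return sorted(set(requested) - FLASHLIGHT_BASELINE)

requested = [
    "android.permission.CAMERA",
    "android.permission.INTERNET",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.KILL_BACKGROUND_PROCESSES",
]

print(excessive_permissions(requested))
# Prints the location, audio-recording, and process-killing permissions,
# none of which a flashlight needs.
```

Run against a real flashlight app's manifest, anything this returns deserves an explanation from the developer.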

As Corrons discusses, the reason these apps request such ludicrous permissions isn’t because they’re actually trying to hook you up with Nigerian princes with large fortunes to dispose of. It’s undoubtedly so they can gather data and then sell it to other firms as part of their efforts to endlessly monetize all of human existence. He steps through how some of these apps come from studios with multiple apps boasting millions of downloads apiece on the app store. All of the apps require the same invasive permissions, and they’re almost certainly funneling data to the same invisible group of partners.

Google, of course, could stop this kind of garbage in its tracks by forcing app developers to only request permissions that they can plausibly prove they need, and by tightening the approval process to make this kind of rampant data-collecting against its own terms of service. Google doesn’t, because that would alert people to how much of their own daily device usage is uploaded to third-party corporations in the first place. The companies that take advantage of loose user permission requirements aren’t exploiting a loophole; they’re using the system in the manner in which it’s intended to operate. Corrons notes that it’s extremely important for users to be aware of what kind of permissions their applications request. This is true, but it also puts the burden of fixing the problem solely on the end user.

Google has allowed its app store to be abused by people who are running massive data harvesting regimes — and it’s on Google to fix that problem, not end-users. Nobody should be downloading a flashlight app on a modern device. But Google shouldn’t be allowing applications to request permissions that they have no business requesting, either.


DMVs Are Selling Data to Private Investigators, Marketing Firms


A new report shows that the Departments of Motor Vehicles (DMVs) in many states are taking full advantage of the modern information economy, and they’re making bank doing it. The data we’re required to hand over by law in order to qualify for a driver’s license is being used for very different purposes than you likely intended. Specifically, it’s being sold to private investigators.

That’s the result of a major Motherboard investigation into how DMVs are using the personal data of the citizens they supposedly serve. Like a lot of companies these days, DMVs sell data. Insurance companies buy some of it, but much of it goes to other buyers, like private investigators. Such data is apparently popular for surveilling cheating spouses, and the private investigators who advertise such services are among the major purchasers.


Data and graph by Vice

Multiple DMVs stressed that they don’t sell social security numbers or photographs, as if this represents some kind of meaningful protection. Some contracts with these investigators are for bulk searches; some are targeted searches. The cost per search is as low as $0.01, and these contracts can run for months at a time.

“The selling of personally identifying information to third parties is broadly a privacy issue for all and specifically a safety issue for survivors of abuse, including domestic violence, sexual assault, stalking, and trafficking,” Erica Olsen, director of Safety Net at the National Network to End Domestic Violence, told Motherboard in an email. “For survivors, their safety may depend on their ability to keep this type of information private.”

All of this is perfectly legal, thanks to the Driver’s Privacy Protection Act, which was passed in 1994. While that law was specifically intended to increase the protections surrounding DMV databases, it included specific carve-outs for private investigators. Granted, the text of the law states that private investigators are only allowed to access these records for a “permitted” DPPA use, but apparently that’s not an issue.

The exact data sold varies from state to state, but it typically includes at least a name and address. Other data, including ZIP codes, phone numbers, dates of birth, and email addresses, may also be included depending on the state. DMVs also sell data to credit reporting companies like Experian and LexisNexis. Delaware has arrangements with more than 300 entities. Wisconsin has more than 3,000.

Why are DMVs going down this road? Money. Delaware brought in $384,000 for itself between 2015 and 2019, while the Wisconsin DMV brought in $17M in 2018 alone, up from just $1.1M in 2015. In Florida, the DMV made an eye-popping $77M just in 2017. The contracts with various DMVs explicitly state that the purpose of these agreements is to generate revenue, and the states are aware that some of the information they sell to third parties is abused. Whether their controls for catching and locking abusers out of these systems are adequate is an entirely different question.

It is long past time for the United States to pass better privacy laws. There is absolutely no justification for the current free-for-all. There is no standard for how data-sharing agreements should be overseen. Local investigations have found that Florida is selling data to marketing firms, not just private investigators, and some citizens have been hit with an onslaught of robocalls and spam as a result. Florida sells data to Acxiom, one of the largest data brokers in America. Acxiom is not a PI firm, just in case you were wondering. Citizens who have been slammed with robocalls, direct mail, and even door-to-door salesmen showing up at their homes as a result of this relentless data-selling have no recourse. There’s no one to complain to, there’s no way to get taken off the lists, and there’s no way to prevent their own data from being endlessly sold. Robocalls have become such an epidemic that people now actively avoid answering the phone unless they know the number of the person calling them.

People often ask questions like “Why should I care if someone sells my data?” without connecting the question to the fact that they get 15 robocalls a day. Sexual assault and domestic violence survivors may not have the luxury of that indifference. But privacy shouldn’t be a right that depends on whether someone is threatening to harm you physically. Privacy should be the default state, particularly when it concerns the government organizations virtually all of us are required to interface with.

If you ever drive in the United States, you must have a driver’s license. Just as with credit reporting agencies, none of us get any choice in the matter. The legal system allows states and the federal government to create effectively mandatory standards because it recognizes that doing so helps ensure the safety of everyone. But if the legal system is going to require that citizens submit data to the federal and/or state government for licensing and registration purposes, it ought to simultaneously require that said data is kept private and only accessed under strictly controlled conditions. The idea that people “opt in” to these practices simply by existing has been stretched past the breaking point. It’s time for a change.


Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity – gpgmail

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems that protect privacy by aggregating data and adding noise to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high dimension data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
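His premise is easy to see in a toy simulation. The sketch below is a generic noisy-counts system, not Diffix: a server that answers the same count query with fresh random noise each time hands the true value to anyone patient enough to average many answers.

```python
# Toy illustration (not Diffix): a system that answers the same count
# query with fresh Gaussian noise each time leaks the true count to an
# attacker who simply averages many answers.
import random

random.seed(0)
TRUE_COUNT = 42   # hypothetical sensitive count the system should hide

def noisy_answer():
    return TRUE_COUNT + random.gauss(0, 5)   # sigma=5 noise per answer

# The noise averages out: with 10,000 answers the standard error of the
# mean is 5 / sqrt(10_000) = 0.05, far smaller than one count.
estimate = sum(noisy_answer() for _ in range(10_000)) / 10_000
print(round(estimate))   # converges on the true count
```

This averaging attack is exactly why systems like Diffix make their noise "sticky" and data-dependent rather than drawing it fresh per query, which, as the researchers found, opens a different door.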

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
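A toy model, far simpler than Diffix itself, shows how data-dependent noise can leak membership. The database, query scheme, and seeding mechanism below are invented for illustration; only the core property, noise that is a deterministic function of the matching records, mirrors what the paper exploits:

```python
# Toy model (not Diffix itself) of "sticky", data-dependent noise: the
# noise is seeded by the exact set of matching records, so an attacker
# comparing two query variants can tell whether a target record matched.
import random

DB = {"alice": 34, "bob": 29, "carol": 41}   # hypothetical data

def noisy_count(matching_ids):
    seed = hash(frozenset(matching_ids))     # noise depends on the data
    return len(matching_ids) + random.Random(seed).gauss(0, 2)

def query(cond, exclude=None):
    ids = {u for u, age in DB.items() if cond(age) and u != exclude}
    return noisy_count(ids)

def target_matches(cond, target):
    # If excluding the target changes nothing, the matching set is the
    # same, the sticky noise repeats exactly, and the answers coincide.
    return query(cond) != query(cond, exclude=target)

print(target_matches(lambda a: a >= 30, "alice"))  # True: alice is 34
print(target_matches(lambda a: a >= 30, "bob"))    # False: bob is 29
```

Real Diffix layers several kinds of noise and defenses; the point here is only that when noise is derived from the data it protects, carefully paired queries reveal whether a target record affected the answer.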

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells gpgmail. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

Aircloak writes on its website that, during Diffix’s development, it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”


Kaspersky Products Leak Everything You Do Online, Straight Through Incognito Mode


Kaspersky Labs does not enjoy the best reputation. The company has been linked to Russian intelligence, the Department of Homeland Security has banned its use in government computers, and Best Buy will not sell its products. In 2017, news broke that the Israelis had observed Russian intelligence operatives using Kaspersky software to spy on the United States. Now, an investigation of the company’s antivirus software has uncovered a major data leak that goes back to 2015.

According to German publication C’t, Kaspersky antivirus injects a Universally Unique Identifier (UUID) into the source code of every single website that you visit. This UUID value is unique to the computer and the installation of the software. The value injected into each and every website never changes, even if you use a different browser or access the internet using a browser’s Incognito Mode.

C’t found the injection because one of their antivirus software evaluators came across the same line of source code on multiple websites. Installing the application on different systems resulted in different UUID values, but an assigned UUID didn’t change over time, indicating that it was static. And because these values are injected into the source code of every single website that you visit, the sites you visit can track you. As C’t writes:

Other scripts running in the context of the website domain can access the entire HTML source any time, which means they can read the Kaspersky ID.

In other words, any website can read the user’s Kaspersky ID and use it for tracking. If the same Universally Unique Identifier comes back, or appears on another website of the same operator, they can see that the same computer is being used.
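To see how trivially a site could harvest such an identifier, here is a sketch in Python. The script tag, domain, and UUID format below are made up for illustration; they are not Kaspersky's actual injected markup:

```python
# Sketch: how any party with access to a page's HTML could pull out an
# injected identifier. The script tag and UUID below are hypothetical
# stand-ins for whatever Kaspersky actually injected.
import re

page_source = """
<html><head>
<script type="text/javascript"
        src="https://gc.example/script.js?uuid=3CE5-AB42-9F11-77D0"></script>
</head><body>...</body></html>
"""

match = re.search(r"uuid=([0-9A-F-]+)", page_source)
if match:
    tracking_id = match.group(1)
    print(tracking_id)   # the same value on every site, in every browser,
                         # incognito or not
```

A one-line regex is all it takes; any ad network or analytics script running on the page could do the same.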

After building a proof-of-concept and testing that users with Kaspersky antivirus installed could indeed be tracked straight through incognito mode, C’t contacted Kaspersky. The flaw now has a formal name: CVE-2019-8286. Kaspersky has argued that it’s a fairly minimal problem that would require advanced techniques to exploit. Kaspersky has patched its software so that it now only injects information about which version of a Kaspersky product you use into each and every website you visit, not a unique identifier specific to your personal machine. C’t is not happy with this fix and believes it still constitutes a security risk.

C’t’s proof of concept. Image by C’t.

A bug that identifies a computer to a website that knows how to listen for that information is potentially quite valuable. Even if Kaspersky has no external database associating UUIDs with specific installations, broadcasting a UUID straight through incognito mode means that a webserver logs a visit from a specific computer. If that machine is associated with a specific individual, you’ve established a link.

Is it possible that Kaspersky simply made a terrible security decision when it implemented its antivirus software? Absolutely. The fact that a bug exists doesn’t automatically mean that someone nefarious was using it. But these types of coincidences are interesting, to say the least. Broadcasting a UUID as part of antivirus software operation is not the kind of attack avenue most of us would expect. It’s the type of fingerprinting method that an intelligence agency might be very interested in using to track who was accessing very specific websites, but not the kind of thing that a regular malware operation would have much interest in. Of course, one could also argue that this is why the bug snuck in to start with. Kaspersky’s flaw, in this reading, isn’t deliberate nefariousness; it’s an accident that reflects the company’s chief focus on stopping ordinary malware, not state actors.

I don’t know which perception is right. But I would at least suggest investigating an antivirus provider with fewer allegations of foreign intelligence cooperation if this sort of issue is a concern to you.


FTC Toadies for Equifax, Begs Citizens to Register for Largely Worthless Credit Monitoring


In theory, organizations like the FTC exist to safeguard United States citizens. In practice, all too often, these organizations are far more beholden to the companies they supposedly regulate than to the citizens whose rights they protect. Last week, the FTC announced a settlement with Equifax in which individuals whose data was stolen — that’s basically everyone in the United States — were eligible for $125 in compensation. Given the breadth and importance of the data Equifax allowed to be stolen, including Social Security numbers, addresses, phone numbers, dates of birth, and names, one might think that kind of minimal compensation would be the least the company could offer.

Now, however, the FTC has changed its tune. Far too many people have registered for the $125 settlement. Under the proposed settlement structure, only $31M has been set aside to provide these refunds. That translates to $125 for 248,000 people. The Equifax hack affected 147 million people. In other words, the FTC estimated that only about 0.17 percent of affected Americans would request their $125. Now our government is begging its own citizens to accept near-worthless free credit monitoring (which costs Equifax literally nothing to provide) rather than asking for a tiny cash settlement in exchange for one of the most egregious database thefts of all time.
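The arithmetic behind those figures is straightforward, using only the numbers the FTC itself has published:

```python
# Settlement arithmetic from the published figures: a $31M fund paying
# $125 per claimant, versus 147 million affected people.
fund = 31_000_000          # cash set aside for the $125 payments
payment = 125
affected = 147_000_000     # people affected by the breach

covered = fund // payment
print(covered)                             # 248,000 people can be paid in full
print(round(100 * covered / affected, 2))  # ~0.17 percent of victims
```

Put another way, if even one American in five hundred files a claim, the cash fund runs dry.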

Pick Free Credit Report Monitoring

The FTC’s new blog post is headlined “Equifax data breach: Pick free credit report monitoring.” Robert Schoshinski, the Assistant Director, Division of Privacy and Identity Protection, writes:

The free credit monitoring is worth a lot more – the market value would be hundreds of dollars a year. And this monitoring service is probably stronger and more helpful than any you may have already, because it monitors your credit report at all three nationwide credit reporting agencies, and it comes with up to $1 million in identity theft insurance and individualized identity restoration services.

The FTC blog post does not note that the only reason the pool of cash for refunds is so small is that the FTC’s deal with Equifax allocates only $31M to the relevant fund. While the agreement with Equifax included up to $425M to help victims of the breach, the overwhelming majority of that money is earmarked for other purposes, which are dealt with in a separate press release. The government also doesn’t note that, under the terms of the deal, it will be extremely difficult for anyone to prove an incidence of identity theft was tied to the Equifax database theft, because that database has never been detected for sale on any hacking website. This implies it was stolen by a state actor rather than a conventional hacker.

Hurrah. R0ckH4rd69Lvr doesn’t have your data; Russia or China probably does. That’s vastly better.

Most financial websites do not agree with the FTC’s claim that free credit monitoring is worth “a lot more.” To quote LeVar Burton, “You don’t have to take my word for it.” Here’s a sampling of quotes and links on the topic:

NerdWallet: “NerdWallet recommends avoiding such offerings from credit bureaus.”
US News & World Report: “It’s of some value if you are a victim of identity theft, but its value is rather narrow.”
CNBC: “Credit monitoring services may not be worth the cost”
CNN Money: “Most of what these products provide you can easily do yourself, and for free.”
LendingTree: “The paid credit monitoring services won’t necessarily monitor your reports any better than a free service.”

Maryland Attorney General Brian Frosh captured the spirit of the issue far better in his comments about the settlement last week. Speaking about the ~147M victims of the Equifax hack, he noted: “Most of them—most of us—did not sign up… We did not choose Equifax,” Frosh said. “It chose us. It collected our personal information, it compiled it, analyzed that information, and sold the product and some of the raw data to other people. Their carelessness with our personal data will cause harm perhaps for millions of Americans.”

Slate’s argument, made last week, was that customers had a moral obligation to claim this funding, to send a message to Equifax and other companies about the critical importance of data security and to hold them accountable for failing to maintain it. Nobody chooses to do business with Equifax, TransUnion, or Experian. These institutions compile financial records and credit reports on Americans without consent. There is no way to voluntarily withdraw from the system, and credit checks are so important for so many life events that there would be little practical way for any but the richest Americans to opt out.

Facebook got hit with a $5B fine for Cambridge Analytica, but Equifax is skating by with a $671M fine. According to the FTC, this was a deliberate decision to protect Equifax. “We want to make sure we don’t bankrupt the company or have them go out of business,” Maneesha Mithal, a data and privacy subject matter expert with the FTC, told Ars Technica. “We want to make sure they have the funds and resources to protect consumers going forward.”

Yes. Because nothing speaks to the importance of protecting consumers like a slap on the wrist when a company loses the data of 147 million Americans. Nothing promotes trust like the FTC publishing a shameful, toadying blog post declaring the value of worthless monitoring services that the company being fined can provide at no cost to itself.

Details on how to object to the settlement, should you wish to do so, are in the FAQ linked at the EquifaxBreachSettlement page. You cannot ask the Court to change the settlement, but you can advocate for it to be approved or denied. A $125 payment reserved for a fraction of affected Americans was bad enough, but the government’s behavior in this case and the terms of the settlement itself are both insulting.
