Google launches an open-source version of its differential privacy library – gpgmail


Google today released an open-source version of the differential privacy library it uses to power some of its own core products. Developers will be able to take this library and build their own tools that can work with aggregate data without revealing personally identifiable information either inside or outside their companies.

“Whether you’re a city planner, a small business owner, or a software developer, gaining useful insights from data can help make services work better and answer important questions,” writes Miguel Guevara, a product manager in the company’s Privacy and Data Protection Office. “But, without strong privacy protections, you risk losing the trust of your citizens, customers, and users. Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual’s data to be distinguished or re-identified.”

As Google notes, the current version of the Apache-licensed C++ library focuses on features that are typically hard to build from scratch and includes many of the standard statistical functions that developers would need (think count, sum, mean, variance, etc.). The company also stresses that the library includes an additional library for “rigorous testing” (because getting differential privacy right is hard), as well as a PostgreSQL extension and a number of recipes to help developers get started.
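
The library itself is C++ and its exact API isn't reproduced here, but the core mechanism behind a differentially private count is compact enough to sketch. Below is a minimal illustration in Python of the standard Laplace mechanism, assuming unit sensitivity for a count query — a sketch of the technique, not Google's implementation:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse CDF from Uniform(-0.5, 0.5)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count.

    Adding or removing one person changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon masks
    any individual's presence in the result.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: roughly how many users are over 40, without exposing anyone.
ages = [23, 41, 35, 67, 52, 29, 44, 38]
print(dp_count(ages, lambda age: age > 40))
```

Smaller epsilon means more noise and stronger privacy; the same calibration idea extends to the sums, means and variances the library exposes.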

These days, people often roll their eyes when they see ‘Google’ and ‘privacy’ in the same sentence. That’s understandable (though I think there is considerable tension inside the company about this, too). In this case, however, this is unquestionably a useful tool that will allow developers to build services that analyze personal data without compromising the privacy of the people whose data they are working with. Typically, building those protections takes considerable expertise, to the point where developers may either not build such tools at all or simply not bother to include these privacy features. With a library like this, they have no excuse not to implement differential privacy.



Mental health websites in Europe found sharing user data for ads – gpgmail


Research by a privacy rights advocacy group has found popular mental health websites in the EU are sharing users’ sensitive personal data with advertisers.

Europeans going online to seek support with mental health issues are having sensitive health data tracked and passed to third parties, according to Privacy International’s findings — including depression websites passing answers and results of mental health check tests direct to third parties for ad targeting purposes.

The charity used the open source Webxray tool to analyze the data gathering habits of 136 popular mental health web pages in France, Germany and the UK, as well as looking at a small sub-set of online depression tests (the top three Google search results for the phrase per country).
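
Webxray drives a real browser and records actual network traffic; a much cruder static approximation of the same measurement — fetch a page and flag script tags pointing at third-party hosts — can convey the idea. The tracker list and URL below are illustrative placeholders, not the tool's actual database:

```python
import re
import urllib.request
from urllib.parse import urlparse

# Illustrative shortlist; real tools match against large curated databases.
TRACKER_DOMAINS = {"google-analytics.com", "doubleclick.net",
                   "facebook.net", "amazon-adsystem.com"}

def third_party_script_hosts(page_url):
    """Return external script hosts on a page, flagging known trackers.

    A static scan misses trackers injected at runtime, which is why
    tools like Webxray instrument a real browser instead.
    """
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "ignore")
    first_party = urlparse(page_url).netloc
    hosts = {}
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.I):
        host = urlparse(src).netloc
        if host and host != first_party:
            hosts[host] = any(host.endswith(t) for t in TRACKER_DOMAINS)
    return hosts

print(third_party_script_hosts("https://example.com/"))
```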

It has compiled its findings into a report called “Your mental health for sale”.

“Our findings show that many mental health websites don’t take the privacy of their visitors as seriously as they should,” Privacy International writes. “This research also shows that some mental health websites treat the personal data of their visitors as a commodity, while failing to meet their obligations under European data protection and privacy laws.”

Under Europe’s General Data Protection Regulation (GDPR), there are strict rules governing the processing of health data — which is classified as special category personal data.

If consent is being used as the legal basis to gather this type of data, the standard that must be obtained from the user is “explicit” consent.

In practice that might mean a pop-up before you take a depression test which asks whether you’d like to share your mental health data with a laundry list of advertisers so they can use it to sell you stuff when you’re feeling low — while also offering a clear, penalty-free ‘hell no’ choice not to consent (but still get to take the test).

Safe to say, such unvarnished consent screens are as rare as hen’s teeth on the modern Internet.

But, in Europe, beefed up privacy laws are now being used to challenge the systemic abuses of the ‘data industrial complex’ and help individuals enforce their rights against a behavior-tracking adtech industry that regulators have warned is out of control.

Among Privacy International’s key findings are that —

  • 76.04% of the mental health web pages contained third-party trackers for marketing purposes
  • Google trackers are almost impossible to avoid, with 87.8% of the web pages in France having a Google tracker, 84.09% in Germany and 92.16% in the UK
  • Facebook is the second most common third-party tracker after Google, with 48.78% of all French web pages analysed sharing data with Facebook; 22.73% for Germany; and 49.02% for the UK
  • Amazon Marketing Services were also used by many of the mental health web pages analysed (24.39% of analyzed web pages in France; 13.64% in Germany; and 11.76% in the UK)
  • Depression-related web pages used a large number of third-party tracking cookies which were placed before users were able to express (or deny) consent. On average, PI found the mental health web pages placed 44.49 cookies in France; 7.82 for Germany; and 12.24 for the UK

European law around consent as a legal basis for processing (general) personal data — including for dropping tracking cookies — requires it to be informed, specific and freely given. This means websites that wish to gather user data must clearly state what data they intend to collect for what purpose, and do so before doing it, providing visitors with a free choice to accept or decline the tracking.

Dropping tracking cookies without even asking clearly falls foul of that legal standard. And very far foul when you consider the personal data being handled by these mental health websites is highly sensitive special category health data.
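
That kind of pre-consent cookie dropping is also easy to observe directly: a brand-new HTTP client has, by definition, consented to nothing, so anything a server sets on the very first response was placed before the user could choose. A quick sketch (the URL is hypothetical, and cookies set later by JavaScript need a real browser to catch):

```python
import urllib.request

# First-ever request from a fresh client: no stored cookies, no clicks,
# so no consent could possibly have been given yet.
resp = urllib.request.urlopen("https://mental-health-site.example/")
for header, value in resp.headers.items():
    if header.lower() == "set-cookie":
        # Any non-essential cookie here was dropped before consent.
        print(value.split(";")[0])  # print just the name=value pair
```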

“It is exceedingly difficult for people to seek mental health information and, for example, take a depression test without countless third parties watching,” said Privacy International technologist Eliot Bendinelli in a statement. “All website providers have a responsibility to protect the privacy of their users and comply with existing laws, but this is particularly the case for websites that share unusually granular or sensitive data with third parties. Such is the case for mental health websites.”

Additionally, the group’s analysis found some of the trackers embedded on mental health websites are used to enable a programmatic advertising practice known as Real Time Bidding (RTB). 

This is important because RTB is subject to multiple complaints under GDPR.

These complaints argue that the systematic, high velocity trading of personal data is, by nature, inherently insecure — with no way for people’s information to be secured after it’s shared with hundreds or even thousands of entities involved in the programmatic chain, because there’s no way to control it once it’s been passed. And, therefore, that RTB fails to comply with the GDPR’s requirement that personal data be processed securely.
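
It helps to see what a bid request actually broadcasts. The structure below follows the public OpenRTB convention in simplified form; the values are invented for illustration and are not a capture from any site studied:

```python
# Simplified OpenRTB-style bid request (illustrative values only).
# One of these is broadcast to many exchange participants for every ad
# slot; once sent, nothing technical prevents any recipient retaining
# it -- the insecurity at the heart of the GDPR complaints.
bid_request = {
    "id": "auction-8f3a91",
    "site": {
        "page": "https://depression-test.example/results",  # leaks context
        "cat": ["IAB7"],  # IAB content category: Health & Fitness
    },
    "device": {
        "ua": "Mozilla/5.0 (iPhone; ...)",  # user agent
        "ip": "203.0.113.7",
        "geo": {"lat": 48.85, "lon": 2.35},
    },
    "user": {
        "id": "cookie-synced-id-123",  # cross-site identifier
    },
}
```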

Complaints are being considered by regulators across multiple Member States. But this summer the UK’s data watchdog, the ICO, essentially signalled it is in agreement with the crux of the argument — putting the adtech industry on watch in an update report in which it warns that behavioral advertising is out of control and instructs the industry it must reform.

However the regulator also said it would give players “an appropriate period of time to adjust their practices”, rather than wade in with a decision and banhammers to enforce the law now.

The ICO’s decision to opt for an implied threat of future enforcement to push for reform of non-compliant adtech practices, rather than taking immediate action to end privacy breaches, drew criticism from privacy campaigners.

And it does look problematic now, given Privacy International’s findings suggest sensitive mental health data is being sucked up into bid requests and put about at insecure scale — where it could pose a serious risk to individuals’ rights and freedoms.

Privacy International says it found “numerous” mental health websites including trackers from known data brokers and AdTech companies — some of which engage in programmatic advertising. It also found some depression test websites (namely: netdoktor.de, passeportsante.net and doctissimo.fr, out of those it looked at) are using programmatic advertising with RTB.

“The findings of this study are part of a broader, much more systemic problem: The ways in which companies exploit people’s data to target ads with ever more precision is fundamentally broken,” adds Bendinelli. “We’re hopeful that the UK regulator is currently probing the AdTech industry and the many ways it uses special category data in ways that are neither transparent nor fair and often lack a clear legal basis.”

We’ve reached out to the ICO with questions.

We also asked the Internet Advertising Bureau Europe what steps it is taking to encourage reform of RTB to bring the system into compliance with EU privacy law. At the time of writing the industry association had not responded.

The IAB recently released a new version of what it refers to as a “transparency and consent management framework” intended for websites to embed to collect consent from visitors to processing their data including for ad targeting purposes — legally, the IAB contends.

However critics argue this is just another dose of business as usual ‘compliance theatre’ from the adtech industry — with users offered only phoney choices as there’s no real control over how their personal data gets used or where it ends up.

Earlier this year Google’s lead privacy regulator in Europe, the Irish DPC, opened a formal investigation into the company’s processing of personal data in the context of its online Ad Exchange — also as a result of a RTB complaint filed in Ireland.

The DPC said it will look at each stage of an ad transaction to establish whether the ad exchange is processing personal data in compliance with GDPR — including looking at the lawful basis for processing; the principles of transparency and data minimisation; and its data retention practices.

The outcome of that investigation remains to be seen. (Fresh fuel was poured on the complaint just today, with the complainant submitting new evidence of their personal data being shared in a way they allege infringes the GDPR.)

Increased regulatory attention on adtech practices is certainly highlighting plenty of legally questionable and ethically dubious stuff — like embedded tracking infrastructure that’s taking liberal notes on people’s mental health condition for ad targeting purposes. And it’s clear that EU regulators have a lot more work to do to deliver on the promise of GDPR.





Apple still has work to do on privacy – gpgmail


There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including essentially now ‘privacy as a service’) — seems a safe (or, well, safer) assumption. Or at least, until iOS security fails spectacularly and leaks users’ privacy anyway. Then of course affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But even directly on privacy, Apple is running into problems, too.

 

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)



Web host Hostinger says data breach may affect 14 million customers – gpgmail


Hostinger said it has reset user passwords as a “precautionary measure” after it detected unauthorized access to a database containing information on millions of its customers.

The breach is said to have happened on Thursday. The company said in a blog post it received an alert that one of its servers was improperly accessed. Using an access token found on the server, which can give access to systems without needing a username or a password, the hacker gained further access to the company’s systems, including an API database containing customer usernames, email addresses, and scrambled passwords.
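
Token-based access is exactly what it sounds like: the token is the credential. A short sketch of why a token sitting on a compromised server is so dangerous — the endpoint and token here are hypothetical:

```python
import urllib.request

# Whoever holds a bearer token can call whatever APIs it scopes --
# no username, password or second factor required.
req = urllib.request.Request(
    "https://api.internal.example/v1/customers?page=1",
    headers={"Authorization": "Bearer TOKEN_FOUND_ON_SERVER"},
)
print(urllib.request.urlopen(req).read())
```

This is why access tokens warrant the same handling as passwords: tight scoping, short lifetimes and regular rotation.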

Hostinger said the API database stored about 14 million customer records. The company has more than 29 million customers on its books.

“We have restricted the vulnerable system, and such access is no longer available,” said Daugirdas Jankus, Hostinger’s chief marketing officer.

“We are in contact with the respective authorities,” said Jankus.

An email from Hostinger explaining the data breach. (Image: supplied)

News of the breach broke overnight. According to the company’s status page, affected customers have already received an email to reset their passwords.

The company said that financial data wasn’t taken in the breach, nor were customer website files or data affected.

But one customer who was affected by the breach accused the company of being potentially “misleading” about the scope of the breach.

A chat log seen by gpgmail shows a customer support representative telling the customer it was “correct” that customers’ financial data can be retrieved by the API but that the company does “not store any payment data.” Hostinger uses multiple payment processors, the representative told the customer, but did not name them.

“They say they do not store payment details locally, but they have an API that can pull this information from the payment processor and the attacker had access to it,” the customer told gpgmail.

We’ve reached out to Hostinger for more, but a spokesperson didn’t immediately comment when reached by gpgmail.




Splunk acquires cloud monitoring service SignalFx for $1.05B – gpgmail


Splunk, the publicly traded data processing and analytics company, today announced that it has acquired SignalFx for a total price of about $1.05 billion. Approximately 60 percent of this will be in cash and 40 percent in Splunk common stock.


SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”

Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.

Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal, and Yelp.

“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, President and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”



Europe’s top data protection regulator, Giovanni Buttarelli, has died – gpgmail


Europe’s data protection supervisor, Giovanni Buttarelli, has died.

His passing yesterday, aged 62, was announced by his office today — which writes:

It is with the deepest regret that we announce the loss of Giovanni Buttarelli, the European Data Protection Supervisor. Giovanni passed away surrounded by his family in Italy, last night, 20 August 2019.

We are all profoundly saddened by this tragic loss of such a kind and brilliant individual. Throughout his life Giovanni dedicated himself completely to his family, to the service of the judiciary and the European Union and its values. His passion and intelligence will ensure an enduring and unique legacy for the institution of the EDPS and for all people whose lives were touched by him.

Ciao Giovanni

Buttarelli was appointed to the key oversight role monitoring the implementation of EU privacy rules for a five year term, starting in December 2014.

Among his achievements in the post was overseeing the transition to a new comprehensive data protection framework, the General Data Protection Regulation (GDPR), which came into force last year — a shift of gear towards enforcement that has shone a global spotlight on the bloc’s approach to privacy at a time when the implications of not putting meaningful checks on data-mining giants are writ large across Western democracies.

The jury is still out on how effectively Europe’s regulators will enforce the GDPR against powerful platform giants but a large number of open investigations are now pending.

Buttarelli also personally pressed the case for regulators to collectively grasp the nettle — to tackle what he described as “real cases like that of Facebook’s terms of service”.

At the same time as working for a consistent and comprehensive application of the GDPR, he believed further interventions would be needed to steer the application of powerful technologies in a fair and ethical direction.

This included advocating for greater joint working between privacy and competition regulators — calling for them to “adopt a position on the intersection of consumer protection, competition rules and data protection” and use “structural remedies to make the digital market fairer for people”.

He has also sought to accelerate innovation and debate around data ethics, which was the theme of a major privacy conference he hosted last year.

In an interview with gpgmail last year he warned that laws alone won’t stop data being used to discriminate unfairly — while asserting that online discrimination “is not the kind of democracy we deserve”.

The sad news of Buttarelli’s passing has shocked the data protection community — which has responded with an outpouring of tributes on social media.

 

Prior to joining the European Commission, Buttarelli was secretary general of Italy’s data protection watchdog.

He also served for many years as a judge in his home country.





Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity – gpgmail


Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating and adding noise to data to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization of high-dimensional data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
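
That intuition — every noisy answer leaks a little — is easy to simulate. If a system drew fresh, independent noise for each repeat of the same query, an attacker could simply average the noise away; this toy sketch shows that naive failure mode (which is precisely why Diffix makes its noise data-dependent rather than fresh per query):

```python
import math
import random
import statistics

SECRET_COUNT = 37  # the value the system is trying to protect

def naive_noisy_answer():
    """Fresh zero-mean Laplace noise on every query -- a naive design."""
    u = random.random() - 0.5
    return SECRET_COUNT - 2.0 * math.copysign(1, u) * math.log(1 - 2 * abs(u))

# Repeat the same question enough times and the noise averages out.
estimate = statistics.mean(naive_noisy_answer() for _ in range(10_000))
print(round(estimate))  # converges on 37
```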

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
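
The paper's queries are crafted against Diffix's actual query interface, but the property being exploited can be shown with a toy model: if the scale of the noise is a function of the protected value, then the statistics of the noise betray that value, even though any single answer reveals almost nothing:

```python
import random
import statistics

def toy_noisy_response(secret_bit):
    """Toy system whose noise SCALE depends on the protected value.

    The noise is zero-mean either way, so no single answer says much --
    but the spread across many answers identifies the secret.
    """
    scale = 1.0 if secret_bit == 0 else 3.0
    return random.gauss(0, scale)

# The attacker never sees the bit, only noisy responses to crafted
# query variants.
samples = [toy_noisy_response(secret_bit=1) for _ in range(500)]
guess = 1 if statistics.stdev(samples) > 2.0 else 0
print(guess)  # 1 with high probability: the noise itself leaked the bit
```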

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low level of queries per user — up to 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells gpgmail. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerability will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”



Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds – gpgmail


New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies was being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs which the researchers’ tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only a third (39%) mention the specific purpose of the data collection or who can access the data (21%).

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That figure rises only fractionally, to between 1% and 4%, for visitors who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1% of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset used to arrive at the 0.1% figure is biased — given the nationality of visitors is not generally representative of public Internet users, as well as the data being generated from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, but still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading gpgmail back in January 2018 may also remember this sage plain english advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subject to a lot of ‘consent theatre’ (ie noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue for all these useless pop-ups. Which work against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless, something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome is obvious.”
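
A browser-level mechanism would move the choice out of per-site pop-ups entirely: the browser sends one signal and sites honor it before loading any trackers. A minimal sketch of the server side, assuming a Flask app and the DNT header Degeling references (which, as he notes, only expresses a single purpose):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # "DNT: 1" means the user opted out of tracking at the browser level.
    # Honoring it means never loading third-party trackers at all,
    # instead of re-asking on every site.
    if request.headers.get("DNT") == "1":
        return "<html>page rendered without any tracking scripts</html>"
    return "<html>page with a consent prompt before any tracking</html>"

if __name__ == "__main__":
    app.run()
```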

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.



StockX was hacked, exposing millions of customers’ data – gpgmail


It wasn’t “system updates” as it claimed. StockX was mopping up after a data breach, gpgmail can confirm.

The fashion and sneaker trading platform pushed out a password reset email to its users on Thursday citing “system updates,” but left users confused and scrambling for answers. StockX told users that the email was legitimate and not a phishing email as some had suspected, but did not say what caused the alleged system update or why there was no prior warning.

A spokesperson eventually told gpgmail that the company was “alerted to suspicious activity” on its site but declined to comment further.

But that wasn’t the whole truth.

An unnamed data breach seller contacted gpgmail claiming more than 6.8 million records were stolen from the site in May by a hacker. The seller declined to say how they obtained the data.

In a dark web listing, the seller put the data up for sale for $300. At the time of writing, one person had already bought the data.

The seller provided gpgmail a sample of 1,000 records. We contacted customers and provided them information only they would know from their stolen records, such as their real name and username combination and shoe size. Every person who responded confirmed their data as accurate.

The stolen data contained names, email addresses, scrambled passwords (believed to be hashed with the MD5 algorithm and salted), and other profile information — such as shoe size and trading currency. The data also included the user’s device type, such as Android or iPhone, and the software version. Several other internal flags were found in each record, such as whether or not the user was banned or if European users had accepted the company’s GDPR message.
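
A salt defeats precomputed lookup tables, but MD5 is so fast that per-user brute force stays cheap. A sketch of why that matters — the salt-prefix construction is an assumption for illustration, since StockX's actual scheme isn't public:

```python
import hashlib

def md5_salted(password, salt):
    # Salt-prefix construction assumed for illustration only.
    return hashlib.md5((salt + password).encode()).hexdigest()

# With one leaked salt+hash pair, an attacker just replays a wordlist.
# MD5 runs at billions of guesses per second on commodity GPUs, which
# is why slow, purpose-built hashes like bcrypt exist.
leaked_salt = "a1b2c3"
leaked_hash = md5_salted("hunter2", leaked_salt)  # stand-in for a real leak
for guess in ["password", "123456", "letmein", "hunter2"]:
    if md5_salted(guess, leaked_salt) == leaked_hash:
        print("cracked:", guess)
```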

Under those GDPR rules, a company can be fined up to four percent of its global annual revenue for violations.

When reached prior to publication, neither spokesperson Katy Cockrel nor StockX founder Josh Luber responded to a request for comment. A voicemail left on the spokesperson’s cell was not returned.

Jake Williams, founder of Rendition Infosec, said the company “robbed their users of the chance to evaluate their exposure” by not informing customers of the breach when it happened.

StockX was last month valued at over $1 billion after a $110 million fundraise.



Is FaceApp Safe to Use?



Over the past couple of weeks, FaceApp—the AI-driven photo augmentation tool for smartphones—became the source of a major data privacy controversy that appears to have been greatly overstated. Nevertheless, it points to a clear and common issue about the rights we may give up with potentially any app we allow on our devices.

What Happened With FaceApp?

On July 14th, developer Joshua Nozzi tweeted an accusation (since removed) stating that FaceApp seemed to be uploading all photos in a user’s library, not only the photos a given user selects for use with the app’s services. He also pointed to Russian involvement with the company, playing into common concerns over illicit Russian involvement in US data-related matters. Within a couple of days, pseudonymous security researcher Elliot Alderson responded to 9to5Mac’s coverage of Nozzi’s accusation with evidence to the contrary, and FaceApp gave 9to5Mac a statement along the same lines. Here is the abridged version:

We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.

FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.

Even though the core R&D team is located in Russia, the user data is not transferred to Russia.
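
Taken at face value, that statement describes a common upload-once, edit-by-reference design: the app uploads a selected photo a single time, and subsequent edits reference the cached copy on the server rather than re-sending the image. A minimal sketch of that pattern might look like the following; the host, endpoints, and field names here are invented for illustration and are not FaceApp’s actual API.

```python
import requests

# Placeholder host for illustration only -- not FaceApp's real service.
API = "https://api.example-face-editor.com"

def upload_photo(path: str) -> str:
    """Upload the photo once; the server returns an ID for the cached copy."""
    with open(path, "rb") as f:
        resp = requests.post(f"{API}/photos", files={"image": f})
    resp.raise_for_status()
    return resp.json()["photo_id"]  # server keeps the image (e.g. up to 48 hours)

def apply_filter(photo_id: str, filter_name: str) -> bytes:
    """Each edit references the cached upload, so the phone never re-sends the photo."""
    resp = requests.get(f"{API}/photos/{photo_id}/render",
                        params={"filter": filter_name})
    resp.raise_for_status()
    return resp.content
```

This explains the stated performance rationale: the expensive upload happens once, and every further edit is a cheap request against the server-side copy.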

Although 9to5Mac jumped the gun by publishing Nozzi’s accusation, which was later proven false, Chance Miller—the article’s author—raises an important point:

It’s always wise to take a step back when apps like FaceApp go viral. While they are often popular and can provide humorous content, there can often be unintended consequences and privacy concerns.

Nozzi’s false accusation seems more like an honest mistake than a malicious act, and Miller’s point illustrates why we’re likelier to panic when unrelated circumstances paint a picture of danger. While we should always take a moment to verify a claim before publishing it, to avoid inciting widespread panic unnecessarily, it’s not hard to see how someone could make this mistake when people are on high alert for this type of activity.

Is Any App Truly Safe to Use?

Although FaceApp hasn’t tricked anyone into handing over ownership of their photo library in order to build a massive database of US citizens for the Russian government—or whatever conspiracy theory you prefer—this incident highlights how easily we grant broad permissions without considering the consequences each time we download an app.

When an app requests access to data on your smartphone, it casts a wide net out of necessity. Photo apps don’t request the right to save photos, or to access only the photos you explicitly select, but your photo library as a whole. You can’t grant microphone and camera access, or much else, with granular permissions that control what the app can do. Furthermore, smartphones don’t provide a simple way for people to see what apps actually do: logs of any kind, or a means of monitoring network activity, are not made available to the average user.
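
That visibility is technically attainable, just not for the average user. One common approach is to route a phone’s traffic through an intercepting proxy running on a computer. The sketch below is a minimal mitmproxy addon (the script name and setup are hypothetical, not anything from the FaceApp episode) that logs each host an app contacts.

```python
# log_hosts.py -- a minimal mitmproxy addon; run with: mitmproxy -s log_hosts.py
# then point the phone's Wi-Fi proxy settings at the machine running it.
from mitmproxy import http

seen_hosts = set()

def request(flow: http.HTTPFlow) -> None:
    """mitmproxy calls this hook for every request passing through the proxy."""
    host = flow.request.pretty_host
    if host not in seen_hosts:
        seen_hosts.add(host)
        print(f"app contacted: {host}")
```

Inspecting HTTPS payloads additionally requires installing mitmproxy’s certificate on the device, and some apps pin their certificates to block exactly this kind of interception, effort far beyond what most users can be expected to make.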

For this reason, most users have no way to discover whether an app breaks their trust. Until we have better control over what apps can and cannot access on our devices, we have to consider the worst-case scenario with every download. Unless a person has the knowledge and willingness to regularly monitor app activity, as well as to read (and understand) each app’s terms of service in their entirety, that person cannot rule out the possibility of malicious use of their data. After all, Facebook was just fined $5 billion for allowing the highly non-consensual leak of user data (not that it mattered), and much of that occurred through a person’s association with a user who downloaded the problematic app.

While most commonly used apps don’t find themselves in controversial situations like this, data leaks occur with enough frequency that we need to remember what we risk with every contribution of our personal information. Every access granted, every photo uploaded, and every bit of information we provide an app—whether it identifies us directly or indirectly—gives a company new information about us, which it often claims ownership of through its terms of service. The company may or may not use the collected data for disagreeable purposes, but it affords itself the right through a process it knows almost everyone will ignore. Companies need broad language in their legal agreements to protect themselves. Unfortunately, this legal necessity also cultivates a framework for taking advantage of users when a company publishes an app for the purpose of data collection.

Granular permissions on smartphones would take a step toward solving this problem, but they won’t prevent apps from continuing to request broad permissions and requiring access as the price of admission. At this point, most of us know that we’re paying with our data when we’re not paying with our dollars; the real problem lies in knowing the exact cost. Most people probably wouldn’t mind if FaceApp used their selfies to improve the quality of the service, but might feel differently if that data were used for another reason. Even without supplying our entire photo libraries, and even if FaceApp deletes the images 48 hours later, that still gives the company more than enough time to extract value from the data users willingly provide. While FaceApp appears to have no malicious intent, it’s unclear what our provided data costs us, because we don’t know how it is used.

The same applies to nearly every single app we download. Without transparency, we’re paying a cost determined in secret, and repeated across many, many apps, that makes it very difficult to pinpoint the source of any problems that result. FaceApp appears to operate like every other app: requesting broad data permissions out of necessity and reducing liability through a terms-of-service agreement. With every app, we need to ask ourselves whether the service it provides is worth the gamble of an unknown cost.
