Mental health websites in Europe found sharing user data for ads


Research by a privacy rights advocacy group has found popular mental health websites in the EU are sharing users’ sensitive personal data with advertisers.

Europeans going online to seek support with mental health issues are having sensitive health data tracked and passed to third parties, according to Privacy International’s findings — including depression websites passing answers and results of mental health check tests direct to third parties for ad targeting purposes.

The charity used the open source Webxray tool to analyze the data gathering habits of 136 popular mental health web pages in France, Germany and the UK, as well as looking at a small sub-set of online depression tests (the top three Google search results for the phrase per country).
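
The Webxray tool automates this kind of audit by loading pages in an instrumented browser and logging every third-party request they trigger. As a much simpler illustration of the underlying idea (static HTML only, so it misses trackers loaded dynamically by JavaScript, and the URL shown is a placeholder rather than a site from the study), a short script can list the third-party hosts a page references:

import re
import urllib.request
from urllib.parse import urlparse

def third_party_hosts(page_url):
    """List third-party hosts referenced in a page's static HTML.
    (Tools like Webxray drive a real browser instead, so they also
    catch trackers that JavaScript loads at runtime.)"""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "ignore")
    first_party = urlparse(page_url).hostname
    hosts = set()
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(match.group(1)).hostname
        if host and host != first_party and not host.endswith("." + first_party):
            hosts.add(host)
    return sorted(hosts)

# Hypothetical usage:
# print(third_party_hosts("https://mental-health-site.example/depression-test"))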

It has compiled its findings into a report called Your mental health for sale.

“Our findings show that many mental health websites don’t take the privacy of their visitors as seriously as they should,” Privacy International writes. “This research also shows that some mental health websites treat the personal data of their visitors as a commodity, while failing to meet their obligations under European data protection and privacy laws.”

Under Europe’s General Data Protection Regulation (GDPR), there are strict rules governing the processing of health data — which is classified as special category personal data.

If consent is being used as the legal basis to gather this type of data, the standard that must be obtained from the user is “explicit” consent.

In practice that might mean a pop-up before you take a depression test asking whether you’d like to share your mental health data with a laundry list of advertisers so they can use it to sell you stuff when you’re feeling low — one that also offers a clear, penalty-free ‘hell no’ choice not to consent (while still letting you take the test).

Safe to say, such unvarnished consent screens are as rare as hen’s teeth on the modern Internet.

But, in Europe, beefed up privacy laws are now being used to challenge systemic abuses by the ‘data industrial complex’ and to help individuals enforce their rights against a behavior-tracking adtech industry that regulators have warned is out of control.

Among Privacy International’s key findings are that —

  • 76.04% of the mental health web pages contained third-party trackers for marketing purposes
  • Google trackers are almost impossible to avoid, with 87.8% of the web pages in France having a Google tracker, 84.09% in Germany and 92.16% in the UK
  • Facebook is the second most common third-party tracker after Google, with 48.78% of all French web pages analysed sharing data with Facebook; 22.73% for Germany; and 49.02% for the UK
  • Amazon Marketing Services were also used by many of the mental health web pages analysed (24.39% of analyzed web pages in France; 13.64% in Germany; and 11.76% in the UK)
  • Depression-related web pages used a large number of third-party tracking cookies which were placed before users were able to express (or deny) consent. On average, PI found the mental health web pages placed 44.49 cookies in France; 7.82 for Germany; and 12.24 for the UK

European law around consent as a legal basis for processing (general) personal data — including for dropping tracking cookies — requires it to be informed, specific and freely given. This means websites that wish to gather user data must clearly state what data they intend to collect for what purpose, and do so before doing it, providing visitors with a free choice to accept or decline the tracking.

Dropping tracking cookies without even asking clearly falls foul of that legal standard. And it falls very far foul indeed when you consider that the personal data being handled by these mental health websites is highly sensitive special category health data.

“It is exceedingly difficult for people to seek mental health information and, for example, take a depression test without countless third parties watching,” said Privacy International technologist Eliot Bendinelli in a statement. “All website providers have a responsibility to protect the privacy of their users and comply with existing laws, but this is particularly the case for websites that share unusually granular or sensitive data with third parties. Such is the case for mental health websites.”

Additionally, the group’s analysis found some of the trackers embedded on mental health websites are used to enable a programmatic advertising practice known as Real Time Bidding (RTB). 

This is important because RTB is subject to multiple complaints under GDPR.

These complaints argue that the systematic, high velocity trading of personal data is, by nature, inherently insecure — with no way for people’s information to be secured after it’s shared with hundreds or even thousands of entities involved in the programmatic chain, because there’s no way to control it once it’s been passed. And, therefore, that RTB fails to comply with the GDPR’s requirement that personal data be processed securely.
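
To make that concrete, here is a heavily simplified sketch of the kind of payload broadcast in an RTB auction, loosely modeled on the public OpenRTB spec. Every value, and the exact field set, is illustrative rather than taken from any real exchange:

# Illustrative OpenRTB-style bid request (simplified; all values made up).
bid_request = {
    "id": "auction-123",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        # The page URL alone can expose sensitive context: here, that
        # the visitor has just finished a depression self-assessment.
        "page": "https://health-site.example/depression-test/results",
    },
    "device": {
        "ip": "203.0.113.7",                # approximate location
        "ua": "Mozilla/5.0 (iPhone; ...)",  # fingerprinting input
    },
    "user": {"id": "exchange-cookie-id-456"},  # pseudonymous but linkable ID
}
# This payload goes out to every bidder in the auction, potentially
# hundreds of companies, before a single ad is served. Once sent, the
# publisher has no technical means of controlling onward use.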

Complaints are being considered by regulators across multiple Member States. But this summer the UK’s data watchdog, the ICO, essentially signalled it is in agreement with the crux of the argument — putting the adtech industry on watch in an update report in which it warns that behavioral advertising is out of control and instructs the industry it must reform.

However the regulator also said it would give players “an appropriate period of time to adjust their practices”, rather than wade in with a decision and banhammers to enforce the law now.

The ICO’s decision to opt for an implied threat of future enforcement to push for reform of non-compliant adtech practices, rather than taking immediate action to end privacy breaches, drew criticism from privacy campaigners.

And it does look problematic now, given Privacy International’s findings suggest sensitive mental health data is being sucked up into bid requests and put about at insecure scale — where it could pose a serious risk to individuals’ rights and freedoms.

Privacy International says it found “numerous” mental health websites including trackers from known data brokers and AdTech companies — some of which engage in programmatic advertising. It also found some depression test websites (namely: netdoktor.de, passeportsante.net and doctissimo.fr, out of those it looked at) are using programmatic advertising with RTB.

“The findings of this study are part of a broader, much more systemic problem: The ways in which companies exploit people’s data to target ads with ever more precision is fundamentally broken,” adds Bendinelli. “We’re hopeful that the UK regulator is currently probing the AdTech industry and the many ways it uses special category data in ways that are neither transparent nor fair and often lack a clear legal basis.”

We’ve reached out to the ICO with questions.

We also asked the Internet Advertising Bureau Europe what steps it is taking to encourage reform of RTB to bring the system into compliance with EU privacy law. At the time of writing the industry association had not responded.

The IAB recently released a new version of what it refers to as a “transparency and consent management framework”, intended for websites to embed in order to collect consent from visitors for the processing of their data, including for ad targeting purposes — legally, the IAB contends.

However critics argue this is just another dose of business as usual ‘compliance theatre’ from the adtech industry — with users offered only phoney choices as there’s no real control over how their personal data gets used or where it ends up.

Earlier this year Google’s lead privacy regulator in Europe, the Irish DPC, opened a formal investigation into the company’s processing of personal data in the context of its online Ad Exchange — also as a result of an RTB complaint filed in Ireland.

The DPC said it will look at each stage of an ad transaction to establish whether the ad exchange is processing personal data in compliance with GDPR — including looking at the lawful basis for processing; the principles of transparency and data minimisation; and its data retention practices.

The outcome of that investigation remains to be seen. (Fresh fuel was poured on that fire just today, with the complainant submitting new evidence of their personal data being shared in a way they allege infringes the GDPR.)

Increased regulatory attention on adtech practices is certainly highlighting plenty of legally questionable and ethically dubious stuff — like embedded tracking infrastructure that’s taking liberal notes on people’s mental health condition for ad targeting purposes. And it’s clear that EU regulators have a lot more work to do to deliver on the promise of GDPR.





Apple still has work to do on privacy


There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including essentially now ‘privacy as a service‘) — seems a safe (or, well, safer) assumption. Or at least, until iOS security fails spectacularly and leaks users’ privacy anyway. Then of course affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But even directly on privacy, Apple is running into problems, too.

 

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)



Europe’s top data protection regulator, Giovanni Buttarelli, has died


Europe’s data protection supervisor, Giovanni Buttarelli, has died.

His passing yesterday, aged 62, was announced by his office today — which writes:

It is with the deepest regret that we announce the loss of Giovanni Buttarelli, the European Data Protection Supervisor. Giovanni passed away surrounded by his family in Italy, last night, 20 August 2019.

We are all profoundly saddened by this tragic loss of such a kind and brilliant individual. Throughout his life Giovanni dedicated himself completely to his family, to the service of the judiciary and the European Union and its values. His passion and intelligence will ensure an enduring and unique legacy for the institution of the EDPS and for all people whose lives were touched by him.

Ciao Giovanni

Buttarelli was appointed to the key oversight role monitoring the implementation of EU privacy rules for a five-year term, starting in December 2014.

Among his achievements in the post was overseeing the transition to a new comprehensive data protection framework, the General Data Protection Regulation (GDPR), which came into force last year — a shift of gear towards enforcement that has shone a global spotlight on the bloc’s approach to privacy at a time when the implications of not putting meaningful checks on data-mining giants are writ large across Western democracies.

The jury is still out on how effectively Europe’s regulators will enforce the GDPR against powerful platform giants but a large number of open investigations are now pending.

Buttarelli also personally pressed the case for regulators to collectively grasp the nettle — to tackle what he described as “real cases like that of Facebook’s terms of service”.

At the same time as working for a consistent and comprehensive application of the GDPR, he believed further interventions would be needed to steer the application of powerful technologies in a fair and ethical direction.

This included advocating for greater joint working between privacy and competition regulators — calling for them to “adopt a position on the intersection of consumer protection, competition rules and data protection” and use “structural remedies to make the digital market fairer for people”.

He has also sought to accelerate innovation and debate around data ethics, which was the theme of a major privacy conference he hosted last year.

In an interview with gpgmail last year he warned that laws alone won’t stop data being used to discriminate unfairly — while asserting that online discrimination “is not the kind of democracy we deserve”.

The sad news of Buttarelli’s passing has shocked the data protection community — which has responded with an outpouring of tributes on social media.

 

Prior to his appointments at EU level, Buttarelli was secretary general of Italy’s data protection watchdog.

He also served for many years as a judge in his home country.





Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity


Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating and adding noise to data to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high-dimensional data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
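
A toy illustration of that premise: if a system adds fresh, independent noise to every answer, an attacker who can repeat a question can simply average the answers until the noise washes out. A minimal sketch of that generic setup (not Diffix, which is specifically designed to resist this kind of repeat-query averaging):

import random
import statistics

TRUE_ANSWER = 42  # the value the noise is meant to hide

def noisy_answer(scale=5.0):
    # Fresh, data-independent noise on every response.
    return TRUE_ANSWER + random.gauss(0, scale)

# Repeat the same question many times and average: independent noise
# cancels out and the protected value emerges.
estimate = statistics.mean(noisy_answer() for _ in range(10_000))
print(round(estimate))  # ~42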

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
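
The core leak can be seen in a deliberately crude toy model: a count query whose noise is seeded by the set of matching users, i.e. the data-dependent property just described. Diffix’s real noise layers are far more sophisticated, and the researchers’ actual attack is statistical over many crafted queries, but the toy shows why data-dependent noise is dangerous (all names and values below are made up):

import random

# Toy database: one sensitive binary attribute per user.
DB = {"alice": 1, "bob": 0, "carol": 1, "dave": 0}

def noisy_count(predicate):
    """Answer 'how many users match?' with noise seeded by the
    matching set itself, i.e. data-dependent noise."""
    matching = tuple(sorted(u for u, v in DB.items() if predicate(u, v)))
    rng = random.Random(hash(matching))  # same matching set -> same noise
    return len(matching) + rng.gauss(0, 2.0)

# Two crafted queries that differ only in whether the target can match.
# Identical answers mean excluding the target changed nothing, so the
# target's attribute is revealed without ever querying it directly.
q_all = noisy_count(lambda u, v: v == 1)
q_without = noisy_count(lambda u, v: v == 1 and u != "alice")
print("alice's attribute is", 1 if q_all != q_without else 0)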

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community [we need] to really move to something closer to adversarial privacy,” he tells gpgmail. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”



Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps


Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial, made to the Washington, DC Attorney General, of allegations that the company knew of other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.





Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds


New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play, in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs which the researchers’ tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only around a third (39%) mention the specific purpose of the data collection or who can access the data (21%).

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That figure rises only fractionally, to between 1% and 4%, for those who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1% of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset used to arrive at the 0.1% figure is biased — given the nationality of visitors is not generally representative of public Internet users, as well as the data being generated from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, but still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.
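
In implementation terms the rule is simple: no tracking identifier may be written before an explicit opt-in has been recorded. A minimal server-side sketch of that gate, assuming Flask (the routes and cookie names are illustrative, not a reference implementation):

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<html>...site content...</html>")
    # Consent must precede processing: the tracking cookie is only
    # ever set after an explicit, recorded opt-in.
    if request.cookies.get("consent_analytics") == "granted":
        resp.set_cookie("tracker_id", "abc123", max_age=3600)
    return resp

@app.route("/consent", methods=["POST"])
def consent():
    # Record the visitor's choice either way; declining must be as
    # easy, and as penalty-free, as accepting.
    choice = "granted" if request.form.get("analytics") == "on" else "denied"
    resp = make_response("", 204)
    resp.set_cookie("consent_analytics", choice, max_age=180 * 24 * 3600)
    return resp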

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading gpgmail back in January 2018 may also remember this sage plain english advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android as soon as GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor the costs of data protection compliance into their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subjected to a lot of ‘consent theatre’ (i.e. noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue around all these useless pop-ups, working against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless  something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.



How safe are school records? Not very, says student security researcher


If you can’t trust your bank, government or your medical provider to protect your data, what makes you think students are any safer?

Turns out, according to one student security researcher, they’re not.

Eighteen-year-old Bill Demirkapi, a recent high school graduate in Boston, Massachusetts, spent much of his latter school years with an eye on his own student data. Through self-taught pen testing and bug hunting, Demirkapi found several vulnerabilities in his school’s learning management system, Blackboard, and his school district’s student information system, known as Aspen and built by Follett, which centralizes student data, including performance, grades, and health records.

The former student reported the flaws and revealed his findings at the Def Con security conference on Friday.

“I’ve always been fascinated with the idea of hacking,” Demirkapi told gpgmail prior to his talk. “I started researching but I learned by doing,” he said.

One of the more damaging issues Demirkapi found in Follett’s student information system was an improper access control vulnerability, which if exploited could have allowed an attacker to read and write to the central Aspen database and obtain any student’s data.

Blackboard’s Community Engagement platform had several vulnerabilities, including an information disclosure bug. A debugging misconfiguration allowed him to discover two subdomains, which spat back the credentials for Apple app provisioning accounts for dozens of school districts, as well as the database credentials for most if not every instance of Blackboard’s Community Engagement platform, said Demirkapi.

“School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”
Bill Demirkapi, security researcher

Another set of vulnerabilities could have allowed an authorized user — like a student — to carry out SQL injection attacks. Demirkapi said six databases could be tricked by injected SQL commands into disclosing data including grades, school attendance data, punishment history, library balances, and other sensitive and private data.

Some of the SQL injection flaws were blind attacks, meaning dumping the entire database would have been more difficult but not impossible.
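
For readers unfamiliar with the bug class: SQL injection occurs when user input is concatenated into a query string rather than bound as a parameter, letting crafted input rewrite the query itself. A generic, self-contained illustration (this reflects the technique only, not Follett’s or Blackboard’s actual code):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, grade TEXT)")
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("alice", "A"), ("bob", "C")])

def lookup_vulnerable(student):
    # Input is pasted into the SQL, so it can rewrite the query.
    query = f"SELECT grade FROM grades WHERE student = '{student}'"
    return conn.execute(query).fetchall()

def lookup_safe(student):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT grade FROM grades WHERE student = ?", (student,)).fetchall()

payload = "' OR '1'='1"            # classic injection payload
print(lookup_vulnerable(payload))  # dumps every row: [('A',), ('C',)]
print(lookup_safe(payload))        # returns nothing: []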

In all, over 5,000 schools and over five million students and teachers were impacted by the SQL injection vulnerabilities alone, he said.

Demirkapi said he was mindful to not access any student records other than his own. But he warned that any low-skilled attacker could have done considerable damage by accessing and obtaining student records, not least thanks to the simplicity of the database’s password. He wouldn’t say what it was, only that it was “worse than ‘1234’.”

But finding the vulnerabilities was only one part of the challenge. Disclosing them to the companies turned out to be just as tricky.

Demirkapi admitted that his disclosure to Follett could have gone better. He found that one of the bugs gave him improper access to create his own “group resource,” such as a snippet of text, which was viewable by every user on the system.

“What does an immature 11th grader do when you hand him a very, very loud megaphone?” he said. “Yell into it.”

And that’s exactly what he did. He sent out a message to every user, displaying each user’s login cookies on their screen. “No worries, I didn’t steal them,” the alert read.

“The school wasn’t thrilled with it,” he said. “Fortunately, I got off with a two-day suspension.”

He conceded it wasn’t one of his smartest ideas. He wanted to show his proof-of-concept but was unable to contact Follett with details of the vulnerability. He later went through his school, which set up a meeting, and disclosed the bugs to the company.

Blackboard, however, ignored Demirkapi’s emails for several months, he said. He knows because, after the first month of being ignored, he included an email tracker, allowing him to see how often the email was opened; it turned out to be several times in the first few hours after sending. And yet the company still did not respond to the researcher’s bug report.
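Demirkapi didn’t say which tracker he used, but most work the same way: the sender embeds a unique, invisible one-pixel image in the message, and every time a mail client fetches that image, the sender’s server logs an open. A minimal sketch, with the host, port and tracker identifier all invented:

```python
# Generic sketch of an email open tracker, not the specific tool
# Demirkapi used; host, port and tracker ID are invented.
import datetime
import http.server

TRACKER_ID = "bug-report-0001"  # hypothetical per-email identifier

# A minimal 1x1 transparent GIF, served as the "invisible" image.
PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
    b"\x00\x00\x02\x02D\x01\x00;"
)

class PixelHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/open/{TRACKER_ID}.gif":
            # Each fetch of the unique URL is logged as an "open".
            print(f"opened {datetime.datetime.now()} "
                  f"from {self.client_address[0]}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# The email body would embed something like:
#   <img src="http://tracker.example:8000/open/bug-report-0001.gif">
if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), PixelHandler).serve_forever()
```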

Blackboard eventually fixed the vulnerabilities, but Demirkapi said he found that the companies “weren’t really prepared to handle vulnerability reports,” despite Blackboard ostensibly having a published vulnerability disclosure process.

“It surprised me how insecure student data is,” he said. “School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”

He said that if a teenager could discover serious security flaws, it was likely that more advanced attackers could do far more damage.

Heather Phillips, a spokesperson for Blackboard, said the company appreciated Demirkapi’s disclosure.

“We have addressed several issues that were brought to our attention by Mr. Demirkapi and have no indication that these vulnerabilities were exploited or that any clients’ personal information was accessed by Mr. Demirkapi or any other unauthorized party,” the statement said. “One of the lessons learned from this particular exchange is that we could improve how we communicate with security researchers who bring these issues to our attention.”

Follett spokesperson Tom Kline said the company “developed and deployed a patch to address the web vulnerability” in July 2018.

The student researcher said he was not deterred by the issues he faced with disclosure.

“I’m 100% set already on doing computer security as a career,” he said. “Just because some vendors aren’t the best examples of good responsible disclosure or have a good security program doesn’t mean they’re representative of the entire security field.”



The Great Hack tells us data corrupts – gpgmail


This week Professor David Carroll, whose dogged search for answers about how his personal data was misused plays a focal role in The Great Hack, Netflix’s documentary tackling the Facebook-Cambridge Analytica data scandal, quipped that perhaps a follow-up would prove more punitive for the company than the $5BN FTC fine announced the same day.

The documentary, which we previewed ahead of its general release Wednesday, does an impressive job of articulating for a mainstream audience the risks to individuals and society of unregulated surveillance capitalism, despite the complexities involved in the invisible data ‘supply chain’ that feeds the beast. It does so most obviously by trying to make these digital social emissions visible to the viewer, as mushrooming pop-ups overlaid on shots of smartphone users going about their everyday business, largely unaware of the pervasive tracking their devices enable.

Facebook is unlikely to be a fan of the treatment. In its own crisis PR around the Cambridge Analytica scandal it has sought to achieve the opposite effect: making it harder to join the data-dots embedded in its ad platform by seeking to deflect blame, bury key details and bore reporters and policymakers to death with reams of irrelevant detail, in the hope they might shift their attention elsewhere.

Data protection itself isn’t a topic that naturally lends itself to glamorous thriller treatment, of course. No amount of slick editing can transform the close and careful scrutiny of political committees into seat-of-the-pants viewing for anyone not already intimately familiar with the intricacies being picked over. And yet it’s exactly such thoughtful attention to detail that democracy demands. Without it we are all, to put it proverbially, screwed.

The Great Hack shows what happens when vital detail and context are cheaply ripped away at scale, via socially sticky content delivery platforms run by tech giants that never bothered to sweat the ethical detail of how their ad targeting tools could be repurposed by malign interests to sow social discord and/or manipulate voter opinion en masse.

Or indeed used by an official candidate for high office in a democratic society that lacks legal safeguards against data misuse.

But while the documentary packs in a lot over an almost two-hour span, retelling the story of Cambridge Analytica’s role in the 2016 Trump presidential election campaign; exploring links to the UK’s Brexit leave vote; and zooming out to show a little of the wider impact of social media disinformation campaigns on various elections around the world, the viewer is left with plenty of questions. Not least the ones Carroll repeats towards the end of the film: What information had Cambridge Analytica amassed on him? Where did they get it from? What did they use it for? He appears resigned to never knowing: the disgraced data firm chose to declare bankruptcy and fold back into its shell rather than hand over the stolen goods and its algorithmic secrets.

There’s no doubt about the other question Carroll poses early in the film: could he delete his information? The lack of control over what’s done with people’s information is the central point around which the documentary pivots. The key warning is that there’s no magical cleansing fire that can purge every digitally copied personal thing that’s been put out there.

And while Carroll is shown able to tap into European data rights, purely by merit of Cambridge Analytica having processed his data in the UK, to try to get answers, the lack of control holds true in the US. Here, the absence of a legal framework to protect privacy is shown as the catalyzing fuel for the ‘great hack’, and also as the enabler of the ongoing data free-for-all that underpins almost all ad-supported, Internet-delivered services. tl;dr: Your phone doesn’t need to listen to you if it’s tracking everything else you do with it.

The film’s other obsession is the breathtaking scale of the thing. One focal moment is when we hear another central character, Cambridge Analytica’s Brittany Kaiser, dispassionately recounting how data surpassed oil in value last year — as if that’s all the explanation needed for the terrible behavior on show.

“Data’s the most valuable asset on Earth,” she monotones. The staggering value of digital stuff is thus fingered as an irresistible, manipulative force also sucking in bright minds to work at data firms like Cambridge Analytica — even at the expense of their own claimed political allegiances, in the conflicted case of Kaiser.

The suggestion is that if knowledge is power and power corrupts, the construction can be refined further: data corrupts.

The filmmakers linger long on Kaiser, which can seem to humanize her, as they show what appear to be vulnerable or intimate moments. Yet they do this without ever entirely getting under her skin, or allowing her role in the scandal to be fully resolved.

She’s often allowed to tell her narrative from behind dark glasses and a hat — which has the opposite effect on how we’re invited to perceive her. Questions about her motivations are never far away. It’s a human mystery linked to Cambridge Analytica’s money-minting algorithmic blackbox.

Nor is there any attempt by the filmmakers to mine Kaiser for answers themselves. It’s a documentary that spotlights mysteries and leaves questions hanging there intact. From a journalistic perspective that’s an inevitable frustration, even as the story itself is much bigger than any one of its constituent parts.

It’s hard to imagine how Netflix could commission a straight-up sequel to The Great Hack, given its central framing, which combines Carroll’s data quest with key moments of the Cambridge Analytica scandal. Large chunks of the film consist of scrutiny of, and reactions to, the story as it unfolded in real time.

But in displaying the ruthlessly transactional underpinnings of social platforms where the world’s smartphone users go to kill time, unwittingly trading away their agency in the process, Netflix has really just begun to open up the defining story of our time.



