Brexit means clear your cookies for democracy – gpgmail


Brexit looks set to further sink the already battered reputation of tracking cookies after a Buzzfeed report yesterday revealed what appears to be a plan by the UK’s minority government to use official government websites to harvest personal data on UK citizens for targeting purposes.

According to leaked government documents obtained by the news site, the prime minister has instructed government departments to share website usage data that’s collected via gov.uk websites with ministers on a cabinet committee tasked with preparing for a ‘no deal’ Brexit.

It’s not clear how linking up citizens’ use of essential government portals could further ‘no deal’ prep.

Rather, the suspicion is that it’s a massive, consent-less voter data grab by party political forces preparing for an inevitable general election in which the current Tory PM plans to campaign on a pro-Brexit message.

The instruction to pool gov.uk usage data as a “top priority” is also being justified internally, in instructions to civil servants, as necessary to accelerate plans for a digital revolution in public services — an odd thing to be declaring urgent at a time of national, Brexit-induced crisis, when there are plenty of more pressing priorities (given the October 31 EU exit date looming).

A government spokesperson nonetheless told Buzzfeed the data is being collected to improve service delivery. They also claimed it’s “anonymized” data.

“Individual government departments currently collect anonymised user data when people use gov.uk. The Government Digital Service is working on a project to bring this anonymous data together to make sure people can access all the services they need as easily as possible,” the spokesperson said, further claiming: “No personal data is collected at any point during the process, and all activity is fully compliant with our legal and ethical obligations.”

However, privacy experts quickly pointed out the nonsense of trying to pretend that user data joined up under a shared identifier is in any way anonymous.

 

For those struggling to keep up with the blistering pace of UK political developments engendered by Brexit, this is a government led by a new (and unelected) prime minister, Boris ‘Brexit: Do or Die’ Johnson, and his special advisor, digital guru Dominic Cummings, of election law-breaking Vote Leave campaign fame.

Back in 2015 and 2016, Cummings, then the director of the official Vote Leave campaign, masterminded a plan to win the EU referendum by using social media data to profile voters — blitzing them with millions of targeted ads in the final days of the Brexit campaign.

Vote Leave was later found to have channelled money to Cambridge Analytica-linked Canadian data firm Aggregate IQ to target pro-Brexit ads via Facebook’s platform — many of which, when Facebook finally handed over the ad data, were revealed to have used blatantly xenophobic content to push racist anti-EU messaging.

Setting aside the use of xenophobic dark ads to whip up racist sentiment to sell Brexit to voters, and ongoing questions about exactly how Vote Leave acquired data on UK voters for targeting them with political ads (including ethical questions about the use of a football quiz touting a £50M prize run on social media as a mass voter data-harvesting exercise), last year the UK’s Electoral Commission found Vote Leave had breached campaign spending limits through undeclared joint working with another pro-Brexit campaign — via which almost half a million pounds was illegally channeled into Facebook ads.

The Vote Leave campaign was fined £61k by the Electoral Commission, and referred to the police. (A police investigation may still be ongoing.)

Cummings, the ‘huge brain’ behind Vote Leave’s digital strategy, did not suffer a dent in his career as a consequence of all this — on the contrary, he was appointed by Johnson as senior advisor this summer, after Johnson won the Conservative leadership contest and so became the third UK PM since the 2016 vote for Brexit.

With Cummings at his side, it’s been full steam ahead for Johnson on social media ads and data grabs, as we reported last month — paving the way for a hoped-for general election campaign, fuelled by ‘no holds barred’ data science. Democratic ethics? Not in this digitally disruptive administration!

The Johnson-Cummings pact ignores entirely the loud misgivings sounded by the UK’s information commissioner — which a year ago warned that political microtargeting risks undermining trust in democracy. The ICO called then for an ethical pause. Instead Johnson stuck up a proverbial finger by installing Cummings in No.10.

The UK’s Digital, Culture, Media and Sport parliamentary committee, which tried and failed to get Cummings to testify before it last year as part of a wide-ranging enquiry into online disinformation (a snub for which Cummings was later found in contempt of parliament), also urged the government to update election law as a priority last summer — saying it was essential to act to defend democracy against data-fuelled misinformation and disinformation. A call that was met with cold water.

This means the same old laws that failed to prevent ethically dubious voter data-harvesting during the EU referendum campaign, and failed to prevent social media ad platforms and online payment platforms (hi, Paypal!) from being the conduit for illegal foreign donations into UK campaigns, are now apparently incapable of responding to another voter data heist trick, this time cooked up at the heart of government on the umbrella pretext of ‘preparing for Brexit’.

The repurposing of government departments under Johnson-Cummings for pro-Brexit propaganda messaging also looks decidedly whiffy…

Asked about the legality of the data pooling gov.uk plan as reported by Buzzfeed, an ICO spokesperson told us: “People should be able to make informed choices about the way their data is used. That’s why organisations have to ensure that they process personal information fairly, legally and transparently. When that doesn’t happen, the ICO can take action.”

Can — but hasn’t yet.

It’s also not clear what action the ICO could end up taking to purge UK voter data that’s already been (or is in the process of being) sucked out of the Internet to be repurposed for party political purposes — including, judging by the Vote Leave playbook, for microtargeted ads that promote a no holds barred ‘no deal’ Brexit agenda.

One thing is clear: Any action would need to be swiftly enacted and robustly enforced if it were to have a meaningful chance of defending democracy from ethics-free data-targeting.

Sadly, the ICO has yet to show an appetite for swift and robust action where political parties are concerned.

Likely that’s because a report it put out last fall essentially called out all UK political parties for misusing people’s data. The regulator followed up by saying it would audit the parties, starting early this year — but it has yet to publish its findings.

Concerned opposition MPs are left tweeting into the regulatory abyss — decrying the ‘coup’ and forlornly pressing for action… Though if the political boot were on the other foot it might well be a different story.

Among the cookies used on gov.uk sites are Google Analytics cookies which store information on how visitors got to the site; the pages visited and length of time spent on them; and items clicked on — which could certainly enable rich profiles to be attached to single visitor IDs.

Visitors to gov.uk properties can switch off Google Analytics measurement cookies, as well as denying gov.uk communications and marketing cookies, and cookies that store preferences — with only “strictly necessary” cookies (which remember form progress and serve notifications) lacking a user toggle.

What should concerned UK citizens do to defend democracy against the data science folks we’re told are being thrown at the Johnson-Cummings GDS data pooling project? Practice good privacy hygiene.

Clear your cookies. Indeed, switch off gov.uk cookies. Deny access wherever and whenever possible.

It’s probably also a good idea to use a fresh browser session each time you need to visit a government website, and to close the session (with cookies set to clear) as soon as you’re done.
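
For readers who would rather script that hygiene than rely on memory, here is a minimal sketch of the ‘fresh session per visit’ idea using Selenium to drive Firefox in private browsing mode, so cookies live only as long as the visit. It assumes the Selenium package and geckodriver are installed locally; the gov.uk URL is just an example and the approach isn’t specific to any site.

```python
# A minimal sketch of the 'fresh session per visit' idea, assuming Selenium
# and geckodriver are installed. Nothing here is specific to gov.uk beyond
# the example URL.
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

def visit_in_throwaway_session(url: str) -> str:
    opts = Options()
    # Start Firefox in private browsing, so cookies and site data live only
    # for the lifetime of this session.
    opts.set_preference("browser.privatebrowsing.autostart", True)
    driver = webdriver.Firefox(options=opts)
    try:
        driver.get(url)
        return driver.title
    finally:
        # Quitting the driver discards the temporary profile -- and with it
        # any cookies (analytics or otherwise) set during the visit.
        driver.quit()

if __name__ == "__main__":
    print(visit_in_throwaway_session("https://www.gov.uk/"))
```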

When the laws have so spectacularly failed to keep up with the data processors, limiting how your information is gathered online is the only way to be sure. Though as we’ve written before it’s not easy.

Privacy is personal and unfortunately, with the laws lagging, the personal is now trivially cheap and easy to weaponize for political dark arts that treat democracy as a game of PR, debasing the entire system in the process.





Despite Brexit, UK startups can compete with Silicon Valley to win tech talent – gpgmail


Brexit has taken over discourse in the UK and beyond. In the UK alone, it is mentioned over 500 million times a day, in 92 million conversations — and for good reason. While the UK has yet to leave the EU, the impact of Brexit has already rippled through industries all over the world. The UK’s technology sector is no exception. While innovation endures in the midst of Brexit, data reveals that innovative companies are losing the ability to attract people from all over the world and are suffering from a substantial talent leak. 

It is no secret that the UK was already experiencing a talent shortage, even without the added pressure created by today’s political landscape. Technology is developing rapidly and demand for tech workers continues to outpace supply, creating a fiercely competitive hiring landscape.

The shortage of available tech talent has already created a deficit that could cost the UK £141 billion in GDP growth by 2028, stifling innovation. Now, with Brexit threatening the UK’s cosmopolitan tech landscape — and the economy at large — we may soon see international tech talent moving elsewhere; in fact, 60% of London businesses think they’ll lose access to tech talent once the UK leaves the EU.

So, how can UK-based companies proactively attract and retain top tech talent to prevent a Brexit brain drain? UK businesses must ensure that their hiring funnels are a top priority and focus on understanding what matters most to tech talent beyond salary, so that they don’t lose out to US tech hubs. 

Brexit aside, why is San Francisco more appealing than the UK?



Facebook’s lead EU regulator is asking questions about its latest security fail – gpgmail


Facebook’s lead data protection regulator in Europe has confirmed it’s put questions to the company about a major security breach that we reported on yesterday.

“The DPC became aware of this issue through the recent media coverage and we immediately made contact with Facebook and we have asked them a series of questions. We are awaiting Facebook’s responses to those questions,” a spokeswoman for the Irish Data Protection Commission told us.

We’ve reached out to Facebook for a response.

As we reported earlier, a security researcher discovered an unsecured database of hundreds of millions of phone numbers linked to Facebook accounts.

The exposed server contained more than 419 million records over several databases on Facebook users from multiple countries, including 18 million records of users in the U.K.

We were able to verify a number of records in the database — including UK Facebook users’ data.

The presence of Europeans’ data in the scraped stash makes the breach a clear matter of interest to the region’s data watchdogs.

Europe’s General Data Protection Regulation (GDPR) imposes stiff penalties for compliance failures such as security breaches — with fines that can scale as high as 4% of a company’s annual turnover.

Ireland’s DPC is Facebook’s lead data protection regulator in Europe under GDPR’s one-stop shop mechanism — meaning it leads on cross-border actions, though other concerned DPAs can contribute to cases and may also chip in views on any formal outcomes that result.

The UK’s data protection watchdog, the ICO, told us it is aware of the Facebook security incident.

“We are in contact with the Irish Data Protection Commission (DPC), as they are the lead supervisory authority for Facebook Ireland Limited. The ICO will continue to liaise with the IDPC to establish the details of the incident and to determine if UK residents have been affected,” an ICO spokeswoman also told us.

It’s not yet clear whether the Irish DPC will open a formal investigation of the incident.

It does already have a large number of open investigations on its desk into Facebook and Facebook-owned businesses, opened since GDPR’s one-stop shop mechanism came into force — including one into a major token security breach last year, and many, many more.

In the latest breach instance, it’s not clear exactly when Facebook users’ phone numbers were scraped from the platform.

In a response yesterday Facebook said the data-set is “old”, adding that it “appears to have information obtained before we made changes last year to remove people’s ability to find others using their phone numbers”.

If that’s correct, the data breach is likely to pre-date April 2018 — which was when Facebook announced it was making changes to its account search and recovery feature, after finding it had been abused by what it dubbed “malicious actors”.

“Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way,” Facebook said at the time.

It would also therefore pre-date GDPR coming into force, in May 2018, so would likely fall under earlier EU data protection laws — which carry less stringent penalties.



Mental health websites in Europe found sharing user data for ads – gpgmail


Research by a privacy rights advocacy group has found popular mental health websites in the EU are sharing users’ sensitive personal data with advertisers.

Europeans going online to seek support with mental health issues are having sensitive health data tracked and passed to third parties, according to Privacy International’s findings — including depression websites passing answers and results of mental health check tests direct to third parties for ad targeting purposes.

The charity used the open source Webxray tool to analyze the data gathering habits of 136 popular mental health web pages in France, Germany and the UK, as well as looking at a small sub-set of online depression tests (the top three Google search results for the phrase per country).
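
Webxray itself is the open source tool for this kind of audit, but the underlying idea is simple enough to sketch: fetch a page and list the third-party hosts it pulls scripts, images, iframes and stylesheets from. The snippet below is an illustrative approximation only — it assumes the `requests` and `beautifulsoup4` packages are available, and it will miss trackers injected dynamically by JavaScript, which a headless-browser tool like Webxray catches.

```python
# A rough sketch of the kind of audit Webxray automates: list the third-party
# hosts a page pulls scripts, images, iframes and stylesheets from. This is
# not the Webxray tool itself -- just the underlying idea.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def third_party_hosts(page_url: str) -> set[str]:
    first_party = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = set()
    for tag, attr in (("script", "src"), ("img", "src"), ("iframe", "src"), ("link", "href")):
        for node in soup.find_all(tag):
            target = node.get(attr)
            if not target:
                continue
            host = urlparse(urljoin(page_url, target)).hostname
            if host and host != first_party:
                hosts.add(host)
    return hosts

if __name__ == "__main__":
    # Example run; replace with the page you want to audit.
    for host in sorted(third_party_hosts("https://example.com/")):
        print(host)
```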

It has compiled its findings into a report called ‘Your mental health for sale’.

“Our findings show that many mental health websites don’t take the privacy of their visitors as seriously as they should,” Privacy International writes. “This research also shows that some mental health websites treat the personal data of their visitors as a commodity, while failing to meet their obligations under European data protection and privacy laws.”

Under Europe’s General Data Protection Regulation (GDPR), there are strict rules governing the processing of health data — which is classified as special category personal data.

If consent is being used as the legal basis to gather this type of data the standard that must be obtained from the user is “explicit” consent.

In practice that might mean a pop-up before you take a depression test which asks whether you’d like to share your mental health data with a laundry list of advertisers so they can use it to sell you stuff when you’re feeling low — also offering a clear ‘hell no’ penalty-free choice not to consent (but still get to take the test).

Safe to say, such unvarnished consent screens are as rare as hen’s teeth on the modern Internet.

But, in Europe, beefed up privacy laws are now being used to challenge the ‘data industrial complex’ over its systemic abuses, and to help individuals enforce their rights against a behavior-tracking adtech industry that regulators have warned is out of control.

Among Privacy International’s key findings are that —

  • 76.04% of the mental health web pages contained third-party trackers for marketing purposes
  • Google trackers are almost impossible to avoid, with 87.8% of the web pages in France having a Google tracker, 84.09% in Germany and 92.16% in the UK
  • Facebook is the second most common third-party tracker after Google, with 48.78% of all French web pages analysed sharing data with Facebook; 22.73% for Germany; and 49.02% for the UK
  • Amazon Marketing Services were also used by many of the mental health web pages analysed (24.39% of analyzed web pages in France; 13.64% in Germany; and 11.76% in the UK)
  • Depression-related web pages used a large number of third-party tracking cookies which were placed before users were able to express (or deny) consent. On average, PI found the mental health web pages placed 44.49 cookies in France; 7.82 for Germany; and 12.24 for the UK

European law around consent as a legal basis for processing (general) personal data — including for dropping tracking cookies — requires it to be informed, specific and freely given. This means websites that wish to gather user data must clearly state what data they intend to collect for what purpose, and do so before doing it, providing visitors with a free choice to accept or decline the tracking.

Dropping tracking cookies without even asking clearly falls foul of that legal standard. And very far foul when you consider the personal data being handled by these mental health websites is highly sensitive special category health data.

“It is exceedingly difficult for people to seek mental health information and, for example, take a depression test without countless third parties watching,” said Privacy International technologist Eliot Bendinelli in a statement. “All website providers have a responsibility to protect the privacy of their users and comply with existing laws, but this is particularly the case for websites that share unusually granular or sensitive data with third parties. Such is the case for mental health websites.”

Additionally, the group’s analysis found some of the trackers embedded on mental health websites are used to enable a programmatic advertising practice known as Real Time Bidding (RTB). 

This is important because RTB is subject to multiple complaints under GDPR.

These complaints argue that the systematic, high velocity trading of personal data is, by nature, inherently insecure — with no way for people’s information to be secured after it’s shared with hundreds or even thousands of entities involved in the programmatic chain, because there’s no way to control it once it’s been passed. And, therefore, that RTB fails to comply with the GDPR’s requirement that personal data be processed securely.
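
To see why the complainants describe the broadcast as inherently insecure, it helps to picture what a bid request actually carries. The sketch below is a deliberately simplified, illustrative approximation of an OpenRTB-style request — the field names are loosely modelled on the spec, not copied from it — showing how the page a person is reading travels alongside device and user identifiers to every bidder in the auction.

```python
# An illustrative, simplified OpenRTB-style bid request. Field names are
# loosely based on the OpenRTB spec but trimmed for illustration; all
# values are invented.
import json

bid_request = {
    "id": "auction-5c1f",
    "site": {
        "page": "https://example-health-site.test/depression-test",  # what the user is reading
        "cat": ["IAB7"],  # a health-related content category
    },
    "device": {
        "ip": "203.0.113.42",
        "ua": "Mozilla/5.0 ...",
        "geo": {"lat": 51.5, "lon": -0.12},
    },
    "user": {
        "id": "d9a8f7e6",           # bidder-specific user identifier
        "buyeruid": "partner-123",  # identifier synced with a demand-side platform
    },
}

# In RTB, a request like this is sent to dozens or hundreds of bidders per
# page load; once sent, there is no technical means of controlling its reuse.
print(json.dumps(bid_request, indent=2))
```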

Complaints are being considered by regulators across multiple Member States. But this summer the UK’s data watchdog, the ICO, essentially signalled it is in agreement with the crux of the argument — putting the adtech industry on watch in an update report in which it warns that behavioral advertising is out of control and instructs the industry it must reform.

However the regulator also said it would give players “an appropriate period of time to adjust their practices”, rather than wade in with a decision and banhammers to enforce the law now.

The ICO’s decision to opt for an implied threat of future enforcement to push for reform of non-compliant adtech practices, rather than taking immediate action to end privacy breaches, drew criticism from privacy campaigners.

And it does look problematic now, given Privacy International’s findings suggest sensitive mental health data is being sucked up into bid requests and put about at insecure scale — where it could pose a serious risk to individuals’ rights and freedoms.

Privacy International says it found “numerous” mental health websites including trackers from known data brokers and AdTech companies — some of which engage in programmatic advertising. It also found some depression test websites (namely: netdoktor.de, passeportsante.net and doctissimo.fr, out of those it looked at) are using programmatic advertising with RTB.

“The findings of this study are part of a broader, much more systemic problem: The ways in which companies exploit people’s data to target ads with ever more precision is fundamentally broken,” adds Bendinelli. “We’re hopeful that the UK regulator is currently probing the AdTech industry and the many ways it uses special category data in ways that are neither transparent nor fair and often lack a clear legal basis.”

We’ve reached out to the ICO with questions.

We also asked the Internet Advertising Bureau Europe what steps it is taking to encourage reform of RTB to bring the system into compliance with EU privacy law. At the time of writing the industry association had not responded.

The IAB recently released a new version of what it refers to as a “transparency and consent management framework”, intended for websites to embed in order to collect visitors’ consent to the processing of their data, including for ad targeting purposes — legally, the IAB contends.

However critics argue this is just another dose of business as usual ‘compliance theatre’ from the adtech industry — with users offered only phoney choices as there’s no real control over how their personal data gets used or where it ends up.

Earlier this year Google’s lead privacy regulator in Europe, the Irish DPC, opened a formal investigation into the company’s processing of personal data in the context of its online Ad Exchange — also as a result of an RTB complaint filed in Ireland.

The DPC said it will look at each stage of an ad transaction to establish whether the ad exchange is processing personal data in compliance with GDPR — including looking at the lawful basis for processing; the principles of transparency and data minimisation; and its data retention practices.

The outcome of that investigation remains to be seen. (Fresh fuel was poured on the fire just today, with the complainant submitting new evidence of their personal data being shared in a way they allege infringes the GDPR.)

Increased regulatory attention on adtech practices is certainly highlighting plenty of legally questionable and ethically dubious stuff — like embedded tracking infrastructure that’s taking liberal notes on people’s mental health condition for ad targeting purposes. And it’s clear that EU regulators have a lot more work to do to deliver on the promise of GDPR.





Apple still has work to do on privacy – gpgmail


There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including essentially now ‘privacy as a service’) — seems a safe (or, well, safer) assumption. Or at least, until iOS security fails spectacularly and leaks users’ privacy anyway. Then of course affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But even directly on privacy, Apple is running into problems, too.

 

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)



How UK VCs are managing the risk of a ‘no deal’ Brexit – gpgmail


Grab your economic zombie mask: A Halloween “no deal” Brexit is careening into view. New prime minister Boris Johnson has pledged that the country will leave the European Union on October 31 with or without a deal — “do or die” as he put it. A year earlier as the foreign secretary, he used an even more colorful phrase to skewer diplomatic concern about the impact of a hard Brexit on business — reportedly condensing his position to a pithy expletive: “Fuck business.”

It was only a few years ago, during the summer of 2016 following the shock result of the UK’s in/out EU referendum, that the government’s aspiration was to leave in a “smooth and orderly” manner as the prelude to a “close and special” future trading partnership, as then PM Theresa May put it. A withdrawal deal was negotiated but repeatedly rejected by parliament. The PM herself was next to be despatched.

Now, here we are. The U.K. has arrived at a political impasse in which the nation is coasting toward a Brexit cliff edge. We’re at the brink here, with domestic politics turned upside down, because “no deal” is the only leverage left for “do or die” brexiteers that parliament can’t easily block.

Ironic because there’s no majority in parliament for “no deal.” But the end of the Article 50 extension period represents a legal default — a hard deadline that means the U.K. will soon fall out of the EU unless additional action is taken. Of course time itself can’t be made to grind to a halt. So “no deal” is the easy option for a government that’s made doing anything else to sort Brexit really really hard.

After three full years of Brexit uncertainty, the upshot for U.K. business is there’s no end in sight to even the known unknowns. And now a clutch of unknown unknowns seems set to pounce come Halloween when the country steps into the chaos of leaving with nada, as the current government says it must.

So how is the U.K. tech industry managing the risk of a chaotic exit from the European Union? The prevailing view among investors about founders is that Brexit means uncertain business as usual. “Resilience is the mother of entrepreneurship!” was the almost glib response of one VC asked how founders are coping.

“This is no worse than the existential dread that most founders feel every day about something or other,” said another, dubbing Brexit “just an enormous distraction.” And while he said the vast majority of founders in the firm’s portfolio would rather the whole thing was cancelled — “most realize it’s not going to be so they just want to get on.”



Europe’s top data protection regulator, Giovanni Buttarelli, has died – gpgmail


Europe’s data protection supervisor, Giovanni Buttarelli, has died.

His passing yesterday, aged 62, was announced by his office today — which writes:

It is with the deepest regret that we announce the loss of Giovanni Buttarelli, the European Data Protection Supervisor. Giovanni passed away surrounded by his family in Italy, last night, 20 August 2019.

We are all profoundly saddened by this tragic loss of such a kind and brilliant individual. Throughout his life Giovanni dedicated himself completely to his family, to the service of the judiciary and the European Union and its values. His passion and intelligence will ensure an enduring and unique legacy for the institution of the EDPS and for all people whose lives were touched by him.

Ciao Giovanni

Buttarelli was appointed to the key oversight role monitoring the implementation of EU privacy rules for a five year term, starting in December 2014.

Among his achievements in the post was overseeing the transition to a new comprehensive data protection framework, the General Data Protection Regulation (GDPR), which came into force last year — a shift of gear towards enforcement that has shone a global spotlight on the bloc’s approach to privacy at a time when the implications of not putting meaningful checks on data-mining giants are writ large across Western democracies.

The jury is still out on how effectively Europe’s regulators will enforce the GDPR against powerful platform giants but a large number of open investigations are now pending.

Buttarelli also personally pressed the case for regulators to collectively grasp the nettle — to tackle what he described as “real cases like that of Facebook’s terms of service”.

At the same time as working for a consistent and comprehensive application of the GDPR, he believed further interventions would be needed to steer the application of powerful technologies in a fair and ethical direction.

This included advocating for greater joint working between privacy and competition regulators — calling for them to “adopt a position on the intersection of consumer protection, competition rules and data protection” and use “structural remedies to make the digital market fairer for people”.

He has also sought to accelerate innovation and debate around data ethics, which was the theme of a major privacy conference he hosted last year.

In an interview with gpgmail last year he warned that laws alone won’t stop data being used to discriminate unfairly — while asserting that online discrimination “is not the kind of democracy we deserve”.

The sad news of Buttarelli’s passing has shocked the data protection community — which has responded with an outpouring of tributes on social media.

 

Prior to joining the European Commission, Buttarelli was secretary general of Italy’s data protection watchdog.

He also served for many years as a judge in his home country.





Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity – gpgmail


Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating and adding noise to data to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high dimension data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
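
That composition intuition is easy to demonstrate numerically. The toy sketch below (assuming numpy is installed) shows that if each answer carries fresh, independent noise, simply repeating the same query and averaging washes the noise out — which is exactly why systems like Diffix add defences against naive repetition, and why the researchers went after the structure of the noise instead. The true count and noise scale are invented for illustration.

```python
# A toy illustration of the intuition de Montjoye describes: if each answer
# carries fresh, independent noise, asking the same question many times and
# averaging washes the noise out.
import numpy as np

rng = np.random.default_rng(0)
TRUE_COUNT = 37  # the secret aggregate we are not supposed to pin down

def noisy_answer() -> float:
    # Independent Laplace noise added to every response.
    return TRUE_COUNT + rng.laplace(scale=2.0)

for n_queries in (1, 10, 100, 1000):
    answers = [noisy_answer() for _ in range(n_queries)]
    print(n_queries, round(float(np.mean(answers)), 2))
# With enough repeats the average converges on 37, which is why a per-user
# or per-query budget (as in differential privacy) is needed to cap the leak.
```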

The researchers chose to focus on Diffix as they were responding to a bug bounty attack challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.
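
For a feel of the broader family of attacks such query systems must defend against, consider the classic differencing attack on a naive aggregates-only interface: two counts that differ only by a condition isolating the target reveal the target’s attribute. The sketch below is that simplified textbook case, not the Diffix-specific noise-exploitation attack, and the records in it are invented for illustration.

```python
# A simplified differencing attack on a naive 'aggregates only' interface --
# not the Diffix noise-exploitation attack, but the family of tricks such
# systems are built to defend against.
records = [
    {"name": "alice", "smoker": False},
    {"name": "bob",   "smoker": True},
    {"name": "carol", "smoker": False},
]

def count(predicate) -> int:
    """A toy aggregate-only endpoint with no noise and no query auditing."""
    return sum(1 for r in records if predicate(r))

# The attacker knows bob is in the dataset and crafts a pair of queries
# that differ only by a condition excluding him.
with_bob    = count(lambda r: r["smoker"])
without_bob = count(lambda r: r["smoker"] and r["name"] != "bob")

print("bob is a smoker:", with_bob - without_bob == 1)  # True
```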

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low level of queries per user — up to 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community [we need] to really move to something closer to adversarial privacy,” he tells gpgmail. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?”

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.”

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”



Microsoft tweaks privacy policy to admit humans can listen to Skype Translator and Cortana audio – gpgmail


Microsoft is the latest tech giant to amend its privacy policy after media reports revealed it uses human contractors to review audio recordings of Skype and Cortana users.

A section in the policy on how the company uses personal data now reads (emphasis ours):

Our processing of personal data for these purposes includes both automated and manual (human) methods of processing. Our automated methods often are related to and supported by our manual methods. For example, our automated methods include artificial intelligence (AI), which we think of as a set of technologies that enable computers to perceive, learn, reason, and assist in decision-making to solve problems in ways that are similar to what people do. To build, train, and improve the accuracy of our automated methods of processing (including AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made. For example, we manually review short snippets of a small sampling of voice data we have taken steps to de-identify to improve our speech services, such as recognition and translation.

The tweaks to the privacy policy of Microsoft’s Skype VoIP software and its Cortana voice AI were spotted by Motherboard — which was also first to report that contractors working for Microsoft are listening to personal conversations of Skype users conducted through the app’s translation service, and to audio snippets captured by the Cortana voice assistant.

Asked about the privacy policy changes, Microsoft told Motherboard: “We realized, based on questions raised recently, that we could do a better job specifying that humans sometimes review this content.”

Multiple tech giants’ use of human workers to review users’ audio across a number of products involving AI has grabbed headlines in recent weeks after journalists exposed a practice that had not been clearly conveyed to users in terms and conditions — despite European privacy law requiring clarity about how people’s data is used.

Apple, Amazon, Facebook, Google and Microsoft have all been called out for failing to make it clear that a portion of audio recordings will be accessed by human contractors.

Such workers are typically employed to improve the performance of AI systems by verifying translations and speech in different accents. But, again, this human review component within AI systems has generally been buried rather than transparently disclosed.

Earlier this month a German privacy watchdog told Google it intended to use EU privacy law to order it to halt human reviews of audio captured by its Google Assistant AI in Europe — after press had obtained leaked audio snippets and been able to re-identify some of the people in the recordings.

On learning of the regulator’s planned intervention Google suspended reviews.

Apple also announced it was suspending human reviews of Siri snippets globally, again after a newspaper reported that its contractors could access audio and routinely heard sensitive stuff.

Facebook also said it was pausing human reviews of a speech-to-text AI feature offered in its Messenger app — again after concerns had been raised by journalists.

So far Apple, Google and Facebook have suspended or partially suspended human reviews in response to media disclosures and/or regulatory attention.

Meanwhile the lead privacy regulator for all three, Ireland’s DPC, has started asking questions.

In response to the rising privacy scrutiny of what tech giants nonetheless claim is a widespread industry practice, Amazon also recently amended the Alexa privacy policy to disclose that it employs humans to review some audio. It also quietly added an option for users to opt out of the possibility of someone listening to their Alexa recordings. Amazon’s lead EU privacy regulator is also now seeking answers.

Microsoft told Motherboard it is not suspending human reviews at this stage.

Users of Microsoft’s voice assistant can delete recordings — but such deletions require action from the user, and would be required on a rolling basis for as long as the product continues being used. So it’s not the same as having a full and blanket opt out.

We’ve asked Microsoft whether it intends to offer Skype or Cortana users an opt out of their recordings being reviewed by humans.

The company told Motherboard it will “continue to examine further steps we might be able to take”.



WebKit’s new anti-tracking policy puts privacy on a par with security – gpgmail


WebKit, the open source engine that underpins Internet browsers including Apple’s Safari browser, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resource into ‘innovations’ intended to strip web users of their privacy.
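
The humble tracking pixel illustrates how little machinery this takes. The sketch below, using only Python’s standard library, plays the tracker’s side: any page that embeds the 1x1 image causes the visitor’s browser to hand over the Referer header (which page they were reading) and any cookie previously set by the tracker’s host — enough to stitch together a cross-site profile. The hostname, port and cookie value are illustrative.

```python
# A bare-bones sketch of how a tracking pixel works, using only the standard
# library: any page embedding <img src="http://localhost:8000/px.gif"> makes
# the visitor's browser hit this server, handing over Referer and Cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class Pixel(BaseHTTPRequestHandler):
    def do_GET(self):
        # The tracker's 'log line': which page embedded the pixel, and the
        # stable ID previously handed to this browser.
        print("visit:", self.headers.get("Referer"), "cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        # A long-lived cookie gives the tracker a stable ID for this browser.
        self.send_header("Set-Cookie", "uid=abc123; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Pixel).serve_forever()
```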

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
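
‘Bits of entropy’ has a concrete meaning here: each observable attribute contributes roughly log2 of the number of values it can distinguish, and the bits add up across attributes (assuming, unrealistically, uniform and independent values). The worked example below uses invented, illustrative counts rather than measured data.

```python
# A worked example of 'bits of entropy' in the fingerprinting sense. The
# attribute list and value counts are illustrative, not measured data.
from math import log2

attributes = {
    "timezone": 24,
    "screen_resolution": 50,
    "installed_fonts_hash": 10_000,
    "user_agent_string": 2_000,
}

total_bits = sum(log2(n) for n in attributes.values())
for name, n in attributes.items():
    print(f"{name:22s} ~{log2(n):4.1f} bits")
print(f"combined               ~{total_bits:4.1f} bits "
      f"(enough to single out 1 in {2 ** total_bits:,.0f} browsers)")
# 'Reducing the available bits of entropy' means trimming or coarsening
# these signals until the combined total can no longer isolate an individual.
```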

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at Oxford University’s Center for Technology and Global Affairs, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells gpgmail. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be purely about the risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.
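
Apple hasn’t published the classifier’s internals, but the statistical intuition is straightforward: a third-party domain that keeps showing up as a subresource under many unrelated first-party sites has cross-site tracking ability. The sketch below is a simplified heuristic in that spirit — the domains, threshold and single feature are all illustrative, and the real classifier uses richer signals such as redirects and user interaction.

```python
# A simplified heuristic in the spirit of ITP's classifier: a third-party
# domain seen as a subresource under many unrelated first-party sites is
# flagged as having cross-site tracking ability. Illustrative only.
from collections import defaultdict

# (first_party_site, third_party_domain) observations gathered while browsing.
observations = [
    ("news.example",   "tracker.example"),
    ("shop.example",   "tracker.example"),
    ("health.example", "tracker.example"),
    ("news.example",   "cdn.example"),
]

def classify_trackers(obs, min_sites: int = 3) -> set[str]:
    sites_per_third_party = defaultdict(set)
    for first_party, third_party in obs:
        sites_per_third_party[third_party].add(first_party)
    # Domains seen under at least `min_sites` distinct first parties get flagged.
    return {d for d, sites in sites_per_third_party.items() if len(sites) >= min_sites}

print(classify_trackers(observations))  # {'tracker.example'}
```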

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau whose membership is comprised of every type of tracker deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked in privacy standards, rather than proactively working to prevent Internet users from being creeped on.



