America’s largest companies push for federal online privacy laws to circumvent state regulatory efforts – gpgmail


As California moves ahead with what would be the most restrictive online privacy laws in the nation, the chief executives of some of the country’s largest companies are taking their case to the nation’s capital to plead for federal regulation.

Chief executives at Amazon, AT&T, Dell, Ford, IBM, Qualcomm, Walmart, and other leading financial services, manufacturing, and technology companies have issued an open letter to Congressional leadership, through the pro-industry organization the Business Roundtable, urging lawmakers to take action on online privacy.

“Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened,” the letter says.

The subtext to this call to action is the California privacy regulations that are set to take effect by the end of this year.

As we noted when the bill was passed last year, the California legislation has a few key components, including the following requirements:

  • Businesses must disclose what information they collect, the business purpose for collecting it, and any third parties they share that data with.

  • Businesses would be required to comply with official consumer requests to delete that data.

  • Consumers can opt out of their data being sold, and businesses can’t retaliate by changing the price or level of service.

  • Businesses can, however, offer “financial incentives” for being allowed to collect data.

  • California authorities are empowered to fine companies for violations.

There’s a reason why companies would push for federal regulation to supersede any initiatives from the states: it is more of a challenge for companies to adhere to a patchwork of different regulatory regimes at the state level. But it’s also true that companies, following the lead of automakers in California, could simply adhere to the most stringent requirements, which would resolve any confusion.

Indeed, many of these companies are already complying with strict privacy regulations thanks to the passage of the GDPR in Europe.



Mental health websites in Europe found sharing user data for ads – gpgmail


Research by a privacy rights advocacy group has found popular mental health websites in the EU are sharing users’ sensitive personal data with advertisers.

Europeans going online to seek support with mental health issues are having sensitive health data tracked and passed to third parties, according to Privacy International’s findings — including depression websites passing the answers and results of mental health check tests directly to third parties for ad targeting purposes.

The charity used the open source Webxray tool to analyze the data gathering habits of 136 popular mental health web pages in France, Germany and the UK, as well as looking at a small subset of online depression tests (the top three Google search results for the phrase per country).
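Tools like Webxray work by loading each page and recording which third-party domains it contacts. Purely to illustrate the general idea — this is not Webxray’s implementation, and a static fetch like this misses trackers injected by JavaScript — a minimal sketch in Python, assuming the requests and beautifulsoup4 packages are installed, might look like this:

```python
# Illustrative sketch only: list third-party hosts a page loads scripts,
# images or iframes from. Not Webxray; a real crawler drives a browser.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def third_party_hosts(page_url):
    page_host = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hosts = set()
    # Script, image and iframe tags are common carriers of trackers.
    for tag in soup.find_all(["script", "img", "iframe"]):
        src = tag.get("src")
        if not src:
            continue
        host = urlparse(src).hostname
        # Keep only hosts that differ from the page's own domain.
        if host and host != page_host:
            hosts.add(host)
    return sorted(hosts)

if __name__ == "__main__":
    for host in third_party_hosts("https://example.com/"):
        print(host)
```

Comparing the resulting host lists against known tracker domains gives a rough version of the kind of measurement described above.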

It has compiled its findings into a report called “Your mental health for sale”.

“Our findings show that many mental health websites don’t take the privacy of their visitors as seriously as they should,” Privacy International writes. “This research also shows that some mental health websites treat the personal data of their visitors as a commodity, while failing to meet their obligations under European data protection and privacy laws.”

Under Europe’s General Data Protection Regulation (GDPR), there are strict rules governing the processing of health data — which is classified as special category personal data.

If consent is being used as the legal basis to gather this type of data, the standard that must be obtained from the user is “explicit” consent.

In practice that might mean a pop-up before you take a depression test which asks whether you’d like to share your mental health data with a laundry list of advertisers so they can use it to sell you stuff when you’re feeling low — while also offering a clear ‘hell no’ penalty-free choice not to consent (but still get to take the test).

Safe to say, such unvarnished consent screens are as rare as hen’s teeth on the modern Internet.

But, in Europe, beefed-up privacy laws are now being used to challenge the systemic abuses of the ‘data industrial complex’ and to help individuals enforce their rights against a behavior-tracking adtech industry that regulators have warned is out of control.

Among Privacy International’s key findings are that —

  • 76.04% of the mental health web pages contained third-party trackers for marketing purposes
  • Google trackers are almost impossible to avoid, with 87.8% of the web pages in France having a Google tracker, 84.09% in Germany and 92.16% in the UK
  • Facebook is the second most common third-party tracker after Google, with 48.78% of all French web pages analysed sharing data with Facebook; 22.73% for Germany; and 49.02% for the UK.
  • Amazon Marketing Services were also used by many of the mental health web pages analysed (24.39% of analyzed web pages in France; 13.64% in Germany; and 11.76% in the UK)
  • Depression-related web pages used a large number of third-party tracking cookies which were placed before users were able to express (or deny) consent. On average, PI found the mental health web pages placed 44.49 cookies for France; 7.82 for Germany; and 12.24 for the UK

European law around consent as a legal basis for processing (general) personal data — including for dropping tracking cookies — requires it to be informed, specific and freely given. This means websites that wish to gather user data must clearly state what data they intend to collect for what purpose, and do so before doing it, providing visitors with a free choice to accept or decline the tracking.

Dropping tracking cookies without even asking clearly falls foul of that legal standard. And very far foul when you consider the personal data being handled by these mental health websites is highly sensitive special category health data.

“It is exceedingly difficult for people to seek mental health information and for example take a depression test without countless third parties watching,” said Privacy International technologist Eliot Bendinelli in a statement. “All website providers have a responsibility to protect the privacy of their users and comply with existing laws, but this is particularly the case for websites that share unusually granular or sensitive data with third parties. Such is the case for mental health websites.”

Additionally, the group’s analysis found some of the trackers embedded on mental health websites are used to enable a programmatic advertising practice known as Real Time Bidding (RTB). 

This is important because RTB is subject to multiple complaints under GDPR.

These complaints argue that the systematic, high velocity trading of personal data is, by nature, inherently insecure — with no way for people’s information to be secured after it’s shared with hundreds or even thousands of entities involved in the programmatic chain, because there’s no way to control it once it’s been passed. And, therefore, that RTB fails to comply with the GDPR’s requirement that personal data be processed securely.
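For context, a bid request is the message an ad exchange broadcasts to prospective bidders each time a page with programmatic ad slots loads. The sketch below is a heavily simplified stand-in — field names loosely modeled on the OpenRTB format, with illustrative values and recipients rather than anything taken from Privacy International’s research — to show why a health-related page URL can end up in many hands at once:

```python
# Illustrative only: a stripped-down stand-in for a programmatic bid request.
# Real requests carry many more fields.
bid_request = {
    "id": "example-auction-id",
    "site": {
        # The page URL alone can reveal the visitor is taking a depression test.
        "page": "https://mental-health-site.example/depression-test/results",
    },
    "device": {
        "ua": "Mozilla/5.0 (...)",   # user agent, a fingerprintable signal
        "ip": "203.0.113.42",        # coarse location / identity signal
        "geo": {"country": "FRA"},
    },
    "user": {
        "id": "exchange-assigned-user-id",  # links this visit to past behavior
    },
}

# The exchange fans the request out to many bidders at once; once sent,
# nothing technically prevents any recipient from retaining or reselling it.
prospective_bidders = ["dsp-a.example", "dsp-b.example", "data-broker.example"]
for bidder in prospective_bidders:
    print(f"bid request {bid_request['id']} -> {bidder}")
```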

Complaints are being considered by regulators across multiple Member States. But this summer the UK’s data watchdog, the ICO, essentially signalled it is in agreement with the crux of the argument — putting the adtech industry on watch in an update report in which it warns that behavioral advertising is out of control and instructs the industry it must reform.

However the regulator also said it would give players “an appropriate period of time to adjust their practices”, rather than wade in with a decision and banhammers to enforce the law now.

The ICO’s decision to opt for an implied threat of future enforcement to push for reform of non-compliant adtech practices, rather than taking immediate action to end privacy breaches, drew criticism from privacy campaigners.

And it does look problematic now, given Privacy International’s findings suggest sensitive mental health data is being sucked up into bid requests and put about at insecure scale — where it could pose a serious risk to individuals’ rights and freedoms.

Privacy International says it found “numerous” mental health websites including trackers from known data brokers and AdTech companies — some of which engage in programmatic advertising. It also found some depression test websites (namely: netdoktor.de, passeportsante.net and doctissimo.fr, out of those it looked at) are using programmatic advertising with RTB.

“The findings of this study are part of a broader, much more systemic problem: The ways in which companies exploit people’s data to target ads with ever more precision is fundamentally broken,” adds Bendinelli. “We’re hopeful that the UK regulator is currently probing the AdTech industry and the many ways it uses special category data in ways that are neither transparent nor fair and often lack a clear legal basis.”

We’ve reached out to the ICO with questions.

We also asked the Internet Advertising Bureau Europe what steps it is taking to encourage reform of RTB to bring the system into compliance with EU privacy law. At the time of writing the industry association had not responded.

The IAB recently released a new version of what it refers to as a “transparency and consent management framework”, intended for websites to embed in order to collect consent from visitors for processing their data, including for ad targeting purposes — legally, the IAB contends.

However critics argue this is just another dose of business as usual ‘compliance theatre’ from the adtech industry — with users offered only phoney choices as there’s no real control over how their personal data gets used or where it ends up.

Earlier this year Google’s lead privacy regulator in Europe, the Irish DPC, opened a formal investigation into the company’s processing of personal data in the context of its online Ad Exchange — also as a result of a RTB complaint filed in Ireland.

The DPC said it will look at each stage of an ad transaction to establish whether the ad exchange is processing personal data in compliance with GDPR — including looking at the lawful basis for processing; the principles of transparency and data minimisation; and its data retention practices.

The outcome of that investigation remains to be seen. (Fresh fuel was poured on the fire just today, with the complainant submitting new evidence of their personal data being shared in a way they allege infringes the GDPR.)

Increased regulatory attention on adtech practices is certainly highlighting plenty of legally questionable and ethically dubious stuff — like embedded tracking infrastructure that’s taking liberal notes on people’s mental health condition for ad targeting purposes. And it’s clear that EU regulators have a lot more work to do to deliver on the promise of GDPR.





After data incidents, Instagram expands its bug bounty – gpgmail


Facebook is expanding its data abuse bug bounty to Instagram.

The social media giant, which owns Instagram, first rolled out its data abuse bounty in the wake of the Cambridge Analytica scandal, which saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

The idea was that security researchers and platform users alike could report instances of third-party apps or companies that were scraping, collecting and selling Facebook data for other purposes, such as to create voter profiles or build vast marketing lists.

Even following the high-profile public relations disaster of Cambridge Analytica, Facebook still had apps illicitly collecting data on its users.

Instagram wasn’t immune either. Just this month Instagram booted a “trusted” marketing partner off its platform after it was caught scraping stories, locations and other data points on millions of users, forcing Instagram to make product changes to prevent future scraping efforts. That came after two other incidents earlier this year: a security researcher found 14 million scraped Instagram profiles sitting on an exposed database — without a password — for anyone to access, and another company scraped the profile data — including email addresses and phone numbers — of Instagram influencers.

Last year Instagram also choked off developers’ access as the company tried to rebuild its privacy image in the aftermath of the Cambridge Analytica scandal.

Dan Gurfinkel, security engineering manager at Instagram, said its new and expanded data abuse bug bounty aims to “encourage” security researchers to report potential abuse.

Instagram said it’s also inviting a select group of trusted security researchers to find flaws in its Checkout service ahead of its international rollout, who will also be eligible for bounty payouts.




WebKit’s new anti-tracking policy puts privacy on a par with security – gpgmail


WebKit, the open source engine that underpins Internet browsers including Apple’s Safari browser, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.

Trackers are technologies invisible to the average web user that are designed to keep tabs on where they go and what they look at online — typically for ad targeting, though web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking, to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resources into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis theirs), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
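To make “reducing the available bits of entropy” concrete: the identifying power of a fingerprinting signal is commonly quantified as log2 of the number of distinct values it can take, so shrinking that value range shrinks the bits. A minimal sketch, with illustrative numbers rather than anything from WebKit’s policy:

```python
import math

def bits_of_entropy(distinct_values: int) -> float:
    """Identifying power of a uniformly distributed signal with N distinct values."""
    return math.log2(distinct_values)

# A signal that can take 1,000 distinct observable values carries ~10 bits;
# coarsening it so only 100 values remain leaves ~6.6 bits.
print(round(bits_of_entropy(1000), 2))  # 9.97
print(round(bits_of_entropy(100), 2))   # 6.64

# Independent signals roughly add: fewer bits per signal means more users
# share the same combined fingerprint, so any one user is harder to single out.
```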

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells gpgmail. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be about the purely risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts, as well as, to a minor degree, the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau whose membership is comprised of every type of tracker deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two of them discussing potential future work to combat tracking techniques designed to override privacy settings, in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked-in privacy standards, rather than proactively working to prevent Internet users from being creeped on.





Facebook contractors said to have collected and transcribed users’ audio without permission – gpgmail


Reset the counter on “number of days since the last Facebook privacy scandal.”

Facebook has become the latest tech giant to face scrutiny over its handling of users’ audio, following a report that said the social media giant collected audio from its users and transcribed it using third-party contractors.

The report came from Bloomberg, citing the contractors who requested anonymity for fear of losing their jobs.

According to the report, it’s not known where the audio came from or why it was transcribed, but Facebook users’ conversations were being checked to see if the company’s artificial intelligence had properly interpreted them.

There are several ways that Facebook collects voice and audio data, including from its mobile apps, its Messenger voice and video service, and through its smart speaker. But the social media giant’s privacy policy makes no clear mention of voice data. Bloomberg also noted that contractors felt their work was “unethical” because Facebook “hasn’t disclosed to users that third parties may review their audio.”

Facebook has since stopped the practice.

We’ve asked Facebook several questions, including how the audio was collected, for what reason it was transcribed, and why users weren’t explicitly told of the third-party transcription, but did not immediately hear back.

Facebook becomes the latest tech company to face questions about its use of third-party contractors and staff to review user audio.

Amazon saw the initial round of flak for allowing contractors to manually review Alexa recordings without express user permission, forcing the company to add an opt-out to its Echo devices. Google also faced heat for allowing human review of audio data, along with Apple, which used contractors to listen to seemingly private Siri recordings. Microsoft also listened to some Skype calls made through the app’s translation feature.

It’s been over a year since Facebook last had a chief security officer in the wake of Alex Stamos’ departure.




Robocall blocking apps caught sending your private data without permission – gpgmail


Robocall-blocking apps promise to rid your life of spoofed and spam phone calls. But are they as trustworthy as they claim to be?

One security researcher said many of these apps can violate your privacy as soon as they are opened.

Dan Hastings, a senior security consultant at cybersecurity firm NCC Group, analyzed some of the most popular robocall-blocking apps — including TrapCall, Truecaller, and Hiya — and found egregious privacy violations.

Robocalls are getting worse, with some people getting dozens of calls a day. These automated calls demand you “pay the IRS” a fine you don’t owe or pretend to be tech support. They often try to trick you into picking up the phone by spoofing their number to look like a local caller. But as much as the cell networks are trying to cut down on spam, many consumers are turning to third-party apps to filter their incoming calls.

But many of these apps, said Hastings, send user or device data to third-party data analytics companies — often to monetize your information — without your explicit consent, instead burying the details in their privacy policies.

One app, TrapCall, sent users’ phone numbers to a third-party analytics firm, AppsFlyer, without telling users — either in the app or in the privacy policy.

He also found Truecaller and Hiya uploaded device data — device type, model and software version, among other things — before a user could accept their privacy policies. Those apps, said Hastings, violate Apple’s app guidelines on data use and sharing, which mandate that app makers first obtain permission before using or sending data to third parties.

Many of the other apps aren’t much better. Several other apps that Hastings tested immediately sent some data to Facebook as soon as the app loaded.

“Without having a technical background, most end users aren’t able to evaluate what data is actually being collected and sent to third parties,” said Hastings. “Privacy policies are the only way that a non-technical user can evaluate what data is collected about them while using an app.”

None of the companies acted on emails from Hastings warning about the privacy issues, he said. It was only after he contacted Apple that TrapCall updated its privacy policy.

But he reserved some criticism for Apple, noting that app privacy policies “don’t appear to be monitored” as he discovered with Truecaller and Hiya.

“Privacy policies are great, but apps need to get better about abiding by them,” said Hastings.

“If most people took the time to read and try to understand privacy policies for all the apps they use (and are able to understand them!), they might be surprised to see how much these apps collect,” he said. “Until that day, end-users will have to rely on security researchers performing manual deep dives into how apps handle their private information in practice.”

Spokespeople for TrapCall, Truecaller, and Hiya did not comment when reached prior to publication.



Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed – gpgmail


Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.

The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.

Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.

The most significant language from the decision from the circuit court seems to be this:

 We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”

As April Glaser noted in “Slate”, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.

“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”

That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.

As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to “Reuters”.

Now, the lawsuit will go back to the court of U.S. District Judge James Donato in San Francisco, who approved the class action lawsuit last April, for a possible trial.

Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000 and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
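For a rough sense of scale — a back-of-the-envelope calculation assuming a single statutory violation per class member, not a figure from the court or the parties — the arithmetic looks like this:

```python
# Back-of-the-envelope only; assumes one violation per class member.
class_size = 7_000_000       # potential Illinois Facebook users in the class
negligent_damages = 1_000    # per negligent violation under BIPA, in USD
intentional_damages = 5_000  # per intentional violation under BIPA, in USD

print(f"${class_size * negligent_damages:,}")    # $7,000,000,000
print(f"${class_size * intentional_damages:,}")  # $35,000,000,000
```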

“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”

These civil damages could come on top of fines that Facebook has already agreed to pay to the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That resulted in one of the largest penalties ever levied against a U.S. technology company: Facebook is potentially on the hook for a $5 billion payout to the U.S. government. That penalty is still subject to approval by the Justice Department.



Amazon quietly adds ‘no human review’ option to Alexa settings as voice AIs face privacy scrutiny – gpgmail


Amazon has tweaked the settings for its Alexa voice AI to allow users to opt out of their voice recordings being manually reviewed by the company’s human workers.

The policy shift took effect Friday, according to Bloomberg, which reports that Alexa users will now find an option in the settings menu of the Alexa smartphone app to disable human review of their clips.

The Alexa T&C did not previously inform users of the possibility that audio recordings captured by the service might be manually reviewed by actual humans. (Amazon still doesn’t appear to provide this disclosure on its main website either.)

But the Alexa app now includes a disclaimer in the settings menu that flags the fact human ears may in fact be listening, per the report.

This disclosure appears only to surface if users go digging into the settings menu.

Bloomberg says users must tap ‘Settings’ > ‘Alexa Privacy’ > ‘Manage How Your Data Improves Alexa’ before they see the following text: “With this setting on, your voice recordings may be used to develop new features and manually reviewed to help improve our services. Only an extremely small fraction of voice recordings are manually reviewed.”

The policy tweak comes as regulators are dialling up attention on the privacy risks posed by voice AI technologies.

This week it emerged that Google was ordered by a German data protection watchdog to halt manual reviews of audio snippets generated by its voice AI, after thousands of recordings were leaked to the Belgian media last month, which was able to identify some of the people in the clips.

Google has suspended reviews across the whole of Europe while it liaises with EU privacy regulators.

In a statement on its website the Hamburg privacy watchdog raised concerns about other operators of voice AIs, urging EU regulators to make checks on providers such as Amazon and Apple — and “implement appropriate measures”.

Coincidentally (or not) Apple also suspended human reviews of Siri snippets this week — globally, in its case — following privacy concerns raised by a recent UK media report. The Guardian newspaper quoted a whistleblower claiming contractors regularly hear confidential personal data captured by Siri.

While Google and Apple have entirely suspended human reviews of audio snippets (at least temporarily), Amazon has not gone so far.

Nor does it automatically opt users out. The policy change just lets users disable reviews — which requires consumers to both understand the risk and act to safeguard their privacy.

Amazon’s disclosure of the existence of human reviews is also currently buried deep in the settings, rather than being actively conveyed to users.

It’s not clear whether any of this will wash with regulators in Europe. 

Bloomberg reports that Amazon declined to comment on whether it had been contacted by regulators about the Alexa recordings review program, saying only: “We take customer privacy seriously and continuously review our practices and procedures. We’ll also be updating information we provide to customers to make our practices more clear.”

We reached out to Amazon with questions but at the time of writing a spokesperson was not available.

