Facebook Takes First Steps in Creating Mind-Reading Technology



If you still use Facebook after the Cambridge Analytica scandal, Libra, and more privacy and ethics violations than you and your extended family can count on their fingers and toes, you should have no ethical concerns over the computer-brain interface they began developing two years ago. Now, the first fruit of their labors has arrived.

A Facebook-sponsored experiment at the University of California, San Francisco successfully created an interface that translates brain signals into dialogue, with the results published in Nature Communications. The software reads these signals to determine what you’ve heard and what you said in response, without access to any audio of the conversation. The process utilizes high-density electrocorticography (ECoG), which requires sensors implanted in the brain, so there is no immediate concern about any non-consensual (literal) mind reading on Facebook’s part. Furthermore, it’s clear from the published research that the technology still has a long road ahead before it becomes both natural and practically useful:

Here we demonstrate real-time decoding of perceived and produced speech from high-density ECoG activity in humans during a task that mimics natural question-and-answer dialogue. While this task still provides explicit external cueing and timing to participants, the interactive and goal-oriented aspects of a question-and-answer paradigm represent a major step towards more naturalistic applications. During ECoG recording, participants first listened to a set of pre-recorded questions and then verbally produced a set of answer responses. These data served as input to train speech detection and decoding models. After training, participants performed a task in which, during each trial, they listened to a question and responded aloud with an answer of their choice. Using only neural signals, we detect when participants are listening or speaking and predict the identity of each detected utterance using phone-level Viterbi decoding. Because certain answers are valid responses only to certain questions, we integrate the question and answer predictions by dynamically updating the prior probabilities of each answer using the preceding predicted question likelihoods.

Essentially, participants provided live answers to pre-recorded questions, and researchers used their brain signal data to train models to recognize both what they said and what they heard. On average, the software correctly identified perceived questions 76 percent of the time and participants’ spoken answers at a lower rate of 61 percent. While it’s easy to concoct nefarious uses for this technology on Facebook’s behalf, the technology itself shows a lot of promise for communicating with people who are otherwise unable to speak due to injury or neurodegenerative disorders.
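For the curious, the context-integration step the researchers describe amounts to a straightforward Bayesian update: the decoder’s confidence about which question was asked reweights the prior probability of each candidate answer before that answer is decoded. Here’s a minimal sketch of the idea; the questions, answers, and probabilities are made up for illustration, and this is not the researchers’ actual pipeline:

```swift
// Illustrative sketch of the paper's context-integration idea.
// All questions, answers, and numbers below are invented examples.

// Likelihoods the decoder assigned to each candidate question it heard.
let questionLikelihoods: [String: Double] = [
    "How is your room currently?": 0.7,
    "How do you feel right now?": 0.3,
]

// Which answers are valid responses to which question (conditional priors).
let answerGivenQuestion: [String: [String: Double]] = [
    "How is your room currently?": ["Bright": 0.5, "Dark": 0.5],
    "How do you feel right now?": ["Fine": 0.6, "Tired": 0.4],
]

// Update each answer's prior by marginalizing over the predicted questions:
// P(answer) = sum over questions of P(answer | question) * P(question).
var answerPriors: [String: Double] = [:]
for (question, qLikelihood) in questionLikelihoods {
    for (answer, pAnswer) in answerGivenQuestion[question] ?? [:] {
        answerPriors[answer, default: 0.0] += pAnswer * qLikelihood
    }
}

// These priors would then weight the phone-level Viterbi decode of the
// detected answer utterance, biasing it toward contextually valid answers.
for (answer, prior) in answerPriors.sorted(by: { $0.value > $1.value }) {
    print(answer, prior)
}
```

The payoff of this trick is that the decoder never has to pick an answer in a vacuum: if it’s fairly sure it just heard a question about the room, answers like “Bright” get a head start over answers that wouldn’t make sense in context.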

While this research should continue in order to produce new medical breakthroughs and help people, it should also continue to raise concerns when funded by a company that both wants to predict your future actions and, in some cases, already can. Will the company literally read minds in the near future? No, it has to revolutionize the global economy first, and a 61 percent accuracy achieved under tightly controlled conditions through invasive brain sensor implants will take some time to become more precise and people-friendly. Nevertheless, we’ve seen how Facebook’s privacy problems can escalate significantly when concerns aren’t raised in advance.

Would you like your brain signals used for advertising? Facebook refused to deny they would use the technology for this purpose. Ads are highly manipulative and consumers don’t want them—whether they promote a gentle new body wash or a dubious political agenda. Nevertheless, advertising revenue nearly reached $105 billion in 2017. Imagine what companies will pay for actual thoughts.

Of course, Facebook insists their Brain API will only read the thoughts you want to share. Facebook spokesperson Ha Thai put it like this:

We are developing an interface that allows you to communicate with the speed and flexibility of voice and the privacy of text. Specifically, only communications that you have already decided to share by sending them to the speech center of your brain. Privacy will be built into this system, as every Facebook effort.

Cause for skepticism aside, consider how many times you’ve put your foot in your own mouth or just wish you hadn’t said something as you’re saying it. Consider having everything you ever say on record. Do you want Facebook to have that data? Do you want anyone to have it? If you don’t, now is a good time to start caring about Facebook’s research because we already know what happens when we wait and see what they’ll do with it.

Title image credit: Adam Dachis, Gan Khoon Lay, and Laymik.


Is FaceApp Safe to Use?



Over the past couple of weeks, FaceApp—the AI-driven photo augmentation tool for smartphones—became the source of a major data privacy controversy that appears to have been greatly overstated. Nevertheless, it points to a clear and common issue about the rights we may give up with potentially any app we allow on our devices.

What Happened With FaceApp?

On July 14th, developer Joshua Nozzi tweeted an accusation (since removed) stating that FaceApp seemed to be uploading all photos in a user’s library, not only the photos a given user selects for use with the app’s services. He also pointed to Russian involvement with the company, playing on common concerns over illicit Russian involvement in US data-related matters. Within a couple of days, pseudonymous security researcher Elliot Alderson responded to 9to5Mac’s coverage of Nozzi’s accusation with evidence to the contrary. FaceApp also issued a statement to 9to5Mac to the same effect. Here is the abridged version:

We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.

FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.

Even though the core R&D team is located in Russia, the user data is not transferred to Russia.

Although 9to5Mac jumped the gun by publishing Nozzi’s accusation, as his claims were proven false, Chance Miller—the article’s author—raises an important point:

It’s always wise to take a step back when apps like FaceApp go viral. While they are often popular and can provide humorous content, there can often be unintended consequences and privacy concerns.

Nozzi’s false accusation seems more like an honest mistake than a malicious act, and Miller’s point illustrates why we’re likelier to panic when unrelated circumstances paint a picture of danger. While we should always take a moment to find evidence for our claims before publishing, to avoid inciting widespread panic unnecessarily, it’s not hard to see how someone could make this mistake when people are on high alert for this type of activity.

Is Any App Truly Safe to Use?

Although FaceApp hasn’t tricked anyone into providing ownership of their photo library in order to build a massive database of US citizens for the Russian government—or whatever conspiracy theory you prefer—this incident highlights how easily we provide broad permissions without considering the consequences each time we download an app.

When an app requests access to data on your smartphone, it casts a wide net out of necessity. Photo apps don’t request the right to save photos or to access only the photos you explicitly select, but rather your photo library as a whole. You can’t grant microphone and camera access, or really anything else, with granular permissions that give you control over what the app can do. Furthermore, smartphones don’t provide a simple way for people to see what apps actually do: logs of any kind, or a means of monitoring network activity, are not made available to the average user.
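To see how coarse that request really is, here’s a minimal sketch of the all-or-nothing photo library prompt an iOS app triggered at the time, using Apple’s Photos framework. The handling code is illustrative, not FaceApp’s actual implementation:

```swift
import Photos

// Photo library authorization here is all-or-nothing: the app either
// sees the entire library or nothing at all. There is no way for the
// user to grant access to only the photos they pick.
PHPhotoLibrary.requestAuthorization { status in
    switch status {
    case .authorized:
        // The app can now read and enumerate every photo on the device,
        // not just the one the user intended to edit.
        print("Full library access granted")
    case .denied, .restricted:
        print("No access to any photos")
    case .notDetermined:
        print("The user has not responded to the prompt yet")
    @unknown default:
        break
    }
}
```

Once the user taps through that single prompt, nothing in the platform distinguishes an app that reads one selfie from one that reads ten thousand.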

For this reason, most users have no way to discover whether an app breaks their trust. Until we have better control over what apps can and cannot access on our devices, we have to consider the worst-case scenario with every download. Unless a person has the knowledge and willingness to regularly monitor app activity, as well as read (and understand) each app’s terms of service in their entirety, that person cannot rule out the possibility of malicious use of their data. After all, Facebook was just fined $5 billion for allowing the highly non-consensual leak of user data (not that it mattered), and much of that leak occurred through a person’s association with a user who downloaded the problematic app.

While most commonly used apps don’t find themselves in controversial situations like this, data leaks occur with enough frequency that we need to remember what we risk with every contribution of our personal information. Every access granted, every photo uploaded, and every bit of information we provide an app—whether it identifies us directly or indirectly—provides a company with new information about us that they often claim ownership of through their terms of service. They may or may not use the collected data for disagreeable purposes but they afford themselves the right through a process they know almost everyone will ignore. Companies need broad language in their legal agreements to protect themselves. Unfortunately, this legal necessity also cultivates a framework for taking advantage of users when a company publishes an app for the purposes of data collection.

Granular permissions on smartphones take a step toward solving this problem, but they won’t prevent apps from continuing to request broad permissions and requiring access as the price of admission. At this point, most of us know that we’re paying with our data when we’re not paying with our dollars, but the problematic difference lies in the exact cost. Most people probably wouldn’t mind if FaceApp used their selfies to improve the quality of the service but might feel differently if that data were used for another reason. Even without supplying our entire photo libraries, and even if FaceApp deletes the images within 48 hours, the company has still provided itself with more than enough time to gain value from the data users willingly provide. While it appears FaceApp has no malicious intent, it’s unclear what our provided data costs us because we don’t know how the company uses it.

The same applies to nearly every single app we download. Without transparency, we’re paying a cost determined in secret, and repeated across many, many apps, it becomes very difficult to pinpoint the source of any problems that result. FaceApp appears to operate like every other app: requesting broad data permissions out of necessity and reducing liability through a terms-of-service agreement. With every app, we need to ask ourselves whether the service it provides is worth the gamble of an unknown cost.
