Independent report on Facebook bias catalogues mild complaints from conservatives


An independent investigator has issued a preliminary report on its work determining the existence and/or extent of bias against conservatives on Facebook. It’s refreshingly light reading — the complaints are less “Facebook is a den of liberals” and more “we need more transparency on ad policies.”

The investigation was undertaken in May of last year, when Facebook retained Covington & Burling, led by former Republican Senator Jon Kyl, to look into the allegations loudly being made at the time that there was some kind of anti-conservative bias on the social network.

“We know we need to take these concerns seriously,” wrote VP of global affairs and communications Nick Clegg. Of course, Facebook says it takes everything seriously, so it’s hard to be sure sometimes.

Covington & Burling’s approach was to interview more than a hundred individuals and organizations that fall under the broad umbrella of “conservative” about their concerns. These would be sorted, summarized, and presented to Facebook leadership.

By far the biggest concern wasn’t anything like “they’re censoring us” or “they’re pushing an agenda.” These views, which are often over-amplified, don’t seem to reflect what everyday folks and businesses are having trouble with on the platform.

Instead, the largest concern is transparency. The people interviewed were mainly concerned that the policies behind content moderation, ad approval, fact-checking, and so on were inadequately explained. In the absence of good explanations, these people understandably supplied their own, usually along the lines that they were being targeted disproportionately compared with those to their political left.

It’s worth noting here that no evidence for or against these claims was sought or presented. The interviews were about concerns people had, and did not extend to anything like “provide the logs where you can see this happened.” What was gathered was strictly anecdotal.

In a way this feels irresponsible, in that anyone could voice a concern about a problem that may very well not exist, or that may not be universally agreed to be a problem. For instance, some groups complained that their anti-abortion ads featuring premature babies were being removed. Maybe Facebook feels that images of bloody, screaming children will not increase time on site.

Unfortunately, hate speech is real and here to stay. But it is valid to take issue with the subjectivity of how content gets classified as such.

But at the same time, the intent was not to quantify and solve bias, necessarily, but to understand how people perceived bias in day-to-day use of the site in the first place.

As you may have perceived, the concerns of conservatives in fact mirror the concerns of liberals: that Facebook is applying unknown and unknowable processes to the selection and display of content on the platform, and that our ability to question or challenge these processes is limited. These are nonpartisan issues.

Facebook’s response since the report was commissioned (in other words, over the last year and a half) has been to generally provide more information whenever it has stepped in to touch a post, ad, or other user data. It now tells people why certain posts are being shown, it has better documented news feed ranking (though not too well, lest someone take advantage), and it has created a better system for making content removal decisions, as well as a better appeal process.

So it says, anyway, but we can hardly take the company at its word that it has increased diversity, improved tools, and so forth. The investigation by Covington & Burling continues and these are but the preliminary results. Clegg writes that “This is the first stage of an ongoing process and Senator Kyl and his team will report again in a few months’ time.”

You can read the full interim report below:

Facebook – Covington Interim Report 1



Without evidence, Trump accuses Google of manipulating millions of votes


The president this morning lashed out at Google on Twitter, accusing the company of manipulating millions of votes in the 2016 election to sway it toward Hillary Clinton. The authority on which he bases this serious accusation, however, is little more than supposition in an old paper reheated by months-old congressional testimony.

Trump’s tweet this morning cited no paper at all, in fact, though he did tag the conservative watchdog group Judicial Watch, perhaps asking it to investigate. It’s also unclear who he thinks should sue the company.

Coincidentally, Fox News had just mentioned the existence of such a report about five minutes earlier. Trump has also recently criticized Google and CEO Sundar Pichai over a variety of perceived slights.

In fact, the report was not “just issued,” and does not say what the president suggests it did. What both Fox and Trump appear to be referring to is a paper published in 2017 that described what the authors say was a bias in Google and other search engines during the run-up to the 2016 election.

If you’re wondering why you haven’t heard about this particular study, I can tell you why — it’s a very bad study. Its contents do not amount to anything, let alone evidence by which to accuse a major company of election interference.

The authors looked at search results for 95 people over the 25 days preceding the election and evaluated the first page for bias. They claim to have found, based on “crowdsourced” determinations of bias (a process that is not described), that most search results, especially on Google, tended to be biased in favor of Clinton.

No data on these searches, such as a sample search and results and how they were determined to be biased, is provided. There’s no discussion of the fact, for example, that Google routinely and openly tailors search results based on a person’s previous searches, stated preferences, location and so on.

In fact, Epstein’s “report” lacks all the qualifications of any ordinary research paper.

There is no abstract or introduction, no methods section to show the statistics work and definitions of terms, no discussion, no references. Without this basic information the document is not only incapable of being reviewed by peers or experts, but is indistinguishable from completely invented suppositions. Nothing in this paper can be in any way verified.

Robert Epstein freely references himself, however: a single 2015 paper in PNAS on how search results could be deliberately manipulated to affect a voter looking for information on candidates, and the many, many opinion pieces he has written on the subject, frequently on far-right outlets the Epoch Times and Daily Caller, but also non-partisan ones like USA Today and Bloomberg Businessweek.

The numbers advanced in the study are completely without merit. Citing math he does not describe, Epstein says that “a pro-Clinton bias in Google’s search results would over time, shift at least 2.6 million votes to Clinton.” No mechanism or justification for this assertion is provided, except a highly theoretical one based on ideas and assumptions from his 2015 study, which had little in common with this one. The numbers are, essentially, made up.
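To see why an unexplained figure like that is meaningless, consider that any such number has to come from a chain of multiplied assumptions. Here is a purely hypothetical back-of-envelope sketch of the kind of arithmetic that could produce a “votes shifted” figure; every number in it is invented for illustration, and none of them appears in Epstein’s report:

```python
# Purely hypothetical back-of-envelope arithmetic. None of these numbers
# come from Epstein's report; they only show how sensitive a
# "votes shifted" figure is to unstated assumptions.

searchers = 100_000_000    # hypothetical: voters who searched for election info
undecided_share = 0.10     # hypothetical: fraction of them still undecided
saw_biased_results = 0.50  # hypothetical: fraction exposed to "biased" pages
persuasion_rate = 0.20     # hypothetical: shift rate, loosely echoing the 2015 lab study

votes_shifted = searchers * undecided_share * saw_biased_results * persuasion_rate
print(f"{votes_shifted:,.0f}")  # 1,000,000 -- or any figure you like, given the inputs
```

Nudge any one of those inputs and the result swings by millions, which is exactly why a real paper would need to justify each of them.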

In other words, this so-called report is nothing of the kind — a nonfactual document whose claims carry no scientific justification, written by someone who publishes anti-Google editorials almost monthly. It was not published in a journal of any kind, simply put online at a private nonprofit research agency called the American Institute for Behavioral Research and Technology, where Epstein is on staff and which appears to exist almost solely to promote his work — such as it is.

(In response to my inquiry, AIBRT said that it is not legally bound to reveal its donors and chooses not to, but stated that it does not accept “gifts that might cause the organization to bias its research projects in any way.”)

Lastly, in his paper, Epstein speculates that Google may have been manipulating the data his team was collecting for the report, citing differences between data from Gmail users and non-users, and choosing to throw away all of the former while still reporting on it:

As you can see, the search results seen by non-gmail users were far more biased than the results seen by gmail users. Perhaps Google identified our confidants through its gmail system and targeted them to receive unbiased results; we have no way to confirm this at present, but it is a plausible explanation for the pattern of results we found.

I leave it to the reader to judge the plausibility of this assertion.

If that were all, it would be more than enough. But Trump’s citation of this flimsy paper doesn’t even get the facts right. His assertion was that “Google manipulated from 2.6 million to 16 million votes for Hillary Clinton in 2016 Election,” and the report doesn’t even state that.

The source for this false claim appears to be Epstein’s recent appearance in front of the Senate Judiciary Committee in July. Here he received star treatment from Sen. Ted Cruz (R-TX), who asked him to share his expert opinion on the possibility of tech manipulation of voting. Cruz’s previous expert for this purpose was conservative radio talk show host Dennis Prager.

Again citing no data, studies or mechanisms whatsoever, Epstein described 2.6 million as a “rock-bottom minimum” of votes that Google, Facebook, Twitter and others could have affected (he does not say did affect, or attempted to affect). He also says that in subsequent elections, specifically in 2020, “if all these companies are supporting the same candidate, there are 15 million votes on the line that can be shifted without people’s knowledge and without leaving a paper trail for authorities to trace.”

“The methods they are using are invisible, they’re subliminal, they’re more powerful than most any effects I’ve seen in the behavioral sciences,” Epstein said, but did not actually describe what the techniques are. Though he did suggest that Mark Zuckerberg could send out a “get out the vote” notification only to Democrats and no one would ever know — absurd.

In other words, the numbers are not only invented, but unrelated to the 2016 election, and inclusive of all tech companies, not just Google. Even if Epstein’s claims were anywhere near justifiable, Trump’s tweet mischaracterizes them and gets everything wrong. Nothing about any of this is anywhere close to correct.

Google issued a statement addressing the president’s accusation, saying, “This researcher’s inaccurate claim has been debunked since it was made in 2016. As we stated then, we have never re-ranked or altered search results to manipulate political sentiment.”

You can read the full “report” below:

EPSTEIN & ROBERTSON 2017 – A Method for Detecting Bias in Search Rankings – AIBRT





Racial bias observed in hate speech detection algorithm from Google


Understanding what makes something offensive or hurtful is difficult enough that many people can’t figure it out, let alone AI systems. And people of color are frequently left out of AI training sets. So it’s little surprise that Alphabet/Google-spawned Jigsaw manages to trip over both of these issues at once, flagging slang used by black Americans as toxic.

To be clear, the study was not specifically about evaluating the company’s hate speech detection algorithm, which has faced issues before. Instead, the algorithm is cited as a contemporary attempt to computationally dissect speech and assign a “toxicity score” — one that appears to fail in a way indicative of bias against black American speech patterns.

The researchers, at the University of Washington, were interested in the idea that databases of hate speech currently available might have racial biases baked in — like many other datasets that suffered from a lack of inclusive practices during formation.

They looked at a handful of such databases, essentially thousands of tweets annotated by people as being “hateful,” “offensive,” “abusive,” and so on. These databases were also analyzed to find language strongly associated with African American English or white-aligned English.

Combining these two sets basically let them see whether white or black vernacular had a higher or lower chance of being labeled offensive. Lo and behold, black-aligned English was much more likely to be labeled offensive.

For both datasets, we uncover strong associations between inferred AAE dialect and various hate speech categories, specifically the “offensive” label from DWMW 17 (r = 0.42) and the “abusive” label from FDCL 18 (r = 0.35), providing evidence that dialect-based bias is present in these corpora.
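The correlations quoted there are plain Pearson r values between an inferred dialect measure and a binary annotation. A minimal sketch of that computation, assuming a hypothetical table with a per-tweet probability of AAE dialect and a 0/1 “offensive” label (the column names are mine, not the paper’s):

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical data: p_aae is an inferred probability that a tweet is in
# African American English; offensive is the crowd annotation (0 or 1).
df = pd.DataFrame({
    "p_aae":     [0.90, 0.80, 0.10, 0.20, 0.70, 0.05],
    "offensive": [1,    1,    0,    0,    1,    0],
})

r, p = pearsonr(df["p_aae"], df["offensive"])
print(f"r = {r:.2f} (p = {p:.3f})")

# A positive r, like the paper's 0.42, means tweets inferred to be AAE are
# disproportionately the ones carrying the "offensive" label in the corpus.
```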

The experiment continued with the researchers sourcing their own annotations for tweets, and found that similar biases appeared. But by “priming” annotators with the knowledge that the person tweeting was likely black or using black-aligned English, the likelihood that they would label a tweet offensive dropped considerably.

Examples of control, dialect priming, and race priming for annotators.
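Measuring the priming effect is, at bottom, a comparison of label rates across annotation conditions. A minimal sketch with invented data and column names of my own, not the study’s:

```python
import pandas as pd

# Hypothetical annotations: one row per (tweet, annotator) judgment, with
# the condition under which the annotator worked.
annotations = pd.DataFrame({
    "condition": ["control", "control", "dialect_primed", "dialect_primed",
                  "race_primed", "race_primed"],
    "labeled_offensive": [1, 1, 1, 0, 0, 0],
})

# Share of judgments that called the tweet offensive, per condition.
print(annotations.groupby("condition")["labeled_offensive"].mean())

# If the primed conditions show lower rates than the control, annotators
# judged the same language less harshly once told the author was likely black.
```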

This isn’t to say necessarily that annotators are all racist or anything like that. But the job of determining what is and isn’t offensive is a complex one socially and linguistically, and obviously awareness of the speaker’s identity is important in some cases, especially in cases where terms once used derisively to refer to that identity have been reclaimed.

What’s all this got to do with Alphabet, or Jigsaw, or Google? Well, Jigsaw is a company built out of Alphabet — which we all really just think of as Google by another name — with the intention of helping moderate online discussion by automatically detecting (among other things) offensive speech. Its Perspective API lets people input a snippet of text and receive a “toxicity score.”
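For a sense of what that looks like in practice, here is a minimal sketch of the kind of call involved, based on the API’s public documentation at the time; the endpoint and field names may have changed since, and you would need your own API key:

```python
import requests

API_KEY = "YOUR_API_KEY"  # issued through the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask Perspective for a 0-1 toxicity score for a snippet of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("you are a wonderful person"))  # expect a low score
```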

As part of the experiment, the researchers fed a bunch of the tweets in question to Perspective. What they saw was “correlations between dialects/groups in our datasets and the Perspective toxicity scores. All correlations are significant, which indicates potential racial bias for all datasets.”


Chart showing that African American English (AAE) was more likely to be labeled toxic by Alphabet’s Perspective API.

So basically, they found that Perspective was far more likely to label black-aligned speech as toxic than white-aligned speech. Remember, this isn’t a model thrown together on the back of a few thousand tweets — it’s an attempt at a commercial moderation product.

As this comparison wasn’t the primary goal of the research, but rather a byproduct, it should not be taken as some kind of massive takedown of Jigsaw’s work. On the other hand, the differences shown are very significant and quite in keeping with the rest of the team’s findings. At the very least it is, as with the other datasets evaluated, a signal that the processes involved in their creation need to be reevaluated.

I’ve asked the researchers for a bit more information on the paper and will update this post if I hear back. In the meantime you can read the full paper, which was presented at the annual meeting of the Association for Computational Linguistics in Florence, below:

The Risk of Racial Bias in Hate Speech Detection



A problem recognized but still unresolved


There are those who praise AI as the solution to some of humankind’s gravest problems, and those who demonize it as the world’s greatest existential threat. Of course, these are two ends of the spectrum, and AI surely presents exciting opportunities for the future, as well as challenging problems to be overcome.

One of the issues that’s attracted much media attention in recent years has been the prospect of bias in AI. It’s a topic I wrote about in gpgmail (Tyrant in the Code) more than two years ago. The debate is raging on.

At the time, Google had come under fire when research showed that when a user searched online for “hands,” the image results were almost all white; but when searching for “black hands,” the images were far more derogatory depictions, including a white hand reaching out to offer help to a black one, or black hands working in the earth. It was a shocking discovery that led to claims that, rather than heal divisions in society, AI technology would perpetuate them.

As I asserted two years ago, it’s little wonder that such instances might occur. In 2017, at least, the vast majority of people designing AI algorithms in the U.S. were white males. And while there’s no implication that those people are prejudiced against minorities, it would make sense that they pass on their natural, unconscious bias in the AI they create.

And it’s not just Google algorithms at risk from biased AI. As the technology becomes increasingly ubiquitous across every industry, it will become more and more important to eliminate any bias in the technology.

Understanding the problem

AI was indeed important and integral in many industries and applications two years ago, but its importance has, predictably, increased since then. AI systems now help recruiters identify viable candidates, loan underwriters decide whether to lend money to customers and even judges deliberate on whether a convicted criminal is likely to re-offend.

Of course, AI and data can certainly help humans make more informed decisions, but if the underlying technology is biased, the results will be too. If we continue to entrust the future of AI technology to a non-diverse group, then the most vulnerable members of society could be at a disadvantage in finding work, securing loans and being fairly tried by the justice system, plus much more.


Fortunately, the issue around bias in AI has come to the fore in recent years, and more and more influential figures, organizations and political bodies are taking a serious look at how to deal with the problem.

The AI Now Institute is one such organization researching the social implications of AI technology. Launched in 2017 by research scientists Kate Crawford and Meredith Whittaker, AI Now focuses on the effect AI will have on human rights and labor, as well as how to safely integrate AI and how to avoid bias in the technology.

In May last year, the European Union put in place the General Data Protection Regulation (GDPR) — a set of rules that gives EU citizens more control over how their data is used online. And while it won’t do anything to directly challenge bias in AI technology, it will force European organizations (or any organization with European customers) to be more transparent in their use of algorithms. This will put extra pressure on companies to ensure they’re confident in the origins of the AI they’re using.

And while the U.S. doesn’t yet have a similar set of regulations around data use and AI, in December 2017, New York’s city council and mayor passed a bill calling for more transparency in AI, prompted by reports the technology was causing racial bias in criminal sentencing.

Despite research groups and government bodies taking an interest in the potentially damaging role biased AI could play in society, the responsibility largely falls to the businesses creating the technology, and whether they’re prepared to tackle the problem at its core. Fortunately, some of the largest tech companies, including those that have been accused of overlooking the problem of AI bias in the past, are taking steps to tackle the problem.

Microsoft, for instance, is now hiring artists, philosophers and creative writers to train AI bots in the dos and don’ts of nuanced language, such as to not use inappropriate slang or inadvertently make racist or sexist remarks. IBM is attempting to mitigate bias in its AI machines by applying independent bias ratings to determine the fairness of its AI systems. And in June last year, Google CEO Sundar Pichai published a set of AI principles that aims to ensure the company’s work or research doesn’t create or reinforce bias in its algorithms.

Demographics working in AI

Tackling bias in AI does indeed require individuals, organizations and government bodies to take a serious look at the roots of the problem. But those roots are often the people creating the AI services in the first place. As I posited in “Tyrant in the Code” two years ago, any left-handed person who’s struggled with right-handed scissors, ledgers and can-openers will know that inventions often favor their creators. The same goes for AI systems.

New data from the Bureau of Labor Statistics shows that the professionals who write AI programs are still largely white males. And a study conducted last August by Wired and Element AI found that only 12% of leading machine learning researchers are women.

This isn’t a problem completely overlooked by the technology companies creating AI systems. Intel, for instance, is taking active steps in improving gender diversity in the company’s technical roles. Recent data indicates that women make up 24% of the technical roles at Intel — far higher than the industry average. And Google is funding AI4ALL, an AI summer camp aimed at the next generation of AI leaders, to expand its outreach to young women and minorities underrepresented in the technology sector.

However, the statistics show there is still a long way to go if AI is going to reach the levels of diversity required to stamp out bias in the technology. Despite the efforts of some companies and individuals, technology companies are still overwhelmingly white and male.

Solving the problem of bias in AI

Of course, improving diversity within the major AI companies would go a long way toward solving the problem of bias in the technology. Business leaders responsible for distributing the AI systems that impact society will need to offer public transparency so that bias can be monitored, incorporate ethical standards into the technology and have a better understanding of who the algorithm is supposed to be targeting.


But without regulations from government bodies, these types of solutions could come about too slowly, if at all. And while the European Union has put in place the GDPR, which indirectly tempers bias in AI through its transparency requirements, there are no strong signs that the U.S. will follow suit any time soon.

Government, with the help of private researchers and think tanks, is moving quickly in this direction, trying to grapple with how to regulate algorithms. Moreover, some companies, like Facebook, claim that regulation could be beneficial. Nevertheless, high regulatory requirements for user-generated content platforms could entrench companies like Facebook by making it nearly impossible for new startups entering the market to compete.

The question is, what is the ideal level of government intervention that won’t hinder innovation?

Entrepreneurs often claim that regulation is the enemy of innovation, and with such a potentially game-changing, relatively nascent technology, any roadblocks should be avoided at all costs. However, AI is a revolution that will continue whether it’s wanted or not. It will go on to change the lives of billions of people, and so it clearly needs to be heading in an ethical, unbiased direction.

Governments and business leaders alike have some serious questions to ponder, and not much time to do it. AI is a technology that’s developing fast, and it won’t wait for indecisiveness. If the innovation is allowed to go on unchecked, with few ethical guidelines and a non-diverse group of creators, the results may lead to a deepening of divisions in the U.S. and worldwide.


