Adarga closes £5M Series A funding for its Palantir-like AI platform – gpgmail


AI startup Adarga has closed a £5 million Series A round led by Allectus Capital. The news rather obscures the fact that the company has been building up a head of steam since its founding in 2016, amassing what it says is a £30 million-plus sales pipeline through strategic collaborations with a number of global industrial partners and gradually assembling its management team.

The proceeds will be used to continue the expansion of Adarga’s data science and software engineering teams and roll out internationally.

Adarga, which comes from the word for an old Moorish shield, is a London and Bristol-based start-up. It uses AI to change the way financial institutions, intelligence agencies and defence companies tackle problems, helping crunch vast amounts of data to identify possible threats even before they occur. The start-up’s proposition sounds similar to that of Palantir, which is known for working with the US military.

What Adarga does is allow organizations to transform normally data-intensive human knowledge processes by analyzing vast volumes of data more quickly and accurately. Adarga clients can build up a 'Knowledge Graph' about subjects and targets.
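Adarga has not published its data model, but the general idea behind a knowledge graph is straightforward: entities become nodes, relationships become typed edges, and each edge can keep a pointer back to the source document it was extracted from. Here is a minimal sketch in Python; the entities, relations and field names are invented for illustration and are not Adarga's actual schema.

```python
import networkx as nx

# Hypothetical entities and relationships; not Adarga's actual data model.
graph = nx.MultiDiGraph()

# Nodes carry a type so analysts can filter by entity class.
graph.add_node("Acme Shipping Ltd", entity_type="organisation")
graph.add_node("J. Doe", entity_type="person")
graph.add_node("Port of Rotterdam", entity_type="location")

# Edges are typed relationships, each keeping a pointer to the source document.
graph.add_edge("J. Doe", "Acme Shipping Ltd",
               relation="director_of", source="company_filing_2018.pdf")
graph.add_edge("Acme Shipping Ltd", "Port of Rotterdam",
               relation="operates_at", source="news_article_0017")

# A simple query: everything directly linked to a person of interest.
for _, target, data in graph.out_edges("J. Doe", data=True):
    print(f"J. Doe --{data['relation']}--> {target} (source: {data['source']})")
```

A query over such a graph ("show every entity directly linked to this person of interest") is trivial once the extraction has been done, whereas answering the same question over raw documents is exactly the kind of labour-intensive knowledge work the company says it wants to replace.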

The UK government is a client, as is the finance sector, where the platform is used for financial analysis and by insurance companies. Founded in 2016, the company now has 26 employees, including data scientists from some of the UK's top universities.

The company has received support from Benevolent AI, one of the key players in the UK AI tech scene. Benevolent AI, which is worth $2bn after a $115m funding round, is a minority shareholder in Adarga. It has not provided financial backing, but rather support in kind and technical help.

Rob Bassett Cross, CEO of Adarga, commented: “With the completion of this round, Adarga is focused on consolidating its competitive position in the UK defence and security sector. We are positioning ourselves as the software platform of choice for organisations who cannot deal effectively with the scale and complexity of their enterprise data and are actively seeking an alternative to knowledge intensive human processes. Built by experienced sector specialists, the Company has rapidly progressed a real solution to address the challenges of an ever-growing volume of unstructured data.”

Bassett Cross is an interesting guy, to say the least. You won't find much about him on LinkedIn, but in previous interviews he has revealed that he is a former army officer and special operations expert who fought in Iraq and Afghanistan, and was awarded the Military Cross.

The company recently held the first of a planned annual event, the Adarga AI Symposium, at the Royal Institution in London, which featured futurist Mark Stevenson, Ranju Das of Amazon Web Services, and General Stanley A. McChrystal.

Matthew Gould, Head of Emerging Technology at Allectus Capital, said: “Adarga has developed a world-class analytics platform to support real-time critical decisioning by public sector and defence stakeholders. What Rob and the team have built in a short time is a hugely exciting example of the founder-led, disruptive businesses that we like to partner with – especially in an ever-increasing global threat landscape.”

Allectus Capital is based in Sydney, Australia, and invests across Asia-Pacific, the UK and the US. It has previously invested in Cluey Learning (Series A, A$20M), Everproof, Switch Automation and Automio.



The risks of amoral A.I. – gpgmail


Artificial intelligence is now being used to make decisions about lives, livelihoods, and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren't actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is a blunt but instructive example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.
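Public reporting on the crash does not disclose Uber's actual parameters, but the shape of the tradeoff is easy to illustrate: a tuned threshold decides whether a detection triggers emergency braking, and raising that threshold to avoid jerky false stops also delays or suppresses genuine ones. The sketch below is purely hypothetical, with invented numbers and none of Uber's real control logic.

```python
from dataclasses import dataclass

# Illustrative only: invented numbers and logic, not Uber's actual control system.

@dataclass
class Detection:
    obstacle_confidence: float    # detector's belief that the obstacle is real (0..1)
    time_to_collision_s: float    # estimated seconds until impact

def should_emergency_brake(d: Detection, confidence_threshold: float) -> bool:
    """Brake only when the detector is sure enough and impact is imminent."""
    return d.obstacle_confidence >= confidence_threshold and d.time_to_collision_s < 2.0

pedestrian = Detection(obstacle_confidence=0.7, time_to_collision_s=1.3)

# A safety-first tuning brakes on this detection...
print(should_emergency_brake(pedestrian, confidence_threshold=0.5))  # True
# ...while a comfort-first tuning, chosen to avoid jerky false stops, does not.
print(should_emergency_brake(pedestrian, confidence_threshold=0.9))  # False
```

Nothing in this sketch is wrong in a narrow engineering sense; the ethical content is hidden entirely in the value chosen for the threshold.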

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate the downsides so that we get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or not) the decisions it makes. You can’t ask technology why it decided something or if it considered other alternatives or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used for training the system plus its weighting schemes, model selection, and other choices vendors make while developing the software are deemed trade secrets and therefore not available for discussion.

Image via Getty Images / sorbetto

The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness”. Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Data-driven risk assessment tools can be useful, especially for identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and their communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the Eye of the Beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges, for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast where the conversation wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans—seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point of view. One of the papers, for example, formalizes some basic criteria for determining whether a decision is fair.

In their formalization, differing ideas about what it means to be fair are, in most situations, not just different but actually incompatible. A single objective solution that can be called fair simply doesn't exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful discussion of the issues involved.
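The incompatibility is easy to see with a toy example. The numbers below are invented for illustration (they are not taken from the papers): when two groups have different base rates, a classifier that satisfies one widely used fairness criterion necessarily violates others.

```python
# Toy confusion matrices with invented numbers, purely for illustration.

def rates(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "positive prediction rate": (tp + fp) / total,  # demographic parity compares this
        "true positive rate": tp / (tp + fn),           # equalized odds compares this...
        "false positive rate": fp / (fp + tn),          # ...and this
        "precision": tp / (tp + fp),                    # predictive parity compares this
    }

# Group A has a 50% base rate of actual positives; group B has 20%.
group_a = rates(tp=40, fp=10, fn=10, tn=40)
group_b = rates(tp=18, fp=32, fn=2, tn=48)

for metric in group_a:
    print(f"{metric:>26}:  A = {group_a[metric]:.2f}   B = {group_b[metric]:.2f}")
```

Both groups are flagged at the same rate, so demographic parity holds, yet the false positive rates and precision diverge sharply. Whenever base rates differ and the classifier is imperfect, some conflict of this kind is unavoidable, which is exactly the sort of result these papers formalize.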

Image courtesy of gpgmail/Bryce Durbin

When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, lending, and job and college opportunities, for example, has not been settled and unfortunately contains political elements. We're being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don't know what it is.

Technologists with their heads down focused on algorithms are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems gives them a veneer of science that purports to dole out apolitical solutions to difficult questions.

Who Will Watch the (AI) Watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI-driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.


Image via Getty Images / nadia_bormotova

Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent the market forces, unless companies are compelled to be transparent about the development and their use of opaque technology in domains where fairness matters, it’s not going to happen.
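The mechanism behind cases like Amazon's is generic: even when the protected attribute is excluded from the inputs, a model can reconstruct it from correlated proxy features and reproduce whatever bias is baked into the historical labels. The sketch below uses invented synthetic data and a deliberately simple setup; it is not Amazon's system.

```python
# Synthetic illustration of proxy discrimination; invented data, not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)              # 1 = female; never shown to the model
skill = rng.normal(0, 1, n)                 # genuinely job-relevant signal
proxy = gender + rng.normal(0, 0.3, n)      # e.g. a resume keyword correlated with gender

# Historical labels encode past bias: equally skilled women were hired less often.
hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n)) > 0

# Train only on "legitimate" resume features; gender itself is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", round(scores[gender == 0].mean(), 3))
print("mean score, women:", round(scores[gender == 1].mean(), 3))
# The gap persists: the model recovers gender through the proxy feature.
```

The gap in scores only becomes visible when someone examines outcomes broken down by the very attribute the model supposedly never saw, which is why third-party testing and transparency matter.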

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and tradeoffs.

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.



Why AI needs more social workers, with Columbia University’s Desmond Patton – gpgmail


Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.

In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at gpgmail). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.

A less expected and perhaps more profound finding, however, has been what the introspective founder Prayag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.

In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.

So when Harvard's Berkman Klein Center for Internet and Society recently posted an article whose headline began "Why AI Needs Social Workers…", it caught my eye.

The article, it turns out, was written by Columbia University professor Desmond Patton. Patton is a public interest technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence. He is the founding director of Columbia's SAFElab and an associate professor of Social Work, Sociology and Data Science.


Desmond Patton. Image via Desmond Patton / Stern Strategy Group

A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.

It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.

Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.

Greg Epstein: How did you end up working in both social work and tech?

Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.

[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.



Zindi rallies Africa’s data scientists to crowd-solve local problems – gpgmail


Zindi is convening Africa’s data scientists to create AI solutions for complex problems.

Founded in 2018, the Cape Town-based startup allows companies, NGOs or government institutions to host online competitions around data-oriented challenges.

Zindi's platform also coordinates a group of more than 4,000 data scientists based in Africa who can enroll to join a competition, submit their solution sets, move up a leaderboard and win the challenge — for a cash prize payout.

The highest purse so far has been $12,000, split across the top three data scientists in a competition, according to Zindi co-founder Celina Lee. Competition hosts receive the results, which they can use to create new products or integrate into their existing systems and platforms.

Zindi’s model has gained the attention of some big corporate names in and outside of Africa. Digital infrastructure company Liquid Telecom has hosted competitions.

This week, the startup announced a partnership with Microsoft to use cloud-based computing service Azure to power Zindi’s platform.

Microsoft will also host (and put up the prize money for) two competitions to find solutions in African agtech. In a challenge put forward by Ugandan IoT accelerator Wazihub, an open call is out for Zindi's data scientist network to build a machine learning model to predict humidity.

In a $10,000 challenge for Cape Town-based startup FarmPin, Zindi's leaderboard is tracking the best solutions for classifying fields by crop type in South Africa using satellite imagery and mobile phones.

 

There’s demand in Africa to rally data scientists to solve problems across the continent’s public and private sectors, according to Zindi CEO Celina Lee.

“African companies, startups, organizations and governments are in this phase right now of digitization and tech where they are generating huge amounts of data. There’s interest in leveraging things like machine learning and AI to capitalize on the asset of that data,” she told gpgmail.

She also noted that “80% of Zindi’s competitions have some sort of social impact angle.”

Lee also sees a skills gap and a skills-building role for Zindi as a platform. "Data science skills are relatively scarce still… and companies are looking for ways to access data science and AI solutions and talent," she said.

“Then there’s this pool of young Africans coming out of universities working in data…looking for opportunities to build their professional profiles, hone their skills and connect to opportunities.”

Lee (who's originally from San Francisco) co-founded Zindi with South African Megan Yates and Ghanaian Ekow Duker; the three lead a team of six in the company's Cape Town office. The startup hopes to get 10,000 data scientists across Africa on its platform by the end of this year and 20,000 by next year, according to Lee.

Image: The Zindi team in Cape Town

“The idea is to just keep growing and growing our presence in every country in Africa,” Lee said. Zindi could add some physical presence in additional African countries by the end of this year, Lee added, noting Zindi currently hosts data scientists and competitions online and on the cloud from any country in Africa.

Zindi received its first funds from an undisclosed strategic investor and is in the process of raising a round. The startup, which does not disclose revenues, generates income by taking a fee from hosting competitions.

Zindi is also looking to add a recruitment service to connect data scientists to broader opportunities as a future source of revenue, according to Lee.

As a startup, Zindi's emerging model could see it enter several existing domains in African business and tech. When Zindi adds recruitment, it could offer a service similar to that of talent accelerator Andela, connecting skilled African techies to jobs at established firms.

Lee acknowledges as much, but draws a distinction between data scientists and Andela's developer focus. "We're honing more in on statistical modeling, AI, machine learning and predictive analytics," she said. "I also think the developer market in Africa is much more mature and a lot of developers want to move into data science."

In addition to competing on tech recruitment, Zindi could become a cheaper and faster alternative for African companies and governments than contracting big consulting firms such as Accenture, IBM or Bain.

Zindi’s co-founder Lee confirmed the startup has received inbound partnership interest from some established consulting firms — which indicates they’ve taken note of the startup.

“I think we are a bit disruptive because we’re offering companies in Africa the best data scientists in the continent at their fingertips,” she said.

Lee highlighted a couple of distinctions between Zindi and data-driven consulting firms: affordability and potential scale.

The startup could also provide data science solutions to many African organizations that don’t have the resources to pay big consulting firms — meaning Zindi could be on to a much larger addressable market.

