Silicon Valley has Broken Our Trust – Can We Now Trust Them With Killer AI Robots?


Leading scientists across the globe believe that the world has reached a critical inflection point in the development of artificial intelligence-powered lethal autonomous weapons. Paul Hannon, the Canadian co-founder of the global Campaign to Stop Killer Robots, has argued on this blog that the global community needs an open dialogue on the “ethical, moral, technical, legal, military and cultural concerns” surrounding this frightful development.

However, the ongoing public brouhaha triggered by the Facebook/Cambridge Analytica privacy breach, which has shaken public confidence in Silicon Valley’s tech giants, has muddled the role of important stakeholders in this dialogue. It is also provoking a larger conversation about the traditional military-private sector technology development ecosystem, which underpins the NATO-led western security order, and is responsible for much of our economic prosperity.

Two such flashpoints surfaced in recent weeks.

First, KAIST University faced calls for a boycott by prominent global AI academics over concerns about its partnership with a South Korean defense company involved in the development of autonomous weapons systems.

In the same week, the New York Times published an internal Google memo in which employees petitioned management to end a controversial project with the Pentagon that aimed to use AI to interpret video imagery, with the goal of improving the accuracy of drone strikes.

The South Korean school, which won the prestigious Defense Advanced Research Projects Agency (DARPA) robotics challenge in 2015, and Google, which has long worked with American military and intelligence agencies, are exposing fissures in the stakeholder environment that will be critical to setting global ethical standards for lethal autonomous weapons.

Trust in Tech Companies Eroding

In an era of unprecedented technological disruption, public trust in civic institutions continues to erode.

The 2018 edition of the Trust Barometer, a global measure of citizen trust in business, government, NGOs and media that Edelman has conducted annually for nearly two decades, has for the second consecutive year found deep-rooted distrust among citizens toward the mainstream institutions that make up the core of the post-WWII western security order. Moreover, the survey reveals that “technology-powered social networks and search are being questioned for the first time at scale.”

This waning trust in Silicon Valley is occurring precisely at the moment that western governments and tech giants will face increasing scrutiny as global discussions around lethal autonomous weapons ramp up.

Western Tech Development Ecosystem Has Always Been Military-Driven

As Mariana Mazzucato detailed in The Entrepreneurial State, many of the most important technology innovations of the past half-century – including the internet, GPS, and even the underpinnings of the voice-assistant technologies that have recently become ubiquitous – owe their germination to military backing, mostly via the US government. Even Google Earth originated from a CIA-sponsored company called Keyhole Inc.

In his new book The Darkening Web: The War for Cyberspace, Alexander Klimburg argues that Silicon Valley’s big tech darlings have always been considered an essential US national security asset, citing among other things the development of modern cloud technology, which was largely funded by In-Q-Tel, the CIA’s venture capital arm.

Indeed, western economies and citizens have benefitted greatly from this relationship, in which “almost every ingredient of the internet age came from government-funded scientists or research labs.”

In 2016, the Pentagon formally created a Defense Innovation Board, with the explicit purpose of bringing the technological innovation of Silicon Valley to the US military. Eric Schmidt, the recently departed Executive Chairman of Alphabet, currently chairs it, and has publicly suggested AI might be used for offensive military purposes.

It should therefore come as no surprise that the western AI development ecosystem will also be driven in part by military imperatives – and that more ethical flashpoints like those recently faced by KAIST University and Google will arise. However, western security rivals already present a much more vexing challenge.

Western Rivals Command a Closed AI Ecosystem with No Scrutiny

In China, AI research and development is government-directed, through both the military and quasi-state-controlled tech giants such as Tencent and Baidu. This closed ecosystem is already being oriented towards what historian Yuval Noah Harari calls the rise of digital dictatorship, in an era in which technology has quickly outpaced our institutions’ understanding of how to regulate data ownership – good data being a key driver of AI development.

China has already weaponized AI against its own citizens, mining internet data to help build a Black Mirror-like surveillance state. Without international standards, there is little doubt that China will skirt, if not outright ignore, ethical concerns around lethal autonomous weapons as it looks to turn that capability outwards against security rivals.

What Can Be Done About Lethal Autonomous Weapons?

Western countries continue to grapple with ruptured trust in the tech companies most critical to our AI ecosystems. On the home front, Canada’s leading AI experts have already written to the federal government urging it to back a ban on killer robots, and Prime Minister Trudeau is facing backlash over perceptions that Canada has been too lenient on citizen privacy in its economic courtship of major US tech companies. Tech entrepreneur Andrew Yang has made headlines with a quixotic early campaign for the US presidency in 2020, warning of the “civilization-wide threat of AI run amok”.

But as TechCrunch has starkly explained, if we wish to advance AI to make positive contributions to society, then it is inevitable that off-the-shelf AI will be layered into weapons systems by bad actors. This mirrors the way nuclear technology can be used either to “power our societies with nuclear reactors, or… in a bomb to kill hundreds of thousands”.

Given this catch-22, it is all the more critical that Silicon Valley’s tech giants work to restore the trust lost through recent breaches of citizen data.

National governments can help – first, by working with companies like Facebook on responsible regulation, rather than reacting to the Cambridge Analytica breach by scoring political points off Mark Zuckerberg.

Second, by bringing all stakeholders together to facilitate an open dialogue, with citizen input, on ethical questions and international standards surrounding lethal autonomous weapons, and how to build a framework for oversight in the use of AI. One such body working to bring stakeholders together for this express purpose is the AI Now Institute.

The symbiotic relationship between western militaries and the private sector has always relied upon this type of close working relationship and open dialogue. This relationship has produced countless consumer tech innovations and life-improving products. But the stakes today have never been higher, and the existential threat of killer robots presents the greatest challenge yet to this special historical arrangement.


Author

Matthew Lombardi
Matthew Lombardi is a CIC Senior Fellow. He works at the Future of Canada Centre, Deloitte’s thought leadership unit, which performs original research and publishes reports, articles, and papers that provide insights for businesses, governments, and academia. He is also a contributor to the firm’s Consulting Innovation market signals publication. His professional background includes half a decade in consulting, providing trusted advice to clients in public policy and strategy.
