
Playing the shell game with technology: AI, regulation, responsibility, and values

Professor Ryan Calo, a legal scholar and co-founder of the influential Tech Policy Lab at the University of Washington, challenges the dominant narrative that artificial intelligence is too complex and fast-moving for effective regulation. He argues that there are models other than accepting technological change as inevitable and adjusting to the disruption.
In your new book, “Law and Technology: A Methodical Approach,” you say that it is common to think that the pace of technological change outstrips the law’s ability to regulate. You highlight the tension between law and AI, stressing that everyone talks about how fast and complex AI is, and how the law can’t keep up. On a global scale, especially with autonomous weapons, is international law keeping up?
Ryan Calo: The purpose of the book is to show how law can address technology as a social fact, but one with some characteristics that make it very difficult to analyse and regulate, such as the perception that it moves too fast or is too complicated.
Part of the book is attempting to resist that intuition. In his book The Social Control of Technology, David Collingridge argued that regulators, academics, and policymakers of all kinds are in what he called a “double bind” when it comes to technology. The first bind is information: it’s difficult to figure out precisely how a technology is going to play out in society in advance. For example, how AI will affect things, who will use it, how it will be used, or what its final form will be. People make all kinds of predictions, and they’re very often wrong. If we were to intervene without knowing, then we would make a mistake, or our regulations would become outdated.
The other part is about doing nothing and then letting technology run its course. By the time you regulate the technology, you have a second bind—the power bind, the idea that the technology has become path-dependent. People are invested in it, and companies have made a lot of money from it, which they then use to hire lobbyists and exert political pressure. So, it becomes entrenched, and there’s a difficulty of power, even when you know what you want to do at that point. Europe is encountering this right now. This is known as the Collingridge dilemma. What ends up happening is that, especially in the United States, people remember the first part of the dilemma—the “pacing problem,” when the technology’s pace is too fast for law to catch up—but forget the second part about the power bind.
The tendency is to act like technology is the only thing where you don’t have enough information, or it’s moving too fast. That is objectively wrong. In the United States, there was a period where we got so upset about people drinking alcohol that we amended our Constitution, which is hard to do. Less than 15 years later, people got so upset about not drinking that we amended the Constitution again. There are times when things happen super-fast in the US. When we entered World War II, all of a sudden, there were no men to do the jobs, so they changed all the laws so that women could join the workforce in a very short period of time. Then, the war ended, and all these soldiers came back. And they changed the laws again. We can act quickly if need be.
Professor Ryan Calo is an American expert in law, artificial intelligence, privacy, robotics, and misinformation. He is a co-founder of the University of Washington Tech Policy Lab and the Center for an Informed Public, which focuses on combating misinformation. Prof Calo has testified before the US Senate on AI and privacy policy, contributes regularly to major media outlets like The New York Times, Wired, Reuters, and NPR, and serves on advisory boards including the Electronic Frontier Foundation, the Future of Privacy Forum, and the World Bank’s privacy tribunal.
How do you see the situation with regulating AI? Is it on the verge of changing?
The people that are creating and selling artificial intelligence have managed to convince policymakers, especially in the US, that AI is too complicated and moving too fast, so they should just let it go. That’s why the book opens with a story about the Amish, who only adopt technology if it is consistent with their values. America has lost sight of the idea that you might choose to adopt a technology or not, or adopt a version of it that promotes human flourishing. Policymakers are succumbing, tricked by these people telling them that this is how AI has to be, so they should just wait and see. A kind of paralysis has arisen, which I do not think is inevitable.
My argument is that technology only looks like it’s too complicated and moving too fast, and that a lot of people have a humongous profit motivation to pretend that’s the case, as well as to scare Europe and the US by claiming that if we regulate, China is going to become the most powerful, an appeal that verges on the xenophobic.
A few years ago, AI was viewed purely as a tool, whereas nowadays it has become an environment.
Yes, that’s well said, and I absolutely agree. Once upon a time, it used to make sense to say that we do things with technology: we drive a car, we fly a plane, we hammer a nail, etc.
Increasingly, it’s fair to say that we do things through technology, that we’re mediated in an environment. It has such a great capacity to constrain what we see and what moves we’re able to make, our affordances in the environment. We’re debating AI and whether it should be regulated while on platforms that are managed by that very AI… We should not be at all surprised that the voices we hear say we can’t regulate.
What kind of effect do these conversations have on the balance of global power?
First of all, the role of AI in global power is dramatically overstated. What matters in global power is the ability to project violence, which AI helps marginally. The United States and China both have whatever AI they need. If America falls behind China and the rest of the world, it’s going to be because we’ve destroyed our diplomatic capacity, destroyed our goodwill and are wrecking our economy by overborrowing.
AI is a footnote to the kind of tectonic shifts that are occurring because the US is no longer capable of leading. It is wild to hear people in America say that we’d better make sure we don’t regulate AI so that we can stay global leaders, while we are dismantling our capability to be global leaders. We’re throwing it away with both hands in ways that everybody knows are consequential: by not investing in infrastructure and not educating people. These things are brutal for the balance of power. We’re going to look back at this period in disbelief.
It is also fascinating that every time the technology doesn’t work out like it’s supposed to, people just forget and gloss over it. There’s a great book called The Charisma Machine by Morgan Ames that talks about “One Laptop per Child”—an idea that somehow you could educate third-world children by giving them a laptop pre-loaded with lessons. People invested so much money and effort in these TED-talk-like videos, and it never worked. Ames traces how this charismatic technology charmed everybody, and yet it was a colossal failure.
Going back to the global balance of power, in Estonia, as we are very close to Russia, the question of cybersecurity and misinformation is very topical. We invest a lot of money in it, but does the US do the same?
There is a widespread understanding in the United States that cybersecurity is important and that we’re vulnerable. We’re vulnerable because nation-states and criminal syndicates are constantly trying to break into systems to steal intellectual property, and because in a conflict, our infrastructure is vulnerable to being shut down remotely. We’ve had incidents where water processing facilities or electrical grids have been disabled, seemingly to test the viability of attacking our infrastructure. There needs to be a public-private partnership: Microsoft needs to be talking to the intelligence sector and vice versa.
The US doesn’t have a very robust privacy and data protection regime. It’s one of the reasons why Europe is not allowing European data to flow freely to American companies. We don’t have a Data Protection Authority and have never been certified as ‘adequate’ by the European Union under the General Data Protection Regulation (GDPR). If you’re in the EU, you’re under the GDPR. Israel has been certified as adequate, so European data can be processed in Israel, to give an example.
We’ve created various programmes, like the Privacy Shield, to function as a safe harbour. If a company like Microsoft were to come in and obey all the requirements, it could get adequacy and be able to process data. That arrangement keeps getting challenged: the European Court of Justice says the relationship between the US government and American tech companies is too close and lacks safeguards to make sure that Google won’t hand over European data to the National Security Agency (NSA). Our intelligence sectors have a completely free hand when it comes to foreign data. If foreign data is in the US, there’s almost no restriction on what the NSA can do with it.
The thing I find interesting has to do with the evolving nature of cybersecurity in light of artificial intelligence. Historically in the US, hacking involved breaking into a computer system. The language of the Computer Fraud and Abuse Act dates back to the 1980s and addresses “unauthorised access.” Similarly, inadequate security means you failed to keep people out—you left a back door open or didn’t train employees not to click on phishing emails. All of that is changing because AI systems can not only be hacked but also tricked.
For example, you can trick facial recognition by wearing makeup. You can trick a driverless car into misperceiving a stop sign as a speed limit sign by adding little stickers to it. You can get a chatbot to say racist things. All of these tricks can cause AI to behave in an undesirable way without breaking into the system. As affordances change, we need to revisit our assumptions in law and policy. This is an instance where technology does change things, such as what we see as hacking and adequate security. American companies and our intelligence sector are grappling with adequate standards: for example, what is the equivalent of end-to-end encryption or social engineering training in a world where everything is AI?
You mentioned the public and private partnership. One trend we’re increasingly seeing is the erosion of clear boundaries between tools developed for military or intelligence purposes and their deployment in domestic settings, for example, in areas like surveillance and cyber defence. How well does US law distinguish between military and domestic use of AI technologies?
The most clear-cut example has to do with export controls. The US government has the ability to restrict exports of certain technologies to other countries if they implicate national security. The problem is that it used to be easy, like not sending guns and missiles to Iran. Now, we are talking about computing power or even software algorithms. There was an attempt a few years ago to impose special conditions on exporting machine learning, but it fell apart. There’s very little ability to keep the ideas about the technology close.
Another reason is that the knowledge about technology is often publicly available. Academia and businesses, even Chinese companies like Tencent, publish papers in peer-reviewed journals. The revolution in large language models (LLMs) originates in part from a paper by Google engineers called “Attention Is All You Need.” The power of AI comes primarily from how clever you are at applying it. It’s not like nuclear power, where you have to have a certain amount of enriched uranium and a lot of know-how that few people possess. This is different. A Chinese company slightly outperformed the most recent versions of OpenAI’s and Meta’s LLMs using a lot less computational power. It shows that there’s so much knowledge out there that if you have enough smart people with a little innovation, you can build on top of these huge models.
It’s a very different paradigm—a logical extension of the notion of dual use. There’s no real distinction anymore between technologies that can have a military or a civilian application. This is a reason why Amazon, Meta, and Microsoft are competing for the best AI engineers as if they were professional athletes, signing them with $100 million bonuses. To be at the cutting edge, they need the most innovative, knowledgeable people.
We can see with the China example that the incremental advantage is minimal. Wars are not going to be won or lost based on who has better AI, because everybody has access to it. We see this also with drones. Drones were originally developed in the US and integrated into our programmes. There’s a book from 20 years ago called Wired for War by Peter Singer about how the US military invested in automation. It gave us a temporary strategic advantage, but now every country uses drones. Drones are what’s being used in the Israel-Gaza conflict, by Iran against Israel, in Ukraine against Russia and vice versa. Even the cartels in Mexico are using drones.
There is a shorter cycle from when things are in the military context to when they become civilian. The minute something becomes civilian, it can be copied by everybody else militarily. Before, in a world dominated by kinetic power, what mattered was who had the fastest jets and the biggest bombs. Those things still matter, but there wasn’t an expectation that something would originate in the military, be a short-term advantage, and then immediately proliferate through society.
The other big difference, besides the shortened cycles, is the fact that the innovations are not coming initially from the military. The military is having to turn to the private sector for what is “cutting edge.”
What is the role of non-state actors, which are not necessarily subject to international law per se, in the use of AI tools?
International law binds states. Companies can be reached either because they’re from a particular country or by the laws under which they operate. There are a few organisations, such as the World Bank or the United Nations, which, because of their unique charters, sit apart. The World Bank is not subject to international law in the same way, and it claims that it is not subject to the jurisdiction of any nation. I’m the chair of the board that hears appeals over privacy violations by the World Bank, because the Bank has to have these internal structures for accountability, since it doesn’t have accountability outside of its own charter. So, when such truly multinational, non-state organisations use AI, they are still subject to some rules. But when Microsoft uses AI in Africa, it’s subject to whatever rules apply in Africa. If the US uses AI in warfare, it is subject to the laws of war as agreed upon by nations.
Who is going to be held accountable if an AI system makes a decision that results in a loss of human life? Is it an individual developer, the whole company, or the state?
It’s like a shell game: there are three shells, and you can never find the one hiding the ball. Technology turns responsibility into that kind of game. When you do things with technology, there’s somebody else—even sitting with us right now—in a sense: the people that designed the phone that’s recording us, the people that designed your laptop, etc. Their worldview and their choices can affect us.
A very good example is the tragic fatality where an Uber driverless car hit and killed a woman in Arizona. The woman was crossing outside of the crosswalk at night, and the car ran into her. There was a person hired by Uber to sit in the driver’s seat and pay attention. There was an investigation, which confirmed that the person who died was crossing where there was no crosswalk. The National Transportation Safety Board did an in-depth review of Uber’s engineering team: it found that they had problems with their safety culture and that they made some obvious mistakes that contributed to the car running into this woman. For example, they turned off certain safety features, like the collision detection that would otherwise be available in a Volvo car, and set the thresholds wrong for identifying something as an obstacle to be avoided. So, they just made errors while designing the system.
The family of the person who died settled, meaning that Uber didn’t face a lawsuit. But the person in the driver’s seat, hired by Uber to monitor the car, was charged with criminally negligent homicide. There were no ramifications for Uber except for bad press and having to pay money to the victim’s family.
Historically, it’s been very difficult for courts to ascribe responsibility in such cases. There was a paper by David Vladeck that shows that in case after case involving automation, the courts will look for a human to blame, and that human is generally not the one who designed the system.
This gets even harder in cases where the AI itself is learning. Imagine a smart, AI-enabled hybrid car that was designed to experiment with its internal state to maximise energy efficiency between the battery and the gas engine. Imagine the system figures out that it has a more efficient day overall if it starts in the morning with a full battery. One night, the owner puts the car in the garage but does not plug it in. So the car turns on the gas engine by itself to charge the battery, and carbon monoxide poisoning kills everybody in the house. The engineers can credibly convince a jury that not only did they not intend for that to happen, but they could not have imagined it could happen.
The law is very accustomed to finding someone to blame. But there may be times when nobody expected a thing to happen, where the engineers themselves truly did not understand that something was going to happen. That presents a hard problem because there’s no way for criminal law to find the mens rea, the intending mind that it would need to hold someone criminally liable. American law is very reluctant to assign fault to someone if they not only did not anticipate but also could not reasonably have anticipated that something would occur.
Given all these challenges, what is the way forward?
The most important thing to me is, especially in America, that we take a page from the Amish and say to ourselves that we don’t need to accept technology that’s not consistent with our values. We don’t have to accept technology in the exact same form in which it’s being sold to us. We can say we don’t want this, or that we want this but in a different way. Technology is not inevitable. It’s contingent. An important thing to understand is that society could be configured differently. That’s why I start and end the book talking about the Amish or Jews on the Sabbath, who have debates about what technology they can or cannot use. Or these kids in New York City, these Neo-Luddites, who decide to reject smartphones in favour of flip phones. I use those examples to say that what we have now, where we accept technological change as inevitable and then adjust to the disruption—that’s not the only model.
This article was written for ICDS Diplomaatia magazine. Views expressed in ICDS publications are those of the author(s).





