Legend tells of King Canute, the 11th-century king of England, who planted his throne at the edge of the sea and commanded the tide to recede. Contrary to popular belief, he did so not because he imagined he held supernatural power, but to demonstrate to his courtiers the limits of that power.
Many see efforts to contain the explosion of artificial intelligence as a similarly Canute-like exercise in futility. But just as the king actually set out to make a point, so does an emerging movement of experts and activists seek to shed light on the potential abuses AI can bring. In the process, that awareness may lead to better long-term AI outcomes for business and society.
We just saw this in the open letter, crafted by the Future of Life Institute, demanding a six-month “pause” in “giant AI experiments.” The letter was endorsed by luminaries across the technology world, including Apple co-founder Steve Wozniak, Elon Musk, Turing Award winner Yoshua Bengio, Stuart Russell, professor at the University of California, Berkeley, and Pinterest co-founder Evan Sharp.
Advanced AI “planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter states.
The Future of Life letter is not the only pushback against the advance of AI into our lives and livelihoods. At the end of 2021, Timnit Gebru, a well-known AI scholar who was fired by Google in December 2020 after drawing attention to big tech’s dominance and manipulation of AI, founded the Distributed AI Research Institute (DAIR) and began leading what has been called the “slow AI” movement, as described in an article published last year in IEEE Spectrum by Eliza Strickland.
“AI needs to be brought back down to earth,” she said in a press release at the time of the founding. “It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.”
Developing AI for good is one thing; asking for a pause in development is another. Industry analysts are skeptical that this tide can be paused even for a moment, and question whether it is even worth trying. “There was a similar outcry when IoT initially became popular about personal data collection as well,” says Andy Thurai, principal analyst with Constellation Research. “But wearing Fitbit and sharing the data across seem to be very common and accepted practice now. It shouldn’t be about stopping a certain technology or company unless they really went rogue.”
The letter “is an absurd proposal, even on the face of it,” agrees James Kobielus, senior research director for data management at TDWI. “It seems to operate on the assumption that these supposed guardrails will be developed, widely adopted, and field-tested after six months. It also seems to assume that more powerful models are the problem, not the solution. And it assumes that everybody’s just going to stop breakneck competition in innovating the tech just because it has vulnerabilities. That latter concern has never stopped any tech innovation anywhere ever.”
If AI development “is stopped or we impose a moratorium for a certain timeframe, then what?” Thurai asks. “Can the work be done after six months? What would have changed then? Would the bad actor nations and competing countries stop development too, or is it only applicable to the USA? If so, why are we shooting ourselves in the foot? If it is worldwide, how are we going to impose it?”
In other words, the show must go on. “Move the focus to work on proper governance, security, oversight, and guardrails within which these systems are allowed to work,” Thurai urges. “Define what is acceptable and what is not. Don’t kill innovation.”
Certainly, a pause in any kind of technology development, especially one with big money tied to it, is unlikely to happen. What such efforts do accomplish, however, is to raise awareness of the dangers of AI run amok. Anti-smoking campaigns from the 1960s onward seemed to yield little at first, as people kept smoking and dying, yet we eventually arrived at a world where the health risks are widely heeded and smoking is banned in many public places. Awareness of AI’s risks may likewise take time to percolate through business and society.