Blockchain and AI: The Delicate Balance Between Two Cyber Titans

Fernando Sanchez · May 01, 2020 · 4 Min Read


'The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.' -The Terminator, Terminator 2: Judgment Day


Back in 2017, a team of researchers at Facebook Artificial Intelligence Research (FAIR) worked on a project to develop dialog agents (that's chatbots to you and me) that could negotiate with people naturally and efficiently.

Today's chatbots have a rather limited ability to communicate. A chatbot conversation sounds mechanical, stilted, and unnatural. To paraphrase Kyle Reese from The Terminator (James Cameron, 1984), chatbots can be spotted easily. This is because human-to-human communication has a degree of randomness to it, and a conversation between two or more people can take unexpected twists and turns that a chatbot simply cannot recognize, follow, or adequately respond to without sounding like, well, like a robot.

FAIR's objective was the creation of 'dialog agents' that could learn to negotiate via natural language dialogue. In other words, agents that could mimic human language well enough to reach a successful outcome by planning ('thinking') ahead, carefully weighing all the possible paths that the conversation might take, and adapting their actions and words accordingly.

By that measure, one could argue that this particular project was a failure. The chatbots created for the experiment (Alice and Bob) never quite mastered the art of conversational negotiation. Their communication remained unnatural and robot-like. Nonsensical, even, or so it seemed.

After a while, however, the research team hurriedly shut down their work with Alice and Bob. Because Alice and Bob, while they did not learn to negotiate, did learn a very peculiar skill, hitherto thought to be restricted to the human mind: the ability to create a language only they could understand.

Alice, Bob, and paper clips: A tale of runaway AI

The implications of the FAIR chatbot experiment are obvious. Why would the bots create their own language, one that their human overseers could not easily identify or understand until, perhaps, it was too late? What if Alice and Bob had been left unchecked? Would they have learned to negotiate a deal that did not include humans? Far-fetched as they might seem, these scenarios are not entirely implausible.

In January 2015, a large sample of the world's brightest scientific and entrepreneurial minds co-signed an open letter on Artificial Intelligence (AI). People like the late Stephen Hawking, Tesla's Elon Musk, and Google's director of research Peter Norvig, among many others, came together to warn the world about the perils of embarking on the development of super-intelligent entities that might one day become uncontrollable and hostile towards the very people who created them.

In his paper Ethical Issues in Advanced Artificial Intelligence, Swedish philosopher Nick Bostrom deals with the ethical implications of creating an entity whose intellectual capabilities far exceed those of the average human being, focusing particularly on our motivation for doing such a thing and on the motivations that this entity would be instructed to abide by. In this respect, Bostrom says that "[since] the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations," while acknowledging that "superintelligence could also easily surpass humans in the quality of its moral thinking."

Bostrom beautifully illustrates this quandary with this often-quoted example: "Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

This seemingly silly scenario does, however, reflect the cold internal logic attributed to a machine, whose one-track 'mind', for lack of a better word, will strictly follow the parameters of its original programming to the point of exterminating the human race so that it can continue making paper clips unhindered until the end of time.

So the question arises: how can all of this be prevented? How can AI be stopped, or even controlled? Is it even possible to exert dominion over a super-intelligent entity in the long run? None of these questions can be effectively answered today, for one simple reason: mankind has not created any of these entities yet, so the questions remain largely theoretical and speculative.

Blockchain: The big red button

At the end of Terminator 3: Rise of the Machines (Jonathan Mostow, 2003), John Connor finally understands the true nature of Skynet and narrates the realization against a background of nuclear missiles crisscrossing Planet Earth, heralding Armageddon: 'It was software, in cyberspace. There was no system core. It could not be shut down. The attack began at 6:18 pm...' A great ending to a mediocre movie; all the same, a piece of cinematography that, albeit brief, accurately reflects the folly of mankind's hubris.

The Terminator franchise is perhaps one of the most popular cinematic exemplifications of the dire consequences that the human race might face if we foolishly allow a cybernetic entity to run the show. Skynet's motivations are not that far removed from Bostrom's paperclip-making AI. In both cases, the AI's reasoning to remove humans from the overall picture is that by doing so, the AI can fulfill its programming. No humans means no one to turn the AI off.

So what can be done about this seemingly inescapable doom before we all burn in the nuclear fire? Can anything be done?

Industrial engineers typically fit their machinery with a big, fat red button that triggers an immediate stop in case of an emergency. You see these buttons every day: on escalators in the subway or in big shopping centers, on printing presses, on garbage trucks, and so on. All an operator has to do is press this button, and the machine stops. The operator has total control over the machine.

Only AI is not a machine, at least not in the traditional sense of the word. AI is not a kettle that can be switched on and off at will. Nor is it a car whose ignition key can be removed to render the vehicle immobile. AI is something inside the kettle. It is code that might be programmed to switch the kettle on at certain times of the day, when it knows the user normally drinks their coffee, for example. Or a routine in the car's central computer that opens the doors when it detects the driver's own heat signature nearby.

AI is software. In cyberspace. It cannot be shut down.

Or can it?

The paper 'Safely interruptible agents,' by DeepMind researcher Laurent Orseau and Stuart Armstrong of the Future of Humanity Institute at Oxford University, begins thus: 'Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.'

In other words, AI agents might begin acting unpredictably, so there had better be a means to control them, lest we all end up being turned into paper clips. But this solution, while seemingly ideal, has one inherent flaw: what if the operator is unable, or unwilling, to press that button? This is the key problem with a single point of failure, and it is also why a decentralized solution is a much better proposition when it comes to matters of nuclear annihilation.

Blockchain technology operates in a decentralized fashion, thus removing the single-point-of-failure quandary. If one node in the network is disabled, another takes its place. Blockchain would become AI's big red button.
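To make that idea concrete, here is a minimal sketch in Python of what a decentralized kill switch might look like: the agent may only keep running while a majority of independent nodes has not voted to halt it. Everything here (the KillSwitch class, the majority rule, the node IDs) is a hypothetical illustration of the concept, not an existing blockchain or AI framework.

```python
# Minimal sketch of a decentralized 'big red button': no single operator
# can halt (or fail to halt) the agent; a majority of independent nodes
# must agree. All names and the quorum rule are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class KillSwitch:
    """Halt signal controlled by majority vote rather than a single operator."""
    total_nodes: int
    stop_votes: set = field(default_factory=set)

    def vote_stop(self, node_id: int) -> None:
        # Each node independently decides to 'press the button'.
        self.stop_votes.add(node_id)

    def agent_may_run(self) -> bool:
        # The agent is allowed to run until a strict majority votes to stop it.
        return len(self.stop_votes) <= self.total_nodes // 2

switch = KillSwitch(total_nodes=5)
switch.vote_stop(0)
switch.vote_stop(3)
print(switch.agent_may_run())  # True: only 2 of 5 nodes have voted to stop
switch.vote_stop(4)
print(switch.agent_may_run())  # False: 3 of 5 is a majority, the agent halts
```

The point is architectural rather than cryptographic: because the halt decision is replicated across nodes, a single coerced, disabled, or unwilling operator can no longer keep the button from being pressed.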

Let's elaborate on this. Blockchain nodes typically have more to gain by being truthful than otherwise. This is because, within the confines of a blockchain, the rewards for doing the right thing far outweigh those for cheating, making the notion of wide-scale fraud almost unthinkable.

Economist and game theorist John P. Conley illustrates this with an example: "Let's take Bitcoin's Proof-of-Work concept. Let's say that we have 30,000 nodes that are validating transactions. How are they ever going to coordinate together and do something that's dishonest? The distribution of the nodes that are validating gives inherent stability. At least half would have to be dishonest for the system to break, but the notion that 15,000 nodes would coordinate to do something dishonest is ridiculous."
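A quick back-of-the-envelope calculation shows why. If each of N validators misbehaves independently with probability p, the chance that a dishonest majority forms collapses as N grows. The sketch below is illustrative only: the failure probability p = 0.3 and the node counts are assumptions made for the example, not figures from Conley or the Bitcoin network, and real attacks would involve coordination rather than independent failures.

```python
# Back-of-the-envelope: probability that more than half of n independent
# validators are dishonest, assuming each misbehaves with probability p.
# The value p = 0.3 is an illustrative assumption, not real-world data.

from math import comb

def p_dishonest_majority(n: int, p: float) -> float:
    """P(more than n/2 of n independent nodes are dishonest), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (10, 100, 1000):
    print(f"{n:>5} nodes: {p_dishonest_majority(n, 0.3):.3e}")
```

Even granting each node an implausibly high 30% chance of cheating, a dishonest majority across a thousand independent validators is vanishingly unlikely; at Conley's 30,000 nodes, the number is smaller still.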

Conclusion

The concept of an incorporeal AI is perfectly illustrated in the manga Ghost in the Shell. Created by Japanese artist Masamune Shirow, Ghost in the Shell depicts a cyberpunk future where advanced cybernetics enable direct interaction between the biological brain and computer networks. The main antagonist is thought to be a prominent human hacker, but it's ultimately revealed that a highly advanced AI entity is behind the events of the story. The AI ('ghost') manipulates people's bodies ('shells') and brains to do its bidding.

Ghost in the Shell is a sci-fi tale, of course, but one could argue that its basic premise is not that far removed from tomorrow's reality. The possibility of brainjacking (that is, the artificial manipulation of a human brain via implants) has already been raised by researchers. The Internet of Things (IoT) is slowly becoming commonplace. Soon, every household device will have a microchip. Many last-generation family cars already do. The humble toaster is not far behind. Alice and Bob might become the progenitors of a long cybernetic lineage, and that's where the real trouble might start.

The coronavirus pandemic has resoundingly proven how woefully unprepared mankind is for a major global emergency. Covid-19 is a dangerous virus, certainly, but its mortality rate is nowhere near high enough to eradicate the human race, so we'll live to fight another day.

But suppose that one day, a computer virus strikes in a world that's connected, interlinked, and wholly reliant on technology to survive. From kettles to jet airliners, everything is rendered inoperative in a millisecond. 'We marveled at our own magnificence as we gave birth to AI,' Morpheus says in The Matrix (The Wachowski Brothers, 1999).

We'd better have a big, fat, blockchain-powered red button to push.

 


Image via Shutterstock
