Humans & Technology; a New Era

This nearly 500 paged book stuffed with information and concepts and history is not going to be easy to review comprehensively. This review will just highlight a few key ideas and topics that rise to the forefront of my mind after reading this quite packed volume. Nexus is a discussion on where power has stemmed from in history, of the link between power and information, and the AI age we find ourselves in and the existential crisis engendered. It is about human nature needing to take on board the very alien nature of AI.

Version 1.0.0

Harari is not taking a dystopian view of the development of AI. He acknowledges a lot of old jobs will disappear and new jobs will be created, and there will be pain in the transition. He notes that the higher skilled jobs, such as doctors doing diagnosis, may be taken over by AI, whereas the jobs nurses do, may not, because it is harder to program robots for motor skills than for mental skills. But the dangers Harari cautions against are nevertheless quite numerous, not least because humans still cannot quite imagine or comprehend the power AI will unleash, and have not taken steps to safeguard humans from this power.

The unforeseen consequences is a key theme in Nexus. One example Harari gives is of Facebook’s involvement in the violence against the Rohingyas. In 2016-17, a small Islamist organisation carried out a spate of attacks aimed at establishing a separatist Muslim state in Arakan/Rakhine, which the Myanmar army and Buddhist extremists responded to with a full-scale ethnic-cleaning campaign against the entire Rohingya community.

“The violence was fuelled by intense hatred towards all Rohingya. The hatred, in turn, was formented by anti-Rohingya propaganda, much of it spreading on Facebook, which was by 2016 the main source of news for millions and the most important platform for political mobilisation in Myanmar” (p196).

Harari’s argument is that unlike printing presses and radio sets of old which also were used to spread false propaganda and hate, Facebook is not merely a tool, and more like an editor or curator.

“In 2016-17, Facebook’s algorithms were making active and fateful decisions by themselves. Facebook was choosing to send out hate-filled posts because it had been tasked with maximising user engagement, and had discovered that humans are more likely to be engaged by outrage. So, in pursuit of user engagement, the algorithms made the fateful decision to spread outrage” (p199)

Harari does seem to hold Facebook at least in part responsible for the violence against the Rohingyas.

The point he makes is that in when its engineers tasked Facebook to maximise user engagement, they had not predicted the awful cost this potentially could have. Machines fulfil tasks in ways different from that humans would deploy. In 2016, Amodei was working on a project called Universe to get a general-purpose AI to play hundreds of computer games. The AI had competed well in car races, and so Amodei tried it in a boat race. The AI steered its boat into a harbour and sailed in endless circles. Again, the problem was the task Amodei identified for AI; Amodei could not just tell AI the goal was to win the race because ‘winning’ is not a clear concept to an algorithm. So Amodei translated winning the race into maximising the score, assuming this was good proxy for winning. But the boatrace had a feature which allowed the AI to find a loophole in the rules – because every time the boat docked and replenished power in a harbour, they were rewarded with a few points. AI discovered that swimming in circles in and out of the harbour accumulated points faster than outsailing other boats. AI may fulfil tasks with outcomes very different from that which the human instruction intended.

Harari works hard at conveying the alienness of AI, which of course, paradoxically, is hard for humans to communicate, convey, or comprehend, precisely because it is alien. But failing to imagine this alienness, might be our downfall. Harari gives the example of how AlphaGo, the AI, finally defeated the best human Go player. Go, as Harari explains, is more than a game. In east Asia, it is regarded as a treasured cultural tradition, and “for over twenty-five hundred years, tens of millions of people have played go, and entire schools of thought have developed around the game, espousing different strategies and philosophies” (p332). It is a far more complex game than chess. Indeed, in 1996, IBM’s Deep Blue already defeated Kasparov, then then World Chess Champion, but it took until 2016, for AlphaGo to defeat South Korean champion Lee Sedol. The win came from a critical move, move number 37, which spectators thought strange and a mistake. But the move was pivotal to victory.

“AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years” (p332)

“during all those millennia, human minds have explored only certain areas in the landscape of go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of humans minds, discovered and explored these previously hidden areas” (p332-3)

It is hard to mitigate against a threat that  humans cannot imagine precisely because we are humans!

As expected, Harari also flagged up the danger of widespread use of AI which inherently is not objective or unbiased, even if humans incautiously and erroneously deem it so. He mentions among others well known examples of flawed face recognition trained on a specific racial group, for example. He contends that computers have deep seated biases of their own despite lacking consciousness. They have ‘a digital psyche’ and ‘a kind of inter-computer mythology’ (p293). In 2016, Microsoft released AI chatbot Tay with free access to Twitter.

“Within hours, Tay began posting misogynist and antisemitic tweets […] The vitriol continued until horrified Microsoft engineers shut Tay down – a mere sixteen hours after its release” (p293).

“The Microsoft software engineers didn’t build into it any intentional prejudices. But a few hours of exposure to the toxic information swirling in Twitter turned the AI into a raging racist” (p295).

This issue of biased data sets on which AI algorithms are trained, is one key reason for AI biases, of course. Another issue is the goal given to the Algorithm to learn, since apparently algorithms need to be given goals, and humans setting goals (such as maximise user engagement, or score the most points, may not net the result humans are seeking). Since both goals and data sets contain human prejudices, AI learns and augments these biases. Apparently in 2014-18, Amazon tried to develop an algorithm for screening job applications, where the algorithm learns from previous successful and unsuccessful applications. The algorithm began to downgrade applications for containing the word ‘women’ or coming from graduates of women’s colleges,

“Since existing data showed in the past such applications had less chance of succeeding, the algorithm developed a bias against them. The algorithm thought it had simply discovered an objective truth about the world: Applicants who graduate from women’s colleges are less qualified” (p297).

Apparently Amazon could not fix the problem and scrapped the project.

“If we don’t get rid of bias at the very beginning, computers may well perpetrate and magnify it” (p297).

Harari spends a lot of the book musing on the impact on AI on political systems, such as democracy and totalitarian regimes. He contends that democracy depends on information being spread widely, fast enough, and dialogue between people. But the problem increasingly would be not only humans are joining in the public debates, non-human voices are too, on many social media platforms, where bots participate.

“One analysis estimated that out of a sample of 20 million tweets generated during the 2016 US election campaign, 3.8 million (almost 20 per cent0 were generated by bots” (p341)

By 2020, the bots were estimated to be producing 43.2% of tweets. Similarly a comprehensive 2022 study found that 5% of Twitter users were probably bots, but they generated between 20.8% and 29.2% of the posted Twitter content. Harari is deeply concerned that humans are not aware they are not always talking to humans, but to bots, and the damaging impact this could have on people, policy, decisions, and lives. In totalitarian regimes, Harari notes that the previous limitations to how far human surveillance can go, was informed by the fact humans do have to take breaks and sleep. Machines of course, do not, and totalitarianism can be taken to extents never before possible. In fact, Harari further warns that in the world of AI which takes no breaks, humans may be pushed unwisely beyond limits for an organic creature.

Harari is concerned that AI are not just our tools, they increasingly are agents and participants in our lives and networks and communities.

“AI are full-fledged members in our information networks, possessing their own agency. In coming years, all networks – from armies to religions – will gain millions of new AI members, which will process data differently than humans do. These new members will make alien decisions and generate alien ideas – that is, decisions and ideas that are unlikely to occur to humans. The addition of many alien agents is bound to change the shape of armies, religions, markets and nations” (p399).

Harari also says humans will form intimacies with AI; which cannot reciprocate emotions and intimacies, but can simulate (and thereby stimulate) the performance of these.

Right from the start, Harari already dispels the notion that flawed AI is due to incomplete datasets, and the more data and information AI learns, the more perfect it will be. Harari does not see more information making better AI because

“Information isn’t truth. Its main task is to connect rather than represent, and information networks throughout history have often privileged order over truth. Tax records, holy books, political manifestos and secret police files can be extremely efficient in creating powerful states and churches which hold a distorted view of the world and are prone to abuse their power. […] There is no reason to expect that AI would necessarily break the pattern and privilege truth. AI is not infallible” (p400)

Alongside the concern that more information is not the answer to better AI, Harari also points out that already we humans are hard pressed to hold AI to account for its decisions.

“In AI, the neural networks moving towards autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. […] GPT-4, AlphaGo, and the rest are black boxes, their outputs and decision based on opaque and impossibly intricate chains of minute signals” (p333)

which increasingly humans cannot or do not know how to challenge.

Nexus makes a good read, as most of Harari’s books are, even if one does not necessarily agree with all his takes. They are always interesting, they are always substantiated at least in part, and they are always well delivered, well argued, well presented. Time of course will tell how accurate his warnings and predictions are.

Discover more from Turning the Pages

Subscribe now to keep reading and get access to the full archive.

Continue reading