This month, evidence of the potential of AI and robotics for social benefit continued to lag portentous developments. On the one hand, the prospects for improving healthcare systems continue to grow. Google, for example, plans a health record tracking system loosely based on the blockchain concept behind bitcoin and using its DeepMind AI. It aims for real-time tracking of data by hospitals, health organisations and patients alike. Beneficiaries will have better treatment prospects. Lives will be saved. On the other hand, a Microsoft researcher warned openly this month that AI, even in its current state, is ripe for abuse by aspiring despots: perfectly suited to the centralising of power, the tracking of populations down to the last individual and the demonising of outsiders, all while radiating authority via a faux neutrality. “This is a fascist’s dream,” said Microsoft’s Kate Crawford, pulling no punches. “Power without accountability.”
All this before quantum computers have even arrived on the scene – which, Google now says, they will within five years. These machines will be significantly faster and more powerful than current computers. Ordinary mortals outside the campuses of Google, IBM and the like can scarcely imagine what the algorithms running on them will make possible. “Artificial intelligence runs wild while humans dither”, read a headline in the Financial Times this month. It was a major understatement.
With the integration of AI and robotics, the threats to social coherence compound. Google-owned robotics firm Boston Dynamics unveiled a hybrid robot easily capable of inducing nightmares. Though currently designed only for manual tasks, it resembles a Terminator riding a hoverboard. This in a world where robots can already be programmed, quite literally, to read the minds of the humans they interact with, provided the latter wear electrodes on their heads. Thus connected, the robot can correct simple mistakes in manipulating objects by translating electrical patterns from the human brain into code.
Warnings are proliferating that intelligent virtual helpers will take away human jobs in the near term, especially in customer-facing roles in banks and call centres. Large-scale deployment of such machines would quickly deepen the inequality gap, fuelling the very social divisiveness on which the new despotism feeds.
It is not as though practitioners of AI and robotics are blind to the dangers. This month, 40 experts convened at Arizona State University for a workshop to plot Doomsday scenarios and ways to counter them. Tesla’s Elon Musk and Skype’s Jaan Tallinn funded the exercise. Bloomberg’s account of the meeting suggested that the experts were rather better at dreaming up the Doomsday scenarios than they were at the countermeasures. Other initiatives include the creation of AI Now, an online research community studying the social impacts of AI, and the idea of a tax on robots to help finance social adjustments, supported by Bill Gates among others.
Speaking of the Microsoft founder, much will clearly depend in this unfolding drama on the character and actions of the tech billionaires whose companies and technologies sit at its heart. On recent evidence, they will be increasingly unaccountable. This month Snap Inc, the parent of Snapchat, went public in one of the most successful IPOs ever. Its shares soared, valuing the company at $28bn. And, incredibly, Snapchat founder Evan Spiegel successfully persuaded a critical mass of shareholders to invest without being given any voting rights at all. This lack of governance and accountability – and investors’ willingness to tolerate it – sets a dangerous precedent in capitalism. If Snap rides its IPO cash proceeds to rival Facebook, Google and the others in scale, the world had better hope that 26-year-old Spiegel is a man with a heart and a conscience.
The same question will apply to the founders of the new companies that will inevitably try to emulate Snap. Worryingly, experts on a recent conference panel on tech leadership warned that psychopaths are rife in Silicon Valley. Studies suggest that whereas the proportion of psychopaths in the general population is around 1%, in corporate environments it is 4–8%.
To see how this can play out in the tech world, consider the recent chronicle of alleged malfeasance and definite gross unpleasantness at Uber. It makes ominous reading for those of us who hope that tech and tech companies can be a transformative force for social progress. And the whole saga is a manifestation of the leader’s character and values.
Which brings us to the theme of truth. In a world where tech is drifting almost unopposed towards becoming perfect infrastructure for despots, and where a new elite of breathtakingly wealthy leaders may be in danger of enhanced levels of psychopathy, the populist right’s use of propaganda assumes critical importance. And here too the news is bad. New research from Columbia University, analysing 1.3 million articles in the run-up to the US election, has shown that the internet itself did not favour the creation and spread of fake news. Rather, it was the deliberate use of the technology for this purpose by a Breitbart-led right-wing media ecosystem that created havoc with the reporting of true facts. More evidence of how this lie machine works comes out by the week. The Guardian dug deep into the origins of Cambridge Analytica, the controversial company that claims to use personal data to swing elections, and which may indeed have delivered on this claim in the US presidential election and the Brexit referendum. More emerged on how it is funded, with big-data billionaire Robert Mercer, backer of Donald Trump, prominent in the story. The whole narrative raises profound questions about the state, and future, of our democracies.
Again, tech does not appear to be helping the defenders of democracy but abetting the aspiring new despots. Accusations that Google has been spreading fake news have intensified. It has been found repeatedly to be sharing falsehoods and conspiracy theories through the “featured snippets” displayed at the top of its search results. There have also been major problems with its advertising this month, with organisations including the Guardian newspaper cancelling accounts because their ads had been placed next to extremist material.
Amid all this chaos, the inventor of the world wide web, Tim Berners-Lee, called this month for tighter regulation of online political advertising. This, among many other responses by society, is clearly going to be needed. Perhaps the British government can lead the way, for the current US government certainly will not. This is not as impossible a prospect as it may sound: the UK government was among the organisations that pulled their ads from Google because of proximity to extremist content.