Artificial Intelligence (AI) may finally get the surveillance it deserves.
Michelle Bachelet, the United Nations High Commissioner for Human Rights, has called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights until adequate safeguards are put in place. The call came from her Office of the High Commissioner for Human Rights (OHCHR).
The UN human rights chief also called for a ban on AI applications that cannot be used in compliance with human rights law. The right call, but one that has been slow in coming.
Here is why. For starters, much of the damage has already been done by Big Tech. Digital technologies, even without AI, have caused serious privacy violations. Big Tech and its band of advertisers stalk us in every corner of the world wide web.
Using our personal data mined from the platforms we use, they can tell where we are and where we are heading. Even where we are thinking of going. If this isn’t outrageous, what is? AI will only make it worse, as Pegasus, the hacking spyware created by Israel’s NSO Group, is showing.
Pegasus needs no click on a malicious link to infect a phone. It is that dangerous. The NSO Group knows it, and so does the Israeli government. Israeli Prime Minister Naftali Bennett can’t deny it because, as a one-time AI hawker, he knows what Pegasus is up to and where.
Disappointingly, there isn’t a word on Pegasus in OHCHR’s press release on Wednesday, even as people around the world have been equating it with crimes against humanity. Perhaps Bachelet isn’t alert enough. Or Israel has friends in high places.
And Israel’s government is very dismissive: Pegasus is “only for lawful use”. Media exposés tell an entirely different story. In July, The Guardian, as part of a global reporting alliance, revealed that the Pegasus spyware was used to target activists, journalists, lawyers and a slew of others across the world, all with the full knowledge of the NSO Group and the Israeli government.
The list of others should shock anyone, but most certainly the UN’s human rights chief. You name it, they are there: presidents, prime ministers, cabinet ministers, government officials, business executives, religious figures and academics.
Ten governments were named in the exposé, though the spyware was seen to be active in 45 countries. The governments have hacked into tens of thousands of devices. Like all malware (and Pegasus is distinctly that), it infects iPhones and Android devices “to extract messages, photos, emails, record calls and secretly activate microphones”. For sure, there is some good in AI. But there is much that is really bad, too.
As the OHCHR itself acknowledges, AI is largely a tool that drives decisions. It can change human lives. It can damage them, too. OHCHR must not only talk about closing the accountability gap in how data is collected, stored and used. It must walk the talk. Don’t just play catch-up to AI.
Marshal the world to put in place guardrails on the use of AI. Pegasus isn’t a new threat to human lives. Citizen Lab, an interdisciplinary laboratory at the Munk School of Global Affairs & Public Policy at the University of Toronto, spotted it as early as 2016, embedded in thousands of IP addresses and domain names. AI technologies like Pegasus are a threat to humanity. Of this there is no doubt.
What is of greater threat is for the world to do nothing about it.