The Ethics of AI Surveillance and Which Countries Are Using It
How can governments approach the delicate issue of AI-enhanced surveillance?
With facial recognition becoming an increasingly popular surveillance tool used by police departments, airports and even schools, the debate over the value and risks of AI-enhanced surveillance has become a complex challenge for governments and a defining issue of our time.
In the US, concerns over privacy led California to ban the use of facial recognition technology in body cameras worn by state and local law enforcement officers. Meanwhile, the American Civil Liberties Union is suing multiple government agencies to release records on how facial recognition technology is being used, in a bid to protect the privacy, safety and civil liberties of American citizens.
But with emerging surveillance technologies such as gait recognition, which analyses how people move as they walk, and lasers that can identify people by their heartbeat already being used by the US and China, governments are under increasing pressure to address the ethical implications – and other potential ramifications – of using new technology for surveillance. It's an issue discussed at length in AI Governance, a series led by Governance Studies Vice President Darrell West, in which scholars from inside and outside the Washington-based Brookings Institution identify key governance and norm issues related to AI and propose policy remedies to address the challenges associated with emerging technologies.
Their suggestions include providing citizens with clear notifications in public areas where the technology operates, mandating accuracy standards to prevent false-positive matches, and certifying specific uses of the technology.
For a clearer picture of which countries are using AI surveillance technology, what specific types of AI surveillance governments are deploying, and which countries and companies are supplying this technology, Carnegie's AI Global Surveillance (AIGS) Index compiles empirical data on AI surveillance use for 176 countries around the world.