Set of guidelines for AI released by White House, to encourage responsible development and deployment of artificial intelligence
The White House on Tuesday published what it is calling a blueprint for an AI Bill of Rights.
The announced AI guidelines are intended to persuade organisations to develop automated systems that utilise artificial intelligence (AI) in a manner that is safe for parents, patients and workers.
There is no shortage of suggested guidelines for AI systems. In 2019 for example the House of Lords published a comprehensive report into artificial intelligence (AI) and called for an AI code of ethics.
That report stated that the UK is in a “unique position” to help shape the development of AI, to ensure the tech is only applied for the benefit of mankind.
Then in early 2020, the Trump administration said it would propose regulatory principles to govern the development and use of AI. Those regulatory principles were designed to prevent “overreach” by authorities, and the White House at the time also urged European officials to likewise avoid aggressive approaches.
Then in July 2021 the US government announced the creation of the National Artificial Intelligence Research Resource Task Force.
The new entity was the Biden administration’s reaction to the perceived lack of world-leading expertise in developing AI systems.
Now the White House has proposed a non-binding AI Bill of Rights, which suggests numerous practices that developers and users of AI software should voluntarily follow.
The White House Office of Science and Technology Policy identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.
“The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats – and uses technologies in ways that reinforce our highest values,” said the White House.
“Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice – a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process,” it said.
“These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs,” it said.
The five principles that should guide the design, use, and deployment of automated systems cover “safe and effective systems”; “algorithmic discrimination protections”; “data privacy”; “notice and explanation”; and finally “human alternatives, consideration and fallback”.
The Biden administration’s move comes at a time when the European Union is moving toward regulating high-risk systems.
The United States, meanwhile, is nowhere near having a comprehensive set of laws to regulate AI.
It should be noted that the White House announcement did not include proposals for new laws.
Indeed, Reuters quoted US officials as saying that regulators, including the Federal Trade Commission, would continue to apply existing rules to cutting-edge systems.
Rules concerning the ethical use of artificial intelligence have long been considered by authorities.
This was evidenced in 2015, when a study by researchers at Arizona State University concluded that using AI algorithms to study the patterns and behaviour of IS extremists could be of “significant” help to both the US military and policymakers in the future.
For years the US and its allies have employed forms of pattern-of-life analysis to determine the threat levels of potential targets for their hunter-killer drones.
And it is fair to say that the arrival of AI has perhaps been one of the most vexing tech developments for politicians and tech experts in recent years, after some including the late Stephen Hawking repeatedly warned of the dangers the technology could present.
The White House blueprint on AI comes as companies increasingly look to integrate AI and machine learning into their business structures.
But the use of AI has long raised concerns over control, privacy and cyber security, as well as its future impact on people’s jobs.
In February 2018 the Future of Humanity Institute, whose authors come from leading universities such as Cambridge, Oxford and Yale, along with privacy advocates and military experts, warned AI could be exploited for malicious purposes.
They warned that AI could be misused by rogue states, criminals and lone-wolf attackers.
And over the past decade a number of high-profile tech figures have warned about the dangers posed by AI. These include Tesla CEO Elon Musk, Bill Gates, and the late Professor Stephen Hawking.
Professor Hawking told the BBC in December 2014 that a thinking machine could “redesign itself at an ever-increasing rate” and “supersede” humans, while Musk, speaking during an interview at the AeroAstro Centennial Symposium at MIT, called AI “our biggest existential threat”.