The two-day international artificial intelligence (AI) safety summit held in the United Kingdom on November 1st and 2nd at Bletchley Park in Buckinghamshire (the estate famous for British mathematician Alan Turing's breaking of the Enigma code and as the birthplace of Colossus, the first programmable electronic digital computer) can perhaps be termed a first big step towards a cooperative approach to containing the threats posed by the technology.
Around 150 representatives from across the globe participated in the summit, including government leaders from the US, the EU, China, India, Brazil and Indonesia, along with leaders from industry, academia and civil society. That turnout is in itself a “remarkable achievement” for the UK in diplomatic terms.
Under a joint commitment signed by 28 governments, including the US, China and the EU, together with leading AI companies, a consensus was reached on the need for sustained international cooperation to combat both the short-term and long-term risks posed by ‘frontier AI’. Accordingly, all advanced AI models are to be subjected to a battery of safety tests before release. The commitment also emphasized the need to share, across countries, an evidence-based understanding of the risks posed by frontier AI and of the safety measures against them.
More importantly, participants agreed that an international panel of scientists, assembled under the leadership of the AI luminary Yoshua Bengio, would develop a ‘State of the Science’ report on the capabilities and risks of frontier AI. A report produced by such an eminent group of scientists should be an invaluable document for educators, employers, policymakers and scientists. Interestingly, the United Nations confirmed its support for the creation of an expert AI panel akin to the Intergovernmental Panel on Climate Change.
Another notable outcome was the UK's announcement of an AI Safety Institute to research the most advanced AI capabilities and test their safety. The institute proposes to collaborate with its international counterparts and like-minded governments. The US also announced the formation of its own AI Safety Institute.
That said, we must also appreciate that summit agreements alone will not be enough to achieve a balance between risk management and innovation. Indeed, Prof Robert Trager, Director of the Oxford Martin AI Governance Initiative, observed that the summit failed to arrive at a “consensus path forward in establishing international standards and oversight of advanced AI.” The pressing challenges are many: designing ‘tripwires’ that would subject certain models to heightened scrutiny and constraints; developing AI safety research that incorporates the complexities of human interaction with AI systems; and understanding how frontier AI technology is likely to behave when incorporated into billions of automated problem-solving software ‘agents’. There is thus a need for the pragmatic design of institutional mechanisms, akin to the international aviation safety process, to counter these global challenges.
No doubt the summit facilitated a global conversation on AI safety and on the need for international collaboration on AI regulation; however, views diverged on the type of regulation required. Indeed, different processes are already running in parallel: the US government has issued an executive order on safe, secure and trustworthy AI; the European Union is finalizing its regulatory framework for AI; and China has already announced its own regulatory framework. These national regulations may be able to deal with simpler AI applications and LLMs, but the most powerful frontier models, which some fear could help create harmful pathogens or cyber weapons and might lead to an “artificial general intelligence” that could even threaten humanity’s survival, call for global rules and an international body to regulate them. Thus, the basic issue remains unanswered: how can all countries, including China, be engaged in arriving at an acceptable global regulatory framework and an institution fit to verify new models and certify them as trustworthy? Indeed, as some experts have opined, this may even call for the creation of a range of institutions.
In conclusion, it is worth bearing in mind the sage observation of Dr Heloise Stevance, a Schmidt AI in Science Fellow: “Historically, technological advances have not benefited all tranches of society and all countries equally—if we want a better and safer future for everyone, we must ensure that the fruits of the AI revolution are not only ‘safe to eat’ but also shared fairly with humanity.”