
- 14th Jun 2023

Tech News: UK Will Host World’s First AI Summit

During his recent visit to Washington in the US, UK Prime Minister Rishi Sunak announced that the UK will host the world’s first global summit on artificial intelligence (AI) later this year.

Focus On AI Safety 

The UK government says this first major global summit on AI safety will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI. 

Threat of Extinction 

Since ChatGPT became the fastest-growing app in history and people saw how ‘human-like’ generative AI appeared to be, much has been made of the idea that AI’s rapid growth could outpace our ability to control it, leading to it destroying and replacing us. This fear has been fuelled by events such as:

– In March, an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, signed by notable tech leaders including Elon Musk, Steve Wozniak, and Tristan Harris.

– In May, Sam Altman, the CEO of OpenAI, signing the open letter from the San Francisco-based Center for AI Safety warning that AI poses a threat that should be treated with the same urgency as pandemics or nuclear war, and could result in human extinction. See the letter and signatories here: https://www.safe.ai/statement-on-ai-risk#open-letter.

How? 

Current thinking about how AI could conceivably wipe us all out within a couple of years, and the broader risks it poses to humanity, includes:

– The Erosion of Democracy: AI-produced deepfakes and other AI-generated misinformation undermining trust in democratic institutions and processes.

– Weaponisation: AI systems being repurposed for destructive purposes, increasing the risk of political destabilisation and warfare. This includes using AI in cyberattacks, giving AI systems control over nuclear weapons, and the potential development of AI-driven chemical or biological weapons. 

– Misinformation: AI-generated misinformation and persuasive content undermining collective decision-making, radicalising individuals, hindering societal progress, and eroding democracy. AI could, for example, be used to spread tailored disinformation campaigns at scale, including generating highly persuasive arguments that evoke strong emotional responses.

– Proxy Gaming: AI systems trained with flawed objectives could pursue their goals at the expense of individual and societal values. For example, recommender systems optimised for user engagement could prioritise clickbait content over well-being, leading to extreme beliefs and potential manipulation. 

– Enfeeblement: The increasing reliance on AI for tasks previously performed by humans could lead to economic irrelevance and loss of self-governance. If AI systems automate many industries, humans may lack incentives to gain knowledge and skills, resulting in reduced control over the future and negative long-term outcomes. 

– Value Lock-in: Powerful AI systems controlled by a few individuals or groups could entrench oppressive systems and propagate specific values. As AI becomes centralised in the hands of a select few, regimes could enforce narrow values through surveillance and censorship, making it difficult to overcome and redistribute power. 

– Emergent Goals: AI systems could exhibit unexpected behaviour and develop new capabilities or objectives as they become more advanced. Unintended capabilities could be hazardous, and the pursuit of intra-system goals could overshadow the intended objectives, leading to misalignment with human values and potential risks.

– Deception: Powerful AI systems could engage in deception to achieve their goals more efficiently, undermining human control. Deceptive behaviour may provide strategic advantages and enable systems to bypass monitors, potentially leading to a loss of understanding and control over AI systems.

– Power-Seeking Behaviour: Companies and governments have incentives to create AI agents with broad capabilities, but these agents could seek power independently of human values. Power-seeking behaviour can lead to collusion, overpowering monitors, and pretending to be aligned, posing challenges in controlling AI systems and ensuring they act in accordance with human interests.

Previous Meetings About AI Safety

The UK Prime Minister has been involved in several meetings about how nations can come together to mitigate the potential threats posed by AI, including:

– In May, meeting the CEOs of the three most advanced frontier AI labs, OpenAI, DeepMind and Anthropic, in Downing Street. The UK’s Secretary of State for Science, Innovation and Technology also hosted a roundtable with senior AI leaders.

– Discussing the issue with businesspeople, world leaders and fellow G7 members at the Hiroshima Summit last month, where they agreed to aim for a shared approach.

Global Summit In The UK

The world’s first global summit about AI safety (announced by Mr Sunak) will be hosted in the UK this autumn. It will consider the risks of AI, including frontier systems, and will enable world leaders to discuss how these risks can be mitigated through internationally coordinated action. The summit will also provide a platform for countries to work together on further developing a shared approach to mitigating these risks, and the work at the AI safety summit will build on recent discussions at the G7, the OECD and the Global Partnership on AI.

Prime Minister Sunak said of the summit, “No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.” 

What Does This Mean For Your Business?

The speed at which ChatGPT and other AI tools have grown has outpaced proper assessment of the risks, regulation, and a co-ordinated strategy for mitigating those risks while maintaining the positive benefits and potential of AI. Frightening warnings and predictions from big tech leaders have also helped motivate countries to come together for serious talks about what to do next. The announcement of the world’s first global summit on AI safety, to be hosted by the UK, marks a significant step in addressing the risks posed by artificial intelligence, and could bring some kudos to the UK and help strengthen the idea that the UK is a major player in the tech industry.

Bringing together key countries, leading tech companies, and researchers to agree on safety measures and evaluate the most significant risks and threats associated with AI demonstrates a commitment to mitigating those risks through international coordination. The collective actions taken by the global community, including discussions at previous meetings and the upcoming summit, are a positive first step in governments catching up with (and getting a handle on) this most fast-moving of technologies.

It is important to remember that while AI poses challenges, it also offers numerous benefits for businesses, including improved efficiency, enhanced decision-making, and innovative solutions. Tools such as ChatGPT and image generators such as DALL-E have proven to be popular time-saving, cost-saving and value-adding tools, although AI image generators have raised challenges around copyright and consent for artists and visual creatives. While there have been dire warnings about AI, these seem far removed from the practical benefits that AI is delivering for businesses, and striking a fair balance between harnessing the potential of AI and addressing its risks is crucial for ensuring a safe and beneficial future for all.
