
9th Aug 2023

Tech News: Seven Safeguarding SamurAI?

Following warnings about threats posed by the rapid growth of AI, the US White House has reported that seven leading AI companies have committed to developing safeguards. 

Voluntary Commitments Made 

A recent White House fact sheet has highlighted how, in a bid to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety, President Biden met with and secured voluntary commitments from seven leading AI companies “to help move toward safe, secure, and transparent development of AI technology”.     

The companies that have made the voluntary commitments are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

What Commitments? 

In order to improve safety, security, and trust, and to help develop responsible AI, the voluntary commitments from the companies are: 

Ensuring Products are Safe Before Introducing Them to the Public 
 
– Internal and external security testing of their AI systems before their release, carried out in part by independent experts, to guard against significant AI risks such as those relating to biosecurity and cybersecurity.

– Sharing information across the industry and with governments, civil society, and academia on managing AI risks, e.g. best practices for safety, information on attempts to circumvent safeguards, and technical collaboration. 

Building Systems that Put Security First 

– Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (regarded as the most essential part of an AI system). The model weights will be released only when intended and only once security risks have been considered.

– Facilitating third-party discovery and reporting of vulnerabilities in their AI systems, e.g. putting a robust reporting mechanism in place to enable vulnerabilities to be found and fixed quickly. 

Earning the Public’s Trust 

– Developing robust technical mechanisms, such as a watermarking system, to ensure that users know when content is AI-generated, thereby enabling creativity with AI to flourish while reducing the dangers of fraud and deception.

– Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks (e.g. the effects on fairness and bias). 

– Prioritising research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination, and protecting privacy.

– Developing and deploying advanced AI systems to help address society’s greatest challenges, e.g. cancer prevention and mitigating climate change, thereby (hopefully) contributing to the prosperity, equality, and security of all.

Spotting AI-Generated Content Easily

One of the more obvious risks associated with AI is that people need to be able to tell, definitively, the difference between real content and AI-generated content. Being able to do so could help mitigate the risk of people falling victim to fraud and scams involving deepfakes, or believing misinformation and disinformation spread using them, either of which could have wider political and societal consequences.

One example of how this may be achieved, with the help of the AI companies, is the use of watermarks. A watermark, in this context, is a digital marking embedded in images and videos that is not visible to the human eye but can be read by software and algorithms to reveal whether the content was produced by AI. Watermarks could help tackle all kinds of issues, including passing-off, plagiarism, the spread of false information, cybercrime (scams and fraud), and more.
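To illustrate the general principle (and emphatically not the actual scheme any of these companies has committed to), here is a minimal sketch of an invisible marker hidden in an image’s least-significant bits, written in Python using the Pillow imaging library. The MARKER payload, function names, and file paths are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of least-significant-bit (LSB) watermarking,
# for illustration only; production watermarking schemes are far more robust.
# Assumes Pillow is installed (pip install Pillow). MARKER is a hypothetical payload.
from PIL import Image

MARKER = "AI-GENERATED"


def embed_watermark(src_path: str, dst_path: str, payload: str = MARKER) -> None:
    """Hide `payload` in the least-significant bits of the red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    # Length-prefixed bitstream: a 16-bit length, then the message bytes.
    data = payload.encode("utf-8")
    bits = f"{len(data):016b}" + "".join(f"{b:08b}" for b in data)
    w, h = img.size
    if len(bits) > w * h:
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # a lossless format preserves the hidden bits


def read_watermark(path: str) -> str:
    """Recover the hidden payload; the inverse of embed_watermark."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, h = img.size

    def bit(i: int) -> str:
        return str(pixels[i % w, i // w][0] & 1)

    length = int("".join(bit(i) for i in range(16)), 2)
    raw = "".join(bit(16 + i) for i in range(length * 8))
    return bytes(int(raw[i:i + 8], 2) for i in range(0, len(raw), 8)).decode("utf-8")
```

For example, embed_watermark("photo.png", "marked.png") followed by read_watermark("marked.png") would recover the marker string. Note that a naive LSB scheme like this is destroyed by JPEG compression, resizing, or cropping; real watermarking systems are designed to survive such transformations, which is part of what makes the companies’ commitment technically non-trivial.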

What Does This Mean For Your Business? 

Although AI is a useful business tool, its rapid growth has outstripped the pace of regulation. This has led to fears about the risks of AI being used to deceive, spread falsehoods, and commit crime (scams and fraud), as well as bigger threats such as political manipulation, societal destabilisation, and even an existential threat to humanity. This, in turn, has led to the first stage of action. Governments, in particular, need to feel that they can get the lid at least partially back on the genie’s bottle, so that safeguards can be built in early on to mitigate risks and threats.

The Biden administration securing at least some wide-ranging voluntary commitments from the big AI companies is, therefore, a start. Given that many of the signatories to the open letter calling for a six-month moratorium on systems more powerful than GPT-4 were engineers from those same big tech companies, it’s also a sign that more action may not be too far behind. Ideas like watermarking look like a likely option, and no doubt there’ll be more.

AI is transforming businesses in positive ways, although many also fear that the automation it offers could result in big job losses, thereby affecting whole economies. This early stage is, therefore, the best time to make a real start on building in the right controls and regulations: ones that allow the best aspects of AI to flourish while keeping the negative aspects in check. This complex subject clearly has a long way to run.
