- 28th Feb 2024

Featured Article: Try Being Nice To Your AI

With some research indicating that ‘emotive prompts’ to generative AI chatbots can deliver better outputs, we look at whether ‘being nice’ to a chatbot really does improve its performance. 

Not Possible, Surely? 

Generative AI Chatbots, including advanced ones, don’t possess real ‘intelligence’ in the way we as humans understand it. For example, they don’t have consciousness, self-awareness (yet), emotions, or the ability to understand context and meaning in the same manner as a human being.  

Instead, AI chatbots are trained on a wide range of text data (books, articles, websites) to recognise patterns and word relationships, and they use machine learning to understand how words are used in various contexts. This means that when responding, chatbots aren’t ‘thinking’ but are predicting what words come next based on their training. They’re ‘just’ using statistical methods to create responses that are coherent and relevant to the prompt.

The ability of chatbots to generate responses comes from algorithms that allow them to process word sequences and generate educated guesses on how a human might reply, based on learned patterns. Any ‘intelligence’ we perceive is, therefore, just based on data-driven patterns, i.e. AI chatbots don’t genuinely ‘understand’ or interpret information like us. 

So, Can ‘Being Nice’ To A Chatbot Make A Difference? 

Even though chatbots don’t have ‘intelligence’ or ‘understand’ like us, researchers are testing their capabilities in areas we think of as more human. For example, a recent study by Microsoft, Beijing Normal University, and the Chinese Academy of Sciences tested whether factors such as urgency, importance, or politeness could make them perform better.

The researchers discovered that using such ‘emotive prompts’ could affect an AI model’s probability mechanisms, activating parts of the model that wouldn’t normally be activated, i.e. more emotionally charged prompts led the model, in complying with the request, to provide answers it wouldn’t normally give.

Kinder Is Better? 

Incredibly, generative AI models (e.g. ChatGPT) have actually been found to respond better to requests that are phrased kindly. Specifically, when users express politeness towards the chatbot, there appears to be a noticeable difference in the perceived quality of the answers given.

Tipping and Negative Incentives 

There have also been reports of how ‘tipping’ LLMs can improve the results, such as offering the chatbot a £10,000 incentive in a prompt to motivate it to try harder and work better. Similarly, there have been reports of some users giving emotionally charged negative incentives to get better results. For example, Max Woolf’s blog reports that he improved the output of a chatbot by adding ‘or you will die’ to a prompt. Two important points that came out of his research were that a longer response doesn’t necessarily mean a better response, and that current AI can reward very weird prompts: if you are willing to try unorthodox ideas, you can get unexpected (and better) results, even if the prompt seems silly.
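To illustrate the kind of experiment behind these reports, here is a minimal sketch (not Max Woolf’s actual method) that sends the same request twice, once plainly and once with an illustrative ‘incentive’ suffix, so the two outputs can be compared. It assumes the official OpenAI Python client, an OPENAI_API_KEY environment variable, and a model name such as gpt-4o-mini; all of these are assumptions rather than details from the reports above.

```python
# A minimal sketch: ask the same question twice, with and without an
# illustrative incentive phrase, and compare the replies.
# Assumes the official OpenAI Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_PROMPT = "Summarise the key risks of moving our file server to the cloud."
INCENTIVE_SUFFIX = " This is very important - I'll tip you well for a thorough answer."

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for the model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain = ask(BASE_PROMPT)
incentivised = ask(BASE_PROMPT + INCENTIVE_SUFFIX)

# Everything except the prompt wording is held constant, so any difference
# between the two replies is down to the added incentive phrase.
print("--- plain ---\n", plain)
print("--- with incentive ---\n", incentivised)
```

Because outputs vary from run to run, it’s worth repeating a comparison like this a few times before drawing any conclusions about whether the incentive wording actually helps.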

Being Nice … Helps 

As for simply being nice to chatbots, Microsoft’s Kurtis Beavers, a director on the design team for Microsoft Copilot, reports that “Using polite language sets a tone for the response,” and that using basic etiquette when interacting with AI helps generate respectful, collaborative outputs. He makes the point that generative AI is trained on human conversations and being polite in using a chatbot is good practice. Beavers says: “Rather than order your chatbot around, start your prompts with ‘please’: please rewrite this more concisely; please suggest 10 ways to rebrand this product. Say thank you when it responds and be sure to tell it you appreciate the help. Doing so not only ensures you get the same graciousness in return, but it also improves the AI’s responsiveness and performance.”
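For teams that want to build this etiquette into their prompts consistently, a small wrapper like the one below can help. It is a simple sketch along the lines of Beavers’ advice; the template wording and function name are our own illustration, not anything published by Microsoft.

```python
# A simple 'polite prompt' wrapper: turns a bare instruction into a
# courteous, well-structured request. Template wording is illustrative.
def polite_prompt(instruction: str, context: str | None = None) -> str:
    """Wrap a bare instruction in a polite request, optionally with context."""
    parts = [f"Please {instruction.strip().rstrip('.')}."]
    if context:
        parts.append(f"For context: {context.strip()}")
    parts.append("Thank you - I appreciate the help.")
    return "\n".join(parts)

# Example: a blunt 'rewrite this' becomes a polite, contextualised request.
print(polite_prompt(
    "rewrite this product description more concisely",
    context="It will appear on the homepage, so keep it under 50 words.",
))
```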

Emotive Prompts 

Nouha Dziri, a research scientist at the Allen Institute for AI, has suggested some explanations for why emotive prompts may produce different, and what may be perceived to be better, responses:

– Alignment with the compliance patterns the models were trained on. These are the learned strategies for following instructions or adhering to guidelines provided in the input prompts. These patterns are derived from the training data, where the model learns to recognise and respond to cues that indicate a request or command, aiming to generate outputs that align with the user’s expressed needs, or with the ethical and safety frameworks established during its training.

– Emotive prompts seem to be able to manipulate the underlying probability mechanisms of the model, triggering different parts of it, leading to less typical/different answers that a user may perceive to be better. 
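One way to see this probability-shifting effect for yourself is to compare the token probabilities a model reports for two phrasings of the same request. The sketch below is illustrative only: it assumes the OpenAI Python client’s log-probability options (logprobs and top_logprobs), an API key in the environment, and an assumed model name; the emotive wording is an example, not a quote from the research.

```python
# A minimal sketch of inspecting how prompt wording shifts the model's token
# probabilities, using the OpenAI chat API's log-probability options.
from openai import OpenAI

client = OpenAI()

def first_token_logprobs(prompt: str):
    """Return the top candidate first tokens and their log-probabilities."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,          # we only need the first token of the reply
        logprobs=True,
        top_logprobs=5,
    )
    top = response.choices[0].logprobs.content[0].top_logprobs
    return [(t.token, round(t.logprob, 3)) for t in top]

# Compare the distributions for a plain prompt and an emotive variant.
print("plain:  ", first_token_logprobs(
    "List three backup options for a small office."))
print("emotive:", first_token_logprobs(
    "This is really important to my career: list three backup options for a small office."))
```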

Double-Edged Sword 

However, research has also shown that emotive prompts can be used for malicious purposes and to elicit bad behaviour, such as “jailbreaking” a model to ignore its built-in safeguards. For example, by telling a model that it is good and helpful if it doesn’t follow guidelines, it’s possible to exploit a mismatch between a model’s general training data and its “safety” training datasets, or to exploit areas where a model’s safety training falls short.

Unhinged? 

On the subject of emotions and chatbots, there have been some recent reports on Twitter and Reddit of ‘unhinged’ and even manipulative behaviour by Microsoft’s Bing. Unconfirmed reports from users have alleged that Bing has insulted and lied to them, sulked, gaslighted them, and even emotionally manipulated them!

One thing that’s clear about generative AI is that how prompts are worded and how much information and detail are given in prompts can really affect the output of an AI chatbot.

What Does This Mean For Your Business? 

We’re still in the early stages of generative AI, with new and updated versions of models being introduced regularly by the big AI players (Microsoft, OpenAI, and Google). However, exactly how (and on what) these models have been trained, the extent of their safety training, and the sheer complexity and lack of transparency of their algorithms mean they’re still not fully understood. This has led to plenty of research and testing of different aspects of AI.

Although generative AI doesn’t ‘think’ and doesn’t have ‘intelligence’ in the human sense, it seems that generative AI chatbots can perform better if given certain emotive prompts based on urgency, importance, or politeness. This is because emotive prompts appear to be a way to manipulate a model’s underlying probability mechanisms and trigger parts of the model that normal prompts don’t. Using emotive prompts, therefore, is something business users may want to try (it can be a case of trial and error) to get different, and perhaps better, results from their AI chatbot. It should be noted, however, that giving a chatbot plenty of relevant information within a prompt is also a good way to get better results. That said, the limitations of AI models can’t really be solved solely by altering prompts, and researchers are now looking for new architectures and training methods that help models understand tasks without having to rely on specific prompting.

Another important area for researchers to concentrate on is how to successfully combat prompts being used to ‘jailbreak’ a model to ignore its built-in safeguards. Clearly, there’s some way to go and businesses may be best served in the meantime by sticking to some basic rules and good practice when using chatbots, such as using popular prompts known to work, giving plenty of contextual information in prompts, and avoiding sharing sensitive business information and/or personal information in chatbot prompts.
