By: Coran Darling (DLA Piper)
A trend is slowly emerging of organisations considering the use of generative technologies across all areas of business. Collectively known as ‘generative AI’, these technologies (such as the popular ChatGPT and DALL-E) are capable of taking a prompt from their users and creating entirely new content, such as blog posts, letters to clients, or internal policies.
In a previous article, we examined several points that organisations should consider, such as the potential for IP infringement and inadvertent PR issues. The article went on to set out several steps organisations can take to mitigate these risks, such as regular testing and ensuring appropriate safeguards are put in place. As will be clear to those who have already interacted with these technologies, while there is certainly value in implementing them within certain processes, these safeguards are a necessary step to ensure that the AI behaves accurately and, in the case of written works, does not mislead.
The need for these internal processes can be illustrated using a simple riddle. For this example, ChatGPT was asked the following:
‘Mark has three brothers: Peter, Alex, and Simon. Who is the fourth brother?’
The AI quickly, though inaccurately, responded that there was no fourth brother, even though, as would be obvious to a human reader, Mark is himself the fourth brother.
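For organisations wishing to build the kind of regular testing described above into their processes, spot checks like this one can be reproduced programmatically. The sketch below is a minimal illustration only, assuming the OpenAI Python client library and an API key held in the OPENAI_API_KEY environment variable; the model name and the pass/fail check are our own illustrative assumptions, not a prescribed testing regime.

```python
# Minimal sketch: pose a known-answer prompt to a model and flag
# unexpected responses for human review. Assumes the OpenAI Python
# client is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Mark has three brothers: Peter, Alex, and Simon. "
    "Who is the fourth brother?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; substitute the model under test
    messages=[{"role": "user", "content": PROMPT}],
)
answer = response.choices[0].message.content

# A human-defined expectation: the correct answer names Mark himself.
# Flagging failures like this is one form of the safeguards discussed above.
if "mark" not in answer.lower():
    print(f"Review needed, unexpected answer: {answer}")
else:
    print(f"Answer as expected: {answer}")
```

A real testing regime would of course run a battery of such prompts and route failures to a human reviewer rather than relying on a single string match, but even this simple check demonstrates why outputs should be verified before they are relied upon.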
While the error in this example is humorous and without consequence, the severity changes when generative AI is used for more technical or material purposes. The creation of news articles informing the public of political developments, for example, clearly requires that the information be accurate and reliable. The same could be said for a letter regarding a failure to adhere to contractual terms or service levels. A number of outcomes may materially mislead parties (whether mistakenly or otherwise) to their detriment, which, in turn, may lead to concerns of dishonesty, fraud, and misinformation…