Generative AI Risks, Issues & Concerns

By Susan Olapade

Generative AI has the potential to transform fields and operations as varied as software engineering, content creation, fashion design, higher education, customer service, marketing, sales, and risk management. However, continued research and the development of ethical practices in Generative AI will be critical to ensuring that the technology is put to positive use.

According to Leslie (2019), AI ethics, a topic of discussion for many years, is a set of values, principles, and techniques used to guide moral conduct in the development and use of AI technologies. These standards aim to ensure that AI is developed and deployed in a way that aligns with widely accepted notions of right and wrong.

As with traditional AI, concerns such as bias, deepfakes, misinformation, job displacement, environmental impact, and privacy invasion are also prevalent in Generative AI. To use this cutting-edge technology responsibly and ethically, it is essential to identify and address the specific concerns raised by image-generating tools like DALL-E and Stable Diffusion and by Large Language Models like ChatGPT and Bing AI Chat.

Increased Risk of AI Bias

Bias, a risk long raised in traditional AI, has drawn even closer scrutiny in Generative AI because Large Language Models have the potential to amplify it. According to an example from AIMultiple (2023), a recently created 280 billion parameter model demonstrated a 29% increase in toxicity levels compared to a 117 million parameter model developed in 2018. Recent research also suggests that larger, more sophisticated systems like GPT-3 are often more susceptible to the underlying social biases in their training data. According to McKinsey (2022), bias can be introduced into data through how the data are collected, and user-generated data can create a feedback cycle that reinforces bias.

GPT-4, the latest language model sensation developed by OpenAI, is said to offer more promising ways of reducing AI bias. OpenAI also released a “Bias Mitigation Toolkit” that provides users with a set of best practices for mitigating bias in GPT-4 models. The toolkit includes guidelines for using data sets that are diverse and representative of different demographics, as well as techniques for identifying and addressing potential sources of bias. According to ts2.ai (2023), GPT-4 is speculated to be able to generate more balanced datasets on which other AI models can be trained, and to generate text that can detect and correct biases in other AI models.
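To make the idea of auditing a dataset for bias more concrete, the minimal Python sketch below checks two simple signals: how much of the data each demographic group contributes, and how label rates differ across groups. It is a hypothetical illustration of the kinds of checks such guidelines describe, not code from OpenAI's toolkit; the records, group names, and labels are invented for the example.

```python
from collections import Counter

# Hypothetical toy records: each pairs a demographic group with a label.
# In practice these would come from a real training set or model outputs.
records = [
    {"group": "A", "label": "positive"},
    {"group": "A", "label": "positive"},
    {"group": "A", "label": "negative"},
    {"group": "B", "label": "negative"},
    {"group": "B", "label": "negative"},
    {"group": "B", "label": "positive"},
]

def representation(records):
    """Share of examples contributed by each demographic group."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def positive_rate_by_group(records):
    """Rate of 'positive' labels per group; large gaps can flag bias."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r["group"]] += 1
        if r["label"] == "positive":
            positives[r["group"]] += 1
    return {group: positives[group] / totals[group] for group in totals}

print(representation(records))          # e.g. {'A': 0.5, 'B': 0.5}
print(positive_rate_by_group(records))  # e.g. {'A': 0.67, 'B': 0.33}
```

Even in this toy example, the two groups are equally represented yet receive very different label rates, which is exactly the kind of gap a bias audit is meant to surface before a model is trained on the data.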

Copyright Infringement

Generative AI also raises the concern that image-generating AI companies use artists’ intellectual property to create new images without crediting or fairly compensating them. This concern led to a lawsuit by Getty Images, a stock image and video provider, against Stability AI, the company behind AI models for images, video, and more, including the popular text-to-image tool Stable Diffusion. Getty Images accused Stability AI of “illegally copying and processing millions of copyrighted images.” Getty Images has also banned users from uploading and selling AI-generated images and art on its platform. These moves show the growing tension between art creators and the developers of image-generating tools over recognition and compensation. Adobe, which recently launched its own text-to-image platform, Firefly, says it avoids this problem by drawing on its Adobe Stock service, whose relationship with its creators lets Adobe offer users images that are safe for commercial use. Adobe also says it is considering ways to pay creators whose works influence its AI-generated images and art, a major differentiator compared to its competitors.

Incorrect / Non-Factual Information

Lastly, a major concern since ChatGPT launched in November 2022 has been Language Models generating incorrect information. This is particularly worrying for students who use Generative AI for educational purposes. For example, while writing this report, I tried to use Bing’s AI chat to generate citations for some of the sites I referenced in order to save time. Figure 1, in the Appendix, shows the response generated by the language model, which included an incorrect author name and an incorrect article title. The correct author name and article title can be seen in Figure 2. Stack Overflow, long the go-to Q&A site for software developers with coding-related problems, also banned ChatGPT-generated answers over a similar issue. Its major concern was that the language model had a high chance of producing wrong answers that looked correct, and that because such answers were easy to generate, most people would not bother to verify them.
