Generative Artificial Intelligence (GAI) systems that automatically produce content such as source code or images have seen increasing popularity due to the emergence of tools such as ChatGPT, which rely on Large Language Models (LLMs). Misuse of this automatically generated content can have serious consequences because the content may contain harms. Despite the importance of ensuring the quality of automatically generated content, there is little to no approach that can systematically generate tests for identifying software harms in the content produced by these GAI systems. In this article, we introduce the novel concept of ethics testing, which aims to systematically generate tests for identifying software harms. Unlike existing testing methodologies (e.g., fairness testing, which aims to identify software discrimination), ethics testing seeks to systematically detect software harms induced by unethical behavior (e.g., harmful behavior or behavior that violates intellectual property rights) in automatically generated content. We introduce the concept of ethics testing, discuss the challenges therein, and conduct five case studies to show how ethics testing can be applied in practice.