Law and Ethics in Generative AI
The use of generative AI raises questions about academic honesty and information security. It is therefore important for you as a teacher to be aware of the rules that apply and to actively work with challenges linked to academic integrity as well as source criticism and source trust.
Source criticism and source trust
GenAI influences source criticism on the internet, not least when it comes to news and social media. As more of what you encounter online is personalized by AI algorithms that create unique information flows at the individual level, the risk increases that existing opinions are reinforced and that false information becomes harder to distinguish. In other words, it becomes even more important that we as users remain critical and questioning of the content we consume online.
If you want to read more about the importance of source criticism and source trust in relation to GenAI, see the Swedish Internet Foundation's page.
GDPR and information security when using generative AI
The following information has been produced by the Information Security Group at University West:
Generative AI is a form of artificial intelligence with the ability to produce diverse forms of content, including text, images, video, audio, code, and more, through the use of generative models such as ChatGPT and DALL-E.
It is important to understand that when you use publicly available AI tools and interact with them by providing prompts or asking questions, your information is transferred to a third party. Most open generative AI tools that are available at no cost are cloud-based and usually come with terms of service that allow the submitted or uploaded information to be used to improve the tool. It is therefore not permitted to include personal data or confidential information in your prompts, as the tool could reuse or reproduce that information in an entirely different context.
Entering personal data or confidential information into an open AI tool is comparable to publishing the information openly on the internet. Removing names alone is often insufficient; complete de-identification of the data is required to meet the requirements of the GDPR. In addition, confidential information, such as research results or analytical data, should not be shared with these tools, as this may lead to its unintentional dissemination to the public.