AI systems can be unfair and biased. They may favor certain languages, cultures, or groups over others. For example, an AI writing assistant might be better at generating text in English than in other languages, putting non-English speakers at a disadvantage. AI systems may also contain implicit or explicit biases introduced through algorithmic design, data collection, data labeling, or model training. It is important to evaluate AI-generated content for common types of bias, such as racial, gender, class, sexual orientation, disability, religious, and political bias.
Legal and Ethical Frameworks
As AI becomes more advanced, we need clear laws and ethical guidelines to ensure it's used responsibly. This helps protect consumers, promote fairness, and hold AI companies accountable. Ireland's AI use is governed by the EU Artificial Intelligence Act.
Psychological and Societal Effects
AI is changing the way we live and work. We need to study how these changes affect our mental health, relationships, and social norms. For instance, as AI takes over more tasks, it could lead to job losses and changes in the workforce that impact society.
AI can generate content that copies or is inspired by existing work, which raises questions about who owns the rights to that content. For example, if an AI writes a story that closely resembles a published book, there could be legal issues around copyright infringement. It is crucial to cite authors whose work is retrieved via AI platforms; failing to do so constitutes plagiarism, which is unethical and often illegal.
When you use AI, you are sharing information with the AI company. Your privacy is not guaranteed, so be careful about uploading personal or sensitive information. AI companies could potentially misuse your data.
AI tools such as Microsoft Copilot or ChatGPT may make up credible-sounding citations to sources that do not exist, or give inaccurate information, a phenomenon called “hallucinating.” These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. AI hallucinations are especially problematic when AI systems are used to make important decisions, such as medical diagnoses or financial trading.
AI content may be selective, since it depends on the algorithm used to create the responses. Although an AI tool draws on a huge amount of information from the internet, it may not be able to access subscription-based information secured behind paywalls. Content may also lack depth, be vague rather than specific, and be full of clichés, repetitions, and even contradictions.
AI-generated content is not always accurate. It may contain errors, false claims, or plausible-sounding content that is invented and false (confabulations). AI tools may be limited by the dataset available to them, which may not include the latest information. Consider that subscription-based resources, such as library databases, may offer more authoritative sources.
AI-generated content may contain information that is outdated. This may result from access to old or limited datasets. For instance, some free GPTs only have access to a snapshot of the internet from several years ago, so they are unable to generate content that draws from the latest information.
Confabulation is the creation of false content without the intention to deceive. AI tools, especially chatbots, are prone to making up information because they try to provide plausible responses to the prompts they receive. For example, if an AI chatbot is asked to provide a literature review of the latest research on dementia, it may confabulate by generating a response that misconstrues information or invents plausible-sounding information, sources, and citations.
Algorithmic bias can occur when bias is built into an AI tool's algorithms or is present in the datasets the tool uses. Common biases in the data AI tools draw on to generate answers to prompts include biases related to race, class, gender, sexual orientation, ability, religion, and political beliefs.
Because of both confabulation and the potential for bias, it is recommended that you evaluate whether the information and citations an AI tool generates are reliable sources. See how to evaluate AI content below.
Text from this page has been pulled from the Artificial Intelligence Guides of the IADT-Dun Laoghaire Institute of Art, Design and Technology, the University of Louisiana Monroe, The University of Texas Libraries, and the definition of AI hallucinations provided by Google Cloud on 9/4/2025.
McDaniel College