This page of the LibGuide is meant to spark your curiosity about the ethical considerations surrounding artificial intelligence, but we highly recommend that you continue your own research on these topics. See the Additional Links and References Page for further readings on related topics.
When talking about AI, there are many considerations surrounding intellectual property, data, and privacy.
Two key considerations include:
Intellectual property and privacy are major ethical concerns surrounding AI, and they can be great conversation starters with students.
There has recently been much discussion about the environmental impacts of AI chatbots. Below are a few articles, curated by the SUNY Office of Library and Information Services (OLIS) Collaborative LibGuide, that outline the stark realities of the environmental considerations surrounding the use of AI chatbots.
There are countless other information sources on the environmental impacts of AI. This could be a great conversation to have in class, especially given that sustainability is one of SUNY Oneonta's core values (Sustainability, n.d.)!
Just as there are many considerations surrounding intellectual property and academic integrity, there are also many considerations surrounding copyright and artists' rights in the context of AI. Below is a brief list of articles outlining key considerations about artists' rights and the use of AI.
Artists' rights and AI is a topic that students are already considering and thinking about, and it could lead to great conversations in a classroom setting.
Many articles have recently been released exploring the impact of AI technology on human labor around the globe. A single quote published in Noēma Magazine encapsulates many of these issues: "so-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions" (Williams, Miceli, and Gebru, October 13, 2022, para. 8).
Further research on the impact of AI on human labor can be found below:
We encourage you to continue researching and considering the impact of AI on human labor.
When using AI to find sources, you will occasionally be provided with citations or links to articles that do not exist. AI can provide false or inaccurate information to a user, due in part to AI hallucinations, which Choi and Mei define as "when an algorithmic system generates information that seems plausible but is actually inaccurate or misleading" (March 21, 2025, para. 3).
While it is always critical to evaluate an information source before using it in your own work, this skill becomes even more important when using AI to help you find bibliographic sources. Like other information, AI will also hallucinate citations (Walters and Wilder, 2023). If a chatbot provides you with a citation, there is a chance that the article does not actually exist and was instead hallucinated to sound like a real source. These hallucinated citations may even include real journal or author names, as well as links to reliable websites, while the article itself does not exist.
To ensure you are not relying on hallucinated information or sources to back up your argument, or unknowingly contributing to misinformation, always stop and consider the source before using and ultimately citing it in your own work. To verify that a citation is not hallucinated, search for the article title in Milne Search or through a quick Google search. If you are struggling to find information on a source provided to you by an AI chatbot, it is possible that the source does not exist and was instead hallucinated. If you are unsure whether the article actually exists, contact a Milne Librarian for assistance verifying the publication and/or for help accessing the full text of the article directly through the Milne Library or through an interlibrary loan (ILL).
Many evaluation criteria can help you identify what to look for when evaluating a source; these can be found on the Evaluating Sources: AI page of this LibGuide, including the Lateral Reading method. Lateral reading requires information seekers to stop and research claims from an original source using sources completely separate from the original (Wineburg & McGrew, 2017). This or other evaluation methods can be valuable when evaluating information or articles provided to you by an AI chatbot. Consider looking up the article to see whether it actually exists, whether it was published in a peer-reviewed journal, and whether the Milne Library has full-text access to it.
Note: If you are interested in reading more about mis- and disinformation, you may want to check out the Milne Library's research guide on Teaching in the Age of Alternative Facts!
The content on bias in AI is from the SUNY Office of Library and Information Services (OLIS) Collaborative LibGuide.
These resources were recommended by the SUNY Office of Library and Information Services (OLIS) Collaborative LibGuide.