
DRGN 1000: Adventures in Entrepreneurship (David) Fall 2024

This is a library guide created for Professor David's DRGN 1000 Fall 2024 class.

What is Generative Artificial Intelligence?

I want to acknowledge that generative AI tools are updated with new features quickly, and it is difficult to stay up to date. This library guide reflects the features of the generative AI tools and LLMs available to users as of October 29, 2024.

ChatGPT, Claude, Perplexity AI, and other generative AI tools are based on large language models (LLMs).

Large language models function through algorithms trained on massive datasets, which help the model learn to mimic natural human language. These datasets are what the LLM draws from for information, and the model uses an algorithm to predict what the next word in a response should be. Generative AI tools like ChatGPT use large language models in this way.
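If you want to see the "predict the next word" idea in action, here is a minimal sketch in Python using a few invented sentences as the training text. It is nothing like a full LLM, which learns from billions of documents, but the core move is the same: look at the words so far and pick a likely next word.

```python
from collections import Counter, defaultdict

# A made-up "training text" (real models learn from billions of documents).
corpus = (
    "students ask the library for help "
    "students ask the professor for feedback "
    "students ask the library for books"
).split()

# Step 1: count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# Step 2: to "generate", pick the word seen most often after the current word.
def predict_next(word):
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("students"))  # 'ask' (it followed "students" every time)
print(predict_next("the"))       # 'library' (seen twice, vs. 'professor' once)
```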

Algorithms are step-by-step instructions that computers follow to complete tasks, solve problems, and make automated decisions. They use data to make predictions about people, including their preferences, attributes, and behaviors.
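To make that concrete, here is a small sketch of one such algorithm, using invented viewing data for a single hypothetical person. The steps are fixed and simple: count what they have watched, then predict they will prefer more of the same.

```python
from collections import Counter

# Invented data about one person's past behavior (not from any real service).
watch_history = ["documentary", "comedy", "documentary", "drama", "documentary"]

def predict_preference(history):
    # Step 1: count how often each genre appears.
    # Step 2: predict the person prefers the most frequent one.
    return Counter(history).most_common(1)[0][0]

print(predict_preference(watch_history))  # 'documentary'
```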

Humans create both the algorithms and the data they are trained on, so human prejudices can be built into them. This leads to algorithmic bias, in which outputs reinforce racism and sexism. Generative AI tools reproduce these issues.
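One way to see how that happens is to run the same word-counting approach from the sketch above on a deliberately skewed set of invented sentences. The "model" has no views of its own; it simply mirrors whatever patterns, including stereotypes, appear in its training data, which is the same dynamic researchers have documented in much larger systems.

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed training sentences (not a real dataset).
training_text = (
    "the nurse said she is here . "
    "the nurse said she is here . "
    "the nurse said he is here . "
    "the engineer said he is here . "
    "the engineer said he is here ."
).split()

# Count which word follows each two-word context.
continuations = defaultdict(Counter)
for a, b, c in zip(training_text, training_text[1:], training_text[2:]):
    continuations[(a, b)][c] += 1

def complete(context):
    # Predict the most common continuation of a two-word context.
    return continuations[context].most_common(1)[0][0]

print(complete(("nurse", "said")))     # 'she' (2 of the 3 nurse sentences)
print(complete(("engineer", "said")))  # 'he'  (both engineer sentences)
```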

 

 

Ethical Dilemmas

Evaluation Method for AI

The ROBOT test

  • Reliability

  • Objective

  • Bias

  • Ownership

  • Type


Reliability:

  • How reliable is the information available about the AI technology?

  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?

  • If it is produced by the party responsible for the AI, how much information are they making available? 

    • Is information only partially available due to trade secrets?

    • How biased is the information that they produce?

Objective:

  • What is the goal or objective of the use of AI?

  • What is the goal of sharing information about it?

    • To inform?

    • To convince?

    • To find financial support?

Bias:

  • What could create bias in the AI technology?

  • Are there ethical issues associated with this?

  • Are bias or ethical issues acknowledged?

    • By the source of information?

    • By the party responsible for the AI?

    • By its users?

Owner:

  • Who is the owner or developer of the AI technology?

  • Who is responsible for it?

    • Is it a private company?

    • The government?

    • A think tank or research group?

  • Who has access to it?

  • Who can use it?

Type:

  • Which subtype of AI is it?

  • Is the technology theoretical or applied?

  • What kind of information system does it rely on?

  • Does it rely on human intervention? 


Source:

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test/