
Artificial Intelligence (AI)

Welcome to the Guide for Artificial Intelligence (AI)

Artificial Intelligence is defined as "The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. Abbreviated AI." (OED, Dec 2022).  

The field of computer science and engineering concerned with creating machines that can emulate human tasks has expanded greatly in recent years. Through machine learning, language and visual processing, and other techniques, machines can perceive, process, learn, and adapt to their environment in human-like ways. AI is currently being developed and used for internet interactions, vehicle control and guidance, security, construction, and other domains.

This guide provides up-to-date information, available through the Eugene McDermott Library's physical and electronic collections, relating to the study of AI, research on its applications and uses, and current news concerning this growing field of interest and study.

Alternative Definitions for Various Types of Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction. AI is often categorized into two types:

  1. Narrow AI: Also known as weak AI, narrow AI operates under a limited set of constraints and is designed to perform a single, narrow task, such as voice recognition or driving a vehicle. Most of the AI we interact with today, such as virtual assistants (e.g., Siri or Alexa), is considered narrow AI.

  2. General AI: Also known as strong AI, general AI possesses the ability to perform any intellectual task that a human being can. It can understand, learn, adapt, and apply knowledge in a way that is not limited to a specific domain.

Machine learning (ML) is a branch of artificial intelligence (AI) and computer science that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy (IBM - What is ML).
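For readers who want a concrete picture, here is a minimal sketch of supervised machine learning in Python using the scikit-learn library. The dataset and classifier are arbitrary choices for illustration; any others could be substituted.

```python
# A minimal sketch of supervised machine learning with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset of handwritten digit images.
X, y = load_digits(return_X_y=True)

# Hold out part of the data to measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Learning" here means fitting the model's parameters to the training data.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Accuracy on unseen data is the usual measure of how well it learned.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

The key idea is that the model's accuracy comes from patterns it extracted from the training data, not from rules a programmer wrote by hand.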

Large language models (LLMs) are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks (IBM - What are LLMs).
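As an illustration, the sketch below uses the Hugging Face transformers library (assumed to be installed, along with a model download on first run) to prompt a small pretrained language model. The gpt2 model is chosen here only because it is small and freely available; it is far less capable than the large commercial LLMs this definition describes.

```python
# A minimal sketch of prompting a small pretrained language model.
from transformers import pipeline

# gpt2 is a small, freely available stand-in for larger LLMs.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt one predicted token at a time.
result = generator(
    "Artificial intelligence is",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```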

A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions. (IBM - What is a Neural Network).
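The building block of such a network is the artificial neuron. Below is a minimal Python sketch (the input values and weights are made up purely for illustration) showing how a single neuron weighs its inputs and produces an output:

```python
# A minimal sketch of a single artificial "neuron": it weighs its inputs,
# sums them with a bias, and passes the result through an activation
# function, loosely analogous to a biological neuron firing.
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

inputs = np.array([0.5, 0.8, 0.2])    # signals arriving at the neuron
weights = np.array([0.9, -0.4, 0.3])  # learned importance of each input
bias = 0.1

# Weighted sum of evidence, then a nonlinear "decision".
activation = sigmoid(np.dot(inputs, weights) + bias)
print(f"Neuron output: {activation:.3f}")  # closer to 1 = stronger response
```

Real networks stack thousands or millions of such neurons in layers and learn the weights automatically from data.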

Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs. Generative AI uses a number of techniques that continue to evolve. Foremost are AI foundation models, which are trained on a broad set of unlabeled data that can be used for different tasks, with additional fine-tuning. Complex math and enormous computing power are required to create these trained models, but they are, in essence, prediction algorithms. (Gartner - What is Gen AI).
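The phrase "prediction algorithms" can be made concrete. Given a prompt, a generative language model assigns a probability to every possible next token; the sketch below (again using the small gpt2 model as a stand-in for larger foundation models, and assuming transformers and torch are installed) prints the five tokens the model considers most likely to come next:

```python
# A minimal sketch showing that generative language models are, at bottom,
# prediction algorithms: given a prompt, the model assigns a probability
# to every possible next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over the whole vocabulary for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob:.3f}")
```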

Important News from the President of the United States

OCTOBER 2023 - President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

More here:

https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/31/what-they-are-saying-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

IMPORTANT - Information on Citing AI Correctly and Ethically

Accessible artificial intelligence is expanding rapidly and shaking up the academic and publishing worlds. These computer programs can do a good job of guessing how experts in many fields might respond to questions or prompts. It is important to understand that the programs are not searching for information, not evaluating merit, and, most importantly, not thinking: they guess, or predict, text. Do not be fooled into treating generative AI output as fact or reliable evidence. There will be times when it is appropriate to use ChatGPT or other programs academically, but there is great risk of accidentally plagiarizing existing publications or of producing false citations.

If you do use AI to complete part of a project, you must cite it so it is clear where that content came from. The academic world is still working out the rules for doing this properly.

Here is a useful link that explains how to cite AI in the most common academic formats.
