Glossary of Common Terms in Artificial Intelligence
This article provides definitions for key terms associated with artificial intelligence. It covers concepts from large language models to hallucinations and other frequently used phrases. The glossary aims to clarify terminology as AI technology develops.
Artificial intelligence has introduced numerous specialized terms. This glossary defines some of the most common words and phrases encountered in discussions about AI. The definitions are based on standard usage in the field.
Large language models, or LLMs, refer to AI systems trained on vast amounts of text data to generate human-like responses. These models power applications such as chatbots and text generators. They process and predict language patterns to produce coherent outputs.
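As a toy illustration of "predicting language patterns" (real LLMs use neural networks trained on billions of tokens, so this is only a sketch of the core idea), a bigram model guesses the next word from counts of which words followed which in its training text:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the word most frequently seen after `word`, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The example also hints at why hallucinations arise: the model outputs whatever pattern is statistically likely, with no notion of whether it is true.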
Hallucinations describe instances where AI models generate incorrect or fabricated information presented as factual. This occurs due to limitations in training data or model architecture. Developers work to mitigate hallucinations through improved validation techniques.
Machine learning is a subset of AI that enables systems to learn from data without explicit programming. Algorithms identify patterns and make predictions based on input. It forms the foundation for many AI applications, including image recognition and recommendation systems.
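The phrase "learning from data without explicit programming" can be made concrete with a toy example: instead of hard-coding the rule y = 2x + 1, a gradient-descent loop recovers the slope and intercept from example pairs. This is an illustrative sketch, not a production training method:

```python
# Example points generated by the hidden rule y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0          # model parameters, initially wrong
lr = 0.02                # learning rate (step size)
for _ in range(5000):    # repeatedly nudge parameters to reduce error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

No one told the program the rule; it found parameters that fit the data, which is the essence of machine learning.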
Neural networks are computational models inspired by the human brain's structure. They consist of interconnected nodes that process information in layers. These networks are essential for tasks like natural language processing and computer vision.
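The layered structure described above can be sketched in a few lines: each layer multiplies its inputs by weights, adds a bias, and applies a nonlinearity. The weights below are fixed, arbitrary values chosen for illustration; real networks learn them during training:

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sum per node, then activation."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two-layer network: 2 inputs -> 3 hidden nodes -> 1 output node.
hidden = layer([0.5, -1.0],
               [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]],
               [0.0, 0.1, -0.2])
output = layer(hidden, [[0.6, -0.5, 0.9]], [0.05])
print(output)  # a single value between 0 and 1
```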
Generative AI refers to models that create new content, such as text, images, or audio, based on learned patterns. Tools using this technology assist in creative tasks and automation; examples include writing assistants and art generators.

Bias in AI occurs when models reflect unfair prejudices from their training data, leading to skewed outputs. Addressing bias involves diverse datasets and ethical guidelines, and it affects fairness in applications such as hiring tools and facial recognition.

Training data comprises the datasets used to teach AI models. The quality and diversity of this data influence model performance, and ongoing refinements improve accuracy and reliability in real-world use.

Ethics in AI encompasses principles for responsible development and deployment. It includes considerations of privacy, transparency, and societal impact, and frameworks guide decisions to align the technology with human values.
Key Facts

Potential Impact
- Clear definitions reduce confusion in AI discussions and media.
- Improved understanding of AI terms may aid public engagement with technology.
- Glossaries like this can support education in AI-related fields.