AI Glossary
Fundamental AI and ML terms explained simply for beginners.
AI Glossary of Terms
This glossary provides definitions and simple explanations of key terms in Artificial Intelligence (AI) and Machine Learning (ML). It is designed to help beginners understand the fundamental concepts in the field. These terms come up constantly in AI literature, courses, and discussions, so knowing them makes it easier to follow the basics of AI and ML. We recommend reviewing these terms to build a solid foundation in AI concepts, and then following the AI for Beginners learning path for a structured introduction to AI.
Term | Definition | Simple Explanation |
---|---|---|
Artificial Intelligence (AI) | The field of computer science focused on creating systems that can perform tasks that typically require human intelligence. | Teaching computers to do things that usually need human thinking. |
Machine Learning (ML) | A subset of AI where computers learn from data to make predictions or decisions without being explicitly programmed. | Computers learn from examples instead of following strict rules. |
Neural Network | A computing system inspired by the human brain, made up of layers of interconnected nodes (‘neurons’). | A computer system that works a bit like a simplified brain. |
Deep Learning | A type of machine learning using large neural networks with many layers to analyze complex data. | Using big, layered networks to help computers learn from lots of data. |
Training Data | The information (data) used to teach an AI model how to perform a task. | The examples you give a computer so it can learn. |
Model | The result of training an AI system; it can make predictions or decisions based on new data. | The ‘brain’ the computer builds after learning from data. |
Algorithm | A set of rules or instructions a computer follows to solve a problem. | Step-by-step instructions for a computer to follow. |
Supervised Learning | A machine learning method where the model learns from labeled data (data with correct answers). | Teaching a computer by showing it examples with the right answers (see the code sketches after this table). |
Unsupervised Learning | A machine learning method where the model finds patterns in data without labels. | Letting a computer find patterns on its own, without being told the answers. |
Reinforcement Learning | A type of machine learning where an agent learns by trying things and getting rewards or penalties. | Teaching a computer by rewarding it for good choices and punishing bad ones. |
Classification | A task where the AI sorts data into categories. | Sorting things into groups, like spam vs. not spam emails. |
Regression | A task where the AI predicts a continuous value (like a number). | Predicting numbers, like house prices. |
Natural Language Processing (NLP) | The field of AI focused on understanding and generating human language. | Helping computers understand and use human language. |
Computer Vision | The field of AI that enables computers to interpret and understand images and videos. | Teaching computers to ‘see’ and understand pictures or videos. |
Overfitting | When an AI model learns the training data too well, including its noise, and performs poorly on new data. | When a computer memorizes examples instead of learning general rules. |
Underfitting | When an AI model is too simple and fails to capture patterns in the data. | When a computer doesn’t learn enough from the examples. |
Bias | Systematic errors in AI predictions due to unfair or unrepresentative training data. | When a computer makes unfair or skewed decisions because of bad examples. |
Dataset | A collection of data used for training or testing AI models. | A big group of examples for the computer to learn from. |
Feature | An individual measurable property or characteristic of the data. | A detail or piece of information about each example. |
Label | The correct answer or category for a piece of data in supervised learning. | The ‘right answer’ attached to each example. |
Hyperparameter | A configuration value chosen before training begins that controls how the model learns (for example, the learning rate). | Settings you choose before training a model to help it learn better. |
Loss Function | A function that measures how well the AI model’s predictions match the actual data. | A way to measure how wrong the computer’s guesses are. |
Gradient Descent | An optimization algorithm used to minimize the loss function by iteratively adjusting the model’s parameters. | A way for the computer to learn by repeatedly making small adjustments that reduce its error (see the code sketches after this table). |
Epoch | One complete pass through the entire training dataset. | One round of teaching the computer with all the examples. |
Batch | A subset of the training data used to train the model in one iteration. | A small group of examples used to teach the computer at a time. |
Activation Function | A function used in neural networks to introduce non-linearity and help the model learn complex patterns. | A rule inside each neuron that lets the network learn curved, complex patterns instead of only straight-line ones (see the code sketches after this table). |
Convolutional Neural Network (CNN) | A type of neural network designed for processing structured grid data like images. | A special neural network for analyzing images. |
Recurrent Neural Network (RNN) | A type of neural network designed for processing sequential data like time series or text. | A special neural network for analyzing sequences, like sentences. |
Transfer Learning | A technique where a pre-trained model is used as a starting point for a new, related task. | Using a model trained for one task to help learn a new, similar task. |
Generative Adversarial Network (GAN) | A type of neural network where two networks compete to generate realistic data. | Two neural networks competing to create realistic data. |
Tokenization | The process of breaking text into smaller units (tokens) for analysis. | Splitting text into smaller pieces, like words or word fragments (see the code sketches after this table). |
Embedding | A representation of data in a lower-dimensional space to capture its meaning. | A way to represent data in a simpler form that captures its meaning. |
Attention Mechanism | A technique in neural networks that allows the model to focus on specific parts of the input. | A method that helps the computer pay attention to important parts of the data. |
Transformer | A type of neural network architecture that uses self-attention mechanisms to process sequential data. | A powerful neural network for analyzing sequences, like text. |
Fine-Tuning | The process of making small adjustments to a pre-trained model to adapt it to a specific task. | Tweaking a pre-trained model to make it better at a specific task. |
Backpropagation | An algorithm for training neural networks by adjusting weights based on the error gradient. | A method to train neural networks by adjusting weights based on errors. |
Autoencoder | A type of neural network used to learn efficient representations of data, typically for dimensionality reduction. | A neural network that learns to compress and reconstruct data. |
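Code Sketches for Selected Terms

A few of the terms above are easier to grasp from a tiny program than from a definition. The sketches below use plain Python with invented numbers and names; they are simplified illustrations of the ideas, not production implementations.

The first sketch illustrates supervised learning for a classification task: the training data is a list of labeled examples (features plus a label), and the "model" is a deliberately simple nearest-neighbour rule that copies the label of the closest stored example. The heights, weights, and category names are made up for illustration.

```python
# Supervised classification with a 1-nearest-neighbour rule (illustrative only).

training_data = [
    # (features: [height_cm, weight_kg], label)
    ([150, 45], "child"),
    ([175, 70], "adult"),
    ([160, 50], "child"),
    ([180, 85], "adult"),
]

def distance(a, b):
    """Straight-line distance between two feature lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(features):
    """Classify new data by copying the label of the closest training example."""
    nearest = min(training_data, key=lambda example: distance(example[0], features))
    return nearest[1]  # the label of the nearest example

print(predict([172, 68]))  # -> "adult"
```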
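The next sketch shows regression, a loss function (mean squared error), gradient descent, and epochs. The model is a straight line y = w * x + b, the tiny dataset is invented so that its points lie exactly on y = 2x + 1, and the learning rate is a hyperparameter chosen before training.

```python
# Linear regression (y = w * x + b) trained with gradient descent (illustrative only).

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # points on the line y = 2x + 1

w, b = 0.0, 0.0          # model parameters, starting from a blank guess
learning_rate = 0.01     # hyperparameter: how big each adjustment step is

for epoch in range(5000):                     # one epoch = one full pass over the data
    grad_w, grad_b, loss = 0.0, 0.0, 0.0
    for x, y in data:
        error = (w * x + b) - y               # how wrong this prediction is
        loss += error ** 2 / len(data)        # mean squared error (the loss function)
        grad_w += 2 * error * x / len(data)   # gradient of the loss with respect to w
        grad_b += 2 * error / len(data)       # gradient of the loss with respect to b
    w -= learning_rate * grad_w               # gradient descent: step against the gradient
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}, loss = {loss:.6f}")  # w close to 2, b close to 1
```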
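The next sketch shows tokenization and embeddings. Real systems use learned subword tokenizers and learned embedding vectors; here the tokens are simply lowercase words, and the vectors are invented for illustration.

```python
# Tokenization and an embedding lookup (illustrative only).

sentence = "Cats chase mice"

# Tokenization: break the text into smaller units (here, words).
tokens = sentence.lower().split()                # -> ['cats', 'chase', 'mice']

# A vocabulary maps each token to an integer id.
vocab = {"cats": 0, "chase": 1, "mice": 2}
token_ids = [vocab[token] for token in tokens]   # -> [0, 1, 2]

# An embedding table maps each id to a small vector of numbers that,
# in a trained model, captures something about the token's meaning.
embedding_table = [
    [0.9, 0.1, 0.3],   # vector for "cats"  (values invented)
    [0.2, 0.8, 0.5],   # vector for "chase"
    [0.7, 0.2, 0.4],   # vector for "mice"
]
vectors = [embedding_table[i] for i in token_ids]

print(tokens)
print(token_ids)
print(vectors)
```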
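The last sketch shows a single artificial neuron and an activation function (here, the sigmoid). The inputs, weights, and bias are invented; a real neural network stacks many such neurons in layers and learns the weights through backpropagation.

```python
# One artificial neuron with a sigmoid activation function (illustrative only).

import math

def sigmoid(x):
    """Activation function: squashes any number into the range (0, 1),
    introducing the non-linearity that lets networks learn complex patterns."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then the activation function.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
print(output)  # about 0.455 for these numbers: a value between 0 and 1
```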
Further Learning Resources
- AI for Beginners: A beginner-friendly introduction to AI concepts and applications with hands-on labs.
- Generative AI for Beginners: Focuses on the principles and applications of generative models in AI.