How to Optimize SQL Queries
What you’ll build or solve
You’ll speed up slow SELECT queries by measuring performance, improving index usage, reducing scanned rows, and simplifying costly operations like joins and aggregation.
When this approach works best
This approach works best when:
- A dashboard or API endpoint feels slow and relies on database queries.
- A query scans a large table but returns a small result set.
- You see long query times in logs or monitoring tools.
Skip this if performance issues come from network latency, application code, or tiny datasets where changes won’t matter.
Prerequisites
- Access to a SQL database such as MySQL, PostgreSQL, SQL Server, or SQLite
- Permission to create indexes
- Basic understanding of SELECT, WHERE, JOIN, and GROUP BY
Step-by-step instructions
These steps are not a checklist you must follow in order. Treat them as techniques. Pick the ones that match what your query is doing, then measure again.
Step 1: Measure the current performance
Start by measuring the query so you know what to fix.
Option A, most common:
EXPLAIN
SELECT *
FROM orders
WHERE user_id = 42;
Option B in PostgreSQL:
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE user_id = 42;
What to look for
If you see Seq Scan or Full Table Scan on a large table, the database reads every row. That’s often the first target.
Look for:
- Estimated vs actual rows
- Cost numbers
- Whether indexes are used
Always measure before and after changes.
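It helps to know what a problematic plan looks like. The following is an illustrative sketch of PostgreSQL EXPLAIN ANALYZE output for the query above; the exact numbers and node names depend entirely on your data and database, so treat the shape, not the values, as the lesson.

```sql
-- Illustrative only: the commented plan below is a typical shape,
-- not real output from your database.
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE user_id = 42;

-- Seq Scan on orders  (cost=0.00..2041.00 rows=10 width=64)
--                     (actual time=0.02..9.80 rows=12 loops=1)
--   Filter: (user_id = 42)
--   Rows Removed by Filter: 99988
--
-- "Seq Scan" plus a large "Rows Removed by Filter" means the database
-- read the whole table to return a dozen rows: a prime index candidate.
```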
Step 2: Select only the columns you need
Fetching unnecessary columns increases work and data transfer.
Less efficient:
SELECT *
FROM users
WHERE active = 1;
More efficient:
SELECT id, name, email
FROM users
WHERE active = 1;
This matters most for wide tables or queries that return many rows.
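A narrow column list can also unlock index-only (covering) scans, where the database answers the query from the index without touching the table at all. A sketch using the INCLUDE syntax supported by PostgreSQL 11+ and SQL Server; the index name is made up:

```sql
-- Covering-index sketch: the key column (active) drives the lookup,
-- and INCLUDE stores the selected columns in the index itself.
CREATE INDEX idx_users_active_covering
ON users (active) INCLUDE (id, name, email);

-- Every referenced column lives in the index, so in many cases
-- the table is never read:
SELECT id, name, email
FROM users
WHERE active = 1;
```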
Step 3: Add indexes for frequent filters and joins
Indexes help the database find rows without scanning the whole table.
If you frequently filter by user_id:
CREATE INDEX idx_orders_user_id
ON orders (user_id);
Re-check the plan:
EXPLAIN
SELECT *
FROM orders
WHERE user_id = 42;
You want to see an Index Scan instead of a full scan.
Focus on columns used often in:
- WHERE
- JOIN
- ORDER BY
Too many indexes can slow down writes, so add them intentionally.
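Before adding an index, check what already exists; a query may be one column away from using an index you already have. The inspection commands differ by database:

```sql
-- MySQL:
SHOW INDEX FROM orders;

-- PostgreSQL:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'orders';

-- SQLite:
PRAGMA index_list('orders');
```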
Step 4: Write WHERE clauses that can use indexes
Certain patterns block index usage.
Less efficient:
SELECT *
FROM users
WHERE YEAR(created_at) = 2025;
Better:
SELECT *
FROM users
WHERE created_at >= '2025-01-01'
  AND created_at < '2026-01-01';
Avoid leading wildcards:
-- Slower
WHERE email LIKE '%@example.com';
Prefer anchored patterns:
-- Faster
WHERE email LIKE 'admin%@example.com';
Leading wildcards often force full scans.
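If you genuinely need a suffix match, one PostgreSQL-specific workaround is an expression index on the reversed string, which turns the suffix into an indexable prefix. A sketch; the index name is made up, and other databases would need a stored or generated column instead:

```sql
-- Index the reversed email; text_pattern_ops makes LIKE prefix
-- matching indexable regardless of locale (PostgreSQL).
CREATE INDEX idx_users_email_rev
ON users (reverse(email) text_pattern_ops);

-- reverse('@example.com') || '%' is 'moc.elpmaxe@%', an anchored
-- prefix pattern that can use the index above.
SELECT id, email
FROM users
WHERE reverse(email) LIKE reverse('@example.com') || '%';
```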
Step 5: Filter early to reduce work
Apply filters before expensive operations so fewer rows flow into joins, sorts, and aggregation.
Filter before joins:
SELECT users.id, users.name, filtered_orders.total
FROM users
JOIN (
SELECT user_id, total
FROM orders
WHERE total > 500
) AS filtered_orders
ON users.id = filtered_orders.user_id;
Filter before aggregation:
SELECT user_id, COUNT(*) AS order_count
FROM orders
WHERE created_at >= '2026-01-01'
GROUP BY user_id;
Use LIMIT when you do not need all rows:
SELECT id, user_id, total, created_at
FROM orders
ORDER BY created_at DESC
LIMIT 20;
Filtering and limiting early reduces total work.
Step 6: Simplify joins
Each join adds work, especially on large tables. Join only what you need.
Less efficient:
SELECT users.name, orders.total, products.name
FROM users
JOIN orders ON users.id = orders.user_id
JOIN products ON orders.product_id = products.id;
If product data is not required:
SELECT users.name, orders.total
FROM users
JOIN orders ON users.id = orders.user_id;
Index join keys when tables are large:
CREATE INDEX idx_orders_user_id
ON orders (user_id);
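When a join exists only to check that related rows exist, an EXISTS subquery can replace it entirely. A sketch on the same users/orders schema; it also avoids the duplicate user rows a plain JOIN would return for users with several orders:

```sql
-- Users with at least one order, without pulling in order columns
-- and without duplicates:
SELECT users.id, users.name
FROM users
WHERE EXISTS (
  SELECT 1
  FROM orders
  WHERE orders.user_id = users.id
);
```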
Examples you can copy
Example 1: Speed up user lookup by email
Before:
SELECT *
FROM users
WHERE email = 'ana@example.com';
Add an index:
CREATE INDEX idx_users_email
ON users (email);
Then fetch only what you need:
SELECT id, name
FROM users
WHERE email = 'ana@example.com';
Example 2: Optimize recent completed orders
Before:
SELECT *
FROM orders
WHERE status = 'completed'
ORDER BY created_at DESC;
Add a composite index:
CREATE INDEX idx_orders_status_created
ON orders (status, created_at);
Improved query:
SELECT id, user_id, total
FROM orders
WHERE status = 'completed'
ORDER BY created_at DESC
LIMIT 50;
Example 3: Optimize a reporting query with a date filter
Before:
SELECT users.name, SUM(orders.total) AS total_spent
FROM users
JOIN orders ON users.id = orders.user_id
GROUP BY users.name;
Add an index:
CREATE INDEX idx_orders_user_id
ON orders (user_id);
Filter early:
SELECT users.name, SUM(orders.total) AS total_spent
FROM users
JOIN orders ON users.id = orders.user_id
WHERE orders.created_at >= '2026-01-01'
GROUP BY users.name;
Common mistakes and how to fix them
Mistake 1: Adding too many indexes
You might add indexes on every column.
Why it breaks:
Writes get slower because the database must update each index on INSERT and UPDATE.
Correct approach:
Add indexes only for columns used often in WHERE, JOIN, and ORDER BY.
Mistake 2: Using functions on indexed columns
You might write:
SELECT *
FROM orders
WHERE LOWER(status) = 'completed';
Why it breaks:
The function can prevent index usage.
Correct approach:
SELECT *
FROM orders
WHERE status = 'completed';
If you need case-insensitive matching, use a case-insensitive collation or data type when supported.
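In PostgreSQL, another option is an expression index that matches the function call exactly, so the function-based filter itself becomes indexable. A sketch; the index name is made up:

```sql
-- The index expression must match the query expression exactly.
CREATE INDEX idx_orders_status_lower
ON orders (LOWER(status));

-- This filter can now use the expression index:
SELECT id, user_id, total
FROM orders
WHERE LOWER(status) = 'completed';
```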
Mistake 3: Skipping execution plans
You might change queries without checking the plan.
Why it breaks:
You may optimize the wrong part.
Correct approach:
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE user_id = 42;
Measure before and after each change.
Troubleshooting
If EXPLAIN still shows a full table scan, confirm the filtered column has an index and the table is large enough for the planner to prefer it.
If performance does not improve after adding an index, check for functions on the filtered column or mismatched data types.
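A type mismatch is easy to miss because the query still returns correct rows, just slowly. A sketch assuming a hypothetical indexed VARCHAR column named phone; the casting behavior described is MySQL's, and stricter databases such as PostgreSQL reject the comparison outright:

```sql
-- phone is VARCHAR with an index. Comparing it to a number makes
-- MySQL cast every stored value, so the index cannot be used:
SELECT id FROM users WHERE phone = 5551234;

-- Compare string to string and the index applies:
SELECT id FROM users WHERE phone = '5551234';
```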
If sorting is slow, ensure ORDER BY columns align with an index and add LIMIT when you only need top results.
If joins are slow, index the join keys on both sides and filter the larger table before joining.
If results look wrong after optimization, confirm that refactoring did not change join types or filter logic.
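One way to confirm a rewrite returns the same rows is to diff the two result sets with EXCEPT (supported by PostgreSQL, SQL Server, SQLite, and MySQL 8.0.31+). A sketch comparing an original filter to a refactored one; empty results in both directions mean the rewrite is equivalent:

```sql
-- Rows the original query returns that the rewritten query misses:
SELECT id FROM orders WHERE LOWER(status) = 'completed'
EXCEPT
SELECT id FROM orders WHERE status = 'completed';

-- Run the EXCEPT in the other direction as well before trusting
-- the rewrite.
```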
Quick recap
- Use EXPLAIN or EXPLAIN ANALYZE to find bottlenecks.
- Select only the columns you need.
- Add indexes for frequent filters and join keys.
- Write WHERE clauses that can use indexes.
- Filter and limit early before joins, sorts, and aggregation.
- Remove unnecessary joins and re-measure after changes.