How to Remove Duplicates from a List in Python
What you’ll build or solve
You’ll remove duplicates from a Python list and keep the result in the shape you need.
When this approach works best
Removing duplicates works well when you:
- Clean user input, like tags or email addresses that may repeat.
- Prepare data for display, like showing a list of categories without repeats.
- Build a unique set of IDs before further processing, like fetching records from an API.
Skip these approaches when duplicates carry meaning, like a shopping cart with quantities. In that case, you likely want counts instead of removal.
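When counts are what you need, the standard library's collections.Counter tallies repeats instead of discarding them. A minimal sketch with a hypothetical cart list:

```python
from collections import Counter

# Hypothetical cart where each repeated item means "one more of it"
cart = ["apple", "banana", "apple", "apple", "banana"]

counts = Counter(cart)  # maps each item to how many times it appears
print(counts["apple"])   # 3
print(counts["banana"])  # 2
```

Counter keeps every occurrence as a quantity, so no information is lost the way it would be with deduplication.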
Prerequisites
- Python 3 installed
- You know what a list is
Step-by-step instructions
1) Remove duplicates while preserving order
A set removes duplicates, but it does not keep order. To preserve order, track what you have seen and build a new list.
items = ["a", "b", "a", "c", "b"]
seen = set()
unique = []
for x in items:
    if x not in seen:
        seen.add(x)
        unique.append(x)
print(unique)
What to look for: this keeps the first occurrence of each item and removes later repeats.
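Since Python 3.7, plain dicts preserve insertion order, so dict.fromkeys gives the same first-occurrence behavior as the loop above in one line:

```python
items = ["a", "b", "a", "c", "b"]

# dict keys are unique and keep insertion order (Python 3.7+),
# so this keeps the first occurrence of each item
unique = list(dict.fromkeys(items))
print(unique)  # ['a', 'b', 'c']
```

Like set(), this only works for hashable items; for dictionaries or lists, use the key-based loop shown later.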
2) Use a set when order does not matter
If order does not matter, converting to a set is the shortest approach.
items = ["a", "b", "a", "c", "b"]
unique = list(set(items))
print(unique)
What to look for: the output order can change, so avoid this method when order is important.
3) Handle unhashable items like dictionaries or lists
set() only works for hashable values like strings, numbers, and tuples. For dictionaries or lists, deduplicate by a key or convert items into a hashable representation.
Option A: Deduplicate dictionaries by a key
users = [
    {"id": 1, "name": "Amina"},
    {"id": 1, "name": "Amina"},
    {"id": 2, "name": "Luka"},
]
seen_ids = set()
unique = []
for user in users:
    user_id = user["id"]
    if user_id not in seen_ids:
        seen_ids.add(user_id)
        unique.append(user)
print(unique)
Option B: Deduplicate dictionaries by their full content
records = [
    {"a": 1, "b": 2},
    {"b": 2, "a": 1},
    {"a": 2, "b": 3},
]
seen = set()
unique = []
for r in records:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        unique.append(r)
print(unique)
What to look for: sorted(r.items()) produces the same tuple for dictionaries with the same content, even if their key order differs, so such dictionaries deduplicate correctly.
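The tuple-of-items key breaks if a value is itself unhashable, such as a nested dictionary or list. One workaround, sketched here under the assumption that the records are JSON-serializable, is to use json.dumps with sort_keys=True as the key:

```python
import json

records = [
    {"a": 1, "b": {"x": 2}},  # nested dict: tuple(sorted(...)) would fail
    {"b": {"x": 2}, "a": 1},  # same content, different key order
    {"a": 2, "b": {"x": 3}},
]
seen = set()
unique = []
for r in records:
    # sort_keys=True makes the string identical for dicts with
    # the same content regardless of key order, including nested keys
    key = json.dumps(r, sort_keys=True)
    if key not in seen:
        seen.add(key)
        unique.append(r)
print(len(unique))  # 2
```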
Examples you can copy
Example 1: Remove duplicate strings case-insensitively
tags = ["Python", "python", "SQL", "sql", "SQL"]
seen = set()
unique = []
for t in tags:
    key = t.lower()
    if key not in seen:
        seen.add(key)
        unique.append(t)
print(unique)
Example 2: Remove duplicates after stripping whitespace
raw = [" a", "a ", "b", " b ", ""]
seen = set()
unique = []
for s in raw:
    key = s.strip()
    if key and key not in seen:
        seen.add(key)
        unique.append(key)
print(unique)
Example 3: Remove duplicates from numbers
nums = [3, 3, 1, 2, 2, 5]
seen = set()
unique = []
for n in nums:
    if n not in seen:
        seen.add(n)
        unique.append(n)
print(unique)
Example 4: Remove duplicates from a list of dictionaries by ID
users = [
    {"id": "u1", "name": "Amina"},
    {"id": "u1", "name": "Amina"},
    {"id": "u2", "name": "Luka"},
]
seen = set()
unique = []
for u in users:
    if u["id"] not in seen:
        seen.add(u["id"])
        unique.append(u)
print(unique)
Example 5: Keep the last occurrence instead of the first
Sometimes the last value should win.
items = ["a", "b", "a", "c", "b"]
seen = set()
unique_reversed = []
for x in reversed(items):
    if x not in seen:
        seen.add(x)
        unique_reversed.append(x)
unique = list(reversed(unique_reversed))
print(unique)
What to look for: iterating from the end keeps the last occurrence, then you reverse the result to restore order.
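The same keep-the-last logic can be written as a one-liner with dict.fromkeys (Python 3.7+), applying the same reverse-then-restore idea:

```python
items = ["a", "b", "a", "c", "b"]

# reversed() makes each item's *last* occurrence the first one
# dict.fromkeys sees; the final [::-1] restores the original direction
unique = list(dict.fromkeys(reversed(items)))[::-1]
print(unique)  # ['a', 'c', 'b']
```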
Common mistakes and how to fix them
Mistake 1: Using set() and expecting the same order
What you might do:
items = ["a", "b", "a", "c", "b"]
unique = list(set(items))
Why it breaks: sets do not preserve original order.
Correct approach:
items = ["a", "b", "a", "c", "b"]
seen = set()
unique = []
for x in items:
    if x not in seen:
        seen.add(x)
        unique.append(x)
print(unique)
Mistake 2: Trying to put dictionaries into a set
What you might do:
users = [{"id": 1}, {"id": 1}]
unique = list(set(users))
Why it breaks: dictionaries are unhashable, so Python raises TypeError.
Correct approach:
users = [{"id": 1}, {"id": 1}, {"id": 2}]
seen = set()
unique = []
for u in users:
    if u["id"] not in seen:
        seen.add(u["id"])
        unique.append(u)
print(unique)
Mistake 3: Removing items while iterating over the same list
What you might do:
items = ["a", "b", "a", "c", "b"]
for x in items:
    if items.count(x) > 1:
        items.remove(x)
print(items)
Why it breaks: removing shifts the list while iterating, so you skip items.
Correct approach: build a new list.
items = ["a", "b", "a", "c", "b"]
seen = set()
unique = []
for x in items:
    if x not in seen:
        seen.add(x)
        unique.append(x)
print(unique)
Troubleshooting
If you see TypeError: unhashable type, your items cannot go in a set. Deduplicate by a key or convert them into a hashable representation.
If the output order changes, you used set(). Switch to an order-preserving approach.
If duplicates still appear, check your key logic. For case-insensitive matches, compare x.lower(), not the original string.
If your list contains mixed types, confirm what counts as a duplicate in your context. For example, 1 and 1.0 compare as equal.
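A quick demonstration of how Python's equality rules affect deduplication across mixed numeric types:

```python
# Python treats numerically equal values as duplicates in sets
# and dict keys, because they compare and hash as equal
print(1 == 1.0)             # True
print(True == 1)            # True
print(len({1, 1.0, True}))  # 1

nums = [1, 1.0, 2]
print(list(dict.fromkeys(nums)))  # [1, 2]
```

Note that the first occurrence decides which representation survives, so [1.0, 1, 2] would keep 1.0 instead.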
Quick recap
- Use a seen set plus a new list to remove duplicates while preserving order.
- Use list(set(items)) only when order does not matter.
- For dictionaries or lists, deduplicate by a stable key or convert items to a hashable form.
- Build a new list instead of removing items while iterating.
- To keep the last occurrence, iterate in reverse, then reverse the result back.