![Top 29 Python Interview Questions and Answers for 2026](https://i0.wp.com/getmimo.wpcomstaging.com/wp-content/uploads/2025/11/Top-29-Python-Interview-Questions-and-Answers-for-2026.jpg?fit=1920%2C1080&ssl=1)
Top 29 Python Interview Questions and Answers for 2026
Get job-ready fast with our top 29 Python interview questions and answers. Drawn from real interviews and docs, we include clear examples and role-specific sections for data science, analytics, and software engineering.
Over the past years, we’ve helped thousands of Mimo’s users prepare for and ace their Python job interviews.
We’ve seen firsthand which questions come up repeatedly and which concepts trip people up the most.
To create this guide, we’ve combined that knowledge with fresh research from actual tech interviews, Reddit threads, Glassdoor reports, and official Python documentation.
Keep reading or jump to the Python interview questions (and answers) of your choice:
- Beginner Python questions: Basic syntax, data structures, and core concepts
- Intermediate Python questions: Memory management, generators, error handling
- Advanced Python questions: Metaclasses, concurrency, and system design
- Bonus: Specialized questions: Check the specific question examples for data science, analytics, and software engineering roles
| 💡 Enroll in Mimo’s Python career path and get job-ready in no time. Learn through bite-sized, interactive lessons, get AI assistance while coding, and build a real-world portfolio. [Start for free] |
Beginner Python interview questions
First, let’s go over the fundamental questions and build a solid foundation for your Python interview preparation.
We’ll cover core coding language concepts that every Python developer should know.
1. What is the difference between a list and a tuple?
This question tests your understanding of Python’s basic data structures and their properties.
Interviewers will want to see that you grasp when to use each type and why they exist.
How to solve it
- Lists are mutable (changeable) collections that use square brackets: [1, 2, 3]. You can add, remove, or modify elements after creation.
- Tuples are immutable (fixed) collections using parentheses: (1, 2, 3). Once created, they can’t be changed.
Show practical examples:
A list like numbers = [1, 2, 3] can be extended with numbers.append(4), while a tuple like coordinates = (39.47, -0.38) cannot be modified.
- Use lists when you need a collection that changes over time (like accumulating results), and tuples for fixed data (like coordinates or database records).
- Since tuples can’t change, they can also serve as dictionary keys, which lists cannot.
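Here’s a quick illustration you could sketch in an interview (the coordinates and city name are made up for the example):
coordinates = (39.47, -0.38)
locations = {coordinates: "Valencia"}       # A tuple works as a dictionary key
# locations[[39.47, -0.38]] = "Valencia"    # A list would raise TypeError: unhashable type: 'list'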
2. How are arguments passed in Python?
This question checks if you get how Python passes arguments, which works differently from some other coding languages.
Knowing this helps you avoid bugs that appear when a function changes something you didn’t expect.
How to solve it
Python passes arguments using a mechanism called “call-by-object-reference” (sometimes called call-by-assignment).
This means when you pass a variable to a function, you’re passing a reference to the object it points to, not a copy of the value.
This behavior has important consequences:
- If you modify a mutable object (like a list) inside the function, the changes affect the original object
- If you reassign the parameter name inside the function, it only affects the local variable, not the original
Here’s an example:
def append_one(lst):
    lst.append(1)  # Mutates the original list!

def reassign(lst):
    lst = [99, 100]  # Only changes the local variable

nums = [0]
append_one(nums)
print(nums)  # [0, 1] - The original list changed

nums = [0]
reassign(nums)
print(nums)  # [0] - The original list didn't change
For immutable objects like integers, strings, and tuples, you can’t change their content once they’re created.
This explains why something like num += 1 inside a function doesn’t affect the original variable — it’s creating a new object and rebinding the local name.
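A minimal sketch of that rebinding behavior (the function and variable names are just illustrative):
def increment(num):
    num += 1   # Creates a new int object and rebinds the local name
    return num

x = 10
increment(x)
print(x)       # 10 - the original variable is unchanged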
3. Explain what Python decorators do
Decorators let you modify or enhance functions without changing their code. They appear frequently in web frameworks like Flask and Django.
How to solve it
A decorator is a function that takes another function as input, adds some functionality, and returns a modified function.
You apply decorators with the @ symbol above a function definition.
Here’s a simple example of a timing decorator:
import time

def timer(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"Function {func.__name__} took {time.time() - start:.5f} seconds")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(1)
    return "Done!"

slow_function()  # Will print timing information automatically
When you add @timer above slow_function, Python essentially does this behind the scenes:
slow_function = timer(slow_function)
Common uses for decorators include:
- Adding logging or timing
- Access control and authentication
- Caching results
- Input validation
Decorators let you separate these concerns from the main function logic, making your code cleaner and more maintainable.
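As a concrete example of the caching use case, the standard library ships a ready-made decorator; memoizing a recursive Fibonacci function here is just an illustration:
from functools import lru_cache

@lru_cache(maxsize=None)   # Caches results keyed by the function's arguments
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(35))       # Fast, because repeated subcalls hit the cache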
4. Show ways to get every third item from a list
This question tests your knowledge of Python’s slicing syntax and ability to solve a problem in multiple ways.
How to solve it
The simplest way to get every third item is to use Python’s slice notation with a step value:
items = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
every_third = items[::3] # Returns [0, 3, 6, 9]
The slice notation [start:stop:step] works like this:
- When start is omitted, it defaults to 0 (the beginning)
- When stop is omitted, it defaults to the end of the list
- The step value (3 in our case) means “take every 3rd item”
So [::3] means “start at index 0, go until the end, stepping by 3 each time.”
You can also solve this with other approaches — using list comprehension and using a regular for loop:
# Using list comprehension
every_third = [item for index, item in enumerate(items) if index % 3 == 0]
# Using a regular for loop
result = []
for i in range(0, len(items), 3):
    result.append(items[i])
The slice notation is generally preferred because it’s more concise and faster.
5. What does PEP 8 say about indentation?
This question tests whether you’re familiar with Python’s style conventions, showing you care about code readability and team standards.
How to solve it
PEP 8 is Python’s official style guide, and it has clear recommendations about indentation:
- Use 4 spaces per indentation level
- Spaces are preferred over tabs
- Never mix tabs and spaces in the same project
For example:
def function():
    # Use 4 spaces (not a tab) for this level
    if condition:
        # Use 4 more spaces (8 total) for this level
        do_something()
PEP 8 also recommends keeping lines under 79 characters and has guidelines for line breaks and continuations.
Many teams enforce these standards with tools like Black, Flake8, or pylint, which automatically check or format code to follow PEP 8.
Following these conventions makes code more consistent and easier for other developers to read, which is key when working in a team.
6. Explain the difference between repr and str
This question tests your understanding of Python’s special methods and how objects convert to strings in different contexts.
How to solve it
Both __repr__ and __str__ are special methods that return string representations of an object, but they have different purposes:
- __str__ is for creating a user-friendly, readable representation
- __repr__ is for creating an unambiguous representation, ideally one that could recreate the object
Here’s an example showing both:
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"{self.name}, {self.age} years old"

    def __repr__(self):
        return f"Person('{self.name}', {self.age})"

person = Person("Alice", 30)
print(str(person))   # Alice, 30 years old
print(repr(person))  # Person('Alice', 30)
When you print an object, Python uses __str__ if it exists. When displaying objects in the interactive console or debugger, Python uses __repr__.
If you only implement one, implement __repr__ — Python will use it as a fallback for __str__ too.
7. Why is Python called an interpreted language?
This Python interview question checks if you understand how Python code executes. This knowledge helps when you’re dealing with performance or deployment issues.
How to solve it
Python is called an interpreted language because:
- You don’t need to compile (convert code into a form the computer can understand) Python code before running it
- The Python interpreter executes the code line by line
That said, there’s a bit more nuance:
- Python actually compiles your code to bytecode first (creating .pyc files)
- The Python Virtual Machine (PVM) then interprets this bytecode
This two-step process happens automatically and invisibly to developers, which is why Python is still considered interpreted.
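If you want to make this concrete, the standard library’s dis module shows the bytecode the interpreter actually runs (a small illustrative sketch; the exact instruction names vary by Python version):
import dis

def add(a, b):
    return a + b

dis.dis(add)   # Prints instructions such as LOAD_FAST and BINARY_ADD/BINARY_OP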
And just in case you get a question about the advantages and disadvantages:
| Pros of interpreted coding languages | Cons of interpreted coding languages |
| --- | --- |
| Faster development cycle – no separate compilation step | Generally slower execution than fully compiled languages |
| Cross-platform compatibility without recompiling | Need to distribute the interpreter with your program |
| Interactive development (REPL) | |
The bottom line: this hybrid approach gives Python a balance between development speed and runtime performance.
8. What does the with statement do?
This interview question checks if you know how the with statement helps manage resources in Python.
It’s often used when working with files or network connections to make sure they’re closed automatically, even if there’s an error.
How to solve it
The with statement creates a context where setup and cleanup actions happen automatically. It’s commonly used with files, network connections, and database transactions.
Here’s how it works with files:
# Without with - easy to forget closing
f = open("data.txt", "r")
content = f.read()
f.close() # What if an exception occurs before this line?
# With with - automatically handles closing
with open("data.txt", "r") as f:
content = f.read()
# File is automatically closed here, even if exceptions happen
Behind the scenes, the with statement:
- Calls the object’s __enter__ method when entering the block
- Guarantees the object’s __exit__ method is called when leaving the block, even if an exception occurs
9. Describe list and dictionary comprehensions
This question checks if you know how to create lists and dictionaries in a quick, readable way using a single line of code.
How to solve it
Comprehensions are compact, powerful expressions for creating lists, dictionaries, and sets. They replace verbose loops with a more readable syntax.
List comprehensions create lists in a single line:
# Instead of:
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x**2)
# You can write:
squares = [x**2 for x in range(10) if x % 2 == 0]
# Result: [0, 4, 16, 36, 64]
Dictionary comprehensions work similarly but create key-value pairs:
names = ["Alice", "Bob", "Charlie"]
name_lengths = {name: len(name) for name in names}
# Result: {"Alice": 5, "Bob": 3, "Charlie": 7}
Comprehensions are also often faster than regular loops and better express your intent.
They’re considered more “Pythonic” (a cleaner and more natural way to write code in Python) than building collections with explicit loops.
10. What is the purpose of self in class methods?
This interview question tests your understanding of how Python implements object-oriented programming.
How to solve it
In Python, self is a convention for the first parameter of instance methods. It refers to the instance the method is called on.
Unlike many other languages (like Java or C#), Python makes the instance reference explicit rather than implicit.
When you call a method on an object, Python automatically passes the object as the first argument.
class Dog:
    def __init__(self, name):
        self.name = name

    def bark(self):
        return f"{self.name} says woof!"

dog = Dog("Rex")
dog.bark()  # Python automatically passes dog as self
Behind the scenes, dog.bark() is actually transformed to Dog.bark(dog).
If you forget to include self as the first parameter, you’ll get an error when trying to call the method:
class Broken:
    def speak():  # Missing self parameter!
        print("Hello")

b = Broken()
b.speak()  # TypeError: speak() takes 0 positional arguments but 1 was given
While you could technically name this parameter something else, using self is a strong convention that all Python developers follow.
11. What’s the difference between .py and .pyc files?
This question tests whether you understand Python’s execution model and the role of bytecode.
How to solve it
.py files contain human-readable Python source code that you write and edit. .pyc files contain compiled bytecode that Python generates to speed up module loading.
Here’s how it works:
- When you import a Python module for the first time, the interpreter compiles the .py file to bytecode
- Python saves this bytecode in a .pyc file (inside a __pycache__ directory in Python 3)
- On subsequent imports, Python checks if the source has changed
- If the source is unchanged, Python loads the .pyc directly (skipping compilation)
This process happens automatically and improves performance, especially for large modules.
Some key points to keep in mind:
- .pyc files are binary and not meant to be edited directly
- They’re specific to Python versions but work across platforms
- You don’t need to distribute .pyc files — Python creates them as needed
- If you modify the .py file, Python automatically recompiles it
You shouldn’t commit .pyc files to version control since they’re generated files.
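If you want to show the compilation step explicitly, the standard library exposes it (a small sketch; mymodule.py is a hypothetical file name):
import py_compile

# Compiles mymodule.py to bytecode and writes it into __pycache__
py_compile.compile("mymodule.py")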
12. How would you safely handle missing keys in a dictionary?
This question tests your knowledge of error handling and dictionary operations, which are common in real-world code.
How to solve it
You can approach handling missing keys in dictionaries in several ways:
- Using the get() method with a default value:
user_settings = {"theme": "dark"}
font_size = user_settings.get("font_size", 12) # Returns 12 if key missing
- Using collections.defaultdict to provide automatic defaults:
from collections import defaultdict
# Creates a dictionary that automatically uses 0 as default
word_counts = defaultdict(int)
word_counts["hello"] += 1 # No KeyError even though "hello" wasn't there
- Using the in operator to check first:
if "font_size" in user_settings:
font_size = user_settings["font_size"]
else:
font_size = 12
- Using try/except to catch KeyError:
try:
    font_size = user_settings["font_size"]
except KeyError:
    font_size = 12
In practice, the best approach depends on the situation:
- get() is most concise when you need a simple default
- defaultdict is best when you’re building a collection
- in checks are clear when you need different logic based on presence
- try/except follows Python’s “easier to ask forgiveness than permission” philosophy — meaning you just try the action and handle the error if it happens, instead of checking first
For most simple cases, get() provides the cleanest solution.
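A related method worth mentioning is setdefault(), which reads a key and stores the default in one step:
user_settings = {"theme": "dark"}
# Returns 12 and also stores it in the dictionary if the key was missing
font_size = user_settings.setdefault("font_size", 12)
print(user_settings)   # {'theme': 'dark', 'font_size': 12}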
Intermediate Python interview questions
Once you’ve mastered the basics, interviewers will challenge you with these more advanced topics.
These interview questions evaluate your deeper understanding of Python’s behavior, memory management, and more sophisticated language features.
13. What is the Global Interpreter Lock (GIL) and why does it matter?
This question assesses your understanding of Python’s concurrency limitations, which impacts how you’d design high-performance applications.
How to solve it
Explain that the GIL is a mutex (lock) in CPython that prevents multiple native threads from executing Python bytecode simultaneously.
While it simplifies Python’s memory management and makes single-threaded programs faster, it means that CPU-bound Python code can’t fully utilize multiple processor cores in a single process.
You can show the practical implications with an example:
import threading
import time
def cpu_intensive_task():
    # Simulate CPU-intensive work
    count = 0
    for i in range(10_000_000):
        count += i
    return count
# Running two tasks sequentially
start = time.time()
cpu_intensive_task()
cpu_intensive_task()
print(f"Sequential: {time.time() - start:.2f} seconds")
# Running two tasks in separate threads
start = time.time()
t1 = threading.Thread(target=cpu_intensive_task)
t2 = threading.Thread(target=cpu_intensive_task)
t1.start(); t2.start()
t1.join(); t2.join()
print(f"Threaded: {time.time() - start:.2f} seconds")
The threaded approach won’t be significantly faster due to the GIL. Alternatives include:
- Use the multiprocessing module for CPU-bound tasks, since each process has its own Python interpreter and GIL (see the sketch after this list)
- Use asyncio for I/O-bound tasks (cooperative multitasking in a single thread)
- Use threading for I/O-bound tasks where one thread can run while others are waiting
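Here’s a rough sketch of the multiprocessing route (illustrative only; it reuses cpu_intensive_task and the time import from the snippet above, and the __main__ guard matters on platforms that spawn new processes):
from multiprocessing import Pool

if __name__ == "__main__":
    start = time.time()
    with Pool(processes=2) as pool:
        # Each worker process has its own interpreter and GIL,
        # so the two tasks can run on separate CPU cores
        workers = [pool.apply_async(cpu_intensive_task) for _ in range(2)]
        results = [w.get() for w in workers]
    print(f"Multiprocessing: {time.time() - start:.2f} seconds")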
14. Explain shallow copy versus deep copy
This question tests your understanding of how objects are referenced and copied in Python, which is critical for avoiding subtle bugs.
How to solve it
In Python, a shallow copy creates a new object but inserts references to the objects found in the original.
A deep copy creates a completely independent clone with all nested objects duplicated.
Here’s an example:
import copy
# Original nested structure
original = [1, 2, [3, 4]]
# Shallow copy
shallow = copy.copy(original)
# Deep copy
deep = copy.deepcopy(original)
# Modify the nested list in the original
original[2][0] = 'X'
print(f"Original: {original}") # [1, 2, ['X', 4]]
print(f"Shallow copy: {shallow}") # [1, 2, ['X', 4]] - also affected!
print(f"Deep copy: {deep}") # [1, 2, [3, 4]] - unchanged
The assignment (b = a) doesn’t create any copies — it just builds another reference to the same object.
Shallow copies are sufficient for flat structures with immutable objects, but nested structures with mutable objects require deep copies to be fully independent.
15. Describe how range works in Python 3.
This question checks your knowledge of Python’s memory-efficient sequence types and the evolution between Python 2 and 3.
How to solve it
Explain that range in Python 3 is a memory-efficient sequence type.
Unlike Python 2’s range(), which created a full list in memory, Python 3’s range produces values on demand, similar to Python 2’s xrange().
# This doesn’t create a million-element list
big_range = range(1_000_000)
# Instead, it creates a range object that generates numbers on-the-fly
print(type(big_range)) # <class 'range'>
print(big_range[0]) # 0
print(big_range[1]) # 1
print(10 in big_range) # True
# Only creates a list when explicitly requested
big_list = list(big_range) # Now a list is created
You can also highlight that range objects support indexing, slicing, containment checks, and length calculation without storing all the values.
They’re immutable and can be compared for equality (range(0, 10, 2) == range(0, 10, 2) is True).
The three forms of range are:
- range(stop): values from 0 to stop-1
- range(start, stop): values from start to stop-1
- range(start, stop, step): values from start to stop-1, incrementing by step
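Small illustrative examples of each form:
print(list(range(5)))           # [0, 1, 2, 3, 4]
print(list(range(2, 6)))        # [2, 3, 4, 5]
print(list(range(0, 10, 3)))    # [0, 3, 6, 9]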
16. Differentiate pass, continue, and break
This question tests your understanding of Python’s flow control statements, which are essential for writing clean loop logic.
How to solve it
Explain each statement with a clear example:
pass: A no-operation placeholder that does nothing. It’s used when syntax requires a statement, but no action is needed.
def not_implemented_yet():
    # Will implement this later
    pass

class EmptyClass:
    pass
continue: Skips the rest of the current loop iteration and jumps to the next iteration.
for num in range(10):
    if num % 2 == 0:  # If number is even
        continue      # Skip the rest of this iteration
    print(num)        # Only odd numbers get printed: 1, 3, 5, 7, 9
break: Exits the loop entirely, skipping all remaining iterations.
for num in range(10):
    if num > 5:
        break      # Exit the loop completely when num > 5
    print(num)     # Only 0, 1, 2, 3, 4, 5 get printed
Finally, here’s an example where all three statements might be used in a single function to show their differences:
def process_data(items):
    if not items:
        pass  # Nothing to process

    for item in items:
        if item.is_empty():
            continue  # Skip empty items
        if item.is_corrupted():
            break     # Stop processing if we find corrupted data
        process(item)
17. How do you use try/except/else/finally?
This question checks how well you understand error handling in Python, which helps you write more reliable code.
How to solve it
Python’s try/except structure has two optional additional clauses: else and finally.
try:
    # Code that might raise an exception
    value = int(user_input)
except ValueError as e:
    # Runs if a ValueError occurs
    print(f"That's not a valid number: {e}")
except (TypeError, KeyError) as e:
    # You can catch multiple exception types together
    print(f"A TypeError or KeyError occurred: {e}")
else:
    # Runs if NO exceptions occur in the try block
    print(f"You entered the number: {value}")
finally:
    # ALWAYS runs, whether an exception occurred or not
    print("Cleanup code goes here")
Here’s how you can explain the execution flow:
- The try block is executed first
- If an exception occurs, Python looks for a matching except block
- If no exception occurs, the else block runs
- The finally block always runs last, regardless of what happened
Finally, make sure to highlight best practices:
- Catch specific exceptions rather than the broad Exception class
- Use the as keyword to access the exception object
- Put cleanup code in finally blocks
- Use the else clause for code that should run only if no exceptions occurred
18. What are generators and how do they differ from normal functions?
This question tests whether you understand lazy evaluation and Python’s iteration protocol, and why producing values on demand can beat building a whole collection in memory.
How to solve it
Generators are functions that produce a sequence of values over time, rather than computing everything at once and returning a complete collection.
The key distinction is that generator functions use yield instead of return.
When a generator function is called, it doesn’t run its body right away; instead, it returns a generator object that produces values one at a time:
def countdown(n):
    print("Starting countdown")
    while n > 0:
        yield n  # Pause here and return this value
        n -= 1   # Will resume from here on next iteration
    print("Countdown complete")

# Create the generator object
counter = countdown(3)

# Nothing is printed yet because the function hasn't started
print("Before iteration")

# Each iteration continues from where it left off
print(next(counter))  # Prints: Starting countdown, then 3
print(next(counter))  # Prints: 2
print(next(counter))  # Prints: 1

# When exhausted, the generator prints "Countdown complete" and raises StopIteration
# next(counter)

# Or use in a loop (cleaner)
for num in countdown(3):
    print(num)  # Prints: Starting countdown, 3, 2, 1, Countdown complete
The advantages of generators include:
- Memory efficiency: Only one value is in memory at a time
- Lazy evaluation: Values are computed only when needed
- State preservation: The function’s local state is preserved between yields
You can also mention generator expressions — they look like list comprehensions but use parentheses and create values one at a time, only when needed.
# List comprehension (computes all values immediately)
squares_list = [x**2 for x in range(1000)] # Creates a list with 1000 elements
# Generator expression (computes values on demand)
squares_gen = (x**2 for x in range(1000)) # Creates a generator object
19. Explain lambda functions and closures
This question tests your knowledge of functional programming concepts in Python.
How to solve it
Lambda functions are small anonymous functions created with the lambda keyword. They can take any number of arguments but contain only a single expression:
# Normal function
def add(x, y):
    return x + y
# Equivalent lambda function
add_lambda = lambda x, y: x + y
Lambdas are commonly used where a simple function is needed temporarily, like with sorted() or filter():
names = ['Charlie', 'Alice', 'Bob']
sorted_by_length = sorted(names, key=lambda name: len(name))
# ['Bob', 'Alice', 'Charlie']
A closure is a function that remembers values from its enclosing scope even after that scope has finished executing:
def make_multiplier(factor):
    def multiply(number):
        return number * factor  # 'factor' is remembered
    return multiply
double = make_multiplier(2)
triple = make_multiplier(3)
print(double(5)) # 10
print(triple(5)) # 15
Even though make_multiplier has finished executing, the returned functions still remember their respective factor values.
Closures are useful for creating function factories, callbacks with state, and implementing decorators.
20. How would you process an 8 GB text file to find the first non-repeating character?
This question checks if you know how to work with large amounts of data when there isn’t enough memory to load everything at once.
You can’t simply read an 8 GB file into memory in one go; you need to process it in smaller parts.
How to solve it
The key is to stream through the file in chunks rather than loading it all at once.
Make two passes: first to count character frequencies, then to find the first character with a count of 1:
from collections import Counter

def find_first_non_repeating_char(filename):
    # First pass: count all characters
    counter = Counter()
    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            counter.update(line)

    # Second pass: find first character with count 1
    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            for char in line:
                if counter[char] == 1:
                    return char

    return None  # No non-repeating characters
This approach uses O(1) memory relative to file size (just storing character counts) and O(n) time (reading the file twice).
For even larger files, you could use a database or distributed processing system.
21. Why should API keys be stored in environment variables? How do you access them?
This question tests your understanding of security best practices when working with sensitive information.
How to solve it
Storing API keys and other secrets in source code is dangerous because:
- Source code often ends up in version control systems
- It makes keys visible to anyone with code access
- It makes it hard to use different keys in different environments
Store them in environment variables instead.
On Linux/macOS, set them with export KEY=value in the shell. On Windows, use set KEY=value or setx KEY value.
Then, access them in Python with the os module:
import os
# Get the API key from environment
api_key = os.getenv('API_KEY')
# Add a fallback for development
api_key = os.getenv('API_KEY', 'development-key')
# Check if the key exists
if api_key is None:
    raise RuntimeError("API_KEY environment variable not set!")
For development, tools like python-dotenv let you load variables from a .env file (which should be in your .gitignore). Environment variables also work well with containers and cloud services.
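If python-dotenv comes up, a minimal sketch looks like this (assuming the package is installed and a .env file containing API_KEY sits next to the script):
import os
from dotenv import load_dotenv

load_dotenv()                   # Reads key=value pairs from .env into the environment
api_key = os.getenv("API_KEY")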
22. How do you sort a dictionary by its values?
This question tests your ability to use higher-order functions and your understanding of Python’s sorted function.
How to solve it
Dictionaries aren’t inherently sorted, but you can sort their items based on values.
The sorted() function accepts a key function that specifies the sorting criteria:
scores = {'Alice': 90, 'Bob': 75, 'Charlie': 95}
# Sort by values (ascending)
sorted_items = sorted(scores.items(), key=lambda item: item[1])
print(sorted_items) # [('Bob', 75), ('Alice', 90), ('Charlie', 95)]
# Sort by values (descending)
sorted_items_desc = sorted(scores.items(), key=lambda item: item[1], reverse=True)
print(sorted_items_desc) # [('Charlie', 95), ('Alice', 90), ('Bob', 75)]
If you need a dictionary back, convert the sorted items:
# In Python 3.7+, regular dictionaries preserve insertion order
sorted_dict = dict(sorted_items_desc)
print(sorted_dict) # {'Charlie': 95, 'Alice': 90, 'Bob': 75}
For earlier Python versions, use OrderedDict to maintain the sorted order:
from collections import OrderedDict
sorted_dict = OrderedDict(sorted_items_desc)
23. Explain the difference between is and ==
This question checks whether you understand object identity versus value equality, a common source of bugs.
How to solve it
The == operator tests whether two objects have the same value. The is operator tests whether two names refer to the exact same object in memory (same identity):
a = [1, 2, 3]
b = [1, 2, 3]
c = a
print(a == b) # True - they have the same values
print(a is b) # False - they are different objects
print(a is c) # True - they are the same object
Use is when checking for None, True, or False:
if x is None:
    # Correct way to check for None
    pass
Remember to be cautious with numbers and strings. Python may intern some values, making is appear to work:
x = 5
y = 5
print(x is y) # May be True! But don't rely on this behavior
x = 1000
y = 1000
print(x is y) # Probably False
Finally, small integers and some strings are cached (interned) by Python, but this is an implementation detail you shouldn’t depend on.
24. How do you check if two strings are anagrams?
This question tests your string manipulation skills and algorithm knowledge.
How to solve it
Two strings are anagrams (they use the same letters, just arranged differently) if they contain the same characters with the same frequencies, regardless of order.
There are two main approaches:
Approach 1: Sort and compare —
def is_anagram(str1, str2):
    # Remove spaces and lowercase both strings
    s1 = str1.replace(" ", "").lower()
    s2 = str2.replace(" ", "").lower()
    # Sort characters and compare
    return sorted(s1) == sorted(s2)
Approach 2: Count character frequencies —
from collections import Counter

def is_anagram(str1, str2):
    # Remove spaces and lowercase both strings
    s1 = str1.replace(" ", "").lower()
    s2 = str2.replace(" ", "").lower()
    # Compare character counts
    return Counter(s1) == Counter(s2)
The sorting approach has O(n log n) time complexity due to sorting.
The Counter approach has O(n) time complexity, making it more efficient for longer strings.
Both handle case sensitivity and spaces correctly.
Advanced Python interview questions
Finally, let’s review a few sample interview questions that explore Python’s more sophisticated features and design patterns.
They’re often asked in interviews for senior roles or positions requiring deep Python expertise.
25. What is Method Resolution Order (MRO) in multiple inheritance?
This question demonstrates understanding of complex OOP. When classes inherit from multiple parents, Python must determine which method to call when names overlap.
How to solve it
Explain that the MRO is the sequence Python follows when looking up a method in a class hierarchy.
- For single inheritance, this is straightforward – look in the class, then its parent, and so on
- For multiple inheritance, Python uses the C3 linearization algorithm to create a consistent order
You can inspect the MRO using Class.__mro__ or Class.mro().
Show the classic “diamond problem” — when a class inherits from two classes that both inherit from the same parent, creating a diamond-shaped hierarchy that can cause ambiguity about which parent’s method to use.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass
print(D.mro())
# [<class '__main__.D'>, <class '__main__.B'>,
# <class '__main__.C'>, <class '__main__.A'>, <class 'object'>]
The MRO ensures that B (listed first in D’s inheritance) is checked before C, and both are checked before their common parent A. This order is critical for methods like super() to work correctly.
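To show how the MRO drives super(), you can give each class a method and follow the chain of calls (a small illustrative sketch):
class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        return "B -> " + super().greet()

class C(A):
    def greet(self):
        return "C -> " + super().greet()

class D(B, C):
    def greet(self):
        return "D -> " + super().greet()

print(D().greet())   # D -> B -> C -> A, following D's MRO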
Remember to mention that designing sensible class hierarchies helps avoid ambiguity and that composition is often preferred over deep inheritance trees.
26. What are magic (dunder) methods? Give examples
This question tests how well you can customize object behavior and make classes behave like built-in types.
How to solve it
Magic (or “dunder”) methods have names surrounded by double underscores and are invoked implicitly by Python.
They let you define how instances of your class respond to built-in functions and operators.
Common magic methods include:
- __init__: Constructor called when creating an instance
- __repr__ and __str__: Control string representation
- __len__: Allows len(obj) to work
- __getitem__, __setitem__: Enable bracket notation (obj[key])
- __iter__ and __next__: Make an object iterable
- __eq__, __lt__, etc.: Implement comparison operators
You can demonstrate it with a custom container:
class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

    def __len__(self):
        return self.current

# Usage
for n in Countdown(3):
    print(n)  # Prints: 3, 2, 1
Understanding magic methods lets you create intuitive, Pythonic classes that feel like built-in types.
27. How do you write your own context manager?
This question shows deep knowledge of resource management beyond the common with open() example.
Context managers are widely used in database, networking, and concurrency libraries.
How to solve it
A context manager ensures proper setup and cleanup of resources, even if exceptions occur.
There are two ways to create one:
Method 1: Define a class implementing __enter__ and __exit__:
class FileOpener:
    def __init__(self, filename):
        self.filename = filename
        self.file = None

    def __enter__(self):
        self.file = open(self.filename)
        return self.file

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.file:
            self.file.close()
        # Return True to suppress exceptions, False to propagate
        return False

# Usage
with FileOpener('data.txt') as f:
    content = f.read()
Method 2: Use contextlib.contextmanager with a generator function:
from contextlib import contextmanager

@contextmanager
def open_file(name):
    f = open(name)
    try:
        yield f  # Value provided to the with statement
    finally:
        f.close()  # Always runs, even if exceptions occur

# Usage
with open_file('data.txt') as f:
    content = f.read()
Context managers are perfect for resources that need proper cleanup like files, network connections, locks, and database transactions.
They make sure resources are released promptly and make your code exception-safe.
28. Describe asynchronous programming in Python using async/await
This question tests understanding of modern concurrency patterns, important for network-heavy applications and APIs.
How to solve it
Asynchronous programming in Python allows functions to pause execution while waiting for I/O, letting other tasks run in the meantime.
This improves efficiency for I/O-bound tasks without using threads.
The main components are:
- async def defines a coroutine function
- await pauses execution until an awaited coroutine completes
- An event loop schedules and runs coroutines
import asyncio

async def fetch_data():
    print('Start fetching')
    # Simulates I/O operation like a network request
    await asyncio.sleep(1)
    print('Done fetching')
    return {'data': 42}

async def main():
    result = await fetch_data()
    print(result)

    # Run multiple coroutines concurrently
    results = await asyncio.gather(
        fetch_data(),
        fetch_data(),
        fetch_data()
    )
    print(results)

# Run the event loop
asyncio.run(main())
You can explain that await only works within an async function and that all async functions must be awaited or run by an event loop.
The asyncio.gather() function lets multiple coroutines run concurrently.
Unlike threading, async code runs in a single thread with cooperative multitasking. It’s ideal for I/O-bound tasks like network requests, but doesn’t help with CPU-bound tasks due to the GIL.
29. What is pickling? How do you customize object serialization?
This question examines knowledge of Python’s serialization mechanisms, important for storing objects, caching, and inter-process communication.
How to solve it
Pickling is Python’s built-in serialization method for converting objects to byte streams.
It’s used to save and restore objects between program runs.
Basic usage:
import pickle
# Serialize an object to a file
data = {'a': 1, 'b': [2, 3]}
with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)

# Deserialize from a file
with open('data.pkl', 'rb') as f:
    loaded = pickle.load(f)
For in-memory serialization, use pickle.dumps(obj) and pickle.loads(bytes_object).
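For example, reusing the data dictionary from above:
blob = pickle.dumps(data)       # Serialize to a bytes object in memory
restored = pickle.loads(blob)   # Deserialize back to the original dictionary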
To customize how your objects are pickled, implement special methods:
class Cache:
    def __init__(self):
        self.data = {}
        self._cache = {}  # Temporary data we don't want to pickle

    def __getstate__(self):
        # Return what should be pickled
        state = self.__dict__.copy()
        del state['_cache']  # Don't pickle this
        return state

    def __setstate__(self, state):
        # Restore from pickle
        self.__dict__.update(state)
        self._cache = {}  # Restore with default
Remember to mention that pickle has security implications — unpickling untrusted data can execute arbitrary code.
For cross-language or security-sensitive serialization, you can consider alternatives like JSON or Protocol Buffers.
Bonus: Python interview questions by specialization
To round out your preparation, here are a few targeted questions for specific Python career paths.
Think of this as a quick reference guide for the key topics you should be familiar with in each specialized field.
Data science Python interview topics
Data scientists need to know these core concepts for machine learning and statistical analysis in Python:
- Train-test splits: Understand why we split data (prevent overfitting) and how to use scikit-learn’s train_test_split, setting a random seed for reproducibility (see the sketch after this list).
- Cross-validation: Know how k-fold CV provides more reliable performance estimates by training on multiple data splits.
- Supervised vs unsupervised learning: Differentiate between models that learn from labeled data (classification, regression) versus those that find patterns in unlabeled data (clustering, dimensionality reduction).
- Handling missing data: Be familiar with pandas methods like isna(), dropna(), and fillna(), and know when each approach is appropriate.
- Groupby operations: Understand how to aggregate data by categories using pandas’ groupby() for summary statistics and data analysis.
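For example, the train-test split item above often comes up as a short coding prompt. Here’s a minimal sketch, assuming scikit-learn is installed (X and y are placeholder data):
from sklearn.model_selection import train_test_split

X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]   # Placeholder features
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]                        # Placeholder labels

# Hold out 20% of the data for testing; fix the seed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)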
Data analytics Python interview topics
Data analysts should focus on these data manipulation and insight extraction techniques:
- DataFrame merging: Know different join types (inner, outer, left, right) and how to use pd.merge() or df.merge() to combine datasets.
- Descriptive statistics: Be able to explain what df.describe() shows and how to interpret stats like mean, median, and quartiles to understand distributions.
- Pivot tables: Understand how to reshape data with pd.pivot_table() to create summary views with different dimensions as rows and columns.
- Duplicate handling: Know how to identify and remove duplicates with df.drop_duplicates() and control which duplicates to keep.
- Group statistics: Be comfortable calculating different metrics per group using groupby() with aggregation functions like mean, sum, or custom calculations.
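As a quick illustration of the group statistics item, here’s a minimal pandas sketch (the DataFrame is made up for the example):
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "sales": [100, 150, 200, 250],
})

# Mean and total sales per region
summary = df.groupby("region")["sales"].agg(["mean", "sum"])
print(summary)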
Software engineering Python interview topics
Software engineers using Python should be prepared for these common algorithm and coding challenges:
- Palindrome detection: Create a function to check if strings read the same backward as forward, handling cases and non-alphanumeric characters.
- Finding Nth largest elements: Write code to find elements like the second-largest number in a list, using either sorting or linear search approaches.
- Binary search implementation: Understand this O(log n) algorithm for finding elements in sorted lists by repeatedly dividing the search interval (see the sketch after this list).
- Word frequency counting: Use collections.Counter to efficiently count occurrences in text data and find most common items.
- Removing duplicates: Know efficient ways to remove duplicates from a list while maintaining the original order, using dictionaries or sets.
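Here’s a minimal sketch of the binary search item above (illustrative only):
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid        # Found: return the index
        if sorted_items[mid] < target:
            low = mid + 1     # Search the right half
        else:
            high = mid - 1    # Search the left half
    return -1                 # Not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3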
Nail your Python interview with Mimo
Ready to put your new knowledge into practice?
Mimo’s Python Developer career path can help you master these concepts and more, preparing you for your next interview with confidence.
Here’s what you get:
- Learn step by step: Master Python fundamentals through bite-sized, interactive lessons designed for daily progress
- Build while you learn: Apply concepts immediately through coding challenges and guided projects that reinforce interview topics
- Create a portfolio: Develop 8 real-world Python projects that build your GitHub portfolio while reinforcing interview skills
- Learn at your pace: Fit learning around your schedule with accessible, structured content
