Problem-solving patterns, templates, complexity heuristics and interview speed strategies.
LeetCode Patterns: Interview Prep
⚡
Two Pointers
ESSENTIAL
The two pointers technique uses two indices to traverse a data structure (usually array or string) simultaneously. One pointer starts at the beginning, the other at the end (or both at the beginning). This reduces O(n²) brute force to O(n) by eliminating unnecessary nested loops.
When to Use
Sorted array pairs: finding pairs with a target sum, two-sum variants
Palindrome check: compare characters from both ends
Container problems: maximize area/volume between boundaries
Remove/merge in-place: deduplication, comparison without extra space
3Sum / k-Sum: fix one element, use two pointers for the remaining pair
Pattern Variants
Opposite ends: one from left, one from right (e.g. Container With Most Water)
Same direction: both start at 0, one slow / one fast
Fast-slow (cycle): Floyd's tortoise and hare for cycle detection
Sorted pair search: binary-search-like narrowing in a sorted array
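The fast-slow variant has no example in this section, so here is a minimal sketch: Floyd's tortoise and hare for Linked List Cycle (#141), with a small ListNode class assumed for illustration.

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

# ── Linked List Cycle (#141) ──
# Fast moves two steps, slow moves one; they meet iff a cycle exists.
# Time: O(n) | Space: O(1)
def has_cycle(head: ListNode) -> bool:
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```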
two_pointers.py
# ── 1. Two Sum II - Input Array Is Sorted (#167) ──
# Given sorted array, find two numbers that add up to target.
# Time: O(n) | Space: O(1)
def two_sum_ii(numbers: list[int], target: int) -> list[int]:
    left, right = 0, len(numbers) - 1
    while left < right:
        total = numbers[left] + numbers[right]
        if total == target:
            return [left + 1, right + 1]  # 1-indexed
        elif total < target:
            left += 1
        else:
            right -= 1
    return [-1, -1]
# ── 2. Valid Palindrome (#125) ──
# Check if string is a palindrome considering only alphanumeric chars.
# Time: O(n) | Space: O(1)
def is_palindrome(s: str) -> bool:
    left, right = 0, len(s) - 1
    while left < right:
        while left < right and not s[left].isalnum():
            left += 1
        while left < right and not s[right].isalnum():
            right -= 1
        if s[left].lower() != s[right].lower():
            return False
        left += 1
        right -= 1
    return True
# ── 3. Container With Most Water (#11) ──
# Find max area between two vertical lines (width × min height).
# Time: O(n) | Space: O(1)
def max_area(height: list[int]) -> int:
    left, right = 0, len(height) - 1
    max_water = 0
    while left < right:
        width = right - left
        h = min(height[left], height[right])
        max_water = max(max_water, width * h)
        # Move the shorter line inward (taller line might give bigger area)
        if height[left] < height[right]:
            left += 1
        else:
            right -= 1
    return max_water
two_pointers_advanced.py
# ── 4. 3Sum (#15) ──
# Find all unique triplets that sum to zero.
# Time: O(n²) | Space: O(1) excluding output
def three_sum(nums: list[int]) -> list[list[int]]:
    res = []
    nums.sort()
    n = len(nums)
    for i in range(n - 2):
        # Skip duplicates for the first element
        if i > 0 and nums[i] == nums[i - 1]:
            continue
        # Early termination: smallest possible sum > 0
        if nums[i] + nums[i + 1] + nums[i + 2] > 0:
            break
        # Early termination: largest possible sum < 0
        if nums[i] + nums[n - 2] + nums[n - 1] < 0:
            continue
        left, right = i + 1, n - 1
        while left < right:
            total = nums[i] + nums[left] + nums[right]
            if total < 0:
                left += 1
            elif total > 0:
                right -= 1
            else:
                res.append([nums[i], nums[left], nums[right]])
                # Skip duplicates
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1
                right -= 1
    return res
# ── 5. Remove Duplicates from Sorted Array (#26) ──
# Remove duplicates in-place and return new length.
# Time: O(n) | Space: O(1)
def remove_duplicates(nums: list[int]) -> int:
    if not nums:
        return 0
    write = 1  # slow pointer
    for read in range(1, len(nums)):  # fast pointer
        if nums[read] != nums[read - 1]:
            nums[write] = nums[read]
            write += 1
    return write
# ── 6. Move Zeroes (#283) ──
# Move all zeros to the end while maintaining order of non-zeros.
# Time: O(n) | Space: O(1)
def move_zeroes(nums: list[int]) -> None:
    write = 0
    for read in range(len(nums)):
        if nums[read] != 0:
            nums[write], nums[read] = nums[read], nums[write]
            write += 1
# ── 7. Trapping Rain Water (#42) ──
# Compute how much water can be trapped after raining.
# Time: O(n) | Space: O(1)
def trap(height: list[int]) -> int:
    left, right = 0, len(height) - 1
    left_max, right_max = 0, 0
    water = 0
    while left < right:
        if height[left] < height[right]:
            if height[left] >= left_max:
                left_max = height[left]
            else:
                water += left_max - height[left]
            left += 1
        else:
            if height[right] >= right_max:
                right_max = height[right]
            else:
                water += right_max - height[right]
            right -= 1
    return water
💡
Key insight: When working with sorted arrays, two pointers can replace nested loops. For opposite-end pointers, always move the pointer at the boundary that constrains the solution (the shorter line in Container, the smaller sum in Two Sum).
| Problem | # | Difficulty | Key Idea | Complexity |
|---|---|---|---|---|
| Two Sum II | 167 | Medium | Sorted array → opposite pointers | O(n) |
| Valid Palindrome | 125 | Easy | Compare alphanumeric from ends | O(n) |
| Container With Most Water | 11 | Medium | Move shorter boundary inward | O(n) |
| 3Sum | 15 | Medium | Sort + fix one + two pointers | O(n²) |
| Remove Duplicates | 26 | Easy | Slow/fast same-direction pointers | O(n) |
| Trapping Rain Water | 42 | Hard | Two max boundaries converging | O(n) |
| Move Zeroes | 283 | Easy | Write pointer for non-zeros | O(n) |
| 3Sum Closest | 16 | Medium | Same as 3Sum, track closest diff | O(n²) |
| 4Sum | 18 | Medium | Sort + fix two + two pointers | O(n³) |
| Valid Triangle Number | 611 | Medium | Sort + two pointers for triangle | O(n²) |
🪟
Sliding Window
ESSENTIAL
Sliding window is an optimization over brute force for problems involving contiguous subarrays or substrings. Instead of recomputing for every window position, we "slide" the window by adding the new element and removing the old one — reducing O(n²) to O(n).
When to Use
Contiguous subarray: max/min sum, product, length of subarray
Substring problems: longest substring with constraints
Fixed-size window: average, sum over k consecutive elements
Anagrams/permutations: check if a substring is an anagram of a pattern
Template — Variable Window
python
left = 0
for right in range(len(arr)):
    # Expand: add arr[right] to window
    # Shrink: while window invalid:
    #     remove arr[left], left += 1
    # Update answer
sliding_window.py
# ── 1. Best Time to Buy and Sell Stock (#121) ──
# Max profit from one buy + one sell. Track min price so far.
# Time: O(n) | Space: O(1)
def max_profit(prices: list[int]) -> int:
    min_price = float('inf')
    max_profit = 0
    for price in prices:
        min_price = min(min_price, price)
        max_profit = max(max_profit, price - min_price)
    return max_profit
# ── 2. Longest Substring Without Repeating Characters (#3) ──
# Find the length of the longest substring without duplicate chars.
# Time: O(n) | Space: O(min(n,m)) — m = charset size
def length_of_longest_substring(s: str) -> int:
    char_index = {}  # char -> last seen index
    left = 0
    max_len = 0
    for right, ch in enumerate(s):
        if ch in char_index and char_index[ch] >= left:
            left = char_index[ch] + 1  # shrink from left
        char_index[ch] = right
        max_len = max(max_len, right - left + 1)
    return max_len
# ── 3. Minimum Window Substring (#76) ──
# Smallest substring of s containing all chars of t.
# Time: O(|s| + |t|) | Space: O(|charset|)
from collections import Counter
def min_window(s: str, t: str) -> str:
    if not s or not t or len(t) > len(s):
        return ""
    need = Counter(t)
    missing = len(t)
    left = start = 0
    min_len = float('inf')
    for right, ch in enumerate(s):
        if ch in need:
            need[ch] -= 1
            if need[ch] >= 0:
                missing -= 1
        # When all chars found, try to shrink from left
        while missing == 0:
            if right - left + 1 < min_len:
                min_len = right - left + 1
                start = left
            if s[left] in need:
                need[s[left]] += 1
                if need[s[left]] > 0:
                    missing += 1
            left += 1
    return "" if min_len == float('inf') else s[start:start + min_len]
sliding_window_advanced.py
# ── 4. Permutation in String (#567) ──
# Check if s2 contains a permutation of s1.
# Time: O(n) | Space: O(1) — fixed 26 letters
from collections import Counter

def check_inclusion(s1: str, s2: str) -> bool:
    need = Counter(s1)
    missing = len(s1)
    left = 0
    for right, ch in enumerate(s2):
        if ch in need:
            need[ch] -= 1
            if need[ch] >= 0:
                missing -= 1
        # Shrink window to size of s1
        if right - left + 1 == len(s1):
            if missing == 0:
                return True
            if s2[left] in need:
                need[s2[left]] += 1
                if need[s2[left]] > 0:
                    missing += 1
            left += 1
    return False
# ── 5. Maximum Average Subarray I (#643) ──
# Find contiguous subarray of length k with max average.
# Time: O(n) | Space: O(1)
def find_max_average(nums: list[int], k: int) -> float:
    window_sum = sum(nums[:k])
    max_sum = window_sum
    for i in range(k, len(nums)):
        window_sum += nums[i] - nums[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum / k
# ── 6. Longest Repeating Character Replacement (#424) ──
# Replace at most k chars to get longest repeating substring.
# Time: O(n) | Space: O(1)
def character_replacement(s: str, k: int) -> int:
    count = {}
    max_count = 0
    left = 0
    max_len = 0
    for right in range(len(s)):
        count[s[right]] = count.get(s[right], 0) + 1
        max_count = max(max_count, count[s[right]])
        # Window valid if: window_size - max_count <= k
        while (right - left + 1) - max_count > k:
            count[s[left]] -= 1
            left += 1
        max_len = max(max_len, right - left + 1)
    return max_len
# ── 7. Subarray Product Less Than K (#713) ──
# Count subarrays where product of all elements < k.
# Time: O(n) | Space: O(1)
def num_subarray_product_less_than_k(nums: list[int], k: int) -> int:
    if k <= 1:
        return 0
    product = 1
    left = 0
    count = 0
    for right in range(len(nums)):
        product *= nums[right]
        while product >= k:
            product //= nums[left]
            left += 1
        count += right - left + 1  # all subarrays ending at right
    return count
⚠️
Common mistake: Forgetting to properly shrink the window. The key is: when the window becomes invalid, move left forward until it's valid again. Use a while loop, not if, to fully shrink.
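Minimum Size Subarray Sum (#209) appears in the table but not in the code above; it is the textbook while-shrink case, sketched here:

```python
# ── Minimum Size Subarray Sum (#209) ──
# Shortest contiguous subarray with sum >= target (0 if none exists).
# Time: O(n) | Space: O(1)
def min_sub_array_len(target: int, nums: list[int]) -> int:
    left = 0
    window_sum = 0
    best = float('inf')
    for right in range(len(nums)):
        window_sum += nums[right]
        # while, not if: keep shrinking as long as the window stays valid
        while window_sum >= target:
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float('inf') else best
```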
| Problem | # | Difficulty | Key Idea | Complexity |
|---|---|---|---|---|
| Best Time Buy/Sell Stock | 121 | Easy | Track min price, max diff | O(n) |
| Longest Substring w/o Repeating | 3 | Medium | Hash map of last indices | O(n) |
| Minimum Window Substring | 76 | Hard | Counter + missing count shrink | O(n) |
| Permutation in String | 567 | Medium | Fixed-size window + counter | O(n) |
| Max Average Subarray | 643 | Easy | Fixed window, add right remove left | O(n) |
| Longest Repeating Char Replace | 424 | Medium | Max count of one char in window | O(n) |
| Subarray Product < K | 713 | Medium | Divide product to shrink | O(n) |
| Sliding Window Maximum | 239 | Hard | Monotonic deque (see Stack/Queue) | O(n) |
| Minimum Size Subarray Sum | 209 | Medium | Shrink when sum >= target | O(n) |
| Find All Anagrams | 438 | Medium | Fixed window + counter comparison | O(n) |
🔍
Binary Search
ESSENTIAL
Binary search repeatedly halves the search space, achieving O(log n) time on sorted data. Beyond searching for an element, the pattern applies to any monotonic function where you can determine which half contains the answer — including "binary search on answer."
When to Use
Sorted array lookup: find target, insertion point, boundaries
Rotated sorted array: search in pivoted/rotated array
Binary search on answer: minimize-max, maximize-min problems
Peak finding: find local/global maxima or minima
Monotonic predicate: first/last true in a boolean array
Avoiding Bugs
Overflow: use mid = left + (right - left) // 2
Infinite loop: ensure each iteration narrows the range (left = mid + 1 or right = mid - 1)
Boundary: be explicit about inclusive/exclusive bounds
Template 1: while left <= right; return on match, shrink past mid (mid ± 1)
Template 2: while left < right; keep mid as a candidate, shrink to mid
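A sketch of both templates side by side: template 1 finds an exact target, while template 2 is written here as a lower-bound search (first index where the value is >= target); the function names are illustrative.

```python
# Template 1: search space [left, right], exclude mid after checking it.
def template_1(nums: list[int], target: int) -> int:
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

# Template 2: search space [left, right), mid survives as a candidate.
# Returns the first index with nums[i] >= target (== len(nums) if none).
def template_2(nums: list[int], target: int) -> int:
    left, right = 0, len(nums)
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] < target:
            left = mid + 1
        else:
            right = mid
    return left
```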
binary_search.py
# ── 1. Binary Search (#704) ──
# Standard binary search in sorted array.
# Time: O(log n) | Space: O(1)
def binary_search(nums: list[int], target: int) -> int:
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
# ── 2. Search in Rotated Sorted Array (#33) ──
# Array rotated at unknown pivot. Find target in O(log n).
# Time: O(log n) | Space: O(1)
def search_rotated(nums: list[int], target: int) -> int:
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return mid
        # Determine which half is sorted
        if nums[left] <= nums[mid]:  # Left half sorted
            if nums[left] <= target < nums[mid]:
                right = mid - 1
            else:
                left = mid + 1
        else:  # Right half sorted
            if nums[mid] < target <= nums[right]:
                left = mid + 1
            else:
                right = mid - 1
    return -1
# ── 3. Find Peak Element (#162) ──
# Find a peak element (greater than neighbors).
# Time: O(log n) | Space: O(1)
def find_peak_element(nums: list[int]) -> int:
    left, right = 0, len(nums) - 1
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] < nums[mid + 1]:
            left = mid + 1  # peak is to the right
        else:
            right = mid  # peak is at mid or to the left
    return left
binary_search_advanced.py
# ── 4. Search Insert Position (#35) ──
# Return index where target would be inserted to maintain order.
# Time: O(log n) | Space: O(1)
def search_insert(nums: list[int], target: int) -> int:
    left, right = 0, len(nums)
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] < target:
            left = mid + 1
        else:
            right = mid
    return left  # left == right, insertion point
# ── 5. Koko Eating Bananas (#875) ──
# Find minimum eating speed to eat all piles within h hours.
# Binary search on answer: the speed k is monotonic.
# Time: O(n log M) | Space: O(1) — M = max(piles)
import math

def min_eating_speed(piles: list[int], h: int) -> int:
    lo, hi = 1, max(piles)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        hours = sum(math.ceil(p / mid) for p in piles)
        if hours <= h:
            hi = mid  # can eat slower (or same)
        else:
            lo = mid + 1  # must eat faster
    return lo
# ── 6. Find Minimum in Rotated Sorted Array (#153) ──
# Time: O(log n) | Space: O(1)
def find_min_rotated(nums: list[int]) -> int:
    left, right = 0, len(nums) - 1
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] > nums[right]:
            left = mid + 1  # min is in right half
        else:
            right = mid  # min is at mid or in left half
    return nums[left]
# ── 7. Median of Two Sorted Arrays (#4) ──
# Find median of two sorted arrays in O(log(min(m,n))).
# Time: O(log(min(m,n))) | Space: O(1)
def find_median_sorted_arrays(nums1: list[int], nums2: list[int]) -> float:
    if len(nums1) > len(nums2):
        nums1, nums2 = nums2, nums1
    m, n = len(nums1), len(nums2)
    lo, hi = 0, m
    while lo <= hi:
        p1 = (lo + hi) // 2
        p2 = (m + n + 1) // 2 - p1
        l1 = float('-inf') if p1 == 0 else nums1[p1 - 1]
        r1 = float('inf') if p1 == m else nums1[p1]
        l2 = float('-inf') if p2 == 0 else nums2[p2 - 1]
        r2 = float('inf') if p2 == n else nums2[p2]
        if l1 <= r2 and l2 <= r1:
            if (m + n) % 2 == 0:
                return (max(l1, l2) + min(r1, r2)) / 2
            return float(max(l1, l2))
        elif l1 > r2:
            hi = p1 - 1
        else:
            lo = p1 + 1
    return 0.0
💡
Binary search on answer: When you need to find the minimum or maximum value satisfying a condition, binary search the answer space. Define a can(x) predicate that's monotonic (False → True or True → False), then binary search on x.
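One way to package that idea is a generic first_true helper (the name is illustrative, not a library function), assuming can is monotone False → True over the integer range:

```python
from typing import Callable

def first_true(lo: int, hi: int, can: Callable[[int], bool]) -> int:
    # Smallest x in [lo, hi] with can(x) True; assumes a monotone predicate.
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if can(mid):
            hi = mid   # mid works; a smaller x might too
        else:
            lo = mid + 1  # mid fails; answer is strictly larger
    return lo

# e.g. Koko (#875) reduces to first_true(1, max(piles), can_finish_at_speed)
```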
| Problem | # | Difficulty | Key Idea | Complexity |
|---|---|---|---|---|
| Binary Search | 704 | Easy | Standard template | O(log n) |
| Search Insert Position | 35 | Easy | Find first >= target | O(log n) |
| Search in Rotated Array | 33 | Medium | Identify sorted half | O(log n) |
| Find Min in Rotated | 153 | Medium | Compare mid with right | O(log n) |
| Find Peak Element | 162 | Medium | Compare mid with mid+1 | O(log n) |
| Koko Eating Bananas | 875 | Medium | Binary search on answer | O(n log M) |
| Median of Two Sorted | 4 | Hard | Partition-based search | O(log min(m,n)) |
| Split Array Largest Sum | 410 | Hard | Binary search on max sum | O(n log S) |
| Kth Smallest in Sorted Matrix | 378 | Medium | Binary search on value | O(n log(max-min)) |
| Capacity To Ship Packages | 1011 | Medium | Binary search on capacity | O(n log M) |
🌳
BFS & DFS
GRAPH
Breadth-First Search (BFS) explores level by level using a queue, guaranteeing the shortest path in unweighted graphs. Depth-First Search (DFS) explores as deep as possible before backtracking, using recursion or a stack — ideal for cycle detection, topological sort, and connected components.
Word transformation: BFS with character-level neighbors
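Bare-bones versions of both traversals on an adjacency list, as a reference sketch (the problem solutions below inline these ideas into grids and implicit graphs):

```python
from collections import deque

def bfs(graph: dict[int, list[int]], start: int) -> list[int]:
    # Level-by-level traversal via a queue; returns visit order.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

def dfs(graph: dict[int, list[int]], start: int, visited=None) -> list[int]:
    # Recursive traversal; goes as deep as possible before backtracking.
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for nb in graph[start]:
        if nb not in visited:
            order.extend(dfs(graph, nb, visited))
    return order
```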
bfs_dfs.py
# ── 1. Number of Islands (#200) ──
# Count connected '1's (islands) in a 2D grid.
# Time: O(m×n) | Space: O(m×n) worst case
from collections import deque
def num_islands(grid: list[list[str]]) -> int:
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    count = 0

    def bfs(r, c):
        queue = deque([(r, c)])
        grid[r][c] = '0'
        while queue:
            cr, cc = queue.popleft()
            for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
                nr, nc = cr + dr, cc + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '1':
                    grid[nr][nc] = '0'
                    queue.append((nr, nc))

    # DFS alternative:
    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] == '0':
            return
        grid[r][c] = '0'
        dfs(r + 1, c); dfs(r - 1, c); dfs(r, c + 1); dfs(r, c - 1)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                bfs(r, c)  # or dfs(r, c)
                count += 1
    return count
# ── 2. Clone Graph (#133) ──
# Deep copy an undirected graph (nodes with neighbors).
# Time: O(V + E) | Space: O(V)
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors else []

def clone_graph(node: Node) -> Node:
    if not node:
        return None
    clones = {}  # original -> clone

    def dfs(n):
        if n in clones:
            return clones[n]
        clone = Node(n.val)
        clones[n] = clone
        for neighbor in n.neighbors:
            clone.neighbors.append(dfs(neighbor))
        return clone

    return dfs(node)

# BFS version:
def clone_graph_bfs(node: Node) -> Node:
    if not node:
        return None
    clones = {node: Node(node.val)}
    queue = deque([node])
    while queue:
        curr = queue.popleft()
        for neighbor in curr.neighbors:
            if neighbor not in clones:
                clones[neighbor] = Node(neighbor.val)
                queue.append(neighbor)
            clones[curr].neighbors.append(clones[neighbor])
    return clones[node]
bfs_dfs_advanced.py
# ── 3. Course Schedule (#207) ──
# Can you finish all courses given prerequisites? (Cycle detection)
# Time: O(V + E) | Space: O(V + E)
from collections import deque
def can_finish(num_courses: int, prerequisites: list[list[int]]) -> bool:
    # Build graph + in-degree
    graph = [[] for _ in range(num_courses)]
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        graph[prereq].append(course)
        in_degree[course] += 1
    # BFS: start with nodes that have no prerequisites
    queue = deque([i for i in range(num_courses) if in_degree[i] == 0])
    completed = 0
    while queue:
        node = queue.popleft()
        completed += 1
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    return completed == num_courses  # False if a cycle exists
# ── 4. Word Ladder (#127) ──
# Shortest transformation from beginWord to endWord, one letter at a time.
# Time: O(M² × N) | Space: O(M × N) — M = word length, N = words
from collections import deque
def ladder_length(beginWord: str, endWord: str, wordList: list[str]) -> int:
    word_set = set(wordList)
    if endWord not in word_set:
        return 0
    queue = deque([(beginWord, 1)])
    visited = {beginWord}
    while queue:
        word, level = queue.popleft()
        for i in range(len(word)):
            for ch in 'abcdefghijklmnopqrstuvwxyz':
                next_word = word[:i] + ch + word[i + 1:]
                if next_word == endWord:
                    return level + 1
                if next_word in word_set and next_word not in visited:
                    visited.add(next_word)
                    queue.append((next_word, level + 1))
    return 0
# ── 5. Max Area of Island (#695) ──
# Find maximum area of an island in grid (largest connected component of 1s).
# Time: O(m×n) | Space: O(m×n) recursion stack
def max_area_of_island(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    max_area = 0

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] == 0:
            return 0
        grid[r][c] = 0  # mark visited
        return 1 + dfs(r + 1, c) + dfs(r - 1, c) + dfs(r, c + 1) + dfs(r, c - 1)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                max_area = max(max_area, dfs(r, c))
    return max_area
# ── 6. Surrounded Regions (#130) ──
# Flip 'O's not on the border to 'X' (capture).
# Time: O(m×n) | Space: O(m×n)
def solve(board: list[list[str]]) -> None:
    if not board:
        return
    rows, cols = len(board), len(board[0])

    def dfs(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or board[r][c] != 'O':
            return
        board[r][c] = '#'  # mark as safe
        dfs(r + 1, c); dfs(r - 1, c); dfs(r, c + 1); dfs(r, c - 1)

    # Mark border-connected 'O's
    for r in range(rows):
        for c in range(cols):
            if (r == 0 or r == rows - 1 or c == 0 or c == cols - 1) and board[r][c] == 'O':
                dfs(r, c)
    # Flip remaining 'O' to 'X', restore '#' to 'O'
    for r in range(rows):
        for c in range(cols):
            if board[r][c] == 'O':
                board[r][c] = 'X'
            elif board[r][c] == '#':
                board[r][c] = 'O'
⚠️
Grid DFS/BFS pattern: For 2D grid problems, always check bounds 0 <= r < rows, 0 <= c < cols before accessing. Mark cells as visited by changing their value (e.g., '0' or a visited set) to avoid revisiting.
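Rotting Oranges (#994), listed in the table but not shown above, is the standard multi-source BFS: seed the queue with every rotten cell at once, then expand in time steps. A sketch:

```python
from collections import deque

# ── Rotting Oranges (#994) ──
# 0 = empty, 1 = fresh, 2 = rotten; each minute rot spreads 4-directionally.
# Time: O(m×n) | Space: O(m×n)
def oranges_rotting(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    queue = deque()
    fresh = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                queue.append((r, c, 0))  # all sources start in the queue
            elif grid[r][c] == 1:
                fresh += 1
    minutes = 0
    while queue:
        r, c, t = queue.popleft()
        minutes = max(minutes, t)
        for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                grid[nr][nc] = 2
                fresh -= 1
                queue.append((nr, nc, t + 1))
    return minutes if fresh == 0 else -1
```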
| Problem | # | Difficulty | Key Idea | Complexity |
|---|---|---|---|---|
| Number of Islands | 200 | Medium | BFS/DFS flood fill | O(m×n) |
| Clone Graph | 133 | Medium | DFS/BFS + hashmap clones | O(V+E) |
| Course Schedule | 207 | Medium | Topological sort, cycle detection | O(V+E) |
| Word Ladder | 127 | Hard | BFS + word generation | O(M²×N) |
| Max Area of Island | 695 | Medium | DFS flood fill, track size | O(m×n) |
| Surrounded Regions | 130 | Medium | Border DFS + flip | O(m×n) |
| Rotting Oranges | 994 | Medium | Multi-source BFS, time steps | O(m×n) |
| Graph Valid Tree | 261 | Medium | BFS/DFS, check no cycle + connected | O(V+E) |
| Course Schedule II | 210 | Medium | Topological order using BFS | O(V+E) |
| Pacific Atlantic Water Flow | 417 | Medium | Reverse DFS from oceans | O(m×n) |
🧩
Dynamic Programming
CORE
Dynamic Programming (DP) solves problems by breaking them into overlapping subproblems, solving each subproblem once, and caching results. The key steps are: (1) define the state, (2) find the recurrence relation, (3) establish base cases, and (4) determine the iteration order.
DP Problem Categories
| Type | Pattern | Example |
|---|---|---|
| 1D Linear | dp[i] = f(dp[i-1], dp[i-2], ...) | Climbing Stairs, House Robber |
| 2D Table | dp[i][j] = f(dp[i-1][j], dp[i][j-1], ...) | LCS, Edit Distance, Grid Paths |
| Unbounded Knapsack | Complete coin change, unlimited items | Coin Change, Word Break |
| 0/1 Knapsack | Each item used at most once | Knapsack, Partition Equal Subset |
| Interval DP | dp[i][j] on substrings/ranges | Burst Balloons, Matrix Chain |
| Bitmask DP | dp[mask] for subset selection | TSP, Smallest Sufficient Team |
| Tree DP | Post-order traversal on tree | House Robber III, Diameter |
| State Machine | dp[i][state] with transitions | Best Buy/Sell Stock III |
DP Checklist
Step 1: define the state (what does dp[i] represent?)
Step 2: find the recurrence (how to compute dp[i] from subproblems)
Step 3: establish base cases (smallest subproblems)
Step 4: choose the iteration order (bottom-up: smallest to largest)
Step 5: extract the answer from the dp table
Optimization: space (can you use 1D instead of 2D?)
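Walking the checklist through House Robber (#198) with an explicit table: state dp[i] = max loot from the first i houses; recurrence dp[i] = max(dp[i-1], dp[i-2] + nums[i-1]); bases dp[0] = 0 and dp[1] = nums[0]; iterate left to right; answer dp[n]. The O(1)-space version appears in dp_1d.py; the table form is sketched here.

```python
def rob_table(nums: list[int]) -> int:
    # dp[i] = max loot using only the first i houses
    n = len(nums)
    dp = [0] * (n + 1)
    if n:
        dp[1] = nums[0]
    for i in range(2, n + 1):
        # skip house i, or rob it on top of the best up to i-2
        dp[i] = max(dp[i - 1], dp[i - 2] + nums[i - 1])
    return dp[n]
```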
dp_1d.py
# ── 1. Climbing Stairs (#70) ──
# Count ways to reach top. Can climb 1 or 2 steps.
# dp[i] = dp[i-1] + dp[i-2] → Fibonacci!
# Time: O(n) | Space: O(1)
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    a, b = 1, 2
    for _ in range(3, n + 1):
        a, b = b, a + b
    return b
# ── 2. House Robber (#198) ──
# Max sum of non-adjacent elements.
# dp[i] = max(dp[i-1], dp[i-2] + nums[i])
# Time: O(n) | Space: O(1)
def rob(nums: list[int]) -> int:
    prev2, prev1 = 0, 0
    for num in nums:
        curr = max(prev1, prev2 + num)
        prev2, prev1 = prev1, curr
    return prev1
# ── 3. Coin Change (#322) ──
# Minimum coins to make up amount. Coins can be reused (unbounded).
# dp[i] = min(dp[i], dp[i - coin] + 1)
# Time: O(amount × len(coins)) | Space: O(amount)
def coin_change(coins: list[int], amount: int) -> int:
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
# ── 4. Longest Increasing Subsequence (#300) ──
# Time: O(n log n) using patience sorting + binary search | Space: O(n)
def length_of_lis(nums: list[int]) -> int:
    tails = []  # tails[i] = smallest tail of an LIS of length i+1
    for num in nums:
        left, right = 0, len(tails)
        while left < right:
            mid = left + (right - left) // 2
            if tails[mid] < num:
                left = mid + 1
            else:
                right = mid
        if left == len(tails):
            tails.append(num)
        else:
            tails[left] = num
    return len(tails)
dp_2d.py
# ── 5. Longest Common Subsequence (#1143) ──
# Time: O(m×n) | Space: O(min(m,n)) with optimization
def longest_common_subsequence(text1: str, text2: str) -> int:
    m, n = len(text1), len(text2)
    if m < n:  # ensure n is smaller for space optimization
        text1, text2 = text2, text1
        m, n = n, m
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if text1[i - 1] == text2[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]
# ── 6. Edit Distance (#72) ──
# Min operations (insert, delete, replace) to convert word1 → word2.
# Time: O(m×n) | Space: O(min(m,n))
def min_distance(word1: str, word2: str) -> int:
    m, n = len(word1), len(word2)
    if m < n:
        word1, word2 = word2, word1
        m, n = n, m
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            if word1[i - 1] == word2[j - 1]:
                curr[j] = prev[j - 1]
            else:
                curr[j] = 1 + min(prev[j],      # delete
                                  curr[j - 1],  # insert
                                  prev[j - 1])  # replace
        prev = curr
    return prev[n]
# ── 7. 0/1 Knapsack (#LC variant) ──
# Max value with given weight capacity. Each item at most once.
# Time: O(n × capacity) | Space: O(capacity)
def knapsack(weights: list[int], values: list[int], capacity: int) -> int:
    n = len(weights)
    dp = [0] * (capacity + 1)
    for i in range(n):
        for w in range(capacity, weights[i] - 1, -1):  # reverse to prevent reuse
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]
# ── 8. Word Break (#139) ──
# Can s be segmented into words from dictionary?
# Time: O(n²) | Space: O(n)
def word_break(s: str, word_dict: list[str]) -> bool:
    word_set = set(word_dict)
    dp = [False] * (len(s) + 1)
    dp[0] = True
    for i in range(1, len(s) + 1):
        for j in range(i):
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break
    return dp[len(s)]
dp_advanced.py
# ── 9. Partition Equal Subset Sum (#416) ──
# Can array be partitioned into two subsets with equal sum?
# Time: O(n × sum) | Space: O(sum)
def can_partition(nums: list[int]) -> bool:
    total = sum(nums)
    if total % 2 != 0:
        return False
    target = total // 2
    dp = [False] * (target + 1)
    dp[0] = True
    for num in nums:
        for j in range(target, num - 1, -1):
            dp[j] = dp[j] or dp[j - num]
    return dp[target]
# ── 10. Decode Ways (#91) ──
# Count ways to decode a digit string (A=1, B=2, ..., Z=26).
# Time: O(n) | Space: O(1)
def num_decodings(s: str) -> int:
    if not s or s[0] == '0':
        return 0
    prev2, prev1 = 1, 1  # dp[i-2], dp[i-1]
    for i in range(1, len(s)):
        curr = 0
        if s[i] != '0':
            curr = prev1
        two_digit = int(s[i - 1:i + 1])
        if 10 <= two_digit <= 26:
            curr += prev2
        prev2, prev1 = prev1, curr
    return prev1
# ── 11. Unique Paths (#62) ──
# Number of unique paths from top-left to bottom-right in grid.
# Time: O(m×n) | Space: O(n)
def unique_paths(m: int, n: int) -> int:
    dp = [1] * n
    for _ in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]
    return dp[-1]
# ── 12. Minimum Path Sum (#64) ──
# Min sum path from top-left to bottom-right.
# Time: O(m×n) | Space: O(n)
def min_path_sum(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    dp = [0] * cols
    dp[0] = grid[0][0]
    for j in range(1, cols):
        dp[j] = dp[j - 1] + grid[0][j]
    for i in range(1, rows):
        dp[0] += grid[i][0]
        for j in range(1, cols):
            dp[j] = grid[i][j] + min(dp[j], dp[j - 1])
    return dp[-1]
💡
Space optimization: Most 2D DP can be reduced to 1D by only keeping the current and previous row. For 0/1 Knapsack, iterate capacity in reverse to prevent using the same item twice. For unbounded Knapsack (Coin Change), iterate forward.
| Problem | # | Difficulty | DP Type | Complexity |
|---|---|---|---|---|
| Climbing Stairs | 70 | Easy | 1D linear | O(n) |
| House Robber | 198 | Medium | 1D linear | O(n) |
| Coin Change | 322 | Medium | Unbounded knapsack | O(n×S) |
| Longest Increasing Subseq | 300 | Medium | 1D + binary search | O(n log n) |
| Longest Common Subseq | 1143 | Medium | 2D string | O(m×n) |
| Edit Distance | 72 | Medium | 2D string | O(m×n) |
| 0/1 Knapsack | 416 (var) | Medium | 0/1 knapsack | O(n×W) |
| Word Break | 139 | Medium | 1D string + set | O(n²) |
| Decode Ways | 91 | Medium | 1D linear | O(n) |
| Unique Paths | 62 | Medium | 2D grid | O(m×n) |
| Min Path Sum | 64 | Medium | 2D grid | O(m×n) |
| Partition Equal Subset | 416 | Medium | Subset sum DP | O(n×S) |
🗺️
Graph Patterns
ADVANCED
Beyond basic BFS/DFS, graph problems involve specialized algorithms for shortest paths with weights, topological ordering, connectivity, spanning trees, and graph coloring. Mastering these patterns is essential for hard LeetCode problems and system design.
# ── 1. Dijkstra's Algorithm (#743 / #1631) ──
# Single-source shortest path with non-negative weights.
# Time: O((V + E) log V) with heap | Space: O(V + E)
import heapq
def dijkstra(graph: dict[int, list[tuple[int, int]]], start: int) -> dict[int, int]:
    """graph: {node: [(neighbor, weight), ...]}"""
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    heap = [(0, start)]  # (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale entry
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

# LeetCode 743: Network Delay Time
def network_delay_time(times: list[list[int]], n: int, k: int) -> int:
    graph = {i: [] for i in range(1, n + 1)}
    for u, v, w in times:
        graph[u].append((v, w))
    dist = dijkstra(graph, k)
    max_dist = max(dist.values())
    return max_dist if max_dist != float('inf') else -1
graph_algorithms.py
# ── 2. Topological Sort (Kahn's BFS) ──
# Linear ordering of vertices in a DAG.
# Time: O(V + E) | Space: O(V + E)
from collections import deque
def topological_sort(num_nodes: int, edges: list[list[int]]) -> list[int]:
    graph = [[] for _ in range(num_nodes)]
    in_degree = [0] * num_nodes
    for u, v in edges:
        graph[u].append(v)
        in_degree[v] += 1
    queue = deque([i for i in range(num_nodes) if in_degree[i] == 0])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                queue.append(neighbor)
    return order if len(order) == num_nodes else []  # empty if cycle
# ── 3. Union-Find (Disjoint Set Union) ──
# Efficient dynamic connectivity with path compression + union by rank.
# Time: O(α(n)) amortized per op | Space: O(n)
class UnionFind:
    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.count = n  # number of components

    def find(self, x: int) -> int:
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x: int, y: int) -> bool:
        px, py = self.find(x), self.find(y)
        if px == py:
            return False
        # union by rank
        if self.rank[px] < self.rank[py]:
            px, py = py, px
        self.parent[py] = px
        if self.rank[px] == self.rank[py]:
            self.rank[px] += 1
        self.count -= 1
        return True

    def connected(self, x: int, y: int) -> bool:
        return self.find(x) == self.find(y)

# LeetCode 684: Redundant Connection
def find_redundant_connection(edges: list[list[int]]) -> list[int]:
    n = len(edges)
    uf = UnionFind(n + 1)  # 1-indexed
    for u, v in edges:
        if not uf.union(u, v):
            return [u, v]  # this edge creates a cycle
    return []
graph_mst_bipartite.py
# ── 4. Kruskal's Minimum Spanning Tree ──
# Find MST by greedily adding smallest edges (no cycle).
# Time: O(E log E) | Space: O(V + E)
def kruskal_mst(n: int, edges: list[list[int]]) -> int:
    """
    n: number of nodes
    edges: [weight, u, v]
    Returns: total weight of MST
    """
    edges.sort()  # sort by weight
    uf = UnionFind(n)
    mst_weight = 0
    edges_used = 0
    for w, u, v in edges:
        if uf.union(u, v):
            mst_weight += w
            edges_used += 1
            if edges_used == n - 1:
                break
    return mst_weight
# ── 5. Prim's Minimum Spanning Tree ──
# Grow MST from a starting node using priority queue.
# Time: O((V + E) log V) | Space: O(V + E)
import heapq
def prim_mst(n: int, graph: dict[int, list[tuple[int, int]]]) -> int:
    """graph: {node: [(neighbor, weight), ...]}"""
    visited = set()
    min_heap = [(0, 0)]  # (weight, node), start from node 0
    mst_weight = 0
    while min_heap and len(visited) < n:
        w, node = heapq.heappop(min_heap)
        if node in visited:
            continue
        visited.add(node)
        mst_weight += w
        for neighbor, weight in graph[node]:
            if neighbor not in visited:
                heapq.heappush(min_heap, (weight, neighbor))
    return mst_weight
# ── 6. Bipartite Graph Check (#785) ──
# Can graph be colored with 2 colors? (no odd-length cycle)
# Time: O(V + E) | Space: O(V + E)
from collections import deque

def is_bipartite(graph: list[list[int]]) -> bool:
    n = len(graph)
    color = [0] * n  # 0 = uncolored, 1 = color A, -1 = color B
    for start in range(n):
        if color[start] != 0:
            continue
        queue = deque([start])
        color[start] = 1
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if color[neighbor] == 0:
                    color[neighbor] = -color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return False
    return True
# ── 7. Floyd-Warshall (All-Pairs Shortest Path) ──
# Time: O(V³) | Space: O(V²)
def floyd_warshall(n: int, edges: list[list[int]]) -> list[list[int]]:
    """edges: [u, v, weight]"""
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)  # keep cheapest of any parallel edges
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
🚫
Cycle in undirected graph: If union(u, v) returns False (same component), the edge creates a cycle. For directed graph cycle detection, use topological sort — if the result has fewer nodes than the graph, a cycle exists.
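The directed-graph case in the note above can be sketched with Kahn's algorithm: repeatedly remove zero in-degree nodes, and if some nodes are never removed, they sit on a cycle. A minimal sketch (the function name `has_cycle_directed` is ours, not a LeetCode signature):

```python
from collections import deque

def has_cycle_directed(n: int, edges: list[list[int]]) -> bool:
    adj = [[] for _ in range(n)]
    indegree = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(i for i in range(n) if indegree[i] == 0)
    processed = 0
    while queue:
        node = queue.popleft()
        processed += 1
        for nxt in adj[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    # Nodes never reaching in-degree 0 lie on a cycle
    return processed < n
```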
| Problem | # | Difficulty | Algorithm | Complexity |
| --- | --- | --- | --- | --- |
| Network Delay Time | 743 | Medium | Dijkstra's | O((V+E) log V) |
| Path With Min Effort | 1631 | Medium | Dijkstra's variant | O((V+E) log V) |
| Course Schedule II | 210 | Medium | Topological Sort | O(V+E) |
| Alien Dictionary | 269 | Hard | Topological Sort | O(C) |
| Redundant Connection | 684 | Medium | Union-Find | O(E α(V)) |
| Accounts Merge | 721 | Medium | Union-Find + BFS | O(V E log E) |
| Min Cost Connect Cities | 1135 | Medium | Kruskal MST | O(E log E) |
| Min Cost Connect (Prim) | 1135 | Medium | Prim MST | O((V+E) log V) |
| Is Graph Bipartite? | 785 | Medium | BFS 2-color | O(V+E) |
| Cheapest Flights (K stops) | 787 | Medium | Bellman-Ford variant | O(K×E) |
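The table lists Cheapest Flights Within K Stops (#787) as a Bellman-Ford variant, but no snippet for it appears above. A sketch of that approach — relax all edges k+1 times, working on a snapshot so each round extends paths by at most one edge:

```python
def find_cheapest_price(n: int, flights: list[list[int]],
                        src: int, dst: int, k: int) -> int:
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    # At most k stops = at most k+1 edges
    for _ in range(k + 1):
        tmp = dist[:]  # snapshot: relax against last round only
        for u, v, w in flights:
            if dist[u] != INF and dist[u] + w < tmp[v]:
                tmp[v] = dist[u] + w
        dist = tmp
    return dist[dst] if dist[dst] != INF else -1
```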
🤑
Greedy Algorithms
PATTERN
Greedy algorithms make locally optimal choices at each step, hoping to find a global optimum. They work when the problem has the greedy-choice property (a locally optimal choice leads to a globally optimal solution) and optimal substructure.
# ── 1. Jump Game (#55) ──
# Can you reach the last index? Track farthest reachable.
# Time: O(n) | Space: O(1)
def can_jump(nums: list[int]) -> bool:
    max_reach = 0
    for i in range(len(nums)):
        if i > max_reach:
            return False  # stuck, can't reach position i
        max_reach = max(max_reach, i + nums[i])
        if max_reach >= len(nums) - 1:
            return True
    return True

# ── 2. Jump Game II (#45) ──
# Minimum jumps to reach last index.
# Time: O(n) | Space: O(1)
def jump(nums: list[int]) -> int:
    jumps = 0
    current_end = 0
    farthest = 0
    for i in range(len(nums) - 1):  # no need to check last
        farthest = max(farthest, i + nums[i])
        if i == current_end:
            jumps += 1
            current_end = farthest
    return jumps

# ── 3. Activity Selection (Non-overlapping Intervals) (#435) ──
# Minimum removals to make intervals non-overlapping.
# Greedy: always pick the interval that ends earliest.
# Time: O(n log n) | Space: O(1)
def erase_overlap_intervals(intervals: list[list[int]]) -> int:
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[1])  # sort by end time
    count = 1
    end = intervals[0][1]
    for start, finish in intervals[1:]:
        if start >= end:
            count += 1
            end = finish
    return len(intervals) - count  # removals needed

# ── 4. Gas Station (#134) ──
# Find starting station to complete circuit (if possible).
# Time: O(n) | Space: O(1)
def can_complete_circuit(gas: list[int], cost: list[int]) -> int:
    total_tank, curr_tank, start = 0, 0, 0
    for i in range(len(gas)):
        total_tank += gas[i] - cost[i]
        curr_tank += gas[i] - cost[i]
        if curr_tank < 0:
            start = i + 1  # can't start from any station before i
            curr_tank = 0
    return start if total_tank >= 0 else -1
greedy_advanced.py
# ── 5. Huffman Coding (Greedy on frequency) ──
# Build optimal prefix-free binary code. Minimize weighted path length.
# Time: O(n log n) | Space: O(n)
import heapq
import itertools
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    freq = Counter(text)
    seq = itertools.count()  # tiebreaker: never compare a char with a subtree
    # Leaf: (freq, seq, char) | Internal: (freq, seq, left, right)
    heap = [(count, next(seq), ch) for ch, count in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Merge: new internal node with combined freq
        heapq.heappush(heap, (lo[0] + hi[0], next(seq), lo, hi))
    codes = {}
    def traverse(node, prefix):
        if len(node) == 3:  # leaf node
            codes[node[2]] = prefix or '0'  # single-symbol input still gets a code
            return
        traverse(node[2], prefix + '0')
        traverse(node[3], prefix + '1')
    if heap:
        traverse(heap[0], '')
    return codes
# ── 6. Fractional Knapsack ──
# Maximize value with weight capacity (items can be fractionally taken).
# Time: O(n log n) | Space: O(1)
def fractional_knapsack(weights, values, capacity):
    items = sorted(zip(values, weights), key=lambda x: x[0] / x[1], reverse=True)
    total_value = 0.0
    remaining = capacity
    for value, weight in items:
        if remaining <= 0:
            break
        take = min(weight, remaining)
        total_value += (value / weight) * take
        remaining -= take
    return total_value

# ── 7. Partition Labels (#763) ──
# Partition string into as many parts as possible, each letter in at most one part.
# Time: O(n) | Space: O(1)
def partition_labels(s: str) -> list[int]:
    last_occurrence = {ch: i for i, ch in enumerate(s)}
    result = []
    start = end = 0
    for i, ch in enumerate(s):
        end = max(end, last_occurrence[ch])
        if i == end:
            result.append(end - start + 1)
            start = end + 1
    return result

# ── 8. Candy (#135) ──
# Distribute candies: higher rating child gets more than neighbors.
# Time: O(n) | Space: O(n)
def candy(ratings: list[int]) -> int:
    n = len(ratings)
    candies = [1] * n
    # Left pass
    for i in range(1, n):
        if ratings[i] > ratings[i-1]:
            candies[i] = candies[i-1] + 1
    # Right pass
    for i in range(n - 2, -1, -1):
        if ratings[i] > ratings[i+1]:
            candies[i] = max(candies[i], candies[i+1] + 1)
    return sum(candies)
💡
Interval greedy rule: When dealing with overlapping intervals, sorting by end time is usually optimal for maximizing count (activity selection). Sorting by start time is used for merging intervals (see Intervals section).
| Problem | # | Difficulty | Greedy Idea | Complexity |
| --- | --- | --- | --- | --- |
| Jump Game | 55 | Medium | Track farthest reachable | O(n) |
| Jump Game II | 45 | Medium | Jump at current range boundary | O(n) |
| Non-overlapping Intervals | 435 | Medium | Sort by end, pick earliest | O(n log n) |
| Gas Station | 134 | Medium | Total gas >= cost, find start | O(n) |
| Partition Labels | 763 | Medium | Track last occurrence | O(n) |
| Candy | 135 | Hard | Two-pass (left then right) | O(n) |
| Assign Cookies | 455 | Easy | Sort both, greedy match | O(n log n) |
| Task Scheduler | 621 | Medium | Schedule most frequent first | O(n) |
| Minimum Platforms | (GFG) | Medium | Sort arrivals + departures | O(n log n) |
| Huffman Coding | (Classic) | - | Min heap on frequency | O(n log n) |
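Assign Cookies (#455) appears in the table without a snippet above; the sort-both-and-match idea is short enough to sketch:

```python
def find_content_children(g: list[int], s: list[int]) -> int:
    g.sort()  # children's greed factors, ascending
    s.sort()  # cookie sizes, ascending
    child = 0
    for cookie in s:
        # Give the smallest adequate cookie to the least greedy unmatched child
        if child < len(g) and cookie >= g[child]:
            child += 1
    return child
```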
🔄
Backtracking
RECURSION
Backtracking explores all possible solutions by building candidates incrementally and abandoning ("backtracking") a candidate as soon as it determines it cannot lead to a valid solution. It's essentially DFS with pruning — the key is knowing when to prune.
Template
python
def backtrack(path, choices):
    if is_solution(path):
        result.append(path[:])  # copy!
        return
    for choice in choices:
        if is_valid(choice):
            path.append(choice)
            backtrack(path, new_choices)
            path.pop()  # backtrack (undo choice)
Partition problemsPalindrome partitioning, word search
Expression generationGenerate all possible expressions
backtracking.py
# ── 1. Subsets (#78) ──
# Generate all possible subsets (power set).
# Time: O(2ⁿ) | Space: O(n) recursion depth
def subsets(nums: list[int]) -> list[list[int]]:
    result = []
    def backtrack(start, path):
        result.append(path[:])
        for i in range(start, len(nums)):
            path.append(nums[i])
            backtrack(i + 1, path)
            path.pop()
    backtrack(0, [])
    return result

# ── 2. Permutations (#46) ──
# Generate all permutations of distinct elements.
# Time: O(n! × n) | Space: O(n) recursion depth
def permute(nums: list[int]) -> list[list[int]]:
    result = []
    def backtrack(path, used):
        if len(path) == len(nums):
            result.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            used[i] = True
            path.append(nums[i])
            backtrack(path, used)
            path.pop()
            used[i] = False
    backtrack([], [False] * len(nums))
    return result

# ── 3. Combination Sum (#39) ──
# All unique combinations of candidates that sum to target (unlimited reuse).
# Time: O(N^(T/M+1)) | Space: O(T/M) recursion depth
def combination_sum(candidates: list[int], target: int) -> list[list[int]]:
    result = []
    candidates.sort()  # enables early termination
    def backtrack(start, path, remaining):
        if remaining == 0:
            result.append(path[:])
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                break  # early termination (sorted)
            path.append(candidates[i])
            backtrack(i, path, remaining - candidates[i])  # i not i+1: reuse allowed
            path.pop()
    backtrack(0, [], target)
    return result
backtracking_advanced.py
# ── 4. Sudoku Solver (#37) ──
# Solve a 9×9 Sudoku puzzle.
# Time: O(9^(empty)) | Space: O(1) board is fixed size
def solve_sudoku(board: list[list[str]]) -> bool:
    rows = [set() for _ in range(9)]
    cols = [set() for _ in range(9)]
    boxes = [set() for _ in range(9)]  # 3x3 boxes
    # Initialize sets
    for r in range(9):
        for c in range(9):
            if board[r][c] != '.':
                val = board[r][c]
                rows[r].add(val)
                cols[c].add(val)
                boxes[(r // 3) * 3 + c // 3].add(val)
    def backtrack():
        for r in range(9):
            for c in range(9):
                if board[r][c] == '.':
                    for num in map(str, range(1, 10)):
                        box_idx = (r // 3) * 3 + c // 3
                        if num not in rows[r] and num not in cols[c] and num not in boxes[box_idx]:
                            board[r][c] = num
                            rows[r].add(num)
                            cols[c].add(num)
                            boxes[box_idx].add(num)
                            if backtrack():
                                return True
                            # Undo
                            board[r][c] = '.'
                            rows[r].remove(num)
                            cols[c].remove(num)
                            boxes[box_idx].remove(num)
                    return False
        return True  # all cells filled
    return backtrack()  # propagate result to match the -> bool signature
# ── 5. N-Queens (#51) ──
# Place N queens on N×N board, no two attack each other.
# Time: O(N!) | Space: O(N²)
def solve_n_queens(n: int) -> list[list[str]]:
    result = []
    cols = set()
    pos_diag = set()  # r + c
    neg_diag = set()  # r - c
    def backtrack(row, board):
        if row == n:
            result.append(["".join(r) for r in board])  # r avoids shadowing row
            return
        for c in range(n):
            if c in cols or (row + c) in pos_diag or (row - c) in neg_diag:
                continue
            cols.add(c)
            pos_diag.add(row + c)
            neg_diag.add(row - c)
            board[row][c] = 'Q'
            backtrack(row + 1, board)
            board[row][c] = '.'
            cols.remove(c)
            pos_diag.remove(row + c)
            neg_diag.remove(row - c)
    backtrack(0, [['.' for _ in range(n)] for _ in range(n)])
    return result
# ── 6. Word Search (#79) ──
# Does the word exist in the grid? (can move in 4 directions)
# Time: O(N × 4^L) | Space: O(L) recursion
def exist(board: list[list[str]], word: str) -> bool:
    rows, cols = len(board), len(board[0])
    def dfs(r, c, i):
        if i == len(word):
            return True
        if r < 0 or r >= rows or c < 0 or c >= cols or board[r][c] != word[i]:
            return False
        temp = board[r][c]
        board[r][c] = '#'  # mark visited
        found = (dfs(r + 1, c, i + 1) or dfs(r - 1, c, i + 1) or
                 dfs(r, c + 1, i + 1) or dfs(r, c - 1, i + 1))
        board[r][c] = temp  # restore
        return found
    return any(dfs(r, c, 0) for r in range(rows) for c in range(cols) if board[r][c] == word[0])

# ── 7. Palindrome Partitioning (#131) ──
# Partition string such that every substring is a palindrome.
# Time: O(N × 2^N) | Space: O(N)
def partition(s: str) -> list[list[str]]:
    result = []
    def is_palindrome(l, r):
        while l < r:
            if s[l] != s[r]:
                return False
            l += 1
            r -= 1
        return True
    def backtrack(start, path):
        if start == len(s):
            result.append(path[:])
            return
        for end in range(start, len(s)):
            if is_palindrome(start, end):
                path.append(s[start:end + 1])
                backtrack(end + 1, path)
                path.pop()
    backtrack(0, [])
    return result
⚠️
Common bugs: (1) Forgetting to path.pop() or undo state changes after recursion. (2) Forgetting to copy path[:] when appending to results. (3) For Permutations with duplicates: sort first, then skip nums[i] == nums[i-1] when !used[i-1].
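The duplicate-skip rule in point (3) can be shown concretely — a sketch of Permutations II (#47), which is not implemented above:

```python
def permute_unique(nums: list[int]) -> list[list[int]]:
    nums.sort()  # duplicates become adjacent
    result = []
    used = [False] * len(nums)
    def backtrack(path):
        if len(path) == len(nums):
            result.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            # Skip a duplicate whose left twin is not in the current path:
            # that twin must be consumed first, so each value-order is built once
            if i > 0 and nums[i] == nums[i - 1] and not used[i - 1]:
                continue
            used[i] = True
            path.append(nums[i])
            backtrack(path)
            path.pop()
            used[i] = False
    backtrack([])
    return result
```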
| Problem | # | Difficulty | Key Idea | Complexity |
| --- | --- | --- | --- | --- |
| Subsets | 78 | Medium | Include/exclude each element | O(2ⁿ) |
| Permutations | 46 | Medium | Swap or used array | O(n!×n) |
| Combination Sum | 39 | Medium | Reuse allowed, prune if > target | O(N^(T/M)) |
| Sudoku Solver | 37 | Hard | 3 constraint sets, try 1-9 | O(9^81) |
| N-Queens | 51 | Hard | 3 diagonal/column checks | O(N!) |
| Word Search | 79 | Medium | DFS with in-place visited | O(N×4^L) |
| Palindrome Partitioning | 131 | Medium | Check palindrome, backtrack | O(N×2^N) |
| Letter Combinations | 17 | Medium | Digit to letters mapping | O(4ⁿ) |
| Combination Sum II | 40 | Medium | Skip duplicates after sorting | O(2ⁿ) |
| Generate Parentheses | 22 | Medium | Count open/close, validity check | O(4ⁿ/√n) |
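Generate Parentheses (#22) from the table relies only on the open/close counting rule and has no snippet above; a sketch:

```python
def generate_parenthesis(n: int) -> list[str]:
    result = []
    def backtrack(path, open_count, close_count):
        if len(path) == 2 * n:
            result.append(''.join(path))
            return
        if open_count < n:  # can still open a bracket
            path.append('(')
            backtrack(path, open_count + 1, close_count)
            path.pop()
        if close_count < open_count:  # close only what is already open
            path.append(')')
            backtrack(path, open_count, close_count + 1)
            path.pop()
    backtrack([], 0, 0)
    return result
```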
📚
Stack & Queue
LINEAR
Stacks (LIFO) and queues (FIFO) are fundamental data structures. Stacks excel at matching pairs, expression evaluation, and maintaining ordering. Queues are essential for BFS, scheduling, and level-order processing. Deques and monotonic queues unlock sliding window optimizations.
Stack Patterns
Matching pairsParentheses, tags, HTML validity
Expression evalRPN, infix with operator precedence
Monotonic stackNext greater element, histogram
Auxiliary stackMin stack, max stack
Nested structuresDecode string, score of parens
Queue Patterns
BFS traversalLevel order, shortest path
Sliding windowMonotonic deque for max/min
InterleavingMerge sorted streams
Task schedulingRound-robin, priority queue
Producer-consumerBuffer between processes
stack.py
# ── 1. Valid Parentheses (#20) ──
# Check if brackets are properly matched.
# Time: O(n) | Space: O(n)
def is_valid(s: str) -> bool:
    mapping = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in mapping:
            top = stack.pop() if stack else '#'
            if mapping[ch] != top:
                return False
        else:
            stack.append(ch)
    return not stack

# ── 2. Min Stack (#155) ──
# Stack that supports push, pop, top, and getMin in O(1).
# Time: O(1) all ops | Space: O(n)
class MinStack:
    def __init__(self):
        self.stack = []
        self.min_stack = []

    def push(self, val: int):
        self.stack.append(val)
        self.min_stack.append(min(val, self.min_stack[-1]) if self.min_stack else val)

    def pop(self):
        self.min_stack.pop()
        return self.stack.pop()

    def top(self) -> int:
        return self.stack[-1]

    def getMin(self) -> int:
        return self.min_stack[-1]

# ── 3. Daily Temperatures (#739) ──
# Days until a warmer temperature. Use monotonic decreasing stack.
# Time: O(n) | Space: O(n)
def daily_temperatures(temperatures: list[int]) -> list[int]:
    n = len(temperatures)
    result = [0] * n
    stack = []  # indices of days with decreasing temps
    for i in range(n):
        while stack and temperatures[i] > temperatures[stack[-1]]:
            prev = stack.pop()
            result[prev] = i - prev
        stack.append(i)
    return result  # remaining indices have 0 (no warmer day)
stack_queue_advanced.py
# ── 4. Sliding Window Maximum (#239) ──
# Max in each window of size k. Monotonic decreasing deque.
# Time: O(n) | Space: O(k)
from collections import deque

def max_sliding_window(nums: list[int], k: int) -> list[int]:
    dq = deque()  # indices with decreasing values
    result = []
    for i in range(len(nums)):
        # Remove indices outside the window
        if dq and dq[0] <= i - k:
            dq.popleft()
        # Maintain decreasing order
        while dq and nums[dq[-1]] < nums[i]:
            dq.pop()
        dq.append(i)
        # Record max for window [i-k+1, i]
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result

# ── 5. Evaluate Reverse Polish Notation (#150) ──
# Time: O(n) | Space: O(n)
def eval_rpn(tokens: list[str]) -> int:
    stack = []
    for t in tokens:
        if t in ('+', '-', '*', '/'):
            b, a = stack.pop(), stack.pop()
            if t == '+':
                stack.append(a + b)
            elif t == '-':
                stack.append(a - b)
            elif t == '*':
                stack.append(a * b)
            else:
                stack.append(int(a / b))  # truncate toward zero
        else:
            stack.append(int(t))
    return stack[0]

# ── 6. Largest Rectangle in Histogram (#84) ──
# Largest rectangle in histogram using monotonic stack.
# Time: O(n) | Space: O(n)
def largest_rectangle_area(heights: list[int]) -> int:
    stack = []  # indices, increasing heights
    max_area = 0
    # Append sentinel 0 to force computation
    heights.append(0)
    for i, h in enumerate(heights):
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]
            width = i if not stack else i - stack[-1] - 1
            max_area = max(max_area, height * width)
        stack.append(i)
    heights.pop()  # restore
    return max_area

# ── 7. Next Greater Element II (#503) ──
# Next greater element in circular array.
# Time: O(n) | Space: O(n)
def next_greater_elements(nums: list[int]) -> list[int]:
    n = len(nums)
    result = [-1] * n
    stack = []  # indices
    for i in range(2 * n):
        idx = i % n
        while stack and nums[stack[-1]] < nums[idx]:
            result[stack.pop()] = nums[idx]
        if i < n:
            stack.append(idx)
    return result
💡
Monotonic stack: When you need to find the next greater/smaller element for each position, use a monotonic stack. Keep the stack in decreasing order (for "next greater") or increasing order (for "next smaller"). Process elements from left to right.
| Problem | # | Difficulty | Key Idea | Complexity |
| --- | --- | --- | --- | --- |
| Valid Parentheses | 20 | Easy | Stack matching | O(n) |
| Min Stack | 155 | Medium | Auxiliary min stack | O(1) |
| Daily Temperatures | 739 | Medium | Monotonic decreasing stack | O(n) |
| Sliding Window Maximum | 239 | Hard | Monotonic deque | O(n) |
| Evaluate RPN | 150 | Medium | Stack-based expression eval | O(n) |
| Largest Rectangle | 84 | Hard | Monotonic stack + sentinel | O(n) |
| Next Greater Element II | 503 | Medium | Circular monotonic stack | O(n) |
| Simplify Path | 71 | Medium | Split by /, stack for dirs | O(n) |
| Decode String | 394 | Medium | Nested count + stack | O(n) |
| Asteroid Collision | 735 | Medium | Stack collision simulation | O(n) |
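Decode String (#394) from the table illustrates the "nested count + stack" idea and is not implemented above; a sketch that pushes a (string, repeat) frame at every `[`:

```python
def decode_string(s: str) -> str:
    stack = []  # frames of (string_so_far, repeat_count)
    curr, num = '', 0
    for ch in s:
        if ch.isdigit():
            num = num * 10 + int(ch)  # multi-digit counts
        elif ch == '[':
            stack.append((curr, num))  # push frame, start fresh
            curr, num = '', 0
        elif ch == ']':
            prev, count = stack.pop()
            curr = prev + curr * count
        else:
            curr += ch
    return curr
```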
🏔️
Heap / Priority Queue
DATA STRUCTURE
A heap (priority queue) gives O(log n) insert and O(log n) extract-min/max. Python's heapq module provides a min-heap. For max-heap behavior, negate values. Heaps are essential when you need efficient access to the largest/smallest elements, or need to merge sorted streams.
Heap Operations
heapify(list)O(n) — build heap in place
heappush(heap, x)O(log n) — insert
heappop(heap)O(log n) — extract min
heap[0]O(1) — peek at min
Max heapPush -x, pop -heappop()
nlargest(k, iter)O(n log k) — top k elements
When to Use
Top K elementsMin-heap of size k
Kth smallest/largestMax-heap of size k or quickselect
Merge sorted listsPush heads, pop min
Median maintenanceTwo heaps (max + min)
Dijkstra/PrimPriority queue for shortest edge
Task schedulingSchedule by priority
heap.py
import heapq

# ── 1. Merge K Sorted Lists (#23) ──
# Merge k sorted linked lists into one sorted list.
# Time: O(N log k) | Space: O(k) — N = total nodes
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def merge_k_lists(lists: list[ListNode]) -> ListNode:
    dummy = ListNode()
    curr = dummy
    # Min-heap of (value, index, node) — index breaks ties between equal values
    heap = []
    for i, node in enumerate(lists):
        if node:
            heapq.heappush(heap, (node.val, i, node))
    while heap:
        val, i, node = heapq.heappop(heap)
        curr.next = node
        curr = curr.next
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next

# ── 2. Top K Frequent Elements (#347) ──
# Return k most frequent elements.
# Time: O(n log k) | Space: O(n)
from collections import Counter

def top_k_frequent(nums: list[int], k: int) -> list[int]:
    count = Counter(nums)
    # Min-heap of (frequency, element) — keep only k
    heap = []
    for num, freq in count.items():
        heapq.heappush(heap, (freq, num))
        if len(heap) > k:
            heapq.heappop(heap)  # remove least frequent
    return [num for freq, num in heap]

# ── 3. Find Median from Data Stream (#295) ──
# Two-heap approach: max-heap for lower half, min-heap for upper half.
# Time: O(log n) addNum, O(1) findMedian | Space: O(n)
class MedianFinder:
    def __init__(self):
        self.small = []  # max-heap (negate values): lower half
        self.large = []  # min-heap: upper half

    def addNum(self, num: int):
        heapq.heappush(self.small, -num)  # push to max-heap
        heapq.heappush(self.large, -self.small[0])  # balance
        heapq.heappop(self.small)  # remove from max-heap
        # Keep small >= large (or equal)
        if len(self.small) < len(self.large):
            heapq.heappush(self.small, -self.large[0])
            heapq.heappop(self.large)

    def findMedian(self) -> float:
        if len(self.small) > len(self.large):
            return -self.small[0]
        return (-self.small[0] + self.large[0]) / 2

# ── 4. Kth Largest Element (#215) ──
# Time: O(n log k) with heap | Space: O(k)
def find_kth_largest(nums: list[int], k: int) -> int:
    # Min-heap of size k: top = kth largest
    heap = nums[:k]
    heapq.heapify(heap)
    for num in nums[k:]:
        heapq.heappushpop(heap, num)  # push then pop min
    return heap[0]
heap_advanced.py
# ── 5. Meeting Rooms III (#2402) ──
# Find room used for the most meetings.
# Time: O(n log n) | Space: O(n)
import heapq

def most_booked(n: int, meetings: list[list[int]]) -> int:
    meetings.sort()
    available = list(range(n))
    heapq.heapify(available)
    busy = []  # (end_time, room)
    count = [0] * n
    for start, end in meetings:
        # Free up rooms that have ended
        while busy and busy[0][0] <= start:
            _, room = heapq.heappop(busy)
            heapq.heappush(available, room)
        if available:
            room = heapq.heappop(available)
            heapq.heappush(busy, (end, room))
        else:
            end_time, room = heapq.heappop(busy)
            heapq.heappush(busy, (end_time + end - start, room))
        count[room] += 1
    return count.index(max(count))

# ── 6. Reorganize String (#767) ──
# Rearrange so no two adjacent chars are the same. Use max-heap.
# Time: O(n log k) | Space: O(k) — k = unique chars
def reorganize_string(s: str) -> str:
    from collections import Counter
    count = Counter(s)
    # Max-heap: (-freq, char)
    heap = [(-freq, ch) for ch, freq in count.items()]
    heapq.heapify(heap)
    result = []
    prev_freq, prev_ch = 0, ''
    while heap:
        freq, ch = heapq.heappop(heap)
        result.append(ch)
        if prev_freq < 0:  # push previous back (ensures non-adjacent)
            heapq.heappush(heap, (prev_freq, prev_ch))
        prev_freq, prev_ch = freq + 1, ch  # freq+1 because negated
    if len(result) != len(s):
        return ""  # impossible
    return ''.join(result)

# ── 7. Task Scheduler (#621) ──
# Minimum intervals to complete all tasks with cooldown.
# Time: O(n) | Space: O(1) — 26 letters
def least_interval(tasks: list[str], n: int) -> int:
    from collections import Counter
    freq = Counter(tasks)
    max_freq = max(freq.values())
    max_count = sum(1 for v in freq.values() if v == max_freq)
    # Formula: (max_freq - 1) * (n + 1) + max_count
    return max(len(tasks), (max_freq - 1) * (n + 1) + max_count)
💡
Min-heap of size k: For "top K" problems, maintain a min-heap of size K. When a new element comes in, if heap size exceeds K, pop the minimum. This gives O(n log k) instead of O(n log n).
| Problem | # | Difficulty | Key Idea | Complexity |
| --- | --- | --- | --- | --- |
| Merge K Sorted Lists | 23 | Hard | Push all heads, pop min | O(N log k) |
| Top K Frequent Elements | 347 | Medium | Min-heap of size k on freq | O(n log k) |
| Find Median Stream | 295 | Hard | Two heaps (max + min) | O(log n) add |
| Kth Largest Element | 215 | Medium | Min-heap of size k | O(n log k) |
| Meeting Rooms III | 2402 | Hard | Two heaps + busy tracking | O(n log n) |
| Reorganize String | 767 | Medium | Max-heap, alternate picks | O(n log k) |
| Task Scheduler | 621 | Medium | Max freq formula | O(n) |
| K Closest Points | 973 | Medium | Max-heap of size k | O(n log k) |
| Find Right Interval | 436 | Medium | Sort + binary search / heap | O(n log n) |
| IPO | 502 | Hard | Two heaps (max-profit, min-cap) | O(n log n) |
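K Closest Points to Origin (#973) from the table is a textbook max-heap-of-size-k and has no snippet above; a sketch (distances are compared squared, so no sqrt is needed):

```python
import heapq

def k_closest(points: list[list[int]], k: int) -> list[list[int]]:
    heap = []  # max-heap of size k via negated squared distance
    for x, y in points:
        heapq.heappush(heap, (-(x * x + y * y), [x, y]))
        if len(heap) > k:
            heapq.heappop(heap)  # drop the current farthest
    return [p for _, p in heap]
```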
🌲
Trie (Prefix Tree)
TREE
A Trie (prefix tree) stores strings in a tree structure where each node represents a character. Common prefixes are shared, making it efficient for prefix-based operations: search, insert, autocomplete, and word games. Operations are O(L) where L is the length of the word.
Trie Node Structure
python
class TrieNode:
    def __init__(self):
        self.children = {}  # char -> TrieNode
        self.is_end = False  # marks end of word
        self.count = 0  # number of words through node
When to Use
Prefix searchAutocomplete, suggestions
Word search in gridEfficient prefix pruning
Word frequencyCount occurrences, top words
Replace hash setWhen many strings share prefixes
IP routingLongest prefix match
trie.py
# ── 1. Implement Trie (#208) ──
# Standard trie with insert, search, and startsWith.
# Time: O(L) per op | Space: O(N × L) total
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            if ch not in node.children:
                node.children[ch] = TrieNode()
            node = node.children[ch]
        node.is_end = True

    def search(self, word: str) -> bool:
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_end

    def starts_with(self, prefix: str) -> bool:
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

    # Bonus: Delete a word
    def delete(self, word: str) -> bool:
        def _delete(node, i):
            if i == len(word):
                if not node.is_end:
                    return False  # word doesn't exist
                node.is_end = False
                return len(node.children) == 0  # safe to prune?
            if word[i] not in node.children:
                return False
            should_prune = _delete(node.children[word[i]], i + 1)
            if should_prune:
                del node.children[word[i]]
                # Don't prune a node that still ends another word
                return not node.is_end and len(node.children) == 0
            return False
        return _delete(self.root, 0)
trie_advanced.py
# ── 2. Word Search II (#212) ──
# Find all words from dictionary in the board.
# Use trie for prefix pruning during DFS.
# Time: O(M × N × 4 × 3^(L-1)) | Space: O(T + L)
class TrieNode2:
    def __init__(self):
        self.children = {}
        self.word = None  # store the complete word at end node

def word_search_ii(board: list[list[str]], words: list[str]) -> list[str]:
    # Build trie
    root = TrieNode2()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode2())
        node.word = w
    rows, cols = len(board), len(board[0])
    result = []
    def dfs(r, c, node):
        ch = board[r][c]
        if ch not in node.children:
            return
        next_node = node.children[ch]
        if next_node.word:
            result.append(next_node.word)
            next_node.word = None  # prevent duplicates
        board[r][c] = '#'  # mark visited
        for dr, dc in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                dfs(nr, nc, next_node)
        board[r][c] = ch  # restore
        # Prune: remove leaf nodes with no word
        if not next_node.children:
            del node.children[ch]
    for r in range(rows):
        for c in range(cols):
            dfs(r, c, root)
    return result
# ── 3. Auto-complete System ──
# Design an auto-complete system using trie with ranking.
class AutocompleteSystem:
    def __init__(self, sentences: list[str], times: list[int]):
        self.root = {}
        self.input_buffer = ""
        # Build trie: {char: {children...}, '$': count}
        for sentence, count in zip(sentences, times):
            node = self.root
            for ch in sentence:
                node = node.setdefault(ch, {})
            node['$'] = node.get('$', 0) + count

    def input(self, c: str) -> list[str]:
        if c == '#':
            # Commit: add current input to trie
            node = self.root
            for ch in self.input_buffer:
                node = node.setdefault(ch, {})
            node['$'] = node.get('$', 0) + 1
            self.input_buffer = ""
            return []
        self.input_buffer += c
        # Search trie for prefix
        node = self.root
        for ch in self.input_buffer:
            if ch not in node:
                return []
            node = node[ch]
        # Collect all sentences from this point
        results = []
        def collect(n, path):
            if '$' in n:
                results.append((-n['$'], path))
            for ch in n:
                if ch != '$':
                    collect(n[ch], path + ch)
        collect(node, self.input_buffer)
        results.sort()
        return [r[1] for r in results[:3]]
💡
Word Search II optimization: During DFS on the board, if the current path prefix doesn't exist in the trie, prune immediately — don't continue DFS. After finding a word, set node.word = None to avoid duplicates. Optionally prune empty leaf nodes to speed up future searches.
| Problem | # | Difficulty | Key Idea | Complexity |
| --- | --- | --- | --- | --- |
| Implement Trie | 208 | Medium | Hash map children, is_end flag | O(L) per op |
| Word Search II | 212 | Hard | Trie + DFS with pruning | O(M×N×4^L) |
| Design Add/Search Words | 211 | Medium | Trie with . wildcard | O(26^L) |
| Replace Words | 648 | Medium | Find shortest prefix in trie | O(total chars) |
| Map Sum Pairs | 677 | Medium | Trie with value at nodes | O(L) |
| Concatenated Words | 472 | Hard | Trie + DP for word break | O(n × L²) |
| Word Squares | 425 | Hard | Trie for prefix constraints | O(n × 26^L) |
| Implement Trie II | 1804 | Medium | Trie with count_at_node | O(L) |
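Design Add and Search Words (#211) from the table extends the trie with a `.` wildcard that matches any character; a sketch using nested dicts (class and method names snake_cased here, unlike LeetCode's `addWord`):

```python
class WordDictionary:
    def __init__(self):
        self.root = {}  # nested dicts; '$' marks end of word

    def add_word(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True

    def search(self, word: str) -> bool:
        def dfs(node, i):
            if i == len(word):
                return '$' in node
            ch = word[i]
            if ch == '.':  # wildcard: try every child branch
                return any(dfs(child, i + 1)
                           for key, child in node.items() if key != '$')
            return ch in node and dfs(node[ch], i + 1)
        return dfs(self.root, 0)
```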
🔢
Bit Manipulation
BINARY
Bit manipulation leverages binary operations (&, |, ^, ~, <<, >>) for efficient computation. Essential for problems involving counting, toggling, masking, and optimization where O(1) space is needed.
Essential Bit Tricks
python
# Set bit at position i: x |= (1 << i)
# Clear bit at position i: x &= ~(1 << i)
# Toggle bit at position i: x ^= (1 << i)
# Check bit at position i: (x >> i) & 1
# Get lowest set bit: x & (-x)
# Remove lowest set bit: x & (x - 1)
# Count set bits: bin(x).count('1')
# Check power of 2: x > 0 and (x & (x-1)) == 0
# Swap two numbers: a ^= b; b ^= a; a ^= b
# Flip all bits: x ^ 0xFFFFFFFF (for 32-bit)
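The tricks above can be sanity-checked directly; each assertion below mirrors one line of the list (x = 20 is an arbitrary example value):

```python
x = 0b10100  # 20
assert x | (1 << 1) == 0b10110       # set bit 1
assert x & ~(1 << 2) == 0b10000      # clear bit 2
assert x ^ (1 << 0) == 0b10101       # toggle bit 0
assert (x >> 4) & 1 == 1             # check bit 4
assert x & (-x) == 0b100             # lowest set bit
assert x & (x - 1) == 0b10000        # remove lowest set bit
assert bin(x).count('1') == 2        # count set bits
assert not (x > 0 and (x & (x - 1)) == 0)  # 20 is not a power of 2
```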
When to Use
XOR propertiesa^a=0, a^0=a, XOR is commutative
Single numberFind element appearing once
Counting bitsBrian Kernighan: n &= n-1
Power of twon & (n-1) == 0
Bitmask DPSubset enumeration: 1 << n subsets
SwappingXOR swap, no temp variable
bit_manipulation.py
# ── 1. Single Number (#136) ──
# Every element appears twice except one. Find it.
# XOR all: pairs cancel out, single remains.
# Time: O(n) | Space: O(1)
def single_number(nums: list[int]) -> int:
    result = 0
    for num in nums:
        result ^= num
    return result

# ── 2. Single Number III (#260) ──
# Every element appears twice except two. Find both.
# Time: O(n) | Space: O(1)
def single_number_iii(nums: list[int]) -> list[int]:
    xor_all = 0
    for num in nums:
        xor_all ^= num
    # Find a bit where the two singles differ
    diff_bit = xor_all & (-xor_all)  # rightmost set bit
    a = b = 0
    for num in nums:
        if num & diff_bit:
            a ^= num
        else:
            b ^= num
    return [a, b]

# ── 3. Counting Bits (#338) ──
# Count 1-bits for every number from 0 to n.
# Time: O(n) | Space: O(n) — dynamic programming on bits
def count_bits(n: int) -> list[int]:
    bits = [0] * (n + 1)
    for i in range(1, n + 1):
        # bits[i] = bits[i >> 1] + (i & 1)
        # OR: bits[i] = bits[i & (i-1)] + 1 (Brian Kernighan)
        bits[i] = bits[i >> 1] + (i & 1)
    return bits

# ── 4. Power of Two (#231) ──
# Is n a power of 2? (n > 0, only one bit set)
# Time: O(1) | Space: O(1)
def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0

# ── 5. Reverse Bits (#190) ──
# Reverse the bits of a 32-bit unsigned integer.
# Time: O(32) = O(1) | Space: O(1)
def reverse_bits(n: int) -> int:
    result = 0
    for _ in range(32):
        result = (result << 1) | (n & 1)
        n >>= 1
    return result

# ── 6. Number of 1 Bits (#191) ──
# Count set bits using Brian Kernighan's algorithm.
# Time: O(number of 1-bits) | Space: O(1)
def hamming_weight(n: int) -> int:
    count = 0
    while n:
        n &= n - 1  # remove lowest set bit
        count += 1
    return count

# ── 7. Subsets Using Bitmask (#78 variant) ──
# Generate all subsets using bitmask enumeration.
# Time: O(n × 2ⁿ) | Space: O(n)
def subsets_bitmask(nums: list[int]) -> list[list[int]]:
    n = len(nums)
    result = []
    for mask in range(1 << n):
        subset = []
        for i in range(n):
            if mask & (1 << i):
                subset.append(nums[i])
        result.append(subset)
    return result
💡
XOR is your friend: XOR has magical properties for "find the unique element" problems: a ^ a = 0, a ^ 0 = a, and XOR is commutative/associative. So XOR-ing all elements in an array where every element appears twice except one yields the unique element.
| Problem | # | Difficulty | Key Idea | Complexity |
| --- | --- | --- | --- | --- |
| Single Number | 136 | Easy | XOR all elements | O(n) |
| Single Number II | 137 | Medium | Count bits mod 3 | O(n) |
| Single Number III | 260 | Medium | XOR + partition by diff bit | O(n) |
| Counting Bits | 338 | Easy | DP: bits[i>>1] + i&1 | O(n) |
| Power of Two | 231 | Easy | n & (n-1) == 0 | O(1) |
| Reverse Bits | 190 | Easy | Shift and build | O(32) |
| Number of 1 Bits | 191 | Easy | n &= n-1 | O(#bits) |
| Missing Number | 268 | Easy | XOR 0..n with array | O(n) |
| Subsets (bitmask) | 78 | Medium | Enumerate 2ⁿ masks | O(n×2ⁿ) |
| Maximum Product of Word Lengths | 318 | Medium | Bitmask for letter sets | O(n²) |
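Single Number II (#137) from the table ("count bits mod 3") has no snippet above; a sketch that also maps the result back to a signed 32-bit value, since Python ints are unbounded:

```python
def single_number_ii(nums: list[int]) -> int:
    result = 0
    for bit in range(32):
        # Sum this bit over all numbers; the unique element's bit is sum mod 3
        count = sum((num >> bit) & 1 for num in nums)
        if count % 3:
            result |= 1 << bit
    # Reinterpret bit 31 as the sign bit (two's complement)
    if result >= 2**31:
        result -= 2**32
    return result
```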
📐
Intervals
PATTERN
Interval problems involve sets of ranges [start, end]. The canonical approach is to sort by start time, then process intervals in order. Key operations are merging overlapping intervals, inserting new intervals, and checking for overlaps.
Interval Checklist
Step 1Sort intervals by start time (and end time as tiebreaker)
MergeIf current.start <= prev.end: merge; else: start new
InsertFind position, merge if overlapping
Overlap checkSort by start, track max end
Sweep lineProcess all start/end events in order
Common Operations
python
# Two intervals overlap:
a_start <= b_end and b_start <= a_end
# Merge two overlapping intervals:
merged_start = min(a_start, b_start)
merged_end = max(a_end, b_end)
# Sort by start, then end:
intervals.sort(key=lambda x: (x[0], x[1]))
intervals.py
# ── 1. Merge Intervals (#56) ──
# Merge all overlapping intervals.
# Time: O(n log n) | Space: O(n)
def merge(intervals: list[list[int]]) -> list[list[int]]:
    if not intervals:
        return []
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:  # overlap
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged
# ── 2. Insert Interval (#57) ──
# Insert new interval into sorted non-overlapping list, merge if needed.
# Time: O(n) | Space: O(n)
def insert_interval(intervals: list[list[int]], new_interval: list[int]) -> list[list[int]]:
    result = []
    i = 0
    n = len(intervals)
    # Add all intervals that end before new starts (no overlap)
    while i < n and intervals[i][1] < new_interval[0]:
        result.append(intervals[i])
        i += 1
    # Merge overlapping intervals
    while i < n and intervals[i][0] <= new_interval[1]:
        new_interval[0] = min(new_interval[0], intervals[i][0])
        new_interval[1] = max(new_interval[1], intervals[i][1])
        i += 1
    result.append(new_interval)
    # Add remaining intervals
    while i < n:
        result.append(intervals[i])
        i += 1
    return result
# ── 3. Meeting Rooms (#252) ──
# Can a person attend all meetings? (No overlaps)
# Time: O(n log n) | Space: O(1)
def can_attend_meetings(intervals: list[list[int]]) -> bool:
    intervals.sort(key=lambda x: x[0])
    for i in range(1, len(intervals)):
        if intervals[i][0] < intervals[i - 1][1]:
            return False
    return True
# ── 4. Meeting Rooms II (#253) ──
# Minimum meeting rooms needed.
# Time: O(n log n) | Space: O(n)
import heapq

def min_meeting_rooms(intervals: list[list[int]]) -> int:
    intervals.sort(key=lambda x: x[0])
    end_times = []  # min-heap of meeting end times
    for start, end in intervals:
        if end_times and end_times[0] <= start:
            heapq.heappop(end_times)  # free a room
        heapq.heappush(end_times, end)
    return len(end_times)
# ── 5. Non-overlapping Intervals (#435) ──
# Minimum intervals to remove to make non-overlapping.
# Time: O(n log n) | Space: O(1)
def erase_overlap_intervals(intervals: list[list[int]]) -> int:
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[1])
    count = 1  # intervals we can keep
    end = intervals[0][1]
    for start, finish in intervals[1:]:
        if start >= end:
            count += 1
            end = finish
    return len(intervals) - count
intervals_advanced.py
# ── 6. Minimum Number of Arrows to Burst Balloons (#452) ──
# Find min arrows to burst all balloons (overlapping intervals).
# Time: O(n log n) | Space: O(1)
def find_min_arrow_shots(points: list[list[int]]) -> int:
    if not points:
        return 0
    points.sort(key=lambda x: x[1])
    arrows = 1
    end = points[0][1]
    for start, finish in points[1:]:
        if start > end:  # no overlap
            arrows += 1
            end = finish
    return arrows
# ── 7. Interval List Intersections (#986) ──
# Intersection of two sorted interval lists.
# Time: O(m + n) | Space: O(1) excluding output
def interval_intersection(
    first_list: list[list[int]], second_list: list[list[int]]
) -> list[list[int]]:
    i = j = 0
    result = []
    while i < len(first_list) and j < len(second_list):
        start = max(first_list[i][0], second_list[j][0])
        end = min(first_list[i][1], second_list[j][1])
        if start <= end:
            result.append([start, end])
        # Move the pointer of the interval that ends first
        if first_list[i][1] < second_list[j][1]:
            i += 1
        else:
            j += 1
    return result
# ── 8. Employee Free Time (#759) ──
# Find common free time across all employees' schedules.
# Time: O(n log n) | Space: O(n)
def employee_free_time(schedule: list[list[list[int]]]) -> list[list[int]]:
    # Flatten and sort all intervals
    intervals = sorted(
        [interval for emp in schedule for interval in emp],
        key=lambda x: x[0]
    )
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    # Find gaps between merged intervals
    free = []
    for i in range(1, len(merged)):
        free.append([merged[i - 1][1], merged[i][0]])
    return free
⚠️
Sort strategy matters: For merging intervals, sort by start. For minimum removals (activity selection), sort by end to greedily keep earliest-finishing intervals. For meeting rooms, use a min-heap of end times or sweep line with start/end events.
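The sweep-line alternative mentioned above can be sketched as follows; it gives the same answer as the heap version of Meeting Rooms II, assuming half-open [start, end) meetings so a meeting ending at t frees its room for one starting at t:

```python
# ── Meeting Rooms II via sweep line ──
# Emit +1 at each start and -1 at each end, sort the events, and
# track the running count; its peak is the number of rooms needed.
# Time: O(n log n) | Space: O(n)
def min_rooms_sweep(intervals: list[list[int]]) -> int:
    events = []
    for start, end in intervals:
        events.append((start, 1))   # meeting begins
        events.append((end, -1))    # meeting ends
    events.sort()  # at equal times, ends (-1) sort before starts (+1)
    rooms = peak = 0
    for _, delta in events:
        rooms += delta
        peak = max(peak, rooms)
    return peak
```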
| Problem | # | Difficulty | Key Idea | Complexity |
|---|---|---|---|---|
| Merge Intervals | 56 | Medium | Sort by start, merge overlap | O(n log n) |
| Insert Interval | 57 | Medium | Find position + merge | O(n) |
| Meeting Rooms | 252 | Easy | Sort + check adjacent overlap | O(n log n) |
| Meeting Rooms II | 253 | Medium | Sort + min-heap of end times | O(n log n) |
| Non-overlapping Intervals | 435 | Medium | Sort by end, greedy | O(n log n) |
| Arrow Burst Balloons | 452 | Medium | Sort by end, count arrows | O(n log n) |
| Interval Intersections | 986 | Medium | Two pointers, advance shorter | O(m+n) |
| Employee Free Time | 759 | Hard | Merge all + find gaps | O(n log n) |
| Summary Ranges | 228 | Easy | Sort + group consecutive | O(n) |
| Car Pooling | 1094 | Medium | Sweep line / diff array | O(n log n) |
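The Car Pooling row uses the difference-array variant of the sweep line. A sketch under the original problem's constraint that stops are bounded (0..1000): record passenger deltas at each stop, then prefix-sum to get the load at every point.

```python
# ── Car Pooling (#1094) via difference array ──
# Add passengers at pickup, subtract at drop-off, then prefix-sum
# and check the running load never exceeds capacity.
# Time: O(n + max_stop) | Space: O(max_stop)
def car_pooling(trips: list[list[int]], capacity: int) -> bool:
    diff = [0] * 1001  # stops bounded by 1000 in the original problem
    for passengers, start, end in trips:
        diff[start] += passengers
        diff[end] -= passengers  # riders leave at `end`
    load = 0
    for delta in diff:
        load += delta
        if load > capacity:
            return False
    return True
```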
📈
Complexity Analysis
FUNDAMENTAL
Understanding time and space complexity is crucial for choosing the right algorithm and optimizing solutions. This section provides a comprehensive reference for Big O notation, common complexities, amortized analysis, and space-time tradeoffs.
Big O Complexity Classes
| Big O | Name | n=10 | n=100 | n=1000 | Example |
|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | Hash lookup, array index |
| O(log n) | Logarithmic | 3 | 7 | 10 | Binary search |
| O(√n) | Square root | 3 | 10 | 32 | Prime check, trial division |
| O(n) | Linear | 10 | 100 | 1K | Single loop, array scan |
| O(n log n) | Log-linear | 33 | 664 | 10K | Merge sort, heap sort |
| O(n²) | Quadratic | 100 | 10K | 1M | Bubble sort, nested loops |
| O(n³) | Cubic | 1K | 1M | 1B | Matrix multiply, 3 nested |
| O(2ⁿ) | Exponential | 1K | 1.3×10³⁰ | ∞ | All subsets, backtracking |
| O(n!) | Factorial | 3.6M | ∞ | ∞ | All permutations |
Data Structure Operations
| Data Structure | Access | Search | Insert | Delete | Space |
|---|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) | O(n) |
| Sorted Array | O(1) | O(log n) | O(n) | O(n) | O(n) |
| Linked List | O(n) | O(n) | O(1) | O(1)* | O(n) |
| Hash Table | O(1) | O(1) | O(1) | O(1) | O(n) |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) | O(n) |
| Heap | O(1) peek | O(n) | O(log n) | O(log n) | O(n) |
| Trie | — | O(L) | O(L) | O(L) | O(N×L) |
| Stack/Queue | O(1) top | O(n) | O(1) | O(1) | O(n) |

\* O(1) given a reference to the node; finding it first costs O(n).
Space-Time Tradeoffs
Hash mapO(n) space for O(1) lookup (Two Sum)
MemoizationO(states) space for O(1) per state (DP)
TrieO(N×L) space for O(L) prefix search
Sorted arrayO(n log n) sort for O(log n) search
Prefix sumO(n) space for O(1) range queries
Monotonic stackO(n) space for O(n) next greater
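The prefix-sum tradeoff in the list above is worth a concrete sketch: pay O(n) space once, then answer any range-sum query in O(1).

```python
# ── Prefix sum: O(n) preprocessing, O(1) per range query ──
# prefix[i] holds the sum of nums[0..i-1], so any range sum is a
# difference of two precomputed values.
def build_prefix(nums: list[int]) -> list[int]:
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix: list[int], left: int, right: int) -> int:
    # Sum of nums[left..right] inclusive (0-based indices).
    return prefix[right + 1] - prefix[left]
```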
Amortized Analysis
Dynamic array appendO(1) amortized (doubling)
Hash table insertO(1) amortized (rehash rare)
Union-FindO(α(n)) amortized (inverse Ackermann)
Splay treeO(log n) amortized per operation
Garbage collectionO(1) amortized per allocation
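To see why dynamic-array append is O(1) amortized, this hypothetical counter simulates the doubling strategy and tallies element copies: n appends cost fewer than 2n copies total, because resizes happen at sizes 1, 2, 4, ... and the geometric series is bounded by 2n.

```python
# ── Amortized cost of append with capacity doubling ──
# Each resize copies `size` elements into a buffer twice as large;
# the total copy count over n appends stays below 2n.
def count_copies(n: int) -> int:
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # copy everything into the new buffer
            capacity *= 2
        size += 1
    return copies
```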
complexity_examples.py
# ── Complexity Cheat Sheet ──
# Loops: multiply nested depths
def nested_loops(n):  # O(n²)
    for i in range(n):
        for j in range(n):
            pass
# Halving: O(log n)
def binary_search(arr, t):  # O(log n)
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == t: return mid
        if arr[mid] < t: lo = mid + 1
        else: hi = mid - 1
    return -1
# Two pointers: O(n) — not O(n²) even though two loops
def two_pointers(arr):  # O(n)
    i, j = 0, len(arr) - 1
    while i < j:
        # process arr[i], arr[j]
        i += 1; j -= 1
# Backtracking: O(2ⁿ) for subsets, O(n!) for permutations
def subsets(arr):  # O(n × 2ⁿ) time
    def bt(start, path):
        result.append(path[:])
        for i in range(start, len(arr)):
            path.append(arr[i])
            bt(i + 1, path)
            path.pop()
    result = []
    bt(0, [])
    return result
# Recursion with memoization: O(n × states)
def fib(n, memo={}):  # O(n) with memo, O(2ⁿ) without
    if n <= 1: return n
    if n in memo: return memo[n]
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
# ── Master Theorem ──
# T(n) = aT(n/b) + O(nᵈ)
# If d < log_b(a): O(n^(log_b(a))) — work at leaves dominates
# If d = log_b(a): O(nᵈ log n) — equal work at all levels
# If d > log_b(a): O(nᵈ) — work at root dominates
#
# Example: Merge sort T(n) = 2T(n/2) + O(n)
# a=2, b=2, d=1 → d = log₂(2) = 1 → O(n log n) ✓
complexity_tips.py
# ── Quick Complexity Estimation Guide ──
# Problem: "Given an array of size n..."
# One pass: O(n)
# Sort + pass: O(n log n)
# Nested loops: O(n²)
# Binary search: O(log n)
# Subsets/backtracking: O(2ⁿ)
# Permutations: O(n!)
# Problem: "Given a string of length n..."
# Single scan: O(n)
# With hash set: O(n)
# Two nested scans: O(n²)
# Sort + scan: O(n log n)
# Problem: "Given matrix m×n..."
# Single scan: O(m × n)
# BFS/DFS: O(m × n)
# DP on grid: O(m × n)
# All paths (DP): O(m × n)
# Permute cells: O((m×n)!)
# Problem: "Given tree with N nodes..."
# BFS/DFS traversal: O(N)
# Recursive: O(N) visits
# Path sum problems: O(N)
# Subtree comparisons: O(N²) worst case
# ── Space Complexity ──
# Input: Usually NOT counted in space complexity
# Output: Usually NOT counted
# Extra: What you allocate beyond input/output
#
# Common space usage:
# Array/HashSet: O(n)
# 2D DP table: O(n²) or O(n×m)
# Recursion stack: O(depth) — O(n) for list, O(log n) for balanced tree
# Trie: O(N × L) where L = avg word length
# Hash map of counters: O(k) where k = unique elements
💡
Interview tip: Always state your time and space complexity before coding. If asked to optimize, first consider: can you eliminate the inner loop (hash map, two pointers)? Can you reduce space (reuse array, iterate backward)? Can you use binary search (sorted property)?