528. Random Pick with Weight - Detailed Explanation


Problem Statement

You are given a 0-indexed array of positive integers w where w[i] represents the weight of index i. You need to design a class Solution that supports two operations:

  • Solution(w) (constructor): Initializes the object with the given weights array w.
  • pickIndex(): Returns a random index i in the range [0, w.length - 1] such that the probability of returning index i is proportional to w[i]. In other words, pickIndex() returns index i with probability w[i] / sum(w).

The goal is to simulate a random selection where indices with larger weights are picked more often (i.e., higher weight means higher probability of being chosen).

Example 1:

  • Input: w = [1]
  • Output: 0
  • Explanation: There is only one index (0) with weight 1. pickIndex() will always return 0 since it's the only choice.

Example 2:

  • Input: w = [1, 3]
  • Output (one possible sequence of calls): 1, 1, 1, 1, 0
  • Explanation: Here index 0 has weight 1 and index 1 has weight 3. The probability of picking index 0 is 1/(1+3) = 0.25 (25%), and the probability of picking index 1 is 3/(1+3) = 0.75 (75%). In a series of calls to pickIndex(), index 1 is expected to appear about three times as often as index 0. In the example output, across 5 calls index 1 appears 4 times and index 0 appears once, which is consistent with these probabilities. (Since the selection is random, each individual call’s result may vary; the shown sequence is just one possible outcome.)

Example 3:

  • Input: w = [2, 5, 3, 4]
  • Output (one possible sequence of calls): 1, 3, 1, 2, 1
  • Explanation: The weights sum to 2+5+3+4 = 14. The probability of each index is: index 0 = 2/14 (≈14.3%), index 1 = 5/14 (≈35.7%), index 2 = 3/14 (≈21.4%), index 3 = 4/14 (≈28.6%). One possible outcome for five calls is shown above, where index 1 (the heaviest weight) appears most frequently. For instance, if a call to pickIndex() returns index 3, that means the random choice fell into the range allocated to index 3 (weights and ranges are explained below). Over many calls, index 1 should be returned about 36% of the time, index 3 about 29%, index 2 about 21%, and index 0 about 14%.

Constraints:

  • 1 <= w.length <= 10^4
  • 1 <= w[i] <= 10^5
  • pickIndex() will be called at most 10^4 times.

These constraints mean the solution should be efficient in both time and memory. The weights are all positive, so no index has zero probability. Edge cases include a single-element array (always return index 0) and very large arrays or weight values (where performance and potential integer overflow need to be considered).

Hints for Solution

  • Think about how you might simulate the process of picking an index with the given probabilities. Can you transform the weight distribution into something like a range or interval that each index occupies?
  • One idea: If you had a list of items where each index i appears w[i] times, then choosing a random item from that list would naturally pick index i with the correct probability. How can you avoid explicitly creating such a large list (which could be infeasible)?
  • Consider using prefix sums to aggregate weights. If you know the cumulative weight up to each index, how could a random number help you decide which index to pick?
  • Once you have a total sum of weights, try generating a random number in the range of this sum. Determine how to efficiently find which segment of the cumulative weight this number falls into (this is where binary search or other data structures might help).

Multiple Approaches

Brute Force Approach

A straightforward approach is to directly use the weights to simulate the random pick. Although this approach works logically, it may not be efficient for large inputs.

Idea: Use the weights to create a "sample space" of indices, then pick a random index from that space uniformly. For example, if w = [1, 3, 2], imagine a list that contains one 0 (for weight 1), three 1's (for weight 3), and two 2's (for weight 2):

sample_list = [0, 1, 1, 1, 2, 2]

If you choose a random element from sample_list, you’ll get 0 with probability 1/6, 1 with probability 3/6, and 2 with probability 2/6, matching the weights. This concept is simple, but constructing such a list explicitly is very inefficient when weights are large (the list length would be sum(w), which can be up to 10^9 in the worst case!).
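To make the idea concrete, here is a throwaway Python sketch of the explicit sample list (only viable when sum(w) is small; the variable names are illustrative):

import random

w = [1, 3, 2]
# Repeat each index i exactly w[i] times.
sample_list = [i for i, weight in enumerate(w) for _ in range(weight)]
print(sample_list)                  # [0, 1, 1, 1, 2, 2]
print(random.choice(sample_list))   # 0 with prob 1/6, 1 with prob 3/6, 2 with prob 2/6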

A better brute-force strategy is to avoid storing the huge list and instead use a linear scan to find the random index:

  1. Compute the total sum of weights S = sum(w).
  2. Generate a random integer r in the range [1, S] (inclusive).
  3. Iterate through the weight array, accumulating weights until the running sum exceeds or equals r. The index at which this happens is the chosen index. This works because you're essentially finding which interval of the cumulative weights r falls into.
    • For instance, if w = [1,3,2] (total S=6) and r = 5, you would accumulate: after index 0 sum=1 (not yet ≥5), after index 1 sum=4 (not yet ≥5), after index 2 sum=6 (now ≥5), so you stop at index 2. Index 2 is the result, which makes sense because r=5 falls into the portion of the number line allocated to index 2 (which covers the fifth and sixth positions out of 6).

Python Implementation (Brute Force – Linear Scan):

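A minimal sketch of the linear-scan approach (variable names are illustrative):

import random

class Solution:
    def __init__(self, w):
        self.w = w
        self.total = sum(w)                    # total weight S = sum(w)

    def pickIndex(self):
        r = random.randint(1, self.total)      # random integer in [1, S]
        running = 0
        for i, weight in enumerate(self.w):
            running += weight
            if running >= r:                   # r falls inside index i's interval
                return i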

Java Implementation (Brute Force – Linear Scan):

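A corresponding sketch in Java (again, variable names are illustrative):

import java.util.Random;

class Solution {
    private final int[] w;
    private final int total;
    private final Random rand = new Random();

    public Solution(int[] w) {
        this.w = w;
        int sum = 0;
        for (int weight : w) sum += weight;    // total weight S (fits in int per constraints)
        this.total = sum;
    }

    public int pickIndex() {
        int r = rand.nextInt(total) + 1;       // random integer in [1, S]
        int running = 0;
        for (int i = 0; i < w.length; i++) {
            running += w[i];
            if (running >= r) return i;        // r falls inside index i's interval
        }
        return w.length - 1;                   // not reached when all weights are positive
    }
}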

Complexity Analysis (Brute Force):

  • Time Complexity: Initializing the total sum is O(n). Each call to pickIndex() in the worst case scans through the entire weight array, which is O(n). If pickIndex() is called m times, the worst-case time is O(n * m). Given the constraints (n up to 10,000 and m up to 10,000), this could be up to ~10^8 operations in the worst case. In C++ or Java that is borderline but might pass; in Python it would likely be too slow.

  • Space Complexity: O(1) extra space (not counting the input). We only store a few integers. (If we had implemented the explicit sample list method, space would be O(sum(w)), which is infeasible for large weights.)

Optimal Approach (Prefix Sum + Binary Search)

The optimal solution uses a prefix sum array combined with binary search to pick an index efficiently. The idea is to map each index to a range on the number line proportional to its weight, then use a random number to select a range.

How it works:

  1. Prefix Sum Construction: Compute a prefix sum array prefix such that prefix[i] is the sum of weights from index 0 up to i. In other words, prefix[i] = w[0] + w[1] + ... + w[i]. The last value prefix[n-1] will be the total sum of all weights. For example, if w = [2,5,3,4], then the prefix sums would be [2, 7, 10, 14]. Each prefix value represents the upper bound of the range for that index. Index 0 covers range 1..2, index 1 covers 3..7, index 2 covers 8..10, and index 3 covers 11..14 in this example (these ranges are exactly the weight counts for each index).

  2. Random Number Pick: Generate a random integer r in the range [1, total_sum]. This r will fall into one of the intervals defined by the prefix sums. Using the above example (total_sum = 14):

    • If r is 1 or 2, it falls in the first interval (r ≤ prefix[0] = 2), so index 0 is picked.
    • If r is 3 to 7, it falls in the second interval (2 < r ≤ prefix[1] = 7), so index 1 is picked.
    • If r is 8 to 10, it falls in the third interval (7 < r ≤ prefix[2] = 10), so index 2 is picked.
    • If r is 11 to 14, it falls in the fourth interval (10 < r ≤ prefix[3] = 14), so index 3 is picked.
  3. Binary Search: Instead of scanning linearly to find which interval r falls into, we can use binary search on the prefix array to find the smallest index such that prefix[index] >= r. This index is the result of pickIndex(). Binary search reduces the selection time from O(n) to O(log n).

By using binary search on the prefix sums, we efficiently determine the correct index for any random value.

Python Implementation (Optimal – Prefix Sum + Binary Search):

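A minimal sketch consistent with the discussion below (variable names are illustrative):

import random
from bisect import bisect_left

class Solution:
    def __init__(self, w):
        self.prefix = []
        running = 0
        for weight in w:
            running += weight
            self.prefix.append(running)            # prefix[i] = w[0] + ... + w[i]
        self.total = running

    def pickIndex(self):
        target = random.randint(1, self.total)     # random integer in [1, total_sum]
        return bisect_left(self.prefix, target)    # first index with prefix >= target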

In the Python code above, bisect_left(self.prefix, target) returns the leftmost index where target could be inserted to keep self.prefix sorted. This effectively gives the index of the first prefix sum that is >= target, which is exactly what we want. (If target equals a prefix sum exactly, bisect_left will return that index, which is correct because that index’s range includes the prefix sum value.)

Java Implementation (Optimal – Prefix Sum + Binary Search):

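A corresponding sketch in Java with a manual binary search (variable names are illustrative):

import java.util.Random;

class Solution {
    private final int[] prefix;
    private final int total;
    private final Random rand = new Random();

    public Solution(int[] w) {
        prefix = new int[w.length];
        int running = 0;
        for (int i = 0; i < w.length; i++) {
            running += w[i];
            prefix[i] = running;               // prefix[i] = w[0] + ... + w[i]
        }
        total = running;
    }

    public int pickIndex() {
        int target = rand.nextInt(total) + 1;  // random integer in [1, total_sum]
        int lo = 0, hi = prefix.length - 1;
        while (lo < hi) {                      // narrow down to the first prefix >= target
            int mid = lo + (hi - lo) / 2;
            if (prefix[mid] < target) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}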

In the Java code, we perform a manual binary search on the prefix array to find the correct index. We could also use Arrays.binarySearch with a tweak (searching for target in prefix and interpreting the result), but the manual binary search is straightforward: we narrow down the range lo..hi until lo == hi, at which point lo is the index of the first prefix that is not less than target.

Complexity Analysis (Optimal Approach):

  • Time Complexity: Computing the prefix sum array takes O(n). Each call to pickIndex() uses binary search on the prefix array, which is O(log n). If m calls are made, the total time complexity is O(n + m·log n). With n and m up to 10,000, the worst case is about 10,000 + 10,000·log₂(10,000) ≈ 10,000 + 10,000·14 ≈ 150,000 operations, which is very efficient.
  • Space Complexity: O(n) for storing the prefix sum array. The prefix array is the same length as the input weights array. (We use a few additional variables for sums and indices, and in Python the bisect module, but that’s negligible extra space.)

Step-by-Step Visual Walkthrough

Let's walk through the process with a visual example to build intuition. Consider the weights array: w = [1, 2, 3, 4, 5]. This means:

  • Index 0 has weight 1 (small chance to be picked).
  • Index 1 has weight 2.
  • Index 2 has weight 3.
  • Index 3 has weight 4.
  • Index 4 has weight 5 (largest chance to be picked).

The total weight sum is 1+2+3+4+5 = 15. Imagine a number line from 1 to 15 representing the range of this total weight.

Visualization: The top row (blue numbers) shows each index repeated in proportion to its weight, essentially simulating the “sample space” of 15 outcomes. The bottom row (red markers on a line) shows the prefix sum intervals for each index on the number line from 1 to 15. Index 0 occupies only the segment [1,1], index 1 occupies [2,3], index 2 occupies [4,6], index 3 occupies [7,10], and index 4 occupies [11,15]. This corresponds to the prefix sum array [1, 3, 6, 10, 15] – for example, prefix[2] = 6 means indices ≤2 cover range 1..6.

When we call pickIndex(), we generate a random number in 1..15 and see where it falls:

  • If the random number falls in 1..1, it picks index 0.

  • If it falls in 2..3, it picks index 1.

  • If it falls in 4..6, it picks index 2.

  • If it falls in 7..10, it picks index 3.

  • If it falls in 11..15, it picks index 4.

    Example: Suppose pickIndex() generates the random number 8 (yellow highlight in the diagram). We look at the prefix intervals and see that 8 falls into the range [7,10], which corresponds to index 3 (the interval for index 3). So pickIndex() would return 3. If another call generates the random number 15 (green highlight), it falls into [11,15], the range for index 4, so the result would be index 4. In this way, each index’s chance of being picked is exactly proportional to the length of its interval, which matches its weight. The binary search approach finds these interval boundaries quickly even when the number line (total sum) is large.

Through this visualization, you can see that the prefix sum array partitions the number line into segments, one for each index, with segment lengths equal to the weights. Picking a random point on this line (uniformly) and finding which segment it lands in is effectively what our algorithm does.
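As a quick sanity check (assuming the Solution class sketched above), counting the results of many calls should give frequencies close to w[i]/15:

from collections import Counter

sol = Solution([1, 2, 3, 4, 5])                   # weights from the walkthrough
trials = 150_000
counts = Counter(sol.pickIndex() for _ in range(trials))
for i in range(5):
    print(i, round(counts[i] / trials, 3))        # roughly 0.067, 0.133, 0.200, 0.267, 0.333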

Common Mistakes and Edge Cases

  • Off-by-One Errors: A frequent pitfall is handling the random range and the binary search conditions inconsistently. For example, if you use random.randint(0, total_sum) (which gives 0 to total_sum inclusive) or rand.nextInt(total_sum) (which gives 0 to total_sum-1) without adjusting, you have to be careful to map that to your prefix array correctly. In our solution, we chose [1, total_sum] as the random range and looked for the first prefix ≥ that number. Alternatively, using [0, total_sum - 1] as the range and then finding the first prefix > that number is also valid. The key is to be consistent; an off-by-one error can skew the distribution or even cause an index-out-of-range error. A short sketch contrasting the two conventions appears after this list.

  • Not Using Binary Search (when needed): Some might attempt to linearly scan the prefix array for each pick (as in the brute-force approach). This works but is slow for large input sizes or many picks. Forgetting to optimize with binary search (or an equivalent structure) can lead to timeouts.

  • Integer Overflow: In languages like Java or C++, watch out for potential overflow when computing the prefix sum if the total sum exceeds the maximum value of the data type. In this problem, the maximum total sum is 10,000 * 100,000 = 10^9, which fits in a 32-bit int, but it’s a good habit to use a larger type (like long in Java) for the sum if the limits are unclear or could grow. In our Java solution, int was sufficient due to the problem constraints, but for a variation with larger weights, long would be safer.

  • Zero or Negative Weights: By problem definition, weights are positive. But if you ever encounter a variation where weights could be zero or if the input had a zero (despite constraints), ensure your logic skips or handles those properly (a zero-weight index should essentially never be picked). Negative weights would not make sense in this context and should be treated as invalid input.

  • Misinterpreting Probability: A conceptual mistake is trying to directly use the weights as probabilities without normalization. Remember, the probability for index i is w[i] / sum(w). Our method effectively normalizes by using the random range up to sum(w). If someone tried to pick an index by, say, random.choice on the array of weights or other ad-hoc methods, they may not get the correct distribution. Always ensure the method truly reflects the weighted probabilities.
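To illustrate the off-by-one point from the first bullet, here is a small Python sketch (using the prefix array [1, 3, 6, 10, 15] from the walkthrough) showing two internally consistent conventions:

import random
from bisect import bisect_left, bisect_right

prefix = [1, 3, 6, 10, 15]      # cumulative sums for w = [1, 2, 3, 4, 5]
total = prefix[-1]

# Convention A: r in [1, total]; pick the first index with prefix[i] >= r.
r = random.randint(1, total)
idx_a = bisect_left(prefix, r)

# Convention B: r in [0, total - 1]; pick the first index with prefix[i] > r.
r = random.randrange(total)
idx_b = bisect_right(prefix, r)

# Mixing the conventions (e.g. a 1-based r with bisect_right) skews the
# distribution and can even return an out-of-range index.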

This pattern of prefix sum + binary search for random selection appears in various scenarios:

  • Random Pick with Blacklist (LeetCode 710): You need to pick a random index from a range [0, N) excluding certain "blacklisted" indices. The probabilities are uniform for allowed indices, but the idea of mapping ranges and using random numbers is similar. (That problem can be solved with a clever mapping technique since all allowed indices have equal weight.)

  • Random Point in Non-Overlapping Rectangles (LeetCode 497): Here you pick a random point from given rectangles, where each rectangle’s chance is proportional to its area. The solution first picks a rectangle using weight = area (often via prefix sums and binary search) and then picks a random point within that rectangle. It’s a direct extension of the weighted random selection concept to two dimensions.

  • Random Pick Index (LeetCode 398): In this problem, given an array and a target value, you need to randomly return an index of the target value (assuming target appears multiple times). All such indices should be equally likely. This is a simpler scenario (weights are equal for the target’s occurrences). It can be solved by reservoir sampling or simply collecting indices. While not a weighted problem, it deals with random selection under certain constraints.

  • Linked List Random Node (LeetCode 382): Pick a random node from a linked list (each node equal probability). This is uniform random selection, which can be done in one pass with reservoir sampling since the list length might not be known in advance. Again, not weighted, but related to random selection.

  • Sampling and Probability in general: The technique used in this problem is essentially what’s known in statistics as sampling from a discrete probability distribution. Another advanced method to achieve the same result is the Alias Method, which preprocesses the weights into a special structure to allow O(1) random picks. The alias method is more complex to implement but can be more efficient for huge numbers of picks; a brief sketch appears after this list.

  • Variations: If the problem were extended to allow updating weights dynamically (not in this LeetCode problem, but hypothetically), you would need a data structure that supports updating and querying prefix sums quickly (like a Fenwick tree or segment tree) to maintain efficiency. Also, if weights were given as probabilities (floats) instead of integers, you’d accumulate them similarly (or scale to an integer range) and use binary search, being mindful of precision.
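For reference, here is a compact sketch of the alias method (Vose's variant; build_alias and alias_pick are illustrative names, not part of the problem):

import random

def build_alias(weights):
    # Scale weights so their average is 1, then pair "small" and "large" columns.
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l        # keep s with prob scaled[s], else redirect to l
        scaled[l] -= 1.0 - scaled[s]            # l donated part of its mass to s's column
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                     # leftovers have weight exactly 1 (up to rounding)
        prob[i] = 1.0
    return prob, alias

def alias_pick(prob, alias):
    i = random.randrange(len(prob))             # choose a column uniformly
    return i if random.random() < prob[i] else alias[i]

After an O(n) build, each pick is O(1), which pays off when pickIndex() is called far more often than the structure is built.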
