3133. Minimum Array End - Detailed Explanation
Problem Statement:
Definition: You are given two integers `n` and `x`. Construct an array `nums` of positive integers with length `n` that satisfies:
- `nums[i+1] > nums[i]` for every `0 <= i < n-1` (the array is strictly increasing).
- The bitwise AND of all elements in `nums` is exactly `x`. In other words, `nums[0] & nums[1] & ... & nums[n-1] = x`.
Goal: Return the minimum possible value of the last element `nums[n-1]` (the end of the array) under these conditions.
Examples:
- Example 1:
  Input: `n = 3`, `x = 4`
  Output: `6`
  Explanation: One valid construction is `nums = [4, 5, 6]`. The bitwise AND of all numbers is `4 & 5 & 6 = 4`, which equals `x`. The last element is `6`. (Other sequences like `[4, 5, 7]` also meet the conditions, but `6` is the smallest possible last value.)
- Example 2:
  Input: `n = 2`, `x = 7`
  Output: `15`
  Explanation: We can take `nums = [7, 15]`. Here `7 & 15 = 7`, which equals `x`. The last element is `15`. (There is no way to make the second element smaller than 15 while keeping the AND equal to 7.)
- Example 3:
  Input: `n = 5`, `x = 2`
  Output: `10`
  Explanation: One optimal array is `nums = [2, 3, 6, 7, 10]`. All elements share bit 1 (the bit with value 2), so `2 & 3 & 6 & 7 & 10 = 2`. The last element is `10`, which is the minimum possible for 5 increasing numbers satisfying the AND condition.
Constraints:
- `1 <= n <= 10^8`
- `1 <= x <= 10^8`
These large constraints mean we need an efficient solution. A brute-force or naive approach that generates all the numbers for very large `n` would be too slow, so we must look for a mathematical or bitwise pattern to compute the result.
Hints:
- Hint 1: Consider what the bitwise AND requirement implies for each bit position. If a certain bit is 1 in `x`, how must that bit appear in every element of the array? If a bit is 0 in `x`, what does that tell you about at least one element in the array?
- Hint 2: What is the smallest possible value for the first element `nums[0]`? (Think about the requirement that all elements' AND is `x`.) Starting from that, how can you get `n` distinct values that all keep the AND equal to `x`?
- Hint 3: Try constructing small examples by hand. For instance, if `x` has a single bit (like `x = 4`, which is `100` in binary), how does the sequence of numbers look? What if `x` has multiple 1-bits (like `x = 6`, which is `110` in binary)? Look for a pattern in how the last element grows as `n` increases.
- Hint 4: Think in binary. The condition that the AND of all numbers is `x` means every bit that is 1 in `x` stays 1 in all numbers. Bits that are 0 in `x` can vary among the numbers. This means we can use those 0 bits to create differences between array elements.
- Hint 5: If you find the sequence by simulation (incrementing and adjusting bits), see if you can derive a direct way to compute the final number from `n` and `x` without generating the entire array.
Multiple Approaches:
Brute Force Approach (Simulation):
Idea: Use the conditions directly to simulate the construction of the array. The smallest possible first element is `x` itself (every element must contain all bits of `x`; any number smaller than `x` would miss a bit that `x` has). From there, generate each subsequent number as the smallest integer greater than the previous one whose bitwise AND with `x` equals `x`. This can be done by checking each increment, or with a bit-operation trick.
How to ensure the AND remains `x`: A number preserves the AND `x` only if it contains all the 1-bits of `x`. So every number in `nums` must have every bit that `x` has. If `x` in binary is, say, `1010`, then every `nums[i]` must have bits 1 and 3 set (matching `x`). They can have other bits set too; those extra bits are turned off in the overall AND as long as at least one array element doesn't have them. The sequence increases gradually by turning on extra bits (ones `x` doesn't have) as needed to get `n` distinct numbers.
Step-by-Step (Brute Force) Example:
Consider `x = 6` (binary `110`) and `n = 4`. We need 4 increasing numbers, all containing bits 1 and 2 (0-indexed from the LSB), because `x = 6` has those bits set.
- Start with `nums[0] = 6` (the smallest number that has bits 1 and 2 set).
- To get `nums[1]`, try the next integer, `7` (binary `111`). Does it have bits 1 and 2? Yes (it has bits 0, 1, and 2 set). So `nums[1] = 7`.
- For `nums[2]`, the next integer after 7 is 8 (`1000`), which is missing bit 1, so it cannot be in the array (it would drop a required bit). Next is 9 (`1001`, missing bit 1), then 10 (`1010`, missing bit 2), then 11 (`1011`, missing bit 2, so it is not acceptable), then 12 (`1100`, missing bit 1), then 13 (`1101`, missing bit 1), then 14 (`1110`). 14 has bits 1 and 2 set (and bit 3 as well), so it works: `nums[2] = 14`.
- For `nums[3]` (the 4th number), continue from 14: 15 (`1111`) has bits 1 and 2, so it is valid. We take `nums[3] = 15`.
Our array is `[6, 7, 14, 15]`. Every element has bits 1 and 2 set, so the AND is `6` (the extra bits, like bit 0 or bit 3, are not present in all elements, so they drop out of the AND). The last element is 15, which is minimal for these conditions.
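The claims about `[6, 7, 14, 15]` are easy to verify with a few lines of Python (a quick sanity check, not part of the solution itself):

```python
from functools import reduce

nums = [6, 7, 14, 15]

# Every element must contain all 1-bits of x = 6 (binary 110).
assert all(v & 6 == 6 for v in nums)

# The array must be strictly increasing.
assert all(a < b for a, b in zip(nums, nums[1:]))

# The AND of all elements collapses back to exactly 6.
assert reduce(lambda a, b: a & b, nums) == 6
print("AND of", nums, "=", reduce(lambda a, b: a & b, nums))
```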
As you can see, the brute force approach checks each subsequent number and ensures it keeps the required bits. A direct implementation would increment and bit-check until it found `n` numbers. This works for small cases, but for large `n` (up to 100 million!) it is far too slow. We need to optimize.
Optimized Approach (Bit Manipulation Pattern):
Idea: Instead of simulating number by number, derive a direct calculation for the final number `nums[n-1]`. We can use binary representations to skip the brute-force iteration. The key observation is that constructing the array is effectively counting up in binary while always including the bits of `x` in every number; that is, counting through numbers but skipping any number that would drop a bit of `x`.
Understanding the Pattern:
Treat the bits of `x` as "fixed" 1s that must always be present. Any bits that are 0 in `x` are "free" to vary among the array elements. When we list the valid array elements in increasing order, we are essentially listing all numbers that contain `x` as a subset of their bits, in increasing order.
If you take those "free" bits (the 0s of `x`), the sequence of allowed numbers corresponds to counting up in binary using only those free bit positions. We start with 0 extra bits (which gives the first number as `x` itself), then the next combination of extra bits, and so on.
Another way to see it: imagine writing, in binary, all numbers that contain all bits of `x`. If you strip away the bits that `x` always has, what remains is just combinations of the other bits. For example, if `x = 4` (`100`), then every allowed number has bit 2 set, and bits 1 and 0 can vary. The allowed numbers (with base `100`) are:
100  (base bits only)     -> 4
101  (base + bit0)        -> 5
110  (base + bit1)        -> 6
111  (base + bit1 + bit0) -> 7
1100 (base + bit3)        -> 12
... and so on.
These correspond to taking the binary count of extra bits (00, 01, 10, 11, 100, ...) and overlaying them onto the base `100`.
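This enumeration can be reproduced mechanically by filtering for numbers that contain all bits of `x` (a small illustrative sketch):

```python
x = 4  # binary 100

# Keep only the numbers whose bits are a superset of x's bits,
# i.e. candidates satisfying (num & x) == x.
allowed = [num for num in range(1, 20) if num & x == x]
print(allowed)  # [4, 5, 6, 7, 12, 13, 14, 15]
```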
Efficient Construction: We can determine `nums[n-1]` by taking the binary number `n - 1` and merging its bits into the positions of the 0-bits of `x`. Why `n - 1`? Because if we count starting from 0, there are `n` numbers from 0 to `n - 1` inclusive. Each of these in binary represents one combination of extra bits (including the combination "no extra bits", which corresponds to the first element being `x` itself). The 0th combination (extra bits = 0) gives `x`; the 1st combination (extra bits = binary `1`) gives the next number; and so on, until the `(n-1)`th combination gives the last number.
Algorithm for optimized approach:
- Start with `ans = x` (this accumulates the result).
- Take `k = n - 1` (the index of the extra-bit combination we need on top of `x`).
- Iterate over bit positions from the least significant bit (LSB) upward:
  - If the current bit position `i` is 0 in `x` (meaning `x` does not have this bit set), this position can be used by the "extra bits" from our count `k`. Look at the next unconsumed bit of `k` (starting from its LSB). If that bit of `k` is 1, set bit `i` of `ans` to 1; if it is 0, leave bit `i` of `ans` as 0. Then move to the next bit of `k` for the next zero position of `x`.
  - If bit `i` is 1 in `x`, it remains 1 in `ans` (because all numbers must have it). Skip it for the purposes of placing `k`'s bits; do not consume a bit of `k`, since `k`'s bits only fill the gaps where `x` has 0.
- Continue until all bits of `k` have been placed into `ans` (or you run out of positions in the necessary range). The resulting `ans` is the minimum last element.
This method effectively interweaves the bits of `n - 1` into the "gaps" of `x`'s bit pattern. It runs in time proportional to the number of bits processed, on the order of `log2(n) + log2(x)`, which is very efficient even for large values.
Step-by-Step (Optimized) Example:
Let's walk through the optimized approach with `n = 5` and `x = 2` (as in Example 3, where we expect output 10).
- `x = 2` in binary is `10` (bit 1 is 1, bit 0 is 0). So the only base bit that must remain 1 in all numbers is at position 1.
- The zero bits of `x`, from the LSB upward, are at positions 0, 2, 3, and so on (position 1 is a 1, so it is skipped).
- We take `k = n - 1 = 5 - 1 = 4`. The binary representation of `4` is `100`.
- Now fill the zero positions of `x` with the bits of `4` (`100`):
  - The first zero bit of `x` is at position 0. The least significant bit of `k` (`100` read from the right) is 0, so position 0 of `ans` stays 0. Move to the next bit of `k` and the next zero of `x`.
  - The next zero bit of `x` is at position 2 (position 1 is a 1 in `x` and is skipped). The next bit of `k` is 0, so position 2 stays 0. Move to the next bit of `k` and the next zero of `x`.
  - The next zero bit of `x` is at position 3. The next bit of `k` is 1, so we set position 3 of `ans` to 1. Now all bits of `k` are used (`100` has three bits, placed at positions 0, 2, and 3 respectively).
- All higher positions that were 0 in `x` remain 0 in `ans`, since we have run out of `k` bits.
- Now combine this with the original bits of `x`: `x` had bit 1 = 1, so bit 1 of `ans` is 1. After filling:
  - Bit 1 is 1 (from `x`),
  - Bit 3 is 1 (set from `k`),
  - Bits 0 and 2 are 0 (either from `x` or filled with 0 from `k`),
  - So `ans` in binary is `1010`, which is `10` in decimal.
- Result: `ans = 10`, which matches the expected minimal last element.
This method quickly gives the result without enumerating all intermediate numbers.
Code Implementations:
Below are code solutions in Python and Java for both the brute force simulation and the optimized approach. The brute force approach is mainly for demonstration and works only for small cases; the optimized approach is necessary for efficiency on large inputs. Each implementation includes a simple `main` or test demonstration.
Python – Brute Force Solution:
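A sketch of the simulation, consistent with the explanation that follows (the function name is illustrative; `next_num` matches the name used in the explanation):

```python
def min_array_end_brute_force(n: int, x: int) -> int:
    """Simulate building the array; return the minimal last element.

    Only practical for small n: the outer loop runs n - 1 times.
    """
    current = x  # the smallest valid first element is x itself
    for _ in range(n - 1):
        next_num = current + 1
        # Advance until the candidate keeps every 1-bit of x.
        while next_num & x != x:
            next_num += 1
        current = next_num
    return current


# Demonstration on the worked examples:
print(min_array_end_brute_force(3, 4))  # 6
print(min_array_end_brute_force(2, 7))  # 15
print(min_array_end_brute_force(5, 2))  # 10
```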
Explanation: The brute force function incrementally finds each next valid number. It checks `next_num & x == x`, which ensures `next_num` has all bits of `x`. This approach works, but if `n` is very large the loop runs `n - 1` times, which is not feasible when `n` is up to 1e8. The demonstration calls show it produces the expected outputs for the examples given.
Python – Optimized Solution:
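A sketch of the optimized solution, using the two counters named in the explanation that follows (`bit_position` and `bit_index_in_k`; the function name is illustrative):

```python
def min_array_end(n: int, x: int) -> int:
    """Merge the bits of k = n - 1 into the zero-bit positions of x."""
    ans = x
    k = n - 1
    bit_position = 0    # bit currently examined in the result
    bit_index_in_k = 0  # next bit of k to place
    while k >> bit_index_in_k:  # stop once all bits of k are placed
        if (x >> bit_position) & 1 == 0:
            # Free position: copy the next bit of k into it.
            if (k >> bit_index_in_k) & 1:
                ans |= 1 << bit_position
            bit_index_in_k += 1
        # A 1-bit of x stays set in ans and consumes no bit of k.
        bit_position += 1
    return ans


# Demonstration on the worked examples:
print(min_array_end(3, 4))  # 6
print(min_array_end(2, 7))  # 15
print(min_array_end(5, 2))  # 10
```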
Explanation: The optimized function iterates through the bit positions. It uses two counters: `bit_position` for the actual bit in the result and `bit_index_in_k` for which bit of `k = n - 1` to apply. It checks whether `x` has a bit at the current position. If yes, it keeps that bit as 1 in the result (and does not consume a bit from `k`). If `x` has 0 at this position, it uses the next bit of `k` to decide whether the result gets a 1 or a 0 at this bit. After processing all bits of `k`, the result is ready. The demo prints confirm the function works for the example cases.
Java – Brute Force Solution:
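A sketch of the Java simulation, consistent with the note that follows (class and method names are illustrative):

```java
public class MinimumArrayEndBruteForce {

    // Simulate the array construction; practical only for small n.
    static long minEndBruteForce(long n, long x) {
        long current = x;  // the smallest valid first element is x
        for (long i = 1; i < n; i++) {
            long next = current + 1;
            // Advance until the candidate keeps every 1-bit of x.
            while ((next & x) != x) {
                next++;
            }
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println(minEndBruteForce(3, 4)); // 6
        System.out.println(minEndBruteForce(2, 7)); // 15
        System.out.println(minEndBruteForce(5, 2)); // 10
    }
}
```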
Note: We use `long` in Java for the result, since the numbers can exceed the range of a 32-bit `int` when `n` is large (the result can be much larger than `x`). The brute force is again only suitable for small `n` due to performance.
Java – Optimized Solution:
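A sketch of the optimized Java version, consistent with the explanation that follows, bounding the loop with `Long.SIZE` while checking the remaining significant bits of `k` (class and method names are illustrative):

```java
public class MinimumArrayEnd {

    // Interleave the bits of k = n - 1 into the zero-bit positions of x.
    static long minEnd(long n, long x) {
        long ans = x;
        long k = n - 1;
        int bitIndexInK = 0; // next bit of k to place
        // Long.SIZE caps the scan; (k >> bitIndexInK) != 0 stops it
        // as soon as every significant bit of k has been placed.
        for (int pos = 0; pos < Long.SIZE && (k >> bitIndexInK) != 0; pos++) {
            if (((x >> pos) & 1L) == 0) {
                // Free position: copy the next bit of k into it.
                if (((k >> bitIndexInK) & 1L) == 1) {
                    ans |= 1L << pos;
                }
                bitIndexInK++;
            }
            // 1-bits of x stay set in ans and consume no bit of k.
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(minEnd(3, 4)); // 6
        System.out.println(minEnd(2, 7)); // 15
        System.out.println(minEnd(5, 2)); // 10
    }
}
```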
Explanation: The Java optimized version follows the same logic as the Python one. We use bit manipulation (`<<` and `&`) to test and set bits. We loop through bit positions: if `x` has a 1-bit, we leave it (the result keeps it); if `x` has a 0-bit, we assign the next bit of `k` to that position in the result. We stop once all bits of `k` have been assigned. The loop condition is written to avoid infinite looping, using Java's `Long.SIZE` (64 bits) as an upper bound while checking that we don't overshoot the significant bits of `k`.
Complexity Analysis:
- Brute Force Approach: The time complexity is O(n * B), where `B` is the bit-length of the numbers (checking `(next & x) == x` is O(B) per candidate, but B is at most about 30 for our range, so the O(n) candidate count dominates). In the worst case it checks nearly every number while building `n` elements. For large `n` (up to 100 million), this is clearly infeasible. The space complexity is O(1) (we only store a few counters and the current number). While this approach is conceptually simple, it will time out for large inputs.
- Optimized Approach: The time complexity is O(log n + log x), from iterating through the needed bit positions. In the worst case, `log2(n)` and `log2(x)` are around 27 (for n or x up to 1e8), so on the order of 30-35 iterations, which is effectively constant time for practical input sizes. This easily handles the maximum constraints. Space complexity is O(1), as we only use a few variables. This approach is efficient because each bit is processed in constant time, and the work does not grow linearly with `n`.
Why O(log n + log x)? We combine the bits of `n - 1` and `x`. If `x` is very small, we may have to consider up to `log2(n)` bit positions, because `n - 1` can have that many bits. If `x` is large (with bits near that range), it contributes up to `log2(x)` positions of its own. We iterate through the union of those bit-lengths, which sums to at most ~60 bits for our input range, effectively constant. Thus, this solution scales easily to the given limits.
Step-by-Step Walkthrough (Optimized Solution with Example):
To solidify the understanding, let's do a detailed binary walkthrough for Example 1 (`n = 3`, `x = 4`) using the optimized method:
- Input: `n = 3`, `x = 4`. `x` in binary is `100`. The bits of `x` that are 1: only bit 2. The bits of `x` that are 0: bit 0 and bit 1 (all higher bits beyond bit 2 are also 0, but we may not need them).
- We need `n = 3` numbers, so consider `k = n - 1 = 2`. `k = 2` in binary is `10`.
Now, we fill the zero bits of `x` (from the LSB upward) with the bits of `k` (`10`):
- Bit position 0: `x` has a 0 in this position (`100` has 0 at bit 0).
  - The least significant bit of `k` (2, which is `10` in binary) is 0.
  - We set bit 0 of the result to 0 (leave it as 0).
  - Move to the next bit of `k` (the next bit of `10`, which is 1) and the next zero position.
- Bit position 1: `x` has 0 in this position as well (bit 1 of `100` is 0).
  - The next bit of `k` is 1.
  - We set bit 1 of the result to 1 (to include this bit from `k`).
  - Both bits of `k` (first 0, then 1) are now used; move on to the next zero position.
- Bit position 2: `x` has 1 in this position (bit 2 of `100` is 1). This is a fixed bit that all numbers must have.
  - We ensure the result has bit 2 = 1 (it already will, since we start with result = x). We do not consume a `k` bit here, because this position is not "free"; it is mandated by `x`.
  - (We could continue scanning higher, but since all of `k`'s bits are already placed, we can stop.)
- Higher bits (bit 3, bit 4, ...): `x` has 0 in these positions, but we have no more bits of `k` to place, so all higher bits remain 0 in the result.
Now compile the result bits we determined:
- Bit 0 = 0
- Bit 1 = 1
- Bit 2 = 1 (from `x`)
- Higher bits = 0
So the result in binary is `110`, which is `6` in decimal. This matches the output we expected. The sequence constructed implicitly here is `100 (4) -> 101 (5) -> 110 (6)`, and indeed `[4, 5, 6]` was our example array.
Visualizing the bit placement:
Let's illustrate this process in a compact form:
x = 100 (binary for 4)
n-1 = 10 (binary for 2)
Positions (from LSB) : bit2 bit1 bit0
x bits : 1 0 0
n-1 bits to place : 1 0 (aligning n-1 = "10" with x's zero bits)
----------------------------------------
Result bits : 1 1 0 => 110 (binary) = 6 (decimal)
In the diagram above, we aligned the binary of `n - 1` (`10`) with the zero bits of `x`. Bit 1 and bit 0 of `x` were zero, so those became the targets for `n - 1`'s bits (1 and 0 respectively). The result's bits are then the combination of `x`'s bits (bit 2) and the placed bits.
This method scales to bigger numbers the same way: we always keep the `x` bits, and fill extra bits from `n - 1` into the gaps.
Common Mistakes & Edge Cases:
- Dropping bits of `x`: A common mistake is to start the array with a number smaller than `x`, or to allow a number in the sequence that doesn't contain all bits of `x`. Remember that every element must have all the 1-bits of `x`. The first element should essentially be `x` itself (anything lower would miss a bit that `x` has, causing the AND to drop that bit). Always check this condition: `(num & x) == x` should hold for every element `num` in the array.
- Assuming OR instead of AND: Confusion sometimes arises between AND and OR in bitwise problems. Here we require the AND of all numbers to equal `x`, which is a stricter condition than OR. For clarity: AND = `x` means each bit that is 1 in `x` is 1 in every number (and each bit that is 0 in `x` means at least one number has a 0 in that position). If you mistakenly approach this as a bitwise OR problem, you might only try to ensure each bit of `x` appears somewhere; that would be a different problem. Make sure to enforce the AND condition correctly.
- Performance pitfalls: Iterating `n` times (or more) is a huge pitfall given that `n` can be up to 100 million. Even if each step is a simple operation, 100 million steps in Python (or even Java) can be borderline. The optimized approach avoids iterating through each number. Be cautious about any approach that loops `n` times or builds the entire array explicitly for large `n`.
- Edge Case – n = 1: With only one element, the problem trivializes: the array AND must equal `x`, and there is only one number to pick, so that number must be `x`. The minimum (and only) last element is `x` itself. Our formula reflects that: if `n = 1`, then `n - 1 = 0`, we place no extra bits, and the result remains `x`.
- Edge Case – x with all bits 1 up to a certain position: If `x` is like `7` (`111`) or `15` (`1111`), it has no zero bits below its highest bit. That means you cannot create a second number without jumping to a higher bit that was zero (any change in the lower bits would drop a 1). For example, with `x = 7` (`111`) as in Example 2, the next number had to use bit 3, giving 15. This often produces a large jump for the second number. Ensure the logic correctly handles moving to a higher bit position that `x` doesn't have; the formula does this naturally by placing the next bit of `k` in the next available zero bit (bit 3 in that case).
- Checking bit positions beyond the range of `x`: If `n` is large, you may end up setting bits higher than any bit present in `x`. That's normal. For instance, if `x = 1` and `n` is large, the last element can be quite big, because many higher bits are needed to accommodate all `n` combinations. Make sure your solution doesn't assume a fixed bit-size for `x` and iterates far enough to place all needed bits of `n - 1`.
- Using the wrong data type: In languages like Java or C++, the result can exceed the range of a 32-bit integer when many high-order bits get set. For safety, use a larger (64-bit) type for the calculation; our Java solution uses `long` for this reason. In Python, integers can grow arbitrarily large, so this is not an issue there.
Alternative Variations:
This problem showcases a neat technique of combining bit patterns. A few variations and related concepts are worth noting:
- Bitwise OR instead of AND: Imagine a variation where the bitwise OR of all elements must equal some target `x`. To minimize the last element there, you would do the opposite: every bit that is 0 in `x` must be 0 in all numbers (rather than in at least one number), while the 1-bits of `x` can be distributed among the elements. If the OR of all elements must equal `x`, the simplest construction often takes the first element as `x` (0 may not be allowed, since elements must be positive) and later elements as numbers that introduce no new 1-bits. That problem is much easier; the last element can often just be `x` itself.
- Bitwise XOR conditions: Another variation could require the XOR of all elements to equal `x`. XOR has different properties (a bit is 1 if an odd number of elements have a 1 in that bit). Constructing arrays under XOR conditions is a different challenge, often solvable by greedy or pairing approaches.
- Different update operations: Generating the next number as `(prev + 1) | x` in the simulation is a specific operation sequence. An interview question could ask you to derive that operation or use it directly. This `(num + 1) | x` trick finds the next number that has all bits of `x` by adding 1 and then restoring any dropped bits (OR with `x`). This kind of operation is useful in bit manipulation for constructing constrained sequences.
- Interleaving bit patterns: The optimized solution interleaves the bits of `n - 1` into the zeros of `x`. This is reminiscent of bit-interleaving problems (for example, merging two binary numbers by alternating their bits). In our case, one number provides the positions (the pattern of `x`) and another provides the content for the gaps (`n - 1`). Merging two bit patterns this way applies to other problems, such as encoding two numbers into one, creating Gray codes, or generating combinatorial subsets by bit masking.
- Binary combinatorics: Essentially, we treated the sequence of valid numbers as a binary counting problem in a reduced bit-space (only the bits where `x` is 0). This is related to combinatorics: generating combinations of "extra" bits. If a different problem asked for the k-th smallest number with certain bits fixed, a similar combinatorial-index approach could be used.
Understanding this technique helps in any scenario where certain bits must remain constant across a set, or where two sets of bits must be merged. Always consider the bitwise interpretation of constraints; it often leads to elegant solutions like this one.
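The `(prev + 1) | x` update mentioned above can be seen in action with a few lines (an illustrative sketch):

```python
x = 7  # binary 111 -- every valid number must keep these bits

# Generate the first few valid numbers: add 1, then restore
# any bits of x that the increment dropped.
seq = [x]
for _ in range(3):
    seq.append((seq[-1] + 1) | x)
print(seq)  # [7, 15, 23, 31]
```

Note how the very first step reproduces Example 2: the successor of 7 jumps straight to 15.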
Related Problems for Further Practice:
Practicing similar concepts will reinforce your understanding of bitwise operations and sequence construction:
- LeetCode 201: Bitwise AND of Numbers Range – asks for the AND of all numbers in a range [m, n]. It has a similar flavor: as the range grows, bits drop out. The solution finds the common prefix bits of m and n, which is conceptually related to keeping certain bits constant, much like we did here.
- LeetCode 390: Elimination Game – not obviously a bitwise problem, but the elimination pattern leads to a solution using bit manipulation (power-of-two reasoning). Good practice for thinking in binary while iterating through sequences.
- LeetCode 136 & 137: Single Number problems – these involve XOR properties (finding a unique number where others repeat). They are simpler but build comfort with bitwise operations.
- LeetCode 260: Single Number III – another XOR-based puzzle for finding two unique numbers in an array. It requires a bit-mask trick to separate the numbers into two groups.
- LeetCode 784: Letter Case Permutation – not a bitwise problem per se, but if you treat letter cases as bits (0/1 for lowercase/uppercase), generating all permutations is analogous to iterating over subsets of bit positions, similar in spirit to how we considered subsets of the free bits of `x`.
- Custom challenge: Try writing a function that generates the entire array given `n` and `x` for small cases. Verify that the AND is `x` and the list is increasing, then check how the last element relates to the formula. Connecting the sequence to the final result will solidify your understanding.
- Bit manipulation practice: Look for problems tagged "Bit Manipulation" on LeetCode. Many of them (counting bits, flipping bits, etc.) hone the skills needed for optimized solutions like this one.