55. Jump Game - Detailed Explanation
Problem Statement
You are given an array of non-negative integers nums. You start at the first index (index 0) of this array. Each element in the array represents the maximum jump length from that position. The goal is to determine whether you can reach the last index of the array by jumping according to the rules: from the current index you may jump forward by at most the number of steps indicated by the value at that index (you may jump fewer steps than the maximum, or exactly the maximum).
- Return true if it is possible to reach the last index, or false if it is not possible.
Example 1:
Input: nums = [2, 3, 1, 1, 4]
Output: true
Explanation: Jump 1 step from index 0 to 1, then 3 steps to the last index.
In more detail: Starting at index 0 (value 2), you can jump up to 2 steps. One possible path is to jump 1 step to index 1 (value 3), then jump 3 steps from index 1 to index 4 (the last index). Since you reached the last index, the output is true.
Example 2:
Input: nums = [3, 2, 1, 0, 4]
Output: false
Explanation: You will always stop at index 3 and cannot reach index 4.
In more detail: No matter how you jump, you will end up at index 3, which has the value 0. From index 3 you cannot move forward at all (maximum jump length is 0), making it impossible to reach index 4. Thus, the output is false.
Constraints:
- 1 <= nums.length <= 3 * 10^4 (up to 30,000 elements)
- 0 <= nums[i] <= 10^5 (each element is between 0 and 100,000)
These constraints mean that an efficient solution is needed. A brute force approach that tries every possible jump path would be infeasible for the upper limits of nums.length. We need a solution better than exponential time: typically a linear or near-linear time solution (e.g. O(n) or O(n log n)) is expected, given that n can be 30,000. Memory is not a big issue (we can use O(n) space if needed, but O(1) space is ideal).
Hints
Think about the problem step by step before diving into coding:
- Hint 1: What would a brute-force solution look like? Consider trying to jump from the start and exploring all possible jump lengths at each step. This is similar to exploring all paths in a tree or graph of moves. How could you implement this exploration? (This hint points toward a backtracking or recursive solution, but be mindful of its efficiency.)
- Hint 2: Can you optimize the brute-force approach? If you find yourself recalculating results for the same index repeatedly, consider storing those results. This idea leads toward dynamic programming or memoization. Perhaps you can mark whether each index is reachable as you progress, to avoid re-computing paths from those indices.
- Hint 3: Notice that we don't actually need to find the exact sequence of jumps; we just need to know whether it's possible to reach the end. Is it necessary to try every path? Maybe you can greedily keep track of the farthest index you can reach so far as you iterate through the array. If at any point you can't move further, you know the answer is false. This line of thinking hints at a greedy approach.
Try to solve with these hints in mind. Below, we will discuss each approach in detail, from brute force to the optimal greedy solution.
Approach 1: Brute Force (Backtracking)
Explanation:
A straightforward way to solve the problem is to explore all possible jump sequences from the start. Treat the array indices as nodes in a graph, where an index i has edges to i+1, i+2, ..., i+nums[i] (all the indices you could jump to from i). The brute force approach is essentially a depth-first search (DFS) or backtracking through this graph of jumps.
- Start at index 0.
- From the current index, try to jump 1 step, 2 steps, ..., up to nums[current] steps. Recursively attempt each jump and see if you can eventually reach the last index.
- If any sequence of jumps leads to the last index, return true. If all possibilities are exhausted without reaching the end, return false.
This approach will correctly find a solution if one exists, but it is extremely inefficient. In the worst case (e.g., an array where every element is large), the number of possible paths grows exponentially because from each index you can go to many others. The time complexity is roughly O(2^n) in the worst case, which is practically impossible to compute for large n (even for n = 20 it's huge, and n can be up to 30,000!). This is why a plain brute force will time out for large inputs. It's useful for understanding the problem, but not feasible for the actual constraints.
However, let's outline how the brute force works with a small example:
Suppose nums = [2, 0, 0].
- Start at index 0 (value 2). From here, you can jump either 1 step to index 1, or 2 steps to index 2.
- If you jump to index 1 (value 0), from index 1 you can only jump 0 steps (meaning you are stuck). That path fails to reach the end. Backtrack.
- If you jump to index 2, you've reached the last index. This path succeeds.
- Since one path succeeded, the answer is true.
For a slightly larger example like [2, 3, 1, 1, 4], a brute force would try paths like 0 -> 1 -> 4, 0 -> 1 -> 2 -> ..., etc., exploring many combinations. This is not efficient, but it will eventually find the successful path.
Brute Force Pseudocode:
We can implement this via recursion (DFS). Use a helper function canReachFrom(index) that tries all jumps from the given index.
- If index is at or beyond the last index, return true (we reached the end).
- For each jump length from 1 to nums[index]:
  - If canReachFrom(index + jump) returns true for any jump, then return true.
- If none of the jumps lead to a solution, return false (backtrack).
We also need to guard against infinite recursion or revisiting the same index multiple times. In the Jump Game problem as stated, you can only move forward and you never revisit an index in a single path (because you always go to a higher index), so cycles are not possible in this graph. However, to be safe and to optimize, one can use a set or boolean array to mark visited indices (so we don't repeat work from the same index). This becomes more relevant in the dynamic programming approach below.
Brute Force – Python Implementation: (Recursive DFS)
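Below is a minimal sketch of this recursive brute force (function names like can_jump_brute_force are illustrative, not from a particular library):

```python
def can_jump_brute_force(nums):
    """Try every possible jump sequence; exponential time, for illustration only."""
    n = len(nums)

    def can_reach_from(index):
        # Reaching (or passing) the last index means success.
        if index >= n - 1:
            return True
        # Try every jump length from 1 up to nums[index].
        for jump in range(1, nums[index] + 1):
            if can_reach_from(index + jump):
                return True
        # No jump from this index leads to the end: backtrack.
        return False

    return can_reach_from(0)


print(can_jump_brute_force([2, 3, 1, 1, 4]))  # True
print(can_jump_brute_force([3, 2, 1, 0, 4]))  # False
```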
Brute Force – Complexity Analysis:
- Time Complexity: Exponential, roughly O(2^n) in the worst case. The brute force tries every possible jump from each index, which leads to a combinatorial explosion in paths. Even though many paths overlap, without memoization the algorithm will re-explore them.
- Space Complexity: O(n) in the worst case due to recursion stack depth. If we also use a visited set to avoid repeats, that would be O(n) extra space.
For large n (like 30,000), this approach will not finish in any reasonable time. We need to optimize it using dynamic programming or greedy techniques.
Approach 2: Dynamic Programming (Memoization / Bottom-Up DP)
Explanation:
The brute force approach wastes time by exploring many redundant paths. We can optimize by using dynamic programming to avoid re-computation. The idea is to remember the results of subproblems (here, a subproblem is: can we reach the end starting from index i?). If we already know the answer for a certain starting index, we don't need to compute it again.
There are two common DP strategies for this problem:
- Top-Down with Memoization (DFS + memo): Use recursion (DFS) like the brute force, but cache the result (True/False) for each index once computed. If you try to solve from that index again, return the cached result immediately instead of recomputing. This avoids exploring the same index's possibilities more than once.
- Bottom-Up DP (Tabulation): Iteratively build up a boolean array reachable[] where reachable[i] indicates whether index i is reachable from the start. Start with reachable[0] = True (we begin at index 0). Then for each index i from 0 to n-1, if reachable[i] is True, mark all indices you can jump to from i as reachable (i.e., for all j in the range i+1 to i + nums[i], set reachable[j] = True). In the end, check reachable[n-1]: if it is True, the answer is True; otherwise False.
Both methods run in polynomial time. However, note that the bottom-up approach with nested loops can still degrade to O(n^2) time in the worst case (for example, nums = [25000, 25000, 25000, ...], or in general whenever elements are large, you may mark many positions many times). With n up to 30,000, an O(n^2) worst case is roughly 900 million operations, which is borderline (and likely too slow in Python). In lower-level languages like C++, an O(n^2) solution might pass at the upper limit, but in Python it will likely TLE on the worst case. Still, O(n^2) is a huge improvement over exponential time, and the average case is often better (not every element allows jumping 30,000 positions ahead).
Let's outline the top-down DP (memoization) approach:
- Use recursion (DFS) starting from index 0, similar to brute force, but maintain a memo (e.g., a dictionary or set) of indices that are already determined to be dead ends (cannot reach the end from there).
- If we call the function on an index already recorded in the memo as a dead end, we immediately return False without exploring again.
- If we reach the last index, return True. If an index has no possible jump that leads to a True outcome, mark it as a dead end (False) in the memo.
- Continue until we get a result.
Alternatively, the bottom-up approach (a sketch follows this list):
- Initialize a boolean array reachable of length n with all False, except reachable[0] = True.
- Iterate i from 0 to n-1: if reachable[i] is True, then mark reachable[i+1 ... i+nums[i]] as True (those indices can be reached via i).
- If at any point reachable[n-1] becomes True, you can break early and return True. If you finish the loop and reachable[n-1] is still False, return False.
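A minimal sketch of this bottom-up tabulation (the function name can_jump_bottom_up is illustrative):

```python
def can_jump_bottom_up(nums):
    """Bottom-up DP: reachable[i] is True if index i can be reached from index 0."""
    n = len(nums)
    reachable = [False] * n
    reachable[0] = True
    for i in range(n):
        if not reachable[i]:
            continue
        # Mark every index reachable from i (capped at the last index).
        farthest = min(i + nums[i], n - 1)
        for j in range(i + 1, farthest + 1):
            reachable[j] = True
        if reachable[n - 1]:   # early exit once the end is known to be reachable
            return True
    return reachable[n - 1]
```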
DP Approach – Python Implementation (Top-Down with Memo):
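One possible top-down version is sketched below (the helper name dfs and the memo dictionary are illustrative choices, not a fixed API):

```python
import sys

def can_jump_memo(nums):
    """Top-down DFS with memoization: memo[i] caches whether the end is reachable from i."""
    sys.setrecursionlimit(100_000)  # large inputs can exceed Python's default recursion depth
    n = len(nums)
    memo = {}

    def dfs(i):
        if i >= n - 1:
            return True
        if i in memo:
            return memo[i]
        # Trying the longest jump first tends to reach the end sooner.
        for jump in range(nums[i], 0, -1):
            if dfs(i + jump):
                memo[i] = True
                return True
        memo[i] = False
        return False

    return dfs(0)
```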
(This top-down DP is essentially DFS with memoization. We use a dictionary memo to store, for each index, whether the end is reachable from it. Equivalently, one could use a set to store just the bad (unreachable) indices.)
DP Approach – Complexity Analysis:
- Time Complexity: O(n^2) in the worst case. With memoization we avoid re-exploring indices, so each index is processed at most once in the top-down approach, but processing an index can still require iterating through up to nums[i] candidate jumps. In a worst-case configuration such as nums = [n-1, n-2, ..., 3, 2, 1, 0], the first index might try about n jumps, the second about n-1, and so on, which sums to roughly n^2/2 attempts. (The intended optimal solution, described next, runs in O(n).)
- Space Complexity: O(n). The memo or reachable array takes O(n) space, and the recursion stack in the top-down approach can go O(n) deep in the worst case.
The DP solution is an improvement over brute force (avoids redundant work), but we can still do better. We can achieve a linear time solution by adopting a greedy strategy.
Approach 3: Greedy Algorithm (Optimal Solution)
Explanation:
The greedy approach is the most efficient way to solve Jump Game. The key insight is that we don't need to explore all jump options at each step—we only need to keep track of the furthest position we can reach as we scan through the array.
Think of it like this: as you move through the array, maintain a variable maxReach, which represents the furthest index you can reach so far. Initially, maxReach = 0 (at the start, you can only reach index 0). Then:
- Iterate through the array indices i from 0 to n-1. At each index, first check whether it is reachable: if i is greater than maxReach, we have come to an index we cannot reach (we've hit a "gap"), so we should stop and return false. If i <= maxReach, the index is reachable.
- Once we are at index i, we can potentially extend our reach from here, so update maxReach = max(maxReach, i + nums[i]). From index i we could reach as far as i + nums[i], so the furthest reachable index gets updated.
- Continue this process. If at any point maxReach becomes >= the last index, we know we can reach the end and can return true early. If the loop finishes, check whether we ended up reaching at least the last index: if yes, return true; if not, return false.
This greedy approach only requires a single pass through the array, making it O(n) time, and O(1) extra space. It works because if a position is reachable, we greedily assume we will get there with the minimum effort, and we keep extending the range of reach. We don't need to worry about exactly how we get to each reachable index, only that it is reachable.
Another way to think about the greedy solution (alternative perspective) is from the end backward: Start with the last index as a "goal". Walk backwards and find the first index that can reach this goal. Then make that index the new goal. Continue moving backwards greedily. If you can move your "goal post" all the way back to index 0, then index 0 can reach the last index. This backward method is equivalent in complexity and result to the forward method, just a different way to implement greedy. We'll demonstrate the forward method here, as it's more intuitive for many.
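A minimal sketch of that backward ("goal post") variant, for comparison:

```python
def can_jump_backward(nums):
    """Greedy from the end: move the goal leftward to any index that can reach it."""
    goal = len(nums) - 1
    for i in range(len(nums) - 2, -1, -1):
        if i + nums[i] >= goal:   # index i can reach the current goal
            goal = i
    return goal == 0
```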
Forward Greedy Example:
For nums = [2, 3, 1, 1, 4]:
- Start with maxReach = 0 (at index 0, you can reach index 0).
- i = 0: value = 2. This index is reachable (0 <= 0). Update maxReach = max(0, 0+2) = 2. Now we can reach up to index 2.
- i = 1: value = 3. Reachable (1 <= 2). Update maxReach = max(2, 1+3) = 4. Now we can reach up to index 4, which is the last index.
- Since maxReach is now 4, the last index, we already know the answer is true (we could return true early here). We continue the loop for completeness:
- i = 2: value = 1. Reachable (2 <= 4). maxReach = max(4, 2+1) = 4 (no change).
- i = 3: value = 1. Reachable (3 <= 4). maxReach = max(4, 3+1) = 4 (no change).
- i = 4: value = 4. Reachable (4 <= 4). Update maxReach = max(4, 4+4) = 8, beyond the last index. End of loop.
- We were able to reach the last index (and even beyond), so return true.
For nums = [3, 2, 1, 0, 4]:
- Start with maxReach = 0.
- i = 0: value = 3. Reachable (0 <= 0). Update maxReach = max(0, 0+3) = 3. (The reachable range is now [0..3].)
- i = 1: value = 2. Reachable (1 <= 3). maxReach = max(3, 1+2) = 3 (stays 3).
- i = 2: value = 1. Reachable (2 <= 3). maxReach = max(3, 2+1) = 3 (stays 3).
- i = 3: value = 0. Reachable (3 <= 3). maxReach = max(3, 3+0) = 3 (stays 3; from index 3 you can't extend further).
- i = 4: now check 4 <= maxReach. But maxReach is 3, so 4 > 3. This means index 4 is not reachable given all the jumps considered so far. We break out and return false. Indeed, in this array you get stuck at index 3, which has jump length 0, so index 4 is unreachable.
This method never re-examines an index more than once, and it doesn't branch out into multiple paths. It simply keeps track of the furthest reach as it goes.
Greedy – Python Implementation:
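A sketch of the forward greedy described above (function and variable names are illustrative; max_reach corresponds to maxReach in the explanation):

```python
def can_jump(nums):
    """Single pass: track the farthest index reachable so far."""
    max_reach = 0
    last = len(nums) - 1
    for i, step in enumerate(nums):
        if i > max_reach:       # hit a gap we cannot cross
            return False
        max_reach = max(max_reach, i + step)
        if max_reach >= last:   # the end is already within reach
            return True
    return True


print(can_jump([2, 3, 1, 1, 4]))  # True
print(can_jump([3, 2, 1, 0, 4]))  # False
```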
This greedy solution runs in linear time. The loop goes through each index at most once, and maxReach gets updated in constant time per step.
Greedy – Complexity Analysis:
- Time Complexity: O(n) – we do a single pass through the array, constantly updating the reach. Each index is processed once.
- Space Complexity: O(1) – we only use a few integer variables (maxReach and loop counters); no additional arrays proportional to n.
This is optimal for this problem: any algorithm must read the array at least once, so linear time is the best achievable, and approaches that are substantially slower (like quadratic DP) may fail for the largest inputs.
Step-by-Step Walkthrough of the Greedy Solution
Let's walk through the greedy approach step by step with an example, using a table to illustrate how we update the reachability. We will use nums = [2, 3, 1, 1, 4], which we know is reachable (output True). We will track the index, the value at that index, maxReach before processing that index, and maxReach after processing it, and note whether the current index is within the reachable range at that step.
Index (i) | nums[i] | maxReach (before) | maxReach (after) | Status/Action |
---|---|---|---|---|
0 | 2 | 0 | 2 (max(0, 0+2)) | 0 ≤ 0 (reachable). Update maxReach to 2. |
1 | 3 | 2 | 4 (max(2, 1+3)) | 1 ≤ 2 (reachable). Update maxReach to 4 (can reach end). |
2 | 1 | 4 | 4 (max(4, 2+1)) | 2 ≤ 4 (reachable). maxReach remains 4. |
3 | 1 | 4 | 4 (max(4, 3+1)) | 3 ≤ 4 (reachable). maxReach remains 4. |
4 | 4 | 4 | 8 (max(4, 4+4)) | 4 ≤ 4 (reachable). Update maxReach to 8 (beyond end). |
Explanation of table:
- At i = 0, before processing, maxReach = 0. Index 0 is reachable (0 ≤ 0). The value at index 0 is 2, so we can jump as far as index 2; we update maxReach to 2.
- At i = 1, before processing, maxReach = 2. Index 1 is reachable (1 ≤ 2). The value is 3, so from index 1 we could reach index 4; we update maxReach to 4. (Now we know we can reach the last index, index 4.)
- At i = 2, maxReach = 4. Index 2 is reachable (2 ≤ 4). The value is 1, so from index 2 we can reach at most index 3; maxReach stays 4 because we could already reach farther.
- At i = 3, maxReach = 4. Index 3 is reachable (3 ≤ 4). The value is 1, so from index 3 we reach index 4; maxReach stays 4.
- At i = 4, maxReach = 4. Index 4 is exactly equal to maxReach (reachable), and it is the last index. The value is 4, which would allow jumps beyond the array end; we update maxReach to 8 (not that it matters now, since we've reached the end).
After processing all indices (in fact, once maxReach hit 4 at i = 1, we could have concluded), we have successfully reached the last index, so the result is True.
Now consider the array nums = [3, 2, 1, 0, 4] (the second example) and how the greedy algorithm catches the failure:
Index (i) | nums[i] | maxReach (before) | maxReach (after) | Status/Action |
---|---|---|---|---|
0 | 3 | 0 | 3 (max(0, 0+3)) | 0 ≤ 0 (reachable). Update maxReach to 3. |
1 | 2 | 3 | 3 (max(3, 1+2)) | 1 ≤ 3 (reachable). maxReach remains 3. |
2 | 1 | 3 | 3 (max(3, 2+1)) | 2 ≤ 3 (reachable). maxReach remains 3. |
3 | 0 | 3 | 3 (max(3, 3+0)) | 3 ≤ 3 (reachable). maxReach remains 3 (can't extend). |
4 | 4 | 3 | N/A | 4 > 3 (unreachable). Stop – cannot proceed further. |
By the time we reach index 4, we find that i = 4 is greater than the current maxReach = 3. That means index 4 is not reachable. The loop breaks and returns False at that point, correctly identifying that the last index can't be reached.
The greedy approach efficiently identifies reachable ranges and stops as soon as an index is found that is out of reach. If the loop completes, it means every index up to the last was in range (hence last index reachable). If it breaks early, it found a gap.
Python Implementation (Optimal Solution)
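The same greedy logic, packaged as a sketch in the class-based form LeetCode expects for this problem:

```python
from typing import List

class Solution:
    def canJump(self, nums: List[int]) -> bool:
        max_reach = 0
        last = len(nums) - 1
        for i, step in enumerate(nums):
            if i > max_reach:       # index i cannot be reached
                return False
            max_reach = max(max_reach, i + step)
            if max_reach >= last:   # the last index is within reach
                return True
        return True
```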
Java Implementation (Optimal Solution)
Complexity Analysis of Approaches
Let's summarize the time and space complexity for each approach discussed:
Approach | Time Complexity | Space Complexity |
---|---|---|
Brute Force (DFS) | O(2^n) (exponential) – extremely slow | O(n) recursion stack (plus maybe O(n) visited set) |
Dynamic Programming (DP) | O(n^2) in worst case (quadratic) | O(n) for memo or DP table (and recursion stack if top-down) |
Greedy (Optimal) | O(n) (linear) | O(1) (constant extra space) |
- The brute force explores every possible combination of jumps, leading to an exponential blow-up in computations.
- The DP approach avoids redundant paths and is much faster than brute force, but in the worst scenario it still performs a double nested loop (or equivalent recursive calls), leading to quadratic time. For large inputs (n ~ 30k), O(n^2) ~ 900 million operations, which is borderline. In practice, a well-implemented DP might pass in C++/Java due to fewer constant factors, but in Python it often times out for the worst-case inputs (as seen by many users getting TLE when using DP).
- The greedy approach is optimal, scanning through the list once. It is the recommended solution for this problem, with linear time and constant space. This is crucial to pass the largest test cases within time limits.
Common Mistakes and Pitfalls
Beginners often run into a few pitfalls with this problem:
- Using Brute Force on Large Input: Writing a naive recursive solution without memoization can lead to timeouts. As discussed, it's exponential. Some might implement a DFS or BFS without marking visited indices, causing an explosion of repeated work. Always consider the input constraints – here n can be 30,000, which is far too large for brute force.
- Incorrect DP Memoization: If using recursion with a memo, a common mistake is to memoize in a way that doesn't actually cut off repeated exploration, or not to memoize at all. For example, the code might mark visited positions but still explore all possible jumps from each of them; proper memoization should cut off all further exploration from a given index once its result is determined. Another mistake is using Python recursion without raising the recursion limit or considering Python's recursion depth (which can cause a "recursion depth exceeded" error on very large arrays). In those cases, an iterative approach or BFS might be safer.
- Off-by-One Errors in Loops: When writing loops for the DP or greedy solutions, it's easy to make an off-by-one mistake – for instance, forgetting to include the last index in the reachable range, or iterating one step too far. In the greedy solution, be careful with the loop condition, the check if (i > maxReach), and the update of maxReach. If you use a slightly different loop structure (like a while loop), make sure it handles the last index correctly.
- Assuming You Must Land Exactly on the Last Index: Some think you need to land exactly on index n-1. In fact, landing beyond the last index is also fine – it implies you definitely passed the last index. The greedy solution typically checks if maxReach >= n-1 to account for this. So don't mistakenly require maxReach == n-1 strictly; overshooting is not an issue.
- Not Handling a Single-Element Array: If nums has length 1, you are trivially already at the last index, and the answer should be true. Both the DP and greedy logic naturally handle this (greedy: maxReach starts at 0, which is already >= the last index 0, so it returns true; DP: reachable[0] is true and that is the last index). But a loop with a wrong edge condition might give an incorrect result for a single-element array. Always consider the n = 1 scenario (it should return true, because you're already at the end without any jumps).
- Not Considering Zero Values Properly: Zeros in the array are what can block you from reaching further. A common oversight is not handling the case when you land on a 0. The greedy solution inherently handles it (if you land on a 0 and can't go further, maxReach stops increasing and eventually i surpasses maxReach). But if you write a custom loop, ensure the logic stops correctly when a 0 is encountered that can't be bypassed. For example, a naive strategy like "always take the maximum jump from wherever you stand" can land on a 0 and get stuck even when another path exists, so it is not a correct reachability check on its own. The standard greedy approach above handles zeros gracefully via the if (i > maxReach) check.
- Confusing This Problem with Its Variations: Jump Game I only asks whether you can reach the end, not the minimum number of jumps. Some accidentally start solving the minimum-jumps problem (Jump Game II) or vice versa. Keep the requirements clear: here we just need a boolean reachable / not reachable. We don't care how, or in how many jumps – just whether it's possible at all. This allows a simpler solution than the minimum-jump variant.
Edge Cases to Consider
Make sure your solution accounts for these edge cases:
- Single Element: If nums has length 1, the answer should always be true (you're already at the last index). E.g. [0] should return true. Any single-element array is trivially true because there's nowhere else to go; you are already at the end.
- First Element Is 0: If nums[0] == 0 and the array length is greater than 1, you cannot move anywhere from the start, so the result should be false. For example, [0, 2, 3] -> false (stuck at the start). This is a quick check: if n > 1 and nums[0] == 0, the answer is immediately false. (Our greedy algorithm handles this: initially i = 0 <= maxReach = 0, maxReach stays 0, and at i = 1 we get 1 > 0, which triggers false.)
- Array of All Zeros: e.g. [0, 0, 0, ..., 0] of length > 1 is false (except the trivial length-1 case, which as noted is true). Any path stops immediately because the first element is 0.
- Large First Jump over Isolated Zeros: e.g. [5, 0, 0, 0, 0, 0]. Here nums[0] = 5, which is enough to jump directly to the last index (index 5) even though indices 1-4 are zero, so the result is true. Greedy logic: maxReach becomes 5 at i = 0, so we succeed. A DP solution must still mark index 5 as reachable from index 0; make sure any overshooting logic (like using min(n-1, i + nums[i])) is handled correctly.
- A Zero Near the End That Blocks the Last Index: e.g. [1, 1, 1, 0, 1]. You can only jump 1 step at a time, so you inevitably land on index 3 (value 0) and get stuck; index 4 is unreachable, and the answer is false. The greedy approach catches this: maxReach is 3 after index 2, index 3 doesn't extend it, and index 4 is then out of reach. By contrast, arrays like [2, 0, 0] (jump over the zero) or [1, 1, 1, 1, 0] (land exactly on the last index, which is fine even though its value is 0) are true. Be sure to test cases where the last section of the array contains zeros that you may or may not be able to jump over.
- Jumps That Far Overshoot the Array Length: e.g. [10, ...] where the first element is huge. Greedy and DP should both handle this by treating any jump that goes beyond the last index as reaching the last index. Just ensure there are no out-of-bounds errors when handling i + nums[i] (hence taking min(n-1, i + nums[i]) in DP loops).
By considering these cases, you can be confident your solution handles not just the typical scenarios but also the corner cases.
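To sanity-check these edge cases, one can run a few quick assertions against the greedy can_jump function sketched earlier (the specific test values here are just examples):

```python
assert can_jump([0])                   # single element: already at the end
assert can_jump([7])                   # any single-element array is true
assert not can_jump([0, 2, 3])         # first element 0, length > 1
assert not can_jump([0, 0, 0])         # all zeros, length > 1
assert can_jump([5, 0, 0, 0, 0, 0])    # big first jump clears the zeros
assert not can_jump([1, 1, 1, 0, 1])   # stuck on the zero at index 3
assert can_jump([2, 0, 0])             # can jump over the zero
assert can_jump([10, 0])               # overshooting the end is fine
print("All edge cases pass.")
```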
Alternative Variations of the Problem
The Jump Game problem has several interesting variations and follow-ups. Being familiar with the reachability concept here will help in solving them:
- Variation 1: Minimum Jumps to Reach the End (Jump Game II, LeetCode 45). Instead of just asking if you can reach the last index, this variation asks for the minimum number of jumps required to reach it (assuming it's reachable). For example, given nums = [2,3,1,1,4], the answer is 2 (0 -> 1 -> 4), because it takes two jumps. This turns the problem into an optimization problem. A greedy solution solves it in O(n) as well, but the strategy is a bit different: you greedily find the furthest you can get within the current jump, then take a jump, effectively a BFS over array "levels" (see the sketch after this list). A dynamic programming solution for this variation would be O(n^2) and likely too slow for large input, whereas the greedy/BFS solution works in O(n).
- Variation 2: Bidirectional Jumps (Jump Game III, LeetCode 1306). You are given an array of non-negative integers and a starting index (not necessarily 0). From index i you can jump left or right by exactly the value at that index, and the goal is to check whether you can reach any index whose value is 0. This becomes a graph reachability problem where each index has up to two neighbors (i + nums[i] and i - nums[i]). A typical solution uses BFS or DFS with a visited set to avoid infinite loops; the complexity is O(n), since each index is visited at most once. The key differences are the backward jumps and the target being a value of 0 rather than the last index.
- Variation 3: Jumps with Value-Based Teleportation (Jump Game IV, LeetCode 1345). This adds another twist: from an index i, you can jump to i+1, i-1, or to any other index j with the same value as nums[i]. The task is to find the minimum number of jumps to reach the last index. Because jumping to any index with the same value acts like a teleport, an efficient solution is required (BFS with careful marking of visited values to avoid redundant jumps). This is a harder variation that needs O(n) or O(n log n) time using BFS and a hash map grouping indices by value.
- Variation 4: Maximum Score Path (Jump Game VI, LeetCode 1696). Each element in the array has a score (which can be positive or negative), and you can jump forward up to a fixed range k from any index. The goal is to reach the last index with the maximum possible sum of scores along the path. Instead of plain reachability, this is an optimization problem (maximize the sum) with a constraint on jump length. It can be solved with dynamic programming plus a deque (monotonic queue) in O(n) time; it's essentially a sliding-window optimization built on the jump game concept.
- Variation 5: String Jump Game (Jump Game VII, LeetCode 1871). You are given a binary string (containing '0's and '1's) and two integers minJump and maxJump. You start at index 0 and can jump to any index between i + minJump and i + maxJump (inclusive), but you can only land on indices that contain '0'. The question is whether you can reach the end of the string. This variation requires a careful two-pointer or sliding-window technique to avoid O(n^2) checking, and can be solved in O(n). It's conceptually similar to Jump Game I but with a constraint on jump length and forbidden landing spots (the '1's).
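As mentioned under Variation 1, here is a minimal sketch of the O(n) greedy solution to Jump Game II (it assumes the last index is reachable, as that problem guarantees; the function name min_jumps is illustrative):

```python
def min_jumps(nums):
    """Jump Game II: minimum number of jumps to reach the last index."""
    jumps = 0
    current_end = 0   # farthest index reachable with `jumps` jumps
    farthest = 0      # farthest index reachable with one additional jump
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:   # must take another jump to move past current_end
            jumps += 1
            current_end = farthest
    return jumps


print(min_jumps([2, 3, 1, 1, 4]))  # 2
```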
As you can see, the core idea of jumping through an array can be adapted in many ways – asking for reachability, minimum steps, or other conditions. Each variation might require a tweak in strategy (sometimes greedy, sometimes BFS or DP) but understanding the basic Jump Game is a great foundation.
Related Problems for Further Practice
If you enjoyed Jump Game or want to practice similar problems (array reachability and greedy algorithms), you can try the following problems:
- Jump Game II (LeetCode 45) – Minimum number of jumps to reach the last index. This is the direct follow-up asking for the fewest jumps. (Medium difficulty)
- Jump Game III (LeetCode 1306) – Jump in both directions to find a zero: given a start index, can you reach any index with value 0 by jumping left/right? (Medium difficulty)
- Jump Game IV (LeetCode 1345) – Minimum jumps with value-based teleportation: you can also jump to indices holding the same value; find the minimum jumps to reach the end. (Hard difficulty)
- Jump Game VI (LeetCode 1696) – Maximum score from start to end with jumps of at most k. (Medium/Hard, uses a deque optimization)
- Jump Game VII (LeetCode 1871) – Jump within a [minJump, maxJump] range on a binary string. (Medium difficulty)
- Minimum Number of Jumps to Reach End of Array (GeeksforGeeks) – Essentially Jump Game II; a classic problem often found on GfG.
- Reachability Problems in Graphs – Not a single specific problem, but thinking of the array indices as nodes and jumps as edges is a useful mindset. Problems like Word Ladder (transform one word into another step by step) or minimum moves in a grid have a similar flavor of states and transitions, where BFS/DP/greedy can be used to find reachability or shortest paths.