1235. Maximum Profit in Job Scheduling - Detailed Explanation


Problem Statement

You are given n jobs, where each job i starts at time startTime[i], ends at time endTime[i], and earns a profit of profit[i]. You want to schedule some of these jobs to maximize your total profit without overlapping jobs. Two jobs conflict if their time intervals overlap, so no two selected jobs can be active at the same time. However, if a job ends at time X, you can start another job that begins exactly at time X.

Goal: Find the maximum profit you can earn by selecting a subset of the given jobs such that no two selected jobs overlap in time.

Constraints:

  • 1 <= n <= 5 * 10^4 (up to 50,000 jobs)
  • 1 <= startTime[i] < endTime[i] <= 10^9 (jobs have positive duration)
  • 1 <= profit[i] <= 10^4 (profits are positive)

Examples:

  1. Example 1:
    Input: startTime = [1,2,3,3]
    endTime = [3,4,5,6]
    profit = [50,10,40,70]
    Output: 120

    Explanation: Choose job 1 (time 1–3, profit 50) and job 4 (time 3–6, profit 70). These do not overlap (job 1 ends at 3, job 4 starts at 3), yielding total profit 50 + 70 = 120.

  2. Example 2:
    Input: startTime = [1,2,3,4,6]
    endTime = [3,5,10,6,9]
    profit = [20,20,100,70,60]

    Output: 150
    Explanation: One optimal selection is job 1 (1–3, $20), job 4 (4–6, $70), and job 5 (6–9, $60). These jobs don't overlap and give profit 20 + 70 + 60 = 150.

  3. Example 3:
    Input: startTime = [1,1,1]
    endTime = [2,3,4]
    profit = [5,6,4]
    Output: 6

    Explanation: All jobs start at time 1, so you can only take one of them. The maximum profit comes from choosing the job with profit 6.

These examples illustrate typical scenarios: jobs that can chain without conflict (Example 1), jobs with complex overlaps (Example 2), and jobs that all conflict (Example 3).

Hints

  • Think Brute-Force First: Consider exploring every possible subset of non-overlapping jobs to find the maximum profit. How might you systematically try including or excluding each job? (This will lead to a recursive solution, albeit a very slow one.)

  • Sort the Jobs: It can be helpful to sort jobs by their time (start or end). Sorting by start time makes it easier to decide which job to consider next after taking one job. Sorting by end time makes it easier to know which jobs finished before a given job starts.

  • Choose or Skip: For each job, you have two choices – take it (add its profit and skip any overlapping jobs) or skip it. This suggests a recursive relation or dynamic programming state.

  • Find the Next Compatible Job Efficiently: If you take a job, you need to quickly find the next job that starts after this job ends. A binary search can find this next job index in logarithmic time, instead of scanning all jobs.

  • Use Dynamic Programming: Overlapping subproblems exist – for example, the optimal profit from job i onward may be reused in multiple decisions. Consider using memoization (top-down) or a DP table (bottom-up) to store and reuse results, turning an exponential brute-force solution into a polynomial solution.

Approach 1: Brute Force (Recursive Backtracking)

A straightforward approach is to try all combinations of non-overlapping jobs using recursion (backtracking). For each job, decide whether to take it or skip it, and recursively compute the profit for each choice. The recursion explores all possibilities:

  1. Sort by Start Time: To make it easier to find the next job after taking one, sort jobs by start time. This way, when you take a job at index i, you can look for the next job with start time >= endTime[i].

  2. Recursive Decision: Define a recursive function dfs(i) that returns the maximum profit achievable starting from the i-th job (in the sorted order):

    • If i is beyond the last job, return 0 (no profit).

    • Otherwise, you have two options:

      • Skip job i: move to dfs(i+1) (ignore the current job).
      • Take job i: add profit[i] and jump to the next compatible job. To find the next compatible job index j, find the first job whose start time is >= endTime[i] (this can be done by a loop or binary search). Then the profit if we take job i is profit[i] + dfs(j).
    • Return the maximum of skipping or taking: dfs(i) = max(dfs(i+1), profit[i] + dfs(j)).

  3. Compute Answer: The answer will be dfs(0), the maximum profit starting from the first job.
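The recursion described above can be sketched as follows (a minimal illustration; the helper name `max_profit_brute_force` is ours, not from the original article):

```python
def max_profit_brute_force(startTime, endTime, profit):
    # Sort jobs by start time so "the next compatible job" is easy to find
    jobs = sorted(zip(startTime, endTime, profit))
    n = len(jobs)

    def dfs(i):
        if i >= n:                        # past the last job: no more profit
            return 0
        start_i, end_i, profit_i = jobs[i]
        # Option 1: skip job i
        best = dfs(i + 1)
        # Option 2: take job i, then jump to the first job starting at/after end_i
        j = i + 1
        while j < n and jobs[j][0] < end_i:
            j += 1
        best = max(best, profit_i + dfs(j))
        return best

    return dfs(0)
```

Without memoization this explores an exponential number of branches, which is exactly the inefficiency the DP approach below removes.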

Example (Brute-force thinking): Using Example 1 (start=[1,2,3,3], end=[3,4,5,6], profit=[50,10,40,70]):

  • Start at job 0 (time 1–3, profit 50). You can take it or skip it.

    • If you take job 0 (profit=50), the next job that starts at or after time 3 is job 2 (starts at 3). From job 2 onward, the max additional profit is max(dfs(2), profit[2] + dfs(next after 2)), and so on. This branch will find a best combination (in this case, it will end up taking job 3 with profit 70, reaching total 120).
    • If you skip job 0, you move to job 1 (time 2–4, profit 10) and consider taking or skipping it. This branch ends up with a lesser profit: the best it can do is 70, by taking job 3 alone (jobs 2 and 3 overlap, so they cannot be combined).
  • The recursion explores both branches and finds the best outcome (120).

Why Brute Force is Impractical: In the worst case, you would explore every subset of jobs. With n jobs, there are 2^n possible subsets, which is astronomically large even for moderate n (for n=50,000, 2^n is impossible to enumerate). Even with pruning, the brute-force approach is not feasible for the upper constraints. It serves as a conceptual step, but we need to optimize it.

Approach 2: Dynamic Programming with Binary Search

To handle up to 50,000 jobs efficiently, we use Dynamic Programming (DP) with binary search to prune the recursion. This problem is a classic Weighted Interval Scheduling problem, which DP can solve optimally in O(n log n) time.

Key idea: Sort jobs by finish time, and use a DP table where each state represents the best profit up to a certain point. We use binary search to quickly find the next non-conflicting job for any given job.

Steps:

  1. Sort Jobs by End Time: Create a list of jobs as tuples (end, start, profit) and sort it by end time ascending. Sorting by end time will help in building the DP in chronological order (so that when we consider a job, all jobs that end before it are already processed). For example, after sorting Example 2 by end time, the jobs order becomes:
    (1–3,$20), (2–5,$20), (4–6,$70), (6–9,$60), (3–10,$100) (ordered by end times 3,5,6,9,10).

  2. DP Definition: Let dp[i] be the maximum profit achievable considering the first i jobs (in sorted-by-end order). We will build this up iteratively:

    • dp[0] = 0 (no jobs, no profit).
    • Eventually, dp[n] (where n is total jobs) will be our answer – the max profit using all available jobs optimally.
  3. Transition: To compute dp[i] (for the i-th job in sorted order, 1-indexed for convenience):

    • Let the i-th job in sorted order be job_i with start = s_i, end = e_i, and profit = p_i.

    • First, find the index of the last job that does not conflict with job_i. In other words, find the job with the largest end time that is <= s_i (the start of current job). Because jobs are sorted by end time, all candidates lie in the range before i. We can binary search this sorted list of end times to find that last non-conflicting job index j. (If none end before s_i, let j = 0 which represents "no job taken before".)

    • Now we have two choices for job_i:

      • Take job_i: Then we add p_i (its profit) to the profit of all jobs before it that we can still take, which by definition is dp[j]. So profit if we take it = p_i + dp[j].
      • Skip job_i: Then the profit is dp[i-1] (we just carry forward the best profit up to the previous job without taking this one).
    • We set dp[i] = max(dp[i-1], p_i + dp[j]).

  4. Binary Search for j: We need to efficiently find j, the index of the rightmost job that finishes on or before s_i. Since the jobs are sorted by end time, we can maintain an array of end times and use binary search (bisect in Python, or manual binary search in Java) to get this index in O(log n) time for each job. This avoids an inner loop over all previous jobs (which would be O(n) per job and lead to O(n^2) time overall, too slow for 50k jobs).

  5. Result: After filling the DP table for all jobs, dp[n] will hold the maximum profit for all jobs considered.
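Step 4's binary search can be illustrated with Python's bisect module (the end-time values here are the first four sorted end times from Example 2):

```python
from bisect import bisect_right

# End times of jobs already processed, sorted ascending
ends = [3, 5, 6, 9]

# bisect_right returns the number of end times <= s, which is exactly the
# dp index j: a job ending precisely at s counts as compatible.
assert bisect_right(ends, 6) == 3   # jobs ending at 3, 5, 6 are all compatible
assert bisect_right(ends, 4) == 1   # only the job ending at 3 is compatible
assert bisect_right(ends, 2) == 0   # nothing ends by time 2
```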

Walkthrough (Example 2): Let's apply this to Example 2 step-by-step:

  • Jobs sorted by end time:
    Job1: (start=1, end=3, profit=20)
    Job2: (start=2, end=5, profit=20)
    Job3: (start=4, end=6, profit=70)
    Job4: (start=6, end=9, profit=60)
    Job5: (start=3, end=10, profit=100)
    (We renumbered jobs in sorted order for clarity.)

  • dp[0] = 0 (no jobs).

  • Job1 (end=3): No previous job ends <= start1 (start1=1), so j=0.

    • Take profit = 20 + dp[0] = 20.
    • Skip profit = dp[0] = 0.
    • dp[1] = max(20, 0) = 20. (Best with first job is to take it.)
  • Job2 (end=5, start=2): Previous job that ends <=2? None (job1 ends at 3 which is >2), so j=0.

    • Take = 20 + dp[0] = 20.
    • Skip = dp[1] = 20.
    • dp[2] = max(20, 20) = 20. (Either take job2 or stick with job1, both give 20; we have 20 either way.)
  • Job3 (end=6, start=4): Find last job ending <=4. Job1 ends at 3 (<=4), job2 ends at 5 (>4), so the last non-conflicting is Job1 (index 1). Thus j = 1.

    • Take = 70 + dp[1] = 70 + 20 = 90.
    • Skip = dp[2] = 20.
    • dp[3] = max(90, 20) = 90. (Better to take job3 along with job1's profit, giving 90.)
  • Job4 (end=9, start=6): Last job ending <=6. Job3 ends at 6 (equal to start, non-conflicting by problem rule), so Job3 (index 3) is valid. j = 3.

    • Take = 60 + dp[3] = 60 + 90 = 150.
    • Skip = dp[3] = 90.
    • dp[4] = max(150, 90) = 150. (Take job4 in addition to jobs 1 and 3 yields 150 total.)
  • Job5 (end=10, start=3): Last job ending <=3. Job1 ends at 3 (<=3), so j = 1.

    • Take = 100 + dp[1] = 100 + 20 = 120.
    • Skip = dp[4] = 150.
    • dp[5] = max(120, 150) = 150. (Skipping job5 is better, as the combination of jobs 1,3,4 is still optimal.)

Finally, dp[5] = 150, which matches the expected answer. The DP table built up the solution in a straightforward manner. We found that taking jobs 1, 3, and 4 (with total profit 150) is optimal, and indeed those correspond to the original job indices [1,4,5] we identified in the example explanation.

Why This Works: By sorting by end times, when evaluating a job, we have already computed the best solutions for all earlier finishing jobs. The binary search efficiently finds the best compatible sub-solution to add to the current job’s profit. This method avoids redundant recomputation by storing solutions to subproblems (here, "optimal profit up to a given time") – a classic dynamic programming strategy.

Complexity: Sorting the jobs takes O(n log n). Building the DP table takes O(n) iterations, and each iteration does a O(log n) binary search. Overall time complexity is O(n log n), which is efficient for n = 50k. The space complexity is O(n) for storing the jobs and DP table. This approach is feasible within the given constraints.

Code Implementation (Python)

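The code listing is missing from this copy of the article; the following is a reconstruction from the steps above (a sketch following the LeetCode method signature `jobScheduling`, not the author's original listing):

```python
from bisect import bisect_right

def jobScheduling(startTime, endTime, profit):
    # Sort jobs by end time: tuples of (end, start, profit)
    jobs = sorted(zip(endTime, startTime, profit))
    ends = [e for e, _, _ in jobs]          # sorted end times for binary search
    n = len(jobs)

    # dp[i] = max profit using the first i jobs (in sorted-by-end order)
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        end_i, start_i, profit_i = jobs[i - 1]
        # j = number of jobs among the first i-1 that end at or before start_i;
        # bisect_right treats end == start as compatible, as the problem requires
        j = bisect_right(ends, start_i, 0, i - 1)
        dp[i] = max(dp[i - 1],              # skip job i
                    profit_i + dp[j])       # take job i
    return dp[n]
```

For instance, `jobScheduling([1,2,3,3], [3,4,5,6], [50,10,40,70])` returns 120, matching Example 1.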

Explanation: We sort jobs by end time and use a DP list where dp[i] represents the best profit using the first i jobs. For each job i, we use bisect to find j, the index of the last job (among those already considered) that doesn't overlap with job i. We then take or skip job i, whichever yields more profit.

Code Implementation (Java)

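As with the Python listing, the Java code is missing from this copy; here is one reconstruction consistent with the notes below (a sketch using the LeetCode `Solution` class convention):

```java
import java.util.Arrays;

class Solution {
    public int jobScheduling(int[] startTime, int[] endTime, int[] profit) {
        int n = startTime.length;
        int[][] jobs = new int[n][3];               // {end, start, profit}
        for (int i = 0; i < n; i++) {
            jobs[i][0] = endTime[i];
            jobs[i][1] = startTime[i];
            jobs[i][2] = profit[i];
        }
        Arrays.sort(jobs, (a, b) -> Integer.compare(a[0], b[0])); // sort by end time

        int[] ends = new int[n];                    // sorted end times for binary search
        for (int i = 0; i < n; i++) ends[i] = jobs[i][0];

        int[] dp = new int[n + 1];                  // dp[i] = best profit using first i jobs
        for (int i = 1; i <= n; i++) {
            int start = jobs[i - 1][1], p = jobs[i - 1][2];
            // Find j = number of jobs among the first i-1 that end <= start.
            int idx = Arrays.binarySearch(ends, 0, i - 1, start);
            int j;
            if (idx < 0) {
                j = -idx - 1;                       // insertion point = count of ends < start
            } else {
                // Landed on a job ending exactly at start; step past duplicates
                while (idx + 1 < i - 1 && ends[idx + 1] == start) idx++;
                j = idx + 1;                        // jobs ending exactly at start are allowed
            }
            dp[i] = Math.max(dp[i - 1], p + dp[j]);
        }
        return dp[n];
    }
}
```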

Notes on the Java implementation: We sort an array of jobs by end time. We then use Arrays.binarySearch on the endTimes array to find the insertion point for start_i. If start_i is not found exactly, binarySearch returns -(insertion_point) - 1, which we convert to get the index of the last job that ends on or before start_i. Finally, we compute dp[i] just as in the Python approach.

Complexity Analysis

  • Brute Force Approach: Time Complexity: Exponential, roughly O(2^n) in the worst case. Each job can be either taken or not, leading to 2^n combinations (though conflicts reduce some combinations, the growth is still exponential). This is infeasible for large n (even n=30 would be over a billion possibilities). Space Complexity: O(n) for recursion stack in the worst case (one branch might take n nested calls). No substantial additional storage aside from recursion overhead.

  • Optimized DP + Binary Search Approach: Time Complexity: O(n log n). Sorting takes O(n log n). Building the DP uses a loop of n iterations, each doing a O(log n) binary search. For n = 50,000, n log n is about 50k * 15 ≈ 750k operations, which is very efficient. Space Complexity: O(n) for the job list, end-times array, and DP array. The recursion stack is eliminated in the iterative version (or bounded by O(n) in a memoized recursion). This easily fits in memory for n=50k.

Comparison: The DP approach is exponentially faster than brute force for large inputs. A brute force might work for very small n (like n < 20) for educational purposes, but DP is the only viable approach for n up to 50,000.

Step-by-Step Example Walkthrough

Let's walk through a smaller example in detail to solidify understanding. Consider:

startTime = [1, 2, 3]  
endTime   = [3, 4, 5]  
profit    = [50, 20, 30]

Here we have 3 jobs:

  • Job A: (1,3) profit 50
  • Job B: (2,4) profit 20
  • Job C: (3,5) profit 30

By inspection, jobs A and C do not overlap (A ends at 3, C starts at 3), so we could take A + C for profit 80. Job B overlaps with both A and C (B runs 2–4, which conflicts with A's 1–3 and C's 3–5), so B cannot go with either A or C. The best solution should be A + C = 80.

Using DP approach:

  1. Sort by end time:
    Sorted order: A (end=3), B (end=4), C (end=5).

  2. Initialize dp[0]=0.

  3. Evaluate Job A (end=3, start=1, profit=50):

    • No earlier job ends <= 1, so j=0.
    • Include profit = 50 + dp[0] = 50.
    • Exclude profit = dp[0] = 0.
    • dp[1] = max(50, 0) = 50.
  4. Evaluate Job B (end=4, start=2, profit=20):

    • No earlier job ends <= 2 (Job A ends at 3, which is >2), so j=0.
    • Include profit = 20 + dp[0] = 20.
    • Exclude profit = dp[1] = 50 (we already have 50 from job A).
    • dp[2] = max(20, 50) = 50.
      (It’s better to skip B and keep the profit from A so far.)
  5. Evaluate Job C (end=5, start=3, profit=30):

    • Find last job ending <= 3. Job A ends at 3 (which counts, since start=3 is allowed), so j=1 (Job A's index in sorted order).
    • Include profit = 30 + dp[1] = 30 + 50 = 80.
    • Exclude profit = dp[2] = 50.
    • dp[3] = max(80, 50) = 80.

After processing all jobs, dp[3] = 80, which is the maximum profit (from taking jobs A and C). This matches our manual analysis. The DP table values were:

  • dp[0]=0
  • dp[1]=50 (take A)
  • dp[2]=50 (skip B, carry over A’s profit)
  • dp[3]=80 (take C in addition to A)

This step-by-step illustrates how at each job we consider the best of taking it (plus whatever was optimal before its start) or skipping it. The binary search ensured we quickly found that Job A was the right predecessor for Job C.

Common Mistakes and Tips

  • Forgetting to Sort or Wrong Sort Order: Not sorting the jobs properly is a common bug. Always sort by start time for the recursive approach or by end time for the iterative DP approach. If unsorted, the logic to find the next job or previous compatible job will fail.

  • Greedy Assumption: A naive greedy strategy (like always picking the job with highest profit or earliest finishing time next) can fail. Counterexample: one job with huge duration and profit vs many small jobs with slightly less total profit – the greedy pick of highest profit might lock out the optimal combination. The DP is needed to evaluate combinations properly.

  • Incorrect Binary Search Bounds: When using binary search to find the next or previous compatible job, off-by-one errors are easy to make. Ensure that if a job starts exactly when another ends, you treat them as non-overlapping. For example, using bisect_right (in Python) or adjusting the index from Arrays.binarySearch (in Java) is necessary to handle equality correctly (so that end time == next start is allowed).

  • Not Using Memoization: If implementing the top-down recursive solution, forgetting to memoize results will lead to recomputation of subproblems. Always cache dfs(i) results for each index i to avoid exponential time. In Python, you might use functools.lru_cache for memoization, or a dictionary/array. In Java, use an array for memo. Without memoization, the recursive solution will time out.

  • Stack Overflow in Recursion: A purely recursive solution (even with memo) might hit recursion depth limits for n=50,000 (Python’s default recursion limit will be exceeded, and Java could run out of stack for very deep recursion). If using recursion, consider tail-recursion optimization (not available in Python) or switching to an iterative DP approach to be safe.

  • Misinterpreting “no overlap”: Remember that if a job ends at time X and another starts at time X, this is not an overlap (they are back-to-back). A common mistake is to treat this as overlapping. Make sure your condition (startTime[j] >= endTime[i]) is using >= and not > when finding the next job.

  • Using O(n^2) DP: One might attempt a DP where for each job you loop backwards to find a compatible job. That would work correctly but be too slow for large n (50k^2 operations is ~2.5 billion). Using binary search to find the compatible job index cuts this down significantly. Avoid the double nested loop approach for large inputs.
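For reference, the memoized top-down variant discussed above can be sketched like this (using functools.lru_cache and bisect as suggested; the function name is ours, and note that plain recursion may exceed Python's default recursion limit as n approaches 50,000):

```python
from bisect import bisect_left
from functools import lru_cache

def job_scheduling_top_down(startTime, endTime, profit):
    jobs = sorted(zip(startTime, endTime, profit))   # sort by start time
    starts = [s for s, _, _ in jobs]
    n = len(jobs)

    @lru_cache(maxsize=None)                         # memoize dfs(i)
    def dfs(i):
        if i >= n:
            return 0
        start_i, end_i, profit_i = jobs[i]
        # First job whose start is >= end_i (>= keeps back-to-back jobs legal)
        j = bisect_left(starts, end_i)
        return max(dfs(i + 1),                       # skip job i
                   profit_i + dfs(j))                # take job i
    return dfs(0)
```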

Edge Cases to Consider

  • Single Job: If there’s only one job, the answer is just that job’s profit (trivial case). Make sure the code handles n=1 properly (it should, in both approaches).

  • All Jobs Disjoint: If no two jobs overlap at all (each job starts after the previous one ends), then the optimal solution is to take all jobs. The algorithm should handle this and sum up all profits. (DP would just add profit each time since each job is compatible with the previous.)

  • All Jobs Overlap: If every job overlaps with every other (for example, all start at the same time or all have overlapping intervals), then you can only take one job. The solution should pick the single job with the highest profit. Ensure the algorithm indeed does that (the DP would compare taking each job versus another and end up with the max single profit).

  • Jobs With Same Start or Same End Times: If multiple jobs start at the same time or end at the same time, the algorithm should still work. Sorting by end time (or start) will group them, but the DP/transitions still correctly evaluate taking one versus another. Just be careful with binary search on equal times (as mentioned in mistakes). For example, two jobs with same end time – the order between them in sorting doesn’t matter for correctness, as neither can be taken with the other anyway.

  • Large Time Values: Start and end times can be large (up to 1e9), but our algorithm’s complexity doesn’t depend on the magnitude of time values, only on ordering. Just ensure using appropriate data types (int is fine for 1e9 in Java, and Python ints handle it automatically).

  • Large n: The solution should handle 50,000 jobs. Ensure that data structures (like recursion stack or loops) are efficient. Our DP approach uses O(n) memory and O(n log n) time which is fine. Test boundary scenarios if possible (like 50k jobs with minimal overlaps vs maximal overlaps) to ensure performance.

  • Top-Down Memoization: As discussed, you can implement the DP using recursion + memo (depth-first search with caching) instead of an explicit table. The strategy and complexity remain the same (O(n log n)). Some languages might find this easier to implement using built-in libraries (e.g., Python’s functools.lru_cache). The bottom-up and top-down approaches are just two ways to implement the same DP recurrence.

Related Problems and Variations

  • Greedy for Unweighted Scheduling: If all profits were equal (or if the goal was to maximize the number of jobs instead of profit), the problem reduces to the classic Activity Selection Problem. In that case, a greedy algorithm (always take the job which finishes first) would maximize the number of non-overlapping jobs. This is a simpler variant when profit doesn't matter.

  • Job Sequencing with Deadlines: A related problem (often seen in interviews/competitions) is where each job has a deadline and profit, and each job takes the same unit time. The task is to schedule jobs such that they finish before their deadlines to earn profit. That can be solved with a greedy approach using a min-heap or union-find, and is different from interval scheduling because the "length" of each job is equal and you schedule in discrete slots up to the deadline.

  • Weighted Interval Scheduling in General: The problem we solved is essentially Weighted Interval Scheduling. In algorithm textbooks, the same DP approach is used. For further practice, you can look at other formulations of weighted scheduling:

    • Selecting compatible intervals with maximum total weight.
    • Variants where instead of profit, you maximize something like total length of intervals (which would effectively be similar approach).
    • Problems where intervals might be defined by different criteria (e.g., tasks you can do in a day with start-end times).
  • LeetCode Variations: Some other LeetCode problems to practice that involve interval scheduling or DP on intervals:

    • Non-overlapping Intervals (LeetCode 435): You need to remove the minimum number of intervals to avoid overlap (or equivalently, select the maximum number of non-overlapping intervals). This is the unweighted version solved by a greedy strategy.
    • Maximum Length of Pair Chain (LeetCode 646): Similar to activity selection, but phrased with pairs, asking for the longest chain of pairs where each chain link is compatible (can be solved greedily by end times as well).
    • Course Schedule III (LeetCode 630): A different spin on scheduling where each "job" has a duration and a deadline, and you want to take as many courses as possible before deadlines. The strategy there is greedy with a priority queue (not DP), but it’s another kind of scheduling optimization.
    • Job Scheduling with Resources: Some problems add more complexity like needing machines or having cooldowns (e.g., "Task Scheduler" problem where tasks have a cooling interval between same tasks). Those require different strategies (greedy or greedy+counting).
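The greedy rule for the unweighted variant mentioned above (Activity Selection, as in LeetCode 435 and 646) can be sketched as follows (function name ours):

```python
def max_non_overlapping(intervals):
    # Activity selection: sort by end time, then greedily take each interval
    # that starts at or after the end of the last interval taken.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    count, last_end = 0, float("-inf")
    for start, end in intervals:
        if start >= last_end:        # back-to-back (start == last_end) is allowed
            count += 1
            last_end = end
    return count
```

With the three jobs from the step-by-step example, (1,3), (2,4), (3,5), this picks (1,3) and (3,5) and returns 2.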
TAGS
leetcode
CONTRIBUTOR
Design Gurus Team