509. Fibonacci Number - Detailed Explanation


Problem Statement:

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding numbers. By definition, F(0) = 0 and F(1) = 1. For any n > 1, the recurrence is:

[ F(n) = F(n-1) + F(n-2). ]

The problem “Fibonacci Number” asks: given an integer n, calculate the n-th Fibonacci number, F(n).

  • Input: An integer n (the position in the Fibonacci sequence).
  • Output: The value of F(n).

Example Cases:

  • Example 1:
    Input: n = 2
    Output: 1
    Explanation: F(2) = F(1) + F(0) = 1 + 0 = 1.

  • Example 2:
    Input: n = 3
    Output: 2
    Explanation: F(3) = F(2) + F(1) = 1 + 1 = 2.

  • Example 3:
    Input: n = 4
    Output: 3
    Explanation: F(4) = F(3) + F(2) = 2 + 1 = 3.

  • Example 4:
    Input: n = 0
    Output: 0
    Explanation: Base case; F(0) is defined to be 0.

  • Example 5:
    Input: n = 1
    Output: 1
    Explanation: Base case; F(1) is defined to be 1.

Constraints: n is a non-negative integer with 0 ≤ n ≤ 30. (Within these constraints, results fit in a 32-bit integer.)

Hints:

  • Use the Recurrence: Recognize that Fibonacci numbers are defined by the recurrence relation (F(n) = F(n-1) + F(n-2)). This directly suggests a recursive solution.

  • Base Cases: Always handle the smallest cases: F(0) = 0 and F(1) = 1. These stop conditions are crucial in a recursive solution to avoid infinite recursion.

  • Overlapping Subproblems: Notice that computing F(n) recursively leads to repeated calculations of the same subproblems. For example, to compute F(5) recursively, you compute F(3) multiple times. Think about how you can store and reuse results of smaller subproblems to avoid repetition (this is the key idea of memoization or dynamic programming).

  • Bottom-Up Thinking: Alternatively, consider building the solution from the ground up. If you know F(0) and F(1), you can iteratively compute F(2), then F(3), and so on up to F(n). This is often more efficient than naive recursion.

  • Space Optimization: Do you need to store the entire sequence up to F(n)? Perhaps you can get away with storing just a couple of values at a time.

  • Advanced Hint: There are mathematical methods to compute Fibonacci faster than O(n). One involves using matrix exponentiation or formulas to achieve O(log n) time. If n were very large, these methods become useful, but they are more complex to implement.

Multiple Approaches:

There are many ways to compute Fibonacci numbers. Below, we discuss five different approaches, from the most straightforward (but least efficient) to more optimized methods, including a mathematically advanced method. We will illustrate each approach with step-by-step reasoning.

Approach 1: Recursive Approach (Naïve)

The most direct implementation follows the Fibonacci definition. We define a function fib(n) that calls itself to compute fib(n-1) and fib(n-2) and returns their sum. This approach is very simple to implement, but highly inefficient for larger n due to repeated calculations.

  • Idea: If we need F(n), recursively get F(n-1) and F(n-2), and sum them up. Use the base cases F(0) and F(1) to stop recursion.

  • Pseudo-code:

    function fib(n):
        if n <= 1:
            return n
        else:
            return fib(n-1) + fib(n-2)
  • Trace Example (n = 5):
    To compute fib(5), the function calls:

    fib(5)
    ├─ fib(4)
    │  ├─ fib(3)
    │  │  ├─ fib(2)
    │  │  │  ├─ fib(1) ⇒ 1
    │  │  │  └─ fib(0) ⇒ 0
    │  │  │  (fib(2) = 1 + 0 = 1)
    │  │  └─ fib(1) ⇒ 1
    │  │  (fib(3) = fib(2) + fib(1) = 1 + 1 = 2)
    │  └─ fib(2)  (computed again)
    │     ├─ fib(1) ⇒ 1
    │     └─ fib(0) ⇒ 0
    │     (fib(2) = 1 + 0 = 1)
    │  (fib(4) = fib(3) + fib(2) = 2 + 1 = 3)
    └─ fib(3)  (computed again)
       ├─ fib(2) ⇒ 1  (computed again, expanding fib(1) and fib(0))
       └─ fib(1) ⇒ 1
       (fib(3) = fib(2) + fib(1) = 1 + 1 = 2)
    Finally, fib(5) = fib(4) + fib(3) = 3 + 2 = 5.

    Notice that fib(2) and fib(3) were each computed multiple times. This redundancy grows exponentially with n.

  • Complexity: This naive recursion has exponential time complexity ~O(2^n). Each call branches into two more calls (except base cases), leading to roughly 2^n calls. The space complexity is O(n) for the recursion call stack (the deepest stack is n calls in the worst case, when unwinding down to F(0)). In practice, this approach is only feasible for small n (which is fine for n ≤ 30 by problem constraints, but not for much larger n).
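To make this concrete, here is a minimal Java sketch of the naive recursion. The method name fibRecursive follows the method list given later in this article; assume it lives in the same class as the other method sketches below.

    // Naive recursion: directly mirrors F(n) = F(n-1) + F(n-2).
    // Exponential time, but acceptable for n <= 30 as in this problem.
    public static int fibRecursive(int n) {
        if (n <= 1) {
            return n;   // base cases: F(0) = 0, F(1) = 1
        }
        return fibRecursive(n - 1) + fibRecursive(n - 2);
    }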

Approach 2: Memoization (Top-Down Dynamic Programming)

Memoization improves the recursive approach by caching results of subproblems so we don’t recompute them. This is a top-down dynamic programming strategy: we still use recursion, but we remember (memoize) the results of each fib(k) in an array or dictionary after computing it the first time.

  • Idea: Maintain a memo (lookup table) for computed Fibonacci values. When fib(n) is called:

    • If the result for n is already in the memo, return it directly (avoiding repeated work).
    • If not, compute fib(n-1) and fib(n-2) recursively, store the sum in the memo, then return it.
  • Implementation details: Initialize a memo array of size n+1 (or use a HashMap). You can set memo[0] = 0 and memo[1] = 1 as base cases. For other entries, use -1 or another sentinel to indicate “not computed yet.”

  • Trace Example (n = 5):
    Start with memo = {0:0, 1:1} (F(0) and F(1) known).

    1. fib(5) not in memo → compute fib(4) and fib(3).

    2. fib(4) not in memo → compute fib(3) and fib(2).

    3. fib(3) not in memo → compute fib(2) and fib(1).

    4. fib(2) not in memo → compute fib(1) and fib(0) which are base cases 1 and 0. Now we can store memo[2] = 1.

      • Return to fib(3): now compute fib(1) (which is base 1). Store memo[3] = memo[2] + memo[1] = 1 + 1 = 2.
    5. Return to fib(4): fib(3) has just returned 2 (stored as memo[3] = 2). Its second call, fib(2), is already in the memo (value 1), so no further recursion is needed. Store memo[4] = memo[3] + memo[2] = 2 + 1 = 3.

    6. Return to fib(5): fib(4) has just returned 3, and fib(3) is found in the memo (value 2), so no recursion is needed for it. Store memo[5] = memo[4] + memo[3] = 3 + 2 = 5.

    The memoization ensures each Fibonacci number from 0 up to 5 was computed only once. Any subsequent calls to those will return instantly from the memo.

  • Advantages: We still use recursion, but we avoid duplicate calculations. The recursion tree is effectively pruned into a directed acyclic graph where each F(k) is solved once. This dramatically improves efficiency.

  • Complexity: With memoization, the time complexity reduces to O(n) because each number F(2) through F(n) is computed only one time. The space complexity is O(n) for the memo array and O(n) for recursion stack (in the worst case, the recursion depth is n before filling the memo). Given n ≤ 30, this is very efficient. Even for much larger n (say n = 1000), this approach would be feasible in terms of time (though recursion stack might become an issue if n is extremely large, leading to potential stack overflow in some languages).
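A hedged Java sketch of the memoized version follows (the public method name fibMemoization matches the method list given later; the private helper name fibMemo is our own choice):

    // Top-down DP: cache each F(k) in memo[] the first time it is computed.
    public static int fibMemoization(int n) {
        int[] memo = new int[n + 1];
        java.util.Arrays.fill(memo, -1);     // -1 marks "not computed yet"
        return fibMemo(n, memo);
    }

    private static int fibMemo(int n, int[] memo) {
        if (n <= 1) return n;                // base cases F(0) and F(1)
        if (memo[n] != -1) return memo[n];   // reuse a cached result
        memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
        return memo[n];
    }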

Approach 3: Iterative Approach (Bottom-Up Dynamic Programming)

Dynamic programming can also be done in a bottom-up manner, which typically uses iteration instead of recursion. We start from the smallest subproblems (the base cases) and build up to the desired n. This avoids the overhead of recursion and is straightforward to implement with a loop.

  • Idea: Create an array dp[] of size n+1, where dp[i] will hold F(i). Initialize dp[0] = 0 and dp[1] = 1. Then, for i from 2 to n, compute dp[i] = dp[i-1] + dp[i-2]. Finally, dp[n] will contain the answer.

  • Step-by-Step (n = 5):
    Initialize an array dp of length 6 (index 0 through 5).

    • dp[0] = 0 (base case)
    • dp[1] = 1 (base case)
    • i = 2 ⇒ dp[2] = dp[1] + dp[0] = 1 + 0 = 1
    • i = 3 ⇒ dp[3] = dp[2] + dp[1] = 1 + 1 = 2
    • i = 4 ⇒ dp[4] = dp[3] + dp[2] = 2 + 1 = 3
    • i = 5 ⇒ dp[5] = dp[4] + dp[3] = 3 + 2 = 5

    After filling the table up to i=5, dp[5] is 5, which is F(5). We have our answer. We computed each Fibonacci number exactly once in increasing order.

  • Memory Usage: This approach uses O(n) space to store the table. For n=30 (max in this problem), that’s trivial. If n were much larger, the array might be a concern, but there's an optimization (next approach) to reduce space.

  • Complexity: Time complexity is O(n) (one loop from 2 to n, constant work each iteration). Space complexity is O(n) for the dp array. This is efficient for the given constraints. We also avoid recursion entirely, so there's no risk of stack overflow.
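A short Java sketch of this bottom-up table approach (method name fibDP per the method list given later):

    // Bottom-up DP: fill dp[0..n] from the base cases upward.
    public static int fibDP(int n) {
        if (n <= 1) return n;     // covers n = 0 and n = 1 without building a table
        int[] dp = new int[n + 1];
        dp[0] = 0;
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }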

Approach 4: Optimized Iterative Approach (O(1) Space)

On closer observation, the DP table is overkill — at each step we only need the last two Fibonacci values to compute the next one. We can optimize space by keeping just two variables for the previous two Fibonacci numbers and iteratively update them. This yields the same O(n) time complexity but reduces space to constant O(1).

  • Idea: Use two variables (say a and b) to track F(i-2) and F(i-1) as we iterate upward. Initially, set a = F(0) = 0 and b = F(1) = 1. Then, for i from 2 to n, compute next = a + b (this is F(i)), then update a = b and b = next. When the loop finishes, b holds F(n). (If n is 0 or 1, skip the loop and return n directly.)

  • Step-by-Step (n = 5):
    We want to compute F(5). Start with:

    • a = 0 (this will represent F(i-2)), b = 1 (represents F(i-1)). Initially i = 1 (meaning b = F(1)).
      Now loop from i = 2 up to 5, updating:
    1. i = 2: next = a + b = 0 + 1 = 1. Update a = b (a becomes 1), b = next (b becomes 1). (Now a = F(1)=1, b = F(2)=1).

    2. i = 3: next = a + b = 1 + 1 = 2. Update a = b (a becomes 1), b = next (b becomes 2). (Now a = F(2)=1, b = F(3)=2).

    3. i = 4: next = a + b = 1 + 2 = 3. Update a = b (a becomes 2), b = next (b becomes 3). (Now a = F(3)=2, b = F(4)=3).

    4. i = 5: next = a + b = 2 + 3 = 5. Update a = b (a becomes 3), b = next (b becomes 5). (Now a = F(4)=3, b = F(5)=5).

    At this point, for i=5, b = 5 which is F(5). We have our result. Notice how we never needed an array; we just carried forward two values.

    We can summarize these steps in a table for clarity:

    Step (i)   a (prev F)   b (curr F)   next (a + b)
    start      0 (F(0))     1 (F(1))
    i = 2      1 (F(1))     1 (F(2))     1
    i = 3      1 (F(2))     2 (F(3))     2
    i = 4      2 (F(3))     3 (F(4))     3
    i = 5      3 (F(4))     5 (F(5))     5

    Table: Iteratively computing Fibonacci. After each step, b holds F(i). At the end, b = 5 which is F(5).

  • Complexity: Time complexity is O(n) (still doing n-1 additions). Space complexity is O(1) since we only use a few integer variables. This is the most efficient in terms of memory and more than fast enough for n up to 30 (or even much larger n). In fact, this approach is optimal for linear Fibonacci computation.

  • Use cases: This method is typically the go-to solution in interview settings for Fibonacci because it’s simple, uses constant space, and runs in linear time. For example, computing F(45) with this approach takes a loop of only 44 iterations (i = 2 through 45), which is negligible, whereas the naive recursive solution would make on the order of a few billion recursive calls for the same input.
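A Java sketch of the constant-space loop (method name fibDPOptimized per the method list given later):

    // Constant-space iteration: carry only the last two Fibonacci values.
    public static int fibDPOptimized(int n) {
        if (n <= 1) return n;
        int a = 0;                // F(i-2), starts as F(0)
        int b = 1;                // F(i-1), starts as F(1)
        for (int i = 2; i <= n; i++) {
            int next = a + b;     // F(i)
            a = b;
            b = next;
        }
        return b;                 // after the loop, b = F(n)
    }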

Approach 5: Matrix Exponentiation Approach (Fast Fibonacci Computation)

This approach uses a mathematical property of Fibonacci numbers to achieve a faster computation time, particularly useful if n is very large. Fibonacci numbers can be generated by raising a specific 2×2 matrix to the n-th power. Using fast exponentiation (a divide-and-conquer technique), we can compute this power in O(log n) time.

  • Key Idea: The Fibonacci sequence can be encoded in the matrix form:
    [ M = \begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}. ]
    It turns out that raising M to the n-th power gives:
    [ M^n = \begin{pmatrix}F(n+1) & F(n)\\ F(n) & F(n-1)\end{pmatrix}. ]
    This is a known property of Fibonacci numbers. For example:

    • (M^1 = \begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}) yields F(2)=1, F(1)=1 in the top row.
    • (M^2 = \begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}) yields F(3)=2, F(2)=1.
    • (M^3 = \begin{pmatrix}3 & 2\\ 2 & 1\end{pmatrix}) yields F(4)=3, F(3)=2.
    • (M^4 = \begin{pmatrix}5 & 3\\ 3 & 2\end{pmatrix}) yields F(5)=5, F(4)=3.
    • (M^5 = \begin{pmatrix}8 & 5\\ 5 & 3\end{pmatrix}) yields F(6)=8, F(5)=5.

    You can see the pattern: the [0,1] entry of (M^n) (or [1,0] entry) is F(n). So if we can efficiently compute (M^n), we get F(n) directly.

  • Fast Exponentiation: Computing (M^n) by naive multiplication would be O(n) matrix multiplications, which is not better than the DP approach. Instead, we use exponentiation by squaring (also known as divide-and-conquer power algorithm):

    • If n is even, (M^n = (M^{n/2}) \times (M^{n/2})).
    • If n is odd, (M^n = (M^{\lfloor n/2 \rfloor}) \times (M^{\lfloor n/2 \rfloor}) \times M).

    This way, we recursively halve the power at each step. The number of matrix multiplications will be ~O(log n). Each matrix multiplication is a constant number of operations (multiplying two 2×2 matrices involves about 8 multiplications and 4 additions of numbers, which is constant work). Thus, the overall time is O(log n).

  • Example (n = 5): We want to compute (M^5):

    1. n=5 is odd. Compute (M^{\lfloor5/2\rfloor} = M^2) first.
    2. Compute (M^2 = M \times M = \begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}) (as above).
    3. Now square that result to get (M^4 = M^2 \times M^2 = \begin{pmatrix}5 & 3\\ 3 & 2\end{pmatrix}).
    4. Since n=5 is odd, multiply (M^4) by one more M: (M^5 = M^4 \times M = \begin{pmatrix}8 & 5\\ 5 & 3\end{pmatrix}).
    5. From (M^5), we read off F(5) as the [0,1] entry which is 5 (or [1,0] entry, also 5). This matches the known F(5).
  • Implementation: We implement a fast power function for 2×2 matrices. The steps are:

    1. Define a function to multiply two 2×2 matrices.
    2. Define a recursive (or iterative) function to exponentiate the matrix M to power n:
      • If n == 0, return the identity matrix (\begin{pmatrix}1&0\\ 0&1\end{pmatrix}).
      • If n == 1, return M.
      • Compute halfPower = power(M, n//2).
      • Compute fullPower = halfPower * halfPower (this gives M^(2*(n//2)) = M^(n - (n mod 2))).
      • If n is odd, do fullPower = fullPower * M (multiply one extra M).
      • Return fullPower.
    3. Use the resulting matrix to get the Fibonacci number (either matrix[0][1] or matrix[1][0] will equal F(n)).
  • Complexity: Time complexity is O(log n) for the exponentiation (each level of recursion squares the matrix, reducing the power by half). Space complexity is O(1) for storing a few 2×2 matrices (if using recursion, the call stack is O(log n) deep). For n up to 30, this method is actually overkill (the iterative methods are more than fast enough). But for very large n (hundreds or thousands), matrix exponentiation or similar math techniques become essential.

  • Note: There is also a formula using the golden ratio that gives Fibonacci numbers in closed form (known as Binet’s formula). It is (F(n) = \frac{\varphi^n - \psi^n}{\sqrt{5}}), where (\varphi \approx 1.618034) (golden ratio) and (\psi \approx -0.618034) . However, using this in a program is tricky since it involves irrational numbers and can suffer precision errors for large n. The matrix method (or an equivalent fast doubling method) computes exact results using integer arithmetic.
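Here is a hedged Java sketch of the matrix approach (the public method name fibMatrix matches the method list in the next section; the matrixPower and multiply helpers are our own names):

    // F(n) from M^n where M = [[1, 1], [1, 0]]; the [0][1] entry of M^n is F(n).
    public static int fibMatrix(int n) {
        if (n <= 1) return n;
        int[][] result = matrixPower(new int[][]{{1, 1}, {1, 0}}, n);
        return result[0][1];
    }

    private static int[][] matrixPower(int[][] m, int p) {
        if (p == 1) return m;
        int[][] half = matrixPower(m, p / 2);
        int[][] full = multiply(half, half);         // M^(p - p%2)
        if (p % 2 == 1) full = multiply(full, m);    // one extra M for odd exponents
        return full;
    }

    private static int[][] multiply(int[][] x, int[][] y) {
        return new int[][]{
            {x[0][0] * y[0][0] + x[0][1] * y[1][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]},
            {x[1][0] * y[0][0] + x[1][1] * y[1][0], x[1][0] * y[0][1] + x[1][1] * y[1][1]}
        };
    }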

Java Code Implementation:

We provide separate methods for each approach (fibRecursive, fibMemoization, fibDP, fibDPOptimized, fibMatrix).

Java

. . . .

All approaches produce the same correct results. (The naive recursive approach is the slowest of the five; it is still effectively instant for small n, but for n = 30 it performs millions of redundant calls, whereas the other approaches need only a handful of operations.)
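As a usage illustration only: assuming the method sketches shown alongside each approach above are collected into a single class, a small driver like the following can cross-check them (the loop bound and output format are arbitrary choices):

    public static void main(String[] args) {
        // Cross-check the five approaches for n = 0..10; they should all agree.
        for (int n = 0; n <= 10; n++) {
            int expected = fibRecursive(n);
            if (fibMemoization(n) != expected || fibDP(n) != expected
                    || fibDPOptimized(n) != expected || fibMatrix(n) != expected) {
                throw new AssertionError("Mismatch at n = " + n);
            }
            System.out.println("F(" + n + ") = " + expected);
        }
    }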

Python Implementation

Python3

. . . .

Complexity Analysis:

Let’s summarize the time and space complexity of each approach:

  • Recursive (Naïve): Time Complexity ≈ O(2^n). This exponential growth comes from the branching recursion: every call (except the base cases) spawns two more calls. For example, computing F(30) naively makes roughly 2·F(31) ≈ 2.7 million calls, which is still workable within this problem’s limits, but the count grows exponentially: something like F(50) already requires billions of calls. Space Complexity = O(n) for the recursion stack (in the worst case, the depth of recursion is n before hitting the base case).

  • Memoization (Top-Down DP): Time Complexity = O(n). Each F(i) for i from 2 to n is computed once and stored, so it’s basically linear in n. The overhead of recursion is there, but each call does constant work aside from the two recursive calls which are avoided on repeats. Space Complexity = O(n) for the memo table to store F(0)...F(n), and the recursion depth is also O(n) (though each call is quick). Overall O(n) space. Given n ≤ 30, this is trivial; even if n were large, the memory grows linearly.

  • Iterative Bottom-Up DP: Time Complexity = O(n) (one loop through n). Space Complexity = O(n) for the dp array. We remove recursion overhead here. If n=30, we use an array of length 31, which is fine. If n were huge (say 1e6), an array of that size is needed, which is still okay in many languages.

  • Optimized Iterative (Constant Space): Time Complexity = O(n) (same single loop). Space Complexity = O(1), since we only use a fixed number of variables (just a few integers). This is optimal space usage for this problem because you can’t compute F(n) without at least remembering the last two values. This approach is both time and space efficient for any reasonably large n that might be encountered.

  • Matrix Exponentiation: Time Complexity = O(log n) . The divide-and-conquer exponentiation drastically reduces the number of multiplications needed. For example, to compute F(30), this method would perform on the order of log2(30) ≈ 5 matrix multiplications (each multiplication is constant time), which is just a handful of operations! In general, it grows very slowly with n. Space Complexity = O(1) for the matrices and a few temp variables (if implemented recursively, it uses O(log n) stack space for the recursion, but that is minimal). The downside is the implementation is more complex. For n up to 30, the constant factors don’t matter and any O(n) method is more than fast enough. But for extremely large n (beyond the given constraints), this method or similar fast doubling algorithms are preferred.

Step-by-Step Walkthrough:

Let’s walk through a specific example to see how the Fibonacci value is computed at each step using a dynamic programming approach. Consider n = 6 (we want to compute F(6)). We’ll use the optimized iterative approach (constant space), as it clearly shows the progression:

We know the sequence should be: 0, 1, 1, 2, 3, 5, 8 for F(0)...F(6). Now follow the algorithm:

  • Initialization: a = 0 (F(0)), b = 1 (F(1)). At the start, we already have F(0) and F(1) ready.
  • Step i=2: Compute next = a + b = 0 + 1 = 1. This is F(2). Update a = b (a becomes 1, the old F(1)), b = next (b becomes 1, which is F(2)). Now a = 1 (F(1)), b = 1 (F(2)).
  • Step i=3: Compute next = a + b = 1 + 1 = 2 (this is F(3)). Update a = b (a becomes 1, the old F(2)), b = next (b becomes 2, F(3)). Now a = 1 (F(2)), b = 2 (F(3)).
  • Step i=4: next = a + b = 1 + 2 = 3 (F(4)). Update a = 2, b = 3. Now a = 2 (F(3)), b = 3 (F(4)).
  • Step i=5: next = 2 + 3 = 5 (F(5)). Update a = 3, b = 5. Now a = 3 (F(4)), b = 5 (F(5)).
  • Step i=6: next = 3 + 5 = 8 (F(6)). Update a = 5, b = 8. Now a = 5 (F(5)), b = 8 (F(6)).

At the end of this loop, for i=6, b = 8. This is F(6), which is our answer. We successfully built up from the base cases to n=6, updating the state at each iteration.

If we were to visualize the recursive approach for n=6, it would look like a binary tree of calls (like the recursion trace shown earlier for n=5). Each node splits into two sub-calls until the leaves (base cases F(1) or F(0)). Many leaves repeat the same computations. The memoized approach would trim this tree by caching results: once F(3) (for example) is computed, any other branch needing F(3) would reuse the cached value instead of expanding further.

In the bottom-up DP approach, instead of thinking in terms of recursive calls, you fill out a table:

i:    0, 1,  2,  3,  4,  5,  6
F(i): 0, 1,  1,  2,  3,  5,  8

You start by writing down F(0) and F(1), then each subsequent entry is the sum of the two previous entries. By the time you fill the entry for i=6, you have the result.

In the matrix exponentiation approach, a step-by-step might involve recursively computing powers of the matrix. For n=6, you would compute (M^6) by:

  • Finding (M^3) (since 6 is even, (M^6 = (M^3)^2)).

  • To get (M^3): since 3 is odd, first compute (M^1) (which is just M), square it, and multiply by one extra M. Essentially (M^3 = M^1 * M^1 * M).

  • Squaring M gives (M^2), then multiplying by M gives (M^3).

  • Then square (M^3) to get (M^6).

  • Read off F(6) from the matrix. If you carry out these multiplications, you’d end up with (M^6 = \begin{pmatrix}13 & 8\\ 8 & 5\end{pmatrix}), which tells us F(6)=8, F(7)=13. Each multiplication step is a small calculation.

The step-by-step values in each approach all ultimately produce the same Fibonacci numbers, but the paths and intermediate states differ. The iterative approaches clearly show the running sum behavior of the Fibonacci sequence (each new number is added to the running total in a sense), whereas the recursive approaches explore the combination of previous results (and with memoization, we ensure each required result is computed once).

Common Mistakes & Edge Cases:

  • Forgetting Base Cases: A very common mistake is to forget to handle n = 0 or n = 1 properly. If the recursive function doesn’t stop at n <= 1, it will go into infinite recursion. Always ensure if (n <= 1) return n (or separate checks for 0 and 1) is in your recursive solution. Similarly, for iterative solutions, if n is 0 or 1, you should return immediately without looping.

  • Off-by-One Errors in Loops: When using loops (iterative DP), be careful with your loop ranges. For example, if you do for (int i = 2; i < n; i++) vs <= n, you might incorrectly stop one iteration too soon or run one too many. Double-check with small values (like n=2 or n=3) to ensure your loop produces the correct result.

  • Misunderstanding the Fibonacci Definition: Some definitions start Fibonacci as F(1)=1, F(2)=1 (omitting F(0)). The LeetCode problem clearly uses F(0)=0, F(1)=1. If you aren’t careful, you might implement the wrong base cases (for example, coding if (n <= 2) return 1, which would erroneously return 1 for n=0 as well). Make sure to implement according to the given definition.

  • Inefficient Approach for Larger n: Using the naive recursive approach for large n (even moderately large like n=40) will cause a huge slowdown or even time out, because of exponential complexity. Always consider the input constraints; for n up to 30, recursion is okay, but generally memoization or iteration is the safe choice. If you tried naive recursion for something like n=50 or n=100 (outside the given constraint), it would be extremely slow. Remember, dynamic programming (memoization or tabulation) saves the day by cutting down repeated work.

  • Integer Overflow (in other contexts): For n ≤ 30, overflow isn’t an issue (F(30) = 832040 fits in 32-bit int). But be aware, Fibonacci numbers grow fast. If the problem allowed n to be larger (say n=50 or n=100), the result might exceed the range of a 32-bit integer. In Java, one might need to use a larger data type (like long for up to F(92) or BigInteger for beyond) to handle the result. Always check the constraints; here int is fine.

  • Iterative State Updates: In the optimized iterative approach, a common mistake is updating the variables in the wrong order. For instance, doing a = a + b; b = a; goes wrong because once you overwrite a, you’ve lost its old value. Always use a temporary variable (or simultaneous assignment if the language allows it) to hold the new value during the update. In our code, we used next to hold a + b before shifting b into a and next into b (see the short demo after this list).

  • Matrix Exponentiation Pitfalls: If implementing matrix exponentiation, mistakes can happen in matrix multiplication or the power algorithm. One must carefully implement multiplication to avoid index mix-ups, and ensure to multiply in the correct order (matrix multiplication is not commutative). Also, forgetting to multiply by the base matrix for odd exponents will give the wrong result. Testing on small n (like n=5 or n=6, where you know the answer) is important to verify the matrix method.

  • Recursive Depth (Stack Overflow): While not an issue for n=30, in general if you use recursion for Fibonacci without memo, the call depth can be n, which for large n (like thousands) would risk a stack overflow. Memoization still uses recursion but will terminate faster. In languages without proper tail-call optimization, deep recursion can be a problem. The iterative solutions avoid this entirely.
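To make the update-order pitfall above concrete, here is a small, self-contained Java demo (the class and variable names are ours) contrasting the correct update with the buggy one:

    public class UpdateOrderDemo {
        public static void main(String[] args) {
            int n = 6;   // F(6) = 8

            // Correct order: compute the new value first, then shift the window.
            int a = 0, b = 1;
            for (int i = 2; i <= n; i++) {
                int next = a + b;   // F(i)
                a = b;
                b = next;
            }
            System.out.println("Correct: " + b);   // prints 8

            // Buggy order: a is overwritten before its old value is used,
            // so b just copies the new a and drifts off the sequence.
            int x = 0, y = 1;
            for (int i = 2; i <= n; i++) {
                x = x + y;
                y = x;
            }
            System.out.println("Buggy:   " + y);   // prints 16, not 8
        }
    }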

Edge cases for this problem are mainly the smallest values:

  • n = 0: Should return 0. Make sure your code handles this (both recursion and iteration). Our implementations all check if(n <= 1) return n, which correctly returns 0 for n=0.

  • n = 1: Should return 1. Also handled by if(n <= 1) return n.

  • n = 2: This is the first case that actually uses the recurrence (F(2) = 1). It’s good to test n=2 to ensure the loop or recursion logic works for the smallest non-trivial case. Our examples and code confirm F(2) comes out as 1.

  • Maximum n (30): For completeness, test n=30. F(30)=832040. All approaches should handle this quickly. Even the naive recursion handles n=30 in well under a second in Java, although its call count (a few million) is already far larger than the handful of steps the other approaches need. Memoization or iteration handles it instantly.

Alternative Variations:

The Fibonacci pattern and recurrence show up in many variations and other problems:

  • Similar Recurrence – Tribonacci: A variant of Fibonacci is the Tribonacci sequence (LeetCode 1137) where each number is the sum of the previous three numbers (with base cases T(0)=0, T(1)=1, T(2)=1). The approach to solve tribonacci is analogous: you can use recursion with memoization or iterative DP. Instead of keeping two previous numbers, you’d keep three.

  • Climbing Stairs Problem: LeetCode’s “Climbing Stairs” (Problem 70) is essentially a Fibonacci in disguise. It asks: given n stairs, in how many distinct ways can you climb to the top if you can take 1 or 2 steps at a time? The recurrence turns out to be the same as Fibonacci: ways(n) = ways(n-1) + ways(n-2), with base cases ways(0)=1, ways(1)=1. So, the solution is F(n+1) essentially. Many people unknowingly reinvent Fibonacci when solving that problem.

  • Fibonacci in Nature and Math: Fibonacci numbers appear in various natural patterns (like the spiral arrangement of leaves, flower petals, etc.) and in mathematical contexts such as the golden ratio. The ratio F(n+1)/F(n) approaches ~1.618 (φ) as n grows. While this is more of a mathematical curiosity, it has led to the closed-form formula (Binet’s formula) involving φ as mentioned earlier. In computing, we usually don’t use that directly, but it’s interesting to know that F(n) grows roughly exponentially with ratio φ.

  • Fast Doubling Method: This is an alternative fast method to compute Fibonacci, similar in spirit to matrix exponentiation but using recursion on the identities [ F(2k) = F(k) \times (2F(k+1) - F(k)) ] and [ F(2k+1) = F(k+1)^2 + F(k)^2 ]. Using these, one can compute Fibonacci in O(log n) time as well, with just integer arithmetic (no matrices). It’s called “fast doubling” because each recursive step roughly doubles the index. This method is often used in programming contests when n is extremely large (and usually with modulo arithmetic to keep numbers within limits). A short sketch appears after this list.

  • Other DP Problems: The Fibonacci recurrence is one of the simplest forms of DP. Many other problems have similar recurrences. For example, the number of ways to cover a 2×n board with 2×1 dominoes is Fibonacci(n+1). The logic is: to cover 2×n, either the first column is covered by two vertical dominoes (then solve 2×(n-1)) or by two horizontal dominoes (then solve 2×(n-2)). This gives the same recurrence. Recognizing a Fibonacci pattern in disguise can help solve such problems by directly applying a known formula or approach.

  • Fibonacci Heaps and Fibonacci Search: The term “Fibonacci” also appears in advanced algorithms and data structures – for instance, Fibonacci Heap (a heap structure that has amortized operations bounded by Fibonacci numbers) and Fibonacci Search (a search algorithm that uses Fibonacci numbers to divide the array). These are beyond the scope of this problem, but it shows how ubiquitous Fibonacci numbers are in computer science.

  • Dynamic Programming Introduction: Fibonacci is often the first example used to teach dynamic programming because of its simple recurrence and the dramatic contrast between the exponential recursive solution and the linear DP solution. Understanding this example lays the foundation for tackling more complex DP problems where you store and reuse subproblem results.
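As a rough illustration of the fast doubling method mentioned above, here is a self-contained Java sketch (class and helper names are ours). The helper returns the pair (F(k), F(k+1)) so both identities can be applied at each step:

    public class FastDoubling {
        // Returns {F(k), F(k+1)} using the doubling identities:
        //   F(2m)   = F(m) * (2*F(m+1) - F(m))
        //   F(2m+1) = F(m+1)^2 + F(m)^2
        private static long[] fibPair(long k) {
            if (k == 0) return new long[]{0L, 1L};      // {F(0), F(1)}
            long[] half = fibPair(k / 2);
            long fm = half[0], fm1 = half[1];           // F(m), F(m+1) with m = k/2
            long even = fm * (2 * fm1 - fm);            // F(2m)
            long odd = fm1 * fm1 + fm * fm;             // F(2m+1)
            return (k % 2 == 0)
                    ? new long[]{even, odd}             // k = 2m
                    : new long[]{odd, even + odd};      // k = 2m+1; F(2m+2) = F(2m) + F(2m+1)
        }

        public static long fib(long n) {
            return fibPair(n)[0];
        }

        public static void main(String[] args) {
            System.out.println(fib(6));    // 8
            System.out.println(fib(30));   // 832040
        }
    }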

In summary, Fibonacci numbers form a simple yet powerful case study in optimization. Techniques like memoization and bottom-up computation used here apply to a broad range of problems. The pattern of using previous results to build new results is at the heart of dynamic programming.

To reinforce the concepts learned from Fibonacci, you can try these related LeetCode problems and variations:

  • 70. Climbing Stairs (Easy): Essentially Fibonacci logic for counting ways to climb stairs (1 or 2 steps at a time).

  • 1137. N-th Tribonacci Number (Easy): Similar to Fibonacci but each term is the sum of three preceding numbers. Solve it using the same DP approaches (just extend to three terms).

  • 746. Min Cost Climbing Stairs (Easy): A DP problem where you advance either 1 or 2 steps, accumulating cost. The state transition resembles Fibonacci (minCost[i] depends on minCost[i-1] and minCost[i-2]). This practices thinking in DP terms.

  • 198. House Robber (Medium): Another DP problem with a recurrence relation reminiscent of Fibonacci (robbery of adjacent houses has the relation dp[i] = max(dp[i-1], dp[i-2] + money[i])). It’s not the same sum recurrence, but solving it uses similar ideas of using past computed values.

  • 509. Fibonacci Number (Easy): (This problem itself on LeetCode). Re-implement it in another language or using another approach (like iterative if you did recursive, or vice versa) to solidify your understanding.

  • Advanced: If you’re interested in the math side, try implementing Fibonacci for very large n with modulus (for example, “Fibonacci Modulo” problems or Project Euler problems on Fibonacci). This will push you to use the matrix exponentiation or fast doubling method to handle huge n efficiently.
