Python has a built-in feature that lets you add a least-recently-used cache (LRU cache) to a function automatically. You specify how large the cache should be, then attach it to the function with a decorator.

A decorator in Python appears on the line before a function definition, like this:

@decorator(count=42)
def guide(planet, topic):
    # function body
    return answer
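The `@decorator(count=42)` syntax is just shorthand: Python calls the decorator with its arguments first, then applies the result to the function. Here is a small sketch of that equivalence using the real `functools.lru_cache` (the `cached`/`cached2` names are only illustrative):

```python
import functools

# Decorator form:
@functools.lru_cache(maxsize=32)
def cached(n):
    return n * n

# ...is equivalent to decorating by hand:
def cached2(n):
    return n * n

cached2 = functools.lru_cache(maxsize=32)(cached2)
```

Both `cached` and `cached2` end up as the same kind of wrapped, caching function.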

Wrapping a function in a cache like this is functionally the same as memoization, the core technique behind dynamic programming. To demonstrate the power of an LRU cache, I ran the same code with and without one.

import functools
@functools.lru_cache(maxsize=32)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

def fib2(n):
    if n < 2:
        return n
    return fib2(n-1) + fib2(n-2)

So now we have the same function twice, once with caching and once without. Let's benchmark the two approaches.

>>> import datetime
>>> time = datetime.datetime.now(); [fib(n) for n in range(42)]; datetime.datetime.now() - time
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141]
datetime.timedelta(microseconds=150)

>>> time = datetime.datetime.now(); [fib2(n) for n in range(42)]; datetime.datetime.now() - time
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141]
datetime.timedelta(seconds=142, microseconds=969262)

Using a cache finds the answer in 150 microseconds; without a cache it takes 142 seconds. That's nearly a million times faster, and the gap only widens as n grows: the cached version keeps finishing in the blink of an eye, while the uncached version's running time roughly doubles with each additional term and would soon take months or years.
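If you know the set of distinct arguments is small, you can also pass `maxsize=None` for an unbounded cache that never evicts (Python 3.9+ offers `functools.cache` as a shorthand for this). A sketch with the same Fibonacci function (the `fib_unbounded` name is just illustrative):

```python
import functools

@functools.lru_cache(maxsize=None)  # unbounded: no entries are ever evicted
def fib_unbounded(n):
    if n < 2:
        return n
    return fib_unbounded(n - 1) + fib_unbounded(n - 2)

# With no eviction, each value is computed exactly once, so even large
# inputs return almost instantly.
print(fib_unbounded(200))
```

An unbounded cache trades memory for speed, so reserve it for functions whose argument space you know is bounded.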

The @lru_cache decorator also attaches a cache_info() method to the decorated function for viewing statistics.

>>> fib.cache_info()
CacheInfo(hits=452, misses=222, maxsize=32, currsize=32)
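The decorated function additionally gets a `cache_clear()` method, which empties the cache and resets the counters. A small sketch (the `square` function is just a placeholder):

```python
import functools

@functools.lru_cache(maxsize=32)
def square(n):
    return n * n

square(3)   # first call: a miss, result is computed and stored
square(3)   # second call: a hit, result comes from the cache
info = square.cache_info()
print(info.hits, info.misses)   # -> 1 1

square.cache_clear()            # empty the cache and zero the statistics
print(square.cache_info())      # -> CacheInfo(hits=0, misses=0, maxsize=32, currsize=0)
```

This is handy in tests, or when the underlying data a function depends on has changed and stale cached results must be discarded.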

Before spending a lot of time implementing something fundamental in Python, check the docs for a built-in version. Often that version will have its inner loops written in C for performance, so you'll finish your script sooner and end up with faster code.