Profiling#

Open in Colab

# if you're using colab, then install the required modules
import sys

IN_COLAB = "google.colab" in sys.modules
if IN_COLAB:
    %pip install --quiet line_profiler snakeviz pyinstrument eliot eliot-tree

Profiling analyses your code in terms of speed or memory.

This helps identify where the bottlenecks are (why and where is it slow?) and how much potential there is for improvement.

Note

IPython magic commands are useful, commonly used commands in Jupyter Notebooks.
They have one % at the start for use on a single line, or two %% at the start for use over a whole cell.

timeit#

An IPython magic command that measures the execution time of an expression.

It runs a few times, depending on how intensive the expression is, and returns the average time with a standard deviation. It is useful for benchmarking small code snippets.

Tip

timeit is different to time, which only runs the expression once and so gives a noisier, less reliable estimate. We recommend using timeit.

%timeit range(100)
78.8 ns ± 0.986 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
%%timeit
for x in range(100):
    pass
1.14 µs ± 7.32 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
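The %timeit magic is backed by the standard library's timeit module, which you can use directly in plain Python scripts outside a notebook. A minimal sketch:

```python
import timeit

# number controls how many times the statement runs; the return value is
# the TOTAL time in seconds for all of those runs.
total = timeit.timeit("range(100)", number=1_000_000)
per_loop_ns = total / 1_000_000 * 1e9
print(f"{per_loop_ns:.1f} ns per loop")

# timeit.repeat repeats the whole measurement (like the "7 runs" the
# magic reports); the minimum is usually the most stable estimate.
best = min(timeit.repeat("range(100)", number=1_000_000, repeat=3))
```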

Another option, which breaks the timing down line by line, is line_profiler.

line_profiler#

This is a module that measures the time spent on each line of a function. A simple example:

%load_ext line_profiler
import time


def relax():
    pass


def bottleneck():
    time.sleep(0.001)


def some_function():
    nums = [num for num in range(1_000)]
    bigger_nums = [num**2 for num in nums]
    bottleneck()
    bigger_nums.extend(bigger_nums)
    relax()

The Timer unit at the top of the output is the factor for converting the reported times to seconds.

The table shows the following columns for each line of the function (each row):

  • Line #: The line number.

  • Hits: The number of times that line was called.

  • Time: The total time spent calling that line (in the Timer units).

  • Per Hit: The time spent for one call of that line.

  • % Time: The percentage of total time spent on that line (this is often the most useful column).

  • Line Contents: The source code for that line.

%lprun -f some_function some_function()

Note

In Colab, the results will pop up at the bottom under “Help”, rather than inline like we have for a local Jupyter Notebook.

We can see that most of the time was spent in the bottleneck() call, highlighting where there is potential for improvement.
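For comparison, the standard library's cProfile gives a function-level (rather than line-level) view of the same code. A minimal sketch, with the example functions redefined so it is self-contained:

```python
import cProfile
import io
import pstats
import time


def bottleneck():
    time.sleep(0.001)


def some_function():
    nums = [num for num in range(1_000)]
    bigger_nums = [num**2 for num in nums]
    bottleneck()
    bigger_nums.extend(bigger_nums)


# Trace every function call made while some_function() runs.
profiler = cProfile.Profile()
profiler.enable()
some_function()
profiler.disable()

# Sort the report by cumulative time and show the top five entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```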

pyinstrument#

A statistical profiler of wall-clock time (it records the call stack every 1 ms), which lowers the overhead compared to tracing profilers (like cProfile).

It hides library frames and fast functions, so you can focus on the slow parts of your code.

The output shows a tree of how the function executes, using a traffic light colour legend (i.e., red = slow).

%load_ext pyinstrument
%%pyinstrument
some_function()

Eliot#

Eliot logs performance information: it tracks each function call, its arguments, its return value, and any raised exceptions. Hence, it can be helpful for profiling in real use cases.

Note

Logging has a higher overhead than cProfile.

It works using decorators. A decorator is a function that wraps another function, returning a new function.

So:

@f1
def f2(): 
  pass

Is the same as:

def f2(): 
  pass 

f2 = f1(f2)

For example, here is a decorated function that logs calls to a function:

def log_calls(function):
    def wrapper():
        name = function.__name__

        print(f"Before {name} was called ...")
        function()
        print(f"After {name} was called ...")

    return wrapper

Calling this returns a function:

log_calls(some_function)
<function __main__.log_calls.<locals>.wrapper()>

So, to execute it we add ():

log_calls(some_function)()
Before some_function was called ...
After some_function was called ...

The @ decorator syntax is a nicer way to get this same result: the wrapping function is placed above the wrapped function with the @ symbol:

@log_calls
def some_function_with_logging():
    some_function()
some_function_with_logging()
Before some_function_with_logging was called ...
After some_function_with_logging was called ...
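Note that log_calls, as written, drops any arguments and the return value of the wrapped function. A variant (the name log_calls_with_args is our own) that passes them through, using functools.wraps to preserve the wrapped function's name, is closer in spirit to what eliot.log_call records:

```python
import functools


def log_calls_with_args(function):
    # functools.wraps copies the wrapped function's name and docstring
    # onto the wrapper, so introspection still works.
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        print(f"Before {function.__name__} was called ...")
        result = function(*args, **kwargs)  # forward arguments
        print(f"After {function.__name__} was called ...")
        return result  # forward the return value

    return wrapper


@log_calls_with_args
def add(x, y):
    return x + y


print(add(3, 4))
```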

Okay, so now we can use the @eliot.log_call decorator to profile a function:

import eliot
def bottleneck():
    time.sleep(0.5)
@eliot.log_call
def multiply(x, y):
    bottleneck()
    return x * y
@eliot.log_call
def add(x, y):
    return x + y

This next cell is just to remove the log file, if you’re trying different things:

# remove file if exists
import os

try:
    os.remove("profiling_some_function.log")
except OSError:
    pass

Now, tell eliot where to log the output messages and run the function call:

eliot.to_file(open("profiling_some_function.log", "w+"))
result = multiply(add(3, 4), add(5, 6))

You can now view this log file by calling eliot-tree from a terminal.

This shows that add() was called twice as inputs to multiply().

The input and output arguments are shown too.

The time of the call is shown to the right, where multiply() took 0.5 seconds (due to the time.sleep(0.5) call).

Note

You can call terminal commands from within a Jupyter Notebook cell by adding the prefix !.

!eliot-tree profiling_some_function.log
32877aae-bc3a-466b-97c4-0fff3b952904
└── __main__.add/1 ⇒ started 2024-11-28 09:05:55Z ⧖ 0.000s
    ├── x: 3
    ├── y: 4
    └── __main__.add/2 ⇒ succeeded 2024-11-28 09:05:55Z
        └── result: 7

20d6f425-2f2c-4bb1-ac0d-f612619027e6
└── __main__.add/1 ⇒ started 2024-11-28 09:05:55Z ⧖ 0.000s
    ├── x: 5
    ├── y: 6
    └── __main__.add/2 ⇒ succeeded 2024-11-28 09:05:55Z
        └── result: 11

ada4363d-ddc7-4a3a-b654-6a49d7c79fce
└── __main__.multiply/1 ⇒ started 2024-11-28 09:05:55Z ⧖ 0.501s
    ├── x: 7
    ├── y: 11
    └── __main__.multiply/2 ⇒ succeeded 2024-11-28 09:05:55Z
        └── result: 77

Visualising how Python handles functions can be helpful in understanding them.

Click here to see an example.

Exercises#

Exercise 1

What is profiling and why is it useful?

Exercise 2

What profiling tool times the execution of a cell in a Jupyter Notebook?

Exercise 3

Below are two approaches for filling up an empty NumPy array. Which approach is faster and why?

def fill_array_approach_1(n):
    array = np.empty(1)

    for index in range(n):
        new_point = np.random.rand()
        array = np.append(array, new_point)

    return array
def fill_array_approach_2(n):
    array = np.empty(n)

    for index in range(len(array)):
        new_point = np.random.rand()
        array[index] = new_point

    return array
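One way to check your answer empirically is to time both approaches with timeit (a sketch; note that the comment in the code hints at the "why"):

```python
import timeit

import numpy as np


def fill_array_approach_1(n):
    array = np.empty(1)
    for index in range(n):
        array = np.append(array, np.random.rand())
    return array


def fill_array_approach_2(n):
    array = np.empty(n)
    for index in range(len(array)):
        array[index] = np.random.rand()
    return array


# np.append copies the whole array on every call, while approach 2 writes
# into memory that was allocated once up front.
t1 = timeit.timeit(lambda: fill_array_approach_1(1_000), number=10)
t2 = timeit.timeit(lambda: fill_array_approach_2(1_000), number=10)
print(f"append: {t1:.4f}s, preallocate: {t2:.4f}s")
```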

Exercise 4

Below are two methods that find two numbers from an array of unique integers that add up to a target sum.

If the target can’t be made, then an empty list is returned.

Each element in the array can only be used once.

Which method is faster and why?

def two_sum_brute_force(array, target):
    """
    A brute force approach to find two numbers from an array that add up to a target.

    Steps
    1. Loop through the array twice, adding up pairs of array elements.
    2. Compare each of these sums to the target.
    3. Return the pair that sums to the target, if one exists.
    """
    for index_one in range(len(array)):
        for index_two in range(index_one + 1, len(array)):
            if (
                array[index_one] + array[index_two] == target  # check sum of pair
                and index_one != index_two  # can't use the same element twice
            ):
                return [index_one, index_two]  # return the target pair

    return []  # return an empty list if the target pair isn't found
def two_sum_cache(array, target):
    """
    Use caching to find two numbers from an array that add up to a target.

    Steps
    1. Create a dictionary of cached differences relative to the target sum.
    2. Loop through the array once, adding each index and difference to the cache.
    3. If the required difference of a new array element is already in the cache,
       then you've found a matching pair, which you can return.
    """
    cache_differences = {}
    for index, element in enumerate(array):
        difference = (
            target - element
        )  # calculate the target difference for this element
        if difference in cache_differences:  # if we have the matching pair
            return [index, cache_differences[difference]]  # return the target pair
        cache_differences[element] = index  # if we don't have a match, add to the cache

    return []  # return an empty list if the target pair isn't found

You can test them out using the following array and target:

import numpy as np

array = np.random.choice(1_000, 500, replace=False)
target = 250
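You can also benchmark the two methods directly. A sketch, with the functions repeated so it is self-contained; the seeded generator is an addition, to make the comparison repeatable between runs:

```python
import timeit

import numpy as np


def two_sum_brute_force(array, target):
    for index_one in range(len(array)):
        for index_two in range(index_one + 1, len(array)):
            if array[index_one] + array[index_two] == target:
                return [index_one, index_two]
    return []


def two_sum_cache(array, target):
    cache_differences = {}
    for index, element in enumerate(array):
        difference = target - element
        if difference in cache_differences:
            return [index, cache_differences[difference]]
        cache_differences[element] = index
    return []


# A seeded generator makes the timing comparison repeatable.
rng = np.random.default_rng(0)
array = rng.choice(1_000, 500, replace=False)
target = 250

brute = timeit.timeit(lambda: two_sum_brute_force(array, target), number=10)
cached = timeit.timeit(lambda: two_sum_cache(array, target), number=10)
print(f"brute force: {brute:.4f}s, cache: {cached:.4f}s")
```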

Solutions#

Key Points#

Important

  • Measure the time of cells, functions, and programs to find bottlenecks e.g., using timeit and line_profiler.

  • Visualise the profiled code e.g., using SnakeViz and pyinstrument.

  • Log profiling information e.g., using Eliot.

  • Consider how fast the code could go e.g., Big O notation.

Further information#

Other options#

  • pyheat

    • Combines pprofile and matplotlib to show a heatmap of the profiled code.

  • VizTracer

    • A low-overhead logging/debugging/profiling tool that can trace and visualise your Python code execution.

Memory profiling#

  • memory_profiler

    • Measures the memory used by a function, at its peak and the overall increment.

  • memray

    • Memray is a memory profiler for Python, generating several different types of reports to help you analyse the captured memory usage data.

Resources#