Data Structures, Algorithms, and Libraries#

Open in Colab

# if you're using colab, then install the required modules
import sys

IN_COLAB = "google.colab" in sys.modules
if IN_COLAB:
    %pip install --quiet algorithms

Python comes with a standard library.

This includes:

  • Built-in functions, e.g., len.

  • Built-in data types, e.g., lists, dictionaries, and tuples.

  • Modules, e.g., math.

And lots more. Much of this is implemented in optimised C (statically typed and compiled), so it is often faster than reimplementing the same functionality yourself, and it provides standardised solutions for many problems that occur in everyday programming.

Built-in functions#

len#

Return the length (the number of items) of an object. The argument may be a sequence (e.g., list) or a collection (e.g., dictionary).

Tip

Use len rather than counting the items in an object in a loop.

nums = [num for num in range(1_000_000)]
%%timeit
count = 0
for num in nums:  # time O(n)
    count += 1
26.7 ms ± 249 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
len(nums)  # time O(1)
47.1 ns ± 1.5 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

Data types/structures#

Lists#

A sequence of objects.

Tip

Append to lists, rather than concatenating.

Lists over-allocate memory as they grow, so appending fills this spare capacity in amortised O(1) time (long-term average), while concatenating copies the existing elements into a new list each time, O(n).

%%timeit
my_list = []
for num in range(1_000):
    my_list += [num]  # time O(n)
44.4 µs ± 52.3 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
%%timeit
my_list = []
for num in range(1_000):
    my_list.append(num)  # time O(1)
36.2 µs ± 120 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
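
The over-allocation itself can be seen by watching sys.getsizeof as a list grows (the exact sizes are a CPython implementation detail, so treat the numbers as illustrative):

my_list = []
sizes = []
for num in range(20):
    my_list.append(num)
    sizes.append(sys.getsizeof(my_list))  # size in bytes, including spare capacity
print(sizes)  # the size jumps, then stays flat while the spare capacity is filled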

Numpy arrays#

Like a list, but optimised for numerical data. They are less flexible than lists, as they cannot hold heterogeneous data. However, they are generally faster than lists for computation, because element-wise operations and mathematical functions are applied across the whole array in C. They also typically have a smaller memory overhead.

import numpy as np
%%timeit
my_list = range(10000)
my_sum = sum(my_list)
132 µs ± 263 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
%%timeit
my_array = np.arange(10000)
my_sum = my_array.sum()  # vectorised sum, implemented in C
3.71 µs ± 11.3 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
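
Element-wise operations are vectorised in the same way; for example, squaring every element (a small illustration):

my_array = np.arange(10_000)
squares_array = my_array ** 2  # vectorised: applied to the whole array in C
squares_list = [num ** 2 for num in range(10_000)]  # pure-Python loop equivalent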

Dictionaries#

A set of key:value pairs, where the keys are unique and immutable indices.

Tip

Use dictionaries as look-ups, as they’re fast to search, O(1).

Example from Luciano Ramalho, Fluent Python: Clear, Concise, and Effective Programming, O’Reilly Media, Inc., 2015.

haystack_list = np.random.uniform(low=0, high=100, size=(1_000_000))
haystack_dict = {key: value for key, value in enumerate(haystack_list)}
needles = [0.1, 50.1, 99.1]
%%timeit
needles_found = 0
for needle in needles:
    if needle in haystack_list:  # time O(n) within list
        needles_found += 1
416 µs ± 706 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%%timeit
needles_found = 0
for needle in needles:
    if needle in haystack_dict:  # time O(1) within dict
        needles_found += 1
142 ns ± 1.38 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

Tip

Reduce repeated calculations with caching.

For example, use caching with the Fibonacci sequence (each number is the sum of the two preceding ones starting from 0 and 1 e.g. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34).

def fibonacci(n):  # time O(2^n) as 2 calls to the function n times
    # (a balanced tree of repeated calls)
    if n == 0 or n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
%timeit fibonacci(20)
1.11 ms ± 4.41 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
def fibonacci_with_caching(n, cache={0: 0, 1: 0, 2: 1}):  # time O(n) as 1 call per n
    if n in cache:
        return cache[n]
    else:
        cache[n] = fibonacci_with_caching(n - 1, cache) + fibonacci_with_caching(
            n - 2, cache
        )
        return cache[n]
%timeit fibonacci_with_caching(20, cache={0: 0, 1: 0, 2: 1})
3.94 µs ± 34 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
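
The standard library also provides functools.lru_cache, which memoises a function's results without managing the cache dictionary by hand; a minimal sketch using the same indexing as above:

from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, so each n is computed only once
def fibonacci_lru(n):
    if n == 0 or n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        return fibonacci_lru(n - 1) + fibonacci_lru(n - 2)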

Question 1

Which of the following uses less memory and how can you check?

  • np.float16

  • np.float32

  • np.float64

Tuples#

Tuples are similar to dictionary keys, in that they are immutable, and similar to lists, in that their elements are indexed in order.

If mutability is not required, they have a memory advantage over lists, as they don’t over-allocate memory to allow for dynamic resizing and appending:

sys.getsizeof(list(iter(range(10))))
152
sys.getsizeof(tuple(iter(range(10))))
120

This means they are also faster to instantiate:

%%timeit
my_list = [11, 12, 99, 50, 2030]
37.9 ns ± 0.649 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
%%timeit
my_tuple = (11, 12, 99, 50, 2030)
11 ns ± 0.319 ns per loop (mean ± std. dev. of 7 runs, 100,000,000 loops each)

Modules#

math#

This module provides access to the mathematical functions defined by the C standard.

So, you could create your own function to calculate the hypotenuse of a triangle:

def hypotenuse(x, y):
    # scale by the larger magnitude to avoid overflow/underflow when squaring
    x = abs(x)
    y = abs(y)
    t = min(x, y)
    x = max(x, y)
    t = t / x
    return x * (1 + t * t) ** 0.5
%timeit hypotenuse(3.0, 4.0)
349 ns ± 0.619 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

However, math already has this implemented and optimised:

import math
%timeit math.hypot(3.0, 4.0)
77.3 ns ± 0.169 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

Algorithms#

An algorithm is the instructions (/recipe) to solve a problem.

Many existing libraries are already optimised (computationally and algorithmically). For example, the algorithms library has minimal examples of data structures and algorithms in Python, e.g., breadth-first search, depth-first search, linked lists, etc.

Sorting#

unsorted_array = np.random.rand(1_000)

Selection sort#

Time O(n^2), space O(1)

  1. Have two arrays: one unsorted (original) and one sorted (can do in place to avoid extra memory).

  2. Find the smallest number in the unsorted array and add it to the sorted array.

  3. Repeat step 2 until the final index is the largest number (i.e. sorted array).
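
In code, these steps look roughly like the following in-place sketch (illustrative only; the timing below uses the algorithms library's implementation):

def selection_sort_sketch(arr):
    n = len(arr)
    for i in range(n):
        # find the index of the smallest remaining element
        smallest = i
        for j in range(i + 1, n):
            if arr[j] < arr[smallest]:
                smallest = j
        # swap it onto the end of the sorted prefix
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr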

from algorithms.sort import selection_sort
%timeit selection_sort(unsorted_array)
73.7 ms ± 662 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Merge sort#

Time O(n log n), space O(n) or O(n log n), depending on the implementation

  1. Divide the array in half.

  2. Then recursively apply:
    a. Step 1 to each half, until you hit the base case where a half has length 1.
    b. Merge two length-1 arrays into a sorted array of length 2.
    c. Repeat step b, merging the sorted halves all the way back up.
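
A minimal recursive sketch of these steps (illustrative only; the timing below uses the algorithms library's implementation):

def merge_sort_sketch(arr):
    if len(arr) <= 1:  # base case: length 0 or 1 is already sorted
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort_sketch(arr[:mid])  # recursively sort each half
    right = merge_sort_sketch(arr[mid:])
    # merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged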

from algorithms.sort import merge_sort
%timeit merge_sort(unsorted_array)
3.28 ms ± 6.36 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Timsort#

Time O(n log n) worst case, O(n) best case; space O(n)

  • Timsort is the default implementation of sorting in Python.

  • It combines merge sort with insertion sort (where each element is inserted into its correct position in an already-sorted run).

  • It takes advantage of runs of consecutive ordered elements to reduce comparisons (relative to merge sort).

  • It merges when runs match a specified criterion.

  • The runs have a minimum size (attained by insertion sort, if needed).

sorted(my_iterable)

  • Creates a new sorted list.

  • Works for any iterable.

%timeit sorted(unsorted_array)
45.5 µs ± 261 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

my_list.sort()

  • In-place (lists only; NumPy arrays provide their own in-place ndarray.sort(), used below).

%timeit unsorted_array.sort()
5.26 µs ± 2.67 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

Exercises#

Exercise 1

What data structure would be suitable for finding or removing duplicate values?

a. List
b. Dictionary
c. Queue
d. Set

Test out your answer on the following array:

array = np.random.choice(100, 80)

Are there any other ways of doing it?

Exercise 2

In the exercise from the profiling lesson, we saw an example of two_sum, i.e., finding two numbers from an array of unique integers that add up to a target sum.

What would be a good approach for generalising this sum of two numbers to three, four, or n numbers?

Solutions#

Key Points#

Important

  • Make use of the built-in functions, e.g., use len rather than counting the items in an object in a loop.

  • Use appropriate data structures e.g., append to lists rather than concatenating, use dictionaries as fast to search look-ups, cache results in dictionaries to reduce repeated calculations.

  • Make use of the standard library (optimised in C) e.g., the math module.

  • See whether there is an algorithm or library that already optimally solves your problem e.g., faster sorting algorithms.

Further information#

Other options#

  • Generators save memory by yielding only the next iteration (see the sketch after this list).

  • For NetCDFs, using engine='h5netcdf' with xarray can be faster than the default engine='netcdf4'.

  • Compression

  • Chunking

    • If you need all of the data, you can load and process it in chunks to reduce the amount held in memory: Zarr for arrays, Pandas for dataframes.

  • Indexing

    • If you need a subset of the data, you can index (or multi-index) it to reduce memory use and speed up queries: Pandas, SQLite.

  • Suitable data types for parallel computing

    • Parquet creates efficient tabular data (e.g. dataframes), useful for parallel computing.

    • Zarr creates compressed, chunked, N-dimensional arrays, designed for use in parallel computing.
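
As a brief illustration of the generator point above (a minimal sketch):

lazy_squares = (num ** 2 for num in range(1_000_000))  # generator: values produced one at a time
eager_squares = [num ** 2 for num in range(1_000_000)]  # list: all values stored up front
sys.getsizeof(lazy_squares), sys.getsizeof(eager_squares)  # the generator stays small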

Resources#