10.2. Unit Testing#


import sys
from pathlib import Path

current = Path.cwd()
for parent in [current, *current.parents]:
    if (parent / '_config.yml').exists():
        project_root = parent  # ← Add project root, not chapters
        break
else:
    project_root = Path.cwd().parent.parent

sys.path.insert(0, str(project_root))

from shared import thinkpython, diagram, jupyturtle
from shared.download import download

# Register as top-level modules so direct imports work in subsequent cells
sys.modules['thinkpython'] = thinkpython
sys.modules['diagram'] = diagram
sys.modules['jupyturtle'] = jupyturtle

In the previous section you wrote code to handle errors at runtime. Unit tests are how you verify that code actually works.

“Unit” means the smallest testable piece of code — typically a single function or method. The idea is to test each piece in isolation, independent of the rest of the system.

Testing can be seen as a spectrum:

| Type | Scope |
|------|-------|
| Unit test | Single function or method |
| Integration test | Multiple units working together |
| End-to-end test | Entire application flow |

As you can see, unit testing is the foundation: if every unit works correctly in isolation, you have much more confidence that the whole system will work when assembled.

Unit testing is pre-ship verification: it runs during development to confirm your code behaves correctly before anyone uses it. It doesn’t run in production at all. Unit testing answers: “does my code actually do what I think it does?”

Unit testing is fundamentally a design and verification technique: you write tests that define expected behavior, then run them to confirm your code is correct. You do this before you submit it, share it, or build more code on top of it.

TDD (Test-Driven Development) takes this further: write the tests first, then write the code to make them pass. TDD is still practiced and respected, but it is no longer as dominant as it was in the mid-2000s, when it was close to dogma in some circles (particularly parts of the Agile community). It remains strong in teams building libraries or APIs with well-defined contracts, but it is genuinely hard to do well, and many developers practice “test alongside” or “test after” with good results. AI-assisted coding has further disrupted the workflow: people are generating code faster than TDD’s write-test-first cadence supports.

Key benefits:

  • Catch bugs before users do

  • Prevent regressions when you change code

  • Tests serve as runnable documentation

Python has three main testing tools:

| Tool | Style | Best for |
|------|-------|----------|
| pytest | Plain assert, no class needed | New projects; industry standard |
| unittest | Class-based (TestCase) | Legacy codebases; built into the standard library |
| doctest | Examples embedded in docstrings | Illustrating function behavior in docstrings |

We’ll cover them in that order, starting with the one you’re most likely to use professionally.

10.2.1. pytest#

pytest is the most widely used testing framework in Python. It requires no boilerplate classes. In a typical project, you prepare two files:

  1. Your actual code (e.g., calc.py)

  2. Your test functions (e.g., test_calc.py)

You

  • write plain functions whose names start with test_, the naming convention pytest uses to discover tests,

  • use plain assert statements, and

  • run everything with pytest on the command line.

Install

The pytest module needs to be installed in the environment. For that, use pip in the terminal with the virtual environment activated.

pip install pytest

10.2.1.1. Inline Assertion Checks#

Before running pytest, let’s look at how assert can be used for testing on its own. Note the similarity between manual asserts and the pytest framework.

Here we demonstrate the test structure using inline assertion checks:

### functions (would go in calc.py later, e.g., in the project root)
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

### pytest-style test functions (would go in test_calc.py later)
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, -1) == -2

def test_subtract():
    assert subtract(10, 4) == 6

# Run them manually (simulating pytest discovery)
for test_fn in [test_add_positive, test_add_negative, test_subtract]:
    test_fn()
    print(f'{test_fn.__name__} PASSED')
test_add_positive PASSED
test_add_negative PASSED
test_subtract PASSED

10.2.1.2. pytest File Convention#

Save the test functions to a file named test_*.py; pytest discovers and collects them automatically when run.

### the test_calc.py file
def test_add():
    assert 1 + 1 == 2

def test_divide():
    assert 10 / 2 == 5.0

def test_divide_by_zero():
    import pytest
    with pytest.raises(ZeroDivisionError):
        1 / 0

Run from the terminal:

pytest test_calc.py -v

You should see output like the example below, reporting:

  • 3 items collected

  • each test PASSED

(.venv) [user]@[host]:~/workspace/py/[path]$ pytest test_calc.py -v
====================== test session starts ======================
platform darwin -- Python 3.13.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/[user]/workspace/py/.venv/bin/python3.13
cachedir: .pytest_cache
rootdir: /Users/[user]/workspace/py/[path]
plugins: anyio-4.11.0
collected 3 items                                               

test_calc.py::test_add PASSED                             [ 33%]
test_calc.py::test_divide PASSED                          [ 66%]
test_calc.py::test_divide_by_zero PASSED                  [100%]

======================= 3 passed in 0.01s =======================
(.venv) [user]@[host]:~/workspace/py/[path]$ 

10.2.1.3. pytest in Practice#

Often, you will be testing a module rather than Python’s built-in arithmetic operators as above. For that, keep the code file and the test file separate. This is best for production code, development pipelines, and anything you’re distributing or maintaining long-term.

The key is to import your module into the test file. A module is just a file, so you can import its functions individually.

### the code module file: calc.py
def add(a, b):
    return a + b

def divide(a, b):
    return a / b

Import the module (calc.py) in the test file. Note that you import calc, without the file extension.

### the test_calc.py file

from calc import add, divide

def test_add():
    assert add(1, 1) == 2

def test_divide():
    assert divide(10, 2) == 5.0

def test_divide_by_zero():
    import pytest
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)

From the terminal, run

python -m pytest test_calc.py -v

or

pytest test_calc.py -v

and you should see output like:

(.venv) [user]@[host]:~/workspace/py$ python -m pytest test_calc.py -v
====================== test session starts ======================
platform darwin -- Python 3.13.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/[user]/workspace/py/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/[user]/workspace/py
plugins: anyio-4.11.0
collected 3 items                                               

test_calc.py::test_add PASSED                             [ 33%]
test_calc.py::test_divide PASSED                          [ 66%]
test_calc.py::test_divide_by_zero PASSED                  [100%]

======================= 3 passed in 0.02s =======================
(.venv) [user]@[host]:~/workspace/py$ 

10.2.1.4. pytest as Subprocess#

You may also run pytest inside the Jupyter notebook, or other environments such as conda, by writing the test file to disk and invoking pytest as a subprocess.

  1. Write a .py test file to disk using normal file I/O

  2. Run pytest on it via Python’s subprocess module

  3. Capture and display the output back in the notebook

import subprocess, pathlib
# %pip install pytest  # ensure pytest is available in the environment
### same as the pip line above, but using subprocess to avoid issues in some notebook environments
# subprocess.run(["pip", "install", "pytest"], capture_output=True, text=True)


# 1. Write test file to disk
pathlib.Path("test_temp.py").write_text("""
def test_add():
    assert 1 + 1 == 2

def test_fail():
    assert 1 + 1 == 3
""")

# 2. Invoke pytest as a subprocess
result = subprocess.run(
    # ["pytest", "test_temp.py", "-v"],
    ["python", "-m", "pytest", "test_temp.py", "-v"],
    capture_output=True, text=True
)

# 3. Display output in the notebook
print(result.stdout)
============================= test session starts ==============================
platform darwin -- Python 3.13.7, pytest-9.0.2, pluggy-1.6.0 -- /Users/[user]/workspace/py/.venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/[user]/workspace/py/chapters/11-testing
plugins: anyio-4.11.0
collecting ... collected 2 items

test_temp.py::test_add PASSED                                            [ 50%]
test_temp.py::test_fail FAILED                                           [100%]

=================================== FAILURES ===================================
__________________________________ test_fail ___________________________________

    def test_fail():
>       assert 1 + 1 == 3
E       assert (1 + 1) == 3

test_temp.py:6: AssertionError
=========================== short test summary info ============================
FAILED test_temp.py::test_fail - assert (1 + 1) == 3
========================= 1 failed, 1 passed in 0.02s ==========================

10.2.1.5. Jupyter Inline Testing#

In addition to running the pytest test file in the terminal, you can also run the tests inside the Jupyter notebook using the ipytest plugin.

Note that %%ipytest discovers and calls the test functions automatically within that cell, which is why in the example below you don’t see the test functions being called. The workflow is:

  1. User runs the cell with %%ipytest

  2. ipytest scans the cell for any function whose name starts with test_

  3. It calls all of them via pytest under the hood

  4. Reports pass/fail

### install ipytest for in-notebook testing
### live code may require you to install it again: ModuleNotFoundError: No module named 'ipytest'
# %pip install ipytest -q
import ipytest
ipytest.autoconfig()
%%ipytest

def test_add_positive():
    assert add(2, 3) == 5
    
def test_add_negative():
    assert add(-1, -1) == -2
.
.
                                                                                           [100%]
2 passed in 0.00s
### Exercise: Write pytest-style test functions for is_even(n)
#   is_even(n) returns True if n is even, False otherwise.
#   Write three test functions:
#     1. test_is_even_positive: is_even(4) is True; is_even(3) is False
#     2. test_is_even_zero: is_even(0) is True
#     3. test_is_even_negative: is_even(-2) is True; is_even(-1) is False
#   Use plain assert statements (no class, no self).
#   After writing them, call each one in a loop and print PASSED.
### Your code starts here.



### Your code ends here.


### Solution
def is_even(n):
    return n % 2 == 0

def test_is_even_positive():
    assert is_even(4) == True
    assert is_even(3) == False

def test_is_even_zero():
    assert is_even(0) == True

def test_is_even_negative():
    assert is_even(-2) == True
    assert is_even(-1) == False

for fn in [test_is_even_positive, test_is_even_zero, test_is_even_negative]:
    fn()
    print(f'{fn.__name__} PASSED')
test_is_even_positive PASSED
test_is_even_zero PASSED
test_is_even_negative PASSED

10.2.2. unittest#

Python’s standard library includes unittest, a class-based testing framework. You’ll encounter it in older codebases and it’s worth knowing, but for new projects pytest is almost always the better choice.

10.2.2.1. pytest vs unittest#

The table below compares the two frameworks across the features you’re most likely to care about.

| Feature | unittest | pytest |
|---------|----------|--------|
| Requires class? | Yes (TestCase) | No |
| Assertion style | self.assertEqual(a, b) | assert a == b |
| Discovery | Manual or unittest.main() | Automatic with pytest command |
| Output | Minimal | Detailed, colored diff |
| Fixtures | setUp/tearDown | @pytest.fixture |
| Ecosystem | Standard library | Third-party, widely adopted |

Both are valid. unittest is built-in; pytest is preferred in industry because its plain assert style produces clearer failure messages.
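The fixture row deserves a quick sketch. @pytest.fixture is pytest’s counterpart to setUp: a plain function that pytest runs before each test that names it as a parameter, injecting the return value. The sample data below is made up for illustration:

```python
import pytest

@pytest.fixture
def sample_numbers():
    # pytest calls this before each test that lists `sample_numbers` as a
    # parameter and injects the return value; each test gets a fresh list.
    return [3, 1, 4, 1, 5]

def test_sorted_copy(sample_numbers):
    assert sorted(sample_numbers) == [1, 1, 3, 4, 5]

def test_max(sample_numbers):
    assert max(sample_numbers) == 5
```

Saved in a test_*.py file and run with pytest, the fixture is resolved automatically; there is no class and no self.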

Note

unittest assertions vs. bare assert

unittest’s assertX methods (assertEqual, assertRaises, assertIsNone, etc.) are ordinary method calls — they cannot be disabled. Bare Python assert statements can be silently disabled by running the interpreter with the -O (optimize) flag, which strips them from the bytecode. In practice you never run your test suite with -O, so this rarely matters; but it is why assert is discouraged for input validation in production code while being perfectly fine inside test functions. pytest recaptures and enriches bare assert output through its own import hook, so you still get detailed failure messages even though the underlying statement is plain assert.
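The -O behavior in the note above is easy to demonstrate. A small sketch that runs the same one-line program as a subprocess, with and without the flag:

```python
import subprocess
import sys

code = "assert 1 + 1 == 3, 'bad math'; print('assert was skipped')"

# Normal run: the assert fires and the process exits with an error.
normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
print("normal:", "AssertionError" if normal.returncode != 0 else normal.stdout.strip())

# With -O, asserts are stripped from the bytecode, so execution falls
# through to the print.
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)
print("-O:    ", optimized.stdout.strip())
```

This is exactly why test suites are never run with -O, and why assert is reserved for tests rather than production input validation.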

In the sample code below, part 2 is how you use unittest. You:

  • subclass unittest.TestCase,

  • write test_* methods, and

  • use self.assertX() helper methods for assertions.

The signature is self.assertEqual(first, second), which checks that first == second. If not, the test fails with a message showing both values.

In part 3, we start with import io because unittest.TextTestRunner writes its output to a file-like stream. By default that goes to stderr, which in some notebook environments doesn’t display inline with the cell output.

io.StringIO() is an in-memory text buffer that acts like a file; so the runner writes into it, and then print(buf.getvalue()) sends the captured text to the notebook’s normal output.

Without io to create buf, you’d pass stream=sys.stderr (the default), which may appear in a different output area or not at all depending on the environment.
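If io.StringIO is new to you, here is a minimal sketch of it acting as a file:

```python
import io

buf = io.StringIO()            # in-memory text buffer with a file-like API
buf.write("line one\n")        # writes go to memory, not to a terminal
buf.write("line two\n")
print(buf.getvalue(), end="")  # retrieve everything written so far
```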

import unittest

# Example: Testing a simple function with unit tests

### part 1: The code we want to test
def calculate_grade(score, total_points):
    """Calculate percentage grade from score and total points."""
    if total_points <= 0:
        raise ValueError("Total points must be positive")
    if score < 0:
        raise ValueError("Score cannot be negative") 
    if score > total_points:
        raise ValueError("Score cannot exceed total points")
    
    percentage = (score / total_points) * 100
    return round(percentage, 2)

def get_letter_grade(percentage):
    """Convert percentage to letter grade."""
    if percentage >= 90:
        return 'A'
    elif percentage >= 80:
        return 'B'
    elif percentage >= 70:
        return 'C'
    elif percentage >= 60:
        return 'D'
    else:
        return 'F'

### part 2: Unit tests for our functions
class TestGradeCalculation(unittest.TestCase):
    """Test cases for grade calculation functions."""
    
    def test_calculate_grade_normal_cases(self):
        """Test normal grade calculations."""
        self.assertEqual(calculate_grade(85, 100), 85.0)
        self.assertEqual(calculate_grade(95, 100), 95.0)
        self.assertEqual(calculate_grade(50, 100), 50.0)
        self.assertEqual(calculate_grade(0, 100), 0.0)
    
    def test_calculate_grade_edge_cases(self):
        """Test edge cases for grade calculation."""
        self.assertEqual(calculate_grade(100, 100), 100.0)
        self.assertEqual(calculate_grade(87, 92), 94.57)
    
    def test_calculate_grade_invalid_input(self):
        """Test that invalid inputs raise appropriate exceptions."""
        with self.assertRaises(ValueError):
            calculate_grade(85, 0)  # Zero total points
        
        with self.assertRaises(ValueError):
            calculate_grade(-10, 100)  # Negative score
        
        with self.assertRaises(ValueError):
            calculate_grade(110, 100)  # Score exceeds total
    
    def test_letter_grade_conversion(self):
        """Test letter grade assignments."""
        self.assertEqual(get_letter_grade(95), 'A')
        self.assertEqual(get_letter_grade(85), 'B')
        self.assertEqual(get_letter_grade(75), 'C')
        self.assertEqual(get_letter_grade(65), 'D')
        self.assertEqual(get_letter_grade(55), 'F')
        
        # Test boundary conditions
        self.assertEqual(get_letter_grade(90), 'A')
        self.assertEqual(get_letter_grade(89.9), 'B')

### part 3: Run the tests and display results
import io
buf = io.StringIO()
runner = unittest.TextTestRunner(stream=buf, verbosity=2)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestGradeCalculation))
print(buf.getvalue())
if result.wasSuccessful():
    print("All tests passed.")
test_calculate_grade_edge_cases (__main__.TestGradeCalculation.test_calculate_grade_edge_cases)
Test edge cases for grade calculation. ... ok
test_calculate_grade_invalid_input (__main__.TestGradeCalculation.test_calculate_grade_invalid_input)
Test that invalid inputs raise appropriate exceptions. ... ok
test_calculate_grade_normal_cases (__main__.TestGradeCalculation.test_calculate_grade_normal_cases)
Test normal grade calculations. ... ok
test_letter_grade_conversion (__main__.TestGradeCalculation.test_letter_grade_conversion)
Test letter grade assignments. ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

All tests passed.

10.2.2.2. When tests fail#

The example above has tests that all pass, but exposing failures is the whole point. Here’s a function with a deliberate bug: it refuses to raise a negative base to a power, returning a string instead of a number. The test suite will expose it.

import io

def my_power(base, exponent):
    """Return base raised to exponent. Bug: rejects negative bases."""
    if base < 0:
        return "Error: negative base"   # ← bug: should just compute it
    return base ** exponent

class TestPower(unittest.TestCase):
    def test_positive_base(self):
        self.assertEqual(my_power(2, 3), 8)

    def test_negative_base_even_exponent(self):
        self.assertEqual(my_power(-2, 2), 4)   # (-2)² = 4, not an error string

    def test_negative_base_odd_exponent(self):
        self.assertEqual(my_power(-3, 3), -27)

# Run and print output so the failure is visible in the notebook
buf = io.StringIO()
runner = unittest.TextTestRunner(stream=buf, verbosity=2)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestPower))
print(buf.getvalue())
if result.wasSuccessful():
    print("All tests passed.")
else:
    print(f"{len(result.failures)} failure(s), {len(result.errors)} error(s)")
test_negative_base_even_exponent (__main__.TestPower.test_negative_base_even_exponent) ... FAIL
test_negative_base_odd_exponent (__main__.TestPower.test_negative_base_odd_exponent) ... FAIL
test_positive_base (__main__.TestPower.test_positive_base) ... ok

======================================================================
FAIL: test_negative_base_even_exponent (__main__.TestPower.test_negative_base_even_exponent)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/var/folders/6t/bfp06fh96wn60n_mjtxmbhfm0000gn/T/ipykernel_5882/2752416876.py", line 14, in test_negative_base_even_exponent
    self.assertEqual(my_power(-2, 2), 4)   # (-2)² = 4, not an error string
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
AssertionError: 'Error: negative base' != 4

======================================================================
FAIL: test_negative_base_odd_exponent (__main__.TestPower.test_negative_base_odd_exponent)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/var/folders/6t/bfp06fh96wn60n_mjtxmbhfm0000gn/T/ipykernel_5882/2752416876.py", line 17, in test_negative_base_odd_exponent
    self.assertEqual(my_power(-3, 3), -27)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 'Error: negative base' != -27

----------------------------------------------------------------------
Ran 3 tests in 0.000s

FAILED (failures=2)

2 failure(s), 0 error(s)

10.2.2.3. unittest conventions#

  • Name test methods descriptively: test_divide_by_zero_raises_exception tells you exactly what’s being checked; test_division doesn’t.

  • One behavior per test method: instead of one test_grade_calculation that checks everything, write test_grade_returns_percentage, test_grade_raises_on_negative_score, etc. Narrow tests pinpoint exactly what broke.

  • Use setUp for shared state: if multiple tests need the same object, create it in setUp rather than repeating the construction in every method.
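To make the setUp convention concrete, here is a minimal sketch; the Cart class is hypothetical, invented only for this example:

```python
import io
import unittest

class Cart:
    # Hypothetical class used only to illustrate setUp.
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

class TestCart(unittest.TestCase):
    def setUp(self):
        # Runs before EACH test method, so every test gets a fresh cart.
        self.cart = Cart()
        self.cart.add("pen", 1.50)

    def test_total(self):
        self.assertEqual(self.cart.total(), 1.50)

    def test_add_second_item(self):
        self.cart.add("pad", 3.00)
        self.assertEqual(self.cart.total(), 4.50)

buf = io.StringIO()
result = unittest.TextTestRunner(stream=buf, verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(TestCart))
print("OK" if result.wasSuccessful() else "FAILED")
```

Because setUp rebuilds the cart each time, test_add_second_item cannot leak its extra item into test_total, no matter which runs first.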

### Exercise: Write a TestCase for to_celsius(f)
#   to_celsius(f) converts Fahrenheit to Celsius: (f - 32) * 5 / 9
#   Write a class TestToCelsius(unittest.TestCase) with three test methods:
#     1. test_boiling: to_celsius(212) should equal 100.0
#     2. test_freezing: to_celsius(32) should equal 0.0
#     3. test_body_temp: to_celsius(98.6) should be approximately 37.0
#        (hint: use assertAlmostEqual with places=1)
#   Run using TextTestRunner with io.StringIO and verbosity=2.
### Your code starts here.



### Your code ends here.


### Solution
import io

def to_celsius(f):
    return (f - 32) * 5 / 9

class TestToCelsius(unittest.TestCase):
    def test_boiling(self):
        self.assertEqual(to_celsius(212), 100.0)

    def test_freezing(self):
        self.assertEqual(to_celsius(32), 0.0)

    def test_body_temp(self):
        self.assertAlmostEqual(to_celsius(98.6), 37.0, places=1)

buf = io.StringIO()
runner = unittest.TextTestRunner(stream=buf, verbosity=2)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestToCelsius))
print(buf.getvalue())
if result.wasSuccessful():
    print('All tests passed.')
test_body_temp (__main__.TestToCelsius.test_body_temp) ... ok
test_boiling (__main__.TestToCelsius.test_boiling) ... ok
test_freezing (__main__.TestToCelsius.test_freezing) ... ok

----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK

All tests passed.

10.2.3. Doctests#

A doctest is a test embedded directly in a function’s docstring. You write an example interaction using >>> (the Python interactive prompt), followed by the expected output. Python’s doctest module finds and runs these examples automatically.

Doctests serve two purposes at once: they document how a function works and verify that it actually works that way. They’re best for simple input/output examples, not a replacement for pytest or unittest when you need complex setup or many edge cases.

def uses_any(word, letters):
    """Checks if a word uses any of a list of letters.
    
    >>> uses_any('banana', 'aeiou')
    True
    >>> uses_any('apple', 'xyz')
    False
    """
    for letter in word.lower():
        if letter in letters.lower():
            return True
    return False

Each test begins with >>>, which is used as a prompt in some Python environments to indicate where the user can type code. In a doctest, the prompt is followed by an expression, usually a function call. The following line indicates the value the expression should have if the function works correctly.

In the first example, 'banana' uses 'a', so the result should be True. In the second example, 'apple' does not use any of 'xyz', so the result should be False.

To run these tests, we have to import the doctest module and run a function called run_docstring_examples. To make this function easier to use, I wrote the following function, which takes a function object as an argument.

from doctest import run_docstring_examples

def run_doctests(func):
    run_docstring_examples(func, globals(), name=func.__name__)

Now we can test uses_any like this.

run_doctests(uses_any)

run_doctests finds the expressions in the docstring and evaluates them. If the result is the expected value, the test passes. Otherwise it fails.

If all tests pass, run_doctests displays no output – in that case, no news is good news. To see what happens when a test fails, here’s an incorrect version of uses_any.

def uses_any_incorrect(word, letters):
    """Checks if a word uses any of a list of letters.
    
    >>> uses_any_incorrect('banana', 'aeiou')
    True
    >>> uses_any_incorrect('apple', 'xyz')
    False
    """
    for letter in word.lower():
        if letter in letters.lower():
            return True
        else:
            return False  

And here’s what happens when we test it.

run_doctests(uses_any_incorrect)
**********************************************************************
File "__main__", line 4, in uses_any_incorrect
Failed example:
    uses_any_incorrect('banana', 'aeiou')
Expected:
    True
Got:
    False

The output includes the example that failed, the value the function was expected to produce, and the value the function actually produced.

If you are not sure why this test failed, you’ll have a chance to debug it as an exercise.

### Exercise: Fix the Failing Doctest
#   `uses_any_incorrect` has a bug that causes one doctest to fail.
#   1. Trace through what happens when the function is called with ('apple', 'xyz').
#   2. Write a corrected version called `uses_any_fixed`.
#   3. Include the same two doctest examples and run `run_doctests(uses_any_fixed)`.
### Your code starts here.



### Your code ends here.


### Solution
def uses_any_fixed(word, letters):
    """Checks if a word uses any of a list of letters.

    >>> uses_any_fixed('banana', 'aeiou')
    True
    >>> uses_any_fixed('apple', 'xyz')
    False
    """
    for letter in word.lower():
        if letter in letters.lower():
            return True
    return False  # moved outside the loop — was the bug in uses_any_incorrect

run_doctests(uses_any_fixed)
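run_doctests checks one function at a time. The doctest module can also run every example in a module at once, either from the command line with python -m doctest mymodule.py -v or by calling doctest.testmod(). As a sketch, the lower-level finder/runner API reports counts for a single function (the double function is made up):

```python
import doctest

def double(n):
    """Return n doubled.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return n * 2

# DocTestFinder extracts the examples; DocTestRunner executes them and
# keeps running totals in .tries and .failures.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(double, name="double"):
    runner.run(test)
print(f"{runner.tries} examples run, {runner.failures} failed")
```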

10.2.4. Advanced pytest Techniques#

The sections below cover features you’ll reach for on real projects: isolating code from external dependencies, running the same test logic across many inputs, and measuring how much of your code your tests actually exercise.

10.2.4.1. Mocking Dependencies#

Tests should run without network calls, database access, or file I/O. Mocking replaces a real object with a controlled stand-in so tests are fast, isolated, and deterministic.

unittest.mock.patch is the standard tool. It temporarily replaces the target for the duration of the with block and restores the original afterward.

The example below patches a simple function (get_price) so the test never makes a real network call:

from unittest.mock import patch

# Imagine this lives in a pricing module and makes a real network call.
def get_price(item):
    """Fetch item price from an external service."""
    raise RuntimeError("This would make a real network call")

def apply_discount(item, discount=0.10):
    """Return the discounted price for an item."""
    price = get_price(item)
    return round(price * (1 - discount), 2)

# Patch get_price so apply_discount never touches the network.
with patch("__main__.get_price", return_value=100.0) as mock_get:
    result = apply_discount("widget")
    assert result == 90.0
    print(f"apply_discount('widget') = {result}  ✓")
    print(f"get_price was called with: {mock_get.call_args}")

# Outside the 'with' block, get_price is restored to the original.
print("\nget_price is restored — calling it now would raise RuntimeError")
apply_discount('widget') = 90.0  ✓
get_price was called with: call('widget')

get_price is restored — calling it now would raise RuntimeError
### Exercise: Mock a database lookup
#   get_user(user_id) is supposed to look up a user in a database.
#   format_greeting(user_id) calls get_user() and returns 'Hello, {name}!'
#   Using patch, mock get_user so format_greeting never touches a real database:
#     - patch get_user to return {'name': 'Alice'}
#     - assert format_greeting(1) == 'Hello, Alice!'
#     - assert get_user was called exactly once with argument 1
#       (hint: mock_get.assert_called_once_with(1))
### Your code starts here.



### Your code ends here.


### Solution
from unittest.mock import patch

def get_user(user_id):
    raise RuntimeError('This would query a real database')

def format_greeting(user_id):
    user = get_user(user_id)
    return f"Hello, {user['name']}!"

with patch('__main__.get_user', return_value={'name': 'Alice'}) as mock_get:
    result = format_greeting(1)
    assert result == 'Hello, Alice!', f'Got: {result}'
    mock_get.assert_called_once_with(1)
    print(f'format_greeting(1) = {result!r}  ✓')
    print(f'get_user called with: {mock_get.call_args}')
format_greeting(1) = 'Hello, Alice!'  ✓
get_user called with: call(1)

10.2.4.2. Parametrized Tests#

Writing one test function per input case leads to repetitive code. pytest.mark.parametrize runs the same test logic across many input/expected pairs automatically.

import pytest

@pytest.mark.parametrize("input_val, expected", [
    ("racecar", True),
    ("hello",   False),
    ("",        True),
])
def test_is_palindrome(input_val, expected):
    assert is_palindrome(input_val) == expected

Each row becomes a separate test case in the pytest report. Failed rows show the exact inputs that failed, making diagnosis fast.

# The @pytest.mark.parametrize decorator is designed to be discovered and run
# by the pytest command-line runner, not executed inside a notebook kernel.
# Rather than leave you with untestable code, we simulate the same logic below
# with a plain loop — the test function body is identical to what you'd write
# in a real test file.

# ── In a real test file (test_palindrome.py) you would write: ──────────────
# import pytest
#
# @pytest.mark.parametrize("s, expected", [
#     ("racecar", True),
#     ("madam",   True),
#     ("hello",   False),
#     ("",        True),
#     ("a",       True),
# ])
# def test_is_palindrome(s, expected):
#     assert is_palindrome(s) == expected
#
# Then run: pytest test_palindrome.py -v
# ──────────────────────────────────────────────────────────────────────────

def is_palindrome(s):
    return s == s[::-1]

cases = [
    ("racecar", True),
    ("madam",   True),
    ("hello",   False),
    ("",        True),
    ("a",       True),
]

# Manual equivalent: same logic, same visibility
passed = 0
for s, expected in cases:
    result = is_palindrome(s)
    status = "PASS" if result == expected else "FAIL"
    print(f"  is_palindrome({s!r}) == {expected!r}  →  {status}")
    if status == "PASS":
        passed += 1

print(f"\n{passed}/{len(cases)} test cases passed")
  is_palindrome('racecar') == True  →  PASS
  is_palindrome('madam') == True  →  PASS
  is_palindrome('hello') == False  →  PASS
  is_palindrome('') == True  →  PASS
  is_palindrome('a') == True  →  PASS

5/5 test cases passed
### Exercise: Add test cases for is_leap_year(year)
#   A year is a leap year if divisible by 4,
#   EXCEPT century years (divisible by 100) are not,
#   UNLESS also divisible by 400.
#   Add at least 5 (year, expected) tuples to `cases`:
#     - divisible by 400 (e.g., 2000) → True
#     - divisible by 100 but not 400 (e.g., 1900) → False
#     - divisible by 4, not 100 (e.g., 2024) → True
#     - not divisible by 4 (e.g., 2023) → False
#     - one more case of your choice
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

cases = [
    # (year, expected)
    ### Your cases start here.

    ### Your cases end here.
]

passed = 0
for year, expected in cases:
    result = is_leap_year(year)
    status = 'PASS' if result == expected else 'FAIL'
    print(f'  is_leap_year({year}) == {expected}  ->  {status}')
    if status == 'PASS':
        passed += 1
print(f'\n{passed}/{len(cases)} test cases passed')
0/0 test cases passed


### Solution
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

cases = [
    (2000, True),   # divisible by 400 -> leap
    (1900, False),  # divisible by 100 but not 400 -> not leap
    (2024, True),   # divisible by 4, not 100 -> leap
    (2023, False),  # not divisible by 4 -> not leap
    (1600, True),   # divisible by 400 -> leap
]

passed = 0
for year, expected in cases:
    result = is_leap_year(year)
    status = 'PASS' if result == expected else 'FAIL'
    print(f'  is_leap_year({year}) == {expected}  ->  {status}')
    if status == 'PASS':
        passed += 1
print(f'\n{passed}/{len(cases)} test cases passed')
  is_leap_year(2000) == True  ->  PASS
  is_leap_year(1900) == False  ->  PASS
  is_leap_year(2024) == True  ->  PASS
  is_leap_year(2023) == False  ->  PASS
  is_leap_year(1600) == True  ->  PASS

5/5 test cases passed

10.2.4.3. Test Coverage Basics#

Test coverage measures what fraction of your source code is exercised by your test suite. A line is “covered” if at least one test causes it to run.

The standard tool is coverage.py, which integrates with pytest:

# Install (once)
pip install pytest-cov

# Run tests with coverage report
pytest --cov=my_module --cov-report=term-missing

| Coverage % | Interpretation |
|------------|----------------|
| < 50% | Probably missing whole branches or functions |
| 50–80% | Reasonable for scripts; low for libraries |
| 80–90% | Good target for most production code |
| > 90% | High confidence; watch for diminishing returns |

Key insight: 100% coverage does not mean your code is correct — it means every line ran, not that every case was tested correctly. Focus on covering branches (if/else, try/except), not just lines.
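To see the line-versus-branch distinction concretely, consider this sketch (the shipping_fee function is hypothetical):

```python
def shipping_fee(total):
    # Hypothetical policy: flat $5 fee, waived for orders of $50 or more.
    fee = 5.00
    if total >= 50:
        fee = 0.00
    return fee

# One test can execute EVERY line above (100% line coverage)...
assert shipping_fee(80) == 0.00

# ...yet the path where the `if` is False was never taken, so a bug in the
# default fee would slip through. Branch coverage flags that gap; line
# coverage does not. A second case closes it:
assert shipping_fee(20) == 5.00
print("both branches exercised")
```

Running coverage.py with its branch option (pytest --cov --cov-branch) reports this kind of gap explicitly.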

# Generate an HTML report to see exactly which lines are missed
pytest --cov=my_module --cov-report=html
open htmlcov/index.html   # macOS; use 'start' on Windows or 'xdg-open' on Linux
# Or just open htmlcov/index.html directly in your browser.
### Exercise: Write pytest-style tests
#   Write three test functions for `count_vowels(s)` defined below:
#     1. test_count_vowels_typical: a string with multiple vowels (e.g., 'hello')
#     2. test_count_vowels_none: a string with no vowels (e.g., 'gym')
#     3. test_count_vowels_empty: the empty string should return 0
#   Each function should use plain `assert` (no `self`, no class).
#   After writing them, call each one to verify they pass.

def count_vowels(s):
    """Return the number of vowels (a, e, i, o, u) in s (case-insensitive)."""
    return sum(1 for c in s.lower() if c in 'aeiou')

### Your code starts here.



### Your code ends here.


### Solution

def count_vowels(s):
    """Return the number of vowels (a, e, i, o, u) in s (case-insensitive)."""
    return sum(1 for c in s.lower() if c in 'aeiou')

def test_count_vowels_typical():
    assert count_vowels('hello') == 2
    assert count_vowels('AEIOU') == 5

def test_count_vowels_none():
    assert count_vowels('gym') == 0
    assert count_vowels('rhythm') == 0

def test_count_vowels_empty():
    assert count_vowels('') == 0

for fn in [test_count_vowels_typical, test_count_vowels_none, test_count_vowels_empty]:
    fn()
    print(f'{fn.__name__} PASSED')
test_count_vowels_typical PASSED
test_count_vowels_none PASSED
test_count_vowels_empty PASSED