{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f39d9d77-c336-42ad-a1a3-a4c28be72888",
   "metadata": {},
   "source": [
    "# Unit Testing\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "c62714aa",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "from pathlib import Path\n",
    "\n",
    "current = Path.cwd()\n",
    "for parent in [current, *current.parents]:\n",
    "    if (parent / '_config.yml').exists():\n",
    "        project_root = parent  # ← Add project root, not chapters\n",
    "        break\n",
    "else:\n",
    "    project_root = Path.cwd().parent.parent\n",
    "\n",
    "sys.path.insert(0, str(project_root))\n",
    "\n",
    "from shared import thinkpython, diagram, jupyturtle\n",
    "from shared.download import download\n",
    "\n",
    "# Register as top-level modules so direct imports work in subsequent cells\n",
    "sys.modules['thinkpython'] = thinkpython\n",
    "sys.modules['diagram'] = diagram\n",
    "sys.modules['jupyturtle'] = jupyturtle"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "701137cb",
   "metadata": {},
   "source": [
    "In 11.1 you wrote code to handle errors at runtime. Unit tests are how you verify that code actually works.\n",
    "\n",
    "\"**Unit**\" means the smallest testable piece of code — typically a single function or method. The idea is to test each piece in isolation, independent of the rest of the system.\n",
    "\n",
    "Testing can be seen as a spectrum:\n",
    "| Type | Scope |\n",
    "|---|---|\n",
    "| Unit test | Single function or method |\n",
    "| Integration test | Multiple units working together |\n",
    "| End-to-end test | Entire application flow |\n",
    "\n",
    "As you can see, unit testing is the foundation: if every unit works correctly in isolation, you have much more confidence that the whole system will work when assembled.\n",
    "\n",
    "Unit testing is pre-ship verification: it runs during development to confirm your code behaves correctly before anyone uses it. It doesn't run in production at all. Unit testing answers: \"does my code actually do what I think it does?\"\n",
    "\n",
    "The whole idea of unit testing is a **design and verification** technique: you write tests that define expected behavior, then run them to confirm your code is correct. You do this before you submit it, share it, or build more code on top of it.\n",
    "\n",
    "TDD (Test-Driven Development) takes this further: write the tests first, then write the code to make them pass. TDD (write tests first, then code) is still practiced and respected, but it's not as dominant as it was in the mid-2000s when it was almost dogma in certain circles (particularly in, e.g., the Agile community). While it is still strong, e.g., in teams building libraries or APIs with well-defined contracts, it's genuinely hard to do well. Also, many developers practice \"test alongside\" or \"test after\" and get good results. AI-assisted coding has further disrupted the workflow: people are generating code faster than TDD's write-test-first cadence supports.\n",
    "\n",
    "**Key benefits:**\n",
    "- Catch bugs before users do\n",
    "- Prevent regressions when you change code\n",
    "- Tests serve as runnable documentation\n",
    "\n",
    "Python has three main testing tools:\n",
    "\n",
    "| Tool | Style | Best for |\n",
    "|------|-------|----------|\n",
    "| `pytest` | Plain `assert`, no class needed | New projects; industry standard |\n",
    "| `unittest` | Class-based (`TestCase`) | Legacy codebases; built into the standard library |\n",
    "| `doctest` | Examples embedded in docstrings | Illustrating function behavior in docstrings |\n",
    "\n",
    "We'll cover them in that order, starting with the one you're most likely to use professionally."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7273290",
   "metadata": {},
   "source": [
    "## `pytest`\n",
    "\n",
    "`pytest` is the most widely used testing framework in Python. It requires no boilerplate classes. In a typical project structure, to use `pytest` for testing, you prepare two files:\n",
    "1. Your actual code (e.g., `calc.py`)\n",
    "2. Your test functions(e.g., `test_calc.py`)\n",
    "\n",
    "You \n",
    "- write plain functions that start with `test_` so that `pytest` can discover the function as a convention \n",
    "- use plain `assert` statements, and \n",
    "- run everything with `pytest` on the command line.\n",
    "\n",
    "**Install**\n",
    "\n",
    "The `pytest` module needs to be installed in the environment. For that,, use `pip` in the *terminal* with the *virtual environment* enabled. \n",
    "\n",
    "```bash\n",
    "pip install pytest\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "378f6727",
   "metadata": {},
   "source": [
    "### Inline Assertion Checks\n",
    "\n",
    "Before running `pytest`, let us take a look at how `assert` can be used for testing. You should see the similarity between using asserts manually and the `pytest` framework.\n",
    "\n",
    "Here we demonstrate the test structure using inline assertion checks:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "6c462061",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_add_positive PASSED\n",
      "test_add_negative PASSED\n",
      "test_subtract PASSED\n"
     ]
    }
   ],
   "source": [
    "### functions (would be calc.py in, e.g., the project root later)\n",
    "def add(a, b):\n",
    "    return a + b\n",
    "\n",
    "def subtract(a, b):\n",
    "    return a - b\n",
    "\n",
    "### pytest-style test functions (would go in test_calc.py later)\n",
    "def test_add_positive():\n",
    "    assert add(2, 3) == 5\n",
    "\n",
    "def test_add_negative():\n",
    "    assert add(-1, -1) == -2\n",
    "\n",
    "def test_subtract():\n",
    "    assert subtract(10, 4) == 6\n",
    "\n",
    "# Run them manually (simulating pytest discovery)\n",
    "for test_fn in [test_add_positive, test_add_negative, test_subtract]:\n",
    "    test_fn()\n",
    "    print(f'{test_fn.__name__} PASSED')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "546d3790",
   "metadata": {},
   "source": [
    "### `pytest` File Convention\n",
    "\n",
    "Save the test functions to a file named `test_*.py`, `pytest` discovers and collects them automatically when run.\n",
    "\n",
    "```python\n",
    "### the test_calc.py file\n",
    "from calc import add, subtract, divide\n",
    "def test_add():\n",
    "    assert 1 + 1 == 2\n",
    "\n",
    "def test_subtract():\n",
    "    assert 3 - 1 == 2\n",
    "\n",
    "def test_divide():\n",
    "    assert 10 / 2 == 5.0\n",
    "\n",
    "def test_divide_by_zero():\n",
    "    import pytest\n",
    "    with pytest.raises(ZeroDivisionError):\n",
    "        1 / 0\n",
    "```\n",
    "\n",
    "Run from the terminal:\n",
    "```bash\n",
    "pytest test_calc.py -v\n",
    "```\n",
    "\n",
    "You should see output like this below saying:\n",
    "- 3 items `collected` \n",
    "- each item test `PASSED`\n",
    "\n",
    "\n",
    "```bash\n",
    "(.venv) [user]@[host]ː~/workspace/py/[path]$ pytest test_calc.py -v\n",
    "====================== test session starts ======================\n",
    "platform darwin -- Python 3.13.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/[user]/workspace/py/.venv/bin/python3.13\n",
    "cachedir: .pytest_cache\n",
    "rootdir: /Users/[user]/workspace/py/[path]\n",
    "plugins: anyio-4.11.0\n",
    "collected 3 items                                               \n",
    "\n",
    "test_calc.py::test_add PASSED                             [ 33%]\n",
    "test_calc.py::test_divide PASSED                          [ 66%]\n",
    "test_calc.py::test_divide_by_zero PASSED                  [100%]\n",
    "\n",
    "======================= 3 passed in 0.01s =======================\n",
    "(.venv) [user]@[host]ː~/workspace/py/[path]$ \n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "163ebe2a",
   "metadata": {},
   "source": [
    "### `pytest` in Practice\n",
    "\n",
    "Often, you would be testing a module instead of Python's built-in arithmetic operators as above. For that, you may organize the code file and testing function file separately. This is best for production code, development pipelines, and anything you're distributing or maintaining long-term.\n",
    "\n",
    "The key here is to **import** your **module** to the test file. You see that a module is but a file and we can import the functions separately from the module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "94dc016c",
   "metadata": {},
   "outputs": [],
   "source": [
    "### the code module file: calc.py\n",
    "def add(a, b):\n",
    "    return a + b\n",
    "\n",
    "def divide(a, b):\n",
    "    return a / b"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3748880a",
   "metadata": {},
   "source": [
    "`import` the module (`calc.py`) in the test file. Note that you import `calc`, less the file extension."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "573a963e",
   "metadata": {},
   "outputs": [],
   "source": [
    "### the test_calc.py file\n",
    "\n",
    "from calc import add, divide\n",
    "\n",
    "def test_add():\n",
    "    assert add(1, 1) == 2\n",
    "\n",
    "def test_divide():\n",
    "    assert divide(10, 2) == 5.0\n",
    "\n",
    "def test_divide_by_zero():\n",
    "    import pytest\n",
    "    with pytest.raises(ZeroDivisionError):\n",
    "        divide(1, 0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c580076c",
   "metadata": {},
   "source": [
    "In CLI, run\n",
    "\n",
    "```bash\n",
    "python -m pytest test_calc.py -v\n",
    "```\n",
    "or\n",
    "```bash\n",
    "pytest test_calc.py -v\n",
    "```\n",
    "\n",
    "\n",
    "and you should have output like:\n",
    "\n",
    "```\n",
    "(.venv) [user]@[host]ː~/workspace/py$ python -m pytest test_calc.py -v\n",
    "====================== test session starts ======================\n",
    "platform darwin -- Python 3.13.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/[user]/workspace/py/.venv/bin/python3\n",
    "cachedir: .pytest_cache\n",
    "rootdir: /Users/[user]/workspace/py\n",
    "plugins: anyio-4.11.0\n",
    "collected 3 items                                               \n",
    "\n",
    "test_calc.py::test_add PASSED                             [ 33%]\n",
    "test_calc.py::test_divide PASSED                          [ 66%]\n",
    "test_calc.py::test_divide_by_zero PASSED                  [100%]\n",
    "\n",
    "======================= 3 passed in 0.02s =======================\n",
    "(.venv) [user]@[host]ː~/workspace/py$ \n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8661d9c9",
   "metadata": {},
   "source": [
    "### `pytest` as Subprocess\n",
    "\n",
    "You may also run run `pytest` inside the Jupyter notebook, or other environments such as `conda`, by writing the test file to disk and invoking the subprocess.\n",
    "\n",
    "1. Write a .py test file to disk using normal file `I/O`\n",
    "2. Run `pytest` on it via Python's `subprocess` module\n",
    "3. Capture and display the output back in the notebook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "a3019e43",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m============================= test session starts ==============================\u001b[0m\n",
      "platform darwin -- Python 3.13.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/tcn85/workspace/py/.venv/bin/python\n",
      "cachedir: .pytest_cache\n",
      "rootdir: /Users/tcn85/workspace/py/chapters/11-testing\n",
      "plugins: anyio-4.11.0\n",
      "\u001b[1mcollecting ... \u001b[0mcollected 2 items\n",
      "\n",
      "test_temp.py::test_add \u001b[32mPASSED\u001b[0m\u001b[32m                                            [ 50%]\u001b[0m\n",
      "test_temp.py::test_fail \u001b[31mFAILED\u001b[0m\u001b[31m                                           [100%]\u001b[0m\n",
      "\n",
      "=================================== FAILURES ===================================\n",
      "\u001b[31m\u001b[1m__________________________________ test_fail ___________________________________\u001b[0m\n",
      "\n",
      "    \u001b[0m\u001b[94mdef\u001b[39;49;00m\u001b[90m \u001b[39;49;00m\u001b[92mtest_fail\u001b[39;49;00m():\u001b[90m\u001b[39;49;00m\n",
      ">       \u001b[94massert\u001b[39;49;00m \u001b[94m1\u001b[39;49;00m + \u001b[94m1\u001b[39;49;00m == \u001b[94m3\u001b[39;49;00m\u001b[90m\u001b[39;49;00m\n",
      "\u001b[1m\u001b[31mE       assert (1 + 1) == 3\u001b[0m\n",
      "\n",
      "\u001b[1m\u001b[31mtest_temp.py\u001b[0m:6: AssertionError\n",
      "\u001b[36m\u001b[1m=========================== short test summary info ============================\u001b[0m\n",
      "\u001b[31mFAILED\u001b[0m test_temp.py::\u001b[1mtest_fail\u001b[0m - assert (1 + 1) == 3\n",
      "\u001b[31m========================= \u001b[31m\u001b[1m1 failed\u001b[0m, \u001b[32m1 passed\u001b[0m\u001b[31m in 0.04s\u001b[0m\u001b[31m ==========================\u001b[0m\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import subprocess, pathlib\n",
    "# %pip install pytest  # ensure pytest is available in the environment\n",
    "### same as the pip line above, but using subprocess to avoid issues in some notebook environments\n",
    "# subprocess.run([\"pip\", \"install\", \"pytest\"], capture_output=True, text=True)\n",
    "\n",
    "\n",
    "# 1. Write test file to disk\n",
    "pathlib.Path(\"test_temp.py\").write_text(\"\"\"\n",
    "def test_add():\n",
    "    assert 1 + 1 == 2\n",
    "\n",
    "def test_fail():\n",
    "    assert 1 + 1 == 3\n",
    "\"\"\")\n",
    "\n",
    "# 2. Invoke pytest as a subprocess\n",
    "result = subprocess.run(\n",
    "    # [\"pytest\", \"test_temp.py\", \"-v\"],\n",
    "    [\"python\", \"-m\", \"pytest\", \"test_temp.py\", \"-v\"],\n",
    "    capture_output=True, text=True\n",
    ")\n",
    "\n",
    "# 3. Display output in the notebook\n",
    "print(result.stdout)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4fad5e2c",
   "metadata": {},
   "source": [
    "### Jupyter Inline Testing\n",
    "\n",
    "In addition to running the `pytest` testing file in the terminal, you can also run the tests inside the Jupyter notebook using the `ipytest` plugin. \n",
    "\n",
    "Note that `%%ipytest` does the discovering and calling of the test functions automatically within that cell, that's why in the example below you don't see the test functions being called. The workflow is:\n",
    "\n",
    "1. User runs the cell with %%ipytest\n",
    "2. ipytest scans the cell for any function whose name starts with test_\n",
    "3. It calls all of them via `pytest` under the hood\n",
    "4. Reports pass/fail"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "7b2ae89b",
   "metadata": {},
   "outputs": [],
   "source": [
    "### install ipytest for in-notebook testing\n",
    "### live code may require you to install it again: ModuleNotFoundError: No module named 'ipytest'\n",
    "# %pip install ipytest -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "6f52426d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import ipytest\n",
    "ipytest.autoconfig()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "4a73dc4f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[32m.\u001b[0m\u001b[32m.\u001b[0m\u001b[32m                                                                                           [100%]\u001b[0m\n",
      "\u001b[32m\u001b[32m\u001b[1m2 passed\u001b[0m\u001b[32m in 0.00s\u001b[0m\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "%%ipytest\n",
    "\n",
    "def test_add_positive():\n",
    "    assert add(2, 3) == 5\n",
    "    \n",
    "def test_add_negative():\n",
    "    assert add(-1, -1) == -2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "e1afc796",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [],
   "source": [
    "### Exercise: Write pytest-style test functions for is_even(n)\n",
    "#   is_even(n) returns True if n is even, False otherwise.\n",
    "#   Write three test functions:\n",
    "#     1. test_is_even_positive: is_even(4) is True; is_even(3) is False\n",
    "#     2. test_is_even_zero: is_even(0) is True\n",
    "#     3. test_is_even_negative: is_even(-2) is True; is_even(-1) is False\n",
    "#   Use plain assert statements (no class, no self).\n",
    "#   After writing them, call each one in a loop and print PASSED.\n",
    "### Your code starts here.\n",
    "\n",
    "\n",
    "\n",
    "### Your code ends here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "23e67f66",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_is_even_positive PASSED\n",
      "test_is_even_zero PASSED\n",
      "test_is_even_negative PASSED\n"
     ]
    }
   ],
   "source": [
    "### Solution\n",
    "def is_even(n):\n",
    "    return n % 2 == 0\n",
    "\n",
    "def test_is_even_positive():\n",
    "    assert is_even(4) == True\n",
    "    assert is_even(3) == False\n",
    "\n",
    "def test_is_even_zero():\n",
    "    assert is_even(0) == True\n",
    "\n",
    "def test_is_even_negative():\n",
    "    assert is_even(-2) == True\n",
    "    assert is_even(-1) == False\n",
    "\n",
    "for fn in [test_is_even_positive, test_is_even_zero, test_is_even_negative]:\n",
    "    fn()\n",
    "    print(f'{fn.__name__} PASSED')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "22dec523",
   "metadata": {},
   "source": [
    "## `unittest`\n",
    "\n",
    "Python's standard library includes `unittest`, a class-based testing framework. You'll encounter it in older codebases and it's worth knowing, but for new projects `pytest` is almost always the better choice.\n",
    "\n"
   ]
  },
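  {
   "cell_type": "markdown",
   "id": "b3e7a1c9",
   "metadata": {},
   "source": [
    "Like `pytest`, `unittest` has a command-line runner. For a file that defines `TestCase` classes (say, `test_calc.py`), either of these works from the terminal:\n",
    "\n",
    "```bash\n",
    "python -m unittest test_calc -v   # run one test module (module name, no .py)\n",
    "python -m unittest discover -v    # find and run every test_*.py in the current directory\n",
    "```\n",
    "\n",
    "In the notebook examples below we use `unittest.TextTestRunner` instead, so the output can be captured and shown inline."
   ]
  },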
  {
   "cell_type": "markdown",
   "id": "f2e8de9b",
   "metadata": {},
   "source": [
    "### `pytest` vs `unittest`\n",
    "\n",
    "The table below compares the two frameworks across the features you're most likely to care about.\n",
    "\n",
    "| Feature | `unittest` | `pytest` |\n",
    "|---|---|---|\n",
    "| Requires class? | Yes (`TestCase`) | No |\n",
    "| Assertion style | `self.assertEqual(a, b)` | `assert a == b` |\n",
    "| Discovery | Manual or `unittest.main()` | Automatic with `pytest` command |\n",
    "| Output | Minimal | Detailed, colored diff |\n",
    "| Fixtures | `setUp/tearDown` | `@pytest.fixture` |\n",
    "| Ecosystem | Standard library | Third-party, widely adopted |\n",
    "\n",
    "Both are valid. `unittest` is built-in; `pytest` is preferred in industry\n",
    "because its plain `assert` style produces clearer failure messages.\n",
    "\n",
    ":::{note}\n",
    "**`unittest` assertions vs. bare `assert`**\n",
    "\n",
    "`unittest`'s `assertX` methods (`assertEqual`, `assertRaises`, `assertIsNone`, etc.) are ordinary method calls — they cannot be disabled. Bare Python `assert` statements *can* be silently disabled by running the interpreter with the `-O` (optimize) flag, which strips them from the bytecode. In practice you never run your test suite with `-O`, so this rarely matters; but it is why `assert` is discouraged for *input validation* in production code while being perfectly fine inside test functions. `pytest` recaptures and enriches bare `assert` output through its own import hook, so you still get detailed failure messages even though the underlying statement is plain `assert`.\n",
    ":::\n"
   ]
  },
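  {
   "cell_type": "markdown",
   "id": "c4d82f17",
   "metadata": {},
   "source": [
    "To make the fixtures row concrete, here is a minimal sketch of a `pytest` fixture (the `sample_list` name is just illustrative). A fixture is a plain function; any test that names it as a parameter receives its return value:\n",
    "\n",
    "```python\n",
    "import pytest\n",
    "\n",
    "@pytest.fixture\n",
    "def sample_list():\n",
    "    # Built fresh for each test that requests it\n",
    "    return [3, 1, 2]\n",
    "\n",
    "def test_sorted(sample_list):\n",
    "    assert sorted(sample_list) == [1, 2, 3]\n",
    "\n",
    "def test_min(sample_list):\n",
    "    assert min(sample_list) == 1\n",
    "```\n",
    "\n",
    "The `unittest` counterpart, `setUp`, appears in the conventions section below."
   ]
  },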
  {
   "cell_type": "markdown",
   "id": "eb13754a",
   "metadata": {},
   "source": [
    "In the sample code below, part 2 is how you use `unittest`. You:\n",
    "- subclass `unittest.TestCase`, \n",
    "- write `test_*` methods, and \n",
    "- use **`self.assertX()`** helper methods for assertions.\n",
    "\n",
    "The syntax of `assertEqual()` is `self.assertEqual(actual, expected)`. For example, `assertEqual(first, second)` checks that `first == second`. If not, the test fails with a message showing both values.\n",
    "\n",
    "\n",
    "In part 3, we start with `import io` because **`unittest.TextTestRunner`** writes its output to a file-like stream. By default that goes to `stderr`, which in some notebook environments doesn't display inline with the cell output.\n",
    "\n",
    "`io.StringIO()` is an in-memory text buffer that acts like a file; so the runner writes into it, and then `print(buf.getvalue())` sends the captured text to the notebook's normal output. \n",
    "\n",
    "Without `io` to create `buf`, you'd pass `stream=sys.stderr` (the default), which may appear in a different output area or not at all depending on the environment.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "d1a5885e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_calculate_grade_edge_cases (__main__.TestGradeCalculation.test_calculate_grade_edge_cases)\n",
      "Test edge cases for grade calculation. ... ok\n",
      "test_calculate_grade_invalid_input (__main__.TestGradeCalculation.test_calculate_grade_invalid_input)\n",
      "Test that invalid inputs raise appropriate exceptions. ... ok\n",
      "test_calculate_grade_normal_cases (__main__.TestGradeCalculation.test_calculate_grade_normal_cases)\n",
      "Test normal grade calculations. ... ok\n",
      "test_letter_grade_conversion (__main__.TestGradeCalculation.test_letter_grade_conversion)\n",
      "Test letter grade assignments. ... ok\n",
      "\n",
      "----------------------------------------------------------------------\n",
      "Ran 4 tests in 0.000s\n",
      "\n",
      "OK\n",
      "\n",
      "All tests passed.\n"
     ]
    }
   ],
   "source": [
    "import unittest\n",
    "\n",
    "# Example: Testing a simple function with unit tests\n",
    "\n",
    "### part 1: The code we want to test\n",
    "def calculate_grade(score, total_points):\n",
    "    \"\"\"Calculate percentage grade from score and total points.\"\"\"\n",
    "    if total_points <= 0:\n",
    "        raise ValueError(\"Total points must be positive\")\n",
    "    if score < 0:\n",
    "        raise ValueError(\"Score cannot be negative\") \n",
    "    if score > total_points:\n",
    "        raise ValueError(\"Score cannot exceed total points\")\n",
    "    \n",
    "    percentage = (score / total_points) * 100\n",
    "    return round(percentage, 2)\n",
    "\n",
    "def get_letter_grade(percentage):\n",
    "    \"\"\"Convert percentage to letter grade.\"\"\"\n",
    "    if percentage >= 90:\n",
    "        return 'A'\n",
    "    elif percentage >= 80:\n",
    "        return 'B'\n",
    "    elif percentage >= 70:\n",
    "        return 'C'\n",
    "    elif percentage >= 60:\n",
    "        return 'D'\n",
    "    else:\n",
    "        return 'F'\n",
    "\n",
    "### part 2: Unit tests for our functions\n",
    "class TestGradeCalculation(unittest.TestCase):\n",
    "    \"\"\"Test cases for grade calculation functions.\"\"\"\n",
    "    \n",
    "    def test_calculate_grade_normal_cases(self):\n",
    "        \"\"\"Test normal grade calculations.\"\"\"\n",
    "        self.assertEqual(calculate_grade(85, 100), 85.0)\n",
    "        self.assertEqual(calculate_grade(95, 100), 95.0)\n",
    "        self.assertEqual(calculate_grade(50, 100), 50.0)\n",
    "        self.assertEqual(calculate_grade(0, 100), 0.0)\n",
    "    \n",
    "    def test_calculate_grade_edge_cases(self):\n",
    "        \"\"\"Test edge cases for grade calculation.\"\"\"\n",
    "        self.assertEqual(calculate_grade(100, 100), 100.0)\n",
    "        self.assertEqual(calculate_grade(87, 92), 94.57)\n",
    "    \n",
    "    def test_calculate_grade_invalid_input(self):\n",
    "        \"\"\"Test that invalid inputs raise appropriate exceptions.\"\"\"\n",
    "        with self.assertRaises(ValueError):\n",
    "            calculate_grade(85, 0)  # Zero total points\n",
    "        \n",
    "        with self.assertRaises(ValueError):\n",
    "            calculate_grade(-10, 100)  # Negative score\n",
    "        \n",
    "        with self.assertRaises(ValueError):\n",
    "            calculate_grade(110, 100)  # Score exceeds total\n",
    "    \n",
    "    def test_letter_grade_conversion(self):\n",
    "        \"\"\"Test letter grade assignments.\"\"\"\n",
    "        self.assertEqual(get_letter_grade(95), 'A')\n",
    "        self.assertEqual(get_letter_grade(85), 'B')\n",
    "        self.assertEqual(get_letter_grade(75), 'C')\n",
    "        self.assertEqual(get_letter_grade(65), 'D')\n",
    "        self.assertEqual(get_letter_grade(55), 'F')\n",
    "        \n",
    "        # Test boundary conditions\n",
    "        self.assertEqual(get_letter_grade(90), 'A')\n",
    "        self.assertEqual(get_letter_grade(89.9), 'B')\n",
    "\n",
    "### part 3: Run the tests and display results\n",
    "import io\n",
    "buf = io.StringIO()\n",
    "runner = unittest.TextTestRunner(stream=buf, verbosity=2)\n",
    "result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestGradeCalculation))\n",
    "print(buf.getvalue())\n",
    "if result.wasSuccessful():\n",
    "    print(\"All tests passed.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80d517be",
   "metadata": {},
   "source": [
    "### When tests fail\n",
    "\n",
    "The example above has tests that all pass. But finding failing tests is the whole point. Here's a function with a deliberate bug. It refuses to raise a negative base to a power, returning a string instead of a number. The test suite will expose it.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "7b59593b-fail-test",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_negative_base_even_exponent (__main__.TestPower.test_negative_base_even_exponent) ... FAIL\n",
      "test_negative_base_odd_exponent (__main__.TestPower.test_negative_base_odd_exponent) ... FAIL\n",
      "test_positive_base (__main__.TestPower.test_positive_base) ... ok\n",
      "\n",
      "======================================================================\n",
      "FAIL: test_negative_base_even_exponent (__main__.TestPower.test_negative_base_even_exponent)\n",
      "----------------------------------------------------------------------\n",
      "Traceback (most recent call last):\n",
      "  File \"/var/folders/g4/v24tl8t172g5d7rzsd63y51w0000gp/T/ipykernel_78782/2752416876.py\", line 14, in test_negative_base_even_exponent\n",
      "    self.assertEqual(my_power(-2, 2), 4)   # (-2)² = 4, not an error string\n",
      "    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^\n",
      "AssertionError: 'Error: negative base' != 4\n",
      "\n",
      "======================================================================\n",
      "FAIL: test_negative_base_odd_exponent (__main__.TestPower.test_negative_base_odd_exponent)\n",
      "----------------------------------------------------------------------\n",
      "Traceback (most recent call last):\n",
      "  File \"/var/folders/g4/v24tl8t172g5d7rzsd63y51w0000gp/T/ipykernel_78782/2752416876.py\", line 17, in test_negative_base_odd_exponent\n",
      "    self.assertEqual(my_power(-3, 3), -27)\n",
      "    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\n",
      "AssertionError: 'Error: negative base' != -27\n",
      "\n",
      "----------------------------------------------------------------------\n",
      "Ran 3 tests in 0.000s\n",
      "\n",
      "FAILED (failures=2)\n",
      "\n",
      "2 failure(s), 0 error(s)\n"
     ]
    }
   ],
   "source": [
    "import io\n",
    "\n",
    "def my_power(base, exponent):\n",
    "    \"\"\"Return base raised to exponent. Bug: rejects negative bases.\"\"\"\n",
    "    if base < 0:\n",
    "        return \"Error: negative base\"   # ← bug: should just compute it\n",
    "    return base ** exponent\n",
    "\n",
    "class TestPower(unittest.TestCase):\n",
    "    def test_positive_base(self):\n",
    "        self.assertEqual(my_power(2, 3), 8)\n",
    "\n",
    "    def test_negative_base_even_exponent(self):\n",
    "        self.assertEqual(my_power(-2, 2), 4)   # (-2)² = 4, not an error string\n",
    "\n",
    "    def test_negative_base_odd_exponent(self):\n",
    "        self.assertEqual(my_power(-3, 3), -27)\n",
    "\n",
    "# Run and print output so the failure is visible in the notebook\n",
    "buf = io.StringIO()\n",
    "runner = unittest.TextTestRunner(stream=buf, verbosity=2)\n",
    "result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestPower))\n",
    "print(buf.getvalue())\n",
    "if result.wasSuccessful():\n",
    "    print(\"All tests passed.\")\n",
    "else:\n",
    "    print(f\"{len(result.failures)} failure(s), {len(result.errors)} error(s)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "019f5e82",
   "metadata": {},
   "source": [
    "### unittest conventions\n",
    "\n",
    "- **Name test methods descriptively**: `test_divide_by_zero_raises_exception` tells you exactly what's being checked; `test_division` doesn't.\n",
    "- **One behavior per test method**: instead of one `test_grade_calculation` that checks everything, write `test_grade_returns_percentage`, `test_grade_raises_on_negative_score`, etc. Narrow tests pinpoint exactly what broke.\n",
    "- **Use `setUp` for shared state**: if multiple tests need the same object, create it in `setUp` rather than repeating the construction in every method.\n"
   ]
  },
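  {
   "cell_type": "markdown",
   "id": "d5f91a3e",
   "metadata": {},
   "source": [
    "Here is a minimal sketch of the `setUp` convention; the cart is just a list standing in for whatever object your tests share:\n",
    "\n",
    "```python\n",
    "import unittest\n",
    "\n",
    "class TestShoppingCart(unittest.TestCase):\n",
    "    def setUp(self):\n",
    "        # Runs before EVERY test method, so each test gets a fresh cart\n",
    "        self.cart = []\n",
    "\n",
    "    def test_cart_starts_empty(self):\n",
    "        self.assertEqual(len(self.cart), 0)\n",
    "\n",
    "    def test_adding_item_increases_length(self):\n",
    "        self.cart.append('apple')\n",
    "        self.assertEqual(len(self.cart), 1)\n",
    "```"
   ]
  },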
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "da7966f6",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [],
   "source": [
    "### Exercise: Write a TestCase for to_celsius(f)\n",
    "#   to_celsius(f) converts Fahrenheit to Celsius: (f - 32) * 5 / 9\n",
    "#   Write a class TestToCelsius(unittest.TestCase) with three test methods:\n",
    "#     1. test_boiling: to_celsius(212) should equal 100.0\n",
    "#     2. test_freezing: to_celsius(32) should equal 0.0\n",
    "#     3. test_body_temp: to_celsius(98.6) should be approximately 37.0\n",
    "#        (hint: use assertAlmostEqual with places=1)\n",
    "#   Run using TextTestRunner with io.StringIO and verbosity=2.\n",
    "### Your code starts here.\n",
    "\n",
    "\n",
    "\n",
    "### Your code ends here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "40216e16",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_body_temp (__main__.TestToCelsius.test_body_temp) ... ok\n",
      "test_boiling (__main__.TestToCelsius.test_boiling) ... ok\n",
      "test_freezing (__main__.TestToCelsius.test_freezing) ... ok\n",
      "\n",
      "----------------------------------------------------------------------\n",
      "Ran 3 tests in 0.000s\n",
      "\n",
      "OK\n",
      "\n",
      "All tests passed.\n"
     ]
    }
   ],
   "source": [
    "### Solution\n",
    "import io\n",
    "\n",
    "def to_celsius(f):\n",
    "    return (f - 32) * 5 / 9\n",
    "\n",
    "class TestToCelsius(unittest.TestCase):\n",
    "    def test_boiling(self):\n",
    "        self.assertEqual(to_celsius(212), 100.0)\n",
    "\n",
    "    def test_freezing(self):\n",
    "        self.assertEqual(to_celsius(32), 0.0)\n",
    "\n",
    "    def test_body_temp(self):\n",
    "        self.assertAlmostEqual(to_celsius(98.6), 37.0, places=1)\n",
    "\n",
    "buf = io.StringIO()\n",
    "runner = unittest.TextTestRunner(stream=buf, verbosity=2)\n",
    "result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestToCelsius))\n",
    "print(buf.getvalue())\n",
    "if result.wasSuccessful():\n",
    "    print('All tests passed.')"
   ]
  },
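  {
   "cell_type": "markdown",
   "id": "a8c5d3e2",
   "metadata": {},
   "source": [
    "For comparison, the `pytest` counterpart of `assertAlmostEqual` is `pytest.approx`, which makes approximate floating-point comparisons read like plain assertions:\n",
    "\n",
    "```python\n",
    "import pytest\n",
    "\n",
    "def test_body_temp():\n",
    "    # abs=0.05 tolerates rounding to one decimal place\n",
    "    assert to_celsius(98.6) == pytest.approx(37.0, abs=0.05)\n",
    "```"
   ]
  },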
  {
   "cell_type": "markdown",
   "id": "6dd33555",
   "metadata": {},
   "source": [
    "## Doctests"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ec0c161",
   "metadata": {},
   "source": [
    "A **doctest** is a test embedded directly in a function's docstring. You write an example interaction using `>>>` (the Python interactive prompt), followed by the expected output. Python's `doctest` module finds and runs these examples automatically.\n",
    "\n",
    "Doctests serve two purposes at once: they document how a function works *and* verify that it actually works that way. They're best for simple input/output examples, not a replacement for `pytest` or `unittest` when you need complex setup or many edge cases.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "4ecdc88c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def uses_any(word, letters):\n",
    "    \"\"\"Checks if a word uses any of a list of letters.\n",
    "    \n",
    "    >>> uses_any('banana', 'aeiou')\n",
    "    True\n",
    "    >>> uses_any('apple', 'xyz')\n",
    "    False\n",
    "    \"\"\"\n",
    "    for letter in word.lower():\n",
    "        if letter in letters.lower():\n",
    "            return True\n",
    "    return False"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec62bdb0",
   "metadata": {},
   "source": [
    "Each test begins with `>>>`, which is used as a prompt in some Python environments to indicate where the user can type code.\n",
    "In a doctest, the prompt is followed by an expression, usually a function call.\n",
    "The following line indicates the value the expression should have if the function works correctly.\n",
    "\n",
    "In the first example, `'banana'` uses `'a'`, so the result should be `True`.\n",
    "In the second example, `'apple'` does not use any of `'xyz'`, so the result should be `False`.\n",
    "\n",
    "To run these tests, we have to import the `doctest` module and run a function called `run_docstring_examples`.\n",
    "To make this function easier to use, I wrote the following function, which takes a function object as an argument."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "62bf2bad",
   "metadata": {},
   "outputs": [],
   "source": [
    "from doctest import run_docstring_examples\n",
    "\n",
    "def run_doctests(func):\n",
    "    run_docstring_examples(func, globals(), name=func.__name__)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0eeafbfb",
   "metadata": {},
   "source": [
    ":::{admonition} What are `globals()` and `__name__`?\n",
    ":class: dropdown\n",
    "\n",
    "`globals()` is a built-in that returns a dictionary of every name defined in the current scope — variables, functions, imports, everything. `run_docstring_examples` evaluates the doctest expressions inside that dictionary, so any function you've already defined (like `uses_any`) is available by name.\n",
    "\n",
    "`func.__name__` is a string attribute every function carries — it's just the name you gave it when you wrote `def`. Passing it as `name=` makes error messages say `uses_any` instead of the placeholder `NoName`.\n",
    "\n",
    "You don't need to use either directly; the `run_doctests` wrapper handles them for you. They'll reappear later when we cover namespaces and function objects.\n",
    ":::\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c2ca2fb8",
   "metadata": {},
   "source": [
    "Now we can test `uses_any` like this."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "35c5c4de",
   "metadata": {},
   "outputs": [],
   "source": [
    "run_doctests(uses_any)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "df72a37e",
   "metadata": {},
   "source": [
    "`run_doctests` finds the expressions in the docstring and evaluates them.\n",
    "If the result is the expected value, the test **passes**.\n",
    "Otherwise it **fails**.\n",
    "\n",
    "If all tests pass, `run_doctests` displays no output -- in that case, no news is good news.\n",
    "To see what happens when a test fails, here's an incorrect version of `uses_any`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "5974bb1d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def uses_any_incorrect(word, letters):\n",
    "    \"\"\"Checks if a word uses any of a list of letters.\n",
    "    \n",
    "    >>> uses_any_incorrect('banana', 'aeiou')\n",
    "    True\n",
    "    >>> uses_any_incorrect('apple', 'xyz')\n",
    "    False\n",
    "    \"\"\"\n",
    "    for letter in word.lower():\n",
    "        if letter in letters.lower():\n",
    "            return True\n",
    "        else:\n",
    "            return False  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f93b688f",
   "metadata": {},
   "source": [
    "And here's what happens when we test it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "b81b70b5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "**********************************************************************\n",
      "File \"__main__\", line 4, in uses_any_incorrect\n",
      "Failed example:\n",
      "    uses_any_incorrect('banana', 'aeiou')\n",
      "Expected:\n",
      "    True\n",
      "Got:\n",
      "    False\n"
     ]
    }
   ],
   "source": [
    "run_doctests(uses_any_incorrect)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d23ecfdd",
   "metadata": {},
   "source": [
    "The output includes the example that failed, the value the function was expected to produce, and the value the function actually produced.\n",
    "\n",
    "If you are not sure why this test failed, you'll have a chance to debug it as an exercise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "0aab242c",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [],
   "source": [
    "### Exercise: Fix the Failing Doctest\n",
    "#   `uses_any_incorrect` has a bug that causes one doctest to fail.\n",
    "#   1. Trace through what happens when the function is called with ('apple', 'xyz').\n",
    "#   2. Write a corrected version called `uses_any_fixed`.\n",
    "#   3. Include the same two doctest examples and run `run_doctests(uses_any_fixed)`.\n",
    "### Your code starts here.\n",
    "\n",
    "\n",
    "\n",
    "### Your code ends here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "b562023b",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [],
   "source": [
    "### Solution\n",
    "def uses_any_fixed(word, letters):\n",
    "    \"\"\"Checks if a word uses any of a list of letters.\n",
    "\n",
    "    >>> uses_any_fixed('banana', 'aeiou')\n",
    "    True\n",
    "    >>> uses_any_fixed('apple', 'xyz')\n",
    "    False\n",
    "    \"\"\"\n",
    "    for letter in word.lower():\n",
    "        if letter in letters.lower():\n",
    "            return True\n",
    "    return False  # moved outside the loop — was the bug in uses_any_incorrect\n",
    "\n",
    "run_doctests(uses_any_fixed)"
   ]
  },
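  {
   "cell_type": "markdown",
   "id": "e6a2b4f8",
   "metadata": {},
   "source": [
    "The `run_doctests` helper is a notebook convenience. In a regular project, `doctest` has two standard entry points. From the terminal:\n",
    "\n",
    "```bash\n",
    "python -m doctest my_module.py       # silent if all doctests pass\n",
    "python -m doctest my_module.py -v    # verbose: show every example as it runs\n",
    "```\n",
    "\n",
    "Or add a small block at the bottom of the module itself:\n",
    "\n",
    "```python\n",
    "if __name__ == '__main__':\n",
    "    import doctest\n",
    "    doctest.testmod()   # runs every doctest in this module\n",
    "```\n",
    "\n",
    "`pytest` can also collect doctests alongside regular tests with `pytest --doctest-modules`."
   ]
  },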
  {
   "cell_type": "markdown",
   "id": "125c5630",
   "metadata": {
    "tags": []
   },
   "source": [
    "## Advanced pytest Techniques\n",
    "\n",
    "The sections below cover features you'll reach for on real projects: isolating code from external dependencies, running the same test logic across many inputs, and measuring how much of your code your tests actually exercise."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f298b95f",
   "metadata": {},
   "source": [
    "### Mocking Dependencies\n",
    "\n",
    "Tests should run without network calls, database access, or file I/O. **Mocking** replaces a real object with a controlled stand-in so tests are fast, isolated, and deterministic.\n",
    "\n",
    "`unittest.mock.patch` is the standard tool. It temporarily replaces the target for the duration of the `with` block and restores the original afterward.\n",
    "\n",
    "The example below patches a simple function (`get_price`) so the test never makes a real network call:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "b280c032",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "apply_discount('widget') = 90.0  ✓\n",
      "get_price was called with: call('widget')\n",
      "\n",
      "get_price is restored — calling it now would raise RuntimeError\n"
     ]
    }
   ],
   "source": [
    "from unittest.mock import patch\n",
    "\n",
    "# Imagine this lives in a pricing module and makes a real network call.\n",
    "def get_price(item):\n",
    "    \"\"\"Fetch item price from an external service.\"\"\"\n",
    "    raise RuntimeError(\"This would make a real network call\")\n",
    "\n",
    "def apply_discount(item, discount=0.10):\n",
    "    \"\"\"Return the discounted price for an item.\"\"\"\n",
    "    price = get_price(item)\n",
    "    return round(price * (1 - discount), 2)\n",
    "\n",
    "# Patch get_price so apply_discount never touches the network.\n",
    "with patch(\"__main__.get_price\", return_value=100.0) as mock_get:\n",
    "    result = apply_discount(\"widget\")\n",
    "    assert result == 90.0\n",
    "    print(f\"apply_discount('widget') = {result}  ✓\")\n",
    "    print(f\"get_price was called with: {mock_get.call_args}\")\n",
    "\n",
    "# Outside the 'with' block, get_price is restored to the original.\n",
    "print(\"\\nget_price is restored — calling it now would raise RuntimeError\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f982e982",
   "metadata": {},
   "source": [
    "\n",
    ":::{admonition} Advanced: mocking context managers\n",
    ":class: dropdown\n",
    "\n",
    "When the code under test uses a `with` statement (e.g., `with urllib.request.urlopen(url) as resp:`), the mock must also support the context-manager protocol (`__enter__` / `__exit__`). `MagicMock` does this automatically:\n",
    "\n",
    "```python\n",
    "from unittest.mock import patch, MagicMock\n",
    "import json\n",
    "\n",
    "def fetch_json(url):\n",
    "    import urllib.request\n",
    "    with urllib.request.urlopen(url) as resp:\n",
    "        return json.loads(resp.read())\n",
    "\n",
    "with patch(\"urllib.request.urlopen\") as mock_open:\n",
    "    mock_resp = MagicMock()\n",
    "    mock_resp.__enter__.return_value = mock_resp\n",
    "    mock_resp.__exit__.return_value = False\n",
    "    mock_resp.read.return_value = b'{\"status\": \"ok\"}'\n",
    "    mock_open.return_value = mock_resp\n",
    "\n",
    "    data = fetch_json(\"https://example.com/api\")\n",
    "    assert data == {\"status\": \"ok\"}\n",
    "```\n",
    "\n",
    "The key lines are `mock_resp.__enter__.return_value = mock_resp` (so `as resp` binds to the mock) and `mock_resp.__exit__.return_value = False` (so exceptions are not suppressed).\n",
    ":::\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "ea393949",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [],
   "source": [
    "### Exercise: Mock a database lookup\n",
    "#   get_user(user_id) is supposed to look up a user in a database.\n",
    "#   format_greeting(user_id) calls get_user() and returns 'Hello, {name}!'\n",
    "#   Using patch, mock get_user so format_greeting never touches a real database:\n",
    "#     - patch get_user to return {'name': 'Alice'}\n",
    "#     - assert format_greeting(1) == 'Hello, Alice!'\n",
    "#     - assert get_user was called exactly once with argument 1\n",
    "#       (hint: mock_get.assert_called_once_with(1))\n",
    "### Your code starts here.\n",
    "\n",
    "\n",
    "\n",
    "### Your code ends here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "3f50e388",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "format_greeting(1) = 'Hello, Alice!'  ✓\n",
      "get_user called with: call(1)\n"
     ]
    }
   ],
   "source": [
    "### Solution\n",
    "from unittest.mock import patch\n",
    "\n",
    "def get_user(user_id):\n",
    "    raise RuntimeError('This would query a real database')\n",
    "\n",
    "def format_greeting(user_id):\n",
    "    user = get_user(user_id)\n",
    "    return f\"Hello, {user['name']}!\"\n",
    "\n",
    "with patch('__main__.get_user', return_value={'name': 'Alice'}) as mock_get:\n",
    "    result = format_greeting(1)\n",
    "    assert result == 'Hello, Alice!', f'Got: {result}'\n",
    "    mock_get.assert_called_once_with(1)\n",
    "    print(f'format_greeting(1) = {result!r}  ✓')\n",
    "    print(f'get_user called with: {mock_get.call_args}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b8e15ce",
   "metadata": {},
   "source": [
    "### Parametrized Tests\n",
    "\n",
    "Writing one test function per input case leads to repetitive code. `pytest.mark.parametrize` runs the same test logic across many input/expected pairs automatically.\n",
    "\n",
    "```python\n",
    "import pytest\n",
    "\n",
    "@pytest.mark.parametrize(\"input_val, expected\", [\n",
    "    (\"racecar\", True),\n",
    "    (\"hello\",   False),\n",
    "    (\"\",        True),\n",
    "])\n",
    "def test_is_palindrome(input_val, expected):\n",
    "    assert is_palindrome(input_val) == expected\n",
    "```\n",
    "\n",
    "Each row becomes a separate test case in the pytest report. Failed rows show the exact inputs that failed, making diagnosis fast."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "f34638f0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  is_palindrome('racecar') == True  →  PASS\n",
      "  is_palindrome('madam') == True  →  PASS\n",
      "  is_palindrome('hello') == False  →  PASS\n",
      "  is_palindrome('') == True  →  PASS\n",
      "  is_palindrome('a') == True  →  PASS\n",
      "\n",
      "5/5 test cases passed\n"
     ]
    }
   ],
   "source": [
    "# The @pytest.mark.parametrize decorator is designed to be discovered and run\n",
    "# by the pytest command-line runner, not executed inside a notebook kernel.\n",
    "# Rather than leave you with untestable code, we simulate the same logic below\n",
    "# with a plain loop — the test function body is identical to what you'd write\n",
    "# in a real test file.\n",
    "\n",
    "# ── In a real test file (test_palindrome.py) you would write: ──────────────\n",
    "# import pytest\n",
    "#\n",
    "# @pytest.mark.parametrize(\"s, expected\", [\n",
    "#     (\"racecar\", True),\n",
    "#     (\"madam\",   True),\n",
    "#     (\"hello\",   False),\n",
    "#     (\"\",        True),\n",
    "#     (\"a\",       True),\n",
    "# ])\n",
    "# def test_is_palindrome(s, expected):\n",
    "#     assert is_palindrome(s) == expected\n",
    "#\n",
    "# Then run: pytest test_palindrome.py -v\n",
    "# ──────────────────────────────────────────────────────────────────────────\n",
    "\n",
    "def is_palindrome(s):\n",
    "    return s == s[::-1]\n",
    "\n",
    "cases = [\n",
    "    (\"racecar\", True),\n",
    "    (\"madam\",   True),\n",
    "    (\"hello\",   False),\n",
    "    (\"\",        True),\n",
    "    (\"a\",       True),\n",
    "]\n",
    "\n",
    "# Manual equivalent: same logic, same visibility\n",
    "passed = 0\n",
    "for s, expected in cases:\n",
    "    result = is_palindrome(s)\n",
    "    status = \"PASS\" if result == expected else \"FAIL\"\n",
    "    print(f\"  is_palindrome({s!r}) == {expected!r}  →  {status}\")\n",
    "    if status == \"PASS\":\n",
    "        passed += 1\n",
    "\n",
    "print(f\"\\n{passed}/{len(cases)} test cases passed\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "a7767543",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "0/0 test cases passed\n"
     ]
    }
   ],
   "source": [
    "### Exercise: Add test cases for is_leap_year(year)\n",
    "#   A year is a leap year if divisible by 4,\n",
    "#   EXCEPT century years (divisible by 100) are not,\n",
    "#   UNLESS also divisible by 400.\n",
    "#   Add at least 5 (year, expected) tuples to `cases`:\n",
    "#     - divisible by 400 (e.g., 2000) → True\n",
    "#     - divisible by 100 but not 400 (e.g., 1900) → False\n",
    "#     - divisible by 4, not 100 (e.g., 2024) → True\n",
    "#     - not divisible by 4 (e.g., 2023) → False\n",
    "#     - one more case of your choice\n",
    "def is_leap_year(year):\n",
    "    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)\n",
    "\n",
    "cases = [\n",
    "    # (year, expected)\n",
    "    ### Your cases start here.\n",
    "\n",
    "    ### Your cases end here.\n",
    "]\n",
    "\n",
    "passed = 0\n",
    "for year, expected in cases:\n",
    "    result = is_leap_year(year)\n",
    "    status = 'PASS' if result == expected else 'FAIL'\n",
    "    print(f'  is_leap_year({year}) == {expected}  ->  {status}')\n",
    "    if status == 'PASS':\n",
    "        passed += 1\n",
    "print(f'\\n{passed}/{len(cases)} test cases passed')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "3e257272",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  is_leap_year(2000) == True  ->  PASS\n",
      "  is_leap_year(1900) == False  ->  PASS\n",
      "  is_leap_year(2024) == True  ->  PASS\n",
      "  is_leap_year(2023) == False  ->  PASS\n",
      "  is_leap_year(1600) == True  ->  PASS\n",
      "\n",
      "5/5 test cases passed\n"
     ]
    }
   ],
   "source": [
    "### Solution\n",
    "def is_leap_year(year):\n",
    "    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)\n",
    "\n",
    "cases = [\n",
    "    (2000, True),   # divisible by 400 -> leap\n",
    "    (1900, False),  # divisible by 100 but not 400 -> not leap\n",
    "    (2024, True),   # divisible by 4, not 100 -> leap\n",
    "    (2023, False),  # not divisible by 4 -> not leap\n",
    "    (1600, True),   # divisible by 400 -> leap\n",
    "]\n",
    "\n",
    "passed = 0\n",
    "for year, expected in cases:\n",
    "    result = is_leap_year(year)\n",
    "    status = 'PASS' if result == expected else 'FAIL'\n",
    "    print(f'  is_leap_year({year}) == {expected}  ->  {status}')\n",
    "    if status == 'PASS':\n",
    "        passed += 1\n",
    "print(f'\\n{passed}/{len(cases)} test cases passed')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1cf2407",
   "metadata": {},
   "source": [
    "### Test Coverage Basics\n",
    "\n",
    "**Test coverage** measures what fraction of your source code is exercised by your test suite. A line is \"covered\" if at least one test causes it to run.\n",
    "\n",
    "The standard tool is `coverage.py`, which integrates with pytest:\n",
    "\n",
    "```bash\n",
    "# Install (once)\n",
    "pip install pytest-cov\n",
    "\n",
    "# Run tests with coverage report\n",
    "pytest --cov=my_module --cov-report=term-missing\n",
    "```\n",
    "\n",
    "| Coverage % | Interpretation |\n",
    "|---|---|\n",
    "| < 50% | Probably missing whole branches or functions |\n",
    "| 50–80% | Reasonable for scripts; low for libraries |\n",
    "| 80–90% | Good target for most production code |\n",
    "| > 90% | High confidence; watch for diminishing returns |\n",
    "\n",
    "**Key insight**: 100% coverage does not mean your code is correct — it means every line ran, not that every case was tested correctly. Focus on covering *branches* (if/else, try/except), not just lines.\n",
    "\n",
    "```bash\n",
    "# Generate an HTML report to see exactly which lines are missed\n",
    "pytest --cov=my_module --cov-report=html\n",
    "open htmlcov/index.html   # macOS; use 'start' on Windows or 'xdg-open' on Linux\n",
    "# Or just open htmlcov/index.html directly in your browser.\n",
    "```"
   ]
  },
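  {
   "cell_type": "markdown",
   "id": "f7b3c2a1",
   "metadata": {},
   "source": [
    "Branch coverage, mentioned above, has direct tool support: `coverage.py` can track whether each `if`/`else` took both paths, not just whether each line ran. With `pytest-cov` you enable it with one flag:\n",
    "\n",
    "```bash\n",
    "pytest --cov=my_module --cov-branch --cov-report=term-missing\n",
    "```\n",
    "\n",
    "A branch that only ever went one way shows up as a partial branch in the report, even if every line in the function is nominally covered."
   ]
  },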
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "bed1beed",
   "metadata": {
    "tags": [
     "thebe-interactive"
    ]
   },
   "outputs": [],
   "source": [
    "### Exercise: Write pytest-style tests\n",
    "#   Write three test functions for `count_vowels(s)` defined below:\n",
    "#     1. test_count_vowels_typical: a string with multiple vowels (e.g., 'hello')\n",
    "#     2. test_count_vowels_none: a string with no vowels (e.g., 'gym')\n",
    "#     3. test_count_vowels_empty: the empty string should return 0\n",
    "#   Each function should use plain `assert` (no `self`, no class).\n",
    "#   After writing them, call each one to verify they pass.\n",
    "\n",
    "def count_vowels(s):\n",
    "    \"\"\"Return the number of vowels (a, e, i, o, u) in s (case-insensitive).\"\"\"\n",
    "    return sum(1 for c in s.lower() if c in 'aeiou')\n",
    "\n",
    "### Your code starts here.\n",
    "\n",
    "\n",
    "\n",
    "### Your code ends here.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "cc6bc28e",
   "metadata": {
    "tags": [
     "hide-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test_count_vowels_typical PASSED\n",
      "test_count_vowels_none PASSED\n",
      "test_count_vowels_empty PASSED\n"
     ]
    }
   ],
   "source": [
    "### Solution\n",
    "\n",
    "def count_vowels(s):\n",
    "    \"\"\"Return the number of vowels (a, e, i, o, u) in s (case-insensitive).\"\"\"\n",
    "    return sum(1 for c in s.lower() if c in 'aeiou')\n",
    "\n",
    "def test_count_vowels_typical():\n",
    "    assert count_vowels('hello') == 2\n",
    "    assert count_vowels('AEIOU') == 5\n",
    "\n",
    "def test_count_vowels_none():\n",
    "    assert count_vowels('gym') == 0\n",
    "    assert count_vowels('rhythm') == 0\n",
    "\n",
    "def test_count_vowels_empty():\n",
    "    assert count_vowels('') == 0\n",
    "\n",
    "for fn in [test_count_vowels_typical, test_count_vowels_none, test_count_vowels_empty]:\n",
    "    fn()\n",
    "    print(f'{fn.__name__} PASSED')\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv (3.13.7)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
