Timing Tests in Python for Fun and Profit

I was pushing some changes a couple of days ago and, as I always do, I ran the tests. I sat back in my chair as the dots raced across the screen, when suddenly I realized something was off. "The OS is probably running some updates in the background or something," I said to myself, and ran the tests again just to be sure. I watched as the dots crossed the screen and there it was again: I have a slow test!


The Basics

To get the ball rolling, let's create a simple test case with a fast test and a slow test:

import time
import unittest

class SlowTestCase(unittest.TestCase):
    def test_should_run_fast(self):
        self.assertEqual(1, 1)

    def test_should_run_slow(self):
        time.sleep(0.5)
        self.assertEqual(1, 1)

Running this script from the command line produces the following output:

 

> python -m unittest timing.py
..
Ran 2 tests in 0.502s
OK

I beg your pardon, unittest, but this is clearly not OK: 0.5s for two tests?

To find out which tests are slow, we need to measure the time it takes each test to execute.

A Python unittest.TestCase has hooks that execute in the following order:

 

> setUpClass
    > setUp
        > test_*
    > tearDown
> tearDownClass

If we want to time a single test (test_*) we need to start a timer in setUp and stop it in tearDown:

 

import time
import unittest
class SlowTestCase(unittest.TestCase):
    def setUp(self):
        self._started_at = time.time()
    def tearDown(self):
        elapsed = time.time() - self._started_at
        print('{} ({}s)'.format(self.id(), round(elapsed, 2)))
    def test_should_run_fast(self):
        self.assertEqual(1, 1)
    def test_should_run_slow(self):
        time.sleep(0.5)
        self.assertEqual(1, 1)

This produces the following output:

 

> python -m unittest timing.py
__main__.SlowTestCase.test_should_run_fast (0.0s)
.__main__.SlowTestCase.test_should_run_slow (0.5s)
.
Ran 2 tests in 0.503s
OK


Great! We got the timing for each test, but we really want only the slow ones.
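
A small aside first: time.time() follows the wall clock, which can jump if the system clock is adjusted mid-run. For measuring durations, time.perf_counter() is the safer clock. Swapping it in (my suggestion, not part of the original snippets) is mechanical:

```python
import time
import unittest

class SlowTestCase(unittest.TestCase):
    def setUp(self):
        # perf_counter is monotonic: immune to system clock adjustments
        self._started_at = time.perf_counter()

    def tearDown(self):
        elapsed = time.perf_counter() - self._started_at
        print('{} ({}s)'.format(self.id(), round(elapsed, 2)))

    def test_should_run_fast(self):
        self.assertEqual(1, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlowTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```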

 


Let's say a slow test is one that takes longer than 0.3s:

 

SLOW_TEST_THRESHOLD = 0.3
class SlowTestCase(unittest.TestCase):
    …
    def tearDown(self):
        elapsed = time.time() - self._started_at
        if elapsed > SLOW_TEST_THRESHOLD:
            print('{} ({}s)'.format(self.id(), round(elapsed, 2)))


And the output:

 

> python -m unittest timing.py 
.__main__.SlowTestCase.test_should_run_slow (0.5s)
.
Ran 2 tests in 0.503s
OK


Awesome! We got exactly what we wanted, but it's still incomplete. We are good developers, and good developers are lazy. We don't want to go through and edit every single test case, so we need a more robust solution.
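
One middle-ground worth noting before reaching for the runner (my aside, not a step the article takes): the timing setUp/tearDown can live in a shared base class, so the logic is written once. It still forces every TestCase to inherit from it, which is why a runner-level solution is nicer:

```python
import time
import unittest

SLOW_TEST_THRESHOLD = 0.3

class TimedTestCase(unittest.TestCase):
    """Shared base class: subclasses get slow-test reporting for free."""

    def setUp(self):
        self._started_at = time.time()

    def tearDown(self):
        elapsed = time.time() - self._started_at
        if elapsed > SLOW_TEST_THRESHOLD:
            print('{} ({}s)'.format(self.id(), round(elapsed, 2)))

class MyTestCase(TimedTestCase):
    # subclasses that define their own setUp must call super().setUp()
    def test_something(self):
        self.assertEqual(1, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```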

 


The Runner

One of the roles of the TestRunner is to print the test results to an output stream. The runner uses a TestResult class to format the results. The unittest module comes with a default TextTestRunner and a default TextTestResult.

Let's implement a custom TestResult to report slow tests:

 

import time
from unittest.runner import TextTestResult
SLOW_TEST_THRESHOLD = 0.3
class TimeLoggingTestResult(TextTestResult):
    def startTest(self, test):
        self._started_at = time.time()
        super().startTest(test)
    def addSuccess(self, test):
        elapsed = time.time() - self._started_at
        if elapsed > SLOW_TEST_THRESHOLD:
            name = self.getDescription(test)
            self.stream.write(
                "\n{} ({:.03}s)\n".format(
                    name, elapsed))
        super().addSuccess(test)


Almost identical to what we had before, but using different hooks: instead of setUp we use startTest, and instead of tearDown we use addSuccess (note that addSuccess only fires for passing tests).

The default TextTestRunner uses TextTestResult. To use a different TestResult, we create an instance of TextTestRunner with our result class:

 

import unittest
from unittest import TextTestRunner

if __name__ == '__main__':
    test_runner = TextTestRunner(resultclass=TimeLoggingTestResult)
    unittest.main(testRunner=test_runner)


And the output:

 

> python runner.py
.
test_should_run_slow (__main__.SlowTestCase) (0.501s)
.
Ran 2 tests in 0.501s
OK


We get a nice report without having to make any changes to existing test cases.

Profit!


Can we do better?

Right now we have a bunch of messages scattered in random places across the screen. What if we could get a proper report with all the slow tests? Well, we can!

Let's start by making our TestResult store the timings instead of reporting them:

 

import time
from unittest.runner import TextTestResult
class TimeLoggingTestResult(TextTestResult):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.test_timings = []
    def startTest(self, test):
        self._test_started_at = time.time()
        super().startTest(test)
    def addSuccess(self, test):
        elapsed = time.time() - self._test_started_at
        name = self.getDescription(test)
        self.test_timings.append((name, elapsed))
        super().addSuccess(test)
    def getTestTimings(self):
        return self.test_timings


The test result now holds a list of tuples containing the test name and the elapsed time. Moving over to our custom TestRunner:

 

import unittest
class TimeLoggingTestRunner(unittest.TextTestRunner):
    def __init__(self, slow_test_threshold=0.3, *args, **kwargs):
        self.slow_test_threshold = slow_test_threshold
        super().__init__(resultclass=TimeLoggingTestResult, *args, **kwargs)
    def run(self, test):
        result = super().run(test)
        self.stream.writeln(
            "\nSlow Tests (>{:.03}s):".format(
                self.slow_test_threshold))
        for name, elapsed in result.getTestTimings():
            if elapsed > self.slow_test_threshold:
                self.stream.writeln(
                    "({:.03}s) {}".format(
                        elapsed, name))
        return result



Let's break it down:

  • We've turned SLOW_TEST_THRESHOLD into an __init__ parameter, which is much cleaner.
  • We've set resultclass to our custom TestResult class.
  • We've overridden run and added our custom "slow test" report.
  • This is what the output looks like (I added some slow tests to illustrate):
> python timing.py
.....
Ran 5 tests in 1.706s
OK
Slow Tests (>0.3s):
(0.501s) test_should_run_slow (__main__.SlowTestCase)
(0.802s) test_should_run_very_slow (__main__.SlowTestCase)
(0.301s) test_should_run_slow_enough (__main__.SlowTestCase)


Now that we have the timing information, we can use it to generate interesting reports: sort by elapsed time, show potential time savings, and highlight the slowest tests.
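
For example, sorting by elapsed time is a one-liner over the (name, elapsed) tuples that getTestTimings returns (a standalone sketch with made-up sample data, not wired into the runner):

```python
# sample (name, elapsed) tuples, as collected by TimeLoggingTestResult
test_timings = [
    ('test_should_run_slow', 0.501),
    ('test_should_run_fast', 0.001),
    ('test_should_run_very_slow', 0.802),
]

# slowest first
by_elapsed = sorted(test_timings, key=lambda t: t[1], reverse=True)
for name, elapsed in by_elapsed:
    print('({:.03}s) {}'.format(elapsed, name))
```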

How to use this with Django?

Django has its own test runner so we need to make some adjustments:

 

# common/tests/runner.py
from django.test.runner import DiscoverRunner

# TimeLoggingTestResult is the class we defined earlier
class TimeLoggingTestRunner(DiscoverRunner):
    def get_resultclass(self):
        return TimeLoggingTestResult


And to make Django use our custom runner, we add the following:

 

# settings.py
TEST_RUNNER = 'common.tests.runner.TimeLoggingTestRunner'


Final Words

Go make some tests faster!
