Testing

The Python unittest module provides a powerful unit testing framework that can be extended to test FPGA firmware designs. ChipTools provides the ChipToolsTest class, which extends unittest.TestCase to allow automated execution and checking of FPGA firmware simulations. Any tests that you define should inherit from ChipToolsTest, which can be found in chiptools.testing.testloader. Because ChipTools re-uses the existing Python unittest framework, the rich ecosystem of testing tools provided by the Python community can be used on your FPGA designs.

Before attempting to create your own FPGA unit tests in ChipTools, you should first acquaint yourself with the Python unittest framework to understand how the test flow and assertions work.

Test Flow

The unittest TestCase defines a set of hook functions that are called at different points during a test run. These allow the designer to prepare the test environment and inputs, execute the tests, and then clean up temporary files and close unused processes. Full details of the setUp/tearDown functions can be found in the Python unittest docs; a typical test flow is given below, followed by a sketch showing where these hooks fit in a test class:

  1. setUpModule
  2. setUpClass
  3. setUp
  4. <user_test_1>
  5. tearDown
  6. setUp
  7. <user_test_2>
  8. tearDown
  9. tearDownClass
  10. tearDownModule
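
The following sketch (with placeholder library and entity names) shows where these hooks fit in a ChipToolsTest class. Note that ChipToolsTest uses setUpClass to prepare the simulation environment, so an override of that method should call the parent implementation:

from chiptools.testing.testloader import ChipToolsTest


class TestFlowExample(ChipToolsTest):
    duration = 0  # Run until the testbench terminates
    library = 'my_test_lib'  # Placeholder testbench library
    entity = 'my_testbench'  # Placeholder entity to simulate

    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class. ChipToolsTest uses
        # this hook to prepare the simulation environment, so call the
        # parent implementation when overriding it.
        super(TestFlowExample, cls).setUpClass()

    def setUp(self):
        # Runs before each test method; prepare per-test inputs here.
        pass

    def test_example(self):
        # Methods named 'test_*' are collected as individual tests.
        return_code, stdout, stderr = self.simulate()
        self.assertEqual(return_code, 0)

    def tearDown(self):
        # Runs after each test method; clean up per-test outputs here.
        pass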

Using ChipToolsTest

Assertion Based Tests

You may already have testbenches for a design that use a self-checking approach and print messages to the simulator transcript. Incorporating tests like these into the unit test framework is straightforward, as ChipTools provides access to the simulator stdout and stderr streams as well as the simulator return code:

import re
from chiptools.testing.testloader import ChipToolsTest


class TestSimulatorStdout(ChipToolsTest):
    duration = 0  # Run forever
    library = 'my_test_lib'  # Testbench library
    entity = 'my_testbench'  # Entity to simulate

    def test_simulator_stdout(self):
        # Run the simulation
        return_code, stdout, stderr = self.simulate()

        # Check return code
        self.assertEqual(return_code, 0)

        # Check stdout for 'Error:' using regex
        errors = re.search('.*Error:.*', stdout)
        self.assertIsNone(errors)

This is one of the simplest tests you can define, although it is also fairly limited. ChipTools makes this approach more flexible by providing a way to override generics/parameters before the test is run:

import re
from chiptools.testing.testloader import ChipToolsTest


class TestSimulatorStdout(ChipToolsTest):
    duration = 0  # Run forever
    library = 'my_test_lib'  # Testbench library
    entity = 'my_testbench'  # Entity to simulate
    generics = {'width': 3}  # Default generic width to 3

    def check_simulator_stdout(self):
        # Run the simulation
        return_code, stdout, stderr = self.simulate()

        # Check return code
        self.assertEqual(return_code, 0)

        # Check stdout for 'Error:' using regex
        errors = re.search('.*Error:.*', stdout)
        self.assertIsNone(errors)

    def test_width_5(self):
        self.generics['width'] = 5
        self.check_simulator_stdout()

    def test_width_12(self):
        self.generics['width'] = 12
        self.check_simulator_stdout()

By using simple test cases like these you are able to re-use your existing self-checking testbenches and define new test cases for them by modifying parameters/generics or stimulus files through ChipTools.
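
Stimulus files can be varied in the same way. The sketch below writes a different stimulus file before each simulation run; the file name 'stimulus.txt' and its one-value-per-line format are hypothetical and must match whatever your testbench actually reads:

import os
from chiptools.testing.testloader import ChipToolsTest


class TestWithStimulus(ChipToolsTest):
    duration = 0  # Run forever
    library = 'my_test_lib'  # Testbench library
    entity = 'my_testbench'  # Entity to simulate

    def write_stimulus(self, values):
        # The testbench is assumed to read 'stimulus.txt' from the
        # simulation directory; adjust the name and format to suit.
        path = os.path.join(self.simulation_root, 'stimulus.txt')
        with open(path, 'w') as f:
            for value in values:
                f.write('{0}\n'.format(value))

    def test_ramp_input(self):
        self.write_stimulus(range(16))
        return_code, stdout, stderr = self.simulate()
        self.assertEqual(return_code, 0)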

Model Based Tests

One of the big benefits of using Python is access to a wide range of open source libraries that can assist with test development; for example, you could use Python to model the expected behavior of a system such as a signal processing pipeline or cryptographic core. Such models can be incorporated into the ChipTools test framework and used to generate stimulus for your testbench; the simulation response can then be checked against the model response to determine whether or not the implementation is correct:

import os
import numpy as np
from chiptools.testing.testloader import ChipToolsTest

class TestFastFourierTransform(ChipToolsTest):

    duration = 0  # Run forever
    library = 'my_test_lib'  # Testbench library
    entity = 'my_testbench'  # Entity to simulate
    N = 1024  # Our fixed FFT size
    generics = {'n': N}  # Pass the FFT size to the testbench

    def test_noise(self):
        values = np.random.randint(0, 2**16-1, self.N)
        self.run_fft_simulation(values)

    def test_sinusoid(self):
        f = 10
        values = np.sin(2*np.pi*f*np.linspace(0, 1, self.N))
        self.run_fft_simulation(values)

    def run_fft_simulation(self, values):
        out_path = os.path.join(self.simulation_root, 'fft_out.txt')
        in_path = os.path.join(self.simulation_root, 'fft_in.txt')

        # Create the stimulus file
        with open(in_path, 'w') as f:
            for value in values:
                f.write('{0}\n'.format(value))

        # Run the simulation
        return_code, stdout, stderr = self.simulate()

        # Check return code
        self.assertEqual(return_code, 0)

        # Open the simulator response file that our testbench created.
        with open(out_path, 'r') as f:
            actual = [float(x) for x in f.readlines()]

        # Run the FFT model to generate the expected response
        expected = np.fft.fft(values)

        # Compare the actual and expected values (a sketch of this
        # helper is given below the example).
        self.compare_fft_response(actual, expected)

The example above demonstrates how you might check a common signal processing application such as a Fast Fourier Transform. Using this approach, a large suite of stimulus can be created to thoroughly check the functionality of the design.
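
The compare_fft_response helper called at the end of run_fft_simulation is not shown above. A minimal sketch of such a method, assuming the testbench writes the magnitude of each FFT bin, might look like this:

def compare_fft_response(self, actual, expected, tolerance=0.01):
    # Belongs on the TestFastFourierTransform class above; relies on
    # the numpy import from the example. 'expected' is the complex
    # output of np.fft.fft, so compare bin magnitudes, and use a
    # relative tolerance to absorb fixed-point quantisation error in
    # the hardware implementation (0.01 is an arbitrary example value).
    np.testing.assert_allclose(
        np.abs(np.asarray(actual)),
        np.abs(np.asarray(expected)),
        rtol=tolerance,
    )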

External Test Runners

Perhaps you would like to set up a continuous integration system such as Jenkins to execute your tests on a nightly basis. ChipTools makes this easy by allowing your unit tests to be run by external test runners such as Nosetests or Pytest. To enable this, simply add a project attribute to the test class giving the path to a valid ChipTools XML project file that defines the files and libraries required by the simulation environment:

import numpy as np
import os
from chiptools.testing.testloader import ChipToolsTest

class TestFastFourierTransform(ChipToolsTest):

    duration = 0
    library = 'my_test_lib'
    entity = 'my_testbench'
    base = os.path.dirname(__file__)
    project = os.path.join(base, 'my_project.xml')

Test cases that do not provide a project attribute cannot be run by an external runner.
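
With the project attribute in place, the test module behaves like any other Python test module; as an example invocation (test_fft.py is a placeholder name for the module containing the class above), running nosetests test_fft.py or py.test test_fft.py from the directory containing the module should collect and execute the tests.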

ChipToolsTest Class Detail

class chiptools.testing.testloader.ChipToolsTest(methodName='runTest')

The ChipToolsTest class is derived from unittest.TestCase and provides a base class for your unit tests to allow them to make full use of ChipTools.

When creating a unit test class you should override the duration, generics, library and entity attributes to define the test configuration for the simulator. Individual tests can redefine these attributes at run-time to provide a powerful testing mechanism for covering different configurations.

A brief example of a basic unit test case is given below:

>>> import os
>>> from chiptools.testing.testloader import ChipToolsTest
>>> class MyBasicTestCase(ChipToolsTest):
...     duration = 0  # Run forever
...     generics = {'data_width' : 3}  # Set data-width to 3
...     library = 'lib_tb_max_hold'  # Testbench library
...     entity = 'tb_max_hold'  # Entity to simulate
...     # Defining a path to a project allows us to run this test case
...     # with Nosetests etc. as well as through ChipTools.
...     project = os.path.join('max_hold.xml')
...     # The setUp method is called at the beginning of each test:
...     def setUp(self):
...         # Do individual test set-up here
...         pass
...     # Methods starting with 'test_' are considered test cases:
...     def test_max_hold(self):
...         # Run the simulator
...         return_code, stdout, stderr = self.simulate()
...         self.assertEqual(return_code, 0)  # Check error code
...         # More advanced checks could search stdout/stderr for
...         # assertions, or read output files and compare the
...         # response to a Python model.
...     # The tearDown method is called at the end of each test:
...     def tearDown(self):
...         # Clean up after your tests here
...         pass

For a complete example refer to the Max Hold example in the examples folder.

duration = 0

The duration attribute defines the time in seconds that the simulation should run for, if the chosen simulator supports this as an argument during execution. If a duration of 0 is specified, the simulation will run until it is terminated by the testbench. For example, to fix the simulation time at 10 ms, set duration to 10e-3.

entity = None

The entity attribute defines the name of the top level component to be simulated when running this test. The entity should name a valid design unit that has been compiled as part of the project.

generics = {}

The generics attribute is a dictionary of parameter/generic names and associated values. These key, value pairs will be passed to the simulator to override top-level generics or parameters to customise the test environment:

>>> generics = {
...     'data_width' : 3,
...     'invert_bits': True,
...     'test_string': 'hello',
...     'threshold': 0.33,
... }

The generics attribute can also be used to dynamically specify parameters for individual tests to check different configurations in your testbench:

>>> def test_32_bit_bus(self):
...     self.generics['data_width'] = 32
...     self.simulate()
>>> def test_16_bit_bus(self):
...     self.generics['data_width'] = 16
...     self.simulate()
static get_environment(project, tool_name=None)

Return the simulation environment items from the supplied project instance as a tuple of (simulator, simulation_root, libraries).
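
For example, the returned tuple could be unpacked as follows (a sketch; test cases do not normally call this directly, as setUpClass handles environment loading, and project here is assumed to be a loaded ChipTools Project instance):

>>> simulator, simulation_root, libraries = ChipToolsTest.get_environment(project)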

library = None

The library attribute defines the name of the library in which the top level component to be simulated exists.

load_environment(project, tool_name=None)

Initialise the TestCase simulation environment using the supplied Project reference so that the individual tests implemented in this TestCase are able to compile and simulate the design.

project = None

The project attribute is optional, but if used it should supply an absolute path to a valid ChipTools Project XML file that defines the libraries and source files that make up the design that this test case belongs to. This attribute is required when the test case is executed directly by an external test runner instead of ChipTools, as it will be used to prepare the simulation environment for the external test runner.

The following provides a convenient way of setting the project path so that your test can be run from any directory:

>>> import os
>>> from chiptools.testing.testloader import ChipToolsTest
>>> class MyUnitTest(ChipToolsTest):
...     base = os.path.dirname(__file__)
...     # Now use os.path.join to build a relative path to the project.
...     project = os.path.join(base, '..', 'my_project.xml')
classmethod setUpClass()

The setUpClass method prepares the ChipTools simulation environment if it has not already been loaded.

If this test case is loaded via the ChipTools Project API it will be initialised via a call to the load_environment method, which pulls the simulation environment information from the parent Project instance.

If this test case is loaded via an external tool such as Nosetests the setUpClass method will attempt to load the project file pointed to by the project path stored in the project attribute. When you create your test case you can specify this attribute in your test class to allow an external runner like Nosetests to call your test cases.

If the environment was not already initialised by ChipTools and a valid project file path is not stored in the project attribute, this method will raise an EnvironmentError and cause your test to fail.

This method overrides the unittest.TestCase.setUpClass classmethod, which is called once before the tests in a TestCase class are run.

simulate()

Launch the simulation tool in console mode to execute the testbench.

The simulation tool used and the arguments passed during simulation are defined by the test environment configured by the test case and the Project settings. When the simulation is complete, this method returns a tuple of (return_code, stdout, stderr), which can be used to determine whether the test was a success or failure. For example, your testbench may use assertions to print messages during simulation; your Python TestCase could then use a regex to match success or failure criteria in the stdout string:

>>> def test_stdout(self):
...     return_code, stdout, stderr = self.simulate()
...     # Use an assertion to check for a negative result on a search
...     # for 'Error:' in the simulator stdout string.
...     self.assertIsNone(re.search('.*Error:.*', stdout))
simulation_root

The simulation_root property is an absolute path to the directory where the simulator is invoked when simulating the testbench. Any inputs required by the testbench, such as stimulus files, should be placed in this directory by your TestCase. Similarly, any outputs produced by the testbench will be placed in this directory.

For example, to build paths to a testbench input and output file you could do the following:

>>> def setUp(self):
...     self.tb_in = os.path.join(self.simulation_root, 'input.txt')
...     self.tb_out = os.path.join(self.simulation_root, 'output.txt')