Testing the Testbench
No one in their right mind would release a software product without having adequately tested the code. Yet, we release testbenches for production use all the time without thorough testing or even any testing at all. Why is that OK? Here are some reasons I have heard.
Time pressures. If we miss the tapeout deadline, heads will roll. There are important customers waiting for our products; we have to get started on the next project; costs will skyrocket. We don't care whether all the i's are dotted and t's are crossed in the testbench. Can we reasonably demonstrate that the design works? Do the regressions all pass? It's far more important to hit the date than it is to build a bulletproof testbench.
It's expensive. A testbench is a complex multi-threaded piece of software with many separate elements. Testing each element and the system together both with and without the RTL design will add many weeks to the schedule and require a significant number of engineers to complete. This is just unnecessary expense.
It's hard to test a testbench. The inherent complexity of testbench code makes it hard to test. Exercising the testbench without an attached DUT is very difficult.
Historical reasons. Traditionally testbenches were throwaway code, particularly in the days of ASIC designs. Testing them did not seem very important. It has been ingrained in the corporate culture that testbenches are second class citizens, particularly compared to RTL code. We sell products constructed from RTL, but the testbench is just an internal artifact that no one really sees. What's the point of spending extra time and money to make them perfect? That's just wasted effort.
It's All About Reuse
These days testbenches and testbench elements are not throwaway code. Testbenches represent a significant investment in software development, an investment that is expected to pay dividends. If the testbench is not well tested then you are playing with fire. You are risking the next project in order to meet a current deadline. You may even be risking the current project if your poorly tested testbench allows bugs to escape.
Most importantly, testbench code should be reusable. If you are going to reuse it then you must test it. No matter how well generalized and parameterized your code is, it is not reusable if it is not tested. Code only works as well as it is tested. Untested code is unreliable and therefore effectively non-functional. Someone once wisely said to me that the difference between a toy and a product is the quality of the test suite.
What Can We Do About It?
The testbench is your last line of defense before you spend a lot of time and money to make masks and ramp up the fab. Defects not found by the testbench will escape to the product. Unless you test your testbenches, you don't know what they are really telling you about your design. Put together a test plan for testbench code and allocate time in the schedule to build and run the tests. Add testbench tests to your regression suite.
Here are some things you can think about as you plan how to test your testbench.
Unit Testing. Test each element separately. Write tests for each class, or each small collection of classes, that performs a single function. For example, write unit tests for sequences, sequence items, and scoreboard classes.
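In a UVM environment these unit tests would be written in SystemVerilog; the sketch below uses Python only to illustrate the pattern. The `Packet` class and its parity field are hypothetical stand-ins for a sequence item, tested in complete isolation from any DUT.

```python
# Hypothetical sketch: unit-testing a "sequence item" class in isolation.
# Packet and its parity field are invented for illustration.
import unittest

class Packet:
    """Minimal stand-in for a sequence item with a computed parity byte."""
    def __init__(self, payload):
        self.payload = list(payload)
        self.parity = self.compute_parity()

    def compute_parity(self):
        # XOR of all payload bytes
        p = 0
        for byte in self.payload:
            p ^= byte
        return p

    def pack(self):
        # Serialize payload followed by the parity byte
        return self.payload + [self.parity]

class PacketTest(unittest.TestCase):
    def test_parity_of_known_payload(self):
        self.assertEqual(Packet([0x0F, 0xF0]).parity, 0xFF)

    def test_pack_appends_parity(self):
        pkt = Packet([1, 2, 3])
        self.assertEqual(pkt.pack(), [1, 2, 3, pkt.parity])

# Run with: python -m unittest <this_file>
```

The point is that the sequence item's behavior (parity computation, packing) is verified on its own, before it is ever driven at a DUT.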
Transactors. If you purchase VIP on the commercial market ask your vendor to see their test plan. If your vendor refuses, find another vendor. If you build transactors in-house for standard or custom protocols, test transactors the same way that you would verify a piece of hardware. Create a coverage model based on the protocol and write directed and randomized tests that will fill out the coverage model.
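The coverage-model idea can be sketched in a few lines. The protocol below (two opcodes, three burst lengths) is invented for illustration; the pattern is the same one a real functional coverage model follows: define the bins, sample each observed transaction, and measure how much of the cross has been hit.

```python
import random

# Hypothetical protocol: bins are the cross of opcode x burst length.
OPCODES = ["READ", "WRITE"]
BURSTS = [1, 4, 8]

def run_random_stimulus(n, seed=0):
    """Generate n randomized transactions and record which bins were hit."""
    rng = random.Random(seed)
    covered = set()
    for _ in range(n):
        txn = (rng.choice(OPCODES), rng.choice(BURSTS))
        covered.add(txn)  # a monitor would sample this bin
    return covered

def coverage_percent(covered):
    total = len(OPCODES) * len(BURSTS)
    return 100.0 * len(covered) / total
```

Randomized stimulus fills most bins quickly; directed tests then target whatever the report shows is still empty.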
Scoreboards. Scoreboards are notoriously difficult to test, mostly because they can be quite complex. A scoreboard must mimic some or all of the functionality of the DUT at some level of abstraction and the complexity is related to the complexity of the behaviors being modeled. It can be difficult to test a scoreboard as a single entity. A good approach is to test the parts (see unit testing above). The parts may include:
- golden models. Golden models can be validated to ensure that requests result in the correct responses.
- plumbing. Make sure that transactions get from here to there correctly.
- comparators. Make sure that transaction comparisons are done correctly.
- data structures. Make sure that any lists, tables, queues, trees, etc. used in the scoreboard work correctly.
- asynchronous events (interrupts, resets, etc.). Does the scoreboard respond correctly to asynchronous events?
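Two of the parts above, the golden model and the comparator, can be unit tested in a few lines. This is a hedged Python sketch with an invented reference model (an 8-bit wrapping adder); the important habit it shows is checking the model against hand-computed results and confirming the comparator actually flags mismatches rather than silently passing.

```python
# Hypothetical sketch: unit-testing two scoreboard parts in isolation.
def golden_add(a, b, width=8):
    """Reference model: wrapping add, as the (invented) DUT is specified."""
    return (a + b) % (1 << width)

def compare(expected, actual):
    """Comparator: must report a mismatch, never silently pass."""
    if expected == actual:
        return "PASS"
    return f"FAIL: expected {expected}, got {actual}"

# Golden model checked against hand-computed values
assert golden_add(1, 2) == 3
assert golden_add(200, 100) == 44   # 300 wraps past 255

# Comparator checked both ways: it must pass matches AND flag mismatches
assert compare(44, 44) == "PASS"
assert compare(44, 45).startswith("FAIL")
```

A comparator that is never tested with a deliberate mismatch is a common source of false-passing testbenches.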
Stimulus. Run sequences to ensure they generate the correct stream of transactions.
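Checking a sequence's output stream requires no DUT at all. The sketch below is a hypothetical Python analog of a sequence: a generator whose emitted transactions are captured and compared against the stream the sequence is specified to produce.

```python
# Hypothetical sketch: verifying a sequence's transaction stream
# with no DUT attached. The write-then-read policy is invented.
def write_read_sequence(addrs):
    """For each address, emit a WRITE followed by a READ of that address."""
    for a in addrs:
        yield ("WRITE", a)
        yield ("READ", a)

# Capture the stream and check it against the specified ordering
stream = list(write_read_sequence([0x10, 0x20]))
assert stream == [("WRITE", 0x10), ("READ", 0x10),
                  ("WRITE", 0x20), ("READ", 0x20)]
```

The same capture-and-compare approach extends to randomized sequences: fix the seed, capture the stream, and check its structural properties (ordering, legality) rather than exact values.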
While it's a bit more work to build a test suite for your testbenches and testbench elements, the payoff is tremendous. Tested code is more robust and reliable than code that has not been tested. Your confidence increases that the testbench is providing you good information about the state of the design. Schedules become more predictable. This can only lead to increased peace of mind. What's not to like?