Address random tests' inability to spot regression

Posted: 07 Nov 2012

Keywords: ASIC, random tests, regression testing

Most ASIC companies utilise random tests not only to verify new designs but also for regression testing. Using random tests for regression testing is a great idea for coverage, because the randomness ensures that total coverage improves over time. Instead of running the same tests every night, each night's regression suite is slightly different, with different seeds. However, improving coverage is not what regression testing is specifically about. The purpose of regression testing is to quickly identify dips in quality, i.e. regressions, so that they can be addressed and the quality kept high. And here random tests have one downside: they cannot identify regressions. There are, however, ways to address this issue.
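As a rough sketch of such a nightly flow, the Python script below runs each test with a freshly picked seed and logs the seed alongside the result. The `run_sim` command line and the test names are hypothetical placeholders rather than any particular tool's interface.

```python
import random
import subprocess

TESTS = ["axi_stress", "dma_burst", "cache_evict"]  # hypothetical test names

def run_sim(test, seed):
    """Launch one simulation; return True if the test passed."""
    result = subprocess.run(
        ["run_sim", "--test", test, "--seed", str(seed)],  # placeholder CLI
        capture_output=True,
    )
    return result.returncode == 0

def nightly_regression():
    """Run every test once with a fresh random seed and log the seed."""
    for test in TESTS:
        seed = random.getrandbits(32)   # a new seed every night
        passed = run_sim(test, seed)
        # Logging the seed is essential: without it the failure can neither be
        # reproduced nor re-run on older revisions later on.
        print(f"{test} seed={seed} {'PASS' if passed else 'FAIL'}")

if __name__ == "__main__":
    nightly_regression()
```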

Is it better or worse?
To be more precise, random tests cannot distinguish between a dip in quality and increased coverage. A random test may fail because it hit a new, never-before-tested corner case that reveals a bug in a module designed long ago, in a completely different project, by professors and PhDs. It is great news to stumble upon such a corner case and iron it out, hopefully before a customer notices it. Alternatively, the random test may fail because John accidentally sat on his keyboard while checking in his code update (he is very agile). This caused unexpected behaviour in functions he was not even working on (sitting on keyboards often does). This is a classic regression. In the first case you have great news to report: an old corner case has been identified and you are a hero. In the second case, you have to hit the panic button and hold the release. Distinguishing between good and bad news is always welcome, not only in the world of regression testing, but alas, random tests cannot help you here. A random test just tells you that something failed; it cannot say whether the problem is new or old.

Fast fixes for high quality
Another difference between a regression bug and a new test that covers a new corner case is that regression bugs are comparatively easy to fix. If you can point out that a developer made an error in a specific update, the fix is often straightforward. Identifying problems as regressions, and better still, linking each problem to the revision(s) where it was introduced, results in faster fixes. The faster you fix regression bugs, the higher the quality of the design during development, which in turn leads to earlier time to market, as the developers' work is not hampered by quality dips. Separating regression bugs from failures caused by new test scenarios therefore also brings a substantial productivity gain.

Figure 1: Diff cannot be used for random tests.

Diff does not work
A directed test, as opposed to a random test, is good at identifying regressions. If a directed test passed earlier but fails now, then you have most probably identified a regression. Comparing the revision database today, when the test fails, with its state at some earlier point when the test passed makes it possible to narrow down when the problem occurred. You can basically do a diff between the good result and the bad result, both in terms of log files and the revision database, and draw conclusions. The cause of the quality dip, i.e. the regression, is one of the updates made to the revision database in this time window. You don't know exactly which one, but you have a list of changes and a limited set of people to blame. Directed tests are great at identifying regressions. They do not provide the steadily improving coverage over time that random tests do, but when it comes to identifying regressions they excel, and a diff between pass and failure gives you a lot of useful information.
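For illustration, a minimal sketch of this diff step is shown below, assuming the revision database is a git repository and using hypothetical tag names for the last passing and the first failing nightly run. It simply lists the check-ins made in the suspect time window.

```python
import subprocess

def commits_between(good_rev, bad_rev):
    """List the changes checked in after the last passing run and up to the
    first failing run: these are the candidate causes of the regression."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{good_rev}..{bad_rev}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

# Example: the directed test passed at the 'nightly-good' tag and fails at
# 'nightly-bad' (hypothetical tag names).
for line in commits_between("nightly-good", "nightly-bad"):
    print(line)
```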

Diff doesn't work at all with random tests (figure 1). If a random test passed yesterday but failed today with a different seed, this can be due to a regression in quality, to increased coverage, or the test may even be illegal. An illegal test will probably lead to a constraint being added to eliminate that type of test, whereas regressions and coverage improvements lead to fixes in the design under test. For random tests we must find a different solution.

Backtracking is the way forward
To draw conclusions about why a random test failed, we must retest the very same test on older revisions. This means rerunning the failing test, using the same seed, on older revisions in order to identify when the problem started to arise. It is the only way to compare the test results on older revisions with the results on the latest revision. Once you have rerun the same test on an old revision, you can make the same comparison you would with a directed test. If the same test with the same seed passes on an older revision, then you know a regression has occurred. If the same test and seed have always failed, then you know this is a new test, which in turn may either be catching a new corner case or be an illegal test. Either way, you can now distinguish between new tests and regressions in quality.
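A minimal sketch of this backtracking loop is shown below, assuming a git revision database and the same hypothetical `run_sim` wrapper as earlier. It steps backwards through older revisions, re-running the failing test with the same seed, and reports either the newest revision that still passes (a regression) or that the test has never passed (a new test).

```python
import subprocess

def checkout_revision(rev):
    """Check out an older revision of the design (assumes a git repository)."""
    subprocess.run(["git", "checkout", rev], check=True)

def run_sim(test, seed):
    """Re-run the simulation; return True on a pass (placeholder CLI)."""
    result = subprocess.run(
        ["run_sim", "--test", test, "--seed", str(seed)],
        capture_output=True,
    )
    return result.returncode == 0

def backtrack(test, seed, revisions):
    """`revisions` is ordered newest first, with the failing revision at
    index 0. Returns the newest older revision where the same test and seed
    still pass, meaning the regression was introduced by one of the commits
    after it. Returns None if the test has never passed, i.e. it is a new
    test (new corner case or illegal test), not a regression."""
    for rev in revisions[1:]:          # step backwards through history
        checkout_revision(rev)
        if run_sim(test, seed):
            return rev                 # regression: it still passed here
    return None                        # never passed: new test, not a regression
```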

Backtracking through older revisions used to be a manual process that consumed expensive engineering time, but it has now been automated in PinDown, an automatic debug tool. PinDown can automatically debug any test failure, random or directed, down to the exact revision that caused it, and send the developers who caused the failure a bug report before the night's regression has even finished.
