
Predictable Verification: Part 1

Bryan Dickman & Joe Convey

12 Feb 2024

“Plan your verification campaign” (Understanding the baseline)

Introduction

What does your verification campaign look like in terms of cost of product development, schedule for delivery, and quality of the end product? These are all critical questions you need to answer in order to plan and deliver your product.

 

Understanding this enables you to change or influence outcomes in a way that protects the ROI of your product and ensures that the delivered quality meets end-user requirements.

 

Best practice requires the visualisation of important data elements generated by the product development process to plan your product delivery schedule and resource allocations in a way that ensures on-time delivery, within cost constraints, while meeting quality goals.


We use visualisations of this data to illustrate what problems exist, where improvements are indicated, what costs are involved, and what conclusions can be drawn, before moving on to the next stages in forthcoming blogs.


What problems are we trying to solve?

Over the blog series we consider how to use visualised data to address issues such as lack of predictability in IP quality, unreliable IP delivery timescales and sub-optimal return on investment/cost.

 

Am I spending too much, or too little, on predictability?

 

Engineering can be a “black box” with limited visibility into engineering outcomes, leaving the Board with question marks over investment. For the engineering team, it makes it difficult to present a plausible business case to the Board for investment in verification methodology, people, tools and platform.

 

Add to all that the difficulties of scaling the verification platform due to inadequate data accessibility or interpretation and, importantly, a potential scarcity of data competence within teams.


Focus of this blog

Simulation data, along with bug data, can inform us about the efficiency, effectiveness and total cost of the verification campaign, and help development teams to refine their approach over the course of multiple projects. Analysing and understanding the effort, the costs, and the overall effectiveness of each method at finding bugs and improving product quality is key to ensuring that your product’s ROI is improved, or at least protected from runaway development costs or escalating post-release rework costs.


Effectiveness is the ability of a verification process to either find bugs, or demonstrate an absence of bugs while increasing coverage (both structural and functional).


If the verification environment is not achieving either of these aims, then its value is questionable, and running endless cycles may be futile. That’s not to say verification is done; further analysis of the verification environment may expose shortfalls that can be addressed, such that further verification value is possible.

 

Effectiveness needs to be understood both at the individual test or testbench level and at the regression level.
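As an illustration, here is a minimal sketch of what a per-testbench effectiveness roll-up could look like, assuming you can extract simulation hours, attributed bugs and coverage gain per regression run from your own data. The field names and numbers are purely illustrative, not a prescribed schema:

```python
# Hypothetical per-testbench effectiveness roll-up from regression records.
from dataclasses import dataclass

@dataclass
class RegressionRun:
    testbench: str
    sim_hours: float      # total simulation time consumed by the run
    bugs_found: int       # unique bugs attributed to this run
    coverage_gain: float  # percentage points of functional coverage added

def effectiveness(runs: list[RegressionRun]) -> dict[str, dict[str, float]]:
    """Compute two simple effectiveness signals per testbench:
    bugs found per simulation hour, and coverage gained per simulation hour."""
    totals: dict[str, dict[str, float]] = {}
    for run in runs:
        agg = totals.setdefault(run.testbench, {"hours": 0.0, "bugs": 0.0, "cov": 0.0})
        agg["hours"] += run.sim_hours
        agg["bugs"] += run.bugs_found
        agg["cov"] += run.coverage_gain
    return {
        tb: {
            "bugs_per_hour": agg["bugs"] / agg["hours"],
            "coverage_per_hour": agg["cov"] / agg["hours"],
        }
        for tb, agg in totals.items() if agg["hours"] > 0
    }

runs = [
    RegressionRun("cpu_core_tb", 400.0, 12, 3.5),
    RegressionRun("cpu_core_tb", 500.0, 2, 0.4),
    RegressionRun("bus_fabric_tb", 150.0, 9, 5.1),
]
print(effectiveness(runs))
```

A roll-up like this makes it obvious when a testbench is consuming large amounts of compute while contributing little in the way of new bugs or coverage.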

 

Intelligent regressions should minimise repeated testing by skipping tests that are known not to exercise the parts of the design that have changed and need to be regressed. The “just re-run everything” approach can be very wasteful, especially when resources are costly and limited.
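One way to approach this, sketched below, assumes you already maintain a mapping from tests to the design units they exercise (for example, mined from coverage databases): select only the tests whose footprint overlaps the change set. The test names and mapping here are assumptions for illustration only:

```python
# Illustrative "intelligent regression" selection based on a test-to-design-unit map.

def select_tests(test_to_units: dict[str, set[str]],
                 changed_units: set[str]) -> list[str]:
    """Return only the tests whose exercised design units overlap the change set,
    instead of re-running the full regression."""
    return [test for test, units in test_to_units.items()
            if units & changed_units]

test_to_units = {
    "smoke_boot":      {"cpu_core", "bus_fabric"},
    "dma_stress":      {"dma", "bus_fabric"},
    "cache_evictions": {"cpu_core", "l2_cache"},
    "usb_enumeration": {"usb_ctrl"},
}

# Only the DMA and bus fabric changed in this commit, so the USB and cache
# tests can be skipped in the targeted regression.
print(select_tests(test_to_units, changed_units={"dma", "bus_fabric"}))
```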


Efficiency mainly pertains to the performance of the verification environment.


If little or no consideration has been given to this, you may end up with an inefficient environment that significantly impacts your costs and schedule for achieving the desired levels of assurance from verification. Many inefficiencies can be improved quickly with small effort; there may be some low-hanging fruit. Analytics is the key to identifying the critical areas that will benefit the most from the application of engineering time and effort.


You don’t have to fix every problem, but be sure to identify the larger ones and address them. Follow the 80:20 rule!
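A simple Pareto-style ranking over your own regression cost data is often enough to surface those larger problems. The sketch below assumes per-test CPU-hour totals are available; the names and numbers are made up for illustration:

```python
# Pareto-style ranking: which few tests account for ~80% of total simulation cost?

def top_cost_drivers(cpu_hours: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the tests that together account for `threshold` of total CPU hours."""
    total = sum(cpu_hours.values())
    drivers, running = [], 0.0
    for test, hours in sorted(cpu_hours.items(), key=lambda kv: kv[1], reverse=True):
        drivers.append(test)
        running += hours
        if running / total >= threshold:
            break
    return drivers

cpu_hours = {"soc_full_random": 900.0, "cache_stress": 450.0,
             "smoke_boot": 60.0, "usb_enumeration": 40.0, "gpio_basic": 10.0}
print(top_cost_drivers(cpu_hours))  # the few tests worth optimising first
```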


Scenario 1: The baseline

In the Predictable Verification series we show some representative data profiles that we have synthetically generated. The numbers shown are representative and realistic, but your numbers are likely to look quite different for your own product development campaign. Scale the numbers to match your own experience.

 

For those teams starting out today with no historical data, this modelling approach can be used to speculate on what the timeline and resource utilisation curves will look like. These prediction models can be refined over time with feedback from actuals data, so that prediction modelling becomes more accurate and teams can rely on it for planning and delivery of product roadmaps.
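Purely as an illustration of the modelling idea, the sketch below assumes a logistic S-curve for cumulative test volume and a crude rescaling rule once actuals arrive. The curve shape, parameters and refinement rule are all assumptions, not the method used in this series:

```python
# Minimal sketch: an assumed S-curve of cumulative test volume, rescaled once
# actuals arrive. Parameters and numbers are illustrative assumptions.
import math

def planned_cumulative_tests(week: int, total_tests: float,
                             midpoint_week: float, steepness: float) -> float:
    """Logistic S-curve: cumulative tests expected by the end of `week`."""
    return total_tests / (1.0 + math.exp(-steepness * (week - midpoint_week)))

def rescale_to_actuals(total_tests: float, week: int, actual_cumulative: float,
                       midpoint_week: float, steepness: float) -> float:
    """Crude refinement: scale the projected total so the curve passes through
    the latest actual data point."""
    planned = planned_cumulative_tests(week, total_tests, midpoint_week, steepness)
    return total_tests * (actual_cumulative / planned)

# Plan: 120M tests over the project, ramping around week 26.
plan_total = 120e6
print(planned_cumulative_tests(week=20, total_tests=plan_total,
                               midpoint_week=26, steepness=0.35))
# If week-20 actuals come in lower than planned, the projected total drops too.
print(rescale_to_actuals(plan_total, week=20, actual_cumulative=8e6,
                         midpoint_week=26, steepness=0.35))
```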

 

Let’s start with what we have named the “baseline” scenario, as shown in Figure 1 below. Here you can see a timeline for a project divided into the typical overlapping development phases, which result in a milestone or product release point where the product is delivered to the end user, perhaps first as an interim delivery such as a beta-quality product, and then finally as the first full release-quality product.

 

The quality of the released product will depend on the depth and breadth of verification achieved up to this milestone point. Were all the critical bugs flushed out by the verification campaign? If not, you can expect to see bug escapes post-release and further cost and effort to rework the product, fix the bugs and further extend the verification campaign to increase assurance levels and final product quality.


Reputations can be at risk, with consequential losses in future sales, late delivery, delays to the development of other new products, and loss of market share.

 

In Figure 1 we observe a common scenario where there is a ramp-up of testing effort towards the latter stages of the product development lifecycle. Too much of the verification effort is back-ended, and this increases the risk of post-release bug escapes.

 

Chances are high that when verification efforts are suddenly terminated, there are still a few more bugs to find. We don’t see a plausible stabilisation period before final release sign-off. We also observe in the corresponding bug rate chart that there is a spike in bugs being found in the late phase of the project, which is another red flag pointing at high risk of post-release bug escapes.

 

In this example we see that the delivery time is >260 days for a total testing volume of 120M tests/seeds, at a total cost of $122K. At the peak of the curve, an average of around 1,200 compute slots are being consumed per week.
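For a rough sense of the unit economics, a quick back-of-envelope calculation from the figures quoted above (no additional data assumed):

```python
# Back-of-envelope rates from the quoted baseline figures.
total_tests = 120_000_000   # tests/seeds run over the campaign
total_cost_usd = 122_000    # total compute cost
duration_days = 260         # delivery time quoted as >260 days (lower bound)

cost_per_million_tests = total_cost_usd / (total_tests / 1_000_000)
tests_per_day = total_tests / duration_days  # upper bound, since duration is >260 days

print(f"~${cost_per_million_tests:,.0f} per million tests")  # ~$1,017
print(f"~{tests_per_day:,.0f} tests per day on average")     # ~461,538
```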



Conclusion

In this example we can see a clear case for shifting left the bulk of the simulation activity, which is currently happening far too late, in the release phase of the project. Also note the massive spike of bugs found during release…


Reputational damage here we come!


Clearly a full Root Cause Analysis (RCA) process needs to take place. The cause(s) can be evidenced by good simulation and bug data, displayed in clear analytics, leading to a better understanding of how to tackle, for example, testbench effectiveness.


In the next blogs we will explore a “shift-left/shift-down” scenario, as well as an “improve quality” scenario and an “invest more to improve” scenario.


To discuss your particular verification campaign challenges, Silicon Insights offer independent thought leadership, backed up with years of experience and insight into how to model scenarios, exploit your data, and drive predictable delivery in your organisation.


Contact bryan.dickman@siliconinsights.co.uk, joe.convey@siliconinsights.co.uk


Copyright © 2024 Silicon Insights Ltd. All rights reserved.

