The coverage will indicate that all code was exercised, but we do not know that our design works as intended. All we know is that A, B, and O have been observed to take on both logic 0 and 1. We cannot say for certain that the design was in fact an AND gate: it could just as easily be an OR gate. We need functional coverage to tell us this. The first thing functional coverage tells us is whether we observed all possible values on the inputs. By adding a cross between the inputs and the output, it also tells us which gate we are looking at.
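As an illustrative sketch only (the signal names `a_i`, `b_i`, `out_o` and the sampling clock are assumptions, not taken from any real design), such a covergroup with the input-output cross might look like:

```systemverilog
// Hypothetical covergroup for the 2-input gate example.
covergroup and_gate_cg @(posedge clk);
  cp_a: coverpoint a_i;    // observe both 0 and 1 on input a
  cp_b: coverpoint b_i;    // observe both 0 and 1 on input b
  cp_o: coverpoint out_o;  // observe both 0 and 1 on the output
  // The cross distinguishes an AND gate (out == 1 only when a == 1 && b == 1)
  // from an OR gate (out == 1 whenever a == 1 || b == 1).
  cross_ab_o: cross cp_a, cp_b, cp_o;
endgroup
```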
Reaching 100% functional coverage is not enough either. Remember that functional coverage requires the designer to manually add coverage points to the design. Going back to our AND gate, let us say we take two of these gates and OR their outputs.
If we only carry over the cover points from our original example, they will still be exercised by the new output of our system, and we will reach 100% functional coverage even though half of the design was never exercised. This is called a coverage hole; code coverage, on the other hand, would have indicated that a part of the code was never exercised.
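The composite design described above can be sketched as follows (the module and signal names are illustrative):

```systemverilog
// Hypothetical composite design: two AND gates feeding an OR gate.
// Cover points on a, b and out alone can still hit 100% functional coverage
// while the second gate (inputs c, d and internal nodes n0, n1) goes
// unobserved -- a coverage hole.
module two_and_or (
  input  logic a, b, c, d,
  output logic out
);
  logic n0, n1;
  assign n0  = a & b;    // first AND gate
  assign n1  = c & d;    // second AND gate
  assign out = n0 | n1;  // OR of the two
endmodule
```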
While functional coverage tells you whether your design is working correctly, code coverage highlights whether your testplan is incomplete, or whether there are any uncovered / unreachable features in the design. Coverage holes can be addressed either by updating the design specification and augmenting the testplan / written tests, or by optimizing the design to remove unreachable logic if possible. There may be features that can neither be tested nor removed from the design. These have to be analyzed and excluded from the coverage collection as a waiver. This is an exercise the DV engineer and the designer typically perform together, and it is discussed in more detail below.
Post-simulation coverage analysis typically yields items that may need to be waived off for various reasons. This is documented via exclusions, which are generated by the simulator tools. The following are some of the best practices when adding exclusions:
* Designers are required to sign off on exclusions in a PR review.
* Provide annotations for ALL exclusions to explain the reasoning for the waiver.
* Annotate exclusions with a standardized prefix (this makes writing and reviewing exclusions easier). Exclusions almost always fall under a set of categories that can be standardized. The annotation can be prefixed with a category tag reflecting one of those categories, like this:
  ```
  [CATEGORY-TAG] <additional explanation if required>
  ```
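The exact syntax of the generated exclusion file is tool-specific; for VCS, an annotated entry might look roughly like the sketch below. The instance path, block details, and category tag here are purely illustrative; consult the simulator documentation for the authoritative format.

```
// Illustrative VCS-style exclusion entry (details are made up).
CHECKSUM: "1234567890 987654321"
INSTANCE: tb.dut.u_ctrl
ANNOTATION: "[UNR] error state unreachable because the input is tied off"
Block 42 "2483234563" "state_d = StError;"
```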
These categories are as follows:
For a DUT containing pre-verified sub-modules, the DV effort can be slightly eased. From the code coverage collection perspective, such sub-modules can be 'black boxed': we turn off all other metrics within their hierarchies and only collect the toggle coverage on their IOs. This eases our effort by allowing us to develop less complex tests and verification logic (pertaining to those pre-verified modules), since our criteria for closure reduce to only functionally exercising the IOs and their interactions with the rest of the DUT, to prove that the sub-modules are properly connected.
Of course, the rest of the sub-modules and glue-logic within the DUT that are not pre-verified do need to undergo the full spectrum of coverage closure. We achieve this by patterning the compile-time code coverage model in a particular way. This is a simulator-specific capability: for VCS, it uses the coverage hierarchy file that is written and passed to the simulator with the `-cm_hier` option.
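As a rough sketch, a VCS coverage hierarchy file that black-boxes a hypothetical pre-verified sub-module, keeping only toggle coverage on its ports, could look like this (the instance name is made up; consult the VCS coverage documentation for the exact syntax):

```
// Exclude the pre-verified sub-module from all coverage metrics...
-tree tb.dut.u_pre_verified_ip
// ...but keep toggle coverage on its ports only, to check connectivity.
begin tgl(portsonly)
  +tree tb.dut.u_pre_verified_ip
end
```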
Coverage closure is perhaps the most time-consuming part of the whole DV effort, often with a low return. Conservatively collecting coverage on everything can result in a poor return on the DV engineer's time. Excessive coverage collection also slows down simulation. This section captures some of the best practices related to coverage closure.
It is recommended to follow these guidelines when collecting coverage:
Instead of manually reviewing coverage reports to find unreachable code, we use VCS UNR to generate a UNR exclusion file that lists all the unreachable code. VCS UNR (Unreachability) is a formal solution that automatically determines which coverage objects are unreachable in simulation. The same coverage hierarchy file and exclusion files used for the simulation can be supplied to VCS UNR.
Follow these steps to run UNR and submit the exclusion file. Coverage collection is enabled with the `--cov` switch; the UNR flow is then invoked as follows:

```
util/dvsim/dvsim.py path/to/<dut>_sim_cfg.hjson --cov-unr
```
Here are some guidelines for using UNR and checking in the generated exclusions.
Always keep the `CHECKSUM` along with the exclusions, as it checks that an exclusion has not been made outdated by changes to the corresponding code. We should not use `--exclude-bypass-checks` to disable this check; otherwise, additional review is needed to make sure the exclusions match the design.

Note: VCS UNR doesn't support assertion or functional coverage.
Standard RTL simulations (RTL-sim) ignore the uncertainty of X-valued control signals and assign predictable output values. As a result, classic RTL-sim often fails to detect design problems related to the lack of propagation of unknown values, which can actually be detected in gate-level simulations (gate-sim). With Xprop in RTL-sim, we can detect these problems without having to run gate-sim.
Synopsys VCS and Cadence Xcelium both provide the following two modes for Xprop.
Example:

```systemverilog
always @(posedge clk) begin
  if (cond) out <= a;
  else      out <= b;
end
```
In the above example, the resulting values of `out` are shown in the following table.
| a | b | cond | Classic RTL-sim | Gate-sim | Actual Hardware | Xprop Optimistic | Xprop Pessimistic |
|---|---|------|-----------------|----------|-----------------|------------------|-------------------|
| 0 | 0 | X    | 0               | 0        | 0               | 0                | X                 |
| 0 | 1 | X    | 1               | X        | 0/1             | X                | X                 |
| 1 | 0 | X    | 0               | X        | 0/1             | X                | X                 |
| 1 | 1 | X    | 1               | X        | 1               | 1                | X                 |
We choose the Pessimistic Mode, as we want to avoid using an X value in a condition. Xprop is enabled by default when running simulations for all of our DUTs, since the overhead it adds in terms of wall-clock time is acceptable (less than 10%).
It's mandatory to enable Xprop when running regressions for coverage closure. Please refer to the simulator documentation to identify and apply the necessary build-time and / or run-time options to enable Xprop. In OpenTitan, Xprop is enabled by default for the VCS and Xcelium simulators. To test Xprop more effectively, the address / data / control signals are required to be driven to Xs when invalid (i.e. when the valid bit is not set). For example, when a_valid is 0 in the TLUL interface, we drive the data, address and control signals to unknown values.
```systemverilog
function void invalidate_a_channel();
  vif.host_cb.h2d.a_opcode  <= tlul_pkg::tl_a_op_e'('x);
  vif.host_cb.h2d.a_param   <= '{default:'x};
  vif.host_cb.h2d.a_size    <= '{default:'x};
  vif.host_cb.h2d.a_source  <= '{default:'x};
  vif.host_cb.h2d.a_address <= '{default:'x};
  vif.host_cb.h2d.a_mask    <= '{default:'x};
  vif.host_cb.h2d.a_data    <= '{default:'x};
  vif.host_cb.h2d.a_user    <= '{default:'x};
  vif.host_cb.h2d.a_valid   <= 1'b0;
endfunction : invalidate_a_channel
```
The simulator may report that some portions of RTL / DV could not be instrumented for X-propagation due to the way they were written. This is typically captured in a log file during the simulator's build step. It is mandatory to analyze these pieces of logic and either refactor them to be amenable to Xprop instrumentation, or waive them if that is not possible. This is captured as the V3 checklist item `X_PROP_ANALYSIS_COMPLETED`.
Xprop instrumentation is simulator-specific; this section focuses on VCS and lists some simple and common constructs that block instrumentation.
* Some behavioral statements, like `break`, `continue`, `return`, and `disable`.
* Functions that have multiple `return` statements.
* Calling tasks or functions with side-effects, notably using the `uvm_info` macro.
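For instance, a function with multiple `return` statements can often be refactored to a single exit point so that it remains amenable to instrumentation. A hypothetical sketch (the function names and logic are made up for illustration):

```systemverilog
// Blocks Xprop instrumentation: multiple return statements.
function automatic logic parity_bad(logic [1:0] d);
  if (d == 2'b00) return 1'b0;
  if (d == 2'b11) return 1'b0;
  return 1'b1;
endfunction

// Xprop-friendly rewrite: a single return at the end.
function automatic logic parity_good(logic [1:0] d);
  logic result;
  if (d == 2'b00 || d == 2'b11) result = 1'b0;
  else                          result = 1'b1;
  return result;
endfunction
```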
If the problematic code is in a SystemVerilog process, Xprop can be disabled by adding the `xprop_off` attribute to that process, as follows:

```systemverilog
always (* xprop_off *) @(posedge clk) begin
  <some problematic constructs>
end
```
There are cases where the problematic constructs are not in processes, and the code cannot be rewritten to make it Xprop-friendly. In these cases we need to disable Xprop for the full module or interface using a custom Xprop configuration file, via the `vcs_xprop_cfg_file` field in the corresponding sim_cfg hjson file. See the top_earlgrey VCS Xprop configuration file and the `vcs_xprop_cfg_file` override in the top_earlgrey sim_cfg hjson file for an example.
Refer to the formal verification documentation.
Security verification is one of the critical challenges in OpenTitan. Each design IP contains certain security countermeasures. Several common countermeasures are widely used across the blocks, so a common verification framework is built into the DV base libraries. The following common countermeasures can be verified either automatically or semi-automatically by this framework.
Custom countermeasures have to be handled on a case-by-case basis.
One of the best ways to convince ourselves that we have done our job right is by seeking feedback from, as well as providing feedback to, our contributors. We have the following types of reviews for DV.
Whenever a pull request is made with DV updates, at least one approval from a peer and / or the original code developer is required before the changes can be submitted. DV updates are scrutinized in sufficient detail to enforce the coding style, identify areas of optimization / improvement, and promote code reuse.
In the initial work stage of verification, the DV document and the completed testplan documents are reviewed face-to-face with the following individuals:
The goal of this review is to achieve utmost clarity in the planning of the DV effort and resolve any queries or assumptions. The feedback in this review flows both ways: the language in the design specification could be made more precise, and missing items in both the design specification and the testplan can be identified and added. This enables the development stages to progress smoothly.
Subsequently, the intermediate transitions within the verification stages are reviewed within the GitHub pull-request made for updating the checklist and the project status.
Finally, after the verification effort is complete, there is a final sign-off review to ensure all checklist items are completed satisfactorily without any major exceptions or open issues.
We use the OpenTitan GitHub Issue tracker for filing possible bugs not just in the design, but also in the DV code base or in any associated tools or processes that may hinder progress.
The process for getting started with DV involves many steps, including getting clarity on its purpose, setting up the testbench, documentation, etc. These are discussed in the Getting Started with DV document.
These capabilities are currently under active development and will be available sometime in the near future: