[dv doc] prototype implementation of hjson testplan
- added sample hjson testplan (util/testplanner/examples)
- can import 'common' testplans
- specifies planned tests with ability to set actual written tests
- added sample regression results hjson file (util/testplanner/examples)
- added testplanner.py script (and associated utils)
- parse testplan hjson into a data structure
- parse regr result if available into another data structure
- annotate testplan with regr results
- updated lowrisc_renderer.py script to be able to expand testplan
data structure into a table inline within the dv plan doc
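
Example invocation (output file name is illustrative):
  $ ./util/testplanner.py util/testplanner/examples/foo_testplan.hjson \
        -r util/testplanner/examples/foo_regr_results.hjson \
        -o foo_results.html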
diff --git a/hw/dv/tools/testplans/csr_testplan.hjson b/hw/dv/tools/testplans/csr_testplan.hjson
new file mode 100644
index 0000000..86bc3ee
--- /dev/null
+++ b/hw/dv/tools/testplans/csr_testplan.hjson
@@ -0,0 +1,15 @@
+{
+ entries: [
+ {
+ name: csr
+ desc: '''Standard CSR suite of tests run from all valid interfaces to prove SW
+ accessibility.'''
+ milestone: v1
+ tests: ["{name}{intf}_csr_hw_reset",
+ "{name}{intf}_csr_rw",
+ "{name}{intf}_csr_bit_bash",
+ "{name}{intf}_csr_aliasing",]
+ }
+ ]
+}
+
diff --git a/hw/dv/tools/testplans/intr_test_testplan.hjson b/hw/dv/tools/testplans/intr_test_testplan.hjson
new file mode 100644
index 0000000..f8586b6
--- /dev/null
+++ b/hw/dv/tools/testplans/intr_test_testplan.hjson
@@ -0,0 +1,11 @@
+{
+ entries: [
+ {
+ name: intr_test
+ desc: "Verify common intr_test CSRs to force interrupts via SW."
+ milestone: v2
+ tests: ["{name}{intf}_intr_test"]
+ }
+ ]
+}
+
diff --git a/hw/dv/tools/testplans/mem_testplan.hjson b/hw/dv/tools/testplans/mem_testplan.hjson
new file mode 100644
index 0000000..e8dd8a6
--- /dev/null
+++ b/hw/dv/tools/testplans/mem_testplan.hjson
@@ -0,0 +1,18 @@
+{
+ entries: [
+ {
+ name: mem_walk
+ desc: "Walk 1 through memory addresses from all interfaces"
+ milestone: v1
+ tests: ["{name}{intf}_mem_walk"]
+ }
+ {
+ name: mem_partial_access
+ desc: "Do partial accesses to memories."
+ milestone: v1
+ // mem_walk does partial writes, so we can reuse that test here
+ tests: ["{name}{intf}_mem_walk"]
+ }
+ ]
+}
+
diff --git a/hw/dv/tools/testplans/tl_device_access_types_testplan.hjson b/hw/dv/tools/testplans/tl_device_access_types_testplan.hjson
new file mode 100644
index 0000000..df8e82d
--- /dev/null
+++ b/hw/dv/tools/testplans/tl_device_access_types_testplan.hjson
@@ -0,0 +1,37 @@
+{
+ entries: [
+ {
+ name: oob_addr_access
+ desc: "Access out of bounds address and verify correctness of response / behavior"
+ milestone: v2
+ tests: ["{name}_tl_errors"]
+ }
+ {
+ name: illegal_access
+      desc: '''Drive unsupported requests via the TL interface and verify correctness of the
+             response / behavior'''
+ milestone: v2
+ tests: ["{name}_tl_errors"]
+ }
+ {
+ name: outstanding_access
+ desc: '''Drive back-to-back requests without waiting for response to ensure there is one
+ transaction outstanding within the TL device. Also, verify one outstanding when back-
+ to-back accesses are made to the same address.'''
+ milestone: v2
+ tests: ["{name}{intf}_csr_hw_reset",
+ "{name}{intf}_csr_rw",
+ "{name}{intf}_csr_aliasing",
+ "{name}{intf}_same_csr_outstanding"]
+ }
+ {
+ name: partial_access
+ desc: '''Do partial accesses.'''
+ milestone: v2
+ tests: ["{name}{intf}_csr_hw_reset",
+ "{name}{intf}_csr_rw",
+ "{name}{intf}_csr_aliasing"]
+ }
+ ]
+}
+
diff --git a/hw/ip/uart/dv/uart_dv_plan.md b/hw/ip/uart/dv/uart_dv_plan.md
new file mode 100644
index 0000000..4bd55b5
--- /dev/null
+++ b/hw/ip/uart/dv/uart_dv_plan.md
@@ -0,0 +1,107 @@
+{{% lowrisc-doc-hdr UART DV Plan }}
+{{% import_testplan uart_testplan.hjson }}
+
+{{% toc 3 }}
+
+## Goals
+* **DV**
+ * Verify all UART IP features by running dynamic simulations with a
+ SV/UVM based testbench
+ * Close code and functional coverage on IP and all of its sub-modules
+* **FPV**
+ * Verify TileLink device protocol compliance with an SVA based testbench
+
+## Current status
+* [Design & verification stage](../doc/uart.prj.hjson)
+ * [HW development stages](../../../../doc/ug/hw_stages.md)
+* DV regression results dashboard (link TBD)
+
+## Design features
+For detailed information on UART design features, please see the
+[UART design specification](../doc/uart.md).
+
+## Testbench architecture
+The UART testbench has been constructed based on the
+[CIP testbench architecture](../../../dv/sv/cip_lib/README.md).
+
+### Block diagram
+<!--  -->
+
+### Top level testbench
+The top level testbench is located at `hw/ip/uart/dv/tb/tb.sv`. It instantiates the
+UART DUT module `hw/ip/uart/rtl/uart.sv`. In addition, it instantiates interfaces
+for driving/sampling clock and reset, TileLink device, UART IOs and interrupts.
+
+### Common DV utility components
+* [common_ifs](../../../dv/sv/common_ifs/README.md)
+* [dv_utils_pkg](../../../dv/sv/dv_utils/README.md)
+* [csr_utils_pkg](../../../dv/sv/csr_utils/README.md)
+
+### Global types & methods
+All common types and methods defined at the package level can be found in
+`uart_env_pkg`. Some of them in use are:
+```systemverilog
+parameter uint UART_FIFO_DEPTH = 32;
+```
+
+### TL_agent
+The UART testbench instantiates (handled in the CIP base env) the [tl_agent](../../../dv/sv/tl_agent/README.md),
+which provides the ability to drive and independently monitor random traffic via
+the TL host interface into the UART device.
+
+### UART agent
+[describe or provide link to UART agent documentation]
+
+### RAL
+The UART RAL model is constructed using the
+[regtool.py script](../../../../util/doc/rm/RegisterTool.md)
+and is placed at `env/uart_reg_block.sv`.
+
+### Stimulus strategy
+#### Test sequences
+All test sequences reside in `hw/ip/uart/dv/env/seq_lib`. The `uart_base_vseq`
+virtual sequence is extended from `cip_base_vseq` and serves as a starting point.
+All test sequences are extended from `uart_base_vseq`. It provides commonly used
+handles, variables, functions and tasks that the test sequences can simply use / call.
+Some of the most commonly used tasks / functions are as
+follows:
+* task 1:
+* task 2:
+
+#### Functional coverage
+To ensure high quality constrained random stimulus, it is necessary to develop a
+functional coverage model. The following covergroups have been developed to prove
+that the test intent has been adequately met:
+* cg1:
+* cg2:
+
+### Self-checking strategy
+#### Scoreboard
+The `uart_scoreboard` is primarily used for end-to-end checking. It creates the
+following analysis ports to retrieve the data monitored by corresponding
+interface agents:
+* analysis port1:
+* analysis port2:
+
+#### Assertions
+* TLUL assertions: The `tb/uart_bind.sv` binds the `tlul_assert`
+ [assertions](../../tlul/doc/TlulProtocolChecker.md) to uart to ensure TileLink
+ interface protocol compliance.
+* Unknown checks on DUT outputs: `../rtl/uart.sv` has assertions to ensure all
+ UART outputs are initialized to known values after coming out of reset.
+* assertion 1
+* assertion 2
+
+## Building and running tests
+We are using our in-house developed
+[regression tool](../../../dv/tools/README.md)
+for building and running our tests and regressions. Please take a look at the link
+for detailed information on the usage, capabilities, features and known
+issues. Here's how to run a basic sanity test:
+```
+ $ cd hw/ip/uart/dv
+ $ make TEST_NAME=uart_sanity
+```
+
+## Testplan
+{{% add_testplan x }}
diff --git a/hw/ip/uart/dv/uart_testplan.hjson b/hw/ip/uart/dv/uart_testplan.hjson
new file mode 100644
index 0000000..7a0c05c
--- /dev/null
+++ b/hw/ip/uart/dv/uart_testplan.hjson
@@ -0,0 +1,154 @@
+{
+ name: "uart"
+ import_testplans: ["hw/dv/tools/testplans/csr_testplan.hjson",
+ "hw/dv/tools/testplans/intr_test_testplan.hjson",
+ "hw/dv/tools/testplans/tl_device_access_types_testplan.hjson"]
+ entries: [
+ {
+ name: sanity
+      desc: '''Basic UART sanity test with a few bytes transmitted and received asynchronously
+ and in parallel with scoreboard checks. Randomize UART baud rate and other
+ parameters such as TL agent delays.'''
+ milestone: v1
+ tests: ["uart_sanity"]
+ }
+ {
+ name: parity
+ desc: "Send / receive bytes with parity and odd parity enabled randomly."
+ milestone: v2
+ tests: []
+ }
+ {
+ name: parity_error
+      desc: '''Enable parity and randomly set even/odd parity. Inject parity errors randomly
+             on data sent over rx and ensure the parity error interrupt is raised. Send /
+             receive bytes with parity and odd parity enabled randomly.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: tx_watermark
+ desc: '''Program random tx fifo watermark level and keep pushing data out by writing to
+ wdata. As the number of pending data entries in the tx fifo reaches the programmed
+ watermark level, ensure that the tx watermark interrupt is asserted. Read the fifo
+ status to cross-check. Ensure interrupt stays asserted until cleared.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_watermark
+ desc: '''Program random rx watermark level. Keep sending data over rx and read rdata
+ register in parallel with randomized delays large enough to reach a point where
+ the number of pending data items in rx fifo reaches the watermark level. When
+ that happens, check that the interrupt is asserted.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: tx_reset
+ desc: '''Fill up the tx fifo with data to be sent out. After a random number (less than
+ filled fifo size) of bytes shows up on tx, reset the fifo and ensure that the
+ remaining data bytes do not show up.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_reset
+ desc: '''Fill up the rx fifo by sending data bytes in over rx. After a random number
+ (less than filled fifo size) of bytes sent over rx, reset the fifo and ensure
+ that reads to rdata register yield 0s.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: tx_overflow
+ desc: '''Keep writing over 32 bytes of data into wdata register and ensure excess data
+ bytes are dropped and overflow interrupt is asserted.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_overflow
+ desc: '''Keep sending over 32 bytes of data over rx and ensure excess data bytes are
+ dropped.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: tx_rx_fifo_full
+ desc: "Send over 32 bytes of data but stop when fifo is full"
+ milestone: v2
+ tests: []
+ }
+ {
+ name: frame_err
+ desc: '''Inject frame error in parity and non-parity cases by not setting stop bit = 1.
+ Ensure the interrupt gets asserted.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_break_err
+      desc: '''Program a random number of break detection characters. Create a frame error
+             scenario and send a random number of 0 bytes. If that number exceeds the
+             programmed number of break characters, ensure the break_err interrupt is asserted.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_timeout
+      desc: '''Program the timeout_ctrl register to randomize the timeout. Send a random
+             number of data bytes over rx, read fewer bytes than sent, and let the DUT sit
+             idle for the programmed timeout duration. Ensure the timeout interrupt fires.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: stress
+      desc: '''Combine multiple of the above scenarios to get multiple interrupts asserted
+             at the same time. The scoreboard should be robust enough to deal with all
+             scenarios.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: perf
+      desc: '''Run (perhaps the stress test) with the highest supported baud rate.
+             - Core freq: 24MHz, 25MHz, 48MHz, 50MHz, 100MHz
+             - Baud rate: 9600, 115200, 230400, 1Mbps (1048576), 2Mbps (2097152)'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: sys_loopback
+      desc: '''Drive uart TX and have the data looped back through RX. After the loopback
+             is done, RDATA should equal WDATA.'''
+ milestone: v2
+ tests: []
+ }
+ {
+ name: line_loopback
+ desc: "Line loopback test"
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_noise_filter
+ desc: "16x fast clk to check it, need to be 3 clk same value"
+ milestone: v2
+ tests: []
+ }
+ {
+      name: tx_override
+ desc: "Enable override control and use register programming to drive uart output directly"
+ milestone: v2
+ tests: []
+ }
+ {
+ name: rx_oversample
+ desc: "read RX oversampled value and check use 16x faster than baud clk to sample it"
+ milestone: v2
+ tests: []
+ }
+ ]
+}
diff --git a/util/docgen/lowrisc_renderer.py b/util/docgen/lowrisc_renderer.py
index 7fc410b..afbed83 100644
--- a/util/docgen/lowrisc_renderer.py
+++ b/util/docgen/lowrisc_renderer.py
@@ -40,6 +40,7 @@
import reggen.validate as validate
from docgen import html_data, mathjax
from docgen.hjson_lexer import HjsonLexer
+from testplanner import class_defs, testplan_utils
from wavegen import wavesvg
@@ -353,6 +354,18 @@
link=rel_md_path.with_suffix('.html'),
text=rel_md_path.with_suffix(''))
return html_data.doctree_head + return_string + html_data.doctree_tail
+ if token.type == "import_testplan":
+ self.testplan = testplan_utils.parse_testplan(
+ path.join(self.basedir, token.text))
+ return ""
+ if token.type == "add_testplan":
+            if getattr(self, "testplan", None) is None:
+                return "<B>Errors parsing the testplan prevent insertion.</B>"
+ outbuf = io.StringIO()
+ testplan_utils.gen_html_testplan_table(self.testplan, outbuf)
+ generated = outbuf.getvalue()
+ outbuf.close()
+ return generated
bad_tag = '{{% ' + token.type + ' ' + token.text + ' }}'
log.warn("Unknown lowRISC tag " + bad_tag)
diff --git a/util/testplanner.py b/util/testplanner.py
new file mode 100755
index 0000000..9144eaa
--- /dev/null
+++ b/util/testplanner.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python3
+# Copyright lowRISC contributors.
+# Licensed under the Apache License, Version 2.0, see LICENSE for details.
+# SPDX-License-Identifier: Apache-2.0
+r"""Command-line tool to parse and process testplan hjson
+
+"""
+import argparse
+import logging as log
+import os
+import sys
+from pathlib import PurePath
+
+import hjson
+
+from testplanner import testplan_utils
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description=__doc__,
+ formatter_class=argparse.RawDescriptionHelpFormatter)
+ parser.add_argument(
+ 'testplan',
+ metavar='<hjson-file>',
+ help='input testplan file in hjson')
+ parser.add_argument(
+ '-r',
+ '--regr_results',
+ metavar='<hjson-file>',
+ help='input regression results file in hjson')
+ parser.add_argument(
+ '--outfile',
+ '-o',
+ type=argparse.FileType('w'),
+ default=sys.stdout,
+ help='output html file (without css)')
+ args = parser.parse_args()
+ outfile = args.outfile
+
+ with outfile:
+ testplan_utils.gen_html(args.testplan, args.regr_results, outfile)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/util/testplanner/__init__.py b/util/testplanner/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/util/testplanner/__init__.py
diff --git a/util/testplanner/class_defs.py b/util/testplanner/class_defs.py
new file mode 100644
index 0000000..c7b57a9
--- /dev/null
+++ b/util/testplanner/class_defs.py
@@ -0,0 +1,258 @@
+#!/usr/bin/env python3
+# Copyright lowRISC contributors.
+# Licensed under the Apache License, Version 2.0, see LICENSE for details.
+# SPDX-License-Identifier: Apache-2.0
+r"""TestplanEntry and Testplan classes for maintaining testplan entries
+"""
+
+import re
+import sys
+
+
+class TestplanEntry():
+ """An entry in the testplan
+
+ A testplan entry has the following information: name of the planned test (testpoint),
+ a brief description indicating intent, stimulus and checking procedure, targeted milestone
+ and the list of actual developed tests.
+ """
+ name = ""
+ desc = ""
+ milestone = ""
+ tests = []
+
+ fields = ("name", "desc", "milestone", "tests")
+ milestones = ("na", "v1", "v2", "v3")
+
+ def __init__(self, name, desc, milestone, tests, substitutions=[]):
+ self.name = name
+ self.desc = desc
+ self.milestone = milestone
+ self.tests = tests
+ if not self.do_substitutions(substitutions): sys.exit(1)
+
+ @staticmethod
+ def is_valid_entry(kv_pairs):
+        '''Check whether a testplan entry can be extracted from the given dict of
+        key-value pairs.
+ '''
+ for field in TestplanEntry.fields:
+ if not field in kv_pairs.keys():
+ print(
+ "Error: input key-value pairs does not contain all of the ",
+ "required fields to create an entry:\n", kv_pairs,
+ "\nRequired fields:\n", TestplanEntry.fields)
+ return False
+ if type(kv_pairs[field]) is str and kv_pairs[field] == "":
+ print("Error: field \'", field, "\' is an empty string\n:",
+ kv_pairs)
+ return False
+ if field == "milestone" and kv_pairs[
+ field] not in TestplanEntry.milestones:
+ print("Error: milestone \'", kv_pairs[field],
+ "\' is invalid. Legal values:\n",
+ TestplanEntry.milestones)
+ return False
+ return True
+
+ def do_substitutions(self, substitutions):
+ '''Substitute {wildcards} in tests
+
+ If tests have {wildcards}, they are substituted with the 'correct' values using
+ key=value pairs provided by the substitutions arg. If wildcards are present but no
+ replacement is available, then the wildcards are replaced with an empty string.
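+
+        Example: with substitutions [{"name": "foo"}, {"intf": ["", "_jtag"]}] (as in
+        examples/foo_testplan.hjson), the test "{name}{intf}_csr_rw" resolves to
+        ["foo_csr_rw", "foo_jtag_csr_rw"].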
+ '''
+ if substitutions == []: return True
+ for kv_pair in substitutions:
+ resolved_tests = []
+ [(k, v)] = kv_pair.items()
+ for test in self.tests:
+ match = re.findall(r"{([A-Za-z0-9\_]+)}", test)
+ if len(match) > 0:
+ # match is a list of wildcards used in test
+ for item in match:
+ if item == k:
+ if type(v) is list:
+ if v == []:
+ resolved_test = test.replace(
+ "{" + item + "}", "")
+ resolved_tests.append(resolved_test)
+ else:
+ for subst_item in v:
+ resolved_test = test.replace(
+ "{" + item + "}", subst_item)
+ resolved_tests.append(resolved_test)
+ elif type(v) is str:
+ resolved_test = test.replace(
+ "{" + item + "}", v)
+ resolved_tests.append(resolved_test)
+ else:
+ print(
+ "Error: wildcard", item, "in test", test,
+ "has no viable",
+ "replacement value (need str or list):\n",
+ kv_pair)
+ return False
+ else:
+ resolved_tests.append(test)
+ if resolved_tests != []: self.tests = resolved_tests
+
+ # if wildcards have no available replacements in substitutions arg, then
+ # replace with empty string
+ resolved_tests = []
+ for test in self.tests:
+ match = re.findall(r"{([A-Za-z0-9\_]+)}", test)
+ if len(match) > 0:
+ for item in match:
+ resolved_tests.append(test.replace("{" + item + "}", ""))
+ if resolved_tests != []: self.tests = resolved_tests
+ return True
+
+ def map_regr_results(self, regr_results):
+ '''map regression results to tests in this entry
+
+        Given a list of regression results (dicts containing test name, passing count
+        and total count), find whether the name of a test in the results list matches a
+        written test in this testplan entry. If there is a match, then append the
+        passing / total information. If no match is found, or if self.tests is an empty
+        list, indicate 0/1 passing so that it is factored into the final total.
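+
+        Example: if self.tests is ["foo_sanity"] and regr_results contains
+        {"name": "foo_sanity", "passing": 25, "total": 50}, that dict replaces
+        the "foo_sanity" string in self.tests.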
+ '''
+ test_results = []
+ for test in self.tests:
+ found = False
+ for regr_result in regr_results:
+ if test == regr_result["name"]:
+ test_results.append(regr_result)
+ regr_result["mapped"] = True
+ found = True
+ break
+
+ # if a test was not found in regr results, indicate 0/1 passing
+ if not found:
+ test_results.append({"name": test, "passing": 0, "total": 1})
+
+ # if no written tests were indicated in the testplan, reuse planned
+ # test name and indicate 0/1 passing
+ if self.tests == []:
+ test_results.append({"name": self.name, "passing": 0, "total": 1})
+
+ # replace tests with test results
+ self.tests = test_results
+ return regr_results
+
+ def display(self):
+ print("testpoint: ", self.name)
+ print("description: ", self.desc)
+ print("milestone: ", self.milestone)
+ print("tests: ", self.tests)
+
+
+class Testplan():
+ """The full testplan
+
+    This comprises a list of TestplanEntry objects
+ """
+
+ name = ""
+ entries = []
+
+ def __init__(self, name):
+ self.name = name
+ self.entries = []
+ if name == "":
+ print("Error: testplan name cannot be empty")
+ sys.exit(1)
+
+ def entry_exists(self, entry):
+ '''check if new entry has the same name as one of the existing entries
+ '''
+ for existing_entry in self.entries:
+ if entry.name == existing_entry.name:
+ print("Error: found a testplan entry with name = ", entry.name)
+ print("existing entry:\n", existing_entry)
+ print("new entry:\n", entry)
+ return True
+ return False
+
+ def add_entry(self, entry):
+ '''add a new entry into the testplan
+ '''
+ if self.entry_exists(entry): sys.exit(1)
+ self.entries.append(entry)
+
+ def sort(self):
+ '''sort entries by milestone
+ '''
+ self.entries = sorted(self.entries, key=lambda entry: entry.milestone)
+
+ def map_regr_results(self, regr_results):
+ '''map regression results to testplan entries
+ '''
+
+ def sum_results(totals, entry):
+ '''function to generate milestone and grand totals
+ '''
+ ms = entry.milestone
+ for test in entry.tests:
+ totals[ms].tests[0]["passing"] += test["passing"]
+ totals[ms].tests[0]["total"] += test["total"]
+ return totals
+
+ totals = {}
+ for ms in TestplanEntry.milestones:
+ name = "<ignore>"
+ totals[ms] = TestplanEntry(
+ name=name,
+ desc=name,
+ milestone=ms,
+ tests=[{
+ "name": "TOTAL",
+ "passing": 0,
+ "total": 0
+ }])
+
+ for entry in self.entries:
+ regr_results = entry.map_regr_results(regr_results)
+ totals = sum_results(totals, entry)
+
+ # extract unmapped tests from regr_results and create 'unmapped' entry
+ unmapped_regr_results = []
+ for regr_result in regr_results:
+ if not "mapped" in regr_result.keys():
+ unmapped_regr_results.append(regr_result)
+
+ unmapped = TestplanEntry(
+ name="Unmapped tests",
+ desc="Unmapped tests",
+ milestone="na",
+ tests=unmapped_regr_results)
+ totals = sum_results(totals, unmapped)
+
+ # add the grand total: "na" key used for grand total
+ for ms in TestplanEntry.milestones:
+ if ms != "na":
+ totals["na"].tests[0]["passing"] += totals[ms].tests[0][
+ "passing"]
+ totals["na"].tests[0]["total"] += totals[ms].tests[0]["total"]
+
+ # add total back into 'entries'
+ for key in totals.keys():
+ if key != "na": self.entries.append(totals[key])
+ self.sort()
+ self.entries.append(unmapped)
+ self.entries.append(totals["na"])
+
+ def display(self):
+ '''display the complete testplan for debug
+ '''
+ print("name: ", self.name)
+ for entry in self.entries:
+ entry.display()
diff --git a/util/testplanner/examples/common_testplan.hjson b/util/testplanner/examples/common_testplan.hjson
new file mode 100644
index 0000000..6e9ce4c
--- /dev/null
+++ b/util/testplanner/examples/common_testplan.hjson
@@ -0,0 +1,22 @@
+{
+ // only 'entries' supported in imported testplans for now
+ entries: [
+ {
+ name: csr
+ desc: '''Standard CSR suite of tests run from all valid interfaces to prove SW
+ accessibility.'''
+ milestone: v1
+ // {name} and {intf} are wildcards in tests
+ // importer needs to provide substitutions for these as string or a list
+ // if list, then substitution occurs on all values in the list
+ // if substitution is not provided, it will be replaced with an empty string
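+      // e.g. with name: "foo" and intf: ["", "_jtag"] (see examples/foo_testplan.hjson),
+      // "{name}{intf}_csr_rw" expands to foo_csr_rw and foo_jtag_csr_rw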
+ tests: ["{name}{intf}_csr_hw_reset",
+ "{name}{intf}_csr_rw",
+ "{name}{intf}_csr_bit_bash",
+ "{name}{intf}_csr_aliasing",]
+ }
+ ]
+}
+
diff --git a/util/testplanner/examples/foo_dv_plan.md b/util/testplanner/examples/foo_dv_plan.md
new file mode 100644
index 0000000..ce63de6
--- /dev/null
+++ b/util/testplanner/examples/foo_dv_plan.md
@@ -0,0 +1,5 @@
+{{% lowrisc-doc-hdr FOO DV plan }}
+{{% import_testplan foo_testplan.hjson }}
+
+## Testplan
+{{% add_testplan x }}
diff --git a/util/testplanner/examples/foo_regr_results.hjson b/util/testplanner/examples/foo_regr_results.hjson
new file mode 100644
index 0000000..8b0ef20
--- /dev/null
+++ b/util/testplanner/examples/foo_regr_results.hjson
@@ -0,0 +1,104 @@
+{
+ timestamp: 10/10/2019 1:55AM
+ test_results: [
+ {
+ name: foo_sanity
+ passing: 25
+ total: 50
+ }
+ {
+ name: foo_csr_hw_reset
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_jtag_csr_hw_reset
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_csr_rw
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_jtag_csr_rw
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_csr_bit_bash
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_csr_aliasing
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_jtag_csr_aliasing
+ passing: 20
+ total: 20
+ }
+ {
+ name: foo_feature1
+ passing: 63
+ total: 80
+ }
+ {
+ name: foo_feature2_type1
+ passing: 1
+ total: 1
+ }
+ {
+ name: foo_feature2_type2
+ passing: 5
+ total: 5
+ }
+ {
+ name: foo_feature2_type3
+ passing: 0
+ total: 10
+ }
+ {
+ name: foo_unmapped_test
+ passing: 0
+ total: 10
+ }
+ ]
+ cov_results: [
+ {
+ name: line
+ result: 67
+ }
+ {
+ name: toggle
+ result: 85
+ }
+ {
+ name: branch
+ result: 78
+ }
+ {
+ name: condition
+ result: 23
+ }
+ {
+ name: fsm_seq
+ result: 96
+ }
+ {
+ name: fsm_trans
+ result: 88
+ }
+ {
+ name: assert
+ result: 40
+ }
+ {
+ name: groups
+ result: 22
+ }
+ ]
+}
diff --git a/util/testplanner/examples/foo_testplan.hjson b/util/testplanner/examples/foo_testplan.hjson
new file mode 100644
index 0000000..ff31355
--- /dev/null
+++ b/util/testplanner/examples/foo_testplan.hjson
@@ -0,0 +1,56 @@
+{
+  // 'name' is a mandatory field
+ name: "foo"
+ intf: ["", "_jtag"]
+
+ // 'import_testplans' is a list of imported common testplans
+ // paths are relative to repo top
+ // all key_value pairs in this file other than 'import_testplans' and 'entries'
+ // can be used for wildcard substitutions in imported testplans
+ import_testplans: ["util/testplanner/examples/common_testplan.hjson"]
+ entries: [
+ {
+ // name of the testplan entry - should be unique
+ name: sanity
+      desc: '''Basic FOO sanity test. Describe this test in sufficient detail. You can
+             split the description over multiple lines like this (with 3 single
+             quotes). Note that the subsequent lines are indented right below where
+             the quotes start.'''
+      // milestone this test is targeted for - v1, v2 or v3
+ milestone: v1
+      // list of actual written tests that map to this entry
+ tests: ["foo_sanity"]
+ }
+ {
+ name: feature1
+ desc: "A single line description with single double-inverted commas."
+ milestone: v2
+ // testplan entry with no tests added
+ tests: []
+ }
+ {
+ name: feature2
+ desc: '''**Goal**: How-to description
+
+ **Stimulus**: If possible, in the description indicate a brief one-liner
+ goal on the first line. Then, describe the stimulus and check procedures like
+ this.
+
+ **Check**: This style is not mandatory, but highly recommended. Also note that
+ the description supports markdown formatting. Add things:
+ - like bullets
+ - something in **bold** and in *italic*
+ - A sub-bullet item<br>
+               Continue describing the above bullet on a new line with an html line break.
+
+             Start a new paragraph with 2 newlines.
+ '''
+ milestone: v2
+ // testplan entry with multiple tests added
+ tests: ["foo_feature2_type1",
+ "foo_feature2_type2",
+ "foo_feature2_type3"]
+ }
+ ]
+}
diff --git a/util/testplanner/testplan_utils.py b/util/testplanner/testplan_utils.py
new file mode 100644
index 0000000..5de2df9
--- /dev/null
+++ b/util/testplanner/testplan_utils.py
@@ -0,0 +1,238 @@
+#!/usr/bin/env python3
+# Copyright lowRISC contributors.
+# Licensed under the Apache License, Version 2.0, see LICENSE for details.
+# SPDX-License-Identifier: Apache-2.0
+r"""Command-line tool to parse and process testplan hjson into a data structure
+ The data structure is used for expansion inline within DV plan documentation
+ as well as for annotating the regression results.
+"""
+import logging as log
+import os
+import sys
+from pathlib import PurePath
+
+import hjson
+import mistletoe
+
+from .class_defs import Testplan, TestplanEntry
+
+
+def parse_testplan(filename):
+    '''Parse testplan hjson file into a data structure'''
+ self_path = os.path.dirname(os.path.realpath(__file__))
+ repo_root = os.path.abspath(os.path.join(self_path, os.pardir, os.pardir))
+
+ name = ""
+ imported_testplans = []
+ substitutions = []
+ obj = parse_hjson(filename)
+ for key in obj.keys():
+ if key == "import_testplans":
+ imported_testplans = obj[key]
+ elif key != "entries":
+ if key == "name": name = obj[key]
+ substitutions.append({key: obj[key]})
+ for imported_testplan in imported_testplans:
+ obj = merge_dicts(
+ obj, parse_hjson(os.path.join(repo_root, imported_testplan)))
+
+ testplan = Testplan(name=name)
+ for entry in obj["entries"]:
+ if not TestplanEntry.is_valid_entry(entry): sys.exit(1)
+ testplan_entry = TestplanEntry(
+ name=entry["name"],
+ desc=entry["desc"],
+ milestone=entry["milestone"],
+ tests=entry["tests"],
+ substitutions=substitutions)
+ testplan.add_entry(testplan_entry)
+ testplan.sort()
+ return testplan
+
+
+def gen_html_indent(lvl):
+ return " " * lvl
+
+
+def gen_html_write_style(outbuf):
+ outbuf.write("<style>\n")
+ outbuf.write("table.dv {\n")
+ outbuf.write(" border: 1px solid black;\n")
+ outbuf.write(" border-collapse: collapse;\n")
+ outbuf.write(" width: 100%;\n")
+ outbuf.write(" text-align: center;\n")
+ outbuf.write(" vertical-align: middle;\n")
+ outbuf.write(" margin-left: auto;;\n")
+ outbuf.write(" margin-right: auto;;\n")
+ outbuf.write(" display: table;\n")
+ outbuf.write("}\n")
+ outbuf.write("th, td {\n")
+ outbuf.write(" border: 1px solid black;\n")
+ outbuf.write("}\n")
+ outbuf.write("</style>\n")
+
+
+def gen_html_testplan_table(testplan, outbuf):
+    '''generate html table from testplan with the following fields:
+    milestone, planned test name, description, tests
+ '''
+
+ def print_row(ms, name, desc, tests, cell, outbuf):
+ cellb = "<" + cell + ">"
+ celle = "</" + cell + ">"
+ tests_str = ""
+ if cell == "td":
+ # remove leading and trailing whitespaces
+ desc = mistletoe.markdown(desc.strip())
+ for test in tests:
+ if tests_str != "": tests_str += "<br>"
+ tests_str += str(test)
+ else:
+ tests_str = tests
+
+ outbuf.write(gen_html_indent(1) + "<tr>\n")
+ outbuf.write(gen_html_indent(2) + cellb + ms + celle + "\n")
+ outbuf.write(gen_html_indent(2) + cellb + name + celle + "\n")
+
+ # make description text left aligned
+ cellb_desc = cellb
+ if cell == "td":
+ cellb_desc = cellb_desc[:-1] + " style=\"text-align: left\"" + ">"
+ outbuf.write(gen_html_indent(2) + cellb_desc + desc + celle + "\n")
+ outbuf.write(gen_html_indent(2) + cellb + tests_str + celle + "\n")
+ outbuf.write(gen_html_indent(1) + "</tr>\n")
+
+ gen_html_write_style(outbuf)
+ outbuf.write("<table class=\"dv\">\n")
+ print_row("Milestone", "Name", "Description", "Tests", "th", outbuf)
+ for entry in testplan.entries:
+ print_row(entry.milestone, entry.name, entry.desc, entry.tests, "td",
+ outbuf)
+ outbuf.write("</table>\n")
+ return
+
+
+def gen_html_regr_results_table(testplan, regr_results, outbuf):
+    '''map regr results to testplan and create a table with the following fields:
+    milestone, planned test name, actual written tests, pass/total
+ '''
+
+ def print_row(ms, name, tests, results, cell, outbuf):
+ cellb = "<" + cell + ">"
+ celle = "</" + cell + ">"
+ tests_str = ""
+ results_str = ""
+ if cell == "td":
+ for test in tests:
+ if tests_str != "": tests_str += "<br>"
+ if results_str != "": results_str += "<br>"
+ tests_str += str(test["name"])
+ results_str += str(test["passing"]) + "/" + str(test["total"])
+ else:
+ tests_str = tests
+ results_str = results
+ if ms == "na": ms = ""
+ if name == "<ignore>":
+ name = ""
+ tests_str = "<strong>" + tests_str + "</strong>"
+ results_str = "<strong>" + results_str + "</strong>"
+
+ outbuf.write(gen_html_indent(1) + "<tr>\n")
+ outbuf.write(gen_html_indent(2) + cellb + ms + celle + "\n")
+ outbuf.write(gen_html_indent(2) + cellb + name + celle + "\n")
+ outbuf.write(gen_html_indent(2) + cellb + tests_str + celle + "\n")
+ outbuf.write(gen_html_indent(2) + cellb + results_str + celle + "\n")
+ outbuf.write(gen_html_indent(1) + "</tr>\n")
+
+ testplan.map_regr_results(regr_results["test_results"])
+
+ gen_html_write_style(outbuf)
+ outbuf.write("<h1 style=\"text-align: center\" " + "id=\"" +
+ testplan.name + "-regression-results\">" +
+ testplan.name.upper() + " Regression Results</h1>\n")
+
+ outbuf.write("<h2 style=\"text-align: center\">" + "Run on " +
+ regr_results["timestamp"] + "</h2>\n")
+
+ # test results
+ outbuf.write("<h3 style=\"text-align: center\">Test Results</h2>\n")
+ outbuf.write("<table class=\"dv\">\n")
+ print_row("Milestone", "Name", "Tests", "Results", "th", outbuf)
+ for entry in testplan.entries:
+ print_row(entry.milestone, entry.name, entry.tests, None, "td", outbuf)
+ outbuf.write("</table>\n")
+
+ # coverage results
+ outbuf.write("<h3 style=\"text-align: center\">Coverage Results</h2>\n")
+ outbuf.write("<table class=\"dv\">\n")
+    # title
+    outbuf.write(gen_html_indent(1) + "<tr>\n")
+ for cov in regr_results["cov_results"]:
+ outbuf.write(
+ gen_html_indent(2) + "<th>" + cov["name"].capitalize() + "</th>\n")
+ outbuf.write(gen_html_indent(1) + "</tr>\n")
+ # result
+ outbuf.write(gen_html_indent(1) + "<tr>\n")
+ for cov in regr_results["cov_results"]:
+ outbuf.write(
+ gen_html_indent(2) + "<td>" + str(cov["result"]) + "</td>\n")
+ outbuf.write(gen_html_indent(1) + "</tr>\n")
+ outbuf.write("</table>\n")
+ return
+
+
+def parse_regr_results(filename):
+ obj = parse_hjson(filename)
+ # TODO need additional syntax checks
+ if not "test_results" in obj.keys():
+ print("Error: key \'test_results\' not found")
+        sys.exit(1)
+ return obj
+
+
+def parse_hjson(filename):
+ try:
+        with open(filename, 'r') as f:
+            text = f.read()
+ odict = hjson.loads(text)
+ return odict
+ except IOError:
+ print('IO Error:', filename)
+ raise SystemExit(sys.exc_info()[1])
+
+
+def merge_dicts(list1, list2):
+ '''merge 2 dicts into one
+
+    This function takes 2 dicts as args list1 and list2. It recursively merges list2 into
+    list1 and returns list1. The recursion happens when the value of a key in both dicts
+    is a dict. If the values of the same key in both dicts (at the same tree level) are of
+    type str (or of dissimilar type) then there is a conflict, and an error is thrown.
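+
+    Example: merge_dicts({"entries": [a]}, {"entries": [b]}) returns {"entries": [a, b]},
+    since values of type list are extended.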
+ '''
+ for key in list2.keys():
+ if key in list1:
+ if type(list1[key]) is list and type(list2[key]) is list:
+ list1[key].extend(list2[key])
+ elif type(list1[key]) is dict and type(list2[key]) is dict:
+ list1[key] = merge_dicts(list1[key], list2[key])
+ else:
+ print("The type of value of key ", key, "in list1: ", str(type(list1[key])), \
+ " does not match the type of value in list2: ", str(type(list2[key])), \
+ " or they are not of type list or dict. The two lists cannot be merged.")
+ sys.exit(1)
+ else:
+ list1[key] = list2[key]
+ return list1
+
+
+def gen_html(testplan_file, regr_results_file, outbuf):
+ testplan = parse_testplan(os.path.abspath(testplan_file))
+ if regr_results_file:
+ regr_results = parse_regr_results(os.path.abspath(regr_results_file))
+ gen_html_regr_results_table(testplan, regr_results, outbuf)
+ else:
+ gen_html_testplan_table(testplan, outbuf)
+ outbuf.write('\n')