[doc] Manually changed remaining hugo links

The relink script, `util/check_links.py`,
didn't quite manage to convert all the hugo links
to standard markdown links,
so the remaining links have been converted manually.

Also, deleted the relink script.
It would be nice to set up an automatic, purpose-built link checker
for the docs in the future.
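
A minimal sketch of what such a purpose-built checker could look like (illustrative only; the script name, paths, and regex below are assumptions and not part of this change):
```python
#!/usr/bin/env python3
"""Illustrative sketch of a docs link checker (not part of this change).

Walks a tree of markdown files, extracts relative links, and reports
any link whose target file does not exist.
"""
import re
import sys
from pathlib import Path

# Matches markdown links of the form [text](target); absolute URLs and
# pure in-page anchors are filtered out below.
LINK_RE = re.compile(r'\[[^\]]*\]\(([^)\s]+)\)')

def check_file(md_file: Path) -> list:
    errors = []
    for lineno, line in enumerate(md_file.read_text(encoding='utf-8').splitlines(), 1):
        for target in LINK_RE.findall(line):
            if target.startswith(('http://', 'https://', 'mailto:', '#')):
                continue
            # Strip any #anchor before resolving the relative path.
            path = (md_file.parent / target.split('#')[0]).resolve()
            if not path.exists():
                errors.append(f'{md_file}:{lineno}: broken link {target}')
    return errors

if __name__ == '__main__':
    repo_top = Path(sys.argv[1]) if len(sys.argv) > 1 else Path('.')
    problems = [e for md in repo_top.rglob('*.md') for e in check_file(md)]
    print('\n'.join(problems))
    sys.exit(1 if problems else 0)
```
Something along these lines could be run from the repository root and wired into CI once the docs links settle.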

Signed-off-by: Hugo McNally <hugo.mcnally@gmail.com>
diff --git a/doc/contributing/README.md b/doc/contributing/README.md
index 1c90d36..fae9f5d 100644
--- a/doc/contributing/README.md
+++ b/doc/contributing/README.md
@@ -15,7 +15,7 @@
 * All OpenTitan interactions are covered by [lowRISC's code of conduct](https://www.lowrisc.org/code-of-conduct/).
 * When communicating, remember OpenTitan is a security-focused project.
   Because of this, certain issues may need to be discussed in a small group first.
-  See the [Security Issues Process]({{<ref "#security-issues" >}}) described below for more details.
+  See the [Security Issues Process](#security-issues) described below for more details.
 * OpenTitan involves both hardware and software.
   We follow a hybrid approach involving both silicon and software design practices.
 * OpenTitan is a work in progress.
@@ -26,7 +26,7 @@
 
 ## Bug reports
 
-**To report a security issue, please follow the [Security Issues Process]({{<ref "#security-issues" >}})**.
+**To report a security issue, please follow the [Security Issues Process](#security-issues)**.
 
 Ideally, all designs are bug free.
 Realistically, each piece of collateral in our repository is in a different state of maturity with some still under active testing and development.
diff --git a/doc/contributing/bazel_notes.md b/doc/contributing/bazel_notes.md
index 2a52ff5..d589fe8 100644
--- a/doc/contributing/bazel_notes.md
+++ b/doc/contributing/bazel_notes.md
@@ -3,7 +3,7 @@
 ---
 
 Both OpenTitan hardware and software are built with Bazel.
-While our [Getting Started]({{< relref "getting_started" >}}) guides detail some of the Bazel commands that can be used to build both types of artifacts, below are detailed notes on:
+While our [Getting Started](https://docs.opentitan.org/doc/guides/getting_started/) guides detail some of the Bazel commands that can be used to build both types of artifacts, below are detailed notes on:
 * how Bazel is configured for our project, and
 * brief examples of Bazel commands that are useful for:
     * querying,
@@ -26,7 +26,7 @@
 Bazel loads specific external dependencies, such as various language toolchains.
 It uses them to build OpenTitan targets (like it does with bazel\_embedded) or to satisfy dependencies (as it does with abseil).
 To produce increasingly stable releases, the `WORKSPACE` file attempts to fix all external `http_archive` dependencies to a specific SHA.
-As we add more dependencies to the workspace, builds and tests will become less sensitive to external updates, and we will vastly simplify the [Getting Started]({{< relref "getting_started" >}}) instructions.
+As we add more dependencies to the workspace, builds and tests will become less sensitive to external updates, and we will vastly simplify the [Getting Started](https://docs.opentitan.org/doc/guides/getting_started/) instructions.
 
 ## BUILD files
 
@@ -48,7 +48,7 @@
   ```console
   bazel clean --expunge
   ```
-  Note: you should rarely need to run this, see [below]({{< relref "#troubleshooting-builds" >}}) for when this may be necessary.
+  Note: you should rarely need to run this; see [below](#troubleshooting-builds) for when this may be necessary.
 
 # Locating Build Artifacts
 
@@ -257,7 +257,7 @@
 
 Create a `.bazelrc` file in your home directory to simplify executing bazel commands.
 For example, you can use a `.bazelrc` to:
-* set up a [disk cache]({{< relref "#disk-cache" >}}), or
+* set up a [disk cache](#disk-cache), or
 * skip running tests on the CW310 FPGA if you do not have one.
 
 A `.bazelrc` file that would accomplish this would look like:
@@ -302,7 +302,7 @@
 bazel build //... --disk_cache=~/.cache/bazel-disk-cache
 ```
 will cache all built artifacts.
-Alternatively add the following to `$HOME/.bazelrc` to avoid having automatically use the disk cache on every Bazel invocation, as shown [above]({{< relref "#create-a-bazelrc-file" >}}).
+Alternatively, add the following to `$HOME/.bazelrc` to avoid having to specify the disk cache on every Bazel invocation, as shown [above](#create-a-bazelrc-file).
 
 Note that Bazel does not perform any garbage collection on the disk cache.
 To clean out the disk cache, you can set a cron job to periodically delete all files that have not been accessed for a certain amount of time.
@@ -347,7 +347,7 @@
 Bazel was not optimized for the `git` worktree workflow, but using worktrees can help with branch management and provides the advantage of being able to run multiple Bazel jobs simultaneously.
 Here are some tips that can improve the developer experience when using worktrees.
 
-1. Follow the [instructions above]({{< relref "#disk-cache" >}}) to enable the disk cache.
+1. Follow the [instructions above](#disk-cache) to enable the disk cache.
   Bazel uses the workspace's path when caching actions.
   Because each worktree is a separate workspace at a different path, different worktrees cannot share an action cache.
   They can, however, share a disk cache, which helps avoid rebuilding the same artifacts across different worktrees.
diff --git a/doc/contributing/detailed_contribution_guide/README.md b/doc/contributing/detailed_contribution_guide/README.md
index f78ec3d..af81d3f 100644
--- a/doc/contributing/detailed_contribution_guide/README.md
+++ b/doc/contributing/detailed_contribution_guide/README.md
@@ -3,7 +3,7 @@
 ---
 
 The way we work on OpenTitan is very similar to what is done in other collaborative open-source projects.
-For a brief overview see [Contributing to OpenTitan]({{< relref "contributing.md" >}}).
+For a brief overview see [Contributing to OpenTitan](../README.md).
 This document provides a detailed reference on how we collaborate within the OpenTitan project and is organized as follows:
 * [Communication](#communication)
 * [Working with Issues](#working-with-issues)
@@ -137,11 +137,11 @@
 All source code contributions must adhere to project style guides.
 We use separate style guides for different languages:
 * [SystemVerilog](https://github.com/lowRISC/style-guides/blob/master/VerilogCodingStyle.md)
-* [C/C++]({{< relref "c_cpp_coding_style.md" >}})
-* [Python]({{< relref "python_coding_style.md" >}})
-* [Markdown]({{< relref "markdown_usage_style.md" >}})
-* [Hjson]({{< relref "hjson_usage_style.md" >}})
-* [RISC-V Assembly]({{< relref "asm_coding_style.md" >}})
+* [C/C++](../style_guides/c_cpp_coding_style.md)
+* [Python](../style_guides/python_coding_style.md)
+* [Markdown](../style_guides/markdown_usage_style.md)
+* [Hjson](../style_guides/hjson_usage_style.md)
+* [RISC-V Assembly](../style_guides/asm_coding_style.md)
 
 If unsure about the style, be consistent with the existing code and do your best to match its style.
 
@@ -234,7 +234,7 @@
 
 Committers are the only people in the project that can definitively approve contributions for inclusion.
 
-See the [Committers]({{< relref "committers.md" >}}) definition and role description for a fuller explanation.
+See the [Committers](../../project_governance/committers.md) definition and role description for a fuller explanation.
 
 ## Labeling and assigning PRs
 
@@ -289,7 +289,7 @@
   This means that many people will see the same issue from different viewpoints.
   Always be friendly and patient and remember to adhere to our [code of conduct](https://www.lowrisc.org/code-of-conduct/).
 
-See also: [reviewing pull requests on GitHub](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/reviewing-proposed-changes-in-a-pull-request) and the [OpenTitan Commit Escalation Guidelines]({{< relref "committers.md#commit-escalation-guidelines" >}}).
+See also: [reviewing pull requests on GitHub](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/reviewing-proposed-changes-in-a-pull-request) and the [OpenTitan Commit Escalation Guidelines](../../project_governance/committers.md#commit-escalation-guidelines).
 
 ## How to receive a code review?
 
diff --git a/doc/contributing/dv/methodology/README.md b/doc/contributing/dv/methodology/README.md
index a1693d4..1272ce2 100644
--- a/doc/contributing/dv/methodology/README.md
+++ b/doc/contributing/dv/methodology/README.md
@@ -259,7 +259,7 @@
 
 ## Assertions
 
-In DV, we follow the same assertion methodology as indicated in the [design methodology]({{< relref "./design.md#assertion-methodology" >}}).
+In DV, we follow the same assertion methodology as indicated in the [design methodology](../../hw/design.md#assertion-methodology).
 Wherever possible, the assertion monitors developed for FPV are reused in UVM testbenches when running dynamic simulations.
 An example of this is the [TLUL Protocol Checker](../../../../hw/ip/tlul/doc/TlulProtocolChecker.md).
 
@@ -585,7 +585,7 @@
 Therefore, a common verification framework is built up in the DV base libraries.
 The following common countermeasures can be either automatically or semi-automatically verified by this framework.
 
-1. Countermeasures using common primitives can be verified by the [Security Countermeasure Verification Framework]({{< relref "sec_cm_dv_framework" >}}).
+1. Countermeasures using common primitives can be verified by the [Security Countermeasure Verification Framework](../sec_cm_dv_framework/README.md).
 2. The following common countermeasures can be verified by cip_lib.
 The steps to enable them are described in the cip_lib [document](../../../../hw/dv/sv/cip_lib/README.md#security-verification-in-cip_lib).
   - Bus integrity
@@ -631,7 +631,7 @@
 ## Getting Started with DV
 
 The process for getting started with DV involves many steps, including getting clarity on its purpose, setting up the testbench, documentation, etc.
-These are discussed in the [Getting Started with DV](../../../guides/getting_started/src/setup_dv.md) document.
+These are discussed in the [Getting Started with DV](https://docs.opentitan.org/doc/guides/getting_started/src/setup_dv.md) document.
 
 ## Pending Work Items
 
diff --git a/doc/contributing/hw/comportability/README.md b/doc/contributing/hw/comportability/README.md
index a5587bb..7c7b95a 100644
--- a/doc/contributing/hw/comportability/README.md
+++ b/doc/contributing/hw/comportability/README.md
@@ -79,7 +79,7 @@
 ## Comportable Peripheral Definition
 
 All comportable IP peripherals must adhere to a minimum set of functionality in order to be compliant with the framework that is going to be set around it.
-(An example framework is the [earlgrey top level design](../../hw/top_earlgrey/doc/top_earlgrey.md).)
+(An example framework is the [earlgrey top level design](../../../../hw/top_earlgrey/README.md).)
 This includes several mandatory features as well as several optional ones.
 It is notable that the framework contains designs that are neither the local host processor nor peripherals \- for example the power management unit, clock generators.
 These are handled as special case designs with their own specifications.
@@ -446,7 +446,7 @@
 Each item is a dictionary with keys `name` and `desc`.
 The `desc` field is a human-readable description of the countermeasure.
 The `name` field should be either of the form `ASSET.CM_TYPE` or `INSTANCE.ASSET.CM_TYPE`.
-Here, `ASSET` and `CM_TYPE` should be one of the values given in the tables in the [Security countermeasures]({{< relref "#countermeasures" >}}) section.
+Here, `ASSET` and `CM_TYPE` should be one of the values given in the tables in the [Security countermeasures](#countermeasures) section.
 If specified, `INSTANCE` should name a submodule of the IP block holding the asset.
 It can be used to disambiguate in situations such as where there are two different keys that are protected with different countermeasures.
 
@@ -645,7 +645,7 @@
 Recoverable alerts must be prefixed with `recov_*`, whereas fatal alerts must be prefixed with `fatal_*`.
 For instance, an uncorrectable parity error in SRAM could be named `fatal_parity_error`.
 
-In cases where many diverse alert sources are bundled into one alert event (see [Alert Hardware Implementation]({{< relref "#alert-hardware-implementation" >}})), it may sometimes be difficult to assign the alert event a meaningful and descriptive name.
+In cases where many diverse alert sources are bundled into one alert event (see [Alert Hardware Implementation](#alert-hardware-implementation)), it may sometimes be difficult to assign the alert event a meaningful and descriptive name.
 In such cases, it is permissible to default the alert names to just `recov` and/or `fatal`.
 Note that this implies that the peripheral does not expose more than one alert for that type.
 
diff --git a/doc/contributing/hw/methodology.md b/doc/contributing/hw/methodology.md
index 8d4926d..e62829c 100644
--- a/doc/contributing/hw/methodology.md
+++ b/doc/contributing/hw/methodology.md
@@ -199,7 +199,7 @@
 
 ## Automatic SV Code Formatting using Verible (Open Source)
 
-The open source Verible tool used for [style linting]({{< relref "#style-linting-using-verible-open-source" >}}) also supports an automatic code formatting mode for SystemVerilog.
+The open source Verible tool used for [style linting](#style-linting-using-verible-open-source) also supports an automatic code formatting mode for SystemVerilog.
 The formatter follows our [Verilog Style Guide](https://github.com/lowRISC/style-guides/blob/master/VerilogCodingStyle.md) and helps reduce manual code alignment steps.
 
 Note that this formatter is still under development and not entirely production ready yet due to some remaining formatting bugs and discrepancies - hence automatic code formatting is not enforced in CI at this point.
@@ -226,7 +226,7 @@
 ## FPGA vs Silicon
 
 One output of the OpenTitan project will be silicon instantiations of hardware functionality described in this open source repository.
-The RTL repository defines design functionality at a level satisfactory to prove the hardware and software functionality in an FPGA (see [user guides]({{< relref "doc/ug" >}})).
+The RTL repository defines design functionality at a level satisfactory to prove the hardware and software functionality in an FPGA (see [user guides](https://docs.opentitan.org/doc/guides/getting_started/)).
 That level is so-called "tapeout ready".
 Once the project reaches that milestone, the team will work with a vendor or vendors to ensure a trustworthy, industry-quality, fully functional OpenTitan chip is manufactured.
 
diff --git a/doc/guides/getting_started/src/README.md b/doc/guides/getting_started/src/README.md
index 9d95009..5fb9b8b 100644
--- a/doc/guides/getting_started/src/README.md
+++ b/doc/guides/getting_started/src/README.md
@@ -28,7 +28,7 @@
 ```
 
 If you wish to *contribute* to OpenTitan you will need to make a fork on GitHub and may wish to clone the fork instead.
-We have some [notes for using GitHub]({{< relref "github_notes.md" >}}) which explain how to work with your own fork (and perform many other GitHub tasks) in the OpenTitan context.
+We have some [notes for using GitHub](https://opentitan.org/book/doc/contributing/github_notes.md) which explain how to work with your own fork (and perform many other GitHub tasks) in the OpenTitan context.
 
 ***Note: throughout the documentation `$REPO_TOP` refers to the path where the OpenTitan repository is checked out.***
 Unless you've specified some other name in the clone, `$REPO_TOP` will be a directory called `opentitan`.
@@ -189,7 +189,7 @@
 ### Step 6a: Install Verible (optional)
 
 Verible is an open source SystemVerilog style linter and formatting tool.
-The style linter is relatively mature and we use it as part of our [RTL design flow](../../../contributing/hw/methodology.md).
+The style linter is relatively mature and we use it as part of our [RTL design flow](https://opentitan.org/book/doc/contributing/hw/methodology.md).
 The formatter is still under active development, and hence its usage is more experimental in OpenTitan.
 
 You can download and build Verible from scratch as explained on the [Verible GitHub page](https://github.com/google/verible/).
@@ -245,17 +245,16 @@
 If you are interested in these, check out the additional resources below.
 
 ### General
-* [Documentation Index]({{< relref "doc/_index.md" >}})
-* [Directory Structure](../../../contributing/directory_structure.md)
-* [GitHub Notes](../../../contributing/github_notes.md)
+* [Directory Structure](https://opentitan.org/book/doc/contributing/directory_structure.md)
+* [GitHub Notes](https://opentitan.org/book/doc/contributing/github_notes.md)
 * [Building Documentation](./build_docs.md)
-* [Design Methodology within OpenTitan](../../../contributing/hw/methodology.md)
+* [Design Methodology within OpenTitan](https://opentitan.org/book/doc/contributing/hw/methodology.md)
 
 ### Hardware
-* [Designing Hardware](../../../contributing/hw/design.md)
-* [OpenTitan Hardware](../../../../hw/README.md)
+* [Designing Hardware](https://opentitan.org/book/doc/contributing/hw/design.md)
+* [OpenTitan Hardware](https://opentitan.org/book/hw/README.md)
 
 ### Software
-* [OpenTitan Software](../../../../sw/README.md)
-* [Writing and Building Software for OTBN]({{< relref "otbn_sw.md" >}})
-* [Rust for Embedded C Programmers]({{< relref "rust_for_c.md" >}})
+* [OpenTitan Software](https://opentitan.org/book/sw/README.md)
+* [Writing and Building Software for OTBN](https://opentitan.org/book/doc/contributing/sw/otbn_sw.md)
+* [Rust for Embedded C Programmers](https://opentitan.org/book/doc/rust_for_c_devs.md)
diff --git a/doc/guides/getting_started/src/build_docs.md b/doc/guides/getting_started/src/build_docs.md
index a0d8fd0..7a16260 100644
--- a/doc/guides/getting_started/src/build_docs.md
+++ b/doc/guides/getting_started/src/build_docs.md
@@ -4,7 +4,7 @@
 
 The documentation for OpenTitan is available [online](https://docs.opentitan.org).
 The creation of documentation is mainly based around the conversion from Markdown to HTML files with [Hugo](https://gohugo.io/).
-Rules for how to write correct Markdown files can be found in the [reference manual](../../../contributing/style_guides/markdown_usage_style.md).
+Rules for how to write correct Markdown files can be found in the [reference manual](https://opentitan.org/book/doc/contributing/style_guides/markdown_usage_style.md).
 
 ## Building locally
 
diff --git a/doc/guides/getting_started/src/build_sw.md b/doc/guides/getting_started/src/build_sw.md
index 918f2c9..0fb2a48 100644
--- a/doc/guides/getting_started/src/build_sw.md
+++ b/doc/guides/getting_started/src/build_sw.md
@@ -5,9 +5,9 @@
 ---
 
 _Before following this guide, make sure you have read the_:
-* main [Getting Started]({{< relref "getting_started" >}}) instructions,
+* main [Getting Started](README.md) instructions,
 * install Verilator section of the [Verilator guide](./setup_verilator.md), and
-* [OpenTitan Software](../../../../sw/README.md) documentation.
+* [OpenTitan Software](https://opentitan.org/book/sw/README.md) documentation.
 
 All OpenTitan software is built with [Bazel](https://bazel.build/).
 Additionally, _most_ tests may be run with Bazel too.
@@ -23,9 +23,9 @@
 Future builds will be much faster; go get a coffee and come back later.
 
 If the test worked, your OpenTitan setup is functional; you can build the software and run on-device tests using the Verilator simulation tool.
-See [Running Tests with Bazel]({{< relref "#running-tests with Bazel" >}}) for information on how to find and run other tests.
+See [Running Tests with Bazel](#running-tests-with-bazel) for information on how to find and run other tests.
 
-If the test didn't work, read the full guide, especially the [Troubleshooting]({{< relref "#troubleshooting" >}}) section.
+If the test didn't work, read the full guide, especially the [Troubleshooting](#troubleshooting) section.
 
 ## Installing Bazel
 
@@ -98,12 +98,12 @@
 On-host tests are compiled and run on the host machine, while on-device tests are compiled and run on (simulated/emulated) OpenTitan hardware.
 
 Examples of on-host tests are:
-* unit tests for device software, such as [DIF](../../../../sw/device/lib/dif/README.md) and [ROM](../../../../sw/device/silicon_creator/rom/README.md) unit tests.
+* unit tests for device software, such as [DIF](https://opentitan.org/book/sw/device/lib/dif/README.md) and [ROM](https://opentitan.org/book/sw/device/silicon_creator/rom/README.md) unit tests.
 * any test for host software, such as `opentitan{lib,tool}`.
 
 Examples of on-device tests are:
-* [chip-level tests](../../../../sw/device/tests/README.md).
-* [ROM functional tests](../../../../sw/device/silicon_creator/rom/README.md)
+* [chip-level tests](https://opentitan.org/book/sw/device/tests/README.md).
+* [ROM functional tests](https://opentitan.org/book/sw/device/silicon_creator/rom/README.md)
 
 Test target names normally match file names (for instance, `//sw/device/tests:uart_smoketest` corresponds to `sw/device/test/uart_smoketest.c`).
 You can see all tests available under a given directory using `bazel query`, e.g.:
@@ -165,7 +165,7 @@
 
 ### Running on-host DIF Tests
 
-The Device Interface Function or [DIF](../../../../sw/device/lib/dif/README.md) libraries contain unit tests that run on the host and are built and run with Bazel.
+The Device Interface Function or [DIF](https://opentitan.org/book/sw/device/lib/dif/README.md) libraries contain unit tests that run on the host and are built and run with Bazel.
 As shown below, you may use Bazel to query which tests are available, build and run all tests, or build and run only one test.
 
 #### Querying which tests are available
@@ -186,7 +186,7 @@
 
 ### Running on-host ROM Tests
 
-Similar to the DIF libraries, you can query, build, and run all the [ROM](../../../../sw/device/silicon_creator/rom/README.md) unit tests (which also run on the host) with Bazel.
+Similar to the DIF libraries, you can query, build, and run all the [ROM](https://opentitan.org/book/sw/device/silicon_creator/rom/README.md) unit tests (which also run on the host) with Bazel.
 
 #### Querying which (on-host) tests are available
 Note, the ROM has both on-host and on-device tests.
@@ -210,7 +210,7 @@
 
 ### Bazel-built Software Artifacts
 
-As described in the [OpenTitan Software](../../../../sw/README.md) documentation, there are three categories of OpenTitan software, all of which are built with Bazel. These include:
+As described in the [OpenTitan Software](https://opentitan.org/book/sw/README.md) documentation, there are three categories of OpenTitan software, all of which are built with Bazel. These include:
 1. _device_ software,
 1. _OTBN_ software,
 1. _host_ software,
diff --git a/doc/guides/getting_started/src/setup_dv.md b/doc/guides/getting_started/src/setup_dv.md
index 7b3b683..354335a 100644
--- a/doc/guides/getting_started/src/setup_dv.md
+++ b/doc/guides/getting_started/src/setup_dv.md
@@ -4,15 +4,15 @@
     - /doc/ug/getting_started_dv
 ---
 
-_Before following this guide, make sure you've followed the [dependency installation and software build instructions]({{< relref "getting_started" >}})._
+_Before following this guide, make sure you've followed the [dependency installation and software build instructions](https://opentitan.org/book/doc/guides/getting_started/)._
 
 This document aims to enable a contributor to get started with a design verification (DV) effort within the OpenTitan project.
 While most of the focus is on development of a testbench from scratch, it should also be useful to understand how to contribute to an existing effort.
-Please refer to the [DV methodology](../../../contributing/dv/methodology/README.md) document for information on how design verification is done in OpenTitan.
+Please refer to the [DV methodology](https://opentitan.org/book/doc/contributing/dv/methodology/README.md) document for information on how design verification is done in OpenTitan.
 
 ## Stages of DV
 
-The life stages of a design / DV effort within the OpenTitan are described in the [Hardware Development Stages](../../../project_governance/development_stages.md) document.
+The life stages of a design / DV effort within the OpenTitan are described in the [Hardware Development Stages](https://opentitan.org/book/doc/project_governance/development_stages.md) document.
 It separates the life of DV into three broad stages: Initial Work, Under Test and Testing Complete.
 This document attempts to give guidance on how to get going with the first stage and have a smooth transition into the Under Test stage.
 They are not hard and fast rules but methods we have seen work well in the project.
@@ -21,9 +21,9 @@
 
 ## Getting Started
 
-The very first thing to do in any DV effort is to [document the plan](../../../contributing/dv/methodology/README.md#documentation) detailing the overall effort.
+The very first thing to do in any DV effort is to [document the plan](https://opentitan.org/book/doc/contributing/dv/methodology/README.md#documentation) detailing the overall effort.
 This is done in conjunction with developing the initial testbench.
-It is recommended to use the [uvmdvgen](../../../../util/uvmdvgen/README.md) tool, which serves both needs.
+It is recommended to use the [uvmdvgen](https://opentitan.org/book/util/uvmdvgen/README.md) tool, which serves both needs.
 
 The `uvmdvgen` tool provides the ability to generate the outputs in a specific directory.
 This should be set to the root of the DUT directory where the `rtl` directory exists.
@@ -35,20 +35,20 @@
 
 ## Documentation and Initial Review
 
-The skeleton [DV document](../../../contributing/dv/methodology/README.md#dv-document) and the [Hjson testplan](../../../contributing/dv/methodology/README.md#testplan) should be addressed first.
+The skeleton [DV document](https://opentitan.org/book/doc/contributing/dv/methodology/README.md#dv-document) and the [Hjson testplan](https://opentitan.org/book/doc/contributing/dv/methodology/README.md#testplan) should be addressed first.
 The DV documentation is not expected to be completed in full detail at this point.
 However, it is expected to list all the verification components needed and depict the planned testbench as a block diagram.
 Under the 'design verification' directory in the OpenTitan team drive, some sample testbench block diagrams are available in the `.svg` format, which can be used as a template.
 The Hjson testplan, on the other hand, is required to be completed.
-Please refer to the [testplanner tool](../../../../util/dvsim/README.md) documentation for additional details on how to write the Hjson testplan.
+Please refer to the [testplanner tool](https://opentitan.org/book/util/dvsim/README.md) documentation for additional details on how to write the Hjson testplan.
 Once done, these documents are to be reviewed with the designer(s) and other project members for completeness and clarity.
 
 ## UVM RAL Model
 
-Before running any test, the [UVM RAL model](../../../contributing/dv/methodology/README.md#uvm-register-abstraction-layer-ral-model) needs to exist (if the design contains CSRs).
-The [DV simulation flow](../../../../util/dvsim/README.md) has been updated to generate the RAL model automatically at the start of the simulation.
+Before running any test, the [UVM RAL model](https://opentitan.org/book/doc/contributing/dv/methodology/README.md#uvm-register-abstraction-layer-ral-model) needs to exist (if the design contains CSRs).
+The [DV simulation flow](https://opentitan.org/book/util/dvsim/README.md) has been updated to generate the RAL model automatically at the start of the simulation.
 As such, nothing extra needs to be done.
-It can be created manually by invoking [`regtool`](../../../../util/reggen/doc/setup_and_use.md):
+It can be created manually by invoking [`regtool`](https://opentitan.org/book/util/reggen/doc/setup_and_use.md):
 ```console
 $ util/regtool.py -s -t /path-to-dv /path-to-module/data/<dut>.hjson
 ```
@@ -58,7 +58,7 @@
 ## Supported Simulators
 
 The use of advanced verification constructs such as SystemVerilog classes (on which UVM is based) requires commercial simulators.
-The [DV simulation flow](../../../../util/dvsim/README.md) fully supports Synopsys VCS.
+The [DV simulation flow](https://opentitan.org/book/util/dvsim/README.md) fully supports Synopsys VCS.
 There is support for Cadence Xcelium as well, which is being slowly ramped up.
 
 ## Building and Running Tests
@@ -78,19 +78,19 @@
 To dump waves from the simulation, pass the `--waves <format>` argument to `dvsim.py`.
 If you are using Verdi for waveform viewing, then '--waves fsdb' is probably the best option. For use with other viewers, '--waves shm' is probably the best choice for Xcelium, and '--waves vpd' with vcs.
 
-Please refer to the [DV simulation flow](../../../../util/dvsim/README.md) for additional details.
+Please refer to the [DV simulation flow](https://opentitan.org/book/util/dvsim/README.md) for additional details.
 
 The `uvmdvgen` script also enables the user to run the full suite of CSR tests, if the DUT does have CSRs in it.
 The most basic CSR power-on-reset check test can be run by invoking:
 ```console
 $ util/dvsim/dvsim.py path/to/<dut>_sim_cfg.hjson -i <dut>_csr_hw_reset [--waves <format>] [--tool xcelium]
 ```
-Please refer to [CSR utilities](../../../../hw/dv/sv/csr_utils/README.md) for more information on how to add exclusions for the CSR tests.
+Please refer to [CSR utilities](https://opentitan.org/book/hw/dv/sv/csr_utils/README.md) for more information on how to add exclusions for the CSR tests.
 
 ## Full DV
 
-Running the sanity and CSR suite of tests while making progress toward reaching the [V1 stage](../../../project_governance/development_stages.md#hardware-verification-stages) should provide a good reference in terms of how to develop tests as outlined in the testplan and running and debugging them.
-Please refer to the [checklist](../../../project_governance/checklist/README.md) to understand the key requirements for progressing through the subsequent verification stages and final signoff.
+Running the sanity and CSR suite of tests while making progress toward reaching the [V1 stage](https://opentitan.org/book/doc/project_governance/development_stages.md#hardware-verification-stages) should provide a good reference in terms of how to develop tests as outlined in the testplan and running and debugging them.
+Please refer to the [checklist](https://opentitan.org/book/doc/project_governance/checklist/README.md) to understand the key requirements for progressing through the subsequent verification stages and final signoff.
 
 The [UART DV](https://github.com/lowRISC/opentitan/tree/master/hw/ip/uart/dv) area can be used as a canonical example for making progress.
 If it is not clear on how to proceed, feel free to file an issue requesting assistance.
diff --git a/doc/guides/getting_started/src/setup_formal.md b/doc/guides/getting_started/src/setup_formal.md
index 3661bcd..c14d7d1 100644
--- a/doc/guides/getting_started/src/setup_formal.md
+++ b/doc/guides/getting_started/src/setup_formal.md
@@ -4,12 +4,12 @@
     - /doc/ug/getting_started_formal
 ---
 
-_Before following this guide, make sure you've followed the [dependency installation and software build instructions]({{< relref "getting_started" >}})._
+_Before following this guide, make sure you've followed the [dependency installation and software build instructions](README.md)._
 
 This document aims to enable a contributor to get started with a formal verification effort within the OpenTitan project.
 While most of the focus is on development of a testbench from scratch, it should also be useful to understand how to contribute to an existing effort.
 
-Please refer to the [OpenTitan Assertions](../../../../hw/formal/README.md) for information on how formal verification is done in OpenTitan.
+Please refer to the [OpenTitan Assertions](https://opentitan.org/book/hw/formal/README.md) for information on how formal verification is done in OpenTitan.
 
 ## Formal property verification (FPV)
 
@@ -17,15 +17,15 @@
 There are three sets of FPV jobs in OpenTitan. They are all under the directory `hw/top_earlgrey/formal`.
 * `top_earlgrey_fpv_ip_cfgs.hjson`: List of IP targets.
 * `top_earlgrey_fpv_prim_cfgs.hjson`: List of prim targets (such as counters, fifos, etc) that are usually imported by an IP.
-* `top_earlgrey_fpv_sec_cm_cfgs.hjson`: List of IPs that contains standard security countermeasure assertions. This FPV environment only proves these security countermeasure assertions. Detailed description of this FPV use case is documented in [Running FPV on security blocks for common countermeasure primitives](../../../../hw/formal/README.md#running-fpv-on-security-blocks-for-common-countermeasure-primitives).
+* `top_earlgrey_fpv_sec_cm_cfgs.hjson`: List of IPs that contain standard security countermeasure assertions. This FPV environment only proves these security countermeasure assertions. A detailed description of this FPV use case is documented in [Running FPV on security blocks for common countermeasure primitives](https://opentitan.org/book/hw/formal/README.md#running-fpv-on-security-blocks-for-common-countermeasure-primitives).
 
-To automatically create a FPV testbench, it is recommended to use the [fpvgen](../../../../util/fpvgen/README.md) tool to create a template.
+To automatically create a FPV testbench, it is recommended to use the [fpvgen](https://opentitan.org/book/util/fpvgen/README.md) tool to create a template.
 To run the FPV tests in `dvsim`, please add the target to the corresponding `top_earlgrey_fpv_{category}_cfgs.hjson` file, then run with the command:
 ```console
 util/dvsim/dvsim.py hw/top_earlgrey/formal/top_earlgrey_fpv_{category}_cfgs.hjson --select-cfgs {target_name}
 ```
 
-It is recommended to add the FPV target to [lint](../../../../hw/lint/README.md) script `hw/top_earlgrey/lint/top_earlgrey_fpv_lint_cfgs.hjson` to quickly find typos.
+It is recommended to add the FPV target to [lint](https://opentitan.org/book/hw/lint/README.md) script `hw/top_earlgrey/lint/top_earlgrey_fpv_lint_cfgs.hjson` to quickly find typos.
 
 ## Formal connectivity verification
 
diff --git a/doc/guides/getting_started/src/setup_fpga.md b/doc/guides/getting_started/src/setup_fpga.md
index f787ff3..c55f1e3 100644
--- a/doc/guides/getting_started/src/setup_fpga.md
+++ b/doc/guides/getting_started/src/setup_fpga.md
@@ -4,7 +4,7 @@
     - /doc/ug/getting_started_fpga
 ---
 
-_Before following this guide, make sure you've followed the [dependency installation and software build instructions]({{< relref "getting_started" >}})._
+_Before following this guide, make sure you've followed the [dependency installation and software build instructions](README.md)._
 
 Do you want to try out OpenTitan, but don't have a couple thousand or million dollars ready for an ASIC tapeout?
 Running OpenTitan on an FPGA board can be the answer!
@@ -19,7 +19,7 @@
 Depending on the design/target combination that you want to synthesize you will need different tools and boards.
 Refer to the design documentation for information on what exactly is needed.
 
-* [Obtain an FPGA board]({{< relref "fpga_boards.md" >}})
+* [Obtain an FPGA board](https://opentitan.org/book/doc/contributing/fpga/get_a_board.md)
 
 ## Obtain an FPGA bitstream
 
@@ -42,7 +42,7 @@
 By default, the bitstream is built with a version of the boot ROM used for testing (called the _test ROM_; pulled from `sw/device/lib/testing/test_rom`).
 There is also a version of the boot ROM used in production (called the _ROM_; pulled from `sw/device/silicon_creator/rom`).
 When the bitstream cache is used in bazel flows, the ROMs from the cache are not used.
-Instead, the bazel-built ROMs are spliced into the image to create new bitstreams, using the mechanism described in the [FPGA Reference Manual]({{< relref "ref_manual_fpga.md#boot-rom-development" >}}).
+Instead, the bazel-built ROMs are spliced into the image to create new bitstreams, using the mechanism described in the [FPGA Reference Manual](https://opentitan.org/book/doc/contributing/fpga/ref_manual_fpga.md#boot-rom-development).
 The metadata for the latest bitstream (the approximate creation time and the associated commit hash) is also available as a text file and can be [downloaded separately](https://storage.googleapis.com/opentitan-bitstreams/master/latest.txt).
 
 ### Using the `@bitstreams` repository
@@ -152,7 +152,7 @@
 bazel run //sw/host/opentitantool -- --interface=cw310 fpga set-pll
 ```
 
-Check that it's working by [running the demo]({{< relref "#hello-world-demo" >}}) or a test, such as the `uart_smoketest` below.
+Check that it's working by [running the demo](#hello-world-demo) or a test, such as the `uart_smoketest` below.
 ```console
 cd $REPO_TOP
 bazel test --test_output=streamed //sw/device/tests:uart_smoketest_fpga_cw310_test_rom
@@ -416,7 +416,7 @@
 
 The above will print out the contents of the registers upon success.
 Note that you should have the RISC-V toolchain installed and on your `PATH`.
-For example, if you followed the [Getting Started]({{< relref "getting_started#step-3-install-the-lowrisc-risc-v-toolchain" >}}) instructions, then make sure `/tools/riscv/bin` is on your `PATH`.
+For example, if you followed the [Getting Started](README.md#step-3-install-the-lowrisc-risc-v-toolchain) instructions, then make sure `/tools/riscv/bin` is on your `PATH`.
 
 #### Common operations with GDB
 
diff --git a/doc/guides/getting_started/src/setup_verilator.md b/doc/guides/getting_started/src/setup_verilator.md
index 2aa8db0..91291f2 100644
--- a/doc/guides/getting_started/src/setup_verilator.md
+++ b/doc/guides/getting_started/src/setup_verilator.md
@@ -4,7 +4,7 @@
     - /doc/ug/getting_started_verilator
 ---
 
-_Before following this guide, make sure you've followed the [dependency installation instructions]({{< relref "getting_started" >}})._
+_Before following this guide, make sure you've followed the [dependency installation instructions](README.md)._
 
 ## About Verilator
 
diff --git a/doc/introduction.md b/doc/introduction.md
index 4733f07..24ec2fc 100644
--- a/doc/introduction.md
+++ b/doc/introduction.md
@@ -12,7 +12,7 @@
 ## Getting Started
 
 To get started with OpenTitan, see the [Getting Started](./guides/getting_started/src/README.md) page.
-For additional resources when working with OpenTitan, see the [list of user guides]({{< relref "doc/ug" >}}).
+For additional resources when working with OpenTitan, see the [list of user guides](https://docs.opentitan.org/doc/guides/getting_started/).
 For details on coding styles or how to use our project-specific tooling, see the [reference manuals](../util/README.md).
 Lastly, the [Hardware Dashboard page](../hw/README.md) contains technical documentation on the SoC, the Ibex processor core, and the individual IP blocks.
 For questions about how the project is organized, see the [project](./project_governance/README.md) landing spot for more information.
@@ -35,7 +35,7 @@
 ## Development
 
 * [Getting Started](./guides/getting_started/src/README.md)
-* [User Guides]({{< relref "doc/ug" >}})
+* [User Guides](https://docs.opentitan.org/doc/guides/getting_started/)
 * [Reference Manuals](../util/README.md)
 * [Style Guides](./contributing/style_guides/README.md)
 
diff --git a/doc/project_governance/checklist/README.md b/doc/project_governance/checklist/README.md
index f07e9f8..d2fb80e 100644
--- a/doc/project_governance/checklist/README.md
+++ b/doc/project_governance/checklist/README.md
@@ -141,7 +141,7 @@
 
 ### SEC_CM_DOCUMENTED
 
-Any custom security countermeasures other than standardized countermeasures listed under [SEC_CM_IMPLEMENTED]({{< relref "#sec_cm_implemented" >}})  have been identified, documented and their implementation has been planned.
+Any custom security countermeasures other than standardized countermeasures listed under [SEC_CM_IMPLEMENTED](#sec_cm_implemented) have been identified, documented and their implementation has been planned.
 The actual implementation can be delayed until D2S.
 
 Where the area impact of countermeasures can be reliably estimated, it is recommended to insert dummy logic at D2 in order to better reflect the final area complexity of the design.
diff --git a/doc/security/specs/attestation/README.md b/doc/security/specs/attestation/README.md
index 9dea444..e8a184d 100644
--- a/doc/security/specs/attestation/README.md
+++ b/doc/security/specs/attestation/README.md
@@ -58,7 +58,7 @@
 attestation key used by the Kernel. This key is endorsed by the Creator Identity,
 but can also be endorsed by the Silicon Owner PKI. Endorsement of the Owner
 Identity with the Owner's PKI, is covered in detail in the
-[Owner Personalization](owner_personalization) process
+[Owner Personalization](../device_provisioning/README.md#owner_personalization) process
 described in the provisioning specification.
 
 When using a Silicon Owner PKI, the Owner is expected to maintain a device
diff --git a/doc/security/specs/ownership_transfer/README.md b/doc/security/specs/ownership_transfer/README.md
index 001eedb..5d2c349 100644
--- a/doc/security/specs/ownership_transfer/README.md
+++ b/doc/security/specs/ownership_transfer/README.md
@@ -20,7 +20,7 @@
 Key derivations will also correspond to the unlocked state rather than the previous owner.
     * The previous owner's public keys are still stored after ownership is unlocked.
 1. The new owner creates a set of keys by any means they wish (doesn't have to be on an OpenTitan device).
-These keys must include an RSA-3072 key pair for secure boot, an ECDSA-P256 key pair for unlocking ownership, and another ECDSA-P256 key pair for endorsing the next owner (see [Owner Keys]({{< relref "#owner-keys" >}})).
+These keys must include an RSA-3072 key pair for secure boot, an ECDSA-P256 key pair for unlocking ownership, and another ECDSA-P256 key pair for endorsing the next owner (see [Owner Keys](#owner-keys)).
     * The number of redundant keys stored is configurable up to around 2kiB of total key material.
 The new owner may choose, for example, to generate several RSA public/private key pairs.
 1. The new owner gives their public keys to the previous owner (which may be the manufacturer), who endorses them with a cryptographic signature.
@@ -30,7 +30,7 @@
 During boot (specifically during the `ROM_EXT` stage), the device will check the signature on the payload against the endorser's public key.
     * If the endorser is the previous owner, the device checks against the inactive public keys in the Silicon Owner slot.
     * If the endorser is the Silicon Creator, the device checks against public keys which have been injected during manufacturing.
-1. If the signature is valid, then `ROM_EXT` will set up the new owner as the current owner by randomly generating a new Silicon Owner "root secret" (which is used to derive the Silicon Owner identity key for attestation) and writing the new owner's public keys to one of the two slots described in [Key Provisioning]({{< relref "#key-provisioning" >}}).
+1. If the signature is valid, then `ROM_EXT` will set up the new owner as the current owner by randomly generating a new Silicon Owner "root secret" (which is used to derive the Silicon Owner identity key for attestation) and writing the new owner's public keys to one of the two slots described in [Key Provisioning](#key-provisioning).
     * The previous owner's public keys are still on the device and it is still in `UNLOCKED_OWNERSHIP` until the new owner's code successfully boots.
 This is a protection in case there is a problem with the new owner's keys or they have been tampered with in transit.
 1. At this point, the device should be restarted with the new owner's code.
diff --git a/doc/security/specs/secure_boot/README.md b/doc/security/specs/secure_boot/README.md
index 8675273..372493f 100644
--- a/doc/security/specs/secure_boot/README.md
+++ b/doc/security/specs/secure_boot/README.md
@@ -197,4 +197,4 @@
 [ot-unlock-flow]: #
 [ownership-transfer]: ../ownership_transfer/README.md
 [rv-isa-priv]: https://riscv.org/technical/specifications/
-[silicon-creator-keys]: {{< relref "#silicon-creator-keys" >}}
+[silicon-creator-keys]: #silicon-creator-keys
diff --git a/hw/dv/sv/cip_lib/README.md b/hw/dv/sv/cip_lib/README.md
index 2d69b60..64db316 100644
--- a/hw/dv/sv/cip_lib/README.md
+++ b/hw/dv/sv/cip_lib/README.md
@@ -4,7 +4,7 @@
 
 ## Overview
 Going along the lines of what it takes to design an IP that adheres to the
-[Comportability Specifications](/doc/rm/comportability_specification),
+[Comportability Specifications](../../../../doc/contributing/hw/comportability/README.md),
 we attempt to standardize the DV methodology for developing the IP level
 testbench environment as well by following the same approach. This document describes
 the Comportable IP (CIP) library, which is a complete UVM environment framework that
diff --git a/hw/dv/tools/dvsim/testplans/tl_device_access_types_wo_intg_testplan.hjson b/hw/dv/tools/dvsim/testplans/tl_device_access_types_wo_intg_testplan.hjson
index e72060d..be1d1e0 100644
--- a/hw/dv/tools/dvsim/testplans/tl_device_access_types_wo_intg_testplan.hjson
+++ b/hw/dv/tools/dvsim/testplans/tl_device_access_types_wo_intg_testplan.hjson
@@ -13,7 +13,7 @@
       name: tl_d_illegal_access
       desc: '''Drive unsupported requests via TL interface and verify correctness of response
             / behavior. Below error cases are tested based on the
-            [TLUL spec]({{< relref "hw/ip/tlul/doc/_index.md#explicit-error-cases" >}})
+            [TLUL spec](/hw/ip/tlul/README.md#explicit-error-cases)
             - TL-UL protocol error cases
               - invalid opcode
               - some mask bits not set when opcode is `PutFullData`
diff --git a/hw/dv/tools/ralgen/README.md b/hw/dv/tools/ralgen/README.md
index 56d9d74..2d833b4 100644
--- a/hw/dv/tools/ralgen/README.md
+++ b/hw/dv/tools/ralgen/README.md
@@ -74,7 +74,7 @@
 scripts, which are the ones that actually create the RAL package.
 Due to the way those scripts are implemented, RAL packages for the IP level
 testbenches are generated using
-[`reggen`](({{< relref "util/reggen/doc" >}})), and for the chip level
+[`reggen`](../../../../util/reggen/README.md), and for the chip level
 testbench, `util/topgen.py`. Which one to choose is decided by whether
 the `ip_hjson` or `top_hjson` parameter is supplied.
 
diff --git a/hw/formal/README.md b/hw/formal/README.md
index 66c072b..58726fb 100644
--- a/hw/formal/README.md
+++ b/hw/formal/README.md
@@ -44,7 +44,7 @@
 ### ASSERT(__name, __prop, __clk = `ASSERT_DEFAULT_CLK, __rst = `ASSERT_DEFAULT_RST)
 *   This is a shortcut macro for a generic concurrent assignment.
 *   The first argument is the assertion name.
-    It is recommended to follow the [naming conventions]({{< relref "#naming-conventions" >}}).
+    It is recommended to follow the [naming conventions](#naming-conventions).
     The assertion name should be descriptive, which will help during debug.
 *   The second argument is the assertion property.
 *   The last two are optional arguments to specify the clock and reset signals (active-high reset) if different from default value.
@@ -193,7 +193,7 @@
 ## Symbolic Variables
 
 When design has a set of modules or signals that share same properties, symbolic variables can be used to reduce duplicated assertions.
-For example, in the [rv_plic design](../ip_template/rv_plic/doc/_index.md), the array of input `intr_src_i` are signals sharing the same properties.
+For example, in the [rv_plic design](../ip_templates/rv_plic/README.md), the elements of the input array `intr_src_i` are signals sharing the same properties.
 Each `intr_src_i[index]` will trigger the interrupt pending (`ip`) signal depending on whether the corresponding level indicator (`le`) is set to level triggered or edge triggered.
 Without symbolic variables, the above assertions can be implemented as below:
 ```systemverilog
@@ -327,7 +327,7 @@
 * Basic assertions should be implemented directly in the RTL file.
   These basic functional assertions are often inserted by designers to act as a smoke check.
 * Assertions used for the testbench to achieve verification goals should be implemented under the `ip/hw/module_name/fpv/vip` folder.
-  This FPV environment can be automatically generated by the [`fpvgen.py` script](../../util/fpvgen/doc).
+  This FPV environment can be automatically generated by the [`fpvgen.py` script](../../util/fpvgen/README.md).
 * Portable assertions written for common interfaces or submodules should also be implemented under the `ip/hw/submodule_or_interface/fpv/vip` folder.
   These portable assertion collections can be easily reused by other testbench via a bind file.
 
diff --git a/hw/ip/adc_ctrl/README.md b/hw/ip/adc_ctrl/README.md
index e0b40ea..6e4938d 100644
--- a/hw/ip/adc_ctrl/README.md
+++ b/hw/ip/adc_ctrl/README.md
@@ -35,7 +35,7 @@
 
 ## Block Diagram
 
-![ADC_CTRL Block Diagram](adc_overview.svg "image_tooltip")
+![ADC_CTRL Block Diagram](doc/adc_overview.svg)
 
 
 ## Hardware Interface
@@ -214,7 +214,7 @@
 The insertion of this debug accessory into a system can be detected by the ADC controller.
 
 The debug accessory voltage range of interest is shown in the diagram below:
-![Debug Cable Regions](debug_cable_regions.svg "image_tooltip")
+![Debug Cable Regions](doc/debug_cable_regions.svg)
 
 The ADC can be used to detect debug cable connection / disconnection in the non-overlapping regions.
 As an example use case of the two channel filters they can be used for detection of a USB-C debug accessory.
diff --git a/hw/ip/aes/README.md b/hw/ip/aes/README.md
index f659bfe..84e53c2 100644
--- a/hw/ip/aes/README.md
+++ b/hw/ip/aes/README.md
@@ -22,13 +22,13 @@
   - Output Feedback (OFB) mode, and
   - Counter (CTR) mode.
 - Support for AES-192 can be removed to save area, and is enabled/disabled using a compile-time Verilog parameter
-- First-order masking of the cipher core using domain-oriented masking (DOM) to aggravate side-channel analysis (SCA), can optionally be disabled using compile-time Verilog parameters (for more details see [Security Hardening below]({{< relref "#side-channel-analysis" >}}))
+- First-order masking of the cipher core using domain-oriented masking (DOM) to aggravate side-channel analysis (SCA), can optionally be disabled using compile-time Verilog parameters (for more details see [Security Hardening below](#side-channel-analysis))
 - Latency per 16 byte data block of 12/14/16 clock cycles (unmasked implementation) and 56/66/72 clock cycles (DOM) in AES-128/192/256 mode
 - Automatic as well as software-initiated reseeding of internal pseudo-random number generators (PRNGs) with configurable reseeding rate resulting in max entropy consumption rates ranging from 286 Mbit/s to 0.035 Mbit/s (at 100 MHz).
-- Countermeasures for aggravating fault injection (FI) on the control path (for more details see [Security Hardening below]({{< relref "#fault-injection" >}}))
+- Countermeasures for aggravating fault injection (FI) on the control path (for more details see [Security Hardening below](#fault-injection))
 - Register-based data and control interface
 - System key-manager interface for optional key sideload to not expose key material to the processor and other hosts attached to the system bus interconnect.
-- On-the-fly round-key generation in parallel to the actual encryption/decryption from a single initial 128/192/256-bit key provided through the register interface (for more details see [Theory of Operations below]({{< relref "#theory-of-operations" >}}))
+- On-the-fly round-key generation in parallel to the actual encryption/decryption from a single initial 128/192/256-bit key provided through the register interface (for more details see [Theory of Operations below](#theory-of-operations))
 
 This AES unit targets medium performance (16 parallel S-Boxes, \~1 cycle per round for the unmasked implementation, \~5 cycles per round for the DOM implementation).
 High-speed, single-cycle operation for high-bandwidth data streaming is not required.
@@ -102,7 +102,7 @@
 
 Therefore, the AES unit uses an iterative cipher core architecture with a 128-bit wide data path as shown in the figure below.
 Note that for the sake of simplicity, the figure shows the unmasked implementation.
-For details on the masked implementation of the cipher core refer to [Security Hardening below]({{< relref "#security-hardening" >}})).
+For details on the masked implementation of the cipher core, refer to [Security Hardening below](#security-hardening).
 Using an iterative architecture allows for a smaller circuit area at the cost of throughput.
 Employing a 128-bit wide data path allows to achieve the latency requirements of 12/14/16 clock cycles per 16B data block in AES-128/192/256 mode in the unmasked implementation, respectively.
 
@@ -252,7 +252,7 @@
 
 The DOM S-Box has a latency of 5 clock cycles.
 All other implementations are fully combinational (one S-Box evaluation every clock cycle).
-See also [Security Hardening below.]({{< relref "#1st-order-masking-of-the-cipher-core" >}})
+See also [Security Hardening below.](#1st-order-masking-of-the-cipher-core)
 
 ### ShiftRows
 
@@ -330,7 +330,7 @@
 Any write operations of the processor to the Initial Key registers {{< regref "KEY_SHARE0_0" >}} - {{< regref "KEY_SHARE1_7" >}} are then ignored.
 In normal/automatic mode, the AES unit only starts encryption/decryption if the sideload key is marked as valid.
 To update the sideload key, the processor has to 1) wait for the AES unit to become idle, 2) wait for the key manager to update the sideload key and assert the valid signal, and 3) write to the {{< regref "CTRL_SHADOWED" >}} register to start a new message.
-After using a sideload key, the processor has to trigger the clearing of all key registers inside the AES unit (see [De-Initialization]({{< relref "#de-initialization" >}}) below).
+After using a sideload key, the processor has to trigger the clearing of all key registers inside the AES unit (see [De-Initialization](#de-initialization) below).
 
 
 # Security Hardening
diff --git a/hw/ip/edn/README.md b/hw/ip/edn/README.md
index f4e627b..bbae574 100644
--- a/hw/ip/edn/README.md
+++ b/hw/ip/edn/README.md
@@ -210,7 +210,7 @@
 Once these commands have completed, a status bit will be set.
 At this point, firmware can later come and reconfigure the EDN block for a different mode of operation.
 
-The recommended write sequence for the entire entropy system is one configuration write to ENTROPY_SRC, then CSRNG, and finally to EDN (also see [Module enable and disable]({{<relref "#enable-disable">}})).
+The recommended write sequence for the entire entropy system is one configuration write to ENTROPY_SRC, then CSRNG, and finally to EDN (also see [Module enable and disable](#enable-disable)).
 
 ### Interrupts
 
diff --git a/hw/ip/flash_ctrl/README.md b/hw/ip/flash_ctrl/README.md
index 64245a5..f780e8c 100644
--- a/hw/ip/flash_ctrl/README.md
+++ b/hw/ip/flash_ctrl/README.md
@@ -44,7 +44,7 @@
 *  Flash memory protection at page boundaries.
 *  Life cycle RMA entry.
 *  Key manager secret seeds that are inaccessible to software.
-*  Support vendor flash module [erase suspend]({{< relref "#erase-suspend" >}}).
+*  Support vendor flash module [erase suspend](#erase-suspend).
 *  Provisioning of flash specific attributes:
    * High endurance.
 *  Idle indication to external power managers.
@@ -109,7 +109,7 @@
 *  System hosts (processor and other entities) can only directly read the data partition, they do not have any kind of access to information partitions.
    * System hosts are also not subject to memory protection rules, as those apply to the flash protocol controller only.
 
-For default assumptions of the design, see the [default configuration]({{< relref "#flash-default-configuration" >}}).
+For default assumptions of the design, see the [default configuration](#flash-default-configuration).
 
 #### Addresses Map
 
@@ -140,10 +140,10 @@
 For example, the page 0 address of any kind of partition is always the same.
 
 To distinguish which partition is accessed, use the configuration in {{< regref "CONTROL.PARTITION_SEL" >}} and {{< regref "CONTROL.INFO_SEL" >}}
-Note however, the system host is only able to access the [data partitions]({{< relref "#host-and-protocol-controller-handling" >}}).
+Note however, the system host is only able to access the [data partitions](#host-and-protocol-controller-handling).
 
 ##### Default Address Map
-Based on the [default configuration]({{< relref "#flash-default-configuration" >}}), the following would be the default address map for each partition / page.
+Based on the [default configuration](#flash-default-configuration), the following would be the default address map for each partition / page.
 
 Location        | Address      |
 ----------------|------------- |
@@ -223,7 +223,7 @@
 The flash controller then initiates RMA entry process and notifies the life cycle controller when it is complete.
 The RMA entry process wipes out all data, creator, owner and isolated partitions.
 
-After RMA completes, the flash controller is [disabled]({{< relref "#flash-access-disable" >}}).
+After RMA completes, the flash controller is [disabled](#flash-access-disable).
 When disabled the flash protocol controller registers can still be accessed.
 However, flash memory access are not allowed, either directly by the host or indirectly through flash protocol controller initiated transactions.
 It is expected that after an RMA transition, the entire system will be rebooted.
@@ -234,7 +234,7 @@
 The flash protocol controller is initialized through {{< regref "INIT" >}}.
 When initialization is invoked, the flash controller requests the address and data scrambling keys from an external entity, [otp_ctrl](../otp_ctrl/README.md#interface-to-flash-scrambler) in this case.
 
-After the scrambling keys are requested, the flash protocol controller reads the root seeds out of the [secret partitions]({{< relref "#secret-information-partitions" >}}) and sends them to the key manager.
+After the scrambling keys are requested, the flash protocol controller reads the root seeds out of the [secret partitions](#secret-information-partitions) and sends them to the key manager.
 Once the above steps are completed, the read buffers in the flash physical controller are enabled for operation.
 
 #### RMA Entry
@@ -349,7 +349,7 @@
 
 Custom faults represent custom errors, primarily errors generated by the life cycle management interface, the flash storage integrity interface and the flash macro itself.
 
-See ({{< relref "#flash-escalation" >}}) for further differentiation between standard and custom faults.
+See the [flash escalation](#flash-escalation) section for further differentiation between standard and custom faults.
 
 #### Transmission Integrity Faults
 
@@ -414,17 +414,17 @@
 Local escalation is triggered by a standard faults of flash, seen in {{< regref "STD_FAULT_STATUS" >}}.
 Local escalation is not configurable and automatically triggers when this subset of faults are seen.
 
-For the escalation behavior, see [flash access disable]({{< relref "#flash-access-disable" >}}) .
+For the escalation behavior, see [flash access disable](#flash-access-disable).
 
 #### Flash Access Disable
 
 Flash access can be disabled through global escalation trigger, local escalation trigger, rma process completion or software command.
-The escalation triggers are described [here]({{< relref "#flash-escalation" >}}).
+The escalation triggers are described [here](#flash-escalation).
 The software command to disable flash can be found in {{< regref "DIS" >}}.
-The description for rma entry can be found [here]({{< relref "#rma-entry-handling" >}}).
+The description for rma entry can be found [here](#rma-entry-handling).
 
 When disabled, the flash has a two layered response:
-- The flash protocol controller [memory protection]({{< relref "#memory-protection" >}}) errors back all controller initiated operations.
+- The flash protocol controller [memory protection](#memory-protection) errors back all controller initiated operations.
 - The host-facing tlul adapter errors back all host initiated operations.
 - The flash physical controller completes any existing stateful operations (program or erase) and drops all future flash transactions.
 - The flash protocol controller arbiter completes any existing software issued commands and enters a disabled state where no new transactions can be issued.
diff --git a/hw/ip/hmac/dv/README.md b/hw/ip/hmac/dv/README.md
index e14244d..85182cd 100644
--- a/hw/ip/hmac/dv/README.md
+++ b/hw/ip/hmac/dv/README.md
@@ -81,7 +81,7 @@
 
 ##### Standard test vectors
 Besides constrained random test sequences, hmac test sequences also includes [standard
-SHA256 and HMAC test vectors]({{< relref "hw/dv/sv/test_vectors/doc.md" >}}) from
+SHA256 and HMAC test vectors](../../../dv/sv/test_vectors/README.md) from
 [NIST](https://csrc.nist.gov/Projects/Cryptographic-Algorithm-Validation-Program/Secure-Hashing#shavs)
 and [IETF](https://tools.ietf.org/html/rfc4868).
 The standard test vectors provide messages, keys (for HMAC only), and expected
diff --git a/hw/ip/i2c/README.md b/hw/ip/i2c/README.md
index 754fdc1..b8abfcd 100644
--- a/hw/ip/i2c/README.md
+++ b/hw/ip/i2c/README.md
@@ -189,7 +189,7 @@
 
 If the transaction is a read operation (R/W bit = 1), the target pulls bytes out of TX FIFO and transmits them to the bus until the host signals the end of the transfer by sending a NACK signal.
 If TX FIFO holds no data, or if the ACQ FIFO contains more than 1 entry, the target will hold SCL low to stretch the clock and give software time to write data bytes into TX FIFO or handle the available command.
-See ({{< relref "#stretching-during-read" >}}) for more details.
+See the section on [stretching during read](#stretching-during-read) for more details.
 TX FIFO input corresponds to {{< regref TXDATA >}}.
 Typically, a NACK signal is followed by a STOP or repeated START signal and the IP will raise an exception if the host sends a STOP signal after an ACK.
 An ACK/NACK signal is inserted into the ACQ FIFO as the first bit (bit 0), in the same entry with a STOP or repeated START signal.
@@ -241,7 +241,7 @@
 These values can be directly computed using DIFs given the desired speed standard, the desired operating frequency, and the actual line capacitance.
 These timing parameters are then fed directly to the I2C state machine to control the bus timing.
 
-A detailed description of the algorithm for determining these parameters--as well as a couple of concrete examples--are given in the [Programmers Guide section of this document.]({{<relref "#timing-parameter-tuning-algorithm">}})
+A detailed description of the algorithm for determining these parameters, as well as a couple of concrete examples, is given in the [Programmers Guide section of this document](#timing-parameter-tuning-algorithm).
 
 ### Timeout Control
 A malfunctioning (or otherwise very slow) target device can hold SCL low indefinitely, stalling the bus.
diff --git a/hw/ip/lc_ctrl/README.md b/hw/ip/lc_ctrl/README.md
index d09aaba..4062914 100644
--- a/hw/ip/lc_ctrl/README.md
+++ b/hw/ip/lc_ctrl/README.md
@@ -71,7 +71,7 @@
 ## Normal Operation
 
 Once the life cycle system is powered up and stable, its outputs remain static unless specifically requested to change or affected by security escalation.
-The life cycle controller can accept [change requests]({{< relref "#life-cycle-requests" >}}) from software as well as external entities.
+The life cycle controller can accept [change requests](#life-cycle-requests) from software as well as external entities.
 
 ### Unconditional Transitions
 
@@ -128,7 +128,7 @@
 The two escalation paths are redundant, and both trigger the same mechanism.
 Upon assertion of any of the two escalation actions, the life cycle state is **TEMPORARILY** altered.
 I.e. when this escalation path is triggered, the life cycle state is transitioned into "ESCALATE", which behaves like a virtual "SCRAP" state (i.e. this state is not programmed into OTP).
-This causes [all decoded outputs]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}) to be disabled until the next power cycle.
+This causes [all decoded outputs](#life-cycle-decoded-outputs-and-controls) to be disabled until the next power cycle.
 In addition to that, the life cycle controller asserts the ESCALATE_EN life cycle signal which is distributed to all IPs in the design that expose an escalation action (like moving FSMs into terminal error states or clearing sensitive registers).
 
 Whether to escalate to the life cycle controller or not is a software decision, please see the alert handler for more details.
@@ -168,7 +168,7 @@
 - The TAP controller is unable to issue any kind of self test that would disrupt and scramble live logic which could lead to unpredictable behavior
 - The TAP controller or test function is unable to alter the non-volatile contents of flash or OTP
 
-See [TAP isolation]({{< relref "#tap-and-isolation" >}}) for more implementation details.
+See [TAP isolation](#tap-and-isolation) for more implementation details.
 
 #### NVM_DEBUG_EN
 
@@ -179,14 +179,14 @@
 Since these back-door functions may bypass memory protection, they could be used to read out provisioned secrets that are not meant to be visible to software or a debug host.
 
 Note that NVM_DEBUG_EN is disabled in the last test unlocked state (TEST_UNLOCKED7) such that the isolated flash partition can be be securely populated, without exposing its contents via the NVM backdoor interface.
-See also accessibility description of the [isolated flash partition]({{< relref "#iso_part_sw_rd_en-and-iso_part_sw_wr_en" >}}).
+See also accessibility description of the [isolated flash partition](#iso_part_sw_rd_en-and-iso_part_sw_wr_en).
 
 #### HW_DEBUG_EN
 
 HW_DEBUG_EN refers to the general ungating of both invasive (JTAG control of the processor, bidirectional analog test points) and non-invasive debug (debug bus observation, and register access error returns).
 
 This signal thus needs to be routed to all security-aware and debug capable peripherals.
-This signal is used to determine whether OpenTitan peripheral register interfaces should [silently error]({{< relref "doc/rm/register_tool/_index.md#error_responses" >}}).
+This signal is used to determine whether OpenTitan peripheral register interfaces should [silently error](../../../util/reggen/README.md#error_responses).
 If HW_DEBUG_EN is set to ON, normal errors should be returned.
 If HW_DEBUG_EN is set to OFF, errors should return silently.
 
@@ -248,7 +248,7 @@
 
 While the OWNER_SEED_SW_RW_EN is statically enabled in the states shown above, the CREATOR_SEED_SW_RW_EN is only enabled if the device has not yet been personalized (i.e., the OTP partition holding the root key has not been locked down yet).
 
-For more a list of the collateral in Flash and OTP and an explanation of how that collateral is affected by these signals, see the [OTP collateral]({{< relref "#otp-collateral" >}}) and [flash collateral]({{< relref "#flash-collateral" >}}) sections.
+For a list of the collateral in Flash and OTP and an explanation of how that collateral is affected by these signals, see the [OTP collateral](#otp-collateral) and [flash collateral](#flash-collateral) sections.
 
 #### SEED_HW_RD_EN
 
@@ -266,12 +266,12 @@
 ## OTP Collateral
 
 The following is a list of all life cycle related collateral stored in OTP.
-Most collateral also contain associated metadata to indicate when the collateral is restricted from further software access, see [accessibility summary]({{< relref "#otp-accessibility-summary-and-impact-of-provision_en" >}}) for more details.
+Most collateral also contains associated metadata to indicate when the collateral is restricted from further software access; see the [accessibility summary](#otp-accessibility-summary-and-impact-of-provision_en) for more details.
 Since not all collateral is consumed by the life cycle controller, the consuming agent is also shown.
 
 {{< snippet "lc_ctrl_otp_collateral.md" >}}
 
-The TOKENs and KEYS are considered secret data and are stored in [wrapped format]({{< relref "#conditional-transitions">}}).
+The TOKENs and KEYS are considered secret data and are stored in [wrapped format](#conditional-transitions).
 Before use, the secrets are unwrapped.
 
 The SECRET0_DIGEST and SECRET2_DIGEST are the digest values computed over the secret partitions in OTP holding the tokens and root keys.
@@ -392,22 +392,22 @@
 `otp_manuf_state_i`          | `input`          | `otp_manuf_state_t`                      | HW_CFG bits from OTP ({{< regref MANUF_STATE_0 >}}).
 `lc_otp_vendor_test_o`       | `output`         | `otp_ctrl_pkg::lc_otp_vendor_test_req_t` | Vendor-specific test bits to OTP ({{< regref OTP_VENDOR_TEST_CTRL >}}).
 `lc_otp_vendor_test_i`       | `input`          | `otp_ctrl_pkg::lc_otp_vendor_test_rsp_t` | Vendor-specific test bits to OTP ({{< regref OTP_VENDOR_TEST_STATUS >}}).
-`lc_dft_en_o`                | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_nvm_debug_en_o`          | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_hw_debug_en_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_cpu_en_o`                | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_creator_seed_sw_rw_en_o` | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_owner_seed_sw_rw_en_o`   | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_iso_part_sw_rd_en_o`     | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_iso_part_sw_wr_en_o`     | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_seed_hw_rd_en_o`         | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_keymgr_en_o`             | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_escalate_en_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_check_byp_en_o`          | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_clk_byp_req_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_clk_byp_ack_i`           | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_flash_rma_req_o`         | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
-`lc_flash_rma_ack_i`         | `output`         | `lc_tx_t`                                | [Multibit control signal]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}).
+`lc_dft_en_o`                | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_nvm_debug_en_o`          | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_hw_debug_en_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_cpu_en_o`                | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_creator_seed_sw_rw_en_o` | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_owner_seed_sw_rw_en_o`   | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_iso_part_sw_rd_en_o`     | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_iso_part_sw_wr_en_o`     | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_seed_hw_rd_en_o`         | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_keymgr_en_o`             | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_escalate_en_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_check_byp_en_o`          | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_clk_byp_req_o`           | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_clk_byp_ack_i`           | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_flash_rma_req_o`         | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
+`lc_flash_rma_ack_i`         | `output`         | `lc_tx_t`                                | [Multibit control signal](#life-cycle-decoded-outputs-and-controls).
 
 #### Power Manager Interface
 
@@ -435,7 +435,7 @@
 
 #### Control Signal Propagation
 
-For better security, all the [life cycle control signals]({{< relref "#life-cycle-decoded-outputs-and-controls" >}}) are broadcast in multi-bit form.
+For better security, all the [life cycle control signals](#life-cycle-decoded-outputs-and-controls) are broadcast in multi-bit form.
 The active ON state for every signal is broadcast as `4'b1010`, while the inactive OFF state is encoded as `4'b0101`.
 For all life cycle signals except the escalation signal ESCALATE_EN, all values different from ON must be interpreted as OFF in RTL.
 In case of ESCALATE_EN, all values different from OFF must be interpreted as ON in RTL.
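
As an illustration of that decode rule, here is a small Python sketch; the behavior described here lives in RTL, not in host software:

```python
LC_TX_ON = 0b1010    # active encoding of a multi-bit life cycle signal
LC_TX_OFF = 0b0101   # inactive encoding

def lc_signal_on(value: int) -> bool:
    """Standard life cycle signals: anything other than ON counts as OFF."""
    return value == LC_TX_ON

def escalate_en_on(value: int) -> bool:
    """ESCALATE_EN: anything other than OFF counts as ON (fail-secure)."""
    return value != LC_TX_OFF
```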
@@ -475,7 +475,7 @@
 ![LC Controller Block Diagram](./doc/lc_ctrl_blockdiag.svg)
 
 The main FSM implements a linear state sequence that always moves in one direction for increased glitch resistance.
-I.e., it never returns to the initialization and broadcast states as described in the [life cycle state controller section]({{< relref "#main-fsm" >}}).
+I.e., it never returns to the initialization and broadcast states as described in the [life cycle state controller section](#main-fsm).
 
 The main FSM state is redundantly encoded, and augmented with the life cycle state.
 That augmented state vector is consumed by three combinational submodules:
@@ -486,10 +486,10 @@
 Note that the two additional life cycle control signals `lc_flash_rma_req_o` and `lc_clk_byp_req_o` are output by the main FSM, since they cannot be derived from the life cycle state alone and are reactive in nature in the sense that there is a corresponding acknowledgement signal.
 
 The life cycle controller contains a JTAG TAP that can be used to access the same CSR space that is accessible via TL-UL.
-In order to write to the CSRs, a [hardware mutex]({{< relref "#hardware-mutex" >}}) has to be claimed.
+In order to write to the CSRs, a [hardware mutex](#hardware-mutex) has to be claimed.
 
 The life cycle controller also contains two escalation receivers that are connected to escalation severity 1 and 2 of the alert handler module.
-The actions that are triggered by these escalation receivers are explained in the [escalation handling section]({{< relref "#escalation-handling" >}}) below.
+The actions that are triggered by these escalation receivers are explained in the [escalation handling section](#escalation-handling) below.
 
 ### System Integration and TAP Isolation
 
@@ -597,8 +597,8 @@
 3. {{< regref "TRANSITION_TOKEN_*" >}}: Any necessary token for conditional transitions.
 4. {{< regref "TRANSITION_CMD" >}}: Start the life cycle transition.
 5. {{< regref "STATUS" >}}: Indicates whether the requested transition succeeded.
-6. {{< regref OTP_VENDOR_TEST_CTRL >}}: See [Macro-specific test control bits]({{< relref "#vendor-specific-test-control-register" >}}).
-7. {{< regref OTP_VENDOR_TEST_STATUS >}}: See [Macro-specific test control bits]({{< relref "#vendor-specific-test-control-register" >}}).
+6. {{< regref OTP_VENDOR_TEST_CTRL >}}: See [Macro-specific test control bits](#vendor-specific-test-control-register).
+7. {{< regref OTP_VENDOR_TEST_STATUS >}}: See [Macro-specific test control bits](#vendor-specific-test-control-register).
 
 If the transition fails, the cause will be reported in this register as well.
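
A rough host-side sketch of steps 3 to 5, assuming the hardware mutex mentioned above has already been claimed; the register offsets and the CSR access helpers below are made up for illustration, and the real values come from the register table:

```python
TRANSITION_TOKEN_0 = 0x24   # assumed offset, for illustration only
TRANSITION_CMD     = 0x44   # assumed offset
STATUS             = 0x04   # assumed offset

csrs = {}  # stand-in for the memory-mapped CSR space
def write32(off, val): csrs[off] = val & 0xFFFFFFFF
def read32(off): return csrs.get(off, 0)

def request_transition(token_words, max_polls=1000):
    # 3. Provide any token required for a conditional transition.
    for i, word in enumerate(token_words):
        write32(TRANSITION_TOKEN_0 + 4 * i, word)
    # 4. Start the life cycle transition.
    write32(TRANSITION_CMD, 1)
    # 5. Poll STATUS until it reports success or an error cause.
    for _ in range(max_polls):
        status = read32(STATUS)
        if status != 0:
            return status
    return None
```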
 
@@ -622,7 +622,7 @@
 These registers are only active during RAW, TEST_* and RMA life cycle states.
 In all other life cycle states, the status register reads back all-zero, and the control register value will be tied to 0 before forwarding it to the OTP macro.
 
-Similarly to the [Life Cycle Request Interface]({{< relref "#life-cycle-request-interface" >}}), the hardware mutex must be claimed in order to access both of these registers.
+Similarly to the [Life Cycle Request Interface](#life-cycle-request-interface), the hardware mutex must be claimed in order to access both of these registers.
 Note that these registers read back all-zero if the mutex is not claimed.
 
 ### TAP Construction and Isolation
@@ -632,7 +632,7 @@
 The life cycle TAP controller is functionally very similar to the [RISC-V debug module](https://github.com/lowRISC/opentitan/blob/master/hw/ip/rv_dm/rtl/rv_dm.sv) for the Ibex processor and reuses the same debug transport module (DTM) and the associated debug module interface (DMI).
 The DTM and DMI are specified as part of the [RISC-V external debug specification, v0.13](https://github.com/riscv/riscv-debug-spec/blob/release/riscv-debug-release.pdf) and essentially provide a simple mechanism to read and write to a register space.
 In the case of the life cycle TAP controller this register space is essentially the life cycle CSR space.
-Hence, the [register table]({{< relref "#register-table" >}}) is identical for both the SW view and the view through the DMI, with the only difference that the byte offsets have to be converted to word offsets for the DMI.
+Hence, the [register table](#register-table) is identical for both the SW view and the view through the DMI, with the only difference that the byte offsets have to be converted to word offsets for the DMI.
 
 The RISC-V external debug specification defines the two custom JTAG registers 0x10 (DTM control/status) and 0x11 (DMI).
 The former provides status info such as idle state, number of address bits and RISC-V specification version plus reset control.
@@ -649,7 +649,7 @@
 
 # Programmer's Guide
 
-The register layout and offsets shown in the [register table]{{< relref "#register-table" >}} below are identical for both the CSR and JTAG TAP interfaces.
+The register layout and offsets shown in the [register table](#register-table) below are identical for both the CSR and JTAG TAP interfaces.
 Hence the following programming sequence applies to both SW running on the device and SW running on the test appliance that accesses life cycle through the TAP.
 
 1. In order to perform a life cycle transition, SW should first check whether the life cycle controller has successfully initialized and is ready to accept a transition command by making sure that the {{< regref "STATUS.READY" >}} bit is set to 1, and that all other status and error bits in {{< regref "STATUS" >}} are set to 0.
diff --git a/hw/ip/otbn/README.md b/hw/ip/otbn/README.md
index 43ca3a9..5a0eed8 100644
--- a/hw/ip/otbn/README.md
+++ b/hw/ip/otbn/README.md
@@ -452,7 +452,7 @@
 OTBN is a security co-processor.
 It contains various security features and is hardened against side-channel analysis and fault injection attacks.
 The following sections describe the high-level security features of OTBN.
-Refer to the [Design Details]({{< relref "#design-details" >}}) section for a more in-depth description.
+Refer to the [Design Details](#design-details) section for a more in-depth description.
 
 ## Data Integrity Protection
 
@@ -462,16 +462,16 @@
 Whenever possible, data transmitted or stored within OTBN is protected with an integrity protection code which guarantees the detection of at least three modified bits per 32 bit word.
 Additionally, instructions and data stored in the instruction and data memory, respectively, are scrambled with a lightweight, non-cryptographically-secure cipher.
 
-Refer to the [Data Integrity Protection]({{<relref "#design-details-data-integrity-protection">}}) section for details of how the data integrity protections are implemented.
+Refer to the [Data Integrity Protection](#design-details-data-integrity-protection) section for details of how the data integrity protections are implemented.
 
 ## Secure Wipe
 
 OTBN provides a mechanism to securely wipe all state it stores, including the instruction memory.
 
 The full secure wipe mechanism is split into three parts:
-- [Data memory secure wipe]({{<relref "#design-details-secure-wipe-dmem">}})
-- [Instruction memory secure wipe]({{<relref "#design-details-secure-wipe-imem">}})
-- [Internal state secure wipe]({{<relref "#design-details-secure-wipe-internal">}})
+- [Data memory secure wipe](#design-details-secure-wipe-dmem)
+- [Instruction memory secure wipe](#design-details-secure-wipe-imem)
+- [Internal state secure wipe](#design-details-secure-wipe-internal)
 
 A secure wipe is performed automatically in certain situations, or can be requested manually by the host software.
 The full secure wipe is automatically initiated as a local reaction to a fatal error.
@@ -480,7 +480,7 @@
 A secure wipe of only the internal state is performed after reset, whenever an OTBN operation is complete, and after a recoverable error.
 Finally, host software can manually trigger the data memory and instruction memory secure wipe operations by issuing an appropriate [command](#design-details-commands).
 
-Refer to the [Secure Wipe]({{<relref "#design-details-secure-wipe">}}) section for implementation details.
+Refer to the [Secure Wipe](#design-details-secure-wipe) section for implementation details.
 
 ## Instruction Counter
 
@@ -504,7 +504,7 @@
 When a path is blanked it is forced to 0 (by ANDing the path with a blanking signal) preventing sensitive data bits producing a power signature via that path where that path isn't needed for the current instruction.
 
 Blanking controls all come directly from flops to prevent glitches in decode logic reducing the effectiveness of the blanking.
-These control signals are determined in the [prefetch stage]({{<relref "#instruction-prefetch">}}) via pre-decode logic.
+These control signals are determined in the [prefetch stage](#instruction-prefetch) via pre-decode logic.
 Full decoding is still performed in the execution stage with the full decode results checked against the pre-decode blanking control.
 If the full decode disagrees with the pre-decode OTBN raises a `BAD_INTERNAL_STATE` fatal error.
 
@@ -582,15 +582,15 @@
 This is to allow OTBN applications to store sensitive information in the other 1kiB, making it harder for that information to leak back to Ibex.
 
 Each memory write through the register interface updates a checksum.
-See the [Memory Load Integrity]({{< relref "#mem-load-integrity" >}}) section for more details.
+See the [Memory Load Integrity](#mem-load-integrity) section for more details.
 
 ### Instruction Prefetch {#instruction-prefetch}
 
-OTBN employs an instruction prefetch stage to enable pre-decoding of instructions to enable the [blanking SCA hardening measure]({{<relref "#blanking">}}).
+OTBN employs an instruction prefetch stage to pre-decode instructions, which enables the [blanking SCA hardening measure](#blanking).
 Its operation is entirely transparent to software.
 It does not speculate and will only prefetch where the next instruction address can be known.
 This results in a stall cycle for all conditional branches and jumps as the result is neither predicted nor known ahead of time.
-Instruction bits held in the prefetch buffer are unscrambled but use the integrity protection described in [Data Integrity Protection]({{<relref "#design-details-data-integrity-protection">}}).
+Instruction bits held in the prefetch buffer are unscrambled but use the integrity protection described in [Data Integrity Protection](#design-details-data-integrity-protection).
 
 ### Random Numbers
 
@@ -709,7 +709,7 @@
    - The {{< regref "ERR_BITS" >}} register is set to a non-zero value that describes the error.
    - The current operation is marked as complete by setting {{< regref "INTR_STATE.done" >}}.
    - The {{< regref "STATUS" >}} register is set to `IDLE`.
-2. A [recoverable alert]({{< relref "#alerts" >}}) is raised.
+2. A [recoverable alert](#alerts) is raised.
 
 The host software can start another operation on OTBN after a recoverable error was detected.
 
@@ -727,14 +727,14 @@
    - The {{< regref "ERR_BITS" >}} register is set to a non-zero value that describes the error.
    - The current operation is marked as complete by setting {{< regref "INTR_STATE.done" >}}.
 3. The {{< regref "STATUS" >}} register is set to `LOCKED`.
-4. A [fatal alert]({{< relref "#alerts" >}}) is raised.
+4. A [fatal alert](#alerts) is raised.
 
 Note that OTBN can detect some errors even when it isn't running.
 One example of this is an error caused by an integrity error when reading or writing OTBN's memories over the bus.
 In this case, the {{< regref "ERR_BITS" >}} register will not change.
 This avoids race conditions with the host processor's error handling software.
 However, every error that OTBN detects when it isn't running is fatal.
-This means that the cause will be reflected in {{< regref "FATAL_ALERT_CAUSE" >}}, as described below in [Alerts]({{< relref "#alerts" >}}).
+This means that the cause will be reflected in {{< regref "FATAL_ALERT_CAUSE" >}}, as described below in [Alerts](#alerts).
 This way, no alert is generated without setting an error code somewhere.
 
 ### List of Errors {#design-details-list-of-errors}
@@ -889,7 +889,7 @@
 OTBN uses the same integrity protection code everywhere to provide overarching data protection without regular re-encoding.
 The code is applied to 32b data words, and produces 39b of encoded data.
 
-The code used is an (39,32) Hsiao "single error correction, double error detection" (SECDED) error correction code (ECC) [[CHEN08]({{< relref "#ref-chen08">}})].
+The code used is a (39,32) Hsiao "single error correction, double error detection" (SECDED) error correction code (ECC) [[CHEN08](#ref-chen08)].
 It has a minimum Hamming distance of four, resulting in the ability to detect at least three errors in a 32 bit word.
 The code is used for error detection only; no error correction is performed.
 
@@ -904,20 +904,20 @@
 Scrambling is used to obfuscate the memory contents and to diffuse the data.
 Obfuscation makes passive probing more difficult, while diffusion makes active fault injection attacks more difficult.
 
-The scrambling mechanism is described in detail in the [section "Scrambling Primitive" of the SRAM Controller Technical Specification](/hw/ip/sram_ctrl/doc/#scrambling-primitive).
+The scrambling mechanism is described in detail in the [section "Scrambling Primitive" of the SRAM Controller Technical Specification](../sram_ctrl/README.md#scrambling-primitive).
 
 When OTBN comes out of reset, its memories have default scrambling keys.
-The host processor can request new keys for each memory by issuing a [secure wipe of DMEM]({{<relref "#design-details-secure-wipe-dmem">}}) and a [secure wipe of IMEM]({{<relref "#design-details-secure-wipe-imem">}}).
+The host processor can request new keys for each memory by issuing a [secure wipe of DMEM](#design-details-secure-wipe-dmem) and a [secure wipe of IMEM](#design-details-secure-wipe-imem).
 
 #### Actions on Integrity Errors
 
 A fatal error is raised whenever a data integrity violation is detected, which results in an immediate stop of all processing and the issuing of a fatal alert.
-The section [Error Handling and Reporting]({{< relref "#design-details-error-handling-and-reporting" >}}) describes the error handling in more detail.
+The section [Error Handling and Reporting](#design-details-error-handling-and-reporting) describes the error handling in more detail.
 
 #### Register File Integrity Protection
 
 OTBN contains two register files: the 32b GPRs and the 256b WDRs.
-The data stored in both register files is protected with the [Integrity Protection Code]({{< relref "#design-details-integrity-protection-code">}}).
+The data stored in both register files is protected with the [Integrity Protection Code](#design-details-integrity-protection-code).
 Neither the register file contents nor register addresses are scrambled.
 
 The GPRs `x2` to `x31` store a 32b data word together with the Integrity Protection Code, resulting in 39b of stored data.
@@ -948,14 +948,14 @@
 OTBN's data memory is 256b wide, but allows for 32b word accesses.
 To facilitate such accesses, all integrity protection in the data memory is done on a 32b word granularity.
 
-All data entering or leaving the data memory block is protected with the [Integrity Protection Code]({{< relref "#design-details-integrity-protection-code">}});
+All data entering or leaving the data memory block is protected with the [Integrity Protection Code](#design-details-integrity-protection-code);
 this code is not re-computed within the memory block.
 
-Before being stored in SRAM, the data word with the attached Integrity Protection Code, as well as the address are scrambled according to the [memory scrambling algorithm]({{< relref "#design-details-memory-scrambling">}}).
+Before being stored in SRAM, the data word with the attached Integrity Protection Code, as well as the address are scrambled according to the [memory scrambling algorithm](#design-details-memory-scrambling).
 The scrambling is reversed on a read.
 
 The ephemeral memory scrambling key and the nonce are provided by the [OTP block](../otp_ctrl/README.md).
-They are set once when OTBN block is reset, and changed whenever a [secure wipe]({{<relref "#design-details-secure-wipe-dmem">}}) of the data memory is performed.
+They are set once when the OTBN block is reset, and changed whenever a [secure wipe](#design-details-secure-wipe-dmem) of the data memory is performed.
 
 
 The Integrity Protection Code is checked on every memory read, even though the code remains attached to the data.
@@ -964,14 +964,14 @@
 
 #### Instruction Memory (IMEM) Integrity Protection
 
-All data entering or leaving the instruction memory block is protected with the [Integrity Protection Code]({{< relref "#design-details-integrity-protection-code">}});
+All data entering or leaving the instruction memory block is protected with the [Integrity Protection Code](#design-details-integrity-protection-code);
 this code is not re-computed within the memory block.
 
-Before being stored in SRAM, the instruction word with the attached Integrity Protection Code, as well as the address are scrambled according to the [memory scrambling algorithm]({{< relref "#design-details-memory-scrambling">}}).
+Before being stored in SRAM, the instruction word with the attached Integrity Protection Code, as well as the address are scrambled according to the [memory scrambling algorithm](#design-details-memory-scrambling).
 The scrambling is reversed on a read.
 
 The ephemeral memory scrambling key and the nonce are provided by the [OTP block](../otp_ctrl/README.md).
-They are set once when OTBN block is reset, and changed whenever a [secure wipe]({{<relref "#design-details-secure-wipe-imem">}}) of the instruction memory is performed.
+They are set once when the OTBN block is reset, and changed whenever a [secure wipe](#design-details-secure-wipe-imem) of the instruction memory is performed.
 
 The Integrity Protection Code is checked on every memory read, even though the code remains attached to the data.
 A further check must be performed when the data is consumed.
@@ -988,7 +988,7 @@
 However, in this case the attacker would be equally able to control responses from OTBN, so any such check could be subverted.
 
 The CRC used is the 32-bit CRC-32-IEEE checksum.
-This standard choice of generating polynomial makes it compatible with other tooling and libraries, such as the [crc32 function](https://docs.python.org/3/library/binascii.html#binascii.crc32) in the python 'binascii' module and the crc instructions in the RISC-V bitmanip specification [[SYMBIOTIC21]]({{<relref "#ref-symbiotic21" >}}).
+This standard choice of generating polynomial makes it compatible with other tooling and libraries, such as the [crc32 function](https://docs.python.org/3/library/binascii.html#binascii.crc32) in the python 'binascii' module and the crc instructions in the RISC-V bitmanip specification [[SYMBIOTIC21]](#ref-symbiotic21).
 The stream over which the checksum is computed is the stream of writes that have been seen since the last write to {{< regref "LOAD_CHECKSUM" >}}.
 Each write is treated as a 48b value, `{imem, idx, wdata}`.
 Here, `imem` is a single bit flag which is one for writes to IMEM and zero for writes to DMEM.
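
As a sanity check, host software can predict the expected checksum with a few lines of Python; the 48-bit record packing and byte order below are assumptions for illustration, and the register description defines the exact stream format:

```python
import binascii

def expected_load_checksum(writes):
    """Predict the CRC over a stream of (imem, idx, wdata) write records.

    imem is True for IMEM writes and False for DMEM writes, idx is the word
    index, wdata is the 32-bit data word.  Packing and byte order are assumed.
    """
    stream = bytearray()
    for imem, idx, wdata in writes:
        record = ((1 if imem else 0) << 47) | ((idx & 0x7FFF) << 32) | (wdata & 0xFFFFFFFF)
        stream += record.to_bytes(6, "little")  # assumed byte order
    return binascii.crc32(stream)

# Example: two DMEM writes followed by one IMEM write.
print(hex(expected_load_checksum([(False, 0, 0xDEADBEEF),
                                  (False, 1, 0x00C0FFEE),
                                  (True, 0, 0x00000013)])))
```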
@@ -1010,9 +1010,9 @@
 Applications running on OTBN may store sensitive data in the internal registers or the memory.
 In order to prevent an untrusted application from reading any leftover data, OTBN provides the secure wipe operation.
 This operation can be applied to:
-- [Data memory]({{<relref "#design-details-secure-wipe-dmem">}})
-- [Instruction memory]({{<relref "#design-details-secure-wipe-imem">}})
-- [Internal state]({{<relref "#design-details-secure-wipe-internal">}})
+- [Data memory](#design-details-secure-wipe-dmem)
+- [Instruction memory](#design-details-secure-wipe-imem)
+- [Internal state](#design-details-secure-wipe-internal)
 
 The three forms of secure wipe can be triggered in different ways.
 
@@ -1077,7 +1077,7 @@
 
 OTBN is a specialized coprocessor which is used from the host CPU.
 This section describes how to interact with OTBN from the host CPU to execute an existing OTBN application.
-The section [Writing OTBN applications]({{< ref "#writing-otbn-applications" >}}) describes how to write such applications.
+The section [Writing OTBN applications](#writing-otbn-applications) describes how to write such applications.
 
 ## High-level operation sequence
 
@@ -1103,7 +1103,7 @@
   When {{< regref "STATUS" >}} has become `LOCKED` a fatal error has occurred and OTBN must be reset to perform further operations.
 * Alternatively, software can listen for the `done` interrupt to determine if the operation has completed.
   The standard sequence of working with interrupts has to be followed, i.e. the interrupt has to be enabled, an interrupt service routine has to be registered, etc.
-  The [DIF]({{<relref "#dif" >}}) contains helpers to do so conveniently.
+  The [DIF](#dif) contains helpers to do so conveniently.
 
 Note: This operation sequence only covers functional aspects.
 Depending on the application additional steps might be necessary, such as deleting secrets from the memories.
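
A minimal sketch of the busy-wait variant; the STATUS offset and value encodings are assumptions for illustration, and the DIF provides equivalent (and interrupt-driven) helpers:

```python
STATUS_OFFSET = 0x1C   # assumed offset, for illustration only
STATUS_IDLE   = 0x00   # assumed encoding of STATUS == IDLE
STATUS_LOCKED = 0xFF   # assumed encoding of STATUS == LOCKED

csrs = {STATUS_OFFSET: STATUS_IDLE}  # stand-in for the memory-mapped CSR space
def read32(off): return csrs.get(off, 0)

def wait_for_otbn(max_polls=1_000_000):
    for _ in range(max_polls):
        status = read32(STATUS_OFFSET)
        if status == STATUS_IDLE:
            return "done"     # operation finished, results can be read back
        if status == STATUS_LOCKED:
            return "locked"   # fatal error, OTBN must be reset
    return "timeout"
```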
@@ -1114,7 +1114,7 @@
 
 ## Driver {#driver}
 
-A higher-level driver for the OTBN block is available at `sw/device/lib/runtime/otbn.h` ([API documentation](/sw/apis/lib_2runtime_2otbn_8h.html)).
+A higher-level driver for the OTBN block is available at `sw/device/lib/runtime/otbn.h` ([API documentation](../../../sw/apis/lib_2runtime_2otbn_8h.html)).
 
 Another driver for OTBN is part of the silicon creator code at `sw/device/silicon_creator/lib/drivers/otbn.h`.
 
@@ -1159,7 +1159,7 @@
 - The current operation is marked as complete by setting {{< regref "INTR_STATE.done" >}} and clearing {{< regref "STATUS" >}}.
 
 The first 2kiB of DMEM can be used to pass data back to the host processor, e.g. a "return value" or an "exit code".
-Refer to the section [Passing of data between the host CPU and OTBN]({{<relref "#writing-otbn-applications-datapassing" >}}) for more information.
+Refer to the section [Passing of data between the host CPU and OTBN](#writing-otbn-applications-datapassing) for more information.
 
 ## Using hardware loops
 
diff --git a/hw/ip/otbn/dv/README.md b/hw/ip/otbn/dv/README.md
index cda12e4..64c098c 100644
--- a/hw/ip/otbn/dv/README.md
+++ b/hw/ip/otbn/dv/README.md
@@ -40,7 +40,7 @@
 
 OTBN has the following interfaces:
 - A [Clock and reset interface](../../../dv/sv/common_ifs/README.md#clk_rst_if)
-- A [TileLink interface]({{< relref "/hw/dv/sv/tl_agent/doc.md" >}}).
+- A [TileLink interface](../../../dv/sv/tl_agent/README.md).
   OTBN is a TL-UL device, which expects to communicate with a TL-UL host.
   In the OpenTitan SoC, this will be the Ibex core.
 - Idle signals in each clock domain, `idle_o`, and `idle_otp_o`
diff --git a/hw/ip/otp_ctrl/README.md b/hw/ip/otp_ctrl/README.md
index 6c2b67c..f2c1e55 100644
--- a/hw/ip/otp_ctrl/README.md
+++ b/hw/ip/otp_ctrl/README.md
@@ -93,7 +93,7 @@
   - This controls whether a partition is locked and prevented from future updates.
   - A locked partition is stored alongside a digest to be used later for integrity verification.
 - Integrity Verification
-  - Once a partition is write-locked by calculating and writing a non-zero [digest]({{< relref "#locking-a-partition" >}}) to it, it can undergo periodic verification (time-scale configurable by software).
+  - Once a partition is write-locked by calculating and writing a non-zero [digest](#locking-a-partition) to it, it can undergo periodic verification (time-scale configurable by software).
 This verification takes two forms, partition integrity checks, and storage consistency checks.
 
 Since the OTP is memory-like in nature (it only outputs a certain number of bits per address location), some of the logical partitions are buffered in registers for instantaneous and parallel access by hardware.
@@ -109,7 +109,7 @@
 
 Generally speaking, the production life cycle of a device is split into 5 stages "Manufacturing" -> "Calibration and Testing" -> "Provisioning" -> "Mission" -> "RMA".
 OTP values are usually programmed during "Calibration and Testing", "Provisioning" and "RMA" stages, as explained below.
-A detailed listing of all the items and the corresponding memory map can be found in the [Programmer's Guide]({{< relref "#programmers-guide" >}})) further below.
+A detailed listing of all the items and the corresponding memory map can be found in the [Programmer's Guide](#programmers-guide) further below.
 
 ### Calibration and Test
 
@@ -137,10 +137,10 @@
 
 Write access to a partition can be permanently locked when software determines it will no longer make any updates to that partition.
 To lock, an integrity constant is calculated and programmed alongside the other data of that partition.
-The size of that integrity constant depends on the partition size granule, and is either 32bit or 64bit (see also [Direct Access Memory Map]({{< relref "#direct-access-memory-map" >}})).
+The size of that integrity constant depends on the partition size granule, and is either 32bit or 64bit (see also [Direct Access Memory Map](#direct-access-memory-map)).
 
 Once the "integrity digest" is non-zero, no further updates are allowed.
-If the partition is secret, software is in addition no longer able to read its contents (see [Secret Partition description]({{< relref "#secret-vs-nonsecret-partitions" >}})).
+If the partition is secret, software is in addition no longer able to read its contents (see [Secret Partition description](#secret-vs-nonsecret-partitions)).
 
 Note however, in all partitions, the digest itself is **ALWAYS** readable.
 This gives software an opportunity to confirm that the locking operation has proceeded correctly, and if not, scrap the part immediately.
@@ -290,7 +290,7 @@
 The commands are issued from the [life cycle controller](../lc_ctrl/README.md), and similarly, successful or failed indications are also sent back to the life cycle controller.
 Similar to the functional interface, the life cycle controller allows only one update per power cycle, and after a requested transition reverts to an inert state until reboot.
 
-For more details on how the software programs the OTP, please refer to the [Programmer's Guide]({{< relref "#programmers-guide" >}})) further below.
+For more details on how the software programs the OTP, please refer to the [Programmer's Guide](#programmers-guide) further below.
 
 
 ## Hardware Interfaces
@@ -305,9 +305,9 @@
 `AlertAsyncOn`              | 2'b11         | 2'b11        |
 `RndCnstLfsrSeed`           | (see RTL)     | (see RTL)    | Seed to be used for the internal 40bit partition check timer LFSR. This needs to be replaced by the silicon creator before the tapeout.
 `RndCnstLfsrPerm`           | (see RTL)     | (see RTL)    | Permutation to be used for the internal 40bit partition check timer LFSR. This needs to be replaced by the silicon creator before the tapeout.
-`RndCnstKey`                | (see RTL)     | (see RTL)    | Random scrambling keys for secret partitions, to be used in the [scrambling datapath]({{< relref "#scrambling-datapath" >}}).
-`RndCnstDigestConst`        | (see RTL)     | (see RTL)    | Random digest finalization constants, to be used in the [scrambling datapath]({{< relref "#scrambling-datapath" >}}).
-`RndCnstDigestIV`           | (see RTL)     | (see RTL)    | Random digest initialization vectors, to be used in the [scrambling datapath]({{< relref "#scrambling-datapath" >}}).
+`RndCnstKey`                | (see RTL)     | (see RTL)    | Random scrambling keys for secret partitions, to be used in the [scrambling datapath](#scrambling-datapath).
+`RndCnstDigestConst`        | (see RTL)     | (see RTL)    | Random digest finalization constants, to be used in the [scrambling datapath](#scrambling-datapath).
+`RndCnstDigestIV`           | (see RTL)     | (see RTL)    | Random digest initialization vectors, to be used in the [scrambling datapath](#scrambling-datapath).
 `RndCnstRawUnlockToken`     | (see RTL)     | (see RTL)    | Global RAW unlock token to be used for the first life cycle transition. See also [conditional life cycle transitions](../lc_ctrl/README.md#conditional-transitions).
 
 ### Signals
@@ -434,7 +434,7 @@
 ]}
 {{< /wavejson >}}
 
-The keys are derived from the FLASH_DATA_KEY_SEED and FLASH_ADDR_KEY_SEED values stored in the `SECRET1` partition using the [scrambling primitive]({{< relref "#scrambling-datapath" >}}).
+The keys are derived from the FLASH_DATA_KEY_SEED and FLASH_ADDR_KEY_SEED values stored in the `SECRET1` partition using the [scrambling primitive](#scrambling-datapath).
 If the key seeds have not yet been provisioned, the keys are derived from all-zero constants, and the `flash_otp_key_o.seed_valid` signal will be set to 0 in the response.
 
 Note that the req/ack protocol runs on the OTP clock.
@@ -449,7 +449,7 @@
 #### Interfaces to SRAM and OTBN Scramblers
 
 The interfaces to the SRAM and OTBN scrambling devices follow a req / ack protocol, where the scrambling device first requests a new ephemeral key by asserting the request channel (`sram_otp_key_i[*]`, `otbn_otp_key_i`).
-The OTP controller then fetches entropy from EDN and derives an ephemeral key using the SRAM_DATA_KEY_SEED and the [PRESENT scrambling data path]({{< relref "#scrambling-datapath" >}}).
+The OTP controller then fetches entropy from EDN and derives an ephemeral key using the SRAM_DATA_KEY_SEED and the [PRESENT scrambling data path](#scrambling-datapath).
 Finally, the OTP controller returns a fresh ephemeral key via the response channels (`sram_otp_key_o[*]`, `otbn_otp_key_o`), which complete the req / ack handshake.
 The wave diagram below illustrates this process for the OTBN scrambling device.
 
@@ -468,7 +468,7 @@
 It should be noted that this mechanism requires the EDN and entropy distribution network to be operational, and a key derivation request will block if they are not.
 
 Note that the req/ack protocol runs on the OTP clock.
-It is the task of the scrambling device to perform the synchronization as described in the previous subsection on the [flash scrambler interface]({{< relref "#interface-to-flash-scrambler" >}}).
+It is the task of the scrambling device to perform the synchronization as described in the previous subsection on the [flash scrambler interface](#interface-to-flash-scrambler).
 
 #### Hardware Config Bits
 
@@ -493,7 +493,7 @@
 
 ![OTP Controller Block Diagram](./doc/otp_ctrl_blockdiag.svg)
 
-Each of the partitions P0-P7 has its [own controller FSM]({{< relref "#partition-implementations" >}}) that interacts with the OTP wrapper and the [scrambling datapath]({{< relref "#scrambling-datapath" >}}) to fulfill its tasks.
+Each of the partitions P0-P7 has its [own controller FSM](#partition-implementations) that interacts with the OTP wrapper and the [scrambling datapath](#scrambling-datapath) to fulfill its tasks.
 The partitions expose the address ranges and access control information to the Direct Access Interface (DAI) in order to block accesses that go to locked address ranges.
 Further, the only two blocks that have (conditional) write access to the OTP are the DAI and the Life Cycle Interface (LCI) blocks.
 The partitions can only issue read transactions to the OTP macro.
@@ -546,7 +546,7 @@
 Also, read access through the DAI and the CSR window can be locked at runtime via a CSR.
 Read transactions through the CSR window will error out if they are out of bounds, or if read access is locked.
 
-Note that unrecoverable [OTP errors]({{< relref "#generalized-open-source-interface" >}}), ECC failures in the digest register or external escalation via `lc_escalate_en` will move the partition controller into a terminal error state.
+Note that unrecoverable [OTP errors](#generalized-open-source-interface), ECC failures in the digest register or external escalation via `lc_escalate_en` will move the partition controller into a terminal error state.
 
 #### Buffered Partition
 
@@ -580,7 +580,7 @@
 
 Upon reset release, the DAI controller first sends an initialization command to the OTP macro.
 Once the OTP macro becomes operational, an initialization request is sent to all partition controllers, which will read out and initialize the corresponding buffer registers.
-The DAI then becomes operational once all partitions have initialized, and supports read, write and digest calculation commands (see [here]({{< relref "#direct-access-interface" >}}) for more information about how to interact with the DAI through the CSRs).
+The DAI then becomes operational once all partitions have initialized, and supports read, write and digest calculation commands (see [here](#direct-access-interface) for more information about how to interact with the DAI through the CSRs).
 
 Read and write commands transfer either 32bit or 64bit of data from the OTP to the corresponding CSR and vice versa. The access size is determined automatically, depending on whether the partition is scrambled or not. Also, (de)scrambling is performed transparently, depending on whether the partition is scrambled or not.
 
@@ -594,7 +594,7 @@
 ![Life Cycle Interface FSM](./doc/otp_ctrl_lci_fsm.svg)
 
 Upon reset release the LCI FSM waits until the OTP controller has initialized and the LCI gets enabled.
-Once it is in the idle state, life cycle state updates can be initiated via the life cycle interface as [described here]({{< relref "#state-transitions" >}}).
+Once it is in the idle state, life cycle state updates can be initiated via the life cycle interface as [described here](#state-transitions).
 The LCI controller takes the life cycle state to be programmed and writes all 16bit words to OTP.
 In case of unrecoverable OTP errors, the FSM signals an error to the life cycle controller and moves into a terminal error state.
 
@@ -603,7 +603,7 @@
 ![Key Derivation Interface FSM](./doc/otp_ctrl_kdi_fsm.svg)
 
 Upon reset release the KDI FSM waits until the OTP controller has initialized and the KDI gets enabled.
-Once it is in the idle state, key derivation can be requested via the [flash]({{< relref "#interface-to-flash-scrambler" >}}) and [sram]({{< relref "#interface-to-sram-and-otbn-scramblers" >}}) interfaces.
+Once it is in the idle state, key derivation can be requested via the [flash](#interface-to-flash-scrambler) and [sram](#interface-to-sram-and-otbn-scramblers) interfaces.
 Based on which interface makes the request, the KDI controller will evaluate a variant of the PRESENT digest mechanism as described in more detail below.
 
 ### Scrambling Datapath
@@ -623,7 +623,7 @@
 
 #### Digest Calculation
 
-The integrity digests used in the [partition checks]({{< relref "#partition-checks" >}}) are computed using a custom [Merkle-Damgard](https://en.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_construction) scheme, where the employed one-way compression function F is constructed by using PRESENT in a [Davies-Meyer arrangement](https://en.wikipedia.org/wiki/One-way_compression_function#Davies%E2%80%93Meyer).
+The integrity digests used in the [partition checks](#partition-checks) are computed using a custom [Merkle-Damgard](https://en.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_construction) scheme, where the employed one-way compression function F is constructed by using PRESENT in a [Davies-Meyer arrangement](https://en.wikipedia.org/wiki/One-way_compression_function#Davies%E2%80%93Meyer).
 This is illustrated in subfigure b).
 
 At the beginning of the digest calculation the 64bit state is initialized with an initialization vector (IV).
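
For reference, the Davies-Meyer compression and its Merkle-Damgard iteration take the standard form below, with PRESENT as the block cipher $E$ and the message blocks $m_i$ used as the cipher key, as is usual for Davies-Meyer; the IV comes from the `RndCnstDigestIV` parameters listed earlier, and the finalization step involving the `RndCnstDigestConst` values is omitted from this sketch:

$$ H_0 = \mathrm{IV}, \qquad H_i = F(H_{i-1}, m_i) = E_{m_i}(H_{i-1}) \oplus H_{i-1} $$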
@@ -720,7 +720,7 @@
 `rdata_o`               | `output`         | `logic [IfWidth-1:0]`       | Read data from read commands.
 `err_o`                 | `output`         | `logic [ErrWidth-1:0]`      | Error code.
 
-The `prim_otp` wrappers implements the `Macro*` error codes (0x0 - 0x4) defined in the [OTP error handling]({{< relref "#error-handling" >}}).
+The `prim_otp` wrappers implement the `Macro*` error codes (0x0 - 0x4) defined in the [OTP error handling](#error-handling) section.
 
 The timing diagram below illustrates the timing of a command.
 Note that both read and write commands return a response, and each command is independent of the previously issued commands.
@@ -782,7 +782,7 @@
 
 1. Check that the OTP controller has successfully initialized by reading {{< regref STATUS >}}. I.e., make sure that none of the ERROR bits are set, and that the DAI is idle ({{< regref STATUS.DAI_IDLE >}}).
 2. Set up the periodic background checks:
-    - Choose whether to enable periodic [background checks]({{< relref "#partition-checks" >}}) by programming nonzero mask values to {{< regref INTEGRITY_CHECK_PERIOD >}} and {{< regref CONSISTENCY_CHECK_PERIOD >}}.
+    - Choose whether to enable periodic [background checks](#partition-checks) by programming nonzero mask values to {{< regref INTEGRITY_CHECK_PERIOD >}} and {{< regref CONSISTENCY_CHECK_PERIOD >}}.
     - Choose whether such checks shall be subject to a timeout by programming a nonzero timeout cycle count to {{< regref CHECK_TIMEOUT >}}.
     - It is recommended to lock down the background check registers via {{< regref CHECK_REGWEN >}}, once the background checks have been set up.
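
A short Python sketch of step 2, with made-up register offsets and a stubbed CSR write; the real offsets come from the register table, and the mask and timeout values are placeholders:

```python
INTEGRITY_CHECK_PERIOD   = 0x3C   # assumed offsets, for illustration only
CONSISTENCY_CHECK_PERIOD = 0x40
CHECK_TIMEOUT            = 0x38
CHECK_REGWEN             = 0x34

csrs = {}  # stand-in for the memory-mapped CSR space
def write32(off, val): csrs[off] = val & 0xFFFFFFFF

def setup_background_checks():
    write32(INTEGRITY_CHECK_PERIOD, 0x3FFFF)      # nonzero mask enables integrity checks
    write32(CONSISTENCY_CHECK_PERIOD, 0x3FFFFFF)  # nonzero mask enables consistency checks
    write32(CHECK_TIMEOUT, 0x100000)              # nonzero count arms the check timeout
    write32(CHECK_REGWEN, 0)                      # lock the three registers above
```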
 
@@ -823,7 +823,7 @@
 {{< regref DIRECT_ACCESS_CMD >}}     | Command register to trigger a read or a write access.
 {{< regref DIRECT_ACCESS_REGWEN >}}  | Write protection register for DAI.
 
-See further below for a detailed [Memory Map]({{< relref "#direct-access-memory-map" >}}) of the address space accessible via the DAI.
+See further below for a detailed [Memory Map](#direct-access-memory-map) of the address space accessible via the DAI.
 
 ### Readout Sequence
 
@@ -835,7 +835,7 @@
 3. Trigger a read command by writing 0x1 to {{< regref DIRECT_ACCESS_CMD >}}.
 4. Poll the {{< regref STATUS >}} until the DAI state goes back to idle.
 Alternatively, the `otp_operation_done` interrupt can be enabled to notify the processor once an access has completed.
-5. If the status register flags a DAI error, additional handling is required (see [Section on Error handling]({{< relref "#error-handling" >}})).
+5. If the status register flags a DAI error, additional handling is required (see [Section on Error handling](#error-handling)).
 6. If the region accessed has a 32bit access granule, the 32bit chunk of read data can be read from {{< regref DIRECT_ACCESS_RDATA_0 >}}.
 If the region accessed has a 64bit access granule, the 64bit chunk of read data can be read from the {{< regref DIRECT_ACCESS_RDATA_0 >}} and {{< regref DIRECT_ACCESS_RDATA_1 >}} registers.
 7. Go back to 1. and repeat until all data has been read.
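+
+The following is a minimal C sketch of steps 3 to 6 above for an item with a 64bit access granule, assuming memory-mapped CSR access.
+The register names and the 0x1 command value come from the steps above; the base address, register offsets, the STATUS bit positions, the programming of a `DIRECT_ACCESS_ADDRESS` register in the setup steps not shown in this excerpt, and the `otp_reg_read`/`otp_reg_write` helpers are illustrative assumptions.
+
+```c
+#include <stdbool.h>
+#include <stdint.h>
+
+#define OTP_CTRL_BASE                         0x40130000u  // assumption
+#define OTP_CTRL_STATUS_OFFSET                0x10u        // assumption
+#define OTP_CTRL_STATUS_DAI_IDLE_BIT          (1u << 21)   // assumption
+#define OTP_CTRL_STATUS_DAI_ERROR_BIT         (1u << 12)   // assumption
+#define OTP_CTRL_DIRECT_ACCESS_CMD_OFFSET     0x60u        // assumption
+#define OTP_CTRL_DIRECT_ACCESS_ADDRESS_OFFSET 0x64u        // assumption
+#define OTP_CTRL_DIRECT_ACCESS_RDATA_0_OFFSET 0x70u        // assumption
+#define OTP_CTRL_DIRECT_ACCESS_RDATA_1_OFFSET 0x74u        // assumption
+
+static inline uint32_t otp_reg_read(uint32_t offset) {
+  return *(volatile uint32_t *)(OTP_CTRL_BASE + offset);
+}
+static inline void otp_reg_write(uint32_t offset, uint32_t value) {
+  *(volatile uint32_t *)(OTP_CTRL_BASE + offset) = value;
+}
+
+// Reads one 64bit-granule item; returns false if the STATUS register flags a DAI error.
+bool otp_dai_read64(uint32_t otp_byte_addr, uint64_t *out) {
+  otp_reg_write(OTP_CTRL_DIRECT_ACCESS_ADDRESS_OFFSET, otp_byte_addr);  // from the setup steps
+  otp_reg_write(OTP_CTRL_DIRECT_ACCESS_CMD_OFFSET, 0x1u);               // step 3: trigger a read
+  // Step 4: poll until the DAI is idle again (the interrupt could be used instead).
+  while ((otp_reg_read(OTP_CTRL_STATUS_OFFSET) & OTP_CTRL_STATUS_DAI_IDLE_BIT) == 0) {
+  }
+  // Step 5: check for a DAI error before consuming the data.
+  if (otp_reg_read(OTP_CTRL_STATUS_OFFSET) & OTP_CTRL_STATUS_DAI_ERROR_BIT) {
+    return false;
+  }
+  // Step 6: with a 64bit granule, both RDATA registers hold valid data.
+  *out = (uint64_t)otp_reg_read(OTP_CTRL_DIRECT_ACCESS_RDATA_0_OFFSET) |
+         ((uint64_t)otp_reg_read(OTP_CTRL_DIRECT_ACCESS_RDATA_1_OFFSET) << 32);
+  return true;
+}
+```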
@@ -854,7 +854,7 @@
 4. Trigger a write command by writing 0x2 to {{< regref DIRECT_ACCESS_CMD >}}.
 5. Poll the {{< regref STATUS >}} until the DAI state goes back to idle.
 Alternatively, the `otp_operation_done` interrupt can be enabled to notify the processor once an access has completed.
-6. If the status register flags a DAI error, additional handling is required (see [Section on Error handling]({{< relref "#error-handling" >}})).
+6. If the status register flags a DAI error, additional handling is required (see [Section on Error handling](#error-handling)).
 7. Go back to 1. and repeat until all data has been written.
 
 The hardware will set {{< regref DIRECT_ACCESS_REGWEN >}} to 0x0 while an operation is pending in order to temporarily lock write access to the CSRs.
@@ -871,7 +871,7 @@
 4. Trigger a digest calculation command by writing 0x4 to {{< regref DIRECT_ACCESS_CMD >}}.
 5. Poll the {{< regref STATUS >}} until the DAI state goes back to idle.
 Alternatively, the `otp_operation_done` interrupt can be enabled to notify the processor once an access has completed.
-6. If the status register flags a DAI error, additional handling is required (see [Section on Error handling]({{< relref "#error-handling" >}})).
+6. If the status register flags a DAI error, additional handling is required (see [Section on Error handling](#error-handling)).
 
 The hardware will set {{< regref DIRECT_ACCESS_REGWEN >}} to 0x0 while an operation is pending in order to temporarily lock write access to the CSRs.
 
@@ -944,7 +944,7 @@
 For the software partition digests, it is entirely up to software to decide on the digest algorithm to be used.
 Hardware will determine the lock condition only based on whether a non-zero value is present at that location or not.
 
-For the hardware partitions, hardware calculates this digest and uses it for [background verification]({{< relref "#partition-checks" >}}).
+For the hardware partitions, hardware calculates this digest and uses it for [background verification](#partition-checks).
 Digest calculation can be triggered via the DAI.
 
 Finally, it should be noted that the RMA_TOKEN and CREATOR_ROOT_KEY_SHARE0 / CREATOR_ROOT_KEY_SHARE1 items can only be programmed when the device is in the DEV, PROD, PROD_END and RMA stages.
@@ -956,8 +956,8 @@
 
 The following represents a typical provisioning sequence for items in all partitions (except for the LIFE_CYCLE partition, which is not software-programmable):
 
-1. [Program]({{< relref "#programming-sequence" >}}) the item in 32bit or 64bit chunks via the DAI.
-2. [Read back]({{< relref "#readout-sequence" >}}) and verify the item via the DAI.
+1. [Program](#programming-sequence) the item in 32bit or 64bit chunks via the DAI.
+2. [Read back](#readout-sequence) and verify the item via the DAI.
 3. If the item is exposed via CSRs or a CSR window, perform a full-system reset and verify whether those fields are correctly populated.
 
 Note that any unrecoverable errors during the programming steps, or mismatches during the readback and verification steps indicate that the device might be malfunctioning (possibly due to fabrication defects) and hence the device may have to be scrapped.
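+
+The following is a minimal C sketch of steps 1 and 2 above for a 64bit-granule item.
+It assumes DAI helpers shaped like the readout sketch earlier in this document (`otp_dai_write64` would issue command 0x2 instead of 0x1); the helper names and the byte addressing of consecutive 64bit words are illustrative assumptions.
+
+```c
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+// Assumed DAI helpers; see the readout sketch above for one possible shape.
+bool otp_dai_write64(uint32_t otp_byte_addr, uint64_t value);
+bool otp_dai_read64(uint32_t otp_byte_addr, uint64_t *out);
+
+// Step 1: program the item; step 2: read it back and verify.
+bool otp_provision_item64(uint32_t otp_byte_addr, const uint64_t *data, size_t n) {
+  for (size_t i = 0; i < n; ++i) {
+    if (!otp_dai_write64(otp_byte_addr + 8u * i, data[i])) {
+      return false;  // unrecoverable programming error
+    }
+  }
+  for (size_t i = 0; i < n; ++i) {
+    uint64_t readback = 0;
+    if (!otp_dai_read64(otp_byte_addr + 8u * i, &readback) || readback != data[i]) {
+      return false;  // mismatch: the device may be malfunctioning
+    }
+  }
+  return true;  // step 3 (full-system reset and CSR check) follows where applicable
+}
+```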
@@ -968,8 +968,8 @@
 Once a partition has been fully populated, write access to that partition has to be permanently locked.
 For the HW_CFG and SECRET* partitions, this can be achieved as follows:
 
-1. [Trigger]({{< relref "#digest-calculation-sequence" >}}) a digest calculation via the DAI.
-2. [Read back]({{< relref "#readout-sequence" >}}) and verify the digest location via the DAI.
+1. [Trigger](#digest-calculation-sequence) a digest calculation via the DAI.
+2. [Read back](#readout-sequence) and verify the digest location via the DAI.
 3. Perform a full-system reset and verify that the corresponding CSRs exposing the 64bit digest have been populated ({{< regref "HW_CFG_DIGEST_0" >}}, {{< regref "SECRET0_DIGEST_0" >}}, {{< regref "SECRET1_DIGEST_0" >}} or {{< regref "SECRET2_DIGEST_0" >}}).
 
 It should be noted that locking only takes effect after a system reset since the affected partitions first have to re-sense the digest values.
@@ -978,8 +978,8 @@
 
 For the {{< regref "CREATOR_SW_CFG" >}} and {{< regref "OWNER_SW_CFG" >}} partitions, the process is similar, but computation and programming of the digest is entirely up to software:
 
-1. Compute a 64bit digest over the relevant parts of the partition, and [program]({{< relref "#programming-sequence" >}}) that value to {{< regref "CREATOR_SW_CFG_DIGEST_0" >}} or {{< regref "OWNER_SW_CFG_DIGEST_0" >}} via the DAI. Note that digest accesses through the DAI have an access granule of 64bit.
-2. [Read back]({{< relref "#readout-sequence" >}}) and verify the digest location via the DAI.
+1. Compute a 64bit digest over the relevant parts of the partition, and [program](#programming-sequence) that value to {{< regref "CREATOR_SW_CFG_DIGEST_0" >}} or {{< regref "OWNER_SW_CFG_DIGEST_0" >}} via the DAI. Note that digest accesses through the DAI have an access granule of 64bit.
+2. [Read back](#readout-sequence) and verify the digest location via the DAI.
 3. Perform a full-system reset and verify that the corresponding digest CSRs {{< regref "CREATOR_SW_CFG_DIGEST_0" >}} or {{< regref "OWNER_SW_CFG_DIGEST_0" >}} have been populated with the correct 64bit value.
 
 Note that any unrecoverable errors during the programming steps, or mismatches during the read-back and verification steps indicate that the device might be malfunctioning (possibly due to fabrication defects) and hence the device may have to be scrapped.
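+
+The following is a minimal C sketch of steps 1 and 2 above.
+The digest algorithm is deliberately left to software, so the xor/rotate placeholder below is purely illustrative, as are the helper names and the digest byte address; only the 64bit DAI access granule for digest locations comes from the text above.
+
+```c
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+// Assumed DAI helpers; see the readout sketch earlier in this document.
+bool otp_dai_write64(uint32_t otp_byte_addr, uint64_t value);
+bool otp_dai_read64(uint32_t otp_byte_addr, uint64_t *out);
+
+#define CREATOR_SW_CFG_DIGEST_BYTE_ADDR 0x348u  // placeholder address, not from this spec
+
+// Placeholder digest; any algorithm chosen by software can be used instead.
+static uint64_t sw_digest64(const uint32_t *words, size_t n) {
+  uint64_t d = 0xA5A5A5A5A5A5A5A5ull;
+  for (size_t i = 0; i < n; ++i) {
+    d = ((d << 13) | (d >> 51)) ^ words[i];
+  }
+  return d;
+}
+
+// Step 1: compute and program the digest; step 2: read it back and verify.
+bool lock_creator_sw_cfg(const uint32_t *cfg, size_t n_words) {
+  uint64_t digest = sw_digest64(cfg, n_words);
+  if (!otp_dai_write64(CREATOR_SW_CFG_DIGEST_BYTE_ADDR, digest)) {
+    return false;
+  }
+  uint64_t readback = 0;
+  return otp_dai_read64(CREATOR_SW_CFG_DIGEST_BYTE_ADDR, &readback) && readback == digest;
+}
+```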
diff --git a/hw/ip/pinmux/README.md b/hw/ip/pinmux/README.md
index 98fefdb..5d969f0 100644
--- a/hw/ip/pinmux/README.md
+++ b/hw/ip/pinmux/README.md
@@ -58,7 +58,7 @@
 Further, all output enables are set to zero, which essentially causes all pads to be in high-Z state after reset.
 In addition to wiring programmability, each muxed peripheral input can be set constantly to 0 or 1, and each muxed chip output can be set constantly to 0, 1 or high-Z.
 
-See the [muxing matrix]({{< relref "#muxing-matrix">}}) section for more details about the mux implementation.
+See the [muxing matrix](#muxing-matrix) section for more details about the mux implementation.
 
 ### Retention and Wakeup Features
 
@@ -71,19 +71,19 @@
 The `pinmux` module itself is in the always-on (AON) power domain, and as such does not lose configuration state when a sleep power cycle is performed.
 However, only the wakeup detector logic will be actively clocked during sleep in order to save power.
 
-See the [retention logic]({{< relref "#retention-logic">}}) and [wakeup detectors]({{< relref "#wakeup-detectors">}}) sections for more details about the mux implementation.
+See the [retention logic](#retention-logic) and [wakeup detectors](#wakeup-detectors) sections for more details about the mux implementation.
 
 ### Test and Debug Access
 
 The hardware strap sampling and TAP isolation logic provides test and debug access to the chip during specific life cycle states.
-This mechanism is explained in more detail in the [strap sampling and TAP isolation]({{< relref "#strap-sampling-and-tap-isolation">}}) section.
+This mechanism is explained in more detail in the [strap sampling and TAP isolation](#strap-sampling-and-tap-isolation) section.
 
 ### Pad Attributes
 
 Additional pad-specific features such as inversion, pull-up, pull-down, virtual open-drain, drive-strength and input/output inversion etc. can be exercised via the pad attribute CSRs.
 The `pinmux` module supports a comprehensive set of such pad attributes, but it is permissible that some of them may not be supported by the underlying pad implementation.
 For example, certain ASIC libraries may not provide open-drain outputs, and FPGAs typically do not allow all of these attributes to be programmed dynamically at runtime.
-See the [generic pad wrapper]({{< relref "#generic-pad-wrapper" >}}) section below for more details.
+See the [generic pad wrapper](#generic-pad-wrapper) section below for more details.
 Note that static pad attributes for FPGAs are currently not covered in this specification.
 
 ## Hardware Interfaces
@@ -94,7 +94,7 @@
 
 The following table lists the main parameters used throughout the `pinmux` design.
 Note that the `pinmux` is generated based on the system configuration, and hence these parameters are placed into a package.
-The pinout and `pinmux` mappings are listed under [Pinout and Pinmux Mapping]({{< relref "#pinout-and-pinmux-mapping" >}}) for specific top-level configurations.
+The pinout and `pinmux` mappings are listed under [Pinout and Pinmux Mapping](#pinout-and-pinmux-mapping) for specific top-level configurations.
 
 Parameter      | Description
 ---------------|---------------
@@ -259,7 +259,7 @@
 `attr_i[8:7]`        | `input`    | `logic`     | Slew rate (0x0: slowest, 0x3: fastest)
 `attr_i[12:9]`       | `input`    | `logic`     | Drive strength (0x0: weakest, 0xf: strongest)
 
-Note that the corresponding pad attribute registers {{< regref "MIO_PAD_ATTR_0" >}} and {{< regref "DIO_PAD_ATTR_0" >}} have "writes-any-reads-legal" (WARL) behavior (see also [pad attributes]({{< relref "#pad-attributes" >}})).
+Note that the corresponding pad attribute registers {{< regref "MIO_PAD_ATTR_0" >}} and {{< regref "DIO_PAD_ATTR_0" >}} have "writes-any-reads-legal" (WARL) behavior (see also [pad attributes](#pad-attributes)).
 
 # Programmers Guide
 
@@ -349,7 +349,7 @@
 These detectors can be individually enabled and disabled regardless of the sleep state.
 This allows SW to set them up before sleep and disable them after sleep, so that no events are missed during sleep entry and exit.
 
-For more information on the patterns supported by the wakeup detectors, see [wakeup detectors]({{< relref "#wakeup-detectors" >}}).
+For more information on the patterns supported by the wakeup detectors, see [wakeup detectors](#wakeup-detectors).
 
 A typical programming sequence for the wakeup detectors looks as follows:
 
diff --git a/hw/ip/prim/doc/prim_ram_1p_scr.md b/hw/ip/prim/doc/prim_ram_1p_scr.md
index 5b7cf85..5c55be1 100644
--- a/hw/ip/prim/doc/prim_ram_1p_scr.md
+++ b/hw/ip/prim/doc/prim_ram_1p_scr.md
@@ -16,7 +16,7 @@
 For area constrained settings, the parameter `ReplicateKeyStream` in `prim_ram_1p_scr` can be set to 1 in order to replicate the keystream block generated by one single primitive instead of using multiple parallel PRINCE instances (but it should be understood that this lowers the level of security).
 
 Since plain CTR mode does not diffuse the data bits due to the bitwise XOR, the scheme is augmented by passing each individual word through a two-layer substitution-permutation (S&P) network implemented with the `prim_subst_perm` primitive (the diffusion chunk width can be parameterized via the `DiffWidth` parameter).
-The S&P network employed is similar to the one employed in PRESENT and will be explained in more detail [further below]({{< relref "#custom-substitution-permutation-network" >}}).
+The S&P network employed is similar to the one employed in PRESENT and will be explained in more detail [further below](#custom-substitution-permutation-network).
 Note that if individual bytes need to be writable without having to perform a read-modify-write operation, the diffusion chunk width should be set to 8.
 
 Another CTR mode augmentation that is aimed at breaking the linear address space is SRAM address scrambling.
diff --git a/hw/ip/rstmgr/dv/README.md b/hw/ip/rstmgr/dv/README.md
index a745342..6a3af54 100644
--- a/hw/ip/rstmgr/dv/README.md
+++ b/hw/ip/rstmgr/dv/README.md
@@ -112,7 +112,7 @@
   Checked via SVAs in [`hw/ip/pwrmgr/dv/sva/pwrmgr_rstmgr_sva_if.sv`](https://github.com/lowRISC/opentitan/blob/master/hw/ip/pwrmgr/dv/sva/pwrmgr_rstmgr_sva_if.sv).
 * Response to `cpu_i.ndmreset_req` input: after it is asserted, rstmgr's `rst_sys_src_n` should go active.
   Checked via SVA in [`hw/ip/pwrmgr/dv/sva/pwrmgr_rstmgr_sva_if.sv`](https://github.com/lowRISC/opentitan/blob/master/hw/ip/pwrmgr/dv/sva/pwrmgr_rstmgr_sva_if.sv).
-* Resets cascade hierarchically per [Reset Topology]({{< relref "hw/ip/rstmgr/doc" >}}:#reset-topology).
+* Resets cascade hierarchically per [Reset Topology](../README.md#reset-topology).
   Checked via SVA in [`hw/ip/rstmgr/dv/sva/rstmgr_cascading_sva_if.sv`](https://github.com/lowRISC/opentitan/blob/master/hw/ip/rstmgr/dv/sva/rstmgr_cascading_sva_if.sv).
 * POR must be active for at least 32 consecutive cycles before going inactive before output resets go inactive.
   Checked via SVA in [`hw/ip/rstmgr/dv/sva/rstmgr_cascading_sva_if.sv`](https://github.com/lowRISC/opentitan/blob/master/hw/ip/rstmgr/dv/sva/rstmgr_cascading_sva_if.sv).
diff --git a/hw/ip/spi_host/README.md b/hw/ip/spi_host/README.md
index d115801..1ecd193 100644
--- a/hw/ip/spi_host/README.md
+++ b/hw/ip/spi_host/README.md
@@ -644,7 +644,7 @@
 
 The 8-bit fields {{< regref "STATUS.RXQD" >}} and {{< regref "STATUS.TXQD" >}} respectively indicate the number of words currently stored in the RX and TX FIFOs.
 
-The remaining fields in the {{< regref "STATUS" >}} register are all flags related to the management of the TX and RX FIFOs, which are described in the [section on SPI Events]({{< relref "#spi-events-and-event-interrupts" >}}).
+The remaining fields in the {{< regref "STATUS" >}} register are all flags related to the management of the TX and RX FIFOs, which are described in the [section on SPI Events](#spi-events-and-event-interrupts).
 
 ## Other Registers
 
@@ -864,7 +864,7 @@
 
 Dual- and Standard-mode segments can tolerate byte-to-byte delays of 7 or 15 clocks, so there is no known mechanism for transient stalls at these speeds.
 
-Please refer to the [the Appendix]({{< relref "#analysis-of-transient-datapath-stalls" >}}) for a detailed analysis of transient stall events.
+Please refer to [the Appendix](#analysis-of-transient-datapath-stalls) for a detailed analysis of transient stall events.
 
 ## SPI_HOST Finite State Machine (FSM)
 
@@ -1141,7 +1141,7 @@
 
 ### Implementation of Configuration Change Delays
 
-As described in the [Theory of Operation]({{< relref "#idle-time-delays-when-changing-configurations" >}}), changes in configuration only occur when the SPI_HOST is idle.
+As described in the [Theory of Operation](#idle-time-delays-when-changing-configurations), changes in configuration only occur when the SPI_HOST is idle.
 The configuration change must be preceded by enough idle time to satisfy the previous configuration, and followed by enough idle time to satisfy the new configuration.
 
 In order to support these idle time requirements, the SPI_HOST FSM has two idle waiting states.
diff --git a/hw/ip/sram_ctrl/README.md b/hw/ip/sram_ctrl/README.md
index e0387c6..5e51820 100644
--- a/hw/ip/sram_ctrl/README.md
+++ b/hw/ip/sram_ctrl/README.md
@@ -36,10 +36,10 @@
 Note however that the throughput of read operations is the same for full- and sub-word read operations.
 
 The scrambling mechanism is always enabled and the `sram_ctrl` provides the scrambling device with a predefined scrambling key and nonce when it comes out of reset.
-It is the task of SW to request an updated scrambling key and nonce via the CSRs as described in the [Programmer's Guide]({{< relref "#programmers-guide" >}}) below.
+It is the task of SW to request an updated scrambling key and nonce via the CSRs as described in the [Programmer's Guide](#programmers-guide) below.
 
 For SW convenience, the SRAM controller also provides an LFSR-based memory initialization feature that can overwrite the entire memory with pseudorandom data.
-Similarly to the scrambling key, it is the task of of SW to request memory initialization via the CSRs as described in the [Programmer's Guide]({{< relref "#programmers-guide" >}}) below.
+Similarly to the scrambling key, it is the task of SW to request memory initialization via the CSRs as described in the [Programmer's Guide](#programmers-guide) below.
 
 Note that TL-UL accesses to the memory that occur while a key request or hardware initialization is pending will be blocked until the request has completed.
 
diff --git a/hw/ip/sysrst_ctrl/README.md b/hw/ip/sysrst_ctrl/README.md
index eff0b75..9e8253f 100644
--- a/hw/ip/sysrst_ctrl/README.md
+++ b/hw/ip/sysrst_ctrl/README.md
@@ -140,7 +140,7 @@
 - A H -> L transition on the `pwrb_in_i` signal
 - A L -> H transition on the `lid_open_i` signal
 
-Note that the signals may be potentially inverted due to the [input inversion feature]({{< relref "#inversion" >}}).
+Note that the signals may be potentially inverted due to the [input inversion feature](#inversion).
 
 In order to activate this feature, software should do the following:
 
diff --git a/hw/ip/uart/README.md b/hw/ip/uart/README.md
index ee66a15..44728f4 100644
--- a/hw/ip/uart/README.md
+++ b/hw/ip/uart/README.md
@@ -6,7 +6,7 @@
 
 This document specifies UART hardware IP functionality. This module
 conforms to the
-[Comportable guideline for peripheral functionality.](/doc/rm/comportability_specification)
+[Comportable guideline for peripheral functionality](../../../doc/contributing/hw/comportability/README.md).
 See that document for integration overview within the broader
 top level system.
 
diff --git a/hw/ip/usbdev/README.md b/hw/ip/usbdev/README.md
index d7e9af6..97293d0 100644
--- a/hw/ip/usbdev/README.md
+++ b/hw/ip/usbdev/README.md
@@ -45,7 +45,7 @@
 
 The block diagram shows a high level view of the USB device including the main register access paths.
 
-![Block Diagram](usbdev_block.svg "image_tooltip")
+![Block Diagram](doc/usbdev_block.svg)
 
 
 ## Clocking
diff --git a/hw/ip_templates/alert_handler/README.md b/hw/ip_templates/alert_handler/README.md
index e9d9e6a..a401258 100644
--- a/hw/ip_templates/alert_handler/README.md
+++ b/hw/ip_templates/alert_handler/README.md
@@ -155,7 +155,7 @@
 
 The `lpg_cg_en_i` and `lpg_rst_en_i` are two arrays with multibit indication signals from the [clock](../../ip/clkmgr/README.md) and [reset managers](../../ip/rstmgr/README.md).
 These indication signals convey whether a specific group of alert senders are either clock gated or in reset.
-As explained in [more detail below]({{< relref "#low-power-management-of-alert-channels" >}}), this information is used to temporarily halt the ping timer mechanism on channels that are in a low-power state in order to prevent false positives.
+As explained in [more detail below](#low-power-management-of-alert-channels), this information is used to temporarily halt the ping timer mechanism on channels that are in a low-power state in order to prevent false positives.
 
 #### Crashdump Output
 
@@ -369,7 +369,7 @@
 ### LFSR Timer
 
 The `ping_req_i` inputs of all signaling modules (`prim_alert_receiver`, `prim_esc_sender`) instantiated within the alert handler are connected to a central ping timer that alternatingly pings either an alert line or an escalation line after waiting for a pseudo-random amount of clock cycles.
-Further, this ping timer also randomly selects a particular alert line to be pinged (escalation senders are always pinged in-order due to the [ping monitoring mechanism]({{< relref "#monitoring-of-pings-at-the-escalation-receiver-side" >}}) on the escalation side).
+Further, this ping timer also randomly selects a particular alert line to be pinged (escalation senders are always pinged in-order due to the [ping monitoring mechanism](#monitoring-of-pings-at-the-escalation-receiver-side) on the escalation side).
 That should make it more difficult to predict the next ping occurrence based on past observations.
 
 The ping timer is implemented using an [LFSR-based PRNG of Galois type](../../ip/prim/doc/prim_lfsr.md).
@@ -397,7 +397,7 @@
 Only alerts that have been *enabled and locked* will be pinged in order to avoid spurious alerts.
 Escalation channels are always enabled, and hence will always be pinged once this mechanism has been turned on.
 
-In addition to the ping timer mechanism described above, the escalation receivers contain monitoring  counters that monitor the liveness of the alert handler (described in more detail in [this section]({{< relref "#monitoring-of-pings-at-the-escalation-receiver-side" >}})).
+In addition to the ping timer mechanism described above, the escalation receivers contain monitoring counters that monitor the liveness of the alert handler (described in more detail in [this section](#monitoring-of-pings-at-the-escalation-receiver-side)).
 This mechanism requires that the maximum wait time between escalation receiver pings is bounded.
 To that end, escalation senders are pinged in-order every second ping operation (i.e., the wait time is randomized, but the selection of the escalation line is not).
 
diff --git a/hw/top_earlgrey/data/chip_testplan.hjson b/hw/top_earlgrey/data/chip_testplan.hjson
index 36d75f7..9b0c645 100644
--- a/hw/top_earlgrey/data/chip_testplan.hjson
+++ b/hw/top_earlgrey/data/chip_testplan.hjson
@@ -1734,7 +1734,7 @@
               - lc_iso_part_sw_wr_en_o: impacts flash_ctrl
               - lc_seed_hw_rd_en_o: impacts flash_ctrl & otp_ctrl
             - These outputs are enabled per the
-              [life cycle architecture spec]({{< relref "doc/security/specs/device_life_cycle/#architecture" >}}).
+              [life cycle architecture spec](doc/security/specs/device_life_cycle/README.md#architecture).
 
             X-ref'ed with the respective IP tests that consume these signals.
 
diff --git a/hw/top_earlgrey/dv/README.md b/hw/top_earlgrey/dv/README.md
index 1f3a329..9abfe58 100644
--- a/hw/top_earlgrey/dv/README.md
+++ b/hw/top_earlgrey/dv/README.md
@@ -130,7 +130,7 @@
 ```console
 $ ./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_uart_tx_rx
 ```
-For a list of available tests  to run, please see the 'Tests' column in the [testplan]({{< relref "#testplan" >}}) below.
+For a list of available tests to run, please see the 'Tests' column in the [testplan](#testplan) below.
 
 ## Regressions
 
diff --git a/hw/top_earlgrey/ip/pinmux/doc/autogen/targets.md b/hw/top_earlgrey/ip/pinmux/doc/autogen/targets.md
index dc8f616..b475239 100644
--- a/hw/top_earlgrey/ip/pinmux/doc/autogen/targets.md
+++ b/hw/top_earlgrey/ip/pinmux/doc/autogen/targets.md
@@ -7,5 +7,5 @@
 
 |  Target Name  |  #IO Banks  |  #Muxed Pads  |  #Direct Pads  |  #Manual Pads  |  #Total Pads  |                               Pinout / Pinmux Tables                                |
 |:-------------:|:-----------:|:-------------:|:--------------:|:--------------:|:-------------:|:-----------------------------------------------------------------------------------:|
-|     ASIC      |      4      |      47       |       14       |       10       |      71       | [Pinout Table](../../../top_earlgrey/ip/pinmux/doc/autogen/pinout_asic/index.html)  |
-|     CW310     |      4      |      47       |       14       |       15       |      76       | [Pinout Table](../../../top_earlgrey/ip/pinmux/doc/autogen/pinout_cw310/index.html) |
+|     ASIC      |      4      |      47       |       14       |       10       |      71       | [Pinout Table](../../../top_earlgrey/ip/pinmux/doc/autogen/pinout_asic.md)  |
+|     CW310     |      4      |      47       |       14       |       15       |      76       | [Pinout Table](../../../top_earlgrey/ip/pinmux/doc/autogen/pinout_cw310.md) |
diff --git a/hw/top_earlgrey/ip_autogen/alert_handler/README.md b/hw/top_earlgrey/ip_autogen/alert_handler/README.md
index 33bc75b..bd25c26 100644
--- a/hw/top_earlgrey/ip_autogen/alert_handler/README.md
+++ b/hw/top_earlgrey/ip_autogen/alert_handler/README.md
@@ -155,7 +155,7 @@
 
 The `lpg_cg_en_i` and `lpg_rst_en_i` are two arrays with multibit indication signals from the [clock](../../../ip/clkmgr/README.md) and [reset managers](../../../ip/rstmgr/README.md).
 These indication signals convey whether a specific group of alert senders are either clock gated or in reset.
-As explained in [more detail below]({{< relref "#low-power-management-of-alert-channels" >}}), this information is used to temporarily halt the ping timer mechanism on channels that are in a low-power state in order to prevent false positives.
+As explained in [more detail below](#low-power-management-of-alert-channels), this information is used to temporarily halt the ping timer mechanism on channels that are in a low-power state in order to prevent false positives.
 
 #### Crashdump Output
 
@@ -369,7 +369,7 @@
 ### LFSR Timer
 
 The `ping_req_i` inputs of all signaling modules (`prim_alert_receiver`, `prim_esc_sender`) instantiated within the alert handler are connected to a central ping timer that alternatingly pings either an alert line or an escalation line after waiting for a pseudo-random amount of clock cycles.
-Further, this ping timer also randomly selects a particular alert line to be pinged (escalation senders are always pinged in-order due to the [ping monitoring mechanism]({{< relref "#monitoring-of-pings-at-the-escalation-receiver-side" >}}) on the escalation side).
+Further, this ping timer also randomly selects a particular alert line to be pinged (escalation senders are always pinged in-order due to the [ping monitoring mechanism](#monitoring-of-pings-at-the-escalation-receiver-side) on the escalation side).
 That should make it more difficult to predict the next ping occurrence based on past observations.
 
 The ping timer is implemented using an [LFSR-based PRNG of Galois type](../../../ip/prim/doc/prim_lfsr.md).
@@ -397,7 +397,7 @@
 Only alerts that have been *enabled and locked* will be pinged in order to avoid spurious alerts.
 Escalation channels are always enabled, and hence will always be pinged once this mechanism has been turned on.
 
-In addition to the ping timer mechanism described above, the escalation receivers contain monitoring  counters that monitor the liveness of the alert handler (described in more detail in [this section]({{< relref "#monitoring-of-pings-at-the-escalation-receiver-side" >}})).
+In addition to the ping timer mechanism described above, the escalation receivers contain monitoring counters that monitor the liveness of the alert handler (described in more detail in [this section](#monitoring-of-pings-at-the-escalation-receiver-side)).
 This mechanism requires that the maximum wait time between escalation receiver pings is bounded.
 To that end, escalation senders are pinged in-order every second ping operation (i.e., the wait time is randomized, but the selection of the escalation line is not).
 
diff --git a/sw/README.md b/sw/README.md
index 3398c5d..7329bf1 100644
--- a/sw/README.md
+++ b/sw/README.md
@@ -3,7 +3,7 @@
 ---
 
 This is the landing spot for software documentation within the OpenTitan project.
-More description and information can be found within the [Reference Manual](../util/README.md) and [User Guide]({{< relref "doc/ug" >}}) areas.
+More description and information can be found within the [Reference Manual](../util/README.md) and [User Guide](https://docs.opentitan.org/doc/guides/getting_started/) areas.
 
 There are three major parts to the OpenTitan software stack:
 
@@ -55,7 +55,7 @@
 
 ## Vendored in Code
 
-See [Vendor Tool User Guide]({{% relref "doc/ug/vendor_hw.md" %}})
+See [Vendor Tool User Guide](../util/doc/vendor.md)
 
 * [CoreMark](https://github.com/eembc/coremark)
 * [Cryptoc](https://chromium.googlesource.com/chromiumos/third_party/cryptoc/)
diff --git a/util/README.md b/util/README.md
index e62e78c..756fe4a 100644
--- a/util/README.md
+++ b/util/README.md
@@ -11,4 +11,4 @@
    * [Crossbar Tool](./tlgen/README.md): Describes `tlgen.py` and its Hjson format source. Used to generate self-documentation and RTL files for the crossbars at the toplevel.
    * [Vendor-In Tool](./doc/vendor.md): Describes `util/vendor.py` and its Hjson control file. Used to pull a local copy of code maintained in other upstream repositories and apply local patch sets.
 * [FPGA Reference Manual](../doc/contributing/fpga/ref_manual_fpga.md)
-* [OpenTitan Continuous Integration]({{< relref "continuous_integration.md" >}})
+* [OpenTitan Continuous Integration](../doc/contributing/ci/README.md)
diff --git a/util/check_links.py b/util/check_links.py
deleted file mode 100755
index cec0094..0000000
--- a/util/check_links.py
+++ /dev/null
@@ -1,653 +0,0 @@
-#!/usr/bin/env python3
-# Copyright lowRISC contributors.
-# Licensed under the Apache License, Version 2.0, see LICENSE for details.
-# SPDX-License-Identifier: Apache-2.0
-
-from pathlib import Path
-import tomli
-import re
-import argparse
-
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--debug",
-                    help = "print debugging information (very verbose)",
-                    action="store_true")
-parser.add_argument("--debug-only-file",
-                    help = "only process a particular file (helpful to debug)",)
-parser.add_argument("--debug-only-line",
-                    help = "only process a particular file (helpful to debug)",)
-parser.add_argument("--apply-suggestions",
-                    help = "apply all the fixes suggested by the tools",
-                    action="store_true")
-parser.add_argument("--pedantic-reldir",
-                    help = "be especially pedantic about relative path using ./",
-                    action="store_true")
-parser.add_argument("root_doc",
-                    help = "path to the root of documentation")
-parser.add_argument("--warn-internal-weblinks",
-                    help = "warn about links to internal documentation using docs.opentitan.org URLs",
-                    action=argparse.BooleanOptionalAction,
-                    default = True)
-parser.add_argument("--warn-unlisted-files",
-                    help = "warn about md files that are not listed in SUMMARY.md",
-                    action=argparse.BooleanOptionalAction,
-                    default = True)
-parser.add_argument("--warn-hugo-links",
-                    help = "warn about links using the old Hugo format",
-                    action=argparse.BooleanOptionalAction,
-                    default = True)
-parser.add_argument("--move-list-file",
-                    help = "provide a list of moved files to help fix the links")
-parser.add_argument("--ignore-book",
-                    help = "do not search for mdbook and just check all markdown files in a directory",
-                    action="store_true")
-g_args = parser.parse_args()
-
-
-def debug(msg):
-    if g_args.debug:
-        print("[debug] " + msg)
-
-
-def parse_move_list_file(path):
-    if not path.exists() or not path.is_file():
-        print(f"error: move file list {path} does not exist or is not a file")
-        return None
-    with path.open() as f:
-        res = {}
-        line_nr = 0
-        for line in f:
-            line_nr += 1
-            if line.strip() == "":
-                continue
-            # ignore empty lines and split at whitespace
-            arr = line.strip().split()
-            if len(arr) != 2:
-                print("move file list f{path}: f{line_nr} has the wrong format and will be ignored")
-                continue
-            res[arr[0]] = arr[1]
-        return res
-
-
-g_move_list_file = None
-if g_args.move_list_file:
-    g_move_list_file = parse_move_list_file(Path(g_args.move_list_file))
-
-
-class Book:
-    """
-    Represents an mdbook
-    """
-
-    def __init__(self, path):
-        """
-        Load a book by pointing to the directory containing a book.toml
-        This is will build the list of files listed in SUMMARY.md
-        and also build the list of all md files under this directory
-        """
-        self.path = path.resolve()
-        self.mdbook_defaults = ["README.md", "index.md"]
-        self.site_url = "NO_URL"
-        global g_args
-        if g_args.ignore_book:
-            # create a fake book
-            self.srcdir = path
-        else:
-            self._load_book(path)
-        # build list of all files that appear in the subdirectory
-        # for each file we will eventually set to True if they are listed in summary
-        self.allmdfiles = {}
-
-        def addfile(f):
-            name = self.local_name(f)
-            self.allmdfiles[name] = False
-        self.apply_to_all_files(addfile)
-        # load summary
-        if not g_args.ignore_book:
-            self._load_summary()
-
-    def _load_book(self, path):
-        toml = self.path / "book.toml"
-        assert toml.exists(), "there is not book.toml"
-        toml_dict = tomli.loads(toml.read_bytes().decode("utf-8"))
-        self.srcdir = (path / toml_dict["book"]["src"]).resolve()
-        if "output" in toml_dict and "html" in toml_dict["output"]:
-            self.site_url = toml_dict["output"]["html"].get("site-url", self.site_url)
-
-    def _load_summary(self):
-        # load summary and mark all links as true, ignore all errors for now
-        for (line, _, link) in Book.list_links_in_file(self.srcdir / "SUMMARY.md"):
-            name = self.local_name(self.srcdir / link)
-            # we only check .md files, SUMMARY.md can include more types of files
-            # with preprocessors
-            if name.name.endswith(".md"):
-                # if ever SUMMARY.md contains an invalid link, ignore it (it will reported by the general check)
-                if name in self.allmdfiles:
-                    # assert name in self.allmdfiles, f"book at {self.path}: link {link} in summary is invalid (normalized to {str(name)})"
-                    self.allmdfiles[name] = True
-
-    def __repr__(self):
-        return f"Book({repr(self.path)})"
-
-    def __str__(self):
-        return f"book at {str(self.path)}, src at {str(self.srcdir)}, url at {str(self.site_url)}"
-
-    def local_name(self, abspath):
-        """
-        Given an absolute Path to a file under the source directory of the book,
-        return a relative Path to the source directory, return None in case of error
-        """
-        try:
-            return abspath.resolve().relative_to(self.srcdir)
-        except ValueError:
-            return None
-
-    def apply_to_all_files(self, fn):
-        """
-        List all files under the directory of the book and call a function on each one
-        """
-        def apply_rec(path):
-            for child in path.iterdir():
-                if child.name.endswith(".md"):
-                    fn(child)
-                if child.is_dir():
-                    apply_rec(child.resolve())
-        apply_rec(self.path)
-
-    def list_links_in_file(abspath):
-        """
-        Return a list of all the links in a file (given by its absolute Path)
-        Each entry is a tuple of the form (line_nr, (start, end), link)
-        indicating the line number, the character range within the line and the link itself
-        """
-        cm_re = r"\[[^\]]*\]\(\s*(?P<url>[^)]*)\s*\)"
-        # we also look for non-markdown-but-Hugo-relref
-        hugo_relref_re = r"(?P<hugo>\{\{<\s*relref\s*\"[^\"]*\"\s*>\}\})"
-        prog = re.compile(f"{cm_re}|{hugo_relref_re}")
-        links = []
-        with abspath.open() as f:
-            line_nr = 0
-            for line in f:
-                line_nr += 1
-                for x in prog.finditer(line):
-                    grp = "url" if x.group("url") is not None else "hugo"
-                    posrange = (x.start(grp), x.end(grp))
-                    links.append((line_nr, posrange, x[grp]))
-        return links
-
-    def is_file_in_summary(self, path):
-        """
-        Check whether a file (given by its absolute Path) is in the SUMMARY.md file of the book
-        """
-        # if the file does not exists, it cannot be in summary
-        if not path.exists() or not path.is_file():
-            return False
-        local_name = self.local_name(path)
-        # if it points to an md file, it must be in the summary
-        assert path.name.endswith(".md"), f"this function only works for md files, called on {path}"
-        assert local_name in self.allmdfiles, f"link to \"{path}\" normalised to {local_name} not in list"
-        return self.allmdfiles[local_name]
-
-    def path_exists_or_was_moved(self, path, reverse_move=False):
-        """
-        This function takes a Path and returns a Path to the file, which could be potentially
-        different if the file was moved. This function handles file moves as recorded in the
-        move file list. It returns None if it cannot resolve it to an *existing* file or directory.
-        """
-        global g_move_list_file
-        move_list_file = g_move_list_file
-        if move_list_file is None:
-            move_list_file = {}
-        # reverse map if asked
-        if reverse_move:
-            res = {}
-            for (from_, to) in move_list_file.items():
-                res[to] = from_
-            move_list_file = res
-        # first check if it was moved because it could be that a file was moved
-        # and later replaced by another file with different content: the move takes priority
-        local_name = self.local_name(path)
-        debug(f"CHECK MOVE {path}, normalized to {local_name}, exists {path.exists()}, reverse {reverse_move}, moved {str(local_name) in move_list_file}")
-        if local_name is None:
-            return None
-        if str(local_name) in move_list_file:
-            moved_to = move_list_file[str(local_name)]
-            assert moved_to[0] != '/', "the target of a move cannot be an absolute path"
-            newpath = self.srcdir / moved_to
-            # we only check new path existence for non-reverse moved
-            if reverse_move or newpath.exists():
-                debug(f"move detected {path}, normalize to {local_name}, now {newpath}")
-                return newpath
-        if path.exists():
-            return path
-        return None
-
-    def _resolve_existing_path(self, path, report):
-        """
-        Given a Path to an existing file/directory, figure out what is the intended markdown target.
-        This handles for example a link to a directory that contains an README.md.
-        This returns a Path to the intended markdown file, or None if none could be found.
-        When retuning None, it will report the error using the report function.
-        """
-        # if it points to a directory, we try to resolve it to README.md or index.md
-        if path.is_dir():
-            for subfile in self.mdbook_defaults:
-                if (path / subfile).exists():
-                    return path / subfile
-            report(
-                f"link to directory \"{self.local_name(path)}\" but cannot find a valid subfile, tried {self.mdbook_defaults}")
-            return None
-        else:
-            return path
-
-    def resolve_relative_link(self, local_filename, link, relative_to, report):
-        """
-        Given a link (a string) in a file (represented by its relative Path to the source directory),
-        try to find the Path to the intended markdown target and return it.
-        If none can be found, returns None. Any error/warning will reporting via the report function.
-        This function will handle things like pointing to directories, using index.html instead of README.md
-        and other oddities.
-        """
-        # if the link starts with a '/', this will mess with pathlib because bla / link will just produce bla
-        # but we want the link to relative to what was specified, so remove the initial '/' if needed
-        while len(link) > 0 and link[0] == '/':
-            link = link[1:]
-        debug(f"resolving {link} relative to {local_filename}")
-        # there are several cases:
-        # - points to a file
-        # - points to a directory
-        path = (relative_to / link).resolve()
-        if (newpath := self.path_exists_or_was_moved(path)) is not None:
-            return self._resolve_existing_path(newpath, report)
-        debug("direct resolution to a moved file has failed")
-        # if the link ends with index.html, and the corresponding index.md or README.md file exists
-        # then resolve it to that file
-        index_html = "index.html"
-        if link.endswith(index_html):
-            for subfile in self.mdbook_defaults:
-                new_link = link[:-len(index_html)] + subfile
-                path = (relative_to / new_link).resolve()
-                if (newpath := self.path_exists_or_was_moved(path)) is not None:
-                    # FIXME should we check that this is in the summary?
-                    report(
-                        f"link to HTML file \"{link}\", resolved to \"{self.local_name(newpath)}\", which is a local file")
-                    return path
-        # no solution found
-        report(
-            f"link to \"{link}\", resolved to \"{self.local_name(path)}\", which does not exists")
-        return None
-
-    def resolve_local_link(self, local_filename, link, report):
-        return self.resolve_relative_link(local_filename, link, (self.srcdir / local_filename).parent, report)
-
-    def resolve_global_link(self, local_filename, link, report):
-        return self.resolve_relative_link(local_filename, link, self.srcdir, report)
-
-
-def find_all_books(path):
-    """
-    Return a list of Book constructed from the book.toml files that can be
-    found under path (recursively).
-    """
-    books = []
-    for child in path.iterdir():
-        if child.is_dir():
-            books.extend(find_all_books(child))
-        if child.is_file() and child.name == "book.toml":
-            books.append(Book(path))
-    return books
-
-
-def relative_link_to(relpath, relative_to_dir):
-    """
-    Given an absolute Path to a file (or directory) and an absolute Path to a directory,
-    return a relative path to reach the file from the directory.
-    For example relative_link_to(/path/to/some/very/deep/file/hello.md, /path/to/a/shallow/dir)
-    returns ../../../some/very/deep/file/hello.md
-    """
-    parent = relative_to_dir.resolve()
-    pref = "./"
-    while True:
-        try:
-            out = relpath.relative_to(parent)
-            return pref + str(out)
-        except ValueError:
-            parent = parent.parent
-            if pref == "./":
-                pref = "../"
-            else:
-                pref += "../"
-    print(
-        f"ERROR couldn't create link for {relpath} relative to {relative_to_dir.resolve()}, this should never happen")
-    return None
-
-# decide when to suggest a link rewriting
-# the main to detect here is that some links are of theform x
-# but the canonical link would be just "./x"
-# at this stage we don't want to bother the use with such nonsense
-
-
-def should_rewrite_link(newlink, oldlink):
-    if oldlink == newlink:
-        return False
-    if newlink == "./" + oldlink and not g_args.pedantic_reldir:
-        return False
-    return True
-
-
-def try_several_resolutions(book, local_filename, link, process_list, report):
-    """
-    This function tries to resolve a link is several different ways by
-    calling the various functions in the processing list. To make the
-    user output readable, it will not forward any of the report/suggestions
-    unless the list has size 1 or the function suceeds. If no function suceeds,
-    it will print error message saying that
-    no rule applied.
-    """
-    if (process_list) == 1:
-        return process_list[0](book, local_filename, link, report)
-    # try them one by one
-    for fn in process_list:
-        action_list = []
-
-        def store_report(*args):
-            action_list.append(("report", *args))
-        out = fn(book, local_filename, link, store_report)
-        if out is None:
-            continue
-        # do actions
-        for act in action_list:
-            if act[0] == "report":
-                report(*act[1:])
-            else:
-                assert False
-        return out
-    report(f"could not resolve \"{link}\"")
-    return None
-
-
-def process_path(book, local_filename, path, report):
-    # check that path is in the summary (for md files)
-    global g_args
-    if path.name.endswith(".md") and not book.is_file_in_summary(path) and not g_args.ignore_book:
-        report(f"link to \"{book.local_name(path)}\", which is not listed in SUMMARY.md")
-        # at the moment, report the error but consider this a good link nonetheless
-    return path  # this link is ok
-
-
-def _process_relative_link(book, local_filename, link, report):
-    # at this point, we know this is a local link and we want to check it
-    # but also suggest some changes if we can figure out how to fix/lint it
-    # the following function tries to resolve where this link should point to
-    # then we compute how it should be written in the file
-    # and if the result is different from the original, we suggest a rewrite
-    debug(f"processing local link {link}")
-
-    path = book.resolve_local_link(local_filename, link, report)
-    debug(f"local link resolution gave {path}")
-    if path is None:
-        # cannot resolve link, assume that resolve_local_link reported the error
-        return
-    return process_path(book, local_filename, path, report)
-
-
-def __process_hugo_relref(book, local_filename, link, report, local_search):
-    # relref can either be relative path to this file or relative path
-    # to the root of the documentation
-
-    debug(f"internal processing hugo link {link}")
-
-    if local_search:
-        if link.startswith('/'):
-            return None
-        return _process_relative_link(book, local_filename, link, report)
-    else:
-        debug(f"try to resolve global link {link}")
-        path = book.resolve_global_link(local_filename, link, report)
-        if path is None:
-            # cannot resolve link, assume that resolve_global_link reported the error
-            return None
-        return process_path(book, local_filename, path, report)
-
-
-def _process_hugo_relref(book, local_filename, link, report):
-    # Hugo references can do things like pointing to a file bla.md just by writing bla
-    # this is makes it quite tricky since when we start mixing this up with move file
-    # rewriting or guess of directories, there is no easy way to detect when we should
-    # add .md to the link; similarly a link can point to a directory x/ in which case
-    # it should be understood as pointing to x/_index.md or x/index.md
-    # the solution is just to try with and without it
-    debug(f"processing hugo relref {link}")
-
-    def add_md(book, local_filename, link, report, local_search):
-        if link.endswith(".md"):
-            return None
-        return __process_hugo_relref(book, local_filename, link + ".md", report, local_search)
-
-    def add_subfile(book, local_filename, link, report, subfile, local_search):
-        if link.endswith(".md"):
-            return None
-        suff = subfile if link.endswith('/') else '/' + subfile
-        return __process_hugo_relref(book, local_filename, link + suff, report, local_search)
-    # not that we try add .md only if the first (normal) resolution fails
-    # following Hugo's documentation, we first try to resolve it relatively
-    # to this file and then relatively to the repo (unless it starts with /
-    # in which case it cannot be a relative link)
-    processes = [
-        # LOCAL search
-        lambda *arg: __process_hugo_relref(*arg, True),  # normal processing
-        lambda *arg: add_md(*arg, True),  # try to add .md add the end
-        lambda *arg: add_subfile(*arg, "index.md", True),  # try to add /index.md
-        lambda *arg: add_subfile(*arg, "_index.md", True),  # try to add /_index.md
-        # GLOBAL search
-        lambda *arg: __process_hugo_relref(*arg, False),  # normal processing
-        lambda *arg: add_md(*arg, False),  # try to add .md add the end
-        lambda *arg: add_subfile(*arg, "index.md", False),  # try to add /index.md
-        lambda *arg: add_subfile(*arg, "_index.md", False),  # try to add /_index.md
-    ]
-    return try_several_resolutions(book, local_filename, link, processes, report)
-
-
-def split_anchor_and(book, local_filename, link, process, report):
-    arr = link.split("#")
-    if len(arr) > 2:
-        report(f"link \"{link}\" is weird, giving up on that one")
-        return
-    base = arr[0]
-    anchor = "#" + arr[1] if len(arr) == 2 else ''
-    if base == "":
-        return
-    debug(f"split link to base \"{base}\" and fragment \"{anchor}\"")
-    path = process(book, local_filename, base, report)
-    if path is None:
-        return None
-    else:
-        return (path, anchor)
-
-
-def process_relative_link(book, local_filename, link, report):
-    return split_anchor_and(book, local_filename, link, _process_relative_link, report)
-
-
-def process_hugo_relref(book, local_filename, link, report):
-    return split_anchor_and(book, local_filename, link, _process_hugo_relref, report)
-
-
-def show_unlisted_files(book):
-    print(f"#### {book.path} ####")
-    # check files not in summary
-    header_printed = False
-    for (name, in_summary) in book.allmdfiles.items():
-        # obviously ignore SUMMARY.md
-        if not in_summary and name.name != "SUMMARY.md":
-            if not header_printed:
-                print("Files not listed in SUMMARY.md:")
-                header_printed = True
-            print(f"- {name}")
-
-
-# we will store the list of all suggestions and rewrite everything at the end
-# for extra safety, when recording a suggestion, we not only memorize what
-# we should change to, but also what we are changing from
-# when doing the changes, we will check that this matches
-g_suggestions = {}
-
-
-def add_suggestion(path, line_nr, pos_range, oldvalue, newvalue):
-    path = path.resolve()
-    ff = g_suggestions.get(path, {})
-    ll = ff.get(line_nr, {})
-    ll[pos_range] = (oldvalue, newvalue)
-    ff[line_nr] = ll
-    g_suggestions[path] = ff
-
-# apply all suggestions in a file
-
-
-def apply_suggestions(filename, suggestions):
-    out_content = ""
-    with filename.open() as f:
-        print(filename)
-        line_nr = 0
-        for line in f:
-            line_nr += 1
-            if line_nr in suggestions:
-                pos_delta = 0
-                for ((start, end), (oldvalue, newvalue)) in sorted(suggestions[line_nr].items(), key=lambda x: x[0][0]):
-                    # adjust
-                    start += pos_delta
-                    end += pos_delta
-                    # we make sure that the bit that we are replacing matches what it is supposed to replace
-                    # this is just for extra safety, if there is no bug this assert will always hold
-                    assert line[start:
-                                end] == oldvalue, f"mismatch between expected old value ({oldvalue}) and actual old value ({line[start:end]})"
-                    line = line[:start] + newvalue + line[end:]
-                    # change delta
-                    pos_delta += len(newvalue) - len(oldvalue)
-            out_content += line
-    filename.write_text(out_content)
-
-
-def check_all_links(book):
-    global g_args
-    hugo_relref_prog = re.compile(r"^\{\{<\s*relref\s*\"(?P<url>[^\"]*)\"\s*>\}\}$")
-    print(f"#### {book.path} ####")
-    # for all listed file, check links
-    for (name, in_summary) in book.allmdfiles.items():
-        # debugging check
-        if g_args.debug_only_file is not None:
-            local_only_name = book.local_name(Path(g_args.debug_only_file))
-            if local_only_name != name:
-                continue  # skip file
-        # we ignore files that are not in the summary (but we do check SUMMARY.md for bad links)
-        # warning: the check for SUMMARY.md needs to be sure to only consider SUMMARY.md at the root
-        # of the book, there could be others SUMMARY.md in the tree that are unrelated, we don't want to
-        # include those if they are not listed
-        if not g_args.ignore_book and not in_summary and name == Path("SUMMARY.md"):
-            continue  # skip file
-        # get links
-        header_printed = False
-        for (line_nr, posrange, link) in Book.list_links_in_file(book.srcdir / name):
-            # debugging check
-            oldname = None
-            if g_args.debug_only_line is not None:
-                only_line = int(g_args.debug_only_line)
-                if line_nr != only_line:
-                    continue  # skip line
-
-            def suggest(newvalue):
-                add_suggestion(book.srcdir / name, line_nr, posrange, link, newvalue)
-
-            def report(msg):
-                # it's global because the whole code is not in a function and nonlocal does not work in this case
-                nonlocal header_printed
-                if not header_printed:
-                    if oldname is not None and oldname != name:
-                        print(f"In {name} (previously {oldname}):")
-                    else:
-                        print(f"In {name}:")
-                    header_printed = True
-                print(f"- line {line_nr}: {msg}")
-
-            def maybe_suggest_rewrite(res):
-                if res is None:
-                    return
-                (path, fragment) = res
-                # this link is good but maybe it is not written in the canonical way, warn about that
-                relpath = relative_link_to(path, (book.srcdir / name).parent)
-                newlink = str(relpath) + fragment
-                if should_rewrite_link(newlink, link):
-                    report(f"link \"{link}\" resolved to \"{newlink}\"")
-                    report(f"SUGGEST rewrite link to \"{newlink}\"")
-                    suggest(newlink)
-            # link to internal doc that use the opentitan url
-            if link.startswith("https://docs.opentitan.org/"):
-                if g_args.warn_internal_weblinks:
-                    report(f"link {link} points to internal document, use URL_ROOT or local link")
-                continue
-            # if this file was moved, we need to make sure that the relative links that we compute
-            # are relative to the old position of the file
-            oldname = book.local_name(book.path_exists_or_was_moved(book.srcdir / name, True))
-            if oldname != name:
-                debug(f"file {name} was previously {oldname}")
-            # old Hugo links
-            if link.startswith("{{<") or link.startswith("{{%"):
-                # this tool only understands a tiny subset of those
-                x = hugo_relref_prog.match(link)
-                if x is not None:
-                    res = process_hugo_relref(book, oldname, x["url"], report)
-                    maybe_suggest_rewrite(res)
-                elif g_args.warn_hugo_links:
-                    report(f"link \"{link}\" uses old Hugo syntax")
-                continue
-            # URL outside of our doc or mailto
-            if link.startswith("https://") or link.startswith("mailto:"):
-                # ignore
-                continue
-            # unsecure url
-            if link.startswith("http://"):
-                report(f"url {link} is not secure")
-                continue
-            # cross-book links, not checking those yet
-            if link.startswith("{{URL_ROOT}}"):
-                report(f"link \"{link}\" NOT CHECKED")
-                continue
-
-            # try to resolve the path and suggest a rewriting if necessary
-            res = process_relative_link(book, oldname, link, report)
-            maybe_suggest_rewrite(res)
-
-
-g_root_doc = Path(g_args.root_doc).resolve()
-# if root doc is a link to a book.toml file, just open this one
-if g_root_doc.is_file():
-    assert g_root_doc.name == "book.toml", "you must give a path a book.toml file or a directory"
-    g_book_list = [Book(g_root_doc.parent)]
-elif g_args.ignore_book:
-    # create a fake book that contains everything
-    g_book_list = [Book(g_root_doc)]
-else:
-    g_book_list = find_all_books(g_root_doc)
-print("books:")
-for p in g_book_list:
-    print("- {}".format(p))
-
-# find all md files in the tree that are not listed in any SUMMARY.md
-if not g_args.ignore_book and g_args.warn_unlisted_files:
-    for book in g_book_list:
-        show_unlisted_files(book)
-
-for book in g_book_list:
-    check_all_links(book)
-
-print("### Summary of suggestions ###")
-for (filename, per_file) in g_suggestions.items():
-    print(f"In {filename}:")
-    for (line_nr, per_line) in per_file.items():
-        for (posrange, (oldvalue, newvalue)) in per_line.items():
-            print(f"- line {line_nr},{posrange[0]}-{posrange[1]}: \"{oldvalue}\" -> \"{newvalue}\"")
-
-if g_args.apply_suggestions:
-    for (filename, per_file) in g_suggestions.items():
-        apply_suggestions(filename, per_file)
diff --git a/util/topgen/README.md b/util/topgen/README.md
index 26cafe6..395d43e 100644
--- a/util/topgen/README.md
+++ b/util/topgen/README.md
@@ -65,7 +65,7 @@
 Once all validation is passed, the final Hjson is created.
 This Hjson is then used to generate the final top RTL.
 
-As part of this process, topgen invokes other tools, please see the documentation for [`reggen`](register_tool.md) and [`tlgen`](crossbar_tool.md) for more tool specific details.
+As part of this process, topgen invokes other tools; please see the documentation for [`reggen`](../reggen/README.md) and [`tlgen`](../tlgen/README.md) for more tool-specific details.
 
 ## Usage