Before following this guide, make sure you've followed the [dependency installation instructions]({{< relref "getting_started" >}}).
OpenTitan software is built using Meson. However, Meson is not an exact fit for a lot of things OpenTitan does (such as distinguishing between FPGA, ASIC, and simulation builds), so the setup differs somewhat from a typical Meson project.
For example, the following commands build the `test_rom` and `hello_world` binaries for FPGA:
```console
# Configure the Meson environment.
cd $REPO_TOP
./meson_init.sh

# Build the two targets we care about, specifically.
ninja -C build-out sw/device/lib/testing/test_rom/test_rom_export_fpga_cw310
ninja -C build-out sw/device/examples/hello_world/hello_world_export_fpga_cw310

# Build *everything*, including targets for other devices.
ninja -C build-out all
```
Note that specific targets are followed by the device they are built for. OpenTitan needs to link the same executable for multiple devices, so each executable target is duplicated, once for each device we support.
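For instance, the `hello_world` executable from above is linked once per device; the `sim_dv` and `sim_verilator` variants below are illustrative, following the same naming pattern:

```console
# One export target per supported device (the last two names are illustrative).
ninja -C build-out sw/device/examples/hello_world/hello_world_export_fpga_cw310
ninja -C build-out sw/device/examples/hello_world/hello_world_export_sim_dv
ninja -C build-out sw/device/examples/hello_world/hello_world_export_sim_verilator
```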
In general, `clean` rules are unnecessary, and Meson will set up `ninja` such that it reruns `meson.build` files which have changed.
Build intermediates will show up in `$REPO_TOP/build-out`, including unlinked object files and libraries, while completed executables are exported to `$REPO_TOP/build-bin`. As a rule, you should only ever need to refer to artifacts inside of `build-bin`; the exact structure of `build-out` is subject to change. Complete details of these semantics are documented in `util/build_consts.sh`.
The locations of `build-{out,bin}` can be controlled by setting the `$BUILD_ROOT` environment variable, which defaults to `$REPO_TOP`.
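As a sketch, assuming `$BUILD_ROOT` is simply prefixed onto the two directory names (the path below is illustrative):

```console
# Keep build artifacts outside the source tree.
export BUILD_ROOT=/tmp/opentitan-build
./meson_init.sh
ninja -C $BUILD_ROOT/build-out all
```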
`./meson_init.sh` itself is idempotent, but this behavior can be changed with additional flags; see `./meson_init.sh` for more information. For this reason, most examples involving Meson will include a call to `./meson_init.sh`, but you will rarely need to run it more than once per checkout.
Building an executable `foo` destined to run on the OpenTitan device `$DEVICE` will output the following files under `build-bin/sw/device`:

- `foo_$DEVICE.elf`: the linked program, in ELF format.
- `foo_$DEVICE.bin`: the linked program, as a plain binary with ELF debug information removed.
- `foo_$DEVICE.dis`: the disassembled program with inline source code.
- `foo_$DEVICE.vmem`: a Verilog memory file which can be read by `$readmemh()` in Verilog code.

In general, this executable is built by the `foo_export_$DEVICE` target. For example, this builds the `pwrmgr_smoketest` test binary for the device `sim_dv`:
```console
ninja -C build-out sw/device/tests/pwrmgr_smoketest_export_sim_dv
```
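If the build succeeds, the four files described above should appear next to each other under `build-bin` (the exact listing below is illustrative):

```console
ls build-bin/sw/device/tests/pwrmgr_smoketest_sim_dv.*
# pwrmgr_smoketest_sim_dv.bin  pwrmgr_smoketest_sim_dv.dis
# pwrmgr_smoketest_sim_dv.elf  pwrmgr_smoketest_sim_dv.vmem
```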
Building an executable destined to run on a host machine (i.e., under `sw/host`) will output a host executable under `build-bin/sw/host`, which can be run directly.
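For instance, a host-side tool could be invoked straight out of the output tree; the tool name and path below are hypothetical:

```console
# Run a host tool directly on the build machine (path hypothetical).
./build-bin/sw/host/example_tool/example_tool
```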
If you encounter an error running `./meson_init.sh`, you can re-run it with the `-f` flag, which will erase any existing build directories to yield a clean build. This sledgehammer is only intended to be used as a last resort, when the existing configuration is seriously broken:
```console
./meson_init.sh -f
```
If any `meson.build` files are changed, the configuration can be regenerated by passing the `-r` flag to `./meson_init.sh`:
```console
./meson_init.sh -r
```
The Mask ROM and many other software targets are built with Bazel.
TLDR: You can run:
```console
$REPO_TOP/bazelisk.sh test //... --test_tag_filters=-cw310 --disk_cache=~/bazel_cache
```
Bazelisk and the script that invokes it will work, but installing Bazel directly gets you some quality-of-life features like tab completion. This installation step is optional and can be skipped by setting the alias:
```console
alias bazel="$REPO_TOP/bazelisk.sh"
```
If you haven't yet installed Bazel, and would like to, you can add it to apt and install it on Ubuntu systems with the following commands as described in the Bazel documentation:
```console
sudo apt install apt-transport-https curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg
sudo mv bazel.gpg /etc/apt/trusted.gpg.d/
echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt update && sudo apt install bazel-4.2.0
```
or by following the instructions for your system.
You can then build and run tests for all of the Bazel rules for OpenTitan within the workspace with the command:
```console
bazel test //...
```
This invocation instructs Bazel to build everything it needs to run all the tests in the workspace. It is likely to include broken or slow tests and build targets, so you will usually want to run more specific builds and tests.
Specific targets can be built with:
```console
bazel build //<path to build file>:<name of target>
```
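For example, to build a single test binary by its label (the label below is illustrative; use `bazel query` to list the targets that exist in your checkout):

```console
bazel build //sw/device/tests:uart_smoketest
```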
The rules for Bazel are described in a language called Starlark, which looks a lot like Python.
The OpenTitan directory is defined as a Bazel workspace by the `//WORKSPACE` file. `BUILD` files provide the information Bazel needs to build the targets in a directory. `BUILD` files also manage any subdirectories that don't have their own `BUILD` files.
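As a sketch of how labels map onto `BUILD` files: a target named `memory` declared in `sw/device/lib/base/BUILD` would be addressed as `//sw/device/lib/base:memory` (names illustrative). You can list every target a package declares with:

```console
# List all targets declared by the BUILD file in one package (path illustrative).
bazel query //sw/device/lib/base:all
```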
OpenTitan uses `.bzl` files to specify rules for building artifacts that require special attention, such as target-specific test rules and project-specific binaries.
The `WORKSPACE` file controls external dependencies so that builds can be made reproducible and hermetic. Bazel loads specific commits of external git repositories, and uses them to build OpenTitan targets (as it does with bazel_embedded) or to satisfy dependencies (as it does with abseil). To produce increasingly stable releases, the `WORKSPACE` file will pin a number of these references to specific commits rather than the default branch head. Dependencies will be added to the workspace so that builds and tests are less sensitive to external updates and the getting-started flow is simpler.
Throughout the OpenTitan repo, `BUILD` files describe targets and dependencies in the same directory and in subdirectories that lack their own `BUILD` files. `BUILD` files are mostly hand-written; to honor the request that hand-written files not be included in autogen directories, there are large `BUILD` files that describe how to build and depend on the auto-generated files in `autogen` subdirectories.
The linter for Bazel files is called `buildifier`, and it can be built and run with Bazel by running:
```console
bazel run //:buildifier_fix
```
This will run the linter on all of the `WORKSPACE`, `BUILD`, and `.bzl` files in the workspace. It's fairly strict, so the best way to keep git logs and blame reports clean is to run it before committing, and to ask the same during code review.
Bazel will execute tests when invoked with `bazel test <label-expression>`. The test rules have various tags applied to them to allow for filtering of tests during dependency analysis. Our tests are typically tagged with `verilator`, `cw310`, or `manual`. The `--test_tag_filters` switch can be used to filter out tests that you either cannot or do not want to execute.
For example, if you do not have a CW310 FPGA board, you cannot execute the tests meant to execute on that board. You can instruct Bazel to not execute those tests:
```console
bazel test --test_tag_filters=-cw310 //...
```
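Multiple tags can be combined with commas. For example, to skip both the FPGA- and Verilator-based tests (tag names taken from the list above):

```console
bazel test --test_tag_filters=-cw310,-verilator //...
```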
Rather than manually adding that command line switch to every invocation of bazel, you can add it to your `$HOME/.bazelrc` file:

```console
test --test_tag_filters=-cw310
```
Verilator is slow to build, and Verilator tests are slow to run. Many directories have Verilator targets that will cause Verilator to be built when you build all targets with the wildcard `...`. Tagging allows us to exclude Verilator tests, but it doesn't exclude building Verilator for targets that depend on it. To exclude all targets that depend on Verilator, you can generate a target pattern with a Bazel query, invert it with sed, and pass it to `bazel test` with xargs:
```console
bazel query "rdeps(//..., //hw:verilator)" \
  | sed -e 's/^/-/' \
  | xargs bazel test -- //...
```
The Bazel query lists all the targets in the workspace that depend on Verilator. These are inverted for the target pattern by prepending a dash, and then the test is invoked by xargs on a pattern which starts with the whole workspace and then excludes each of the targets that depend on Verilator.
By default, Bazel caches in memory and will evict Verilator binaries often. If you're regularly running tests that depend on Verilator and have a few GB of disk space available, use the `--disk_cache=<filename>` switch to specify a disk cache:
```console
bazel build //... --disk_cache=~/bazel_cache
```
Alternatively, add the following to `$HOME/.bazelrc`:

```console
build --disk_cache=~/bazel_cache
```
A disassembly of all executable sections is produced by the build system by default. It can be found by looking for files with the `.dis` extension next to the corresponding ELF file.
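For example, all disassemblies produced so far can be located with standard shell tools:

```console
find $REPO_TOP/build-bin -name '*.dis'
```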
To get a different type of disassembly, e.g. one which includes data sections in addition to executable sections, objdump can be called manually. For example, the following command shows how to disassemble all sections of the UART DIF smoke test, interleaved with the actual source code:
```console
riscv32-unknown-elf-objdump --disassemble-all --headers --line-numbers --source \
  build-bin/sw/device/tests/uart_smoketest_sim_verilator.elf
```
Refer to the output of `riscv32-unknown-elf-objdump --help` for a full list of options.