---
title: Getting Started Building Software
---
## Prerequisites
_Make sure you followed the install instructions to [prepare the system]({{< relref "install_instructions#system-preparation" >}}) and install the [compiler toolchain]({{< relref "install_instructions#compiler-toolchain" >}})._
## Building software with Meson
OpenTitan software is built using [Meson](https://mesonbuild.com).
However, Meson is not an exact fit for a lot of things OpenTitan does (such as distinguishing between FPGA, ASIC, and simulations), so the setup is a little bit different.
For example, the following commands build the `test_rom` and `hello_world` binaries for FPGA:
```console
# Configure the Meson environment.
cd $REPO_TOP
./meson_init.sh
# Build the two targets we care about, specifically.
ninja -C build-out sw/device/lib/testing/test_rom/test_rom_export_fpga_cw310
ninja -C build-out sw/device/examples/hello_world/hello_world_export_fpga_cw310
# Build *everything*, including targets for other devices.
ninja -C build-out all
```
Note that specific target names are suffixed with the device they are built for.
OpenTitan needs to link the same device executable for multiple devices, so each executable target is duplicated, once for each device we support.
In general, `clean` rules are unnecessary: Meson sets up `ninja` so that it automatically re-runs Meson whenever any `meson.build` files have changed.
Build intermediates will show up in `$REPO_TOP/build-out`, including unlinked object files and libraries, while completed executables are exported to `$REPO_TOP/build-bin`.
As a rule, you should only ever need to refer to artifacts inside of `build-bin`; the exact structure of `build-out` is subject to change.
Complete details of these semantics are documented in `util/build_consts.sh`.
The locations of `build-{out,bin}` can be controlled by setting the `$BUILD_ROOT` environment variable, which defaults to `$REPO_TOP`.
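For example, to keep all build artifacts outside the source tree (a minimal sketch; `/tmp/opentitan-build` is just an illustrative location):
```console
# Place build-out and build-bin under /tmp/opentitan-build instead of $REPO_TOP.
export BUILD_ROOT=/tmp/opentitan-build
cd $REPO_TOP
./meson_init.sh
```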
`./meson_init.sh` itself is idempotent, but this behavior can be changed with additional flags; see `./meson_init.sh` for more information.
For this reason, most examples involving Meson will include a call to `./meson_init.sh`, but you will rarely need to run it more than once per checkout.
Building an executable `foo` destined to run on the OpenTitan device `$DEVICE` will output the following files under `build-bin/sw/device`:
* `foo_$DEVICE.elf`: the linked program, in ELF format.
* `foo_$DEVICE.bin`: the linked program, as a plain binary with ELF debug information removed.
* `foo_$DEVICE.dis`: the disassembled program with inline source code.
* `foo_$DEVICE.vmem`: a Verilog memory file which can be read by `$readmemh()` in Verilog code.
In general, this executable is built by the `foo_export_$DEVICE` target.
For example, this builds the `pwrmgr_smoketest` test binary for DEVICE `sim_dv`:
```console
ninja -C build-out sw/device/tests/pwrmgr_smoketest_export_sim_dv
```
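Assuming the build above succeeded, you can locate the resulting artifacts under `build-bin/sw/device` without relying on the exact subdirectory layout:
```console
# List the .elf/.bin/.dis/.vmem artifacts produced for the sim_dv build.
find $REPO_TOP/build-bin/sw/device -name 'pwrmgr_smoketest_sim_dv.*'
```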
Building an executable destined to run on a host machine (i.e., under `sw/host`) will output a host executable under `build-bin/sw/host`, which can be run directly.
### Troubleshooting the build system
If you encounter an error running `./meson_init.sh`, you can re-run it with the `-f` flag, which erases any existing build directories to yield a clean build.
This sledgehammer is only intended as a last resort, when the existing configuration is seriously broken:
```console
./meson_init.sh -f
```
If any `meson.build` files are changed the configuration can be regenerated by passing the `-r` flag to `./meson_init.sh`:
```console
./meson_init.sh -r
```
### Bringing your own toolchain
`./meson_init.sh` needs to know where the toolchain you are using is, and which tools from it should be used.
If you are using the lowrisc-provided toolchain (obtained with `get-toolchain.py`), and it is installed in the default location (`/tools/riscv`), then `./meson_init.sh` does not need additional configuration.
If you are using the lowrisc-provided toolchain, but have located it in a non-default location (using `get-toolchain.py -t /path/to/lowrisc/toolchain`), you can use the environment variable `TOOLCHAIN_PATH` to point to your toolchain location, like so:
```console
export TOOLCHAIN_PATH=/path/to/lowrisc/toolchain
./meson_init.sh
```
If you have moved a lowrisc-provided toolchain (obtained with `get-toolchain.py`), you will need to update paths within the meson toolchain configuration files within the toolchain installation.
These are called `meson-<triple>-<compiler>.txt`, and are present in toolchains since version 20200602-1.
You can still use `TOOLCHAIN_PATH` to point to the toolchain location if you have updated the paths within these files.
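As a sketch, assuming the toolchain was originally installed to the default `/tools/riscv` and has been moved to `/opt/riscv` (an illustrative destination), and that the configuration files sit at the top of the installation (adjust the glob if they live elsewhere), the paths could be updated like so:
```console
# Rewrite the old install prefix inside the Meson toolchain configuration files.
sed -i 's|/tools/riscv|/opt/riscv|g' /opt/riscv/meson-*.txt
export TOOLCHAIN_PATH=/opt/riscv
./meson_init.sh
```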
If you have built your own toolchain by following option 2 under [Installing Software Build Requirements]({{< relref "doc/ug/install_instructions/index#device-compiler-toolchain-rv32imc" >}}), then you need to point `./meson_init.sh` to your custom toolchain file using `-t FILE`:
```console
./meson_init.sh -t /path/to/toolchain/file
```
If you do not specify your own toolchain configuration file (using `./meson_init.sh -t`), and `meson_init.sh` cannot find the default configuration in your toolchain, the legacy `toolchain.txt` from the main OpenTitan repository will be used.
If `TOOLCHAIN_PATH` is set, this will be used to update any paths within the legacy configuration.
## Building Software with Bazel
The Mask ROM and many other software targets are built with [Bazel](https://bazel.build/).
_TLDR: After installing Bazel, you can run:_
```console
bazel test //... --test_tag_filters=-cw310 --disk_cache=~/bazel_cache
```
### Installing Bazel
If you haven't yet installed Bazel, you can add its apt repository and install it on Ubuntu systems with the following commands, [as described in the Bazel documentation](https://bazel.build/install/ubuntu):
```console
sudo apt install apt-transport-https curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg
sudo mv bazel.gpg /etc/apt/trusted.gpg.d/
echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" |
sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt update && sudo apt install bazel-4.2.0
```
or by following [instructions for your system](https://bazel.build/install).
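You can confirm the installation with:
```console
# Print the installed Bazel version.
bazel --version
```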
### Building software with Bazel
You can then build and run the tests for all the Bazel rules for OpenTitan within the workspace with the command:
```console
bazel test //...
```
This is likely to include unhealthy or slow tests and build targets, so you will probably want to run more specific builds and tests.
This invocation instructs Bazel to build everything it needs to run all the tests in the workspace.
Specific targets can be built with:
```console
bazel build //<path to build file>:<name of target>
```
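For example, a sketch of building the Mask ROM binary (the package path and target name here are assumptions; check the corresponding `BUILD` file for the actual labels):
```console
# Build a single target identified by its package path and target name.
bazel build //sw/device/silicon_creator/mask_rom:mask_rom
```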
### Structure of our Bazel workspace
The rules for Bazel are described in a language called Starlark, which looks a lot like Python.
The OpenTitan directory is defined as a Bazel workspace by the `//WORKSPACE` file.
`BUILD` files provide the information Bazel needs to build the targets in a directory.
`BUILD` files also manage any subdirectories that don't have their own `BUILD` files.
OpenTitan uses `.bzl` files to define rules for building artifacts that require special attention, such as target-specific test rules and project-specific binaries.
#### WORKSPACE file
The `WORKSPACE` file controls external dependencies such that builds can be made reproducible and hermetic.
Bazel loads specific commits of external git repositories.
It uses them to build OpenTitan targets (as it does with `bazel_embedded`) or to satisfy dependencies (as it does with `abseil`).
To produce increasingly stable releases, the `WORKSPACE` file pins a number of these references to specific commits rather than the default branch head.
Dependencies will be added to the workspace so that builds and tests are less sensitive to external updates and the getting-started flow stays simple.
#### BUILD files
Throughout the OpenTitan repo, `BUILD` files describe targets and dependencies in the same directory and subdirectories that lack their own `BUILD` files.
`BUILD` files are mostly hand-written; because hand-written files should not be placed in autogen directories, large `BUILD` files outside of those directories describe how to build and depend on the auto-generated files in autogen subdirectories.
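For example, you can list every target defined by the `BUILD` files under a given directory with a Bazel query (the `sw/device/lib` path is just an illustration; any package in the workspace works):
```console
# List all targets defined under sw/device/lib.
bazel query //sw/device/lib/...
```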
### Linting Starlark
The linter for Bazel files is called "buildifier", and it can be built and run with Bazel by running:
```console
bazel run //:buildifier_fix
```
This runs the linter on all of the `WORKSPACE`, `BUILD`, and `.bzl` files in the workspace.
It is fairly strict, so the best way to keep git logs and blame reports clean is to run it before committing, and to ask the same during code review.
### Test Execution
Bazel will execute tests when invoked with `bazel test <label-expression>`.
The test rules have various tags applied to them to allow for filtering of tests during dependency analysis.
Our tests are typically tagged with `verilator`, `cw310`, or `manual`.
The `--test_tag_filters` switch can be used to filter out tests that you either cannot or do not want to execute.
For example, if you do not have a CW310 FPGA board, you cannot execute the tests meant to execute on that board.
You can instruct Bazel to not execute those tests:
```console
bazel test --test_tag_filters=-cw310 //...
```
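Multiple tags can be excluded at once by separating them with commas; for example, to skip both the CW310 and Verilator tests:
```console
# Exclude tests tagged cw310 or verilator from the run.
bazel test --test_tag_filters=-cw310,-verilator //...
```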
Rather than manually adding that command line switch to every invocation of bazel, you can add it to your `$HOME/.bazelrc` file:
```
test --test_tag_filters=-cw310
```
### Excluding Verilator
Verilator is slow to build and Verilator tests are slow to run.
Many directories have targets that depend on Verilator, which will cause Verilator to be built when you build all targets with the wildcard `...`.
Tagging allows us to exclude Verilator tests, but doesn't exclude building Verilator for targets that depend on it.
To exclude all targets that depend on Verilator, you can generate a target pattern with a Bazel query, invert it with `sed`, and pass it to `bazel test` with `xargs`:
```console
bazel query "rdeps(//..., //hw:verilator)" \
| sed -e 's/^/-/' \
| xargs bazel test -- //...
```
The Bazel query lists all the targets in the workspace that depend on Verilator.
`sed` inverts each of these for the target pattern by prepending a dash, and `xargs` then invokes `bazel test` on a pattern that starts with the whole workspace and excludes each of the targets that depend on Verilator.
### Caching on disk
By default Bazel caches in memory and will evict Verilator binaries often.
If you regularly run tests that depend on Verilator and have a few GB of disk space available, use `--disk_cache=<path>` to specify a persistent on-disk cache:
```console
bazel build //... --disk_cache=~/bazel_cache
```
Alternatively add the following to `$HOME/.bazelrc`:
```
build --disk_cache=~/bazel_cache
```
## Debugging device software
### Attaching a debugger
GDB can be used to debug device software running on an FPGA or in a Verilator simulation.
Refer to the [Getting started on FPGAs]({{<relref getting_started_fpga >}}) and [Verilator]({{<relref getting_started_verilator >}}) guides for more details.
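As a minimal sketch, assuming a GDB server (such as OpenOCD) is already listening on its default port 3333 as described in those guides, and using the `hello_world` FPGA image built earlier (the exact ELF path may differ on your system):
```console
# Attach GDB to the running target and load symbols from the ELF file.
riscv32-unknown-elf-gdb \
  -ex "target extended-remote :3333" \
  build-bin/sw/device/examples/hello_world/hello_world_fpga_cw310.elf
```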
### Disassembling device code
A disassembly of all executable sections is produced by the build system by default.
It can be found by looking for files with the `.dis` extension next to the corresponding ELF file.
To get a different type of disassembly, e.g. one which includes data sections in addition to executable sections, `objdump` can be called manually.
For example, the following command disassembles all sections of the UART DIF smoke test, interleaved with the actual source code:
```console
riscv32-unknown-elf-objdump --disassemble-all --headers --line-numbers --source build-bin/sw/device/tests/uart_smoketest_sim_verilator.elf
```
Refer to the output of `riscv32-unknown-elf-objdump --help` for a full list of options.