---
title: "Reset Manager HWIP Technical Specification"
---

# Overview

This document describes the functionality of the reset controller and its interaction with the rest of the OpenTitan system.

## Features

* Stretch incoming POR.
* Cascaded system resets.
* Peripheral system reset requests.
* RISC-V non-debug-module reset support.
* Limited and selective software controlled module reset.
* Always-on reset information register.
* Always-on alert crash dump register.
* Always-on cpu crash dump register.
* Reset consistency checks.

# Theory of Operation

The OpenTitan reset topology and reset controller block diagram are shown in the diagram below.
The reset controller is closely related to the [power controller]({{< relref "hw/ip/pwrmgr/doc" >}}); please refer to that specification for details on how the reset controller inputs are controlled.

![Reset Topology](reset_topology.svg)

## Reset Topology

The topology can be summarized as follows:

* There are two reset domains:
  * Test Domain - Driven by `TRSTn`
  * Core Domain - Driven by internal [POR circuitry]() and an external pin reset connection.
* The test domain comprises the following components:
  * SOC TAP and related DFT circuits
  * RISC-V TAP (part of the `rv_dm` module)

The test domain does not have sub reset trees.
`TRSTn` is used directly by all components in the domain.

The core domain consists of all remaining logic and contains 4 sub reset trees, as shown in the table below.

<table>
  <tr>
    <td><strong>Reset Tree</strong></td>
    <td><strong>Description</strong></td>
  </tr>
  <tr>
    <td><code>rst_por_n</code></td>
    <td>POR reset tree.
<p>
This reset is driven by <code>ast</code>, stretched inside the reset manager, and resets all core domain logic in the design.
    </td>
  </tr>
  <tr>
    <td><code>rst_lc_n</code></td>
    <td>Life cycle reset tree.
<p>
This reset is derived from <code>rst_por_n</code> and resets all logic in the design except:
<ul>
  <li>Power manager</li>
  <li>Clock manager</li>
  <li>Reset manager</li>
</ul>
    </td>
  </tr>
  <tr>
    <td><code>rst_sys_n</code></td>
    <td>System reset tree.
<p>
This reset is derived from <code>rst_lc_n</code> and resets all logic in the design except:
<ul>
  <li>Power manager</li>
  <li>Clock manager</li>
  <li>Reset manager</li>
  <li>OTP controller</li>
  <li>Flash controller</li>
  <li>Life cycle controller</li>
  <li>Alert manager</li>
  <li>Always-on timers</li>
</ul>
    </td>
  </tr>
  <tr>
    <td><code>rst_{module}_n</code></td>
    <td>Module specific reset.
<p>
This reset is derived from <code>rst_sys_n</code> and resets only the targeted module and nothing else.
<p>
For OpenTitan, the only current targets are <code>spi_device</code> and <code>usb_device</code>.
    </td>
  </tr>
</table>

The reset trees are cascaded upon one another in this order:
`rst_por_n` -> `rst_lc_n` -> `rst_sys_n` -> `rst_module_n`
This means that when a particular reset asserts, all downstream resets also assert.

The primary difference between `rst_lc_n` and `rst_sys_n` is that the former controls the reset state of all non-volatile related logic in the system, while the latter can be used to issue system resets for debug.
This separation is required because the non-volatile controllers (OTP / Lifecycle) are used to qualify DFT and debug functions of the design.
If these modules were reset along with the rest of the system, the TAP and related debug functions would also be reset.
By keeping these reset trees separate, we allow the state of the test domain functions to persist while functionally resetting the rest of the core domain.

Additionally, modules such as the alert handler and [aon timers]() (which contain the watchdog function) are also kept on the `rst_lc_n` tree.
This ensures that an erroneously requested system reset through `rst_sys_n` cannot silence the alert mechanism or prevent the system from triggering a watchdog mechanism.

The reset topology also has the following additional properties:
* Selective processor HART resets, such as `hartreset` in `dmcontrol`, are not implemented, as they cause a security policy inconsistency with the remaining system.
  * Specifically, these selective resets can cause the cascade property shown above to be violated.
* Modules do not implement local resets that wipe configuration registers, especially if there are configuration enable locks.
  * Modules are allowed to implement local soft resets that clear datapaths, but these are examined on a case-by-case basis for possible security side channels.
* In a production system, the Test Reset Input (`TRSTn`) should be explicitly asserted through system integration.
  * In a production system, `TRSTn` only needs to be released for RMA transitions and nothing else.

## Reset Manager

The reset manager handles the reset of the core domain, and also holds relevant reset information in its CSRs, such as:

* {{< regref "RESET_INFO" >}} indicates why the system was reset.
* {{< regref "ALERT_INFO" >}} contains the recorded alert status prior to system reset.
  * This is useful in case the reset was triggered by an alert escalation.
* {{< regref "CPU_INFO" >}} contains recorded CPU state prior to system reset.
  * This is useful in case the reset was triggered by a watchdog where the host hung on a particular bus transaction.

Additionally, the reset manager, along with the power manager, accepts requests from the system and asserts resets for the appropriate reset trees.
These requests primarily come from the following sources:
* Peripherals capable of reset requests, such as [sysrst_ctrl]({{< relref "hw/ip/sysrst_ctrl/doc/_index.md" >}}) and [always on timers]({{< relref "hw/ip/aon_timer/doc/_index.md" >}}).
* Debug modules such as `rv_dm`.
* Power manager requests for low power entry and exit.

### Shadow Resets

OpenTitan supports the concept of shadow registers.
These are registers stored as two or more constantly-checked copies to ensure that their values are not maliciously or accidentally disturbed.
For these components, the reset manager outputs a shadow reset dedicated to resetting only the shadow storage.
This reset separation ensures that a targeted attack on the reset line cannot easily defeat shadow registers.

### Reset Consistency Checks

The reset manager implements reset consistency checks to ensure that a triggered reset is intentional and not caused by some fault in the system.
Every leaf reset in the system has an associated consistency checker.

The consistency check ensures that when a leaf reset asserts, either its parent reset or its software reset request (if available) has also asserted.
While this sounds simple in principle, the check itself crosses up to 3 clock domains and must be carefully managed.

First, the parent and leaf resets are used to asynchronously assert a flag indication.
This flag indication is then synchronized into the reset manager's local clock domain.

The reset manager then checks as follows (a behavioral sketch is shown after the list):
- If a leaf reset has asserted, check whether either its parent reset or the software request (synchronous to the local domain) has also asserted.

- If the condition does not hold, it is possible the parent reset indication is still being synchronized, so wait for the parent indication.

- It is also possible the parent indication was seen first but the leaf indication was not; in this case, wait for the leaf indication.

- A timeout period corresponding to the maximum synchronization delay is used to cover both waits.
  - If the appropriate pairing is not seen within the given amount of time, signal an error, as the leaf reset asserted without cause.

- If all reset conditions are satisfied, wait for the reset release to gracefully complete the cycle.
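
The flow above can be summarized with a simplified software model of one checker.
The sketch below is illustrative only: the real check is implemented in hardware across multiple clock domains, the state names and timeout value are assumptions, and for brevity only the leaf-first ordering is modeled.

```c
/*
 * Simplified behavioral model of one leaf-reset consistency checker.
 * Illustrative only: state names and the timeout value are assumptions,
 * and only the leaf-first ordering described above is modeled.
 */
#include <stdbool.h>

#define SYNC_TIMEOUT 4  /* assumed maximum synchronization delay, in cycles */

typedef enum { IDLE, WAIT_CAUSE, WAIT_RELEASE, FAULT } chk_state_t;

typedef struct {
  chk_state_t state;
  int timer;
} checker_t;

/* One evaluation per local clock cycle, using already-synchronized inputs. */
void checker_step(checker_t *c, bool leaf_rst, bool parent_rst, bool sw_req) {
  switch (c->state) {
    case IDLE:
      if (leaf_rst) {
        /* Leaf asserted: a cause (parent reset or software request) must be
           present already or arrive within the synchronization timeout. */
        c->state = (parent_rst || sw_req) ? WAIT_RELEASE : WAIT_CAUSE;
        c->timer = 0;
      }
      break;
    case WAIT_CAUSE:
      if (parent_rst || sw_req) {
        c->state = WAIT_RELEASE;
      } else if (++c->timer > SYNC_TIMEOUT) {
        c->state = FAULT;  /* leaf reset asserted without a valid cause */
      }
      break;
    case WAIT_RELEASE:
      /* Wait for the leaf reset release to gracefully complete the cycle. */
      if (!leaf_rst) {
        c->state = IDLE;
      }
      break;
    case FAULT:
      /* Sticky error state; escalation handling is outside this sketch. */
      break;
  }
}
```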

## Hardware Interfaces

{{< incGenFromIpDesc "/hw/top_earlgrey/ip/rstmgr/data/autogen/rstmgr.hjson" "hwcfg" >}}

### Signals

Signal | Direction | Description
------------------------|-----------|---------------
`ast_i.aon_pok` | `input` | Input from `ast`. This signal is the root reset of the design and is used to generate `rst_por_n`.
`cpu_i.rst_cpu_n` | `input` | CPU reset indication. This informs the reset manager that the processor has reset.
`cpu_i.ndmreset_req` | `input` | Non-debug-module reset request from `rv_dm`.
`pwr_i.rst_lc_req` | `input` | Power manager request to assert the `rst_lc_n` tree.
`pwr_i.rst_sys_req` | `input` | Power manager request to assert the `rst_sys_n` tree.
`pwr_i.reset_cause` | `input` | Power manager indication for why it requested reset; the cause can be low power entry or a peripheral-issued request.
`pwr_i.rstreqs` | `input` | Peripheral reset requests.
`pwr_o.rst_lc_src_n` | `output` | Current state of `rst_lc_n` tree.
`pwr_o.rst_sys_src_n` | `output` | Current state of `rst_sys_n` tree.
`resets_ast_o` | `output` | Resets used by `ast`.
`resets_o` | `output` | Resets used by the rest of the core domain.

## Design Details

The reset manager generates the resets required by the system by synchronizing reset tree components to appropriate output clocks.
As a result, a particular reset tree (for example `rst_lc_n`) may have multiple outputs depending on the clock domains of its consumers.

Each reset tree is discussed in detail below.


## POR Reset Tree

The POR reset tree, `rst_por_n`, is the root reset of the entire device.
If this reset ever asserts, everything in the design is reset.

The `ast` input `aon_pok` is used as the root reset indication.
It is filtered and stretched to cover any slow voltage ramp scenarios.
The stretch parameters are design time configurations.

* The filter acts as a synchronizer and is 3 stages by default.
* The stretch count is 32 by default.
  * The counter increments only when the last two stages of the filter are both '1'.
  * If any stage becomes '0' at any point, the counter returns to 0 and downstream logic is driven to reset again.
* Both functions are expected to operate on slow, always-available kHz clocks; a behavioral sketch of the filter and stretch counter is shown below.
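
The following is a minimal software model of the filter and stretch behavior just described.
The parameter values mirror the documented defaults (3 filter stages, count of 32), but the function and variable names are illustrative assumptions and do not correspond to RTL signal names.

```c
/*
 * Behavioral sketch of the POR filter and stretch counter described above.
 * Parameter values mirror the documented defaults; names are illustrative
 * assumptions only and do not correspond to RTL signal names.
 */
#include <stdbool.h>

#define FILTER_STAGES 3   /* documented default filter depth */
#define STRETCH_COUNT 32  /* documented default stretch count */

typedef struct {
  bool filter[FILTER_STAGES];  /* synchronizer/filter stages */
  int count;                   /* stretch counter */
} por_stretch_t;

/* One step per slow (kHz) clock cycle; returns the stretched rst_por_n
   (true = reset released, false = reset asserted). */
bool por_stretch_step(por_stretch_t *s, bool aon_pok) {
  /* Shift aon_pok through the filter stages. */
  for (int i = FILTER_STAGES - 1; i > 0; i--) {
    s->filter[i] = s->filter[i - 1];
  }
  s->filter[0] = aon_pok;

  /* Count only while the last two filter stages are both '1'. */
  if (s->filter[FILTER_STAGES - 1] && s->filter[FILTER_STAGES - 2]) {
    if (s->count < STRETCH_COUNT) {
      s->count++;
    }
  } else {
    s->count = 0;  /* any '0' restarts the stretch and re-asserts reset */
  }

  /* Release reset only after the full stretch count has elapsed. */
  return s->count >= STRETCH_COUNT;
}
```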


## Life Cycle Reset Tree

The life cycle reset, `rst_lc_n`, asserts under the following conditions:
* Whenever `rst_por_n` asserts.
* Whenever a peripheral reset request (always on timer watchdog, rbox reset request, alert handler escalation) is received.

The `rst_lc_n` tree contains both always-on and non-always-on versions.
The number of non-always-on versions depends on how many power domains the system supports.

## System Reset Tree

The system reset, `rst_sys_n`, asserts under the following conditions:
* Whenever `rst_lc_n` asserts.
* Whenever `ndmreset_req` asserts.

The `rst_sys_n` tree contains both always-on and non-always-on versions.
The number of non-always-on versions depends on how many power domains the system supports.

## Output Leaf Resets

The reset trees discussed above are not directly output to the system for consumption.
Instead, the output leaf resets are synchronized versions of the various root resets.
The number of leaf resets and the clock to which each is synchronized are decided by the system and templated through the reset manager module.

Assuming a reset tree is consumed by N power domains and M clock domains, it may need up to N x M leaf outputs to satisfy all the reset scenario combinations (for example, 2 power domains and 3 clock domains could require 6 leaf outputs).

## Power Domains and Reset Trees

As alluded to above, reset trees may contain both always-on and non-always-on versions.
This distinction is required to support the power manager's various low power states.
When a power domain goes offline, all of its components must reset, regardless of the reset tree to which they belong.

For example, assume a system with two power domains - `Domain A` is always-on, and `Domain B` is non-always-on.
When `Domain B` is powered off, all of `Domain B`'s resets, from `rst_lc_n` and `rst_sys_n` to `rst_module_n`, are asserted.
However, the corresponding resets for `Domain A` are left untouched because it has not been powered off.

## Software Controlled Resets

Certain leaf resets can be directly controlled by software.
Due to security considerations, most leaf resets cannot be controlled; only a few blocks are given exceptions.
The only blocks that can currently be reset by software are `usbdev` and `spidev`.
Future potential candidates are `i2cdev`, `i2chost` and `spihost`.

The criteria for selecting which blocks are software-reset controllable are deliberately restrictive.
Unless there is a clear need, the default option is to not provide reset control.

In general, the following rules apply:
* If a module has configuration register lockdown, it cannot be software resettable.
* If a module operates on secret data (keys), it cannot be software resettable.
  * Alternatively, a software reset should render the secret data unusable until some initialization routine is run, in order to reduce the Hamming leakage of secret data.
* If a module can alter the software's perception of time or general control flow (timer or interrupt aggregator), it cannot be software resettable.
* If a module contains sensor functions for security, it cannot be software resettable.
* If a module controls life cycle or a related function, it cannot be software resettable.
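
As an illustration of the software reset control described above, the sketch below asserts and then releases the `usbdev` reset.
This is a hypothetical sequence: the register names, base address, offsets, and bit index used here are assumptions made for the example and are not taken from this document; the generated register table at the end of this document is authoritative.

```c
/*
 * Hypothetical software reset sequence for one peripheral (usbdev).
 * ALL register names, offsets, the base address, and the bit index below
 * are ASSUMPTIONS for illustration; consult the generated register table.
 */
#include <stdint.h>

#define RSTMGR_BASE_ADDR      0x40410000u  /* assumed base address */
#define RSTMGR_SW_RST_REGWEN  0x18u        /* assumed register offsets */
#define RSTMGR_SW_RST_CTRL_N  0x1cu
#define SW_RST_USBDEV_BIT     0u           /* assumed bit index for usbdev */

static volatile uint32_t *rstmgr_reg(uint32_t offset) {
  return (volatile uint32_t *)(RSTMGR_BASE_ADDR + offset);
}

void usbdev_sw_reset(void) {
  /* Proceed only if software reset control for this bit is not locked. */
  if (*rstmgr_reg(RSTMGR_SW_RST_REGWEN) & (1u << SW_RST_USBDEV_BIT)) {
    uint32_t ctrl = *rstmgr_reg(RSTMGR_SW_RST_CTRL_N);
    /* Assumed active-low control: clear the bit to assert the module reset... */
    *rstmgr_reg(RSTMGR_SW_RST_CTRL_N) = ctrl & ~(1u << SW_RST_USBDEV_BIT);
    /* ...then set it again to release the reset. */
    *rstmgr_reg(RSTMGR_SW_RST_CTRL_N) = ctrl | (1u << SW_RST_USBDEV_BIT);
  }
}
```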


## Shadow Resets

Leaf resets can also be configured at design time to output [shadow resets](https://docs.google.com/document/d/1Oiv1ewvxhhk6c8aY2f2bV6tTZMI-zUlLeRZJMmXMBmo/edit?usp=sharing).
The details of this function are TBD.

## Reset Information

The reset information register is a reflection of the reset state from the perspective of the system.
In OpenTitan, since there is only one host, it is from the perspective of the processor.
This also suggests that if the design had multiple processors, there would need to be multiple such registers.

If a reset does not cause the processor to reset, there is no reason for the reset information to change (this is also why there is a strong security link between the reset of the processor and the rest of the system).
The following are the currently defined reset reasons and their meanings:

Reset Cause | Description
------------------------|---------------
`POR` | Cold boot, the system was reset through POR circuitry.
`LOW_POWER_EXIT` | Warm boot, the system was reset through low power exit.
`NDM_RESET` | Warm boot, the system was reset through an `rv_dm` non-debug-module request.
`SW_REQ` | Warm boot, the system was reset through {{< regref "RST_REQ" >}}.
`HW_REQ` | Warm boot, the system was reset through peripheral requests. There may be multiple such requests.


The reset info register is write-1-clear.
It is software's responsibility to clear old reset reasons; the reset manager simply records new ones based on the rules below.

Excluding power on reset, which is always recorded when the device POR circuitry is triggered, the other resets are recorded when authorized by the reset manager.
Reset manager authorization is based on reset categories as indicated by the power manager.
The power manager has three reset categories that are mutually exclusive:
* No reset has been triggered by pwrmgr.
* Low power entry reset has been triggered by pwrmgr.
* A software or peripheral reset request has been triggered by pwrmgr.

The reset categories are sent to the reset manager so that it can decide which reason to record when the processor reset is observed.
Non-debug-module resets are allowed only when no resets have been triggered by pwrmgr.

Since a reset could be motivated by multiple reasons (for example, a security escalation during a low power transition), the reset information register continuously records all reset causes it is allowed to record.
The only case where this is not done is `POR`, where active recording is silenced until the first processor reset release.

Even though four reset causes are labeled as warm boot, their effects on the system are not identical.

* When the reset cause is `LOW_POWER_EXIT`, it means only the non-always-on domains have been reset.
  * Always-on domains retain their pre-low-power values.
* When the reset cause is `NDM_RESET`, it means only the `rst_sys_n` tree has asserted for all power domains.
* When the reset cause is `HW_REQ` or `SW_REQ`, it means everything other than the power / clock / reset managers has been reset.

This behavioral difference may be important to software, as it implies the required configuration of the system may differ.
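
The sketch below shows how boot software might act on these reset reasons and then clear them.
The {{< regref "RESET_INFO" >}} register and its write-1-clear behavior are described above, but the base address, register offset, and bit assignments used here are assumptions for illustration; consult the generated register table for the actual layout.

```c
/*
 * Sketch of boot-time reset reason handling based on the rules above.
 * The base address, register offset, and bit assignments are ASSUMPTIONS
 * made for illustration; the generated register table is authoritative.
 */
#include <stdint.h>

#define RSTMGR_BASE_ADDR 0x40410000u  /* assumed base address */
#define RESET_INFO_REG (*(volatile uint32_t *)(RSTMGR_BASE_ADDR + 0x10u))

/* Assumed bit positions for the documented reset causes. */
#define RESET_INFO_POR            (1u << 0)
#define RESET_INFO_LOW_POWER_EXIT (1u << 1)
#define RESET_INFO_NDM_RESET      (1u << 2)
#define RESET_INFO_SW_REQ         (1u << 3)
/* Higher bits are assumed to hold individual peripheral (HW_REQ) requests. */

void handle_reset_reasons(void) {
  uint32_t info = RESET_INFO_REG;

  if (info & RESET_INFO_POR) {
    /* Cold boot: perform full initialization. */
  } else if (info & RESET_INFO_LOW_POWER_EXIT) {
    /* Warm boot: always-on domains kept their state; restore the rest. */
  } else if (info & (RESET_INFO_NDM_RESET | RESET_INFO_SW_REQ)) {
    /* Warm boot: reconfigure everything outside power/clock/reset managers. */
  } else {
    /* Peripheral (HW_REQ) reset request(s); multiple causes may be set. */
  }

  /* Clear the recorded reasons for the next boot (write-1-clear). */
  RESET_INFO_REG = info;
}
```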

## Crash Dump Information

The reset manager maintains crash dump information for software debugging across unexpected resets and watchdogs.
When enabled, the latest alert information and the latest CPU information are captured in always-on registers.

When software resumes after the reset, it can then examine the last CPU state or the last set of alert information to understand why the system reset.

The enable for this debug capture can be locked such that capture never occurs.

### Alert Information

The alert information register contains the value of the alert crash dump prior to a triggered reset.
Since this information differs in length between system implementations, the alert information register only displays 32 bits at a time.

The {{< regref "ALERT_INFO_ATTR" >}} register indicates how many 32-bit data segments must be read.
Software then simply needs to write the desired segment index to {{< regref "ALERT_INFO_CTRL.INDEX" >}} and then read out the {{< regref "ALERT_INFO" >}} register.

### CPU Information

The CPU information register contains the value of the CPU state prior to a triggered reset.
Since this information differs in length between system implementations, the information register only displays 32 bits at a time.

The {{< regref "CPU_INFO_ATTR" >}} register indicates how many 32-bit data segments must be read.
Software then simply needs to write the desired segment index to {{< regref "CPU_INFO_CTRL.INDEX" >}} and then read out the {{< regref "CPU_INFO" >}} register.

# Programmers Guide

## Alert Information Gathering and Reading

To enable alert crash dump capture, set {{< regref "ALERT_INFO_CTRL.EN" >}} to 1.
Once the system has reset, check {{< regref "ALERT_INFO_ATTR.CNT_AVAIL" >}} for how many reads need to be done.
Set {{< regref "ALERT_INFO_CTRL.INDEX" >}} to the desired segment, and then read the output from {{< regref "ALERT_INFO" >}}.
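
A minimal sketch of this sequence is shown below.
The register and field names follow the descriptions above, but the base address, offsets, and field layouts are assumptions made for illustration; the generated register table below is authoritative.

```c
/*
 * Minimal sketch of alert crash dump capture and readout.
 * The base address, offsets, and field layouts are ASSUMPTIONS made for
 * illustration; the generated register table is authoritative.
 */
#include <stdint.h>

#define RSTMGR_BASE_ADDR            0x40410000u  /* assumed base address */
#define RSTMGR_ALERT_INFO_CTRL      0x20u        /* assumed register offsets */
#define RSTMGR_ALERT_INFO_ATTR      0x24u
#define RSTMGR_ALERT_INFO           0x28u
#define ALERT_INFO_CTRL_EN          (1u << 0)    /* assumed field layout */
#define ALERT_INFO_CTRL_INDEX_SHIFT 4u

static volatile uint32_t *rstmgr_reg(uint32_t offset) {
  return (volatile uint32_t *)(RSTMGR_BASE_ADDR + offset);
}

/* Before an anticipated reset: enable alert crash dump capture (EN = 1). */
void alert_dump_enable(void) {
  *rstmgr_reg(RSTMGR_ALERT_INFO_CTRL) = ALERT_INFO_CTRL_EN;
}

/* After the reset: read all available 32-bit dump segments into buf. */
int alert_dump_read(uint32_t *buf, int max_words) {
  int count = (int)*rstmgr_reg(RSTMGR_ALERT_INFO_ATTR);  /* CNT_AVAIL */
  if (count > max_words) {
    count = max_words;
  }
  for (int i = 0; i < count; i++) {
    /* Select the segment via ALERT_INFO_CTRL.INDEX, then read ALERT_INFO. */
    *rstmgr_reg(RSTMGR_ALERT_INFO_CTRL) =
        ALERT_INFO_CTRL_EN | ((uint32_t)i << ALERT_INFO_CTRL_INDEX_SHIFT);
    buf[i] = *rstmgr_reg(RSTMGR_ALERT_INFO);
  }
  return count;
}
```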

## Register Table

{{< incGenFromIpDesc "/hw/top_earlgrey/ip/rstmgr/data/autogen/rstmgr.hjson" "registers" >}}