Add diagrams to ml-frameworks website pages. (#14145)
Diagrams are a nice way to break up the flow of a webpage and
communicate relationships visually. The "ml-frameworks" page
(https://openxla.github.io/iree/guides/ml-frameworks/) already uses
sections, icons, and lists, but I want to keep making the website easier
to scan, since we bridge between so many different technologies.

---
I also tried a version of this diagram with more detail, but the layout
was tricky to tune:

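For a rough idea (the exact version I tried is not preserved here, so the node and subgraph contents below are illustrative assumptions, drawn only from the frameworks and dialects named elsewhere in this patch), a more detailed diagram might look something like:

``` mermaid
graph LR
  accTitle: Detailed ML framework to runtime deployment workflow (illustrative)
  accDescr {
    Programs start in an ML framework such as TensorFlow, TFLite, or PyTorch.
    Programs are imported into an MLIR dialect such as StableHLO or TOSA.
    The IREE compiler consumes the imported MLIR.
    Compiled programs are used by the runtime.
  }

  %% Illustrative only: collapsing every framework and dialect into one
  %% graph is what made the layout tricky to tune.
  subgraph Frameworks
    direction TB
    F1[TensorFlow]
    F2[TFLite]
    F3[PyTorch]
  end

  subgraph MLIR
    direction TB
    M1[StableHLO]
    M2[TOSA]
  end

  C[IREE compiler]
  D[Runtime deployment]

  Frameworks --> MLIR
  MLIR --> C
  C --> D
```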
diff --git a/docs/website/docs/guides/ml-frameworks/index.md b/docs/website/docs/guides/ml-frameworks/index.md
index 6fb1b05..99c1731 100644
--- a/docs/website/docs/guides/ml-frameworks/index.md
+++ b/docs/website/docs/guides/ml-frameworks/index.md
@@ -3,6 +3,26 @@
IREE supports popular machine learning frameworks using the same underlying
technology.
+``` mermaid
+graph LR
+ accTitle: ML framework to runtime deployment workflow overview
+ accDescr {
+ Programs start in some ML framework.
+ Programs are imported into MLIR.
+ The IREE compiler uses the imported MLIR.
+ Compiled programs are used by the runtime.
+ }
+
+ A[ML frameworks]
+ B[Imported MLIR]
+ C[IREE compiler]
+ D[Runtime deployment]
+
+ A --> B
+ B --> C
+ C --> D
+```
+
## :octicons-list-unordered-16: Supported frameworks
See end-to-end examples of how to use each framework with IREE:
@@ -25,8 +45,10 @@
Each machine learning framework has some "export" mechanism that snapshots the
structure and data in your program. These exported programs can then be
"imported" into IREE's compiler by using either a stable import format or one of
-IREE's importer tools. This export/import process is specific to each frontend
-and typically involves a number of stages:
+IREE's importer tools.
+
+This export/import process is specific to each frontend and typically involves a
+number of stages:
1. Capture/trace/freeze the ML model into a graph
2. Write that graph to an interchange format (e.g. SavedModel, TorchScript)
@@ -39,14 +61,18 @@
## :octicons-gear-16: Compilation
-During compilation we load an MLIR file and compile for the specified set of
-backends (CPU, GPU, etc). Each of these backends creates custom native code to
-execute on the target device. Once compiled, the resulting artifact can be
-executed on the specified devices using IREE's runtime.
+IREE compiles MLIR files for specified sets of backends (CPU, GPU, etc). Each
+backend generates optimized native code custom to the input program and
+intended target platform. Once compiled, modules can be executed using IREE's
+runtime.
+
+See the [deployment configuration guides](../deployment-configurations/index.md)
+for details on selecting a compiler backend and tuning options for your choice
+of target platform(s) or device(s).
## :octicons-rocket-16: Execution
-The final stage is executing the now compiled module. This involves selecting
-what compute devices should be used, loading the module, and executing the
-module with the intended inputs. IREE provides several
-[language bindings](../../reference/bindings/index.md) for its runtime API.
+Compiled modules can be executed by selecting what compute devices to use,
+loading the module, and then executing it with the intended inputs. IREE
+provides several [language bindings](../../reference/bindings/index.md) for its
+runtime API.
diff --git a/docs/website/docs/guides/ml-frameworks/pytorch.md b/docs/website/docs/guides/ml-frameworks/pytorch.md
index 45c4739..01d0435 100644
--- a/docs/website/docs/guides/ml-frameworks/pytorch.md
+++ b/docs/website/docs/guides/ml-frameworks/pytorch.md
@@ -12,6 +12,42 @@
`nn.Module` [classes](https://pytorch.org/docs/stable/generated/torch.nn.Module.html)
as well as models defined using [`functorch`](https://pytorch.org/functorch/).
+``` mermaid
+graph LR
+ accTitle: PyTorch to runtime deployment workflow overview
+ accDescr {
+ Programs start as either PyTorch nn.Module or functorch programs.
+ Programs are imported into MLIR as either StableHLO, TOSA, or Linalg.
+ The IREE compiler uses the imported MLIR.
+ Compiled programs are used by the runtime.
+ }
+
+ subgraph A[PyTorch]
+ direction TB
+ A1[nn.Module]
+ A2[functorch]
+
+ A1 --- A2
+ end
+
+ subgraph B[MLIR]
+ direction TB
+ B1[StableHLO]
+ B2[TOSA]
+ B3[Linalg]
+
+ B1 --- B2
+ B2 --- B3
+ end
+
+ C[IREE compiler]
+ D[Runtime deployment]
+
+ A -- torch_mlir --> B
+ B --> C
+ C --> D
+```
+
## Prerequisites
Install IREE pip packages, either from pip or by
diff --git a/docs/website/docs/guides/ml-frameworks/tensorflow.md b/docs/website/docs/guides/ml-frameworks/tensorflow.md
index a8ed3cb..e8d008b 100644
--- a/docs/website/docs/guides/ml-frameworks/tensorflow.md
+++ b/docs/website/docs/guides/ml-frameworks/tensorflow.md
@@ -13,7 +13,35 @@
or stored in the `SavedModel`
[format](https://www.tensorflow.org/guide/saved_model).
-<!-- TODO(??): notes about TensorFlow 2.0, supported features? -->
+``` mermaid
+graph LR
+ accTitle: TensorFlow to runtime deployment workflow overview
+ accDescr {
+ Programs start as either TensorFlow SavedModel or tf.Module programs.
+ Programs are imported into MLIR as StableHLO.
+ The IREE compiler uses the imported MLIR.
+ Compiled programs are used by the runtime.
+ }
+
+ subgraph A[TensorFlow]
+ direction TB
+ A1[SavedModel]
+ A2[tf.Module]
+
+ A1 --- A2
+ end
+
+ subgraph B[MLIR]
+ B1[StableHLO]
+ end
+
+ C[IREE compiler]
+ D[Runtime deployment]
+
+ A -- iree-import-tf --> B
+ B --> C
+ C --> D
+```
## Prerequisites
diff --git a/docs/website/docs/guides/ml-frameworks/tflite.md b/docs/website/docs/guides/ml-frameworks/tflite.md
index 46a4a42..40993df 100644
--- a/docs/website/docs/guides/ml-frameworks/tflite.md
+++ b/docs/website/docs/guides/ml-frameworks/tflite.md
@@ -12,6 +12,32 @@
FlatBuffers](https://www.tensorflow.org/lite/guide). These files can be
imported into an IREE-compatible format then compiled to a series of backends.
+``` mermaid
+graph LR
+ accTitle: TFLite to runtime deployment workflow overview
+ accDescr {
+ Programs start as TensorFlow Lite FlatBuffers.
+ Programs are imported into MLIR's TOSA dialect using iree-import-tflite.
+ The IREE compiler uses the imported MLIR.
+ Compiled programs are used by the runtime.
+ }
+
+ subgraph A[TFLite]
+ A1[FlatBuffer]
+ end
+
+ subgraph B[MLIR]
+ B1[TOSA]
+ end
+
+ C[IREE compiler]
+ D[Runtime deployment]
+
+ A -- iree-import-tflite --> B
+ B --> C
+ C --> D
+```
+
## Prerequisites
Install TensorFlow by following the
diff --git a/docs/website/mkdocs.yml b/docs/website/mkdocs.yml
index bc71b99..72d40af 100644
--- a/docs/website/mkdocs.yml
+++ b/docs/website/mkdocs.yml
@@ -88,7 +88,15 @@
options:
custom_icons:
- overrides/.icons
- - pymdownx.superfences
+ # Diagram support, see
+ # https://squidfunk.github.io/mkdocs-material/reference/diagrams/
+ # Docs : https://mermaid.js.org/
+ # Editor: https://mermaid.live/edit
+ - pymdownx.superfences:
+ custom_fences:
+ - name: mermaid
+ class: mermaid
+ format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist: