Standardization pass through docs/ folder.  (#3597)

* recommend an out-of-tree CMake build directory `../iree-build/` instead of in-tree `build/`
* consistently use `.vmfb` as VM module flatbuffer file extensions
* make some docs use CMake examples instead of Bazel examples
* prefix all shell commands with `$`
* indent shell commands two spaces when continuing onto multiple lines
* consistently use `-` for LLVM flags and `--` for Abseil flags
* typo fixes
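
For illustration, a command sequence written to these conventions would look
roughly like the following (paths and flags mirror the examples updated in this
change; adjust them for your own checkout):

```shell
# Configure and build out of tree, next to the source checkout.
$ cmake -G Ninja -B ../iree-build/ -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ .
$ cmake --build ../iree-build/

# Compile a module to a .vmfb file; LLVM-style flags use a single `-` and
# continuation lines are indented two spaces.
$ ../iree-build/iree/tools/iree-translate \
  -iree-mlir-to-vm-bytecode-module \
  -iree-hal-target-backends=vmla \
  $PWD/iree/tools/test/simple.mlir \
  -o /tmp/simple.vmfb

# Run the compiled module; Abseil-style flags use `--`.
$ ../iree-build/iree/tools/iree-run-module \
  --module_file=/tmp/simple.vmfb \
  --driver=vmla \
  --entry_function=abs \
  --function_inputs="i32=-2"
```
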
diff --git a/docs/developing_iree/contributor_tips.md b/docs/developing_iree/contributor_tips.md
index 19e4fd5..c8a170f 100644
--- a/docs/developing_iree/contributor_tips.md
+++ b/docs/developing_iree/contributor_tips.md
@@ -32,17 +32,17 @@
 
     ```shell
     # From your existing git repo
-    git remote rename origin upstream
-    git remote add origin git@github.com:<github_username>/iree.git
+    $ git remote rename origin upstream
+    $ git remote add origin git@github.com:<github_username>/iree.git
     ```
 
     b. If you haven't already cloned:
 
     ```shell
     # From whatever directory under which you want to nest your repo
-    git clone git@github.com:<github_username>/iree.git
-    cd iree
-    git remote add upstream git@github.com:google/iree.git
+    $ git clone git@github.com:<github_username>/iree.git
+    $ cd iree
+    $ git remote add upstream git@github.com:google/iree.git
     ```
 
     This is especially important for maintainers who have write access (so can
@@ -60,8 +60,8 @@
     little trickier than it should be. You can also add this as a git alias.
 
     ```shell
-    git config alias.update "! /path/to/git-update"
-    git config alias.sync "update main"
+    $ git config alias.update "! /path/to/git-update"
+    $ git config alias.sync "update main"
     ```
 
 ## Useful Tools
diff --git a/docs/developing_iree/developer_overview.md b/docs/developing_iree/developer_overview.md
index 3560205..b73671b 100644
--- a/docs/developing_iree/developer_overview.md
+++ b/docs/developing_iree/developer_overview.md
@@ -80,7 +80,7 @@
 [test file](https://github.com/google/iree/blob/main/iree/compiler/Dialect/IREE/Transforms/test/drop_compiler_hints.mlir):
 
 ```shell
-$ bazel run iree/tools:iree-opt -- \
+$ ../iree-build/iree/tools/iree-opt \
   -split-input-file \
   -print-ir-before-all \
   -iree-drop-compiler-hints \
@@ -93,7 +93,7 @@
 model file:
 
 ```shell
-$ bazel run iree/tools:iree-opt -- \
+$ ../iree-build/iree/tools/iree-opt \
   -iree-transformation-pipeline \
   -iree-hal-target-backends=vmla \
   $PWD/iree/test/e2e/models/fullyconnected.mlir
@@ -115,11 +115,11 @@
 For example, to translate `simple.mlir` to an IREE module:
 
 ```shell
-$ bazel run iree/tools:iree-translate -- \
+$ ../iree-build/iree/tools/iree-translate \
   -iree-mlir-to-vm-bytecode-module \
-  --iree-hal-target-backends=vmla \
+  -iree-hal-target-backends=vmla \
   $PWD/iree/tools/test/simple.mlir \
-  -o /tmp/simple.module
+  -o /tmp/simple.vmfb
 ```
 
 Custom translations may also be layered on top of `iree-translate`, see
@@ -133,12 +133,12 @@
 
 This program can be used in sequence with `iree-translate` to translate a
 `.mlir` file to an IREE module and then execute it. Here is an example command
-that executes the simple `simple.module` compiled from `simple.mlir` above on
+that executes the `simple.vmfb` module compiled from `simple.mlir` above on
 IREE's VMLA driver:
 
 ```shell
-$ bazel run iree/tools:iree-run-module -- \
-  --module_file=/tmp/simple.module \
+$ ../iree-build/iree/tools/iree-run-module \
+  --module_file=/tmp/simple.vmfb \
   --driver=vmla \
   --entry_function=abs \
   --function_inputs="i32=-2"
@@ -153,16 +153,16 @@
 [check framework](https://github.com/google/iree/tree/main/docs/developing_iree/testing_guide.md#end-to-end-tests).
 
 ```shell
-$ bazel run iree/tools:iree-translate -- \
+$ ../iree-build/iree/tools/iree-translate \
   -iree-mlir-to-vm-bytecode-module \
-  --iree-hal-target-backends=vmla \
+  -iree-hal-target-backends=vmla \
   $PWD/iree/test/e2e/xla_ops/abs.mlir \
-  -o /tmp/abs.module
+  -o /tmp/abs.vmfb
 ```
 
 ```shell
-$ bazel run iree/modules/check:iree-check-module -- \
-  /tmp/abs.module \
+$ ../iree-build/iree/modules/check/iree-check-module \
+  /tmp/abs.vmfb \
   --driver=vmla
 ```
 
@@ -179,10 +179,10 @@
 [iree/tools/test/simple.mlir](https://github.com/google/iree/blob/main/iree/tools/test/simple.mlir):
 
 ```shell
-$ bazel run iree/tools:iree-run-mlir -- \
+$ ../iree-build/iree/tools/iree-run-mlir \
   $PWD/iree/tools/test/simple.mlir \
-  --function-input="i32=-2" \
-  --iree-hal-target-backends=vmla
+  -function-input="i32=-2" \
+  -iree-hal-target-backends=vmla
 ```
 
 ### iree-dump-module
@@ -193,7 +193,7 @@
 For example, to inspect the module translated above:
 
 ```shell
-$ bazel run iree/tools:iree-dump-module -- /tmp/simple.module
+$ ../iree-build/iree/tools/iree-dump-module /tmp/simple.vmfb
 ```
 
 ### Useful generic flags
@@ -231,8 +231,8 @@
 
 ### Useful Vulkan driver flags
 
-For IREE's Vulkan runtime driver, there are a few useful
-[flags](https://github.com/google/iree/blob/main/iree/hal/vulkan/vulkan_driver.cc):
+For IREE's Vulkan runtime driver, there are a few useful flags defined in
+[vulkan_driver_module.cc](https://github.com/google/iree/blob/main/iree/hal/vulkan/vulkan_driver_module.cc):
 
 #### `--vulkan_renderdoc`
 
@@ -242,9 +242,8 @@
 to the default location on your system (e.g. `/tmp/RenderDoc/`):
 
 ```shell
-$ bazel build iree/tools:iree-run-mlir
-LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/renderdoc/lib/path \
-  bazel-bin/iree/tools/iree-run-mlir \
+$ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/renderdoc/lib/path \
+  ../iree-build/iree/tools/iree-run-mlir \
     $PWD/iree/samples/vulkan/simple_mul.mlir \
     -iree-hal-target-backends=vulkan-spirv \
     -function-input="4xf32=1,2,3,4" \
diff --git a/docs/developing_iree/e2e_benchmarking.md b/docs/developing_iree/e2e_benchmarking.md
index 6930102..ba4cb3f 100644
--- a/docs/developing_iree/e2e_benchmarking.md
+++ b/docs/developing_iree/e2e_benchmarking.md
@@ -20,7 +20,7 @@
 # By default `get_e2e_artifacts.py` will run all of our test suites, including
 # those that take a long time to complete, so we specify
 # `--test_suites=e2e_tests` to only run the smaller tests.
-python3 ./scripts/get_e2e_artifacts.py --test_suites=e2e_tests
+$ python3 ./scripts/get_e2e_artifacts.py --test_suites=e2e_tests
 ```
 
 Each test/module has a folder with the following artifacts (filtered to only
@@ -98,7 +98,7 @@
 wildcard expansion. They can be run by invoking the following test suite:
 
 ```shell
-python3 ./scripts/get_e2e_artifacts.py --test_suites=vision_external_tests
+$ python3 ./scripts/get_e2e_artifacts.py --test_suites=vision_external_tests
 ```
 
 The previous command compiles `MobileNet`, `MobileNetV2` and `ResNet50` to run
@@ -114,7 +114,7 @@
 automatically store the benchmarking artifacts in `/tmp/iree/modules/`.
 
 ```shell
-bazel run //integrations/tensorflow/e2e:matrix_ops_static_test_manual -- \
+$ bazel run //integrations/tensorflow/e2e:matrix_ops_static_test_manual -- \
   --target_backends=iree_vmla,tflite
 ```
 
@@ -126,7 +126,7 @@
 at the same time.
 
 ```shell
-bazel build -c opt //iree/tools:iree-benchmark-module
+$ bazel build -c opt //iree/tools:iree-benchmark-module
 ```
 
 This creates `bazel-bin/iree/tools/iree-benchmark-module`. The rest of the guide
@@ -157,7 +157,7 @@
 using `MatrixOpsStaticModule` on VMLA we would run the following command:
 
 ```shell
-./bazel-bin/iree/tools/iree-benchmark-module \
+$ ./bazel-bin/iree/tools/iree-benchmark-module \
   --flagfile="/tmp/iree/modules/MatrixOpsStaticModule/iree_vmla/traces/matmul_lhs_batch/flagfile"
 ```
 
@@ -166,7 +166,7 @@
 weights. For example:
 
 ```shell
-./bazel-bin/iree/tools/iree-benchmark-module \
+$ ./bazel-bin/iree/tools/iree-benchmark-module \
   --flagfile="/tmp/iree/modules/ResNet50/cifar10/iree_vmla/traces/predict/flagfile"
 ```
 
@@ -176,10 +176,10 @@
 
 ```shell
 # Enter the TensorFlow Bazel workspace.
-cd third_party/tensorflow/
+$ cd third_party/tensorflow/
 
 # Build the benchmark_model binary.
-bazel build --copt=-mavx2 -c opt \
+$ bazel build --copt=-mavx2 -c opt \
   //tensorflow/lite/tools/benchmark:benchmark_model
 
 # By default, TFLite/x86 uses various matrix multiplication libraries.
@@ -191,12 +191,12 @@
 # so this passing this flag here isn't going to make a difference to
 # matrix multiplications. However, the rest of TFLite's kernels outside
 # of ruy will still benefit from -mavx2.
-bazel build --copt=-mavx2 -c opt \
+$ bazel build --copt=-mavx2 -c opt \
   --define=tflite_with_ruy=true \
   //tensorflow/lite/tools/benchmark:benchmark_model
 
 # The binary can now be found in the following directory:
-ls bazel-bin/tensorflow/lite/tools/benchmark/
+$ ls bazel-bin/tensorflow/lite/tools/benchmark/
 ```
 
 ### 3.2 Benchmark the model on TFLite
@@ -209,7 +209,7 @@
 
 ```shell
 # Run within `third_party/tensorflow/`.
-./bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
+$ ./bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
   --graph=$(cat "/tmp/iree/modules/MatrixOpsStaticModule/tflite/traces/matmul_lhs_batch/graph_path") \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -228,10 +228,10 @@
 ```shell
 # After following the instructions above up to 'Build all targets', the
 # iree-benchmark-module binary should be in the following directory:
-ls build-android/iree/tools/
+$ ls build-android/iree/tools/
 
 # Copy the benchmarking binary to phone.
-adb push build-android/iree/tools/iree-benchmark-module /data/local/tmp
+$ adb push build-android/iree/tools/iree-benchmark-module /data/local/tmp
 ```
 
 ### 4.2 Push the IREE's compilation / benchmarking artifacts to the device
@@ -245,17 +245,17 @@
 
 ```shell
 # Make a directory for the module/backend pair we want to benchmark.
-adb shell mkdir -p /data/local/tmp/MatrixOpsStaticModule/iree_vmla/
+$ adb shell mkdir -p /data/local/tmp/MatrixOpsStaticModule/iree_vmla/
 
 # Transfer the files.
-adb push /tmp/iree/modules/MatrixOpsStaticModule/iree_vmla/* \
+$ adb push /tmp/iree/modules/MatrixOpsStaticModule/iree_vmla/* \
   /data/local/tmp/MatrixOpsStaticModule/iree_vmla/
 ```
 
 ### 4.3 Benchmark the module
 
 ```shell
-adb shell /data/local/tmp/iree-benchmark-module \
+$ adb shell /data/local/tmp/iree-benchmark-module \
   --flagfile="/data/local/tmp/MatrixOpsStaticModule/iree_vmla/traces/matmul_lhs_batch/flagfile" \
   --module_file="/data/local/tmp/MatrixOpsStaticModule/iree_vmla/compiled.vmfb"
 ```
@@ -284,46 +284,46 @@
 # Note that unlike TFLite/x86, TFLite/ARM uses Ruy by default for all
 # matrix multiplications (No need to pass tflite_with_ruy), except for some
 # matrix*vector products. Below we show how to force using ruy also for that.
-bazel build -c opt \
+$ bazel build -c opt \
   --config=android_arm64 \
   --cxxopt='--std=c++17' \
   //tensorflow/lite/tools/benchmark:benchmark_model
 
 # Copy the benchmarking binary to phone and allow execution.
-adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
+$ adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
   /data/local/tmp
-adb shell chmod +x /data/local/tmp/benchmark_model
+$ adb shell chmod +x /data/local/tmp/benchmark_model
 ```
 
 ```shell
 # Build the benchmark_model binary using ruy even for matrix*vector
 # products. This is only worth trying in models that are heavy on matrix*vector
 # shapes, typically LSTMs and other RNNs.
-bazel build -c opt \
+$ bazel build -c opt \
   --config=android_arm64 \
   --cxxopt='--std=c++17' \
   --copt=-DTFLITE_WITH_RUY_GEMV \
   //tensorflow/lite/tools/benchmark:benchmark_model
 
 # Rename the binary for comparison with the standard benchmark_model.
-mv bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
+$ mv bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
   bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model_plus_ruy_gemv
-adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model_plus_ruy_gemv \
+$ adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model_plus_ruy_gemv \
   /data/local/tmp/
-adb shell chmod +x /data/local/tmp/benchmark_model_plus_ruy_gemv
+$ adb shell chmod +x /data/local/tmp/benchmark_model_plus_ruy_gemv
 ```
 
 ```shell
 # Build the benchmark_model binary with flex.
-bazel build -c opt \
+$ bazel build -c opt \
   --config=android_arm64 \
   --cxxopt='--std=c++17' \
   //tensorflow/lite/tools/benchmark:benchmark_model_plus_flex
 
 # Copy the benchmarking binary to phone and allow execution.
-adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model_plus_flex \
+$ adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model_plus_flex \
   /data/local/tmp
-adb shell chmod +x /data/local/tmp/benchmark_model_plus_flex
+$ adb shell chmod +x /data/local/tmp/benchmark_model_plus_flex
 ```
 
 Alternatively, you can download and install the
@@ -335,14 +335,14 @@
 
 ```shell
 # Copy the data over to the phone.
-mkdir -p /data/local/tmp/MatrixOpsStaticModule/tflite
-adb push /tmp/iree/modules/MatrixOpsStaticModule/tflite/* \
+$ mkdir -p /data/local/tmp/MatrixOpsStaticModule/tflite
+$ adb push /tmp/iree/modules/MatrixOpsStaticModule/tflite/* \
   /data/local/tmp/MatrixOpsStaticModule/tflite/
 ```
 
 ```shell
 # Benchmark with TFLite.
-adb shell taskset f0 /data/local/tmp/benchmark_model \
+$ adb shell taskset f0 /data/local/tmp/benchmark_model \
   --graph=/data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -351,7 +351,7 @@
 
 ```shell
 # Benchmark with TFLite + RUY GEMV
-adb shell taskset f0 /data/local/tmp/benchmark_model_plus_ruy_gemv \
+$ adb shell taskset f0 /data/local/tmp/benchmark_model_plus_ruy_gemv \
   --graph=/data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -360,7 +360,7 @@
 
 ```shell
 # Benchmark with TFLite + Flex.
-adb shell taskset f0 /data/local/tmp/benchmark_model_plus_flex \
+$ adb shell taskset f0 /data/local/tmp/benchmark_model_plus_flex \
   --graph=/data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -369,7 +369,7 @@
 
 ```shell
 # Benchmark with TFLite running on GPU.
-adb shell taskset f0 /data/local/tmp/benchmark_model \
+$ adb shell taskset f0 /data/local/tmp/benchmark_model \
   --graph=/data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -382,7 +382,7 @@
 
 ```shell
 # Op profiling on GPU using OpenCL backend.
-sh tensorflow/lite/delegates/gpu/cl/testing/run_performance_profiling.sh \
+$ sh tensorflow/lite/delegates/gpu/cl/testing/run_performance_profiling.sh \
   -m /data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite
 ```
 
@@ -394,12 +394,16 @@
 
 ### Profile
 
-There are 2 profilers built into TFLite's `benchmark_model` program. Both of them impact latencies, so they should only be used to get a breakdown of the relative time spent in each operator type, they should not be enabled for the purpose of measuring a latency.
+There are 2 profilers built into TFLite's `benchmark_model` program. Both of
+them impact latencies, so they should only be used to get a breakdown of the
+relative time spent in each operator type; they should not be enabled for the
+purpose of measuring a latency.
 
-The first is `enable_op_profiling`. It's based on timestamps before and after each op. It's a runtime commandline flag taken by `benchmark_model`. Example:
+The first is `enable_op_profiling`. It's based on timestamps before and after
+each op. It's a runtime command-line flag taken by `benchmark_model`. Example:
 
 ```
-adb shell taskset f0 /data/local/tmp/benchmark_model \
+$ adb shell taskset f0 /data/local/tmp/benchmark_model \
   --graph=/data/local/tmp/MatrixOpsStaticModule/tflite/matmul_lhs_batch.tflite \
   --warmup_runs=1 \
   --num_threads=1 \
@@ -407,14 +411,17 @@
   --enable_op_profiling=true
 ```
 
-The second is `ruy_profiler`. Despite its name, it's available regardless of whether `ruy` is used for the matrix multiplications. It's a sampling profiler, which allows it to provide some more detailed informations, particularly on matrix multiplications. It's a build-time switch:
+The second is `ruy_profiler`. Despite its name, it's available regardless of
+whether `ruy` is used for the matrix multiplications. It's a sampling profiler,
+which allows it to provide some more detailed information, particularly on
+matrix multiplications. It's a build-time switch:
 
 ```
-bazel build \
+$ bazel build \
   --define=ruy_profiler=true \
   -c opt \
   --config=android_arm64 \
   //tensorflow/lite/tools/benchmark:benchmark_model
 ```
 
-The binary thus built can be run like above, no commandline flag needed.
+The binary thus built can be run as above; no command-line flag is needed.
diff --git a/docs/developing_iree/profiling.md b/docs/developing_iree/profiling.md
index e8a5ce4..f1864d6 100644
--- a/docs/developing_iree/profiling.md
+++ b/docs/developing_iree/profiling.md
@@ -68,14 +68,14 @@
   -iree-mlir-to-vm-bytecode-module \
   -iree-hal-target-backends=vmla \
   $PWD/iree/tools/test/simple.mlir \
-  -o /tmp/simple.module
+  -o /tmp/simple.vmfb
 ```
 
 Run a compiled module once:
 
 ```shell
 $ build/iree/tools/iree-run-module \
-  --module_file=/tmp/simple.module \
+  --module_file=/tmp/simple.vmfb \
   --driver=vmla \
   --entry_function=abs \
   --function_inputs="i32=-2"
@@ -85,7 +85,7 @@
 
 ```shell
 $ build/iree/tools/iree-benchmark-module \
-  --module_file=/tmp/simple.module \
+  --module_file=/tmp/simple.vmfb \
   --driver=vmla \
   --entry_function=abs \
   --function_inputs="i32=-2"
@@ -137,7 +137,7 @@
 
 ```mlir
 func @dot(%lhs: tensor<2x4xf32>, %rhs: tensor<4x2xf32>) -> tensor<2x2xf32>
   attributes { iree.module.export } {
   %0 = "mhlo.dot"(%lhs, %rhs) : (tensor<2x4xf32>, tensor<4x2xf32>) -> tensor<2x2xf32>
   return %0 : tensor<2x2xf32>
 }
diff --git a/docs/developing_iree/repository_management.md b/docs/developing_iree/repository_management.md
index df769f7..5b1f10a 100644
--- a/docs/developing_iree/repository_management.md
+++ b/docs/developing_iree/repository_management.md
@@ -40,10 +40,10 @@
 
 ```shell
 # Update SUBMODULE_VERSIONS from current git submodule pointers
-./scripts/git/submodule_versions.py export
+$ ./scripts/git/submodule_versions.py export
 
 # Update current git submodule pointers based on SUBMODULE_VERSIONS
-./scripts/git/submodule_versions.py import
+$ ./scripts/git/submodule_versions.py import
 ```
 
 ### The special relationship with LLVM and TensorFlow
@@ -122,7 +122,7 @@
 ```shell
 # Performs a submodule sync+update and stages an updated SUBMODULE_VERSIONS
 # file.
-./scripts/git/submodule_versions.py export
+$ ./scripts/git/submodule_versions.py export
 ```
 
 If you don't know if this is required, you may run:
@@ -131,7 +131,7 @@
 # The check command is intended to eventually be usable as a git hook
 # for verification of consistency between SUBMODULE_VERSIONS and the
 # corresponding local git state.
-./scripts/git/submodule_versions.py check
+$ ./scripts/git/submodule_versions.py check
 ```
 
 #### Pulling dependency changes
@@ -142,7 +142,7 @@
 ```shell
 # Updates the commit hash of any entries in SUBMODULE_VERSIONS that differ
 # and stages the changes.
-./scripts/git/submodule_versions.py import
+$ ./scripts/git/submodule_versions.py import
 ```
 
 This will stage any needed changes to the submodules to bring them up to date
diff --git a/docs/get_started/getting_started_android_cmake.md b/docs/get_started/getting_started_android_cmake.md
index e740b27..05cbd71 100644
--- a/docs/get_started/getting_started_android_cmake.md
+++ b/docs/get_started/getting_started_android_cmake.md
@@ -39,12 +39,12 @@
 After downloading, it is recommended to set the `ANDROID_NDK` environment
 variable pointing to the directory. For Linux, you can `export` in your shell's
 rc file. For Windows, you can search "environment variable" in the taskbar or
-use `Windows` + `R` to open the "Run" dialog to run `rundll32
-sysdm.cpl,EditEnvironmentVariables`.
+use `Windows` + `R` to open the "Run" dialog to run
+`rundll32 sysdm.cpl,EditEnvironmentVariables`.
 
 ### Install Android Debug Bridge (ADB)
 
-For Linux, search your the distro's package manager to install `adb`. For
+For Linux, search your distribution's package manager to install `adb`. For
 example, on Ubuntu:
 
 ```shell
@@ -64,12 +64,12 @@
 ### Configure on Linux
 
 ```shell
-# Assuming in IREE source root
-$ cmake -G Ninja -B build-android  \
-    -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK?}/build/cmake/android.toolchain.cmake" \
-    -DANDROID_ABI="arm64-v8a" -DANDROID_PLATFORM=android-29 \
-    -DIREE_BUILD_COMPILER=OFF -DIREE_BUILD_SAMPLES=OFF \
-    -DIREE_HOST_C_COMPILER=`which clang` -DIREE_HOST_CXX_COMPILER=`which clang++`
+$ cmake -G Ninja -B ../iree-build-android/ \
+  -DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK?}/build/cmake/android.toolchain.cmake" \
+  -DANDROID_ABI="arm64-v8a" -DANDROID_PLATFORM=android-29 \
+  -DIREE_BUILD_COMPILER=OFF -DIREE_BUILD_SAMPLES=OFF \
+  -DIREE_HOST_C_COMPILER=`which clang` -DIREE_HOST_CXX_COMPILER=`which clang++` \
+  .
 ```
 
 *   The above configures IREE to cross-compile towards 64-bit
@@ -82,38 +82,36 @@
     [CMake documentation](https://developer.android.com/ndk/guides/cmake) for
     more toolchain arguments.
 *   Building IREE compilers and samples for Android is not supported at the
-    moment; they will be enabled soon.
-*   We need to define `IREE_HOST_{C|CXX}_COMPILER` to Clang here because IREE
-    does [not support](https://github.com/google/iree/issues/1269) GCC well at
-    the moment.
+    moment.
+*   We set `IREE_HOST_{C|CXX}_COMPILER` to Clang here because IREE has
+    [unstable support for GCC](https://github.com/google/iree/issues/1269).
 
 ### Configure on Windows
 
 On Windows, we will need the full path to the `cl.exe` compiler. This can be
 obtained by
 [opening a developer command prompt window](https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=vs-2019#developer_command_prompt)
-and type `where cl.exe`. Then in a command prompt (`cmd.exe`):
+and running `where cl.exe`. Then in a command prompt (`cmd.exe`):
 
 ```cmd
-REM Assuming in IREE source root
-> cmake -G Ninja -B build-android  \
+> cmake -G Ninja -B ../iree-build-android/  \
     -DCMAKE_TOOLCHAIN_FILE="%ANDROID_NDK%/build/cmake/android.toolchain.cmake" \
     -DANDROID_ABI="arm64-v8a" -DANDROID_PLATFORM=android-29 \
     -DIREE_BUILD_COMPILER=OFF -DIREE_BUILD_SAMPLES=OFF \
     -DIREE_HOST_C_COMPILER="<full-path-to-cl.exe>" \
     -DIREE_HOST_CXX_COMPILER="<full-path-to-cl.exe>" \
-    -DLLVM_HOST_TRIPLE="x86_64-pc-windows-msvc"
+    -DLLVM_HOST_TRIPLE="x86_64-pc-windows-msvc" \
+    .
 ```
 
 *   See the Linux section in the above for explanations of the used arguments.
 *   We need to define `LLVM_HOST_TRIPLE` in the above because LLVM cannot
-    properly detect host triple under Android CMake toolchain file. This might
-    be fixed later.
+    yet properly detect the host triple from the Android CMake toolchain file.
 
 ### Build all targets
 
 ```shell
-$ cmake --build build-android/
+$ cmake --build ../iree-build-android/
 ```
 
 ## Test on Android
@@ -135,7 +133,7 @@
 Then you can run all device tests via
 
 ```shell
-$ cd build-android
+$ cd ../iree-build-android
 $ ctest --output-on-failure
 ```
 
@@ -150,17 +148,17 @@
 
 ```shell
 # Assuming in IREE source root
-$ build-android/host/bin/iree-translate \
-    -iree-mlir-to-vm-bytecode-module \
-    -iree-hal-target-backends=vmla \
-    iree/tools/test/simple.mlir \
-    -o /tmp/simple-vmla.vmfb
+$ ../iree-build-android/host/bin/iree-translate \
+  -iree-mlir-to-vm-bytecode-module \
+  -iree-hal-target-backends=vmla \
+  $PWD/iree/tools/test/simple.mlir \
+  -o /tmp/simple-vmla.vmfb
 ```
 
 Then push the IREE runtime executable and module to the device:
 
 ```shell
-$ adb push build-android/iree/tools/iree-run-module /data/local/tmp/
+$ adb push ../iree-build-android/iree/tools/iree-run-module /data/local/tmp/
 $ adb shell chmod +x /data/local/tmp/iree-run-module
 $ adb push /tmp/simple-vmla.vmfb /data/local/tmp/
 ```
@@ -188,18 +186,17 @@
 Translate a source MLIR into IREE module:
 
 ```shell
-# Assuming in IREE source root
-$ build-android/host/bin/iree-translate \
+$ ../iree-build-android/host/bin/iree-translate \
     -iree-mlir-to-vm-bytecode-module \
     -iree-hal-target-backends=vulkan-spirv \
-    iree/tools/test/simple.mlir \
+    $PWD/iree/tools/test/simple.mlir \
     -o /tmp/simple-vulkan.vmfb
 ```
 
 Then push the IREE runtime executable and module to the device:
 
 ```shell
-$ adb push build-android/iree/tools/iree-run-module /data/local/tmp/
+$ adb push ../iree-build-android/iree/tools/iree-run-module /data/local/tmp/
 $ adb shell chmod +x /data/local/tmp/iree-run-module
 $ adb push /tmp/simple-vulkan.vmfb /data/local/tmp/
 ```
@@ -244,7 +241,7 @@
 `/vendor/lib[64]` as `libvulkan.so` under `/data/local/tmp` and use
 `LD_LIBRARY_PATH=/data/local/tmp` when invoking IREE executables.
 
-For Qualcomm Adreno GPUs, the vendor Vulkan implemenation is at
+For Qualcomm Adreno GPUs, the vendor Vulkan implementation is at
 `/vendor/lib[64]/hw/vulkan.*.so`. So for example for Snapdragon 865:
 
 ```shell
@@ -262,9 +259,10 @@
 
 ### Dylib LLVM AOT backend
 
-To compile IREE module for the target Android device (assume Android 10 AArc64)
-we need to use the corresponding standalone toolchain (which can be found in
-ANDROID_NDK) and setting AOT linker path environment variable:
+To compile an IREE module using the Dylib LLVM ahead-of-time (AOT) backend for
+a target Android device (e.g. Android 10 AArch64), we need to use the
+corresponding standalone toolchain, which can be found in `ANDROID_NDK`.
+Set the AOT linker path environment variable:
 
 ```shell
 $ export IREE_LLVMAOT_LINKER_PATH="${ANDROID_NDK?}/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android29-clang++ -static-libstdc++ -O3"
@@ -276,19 +274,18 @@
 Translate a source MLIR into an IREE module:
 
 ```shell
-# Assuming in IREE source root
-$ build-android/host/bin/iree-translate \
-    -iree-mlir-to-vm-bytecode-module \
-    -iree-llvm-target-triple=aarch64-linux-android \
-    -iree-hal-target-backends=dylib-llvm-aot \
-    iree/tools/test/simple.mlir \
-    -o /tmp/simple-llvm_aot.vmfb
+$ ../iree-build-android/host/bin/iree-translate \
+  -iree-mlir-to-vm-bytecode-module \
+  -iree-llvm-target-triple=aarch64-linux-android \
+  -iree-hal-target-backends=dylib-llvm-aot \
+  $PWD/iree/tools/test/simple.mlir \
+  -o /tmp/simple-llvm_aot.vmfb
 ```
 
 Then push the IREE runtime executable and module to the device:
 
 ```shell
-$ adb push build-android/iree/tools/iree-run-module /data/local/tmp/
+$ adb push ../iree-build-android/iree/tools/iree-run-module /data/local/tmp/
 $ adb shell chmod +x /data/local/tmp/iree-run-module
 $ adb push /tmp/simple-llvm_aot.vmfb /data/local/tmp/
 ```
diff --git a/docs/get_started/getting_started_linux_bazel.md b/docs/get_started/getting_started_linux_bazel.md
index dcd6bad..a05ebb6 100644
--- a/docs/get_started/getting_started_linux_bazel.md
+++ b/docs/get_started/getting_started_linux_bazel.md
@@ -86,11 +86,11 @@
 ```shell
 build --disk_cache=/tmp/bazel-cache
 
-# Use --config=debug to compile iree and llvm without optimizations
+# Use --config=debug to compile IREE and LLVM without optimizations
 # and with assertions enabled.
 build:debug --config=asserts --compilation_mode=opt '--per_file_copt=iree|llvm@-O0' --strip=never
 
-# Use --config=asserts to enable assertions in iree and llvm.
+# Use --config=asserts to enable assertions in IREE and LLVM.
 build:asserts --compilation_mode=opt '--per_file_copt=iree|llvm@-UNDEBUG'
 ```
 
@@ -117,7 +117,7 @@
 
 ```shell
 $ ./bazel-bin/iree/tools/iree-run-mlir ./iree/tools/test/simple.mlir \
-    -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
+  -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
 ```
 
 ### Further Reading
diff --git a/docs/get_started/getting_started_linux_cmake.md b/docs/get_started/getting_started_linux_cmake.md
index cf53f42..04f499f 100644
--- a/docs/get_started/getting_started_linux_cmake.md
+++ b/docs/get_started/getting_started_linux_cmake.md
@@ -72,7 +72,7 @@
 Configure:
 
 ```shell
-$ cmake -G Ninja -B build/ -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ .
+$ cmake -G Ninja -B ../iree-build/ -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ .
 ```
 
 > Tip:<br>
@@ -84,7 +84,7 @@
 Build all targets:
 
 ```shell
-$ cmake --build build/
+$ cmake --build ../iree-build/
 ```
 
 ## What's next?
@@ -94,8 +94,8 @@
 Check out the contents of the 'tools' build directory:
 
 ```shell
-$ ls build/iree/tools
-$ ./build/iree/tools/iree-translate --help
+$ ls ../iree-build/iree/tools
+$ ../iree-build/iree/tools/iree-translate --help
 ```
 
 Translate a
@@ -103,8 +103,8 @@
 and execute a function in the compiled module:
 
 ```shell
-$ ./build/iree/tools/iree-run-mlir $PWD/iree/tools/test/simple.mlir \
-    -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
+$ ../iree-build/iree/tools/iree-run-mlir $PWD/iree/tools/test/simple.mlir \
+  -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
 ```
 
 ### Further Reading
diff --git a/docs/get_started/getting_started_linux_vulkan.md b/docs/get_started/getting_started_linux_vulkan.md
index 28a54f4..951ee38 100644
--- a/docs/get_started/getting_started_linux_vulkan.md
+++ b/docs/get_started/getting_started_linux_vulkan.md
@@ -47,8 +47,8 @@
 ```shell
 # -- CMake --
 $ export VK_LOADER_DEBUG=all
-$ cmake --build build/ --target iree_hal_vulkan_dynamic_symbols_test
-$ ./build/iree/hal/vulkan/iree_hal_vulkan_dynamic_symbols_test
+$ cmake --build ../iree-build/ --target iree_hal_vulkan_dynamic_symbols_test
+$ ../iree-build/iree/hal/vulkan/iree_hal_vulkan_dynamic_symbols_test
 
 # -- Bazel --
 $ bazel test iree/hal/vulkan:dynamic_symbols_test --test_env=VK_LOADER_DEBUG=all
@@ -63,8 +63,8 @@
 ```shell
 # -- CMake --
 $ export VK_LOADER_DEBUG=all
-$ cmake --build build/ --target iree_hal_cts_driver_test
-$ ./build/iree/hal/cts/iree_hal_cts_driver_test
+$ cmake --build ../iree-build/ --target iree_hal_cts_driver_test
+$ ../iree-build/iree/hal/cts/iree_hal_cts_driver_test
 
 # -- Bazel --
 $ bazel test iree/hal/cts:driver_test --test_env=VK_LOADER_DEBUG=all --test_output=all
@@ -115,11 +115,11 @@
 
 ```shell
 # -- CMake --
-$ cmake --build build/ --target iree_tools_iree-translate
-$ ./build/iree/tools/iree-translate -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv ./iree/tools/test/simple.mlir -o /tmp/module.fb
+$ cmake --build ../iree-build/ --target iree_tools_iree-translate
+$ ../iree-build/iree/tools/iree-translate -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv ./iree/tools/test/simple.mlir -o /tmp/module.vmfb
 
 # -- Bazel --
-$ bazel run iree/tools:iree-translate -- -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv $PWD/iree/tools/test/simple.mlir -o /tmp/module.fb
+$ bazel run iree/tools:iree-translate -- -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv $PWD/iree/tools/test/simple.mlir -o /tmp/module.vmfb
 ```
 
 > Tip:<br>
@@ -132,11 +132,11 @@
 
 ```shell
 # -- CMake --
-$ cmake --build build/ --target iree_tools_iree-run-module
-$ ./build/iree/tools/iree-run-module -module_file=/tmp/module.fb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
+$ cmake --build ../iree-build/ --target iree_tools_iree-run-module
+$ ../iree-build/iree/tools/iree-run-module -module_file=/tmp/module.vmfb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
 
 # -- Bazel --
-$ bazel run iree/tools:iree-run-module -- -module_file=/tmp/module.fb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
+$ bazel run iree/tools:iree-run-module -- -module_file=/tmp/module.vmfb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
 ```
 
 ## Running IREE's Vulkan Samples
@@ -145,8 +145,8 @@
 
 ```shell
 # -- CMake --
-$ cmake --build build/ --target iree_samples_vulkan_vulkan_inference_gui
-$ ./build/iree/samples/vulkan/vulkan_inference_gui
+$ cmake --build ../iree-build/ --target iree_samples_vulkan_vulkan_inference_gui
+$ ../iree-build/iree/samples/vulkan/vulkan_inference_gui
 
 # -- Bazel --
 $ bazel run iree/samples/vulkan:vulkan_inference_gui
diff --git a/docs/get_started/getting_started_macos_bazel.md b/docs/get_started/getting_started_macos_bazel.md
index ad20d84..71b6818 100644
--- a/docs/get_started/getting_started_macos_bazel.md
+++ b/docs/get_started/getting_started_macos_bazel.md
@@ -69,9 +69,9 @@
 
 ```shell
 $ bazel test -k //iree/... \
-    --test_env=IREE_VULKAN_DISABLE=1 \
-    --build_tag_filters="-nokokoro" \
-    --test_tag_filters="--nokokoro,-driver=vulkan"
+  --test_env=IREE_VULKAN_DISABLE=1 \
+  --build_tag_filters="-nokokoro" \
+  --test_tag_filters="-nokokoro,-driver=vulkan"
 ```
 
 > Tip:<br>
@@ -89,11 +89,11 @@
 ```shell
 build --disk_cache=/tmp/bazel-cache
 
-# Use --config=debug to compile iree and llvm without optimizations
+# Use --config=debug to compile IREE and LLVM without optimizations
 # and with assertions enabled.
 build:debug --config=asserts --compilation_mode=opt '--per_file_copt=iree|llvm@-O0' --strip=never
 
-# Use --config=asserts to enable assertions in iree and llvm.
+# Use --config=asserts to enable assertions in IREE and LLVM.
 build:asserts --compilation_mode=opt '--per_file_copt=iree|llvm@-UNDEBUG'
 ```
 
@@ -120,7 +120,7 @@
 
 ```shell
 $ ./bazel-bin/iree/tools/iree-run-mlir ./iree/tools/test/simple.mlir \
-    -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
+  -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
 ```
 
 ### Further Reading
diff --git a/docs/get_started/getting_started_macos_cmake.md b/docs/get_started/getting_started_macos_cmake.md
index d1e393a..e602290 100644
--- a/docs/get_started/getting_started_macos_cmake.md
+++ b/docs/get_started/getting_started_macos_cmake.md
@@ -84,7 +84,7 @@
 Build all targets:
 
 ```shell
-$ cmake --build build/
+$ cmake --build ../iree-build/
 ```
 
 ## What's next?
@@ -94,8 +94,8 @@
 Check out the contents of the 'tools' build directory:
 
 ```shell
-$ ls build/iree/tools
-$ ./build/iree/tools/iree-translate --help
+$ ls ../iree-build/iree/tools
+$ ../iree-build/iree/tools/iree-translate --help
 ```
 
 Translate a
@@ -103,15 +103,15 @@
 and execute a function in the compiled module:
 
 ```shell
-$ ./build/iree/tools/iree-run-mlir $PWD/iree/tools/test/simple.mlir \
-    -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
+$ ../iree-build/iree/tools/iree-run-mlir $PWD/iree/tools/test/simple.mlir \
+  -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
 ```
 
 ### Further Reading
 
 *   For an introduction to IREE's project structure and developer tools, see
     [Developer Overview](../developing_iree/developer_overview.md).
-*   To understand how IREE implements HAL over Metal, see
+*   To understand how IREE implements a HAL driver using Metal, see
     [Metal HAL Driver](../design_docs/metal_hal_driver.md). <!-- TODO: Link to
     macOS versions of these guides once they are developed.
 *   To use IREE's Python bindings, see
diff --git a/docs/get_started/getting_started_windows_cmake.md b/docs/get_started/getting_started_windows_cmake.md
index 70c7ba3..56f918e 100644
--- a/docs/get_started/getting_started_windows_cmake.md
+++ b/docs/get_started/getting_started_windows_cmake.md
@@ -45,7 +45,7 @@
 *   Initialize MSVC by running `vcvarsall.bat`:
 
     ```powershell
-    > "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvars64.bat"
+    > & "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Auxiliary\Build\vcvars64.bat"
     ```
 
 ## Clone and Build
@@ -70,7 +70,7 @@
 Configure:
 
 ```powershell
-> cmake -G Ninja -B build\ .
+> cmake -G Ninja -B ..\iree-build\ .
 ```
 
 > Tip:<br>
@@ -82,7 +82,7 @@
 Build all targets:
 
 ```powershell
-> cmake --build build\
+> cmake --build ..\iree-build\
 ```
 
 ## What's next?
@@ -92,8 +92,8 @@
 Check out the contents of the 'tools' build directory:
 
 ```powershell
-> dir build\iree\tools
-> .\build\iree\tools\iree-translate.exe --help
+> dir ..\iree-build\iree\tools
+> ..\iree-build\iree\tools\iree-translate.exe --help
 ```
 
 Translate a
@@ -101,7 +101,7 @@
 and execute a function in the compiled module:
 
 ```powershell
-> .\build\iree\tools\iree-run-mlir.exe .\iree\tools\test\simple.mlir -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
+> ..\iree-build\iree\tools\iree-run-mlir.exe .\iree\tools\test\simple.mlir -function-input="i32=-2" -iree-hal-target-backends=vmla -print-mlir
 ```
 
 ### Further Reading
diff --git a/docs/get_started/getting_started_windows_vulkan.md b/docs/get_started/getting_started_windows_vulkan.md
index 5c6c4fb..cdd89ac 100644
--- a/docs/get_started/getting_started_windows_vulkan.md
+++ b/docs/get_started/getting_started_windows_vulkan.md
@@ -47,8 +47,8 @@
 ```powershell
 # -- CMake --
 > set VK_LOADER_DEBUG=all
-> cmake --build build\ --target iree_hal_vulkan_dynamic_symbols_test
-> .\build\iree\hal\vulkan\iree_hal_vulkan_dynamic_symbols_test.exe
+> cmake --build ..\iree-build\ --target iree_hal_vulkan_dynamic_symbols_test
+> ..\iree-build\iree\hal\vulkan\iree_hal_vulkan_dynamic_symbols_test.exe
 
 # -- Bazel --
 > bazel test iree/hal/vulkan:dynamic_symbols_test --test_env=VK_LOADER_DEBUG=all
@@ -63,8 +63,8 @@
 ```powershell
 # -- CMake --
 > set VK_LOADER_DEBUG=all
-> cmake --build build\ --target iree_hal_cts_driver_test
-> .\build\iree\hal\cts\iree_hal_cts_driver_test.exe
+> cmake --build ..\iree-build\ --target iree_hal_cts_driver_test
+> ..\iree-build\iree\hal\cts\iree_hal_cts_driver_test.exe
 
 # -- Bazel --
 > bazel test iree/hal/cts:driver_test --test_env=VK_LOADER_DEBUG=all --test_output=all
@@ -113,11 +113,11 @@
 
 ```powershell
 # -- CMake --
-> cmake --build build\ --target iree_tools_iree-translate
-> .\build\iree\tools\iree-translate.exe -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv .\iree\tools\test\simple.mlir -o .\build\module.fb
+> cmake --build ..\iree-build\ --target iree_tools_iree-translate
+> ..\iree-build\iree\tools\iree-translate.exe -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv .\iree\tools\test\simple.mlir -o .\build\module.vmfb
 
 # -- Bazel --
-> bazel run iree/tools:iree-translate -- -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv .\iree\tools\test\simple.mlir -o .\build\module.fb
+> bazel run iree/tools:iree-translate -- -iree-mlir-to-vm-bytecode-module -iree-hal-target-backends=vulkan-spirv .\iree\tools\test\simple.mlir -o .\build\module.vmfb
 ```
 
 > Tip:<br>
@@ -130,11 +130,11 @@
 
 ```powershell
 # -- CMake --
-> cmake --build build\ --target iree_tools_iree-run-module
-> .\build\iree\tools\iree-run-module.exe -module_file=.\build\module.fb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
+> cmake --build ..\iree-build\ --target iree_tools_iree-run-module
+> ..\iree-build\iree\tools\iree-run-module.exe -module_file=.\build\module.vmfb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
 
 # -- Bazel --
-> bazel run iree/tools:iree-run-module -- -module_file=.\build\module.fb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
+> bazel run iree/tools:iree-run-module -- -module_file=.\build\module.vmfb -driver=vulkan -entry_function=abs -function_inputs="i32=-2"
 ```
 
 ## Running IREE's Vulkan Samples
@@ -143,8 +143,8 @@
 
 ```powershell
 # -- CMake --
-> cmake --build build\ --target iree_samples_vulkan_vulkan_inference_gui
-> .\build\iree\samples\vulkan\vulkan_inference_gui.exe
+> cmake --build ..\iree-build\ --target iree_samples_vulkan_vulkan_inference_gui
+> ..\iree-build\iree\samples\vulkan\vulkan_inference_gui.exe
 
 # -- Bazel --
 > bazel run iree/samples/vulkan:vulkan_inference_gui
diff --git a/docs/using_iree/using_colab.md b/docs/using_iree/using_colab.md
index 150c7ed..56a73c2 100644
--- a/docs/using_iree/using_colab.md
+++ b/docs/using_iree/using_colab.md
@@ -9,7 +9,7 @@
 Run:
 
 ```shell
-./colab/start_colab_kernel.py
+$ python3 ./colab/start_colab_kernel.py
 ```
 
 This will start a jupyter notebook on port 8888. Then navigate to
@@ -33,15 +33,15 @@
 ### Install Jupyter (from https://jupyter.org/install)
 
 ```shell
-python3 -m pip install --upgrade pip
-python3 -m pip install jupyter
+$ python3 -m pip install --upgrade pip
+$ python3 -m pip install jupyter
 ```
 
 ### Setup colab (https://research.google.com/colaboratory/local-runtimes.html)
 
 ```shell
-python3 -m pip install jupyter_http_over_ws
-jupyter serverextension enable --py jupyter_http_over_ws
+$ python3 -m pip install jupyter_http_over_ws
+$ jupyter serverextension enable --py jupyter_http_over_ws
 ```
 
 ## Local and Hosted Runtimes