[dvsim] Create jobs with dependencies instead of sub-jobs

This looks a bit more like normal schedulers, where you have a big
load of jobs with dependencies between them. One reason to do this
sort of thing is that you simplify the process of actually running
things (nothing new to kick off).

The main visible change here, however, is that if a build fails we no
longer run each dependent job for a second before giving up on
it. (We could have retro-fitted that into the existing design, but I'm
trying to move things towards a more "standard" shape as I go.) With
the new code, when we're about to dispatch a job, we check whether its
dependencies have all run successfully. If not, we kill it.
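The dispatch-time check can be sketched roughly as follows. The `Job`
class, `dispatch` function and status handling here are illustrative
stand-ins, not the actual dvsim Scheduler code; only the status letters
('P' passed, 'F' failed, 'K' killed, 'Q' queued) follow dvsim's
convention.

```python
class Job:
    def __init__(self, name, dependencies=None):
        self.name = name
        self.dependencies = dependencies or []
        self.status = 'Q'          # queued

    def run(self):
        self.status = 'P'          # stand-in for launching the process

    def kill(self):
        self.status = 'K'


def dispatch(job):
    '''Launch job, or kill it if any dependency did not pass.'''
    if all(dep.status == 'P' for dep in job.dependencies):
        job.run()
    else:
        job.kill()


build = Job('build')
test = Job('test', dependencies=[build])

build.status = 'F'                 # pretend the build failed
dispatch(test)                     # test is killed, never launched
```

The point of doing the check at dispatch time rather than up front is
that a dependency's final status is not known until its phase has
finished.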

Note that this logic is made easier because dvsim runs in
phases (called "targets" in the code). We ensure that dependencies are
always in an earlier phase, so we know that they will have run to
completion or failed before any dependent job is started. In code,
this is the assertion that dep.status is 'P', 'F' or 'K' in
Scheduler.dispatch().
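The phase invariant can be written down as a small check. The names
and the target list below are illustrative (only "build", "run" and
"cov_report" appear in this patch; the exact ordering is an
assumption), not dvsim's actual code:

```python
from collections import namedtuple

# A job belongs to one target; its dependencies must come from
# strictly earlier targets, so they are terminal before it dispatches.
Job = namedtuple('Job', ['name', 'target', 'dependencies'])

TARGET_ORDER = ['build', 'run', 'cov_merge', 'cov_report']


def check_phase_invariant(jobs):
    '''Assert every dependency belongs to a strictly earlier target.'''
    for job in jobs:
        for dep in job.dependencies:
            assert (TARGET_ORDER.index(dep.target)
                    < TARGET_ORDER.index(job.target)), \
                '{} depends on {}, which is not in an earlier ' \
                'phase'.format(job.name, dep.name)


build = Job('vcs_build', 'build', [])
test = Job('smoke_test', 'run', [build])
check_phase_invariant([build, test])   # passes: build precedes run
```

Because of this invariant, the scheduler never has to wait on a
dependency that is still queued or running: by dispatch time its
status is one of the terminal letters.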

The only other change in this patch is to printing. Because we now
have jobs for future phases/targets, we don't want to print both a
"[build]: ..." and a "[run]: ..." line each time. To avoid that, we
skip targets where everything is still queued if we've printed
something for a previous target.
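The printing tweak amounts to something like the sketch below; the
function and its input shape are hypothetical, chosen only to show the
skip rule, and do not match dvsim's actual printing code:

```python
def status_lines(jobs_by_target):
    '''Build one "[target]: ..." line per target worth printing.

    jobs_by_target maps target name -> list of per-job status chars,
    in phase order ('Q' means still queued).
    '''
    lines = []
    for target, statuses in jobs_by_target.items():
        all_queued = all(s == 'Q' for s in statuses)
        if lines and all_queued:
            # A previous target already printed something, and nothing
            # in this later phase has started: skip the noise line.
            continue
        lines.append('[{}]: {}'.format(target, ''.join(statuses)))
    return lines


# While the build phase is running, the run phase stays silent:
print(status_lines({'build': ['D', 'Q'], 'run': ['Q', 'Q', 'Q']}))
# -> ['[build]: DQ']
```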

Signed-off-by: Rupert Swarbrick <rswarbrick@lowrisc.org>
diff --git a/util/dvsim/Deploy.py b/util/dvsim/Deploy.py
index 1bf07e1..a11e1a2 100644
--- a/util/dvsim/Deploy.py
+++ b/util/dvsim/Deploy.py
@@ -32,17 +32,10 @@
     # be joined with '&&' instead of a space.
     cmds_list_vars = []
 
-    def __self_str__(self):
-        if log.getLogger().isEnabledFor(VERBOSE):
-            return pprint.pformat(self.__dict__)
-        else:
-            ret = self.cmd
-            if self.sub != []:
-                ret += "\nSub:\n" + str(self.sub)
-            return ret
-
     def __str__(self):
-        return self.__self_str__()
+        return (pprint.pformat(self.__dict__)
+                if log.getLogger().isEnabledFor(VERBOSE)
+                else self.cmd)
 
     def __init__(self, sim_cfg):
         '''Initialize common class members.'''
@@ -67,8 +60,8 @@
         # List of vars required to be exported to sub-shell
         self.exports = None
 
-        # Deploy sub commands
-        self.sub = []
+        # A list of jobs on which this job depends
+        self.dependencies = []
 
         # Process
         self.process = None
@@ -382,12 +375,6 @@
                     self.status = 'F'
                     break
 
-    # Recursively set sub-item's status if parent item fails
-    def set_sub_status(self, status):
-        for sub_item in self.sub:
-            sub_item.status = status
-            sub_item.set_sub_status(status)
-
     def link_odir(self):
         if self.status == '.':
             log.error("Method unexpectedly called!")
@@ -429,10 +416,6 @@
             if self.log_fd:
                 self.log_fd.close()
             self.status = "K"
-        # recurisvely kill sub target
-        elif len(self.sub):
-            for item in self.sub:
-                item.kill()
 
     def kill_remote_job(self):
         '''
@@ -585,7 +568,7 @@
 
     cmds_list_vars = ["pre_run_cmds", "post_run_cmds"]
 
-    def __init__(self, index, test, sim_cfg):
+    def __init__(self, index, test, build_job, sim_cfg):
         # Initialize common vars.
         super().__init__(sim_cfg)
 
@@ -615,6 +598,9 @@
             "run_fail_patterns": False
         })
 
+        if build_job is not None:
+            self.dependencies.append(build_job)
+
         self.index = index
         self.seed = RunTest.get_seed()
 
@@ -810,10 +796,12 @@
     # Register all builds with the class
     items = []
 
-    def __init__(self, sim_cfg):
+    def __init__(self, merge_job, sim_cfg):
         # Initialize common vars.
         super().__init__(sim_cfg)
 
+        self.dependencies.append(merge_job)
+
         self.target = "cov_report"
         self.pass_patterns = []
         self.fail_patterns = []