CDAP / CDAP-7444

If a program fails during startup, destroy() is never called


    Details

    • Release Notes:
      Ensured that destroy() is always called for MapReduce, even if initialize() fails.

      Description

I observed this for MapReduce, but looking at the code, the same likely applies to Spark and workflows, and possibly other program types.

In the MapReduce case, I had a MapReduce program that did not configure its output. As a result, no output format was set and job.submit() failed with "invalid job configuration". In that case, destroy() was not called (my test case depended on it being called).

Looking at the code, this is because MapReduceRuntimeService extends AbstractExecutionThreadService, which does not call shutDown() if startUp() fails. That means other tasks we perform during shutdown, such as running the cleanup task, also do not happen.

      It works fine if the program fails after startUp() has completed.
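The failure mode described above can be sketched in plain Java, without the Guava dependency. The class and method names below are hypothetical, not CDAP code: initialize() stands in for the program's initialization, and the "buggy" path mirrors AbstractExecutionThreadService's behavior of never reaching shutDown() (and thus destroy()) when startUp() throws. The "fixed" path shows one plausible remedy: catching the failure inside startUp() and invoking destroy() before propagating it.

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {
    // Records which lifecycle methods ran, for illustration.
    public static final List<String> calls = new ArrayList<>();

    static void initialize() {
        calls.add("initialize");
        // Simulates job.submit() failing, e.g. "invalid job configuration".
        throw new IllegalStateException("invalid job configuration");
    }

    static void destroy() {
        calls.add("destroy");
    }

    // Buggy pattern: if initialize() throws during startUp(), the framework
    // never calls shutDown(), so destroy() is skipped entirely.
    public static void startUpBuggy() {
        initialize();
        // destroy() would only ever run later, from shutDown().
    }

    // Fixed pattern: guarantee destroy() runs even when initialize() fails,
    // then rethrow so the program still transitions to the FAILED state.
    public static void startUpFixed() {
        try {
            initialize();
        } catch (Exception e) {
            try {
                destroy();
            } catch (Exception suppressed) {
                e.addSuppressed(suppressed);
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        calls.clear();
        try { startUpBuggy(); } catch (RuntimeException e) { /* no shutDown() */ }
        System.out.println("buggy: " + calls);   // destroy missing: [initialize]

        calls.clear();
        try { startUpFixed(); } catch (RuntimeException e) { /* still fails */ }
        System.out.println("fixed: " + calls);   // [initialize, destroy]
    }
}
```

Note that the fixed variant still propagates the original exception, so the program run is still reported as failed; it only guarantees that cleanup happens first.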

People

• Assignee: andreas (Andreas Neumann)
• Reporter: andreas (Andreas Neumann)
• Votes: 0
• Watchers: 2
