I observed this for MapReduce, but looking at the code, it is likely the same for Spark and workflows, and possibly other program types.
In the MapReduce case, I had a MapReduce program that did not configure its output; as a result, no output format was set and job.submit() failed with "invalid job configuration". In that case I observed that destroy() was not called (my test case depended on it).
Looking at the code, this is because MapReduceRuntimeService extends Guava's AbstractExecutionThreadService, which does not call shutDown() if startUp() throws. That means the other work we do in shutDown(), such as running the cleanup task, also doesn't happen.
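To illustrate the lifecycle issue, here is a minimal self-contained sketch (not Guava's actual code, and independent of MapReduceRuntimeService) that mirrors the relevant control flow in AbstractExecutionThreadService's worker thread: startUp() runs first, and run() and shutDown() are only reached if it succeeds. The exception message is borrowed from the failure above for illustration.

```java
// Simplified sketch of the AbstractExecutionThreadService lifecycle:
// if startUp() throws, the service transitions to FAILED and shutDown()
// (where cleanup would run) is never invoked.
public class LifecycleSketch {
    static boolean shutDownCalled = false;

    static void startUp() throws Exception {
        // Simulates job.submit() rejecting an incomplete job configuration.
        throw new Exception("invalid job configuration");
    }

    static void run() {
        // Main work (e.g. waiting for the MapReduce job) would happen here.
    }

    static void shutDown() {
        // Cleanup tasks (e.g. calling destroy()) would run here.
        shutDownCalled = true;
    }

    static void executeLifecycle() {
        try {
            startUp();   // throws, so control jumps to the catch block
            run();
            shutDown();  // never reached when startUp() throws
        } catch (Throwable t) {
            // Service fails without any shutDown() invocation.
            System.out.println("service failed: " + t.getMessage());
        }
    }

    public static void main(String[] args) {
        executeLifecycle();
        System.out.println("shutDown called: " + shutDownCalled);
    }
}
```

Running this prints "shutDown called: false", matching the observed behavior: cleanup only happens when the failure occurs after startUp() has completed.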
It works fine if the program fails after startUp() has completed.