Pipelines emit gauges for the min, max, and stddev of processing time per record. These gauges are per mapper/reducer/executor, but when the UI fetches the metrics it uses the workflow as the context, so the gauges are aggregated. Gauge aggregation keeps only the most recent value, so the reported min, max, and stddev reflect a single mapper/reducer/executor rather than all of them.
This can result in impossible numbers, like a max that is lower than the average.
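A minimal sketch of how "latest" aggregation can produce an impossible combination. The report values and executor names below are illustrative, not taken from a real pipeline; the point is that each gauge is resolved independently to whatever value arrived last, regardless of which context it came from.

```python
# Hypothetical gauge reports, in arrival order: (context, gauge name, value).
reports = [
    ("executor-1", "avg", 10.0),
    ("executor-1", "max", 50.0),
    ("executor-2", "max", 4.0),   # arrives after executor-1's max
    # executor-2's avg report has not arrived yet
]

# "Latest" aggregation at the workflow level: keep only the most
# recently seen value for each gauge name, ignoring the context.
latest = {}
for _context, name, value in reports:
    latest[name] = value

print(latest)  # {'avg': 10.0, 'max': 4.0} -- max lower than avg
```

The `avg` gauge still holds executor-1's value while `max` was overwritten by executor-2's, yielding a max below the average.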
Fixing this purely in the UI would require extra logic to fetch the metrics for every context and to know how to aggregate each one. It would be more useful if the backend supported aggregation methods beyond sum (for counts) and latest (for gauges). Standard deviation is tricky in either approach, because merging standard deviations requires extra per-context information, namely the mean and the total count.
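To make the last point concrete, here is a sketch of merging per-context stats into global ones. The function name and the sample numbers are made up for illustration, and it assumes population (not sample) standard deviations; note that the count and mean of each context are required inputs, so stddev gauges alone are not mergeable.

```python
import math

def merge_stats(parts):
    """Merge per-context (count, mean, stddev) triples into global stats.

    Uses E[x^2] = stddev^2 + mean^2 per context, weighted by count,
    then recovers the global variance as E[x^2] - mean^2.
    """
    n = sum(c for c, _, _ in parts)
    mean = sum(c * m for c, m, _ in parts) / n
    ex2 = sum(c * (sd * sd + m * m) for c, m, sd in parts) / n
    return n, mean, math.sqrt(ex2 - mean * mean)

# Illustrative per-executor stats: (record count, mean, stddev).
parts = [(100, 10.0, 2.0), (300, 3.0, 1.0)]
n, mean, sd = merge_stats(parts)
print(n, mean)  # 400 4.75
```

The merged mean (4.75) and stddev differ from anything a single executor reported, which is exactly the information the current latest-gauge aggregation throws away.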