When a Spark job reports a Peak Execution Memory (on-heap) of 0.0 B, it means that no on-heap execution memory was recorded for its tasks. This is often normal: execution memory tracks shuffles, sorts, joins, and aggregations, so map-only tasks that perform none of these operations report a peak of zero. It can also happen when the job is configured to allocate execution memory off-heap (via spark.memory.offHeap.enabled), in which case the usage appears under the off-heap metric instead of the on-heap one. Finally, a bug in the job or in metric collection can cause peak memory usage to go unreported. Accurately monitoring memory usage in Spark jobs matters for tuning their performance and ensuring they run smoothly.
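
To illustrate the off-heap case, here is a minimal PySpark sketch (app name, off-heap size, and the sample workload are illustrative, not prescriptive). With spark.memory.offHeap.enabled set to true, execution memory is allocated outside the JVM heap, so an on-heap peak of 0.0 B can be entirely expected even for a shuffle-heavy job:

```python
from pyspark.sql import SparkSession

# Sketch: route execution memory off-heap. Under this configuration the
# on-heap Peak Execution Memory metric can legitimately read 0.0 B, with
# the usage showing up in the off-heap metric instead.
spark = (
    SparkSession.builder
    .appName("offheap-demo")  # hypothetical app name
    .config("spark.memory.offHeap.enabled", "true")
    .config("spark.memory.offHeap.size", "2g")  # illustrative size
    .getOrCreate()
)

# Force a shuffle so execution memory is actually used; a map-only job
# that never shuffles, sorts, or aggregates would also report ~0 peak
# execution memory regardless of this setting.
df = spark.range(0, 10_000_000)
df.groupBy((df.id % 100).alias("bucket")).count().collect()

spark.stop()
```

Running this and then checking the stage details in the Spark UI should show the execution memory attributed off-heap; if both on-heap and off-heap peaks read zero for a job that clearly shuffles or aggregates, that points to a metrics-reporting problem rather than a configuration choice.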