There are several possible reasons why the Spark 3.2 driver experiences heavy garbage collection while reading a JSON file when the same job did not in Spark 2.3. Some possibilities are:
Changes in the underlying code: Spark 3.2 may have introduced changes that result in more garbage collection while reading a JSON file. These changes could relate to how Spark serializes and deserializes data, or how it manages memory.
Differences in configuration: Spark 3.2 and Spark 2.3 may have different default settings for memory allocation, garbage collection, or other parameters that affect how much collection happens while reading a JSON file. It is also possible that the Spark 3.2 driver is simply configured differently from the Spark 2.3 driver.
Size of the JSON file: If the JSON file being read is large, Spark 3.2 may need to allocate more memory and trigger more garbage collection than Spark 2.3. This could be due to changes in how Spark manages memory or in the size of the objects created during deserialization.
Dependencies used: Spark 3.2 ships with different versions of its dependencies than Spark 2.3 (including the libraries used for JSON parsing and serialization). Different library versions can have different allocation patterns, which could result in more garbage collection in Spark 3.2.
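Whichever of the causes above applies, the underlying mechanism is the same: deserializing many small JSON records allocates a flood of short-lived objects, and that allocation rate is what drives collection frequency. Here is a minimal, runtime-agnostic illustration in plain Python (CPython's generational collector stands in for the JVM's here; the record shape and counts are made up for illustration, not Spark measurements):

```python
import gc
import json

# One small JSON record, similar in spirit to a row in a JSON data source.
record = json.dumps({"id": 1, "name": "x" * 100, "tags": ["a", "b", "c"]})

gc.collect()  # start from a clean slate
before = gc.get_stats()[0]["collections"]  # young-generation collection count

# Parsing each record allocates fresh dicts, lists, and strings -- exactly
# the kind of short-lived object churn that drives GC during a JSON read.
parsed = [json.loads(record) for _ in range(200_000)]

after = gc.get_stats()[0]["collections"]
print(f"young-generation collections triggered: {after - before}")
```

The point is that GC pressure scales with how many objects are created per record, so a version that allocates even slightly more per row (different parser, different internal representation) collects noticeably more often over millions of rows.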
In general, it is difficult to pinpoint the exact reason for the difference in garbage collection between Spark 3.2 and Spark 2.3 without more information about the specific use case and configuration; comparing the effective configuration and GC logs from both versions is the most reliable way to narrow it down.
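As a practical first step, you can pin the collector explicitly and enable GC logging on the driver in both versions so the runs are comparable. A sketch, assuming a JDK 9+ driver (`-Xlog:gc*` syntax; on JDK 8 use `-verbose:gc -XX:+PrintGCDetails` instead) — the script name, memory size, and log path are placeholders, not recommendations:

```shell
spark-submit \
  --conf spark.driver.memory=4g \
  --conf "spark.driver.extraJavaOptions=-XX:+UseG1GC -Xlog:gc*:file=/tmp/driver-gc.log" \
  read_json_job.py
```

To diff the defaults between the two clusters, compare the output of `spark.sparkContext.getConf().getAll()` from a PySpark shell, or the "Environment" tab of the Spark UI, on each version.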
Asked: 2022-11-25 11:00:00 +0000