Spark Spill to Disk: "Thread n spilling sort data of n GB to disk (n times so far)"

This message means that a task's sort data did not fit in the executor's execution memory, so Spark spilled the intermediate sort data to disk. To reduce or eliminate these spills, try the following (a configuration sketch follows this list):

- Manually repartition() the prior stage so that the input produces smaller partitions.
- Increase the shuffle buffer by increasing the memory available to each executor (spark.executor.memory).
- Increase the shuffle buffer by raising the fraction of executor memory allocated to it (spark.shuffle.memoryFraction) from the default of 0.2. If you raise it, lower spark.storage.memoryFraction by the same amount, since the two fractions share the executor heap. (These fractions apply to the legacy memory manager; Spark 1.6+ replaced them with unified memory management, covered in the links below.)
- Increase the shuffle buffer available to each thread by reducing the ratio of worker threads (SPARK_WORKER_CORES) to executor memory.
- Consider increasing the default number of shuffle partitions (spark.sql.shuffle.partitions) from 200 to a number that yields partitions close to the HDFS block size (i.e., 128 MB to 256 MB).
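As a minimal sketch of how these knobs fit together, the Scala snippet below sets the memory and shuffle options when the session is built and repartitions an input DataFrame. The application name, memory size, fraction values, partition count, S3 path, and column name are all illustrative assumptions, not recommendations; size them against your own executors and shuffle volume. (SPARK_WORKER_CORES is an environment variable read by standalone-mode workers, so it is set outside the application rather than in this snippet.)

```scala
import org.apache.spark.sql.SparkSession

// A minimal sketch, not a drop-in config: every value below is an
// assumption for illustration and should be tuned to your cluster.
val spark = SparkSession.builder()
  .appName("shuffle-spill-tuning-sketch") // hypothetical app name
  // Bigger executors mean a bigger shuffle buffer in absolute terms.
  .config("spark.executor.memory", "8g")
  // Legacy (pre-1.6) fractions: raise the shuffle share and give the
  // difference back from the storage share.
  .config("spark.shuffle.memoryFraction", "0.4") // default 0.2
  .config("spark.storage.memoryFraction", "0.4") // default 0.6
  // More shuffle partitions -> smaller partitions per task. Aim for
  // roughly (total shuffle bytes / 128-256 MB) instead of the default 200.
  .config("spark.sql.shuffle.partitions", "800")
  .getOrCreate()

// Repartitioning the prior stage keeps each task's sort small enough to
// fit in memory instead of spilling. Path and column are hypothetical.
val events = spark.read.parquet("s3://my-bucket/events/")
val evenlySpread = events.repartition(800, events("customer_id"))
```

One way to pick the partition count: divide the shuffle stage's data size (visible in the Spark UI) by a 128-256 MB target. For example, 150 GB of shuffle data at roughly 192 MB per partition works out to about 800 partitions, the value assumed above.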

Further reading on Spark memory management:

https://0x0fff.com/spark-memory-management/

https://www.tutorialdocs.com/article/spark-memory-management.html

https://www.youtube.com/watch?v=7ooZ4S7Ay6Y&feature=youtu.be