About 7 months ago I completed work on re-implementing a data-flow process using Apache Spark and Kafka. After the soft-release I faced two main challenges with Apache Spark and I would like to share my experience and how I solved those problems.
The implementation – Brief explanation
Let me do my best to explain what the application does and how it does it without revealing sensitive company information.
Below is a simplified illustration of what the process looks like.

Fig. 1. A simplified illustration of the data flow.
The Spark application ingests data from two Kafka topics, applies some computation to the messages from both topics based on the given business and application logic, and then outputs the result to another Kafka topic.
The application reads a batch from both input topics every 30 seconds, but writes to the output topic every 90 seconds. It is able to do this using a Spark feature called a window operation. Basically, a window operation allows you to apply computations to the stream data received over a set number of time windows. In the case of this application a unit of time is 30 seconds, the window length is 3 units of time, and the sliding interval is also 3 units of time.
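To make the batching and windowing concrete, here is a minimal sketch using the DStream-based Spark Streaming API, which is what the description above assumes. The Kafka direct streams and the actual business logic are omitted; the socket source, host, port, and class names are placeholders.

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class WindowedJobSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("windowed-job-sketch");
        // One "unit of time": a new micro-batch is read every 30 seconds.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

        // Stand-in for the union of the two Kafka input streams
        // (normally built with the Kafka direct stream API).
        JavaDStream<String> input = jssc.socketTextStream("localhost", 9999);

        // Window length = 3 units (90s) and sliding interval = 3 units (90s),
        // so each 90-second block of data is processed exactly once.
        JavaDStream<String> windowed = input.window(Durations.seconds(90), Durations.seconds(90));

        // Placeholder for the real computation and the write to the output topic.
        windowed.count().print();

        jssc.start();
        jssc.awaitTermination();
    }
}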
The application was deployed to production in client mode using Docker.
Challenges
After the soft-release I ran into two major challenges:
- Out of memory exceptions
- Insufficient disk space errors
I experienced these one after the other, the first being the out of memory exceptions.
Out of Memory Exceptions:
During the test release I started running into out of memory exceptions. After some research I figured out that I had to set the spark.executor.memory config param to a value appropriate for the application. The Spark executor memory determines the Java heap size of each executor.
By default the Spark executor memory is 1G. I know what you are thinking: why isn't 1G of memory enough? Well, the truth is that even though the Spark executor memory is set to 1G, the application only has access to a fraction of that. This is a result of how Spark manages its memory.

Fig 2. Apache Spark Memory Management Model. https://0x0fff.com/spark-memory-management
For more detail on how Spark manages memory, read the article linked above.
If spark.executor.memory = 1G, then Spark memory = (1024MB - 300MB) * 0.75 ≈ 543MB.
So the Storage memory becomes 543MB / 2 ≈ 271MB. The Storage memory is the memory available to the application for caching data.
Because the application made use of the window operation, which puts a lot of data in cache, it was constantly exceeding that ~271MB, especially during peak periods.
To remedy this I set spark.executor.memory to 4G, which made the Storage memory = ((4096MB - 300MB) * 0.75) / 2 ≈ 1423MB.
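For reference, here is a minimal sketch of applying that setting when building the SparkConf (the app name is a placeholder); the same value can equivalently be passed to spark-submit via --executor-memory 4g.

import org.apache.spark.SparkConf;

public class ExecutorMemoryConfig {
    public static void main(String[] args) {
        // Give each executor a 4G JVM heap so that the Storage memory derived
        // from it (~1423MB here) is large enough for the windowed caching.
        SparkConf conf = new SparkConf()
                .setAppName("windowed-job-sketch") // placeholder name
                .set("spark.executor.memory", "4g");
        // ... build the streaming context from this conf as usual
    }
}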
Insufficient disk space errors:
After the fix for the memory issue, the application ran fine for a few days. But after some weeks our Systems team started noticing that the application was using a lot of disk space, and the usage was growing consistently. As a temporary solution they would kill the Docker container in which the application was running and spin up a new one. This cycle continued for weeks.
root@docker:/app/code# ls -la /tmp/
drwxrwxrwt 110 root root  20K Jun 12 09:31 .
drwxr-xr-x  75 root root 4.0K Jun 12 09:31 ..
drwxr-xr-x  66 root root 4.0K Jun  9 11:55 blockmgr-0e460744-08e2-47a5-b706-d0fb026e0b59
drwxr-xr-x  66 root root 4.0K Jun  9 12:33 blockmgr-12b905bd-9d79-4ce9-b305-8ccf0b5b4b2c
drwxr-xr-x  66 root root 4.0K Jun  9 12:00 blockmgr-16f0bfdf-f779-42da-b56f-ca392b97d29c
drwxr-xr-x  66 root root 4.0K Jun 12 09:17 blockmgr-1808b3db-eb97-46d0-afcd-d205302fd863
drwxr-xr-x   2 root root 4.0K Jun 12 09:31 hsperfdata_root
-rwxr--r--   1 root root 258K Jun  9 13:14 snappy-1.1.2-0093e660-f792-4d6b-8d0d-9932aa3b4825-libsnappyjava.so
-rwxr--r--   1 root root 258K Jun  9 12:57 snappy-1.1.2-01f4243b-1cb5-4400-9878-a7c2a89788bc-libsnappyjava.so
-rwxr--r--   1 root root 258K Jun  9 10:40 snappy-1.1.2-022e3d11-d86e-4d22-bb39-850dcae123a2-libsnappyjava.so
-rwxr--r--   1 root root 258K Jun 12 09:31 snappy-1.1.2-03c1b15f-dcb1-4bdb-b9df-40cce262120f-libsnappyjava.so
drwx------   4 root root 4.0K Jun  9 13:03 spark-054a1704-dd37-4a72-8552-87031b45d362
drwx------   4 root root 4.0K Jun 12 09:25 spark-0d6662aa-7036-41e4-b9aa-90ef9bfe0c91
drwx------   4 root root 4.0K Jun  9 12:08 spark-0e51d1d9-749a-4f5c-8dba-c9283389a94f
drwx------   4 root root 4.0K Jun  9 14:21 spark-22831086-e35e-43e6-8df5-9232e5908bb6
With the help of the Systems team we found that the major culprit of the disk usage was the temp folder. We took a peek and found a bunch of temp files. Each file was very small, but it adds up when you have a whole lot of them. After some research we found that when a Spark application starts up it creates three temp files. For some reason the container running our application kept restarting every couple of hours or days, and each time it did, the application created three new temp files. We also discovered that if a Spark application is deployed in cluster mode, the master node in the cluster handles the clean-up of the temp files. But since we deployed our application in client mode, it was our responsibility to clean them up.
Now as you can imagine our Systems team wasn’t very happy with always being paged due to disk space issues. So I had to figure out a way to clean up these temp files programmatically.
So after further research I discovered SparkListener.
SparkListener is a mechanism in Spark for intercepting events that are emitted over the course of a Spark application's execution. One such event is the application start event (SparkListenerApplicationStart, delivered to the onApplicationStart callback), which is the perfect time to clean up leftover temp files. So I wrote a little logic to implement the SparkListener interface.
public class CustomListener implements SparkListener {

    @Override
    public void onApplicationStart(final SparkListenerApplicationStart arg0) {
        final SparkTempCleanupService sparkTempCleanupService = new SparkTempCleanupService();
        sparkTempCleanupService.cleanupTempFiles();
    }
}

public class SparkTempCleanupService {

    public void cleanupTempFiles() {
        final File[] tempFiles = this.tempDir.listFiles();
        // On a fresh start the tmp directory will have only four temp files;
        // if that is the case then don't bother.
        if (tempFiles.length > 4) {
            this.fetchTempFiles(tempFiles);
            this.sortTempFilesByLastModified();
            this.selectTempFilesToBeDeleted();
            this.deleteOldTempFiles(tempFiles);
            this.logger.info("Temp files cleanup COMPLETE!");
        } else {
            this.logger.info("No temp files to delete. Nothing to do here, carry on...");
        }
    }
}
The above is just semi-pseudocode of the SparkListener implementation.
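One detail the pseudocode leaves out is registration: Spark only invokes a listener that has been registered. Below is a minimal sketch of one way to do that via the spark.extraListeners config, assuming CustomListener lives in an illustrative com.example package and has a no-arg constructor; a listener can also be registered programmatically with SparkContext.addSparkListener.

import org.apache.spark.SparkConf;

public class ListenerRegistration {
    public static void main(String[] args) {
        // Spark instantiates and registers the listed classes itself when the
        // SparkContext is created, so a zero-argument constructor is required.
        SparkConf conf = new SparkConf()
                .setAppName("windowed-job-sketch") // placeholder name
                .set("spark.extraListeners", "com.example.CustomListener");
        // ... create the SparkContext / streaming context from this conf as usual
    }
}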
We deployed it and it worked like a charm. We haven't had a disk space issue since.
Lessons Learned
- Understand your data, the kind of operation that needs to be performed on it, and how that operation will be performed. This gives you some insight into how much memory the operation is likely to use; you can then use the Spark memory formula to calculate how much memory to allocate.
- Be aware of the temp files. Temp files are created each time a Spark application starts, and deploying a Spark application in client mode means you have to handle cleaning them up yourself. The temp files to be deleted are the ones whose names begin with snappy-, spark-, or blockmgr-. Only delete the older ones; do not delete the ones created during the current application start-up (a sketch of this selection step follows below).
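To make that selection rule concrete, here is a minimal, illustrative sketch (not the exact code behind the pseudocode above) that picks out the snappy-, spark-, and blockmgr- entries older than a given age. Note that the spark- and blockmgr- entries are directories, so actually deleting them requires a recursive delete.

import java.io.File;
import java.util.Arrays;

public class TempFileSelector {

    // Returns the Spark temp entries in tempDir older than maxAgeMillis.
    // The age threshold is a stand-in for "created by a previous run"; the
    // entries created by the current start-up are recent and are skipped.
    public static File[] selectOldTempFiles(File tempDir, long maxAgeMillis) {
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        File[] candidates = tempDir.listFiles((dir, name) ->
                name.startsWith("snappy-")
                        || name.startsWith("spark-")
                        || name.startsWith("blockmgr-"));
        if (candidates == null) {
            return new File[0];
        }
        return Arrays.stream(candidates)
                .filter(f -> f.lastModified() < cutoff)
                .toArray(File[]::new);
    }

    public static void main(String[] args) {
        // Example usage: anything under /tmp matching the prefixes and older
        // than 24 hours is a candidate for deletion.
        File[] old = selectOldTempFiles(new File("/tmp"), 24L * 60 * 60 * 1000);
        for (File f : old) {
            System.out.println("Would delete: " + f.getAbsolutePath());
        }
    }
}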