One of the most common requests we receive from customers at Qubole is help debugging a slow Spark application. This is usually done by trial and error, which takes time and requires running clusters beyond normal usage (read: wasted resources). Moreover, it doesn’t tell us where to look for further improvements. At Qubole we are working to make this process self-serve. Toward this goal we have built Sparklens (https://github.com/qubole/sparklens), an open-source tool based on Spark's event listener framework.
From a single run of an application, Sparklens provides insights into that application's scalability limits. In this talk we will cover what Sparklens does and the theory behind it. We will discuss how the structure of a Spark application places important constraints on its scalability, how we can find these structural constraints, and how to use them as a guide for solving performance and scalability problems in Spark applications.
This talk will help the audience answer the following questions about their Spark applications: 1) Will the application run faster with more executors? 2) How will cluster utilization change as the number of executors changes? 3) What is the absolute minimum time the application will take, even with infinite executors? 4) What is the expected wall-clock time for the application once its most important structural limits are fixed?
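The reasoning behind these questions can be illustrated with a simplified scalability model. This is a sketch, not Sparklens's actual implementation or API; the function names, the uniform one-core-per-task assumption, and the example numbers are all illustrative. The key idea from the abstract is that each stage has two structural lower bounds: total task work divided by available cores, and the duration of its single longest task.

```python
# Simplified model of Spark application scalability, in the spirit of
# Sparklens (illustrative only; not Sparklens's real code or API).
#
# Assumed inputs, recoverable from one run's event log: per-stage task
# durations (seconds) and driver-side time spent outside any stage.

def estimate_wall_clock(driver_time, stages, executors, cores_per_executor=1):
    """Estimate wall-clock time for a given executor count.

    Each stage is bounded below by:
      * its total task work divided by available cores (perfect parallelism),
      * its single longest task (tasks are indivisible).
    """
    slots = executors * cores_per_executor
    total = driver_time
    for tasks in stages:
        ideal = sum(tasks) / slots   # perfectly balanced parallel work
        critical = max(tasks)        # skew: the longest task cannot be split
        total += max(ideal, critical)
    return total

def minimum_wall_clock(driver_time, stages):
    """Lower bound with infinite executors: driver time plus each stage's
    longest single task (the critical path)."""
    return driver_time + sum(max(tasks) for tasks in stages)

# Example: one skewed stage and one well-balanced stage.
driver_time = 20.0
stages = [
    [10, 10, 10, 60],  # skewed: a single 60s task dominates
    [5] * 40,          # balanced: forty 5s tasks
]

for n in (2, 4, 16):
    print(n, estimate_wall_clock(driver_time, stages, n))
# 2  -> 180.0
# 4  -> 130.0
# 16 -> 92.5
print("floor:", minimum_wall_clock(driver_time, stages))  # floor: 85.0
```

Note how the model exposes diminishing returns: going from 4 to 16 executors barely helps, because the skewed stage is already limited by its longest task rather than by core count. That is exactly the kind of structural constraint the talk is about.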
Sparklens makes the ROI of additional executors obvious for a given application, and it needs just a single run to determine how the application will behave at different executor counts. In particular, it helps managers make an informed tradeoff between spending developer time optimizing applications and spending money on compute bills.
Data Council, PO Box 2087, Wilson, WY 83014, USA - Phone: +1 (415) 800-4938 - Email: community (at) datacouncil.ai