First approaches to Apache Spark and PySpark.
When you need to analyze really big data, Pandas sometimes cannot cope with the problem.
The idea, of course, was to use the ML library developed by the Spark team.
Start a Spark session
I started using Spark in standalone mode, not in cluster mode (for the moment 🙂).
First of all I needed to load a CSV file from disk. In the documentation I read: "As of Spark 2.0, the RDD-based APIs in the
spark.mllib package have entered maintenance mode. The primary Machine Learning API for Spark is now the DataFrame-based API in the spark.ml package."
So it would be easier for me to avoid entering the Resilient Distributed Datasets (RDD) world.
As a first step we need to start a Spark session. We can do it easily with the following lines, where we can specify, for example, some configuration options and the name of the app. Please refer to the main Spark documentation website.
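A minimal sketch of what those lines can look like; the app name and the driver memory setting are my own placeholders, not values from the original post:

```python
from pyspark.sql import SparkSession

# App name and memory are example values: tune them for your machine.
spark = (
    SparkSession.builder
    .appName("BoschClassification")
    .config("spark.driver.memory", "8g")
    .getOrCreate()
)
```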
Load and analyze the data
Now we can load our training data set with this simple line:
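Something along these lines, assuming the Bosch numeric train file train_numeric.csv sits in the working directory (the exact path is my assumption):

```python
# inferSchema lets Spark detect the numeric column types
# instead of reading every column as a string.
train = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("train_numeric.csv")
)
```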
And this was my first success: on my i7 machine with 16 GB of RAM I had never been able to read the numeric train set of the Bosch Kaggle competition, but now, in a few seconds, I had all the data in a DataFrame. Wow!
The ML classification routines expect input in a specific format: you have to tell the algorithm which column contains the labels you want to predict and which column contains the Vector of features. This was the first obstacle. With Scikit-Learn I was used to passing a set of columns as features; now I had to find a way to assemble almost 1000 columns into a single one.
After reading the documentation a bit, the solution to handle and assemble the numeric features was easy:
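The natural tool here is VectorAssembler from the DataFrame-based API. A sketch under my own assumptions: the Bosch numeric file has an Id column and a Response label, and missing values are filled with 0 before assembling (the post does not say how they were handled):

```python
from pyspark.ml.feature import VectorAssembler

# Every column except the Id and the Response label becomes a feature.
feature_cols = [c for c in train.columns if c not in ("Id", "Response")]

# Assumption: nulls are replaced with 0, since VectorAssembler
# does not accept missing values.
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
train_assembled = assembler.transform(train.fillna(0))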
Now I have the train data ready for a fast and simple classification model:
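The post does not name the algorithm that was used; a random forest from pyspark.ml is shown here purely as an illustration of the DataFrame-based training step:

```python
from pyspark.ml.classification import RandomForestClassifier

# The classifier and its parameters are my choice for illustration.
rf = RandomForestClassifier(labelCol="Response", featuresCol="features",
                            numTrees=20)
model = rf.fit(train_assembled)
```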
Prepare the submission file
Finally, following the same simple steps for the test data set, we can prepare the submission file:
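A sketch of those steps, again under the assumption that the test file is test_numeric.csv and that the submission needs Id and Response columns:

```python
# Assumption: test_numeric.csv mirrors the train file, minus Response.
test = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("test_numeric.csv")
)
test_assembled = assembler.transform(test.fillna(0))

predictions = model.transform(test_assembled)

# Keep only the two submission columns; coalesce(1) makes Spark write
# a single part file inside the "submission" output directory.
(predictions
 .select("Id", predictions["prediction"].cast("int").alias("Response"))
 .coalesce(1)
 .write
 .option("header", "true")
 .csv("submission"))
```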
As a result I easily obtained a score of 0.13…