The Data Scientist’s Toolkit: Hadoop
By Kat Campise, Data Scientist, Ph.D.

Just about everything we do on a daily basis generates data. Whether we stroll through the aisles at our local grocery store or send a quick text message to a friend or family member, some activity or digital thought sharing is making its way from us to someone's database. Every enterprise is now in the data (and tech) industry, whether it's a small restaurant or a lone freelancer trying to attract additional clientele. On a larger scale, massive data collection, processing, and analysis require equally substantial storage and computational resources. Thus, we now have Hadoop.
What is Hadoop?
Hadoop is a software ecosystem used for data storage and computation. Created by Doug Cutting, a software designer, and Mike Cafarella, a computer scientist, Hadoop was initially based on a paper entitled "The Google File System." It could therefore be accurately stated that Google spawned the Hadoop idea, which isn't a far stretch, since Google manages one of the most extensive datasets in the world: its search index. In the seven years since Hadoop 1.0 was released in 2011, the distributed storage and processing system has blossomed into a vast ecosystem which includes, but is not limited to:
- Apache Pig: a high-level scripting language, often compared to SQL, that runs on top of Hadoop and lets analysts perform data analysis tasks without writing Java code.
- Apache Spark: a cluster computing framework that provides distributed data processing for more complex tasks such as machine learning (a short sketch follows this list).
- Apache Hive: enables the use of SQL for data querying, summarization, analysis, and exploration (an example query appears below).
- Apache Flume: data collection software for ingesting massive amounts of streaming data into Hadoop.
- YARN (Yet Another Resource Negotiator): handles resource management and job scheduling across the cluster.
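To make the Spark entry above concrete, here is a minimal word-count sketch using PySpark, Spark's Python API. It is an illustration rather than anything from the original article: the `sample.txt` input file is a placeholder, and it assumes a local Spark installation (e.g., via `pip install pyspark`).

```python
# A minimal PySpark word count; assumes `pip install pyspark` and a
# local text file named sample.txt (a placeholder for real input data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

counts = (
    spark.sparkContext.textFile("sample.txt")  # read lines (from HDFS on a real cluster)
    .flatMap(lambda line: line.split())        # split each line into words
    .map(lambda word: (word, 1))               # emit a (word, 1) pair per word
    .reduceByKey(lambda a, b: a + b)           # sum the counts for each word
)

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```

On a real cluster, the same code runs unchanged; Spark simply distributes the data partitions across worker nodes, which is part of why it's a popular engine for machine learning workloads.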
Beyond the basic architecture (described in more detail below), Hadoop has a plethora of open source software utilities, including NoSQL and NewSQL databases, distributed programming frameworks, data ingestion software, and a variety of data visualization capabilities (e.g., Hadoop software can be used in conjunction with R, Tableau, and SAS Visual Analytics). Thus, Hadoop is a comprehensive platform with many interchangeable utilities.
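As an illustration of the SQL-style querying Hive provides, the sketch below uses the third-party PyHive client library to run a HiveQL query from Python. The host, port, table, and column names are hypothetical placeholders, and PyHive is just one of several ways to connect; treat this as a hedged sketch rather than the canonical approach.

```python
# A hedged sketch of querying Hive from Python with the third-party
# PyHive library (`pip install pyhive`); the table and column names
# below are hypothetical placeholders.
from pyhive import hive

# HiveServer2 listens on port 10000 by default
conn = hive.connect(host="localhost", port=10000)
cursor = conn.cursor()

# HiveQL reads like SQL; Hive translates it into distributed jobs
cursor.execute("SELECT product, SUM(amount) FROM orders GROUP BY product")
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
```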
Basic Hadoop Architecture
In the beginning, before the explosion in the number of software utilities available for Hadoop integration, there were two primary Hadoop building modules: the Hadoop Distributed File System (HDFS) and MapReduce.

The HDFS system is relatively self-explanatory: it distributes datasets across what's known as "commodity hardware" (inexpensive, off-the-shelf computers). If we think in terms of a social media giant such as Facebook, which is perpetually collecting and managing enormous amounts of data, HDFS provides the infrastructure for storage and computation that is "shared" (or distributed) across the commodity hardware. It's similar to working on a collaborative project where each member of the group aggregates information about one part of a topic, so no single member carries the whole workload. One of Hadoop's strengths is its ability to scale up or down, so as more resources are needed, the system can handle the fluctuation in data volume. HDFS is the base layer of the entire Hadoop ecosystem.

MapReduce is frequently defined as a programming model that acts as a go-between for HDFS and the rest of the Hadoop system. In one sense, it can be viewed as a data project manager: it splits the data into smaller pieces and distributes the fragments to computer clusters for parallel processing, which means that all of the pieces from the original dataset are processed simultaneously. A toy illustration of the model appears below.
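To make the map, shuffle, and reduce phases concrete, here is a toy, single-machine sketch of the classic word-count pattern in plain Python. It only illustrates the programming model; real Hadoop MapReduce jobs typically run in Java (or via Hadoop Streaming) across a cluster, and the sample input lines here are invented.

```python
# A toy, single-process illustration of the MapReduce model; real Hadoop
# runs the map and reduce phases in parallel across a cluster.
from collections import defaultdict

def map_phase(lines):
    # Map: turn each input record into (key, value) pairs
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between phases
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: collapse each key's values into a single result
    return {key: sum(values) for key, values in grouped.items()}

lines = ["big data big ideas", "data science at scale"]
print(reduce_phase(shuffle(map_phase(lines))))
# {'big': 2, 'data': 2, 'ideas': 1, 'science': 1, 'at': 1, 'scale': 1}
```

Because each (word, 1) pair can be mapped and reduced independently, the same logic can be spread across many machines without modification, which is the core idea behind MapReduce's scalability.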
Who Uses Hadoop?
Many of the tech behemoths utilize Hadoop. Among the most recognizable Hadoop users are:
- Amazon
- Alibaba
- eBay
- Hulu
Google formerly used MapReduce, which makes sense, since Google created MapReduce over a decade ago. However, Google is not known for resting on its laurels, so its tech brainiacs switched to deploying Cloud Dataflow in lieu of MapReduce. Regardless of Google's transition, plenty of enterprises continue to use the Hadoop system (including MapReduce). Thus, learning how to use Hadoop's core functions along with its additional software integrations continues to be valuable knowledge for a data scientist.
Where to Get Started with Hadoop
Often, when jumping into a new career, there exists a "chicken and egg" problem: you need experience to be considered for the job, but on-the-job experience is the primary way to gain the required knowledge. Fortunately, for just about everything data science related, we are in the open source age, where would-be learners can find tutorials and massive open online courses (MOOCs) that are freely available (or learners can earn certificates for a small-ish fee).
- Coursera offers a Big Data specialization that includes a specific course on the Hadoop Platform. Big Data Essentials, Big Data Analysis, and Data Science at Scale also provide ample information and practice for Hadoop software. Learners can either audit the courses for free or pay to access all course materials (for some courses, auditing doesn’t include completing quizzes and assignments).
- edX has courses in Big Data and an Introduction to Apache Hadoop offered by the Linux Foundation. Most, if not all, of the edX course offerings can be completed free of charge. Learners won’t receive a certificate, but they’ll still have access to the video lectures and other course materials.
- Udemy has a selection of Hadoop offerings that are either free or reasonably priced. Since these courses are created by individual instructors rather than official academic institutions, your mileage may vary in terms of overall quality.
- Udacity has partnered with Cloudera to offer an Intro to Hadoop and MapReduce course at no cost to the learner. However, they recommend that learners have at least an introductory knowledge of computer science.
It’s important to note that being a data scientist requires curiosity. While data science is an emerging industry, it’s situated at the nexus of at least two sectors that are continually evolving: business and technology. Both sectors are driven by innovation, and the insight data scientists derive from the vast data pools (or data oceans) is fundamental to an enterprise’s progress. The courses above are only the beginning of the lifelong learning that is data science.