Design Considerations in Building a Hadoop Cluster

Distributed computing is a key concept in computer science that allows networked computers to solve a computational problem by dividing it into multiple tasks, each processed independently by a machine in the network. Building on this concept, Hadoop offers a reliable and scalable distributed computing framework for a cluster of machines with local compute and storage capacity, typically commodity hardware in a 'Just a Bunch of Disks' (JBOD) configuration. Building a Hadoop cluster and running it well at scale as a system of record requires key design considerations in storage, compute and networking, along with the data redundancy and high availability options available in Hadoop.

Hadoop uses a master/slave architecture for distributed data processing and storage, built on the MapReduce framework and HDFS. The master node for parallel processing with MapReduce is the Job Tracker, and the master node for storing data in HDFS is the Name Node. The slave nodes are the majority of the machines in the cluster; they store the data and run the computations. Each slave node runs the Data Node and Task Tracker daemons, which coordinate with the Name Node and the Job Tracker respectively. These master/slave servers can be set up on-premise or in the cloud. In an on-premise deployment they are typically racked commodity servers connected to a Top of Rack (TOR) switch with 10 GbE interconnects.
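
A quick way to see these daemons at work is to pull the Name Node's view of the cluster from any machine with the Hadoop client configured. A minimal sketch, assuming the hdfs CLI is on the PATH and HADOOP_CONF_DIR points at this cluster:

```python
import subprocess

# Ask the Name Node for a cluster summary, including the list of live
# Data Nodes and their storage usage.
report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True
).stdout

# Print only the high-level lines: total capacity and Data Node counts.
for line in report.splitlines():
    if line.startswith(("Configured Capacity", "Live datanodes", "Dead datanodes")):
        print(line)
```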

[Figure: Top of Rack Switch]

With this, let’s dive into some of the key design considerations in building a Hadoop Cluster.

 

Storage Capacity

Building a highly reliable cluster out of commodity hardware requires that multiple redundant copies of data are stored in HDFS. HDFS accomplishes this by replicating data blocks across the Data Nodes in the cluster. Hadoop clusters with more than 8 Data Nodes generally have the replication factor set to 3. This is important to take into account when sizing the cluster for storage capacity: with three copies of every block, plus space reserved for intermediate MapReduce output, logs and the operating system, the usable storage capacity is roughly one quarter of the raw storage capacity.
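
A rough back-of-the-envelope sketch of that sizing rule (the 25% overhead figure for intermediate output, logs and the OS is an assumption; adjust it for your workload):

```python
def usable_capacity_tb(raw_tb, replication=3, overhead_fraction=0.25):
    """Estimate usable HDFS capacity from raw disk capacity.

    raw_tb            -- total raw disk capacity across all Data Nodes (TB)
    replication       -- HDFS replication factor (dfs.replication), typically 3
    overhead_fraction -- space assumed reserved for intermediate MapReduce
                         output, logs and the operating system (25% here)
    """
    return raw_tb * (1 - overhead_fraction) / replication

# Example: 12 Data Nodes x 12 disks x 4 TB = 576 TB raw
print(usable_capacity_tb(576))  # ~144 TB usable, roughly 1/4 of raw
```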

[Figure: Hadoop Cluster for Storage]

Tiered Storage

Cluster design should take into account the temperature of the data that will be stored in HDFS. Data can be hot, warm or cold based on how frequently it is used, and should be stored accordingly using the heterogeneous storage capability in Hadoop. Storage without compute is much cheaper, so replicated data that has gone cold can be moved to Data Nodes configured as an archival storage tier, with slow-spinning, high-density drives and low compute power.
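
A sketch of how this is wired up in practice, assuming Hadoop 2.6+ with Data Node volumes tagged as [ARCHIVE] in dfs.datanode.data.dir; the path /data/2014 below is hypothetical:

```python
import subprocess

COLD_PATH = "/data/2014"  # hypothetical HDFS path holding rarely used data

# Tag the path with the COLD storage policy so replicas land on Data Node
# volumes declared with the ARCHIVE storage type.
subprocess.run(
    ["hdfs", "storagepolicies", "-setStoragePolicy",
     "-path", COLD_PATH, "-policy", "COLD"],
    check=True)

# Existing blocks are not moved automatically; the mover tool migrates them
# to match the new policy.
subprocess.run(["hdfs", "mover", "-p", COLD_PATH], check=True)
```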

[Figure: Hadoop Cluster for Tiered Storage]

Rack Awareness

Hadoop is rack aware and uses that awareness for fault tolerance by placing a replica of each block on a Data Node in a different rack. Starting with at least 2 racks ensures that no data is lost if an entire rack fails. Data Nodes with high storage and compute density are available in 1U rack-mount form factors and can deliver a petabyte of raw storage in a single rack, but the goal should be to start small and build to scale by setting up 2 racks and spreading the Data Nodes between them.
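
Rack awareness is not automatic; the Name Node learns the topology from an executable script referenced by net.topology.script.file.name in core-site.xml, which receives Data Node IPs or hostnames as arguments and prints one rack path per argument. A minimal sketch of such a script (the subnet-to-rack mapping is made up for illustration):

```python
#!/usr/bin/env python
"""Topology script for Hadoop rack awareness.

Invoked by the Name Node with one or more Data Node IPs or hostnames as
arguments; expected to print one rack path per argument on stdout.
"""
import sys

# Hypothetical mapping of Data Node subnets to racks; replace with your own.
SUBNET_TO_RACK = {
    "10.1.1.": "/dc1/rack1",
    "10.1.2.": "/dc1/rack2",
}
DEFAULT_RACK = "/dc1/rack0"

for host in sys.argv[1:]:
    rack = next((r for prefix, r in SUBNET_TO_RACK.items()
                 if host.startswith(prefix)), DEFAULT_RACK)
    print(rack)
```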

[Figure: Hadoop Rack Awareness]

HDFS Federation

The newer HDFS architecture provides HDFS federation by allowing multiple Name Nodes. Setting up a federated cluster allows the Name Node and the cluster namespace to scale horizontally, and isolates applications and users in a multi-user, multi-application cluster environment. The Name Nodes are independent and do not coordinate with each other, while the Data Nodes serve as common block storage for all the Name Nodes in the cluster. Using HDFS federation, a cluster can be designed and shared as a multi-tenant environment.
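
For illustration, a federated deployment is declared in hdfs-site.xml by listing the nameservices and the address of each Name Node. The sketch below just generates that snippet for two namespaces; the property names (dfs.nameservices, dfs.namenode.rpc-address.*, dfs.namenode.http-address.*) are standard, while the host names are hypothetical and the ports are the classic Hadoop 2.x defaults:

```python
# Minimal generator for the hdfs-site.xml properties that declare a
# two-namespace federated cluster.
NAMESERVICES = {
    "ns1": "namenode1.example.com",
    "ns2": "namenode2.example.com",
}

def federation_properties(nameservices):
    props = [("dfs.nameservices", ",".join(nameservices))]
    for ns_id, host in nameservices.items():
        props.append((f"dfs.namenode.rpc-address.{ns_id}", f"{host}:8020"))
        props.append((f"dfs.namenode.http-address.{ns_id}", f"{host}:50070"))
    return "\n".join(
        f"<property>\n  <name>{name}</name>\n  <value>{value}</value>\n</property>"
        for name, value in props)

print(federation_properties(NAMESERVICES))
```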

[Figure: HDFS Federation]

Networking

[Figure: Networking for Hadoop Cluster]

Data center networks over the past decades have been designed for client/server, north/south traffic. These traditional networks are based on Layer 2 networking and are designed to scale up following a hierarchical tree architecture. They work well for their intended purpose, but are subject to bottlenecks when uplinks between layers are oversubscribed, and they are not well suited to modern data storage and processing platforms like Hadoop.
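
A quick worked example of what oversubscription means for an access switch (the port counts and speeds below are illustrative, not a recommendation):

```python
def oversubscription_ratio(server_ports, server_speed_gbps,
                           uplink_ports, uplink_speed_gbps):
    """Ratio of downstream (server-facing) bandwidth to upstream (uplink)
    bandwidth on a switch; 1.0 is non-blocking, higher means contention."""
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

# e.g. 40 servers at 10 GbE behind 4 x 40 GbE uplinks -> 2.5:1 oversubscribed
print(oversubscription_ratio(40, 10, 4, 40))
```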

[Figure: Networking for Hadoop Data Cluster]

Modern data centers for big data and cloud require networks that can scale out and support inter-server, east/west traffic. To best leverage the scale-out capabilities of Hadoop, storage and compute resources should be deployed on a Layer 3 CLOS leaf-and-spine network fabric. In a leaf-and-spine fabric all nodes are equidistant, and the network can be scaled out by adding more spine switches as needed.
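
To make the scale-out property concrete, here is a small sizing sketch for a leaf-and-spine fabric; the switch port counts are assumptions, and a real design would also factor in the oversubscription target discussed above:

```python
import math

def leaf_spine_counts(servers, leaf_ports=48, leaf_uplinks=6):
    """Rough leaf and spine switch counts for a cluster of a given size.

    servers      -- number of Hadoop nodes to attach
    leaf_ports   -- assumed server-facing ports per leaf switch
    leaf_uplinks -- assumed uplinks per leaf, one to each spine
    """
    leaves = math.ceil(servers / leaf_ports)
    # Each leaf connects to every spine, so the spine count equals the
    # uplinks per leaf; each spine needs at least `leaves` ports.
    spines = leaf_uplinks
    return leaves, spines

# e.g. a 200-node cluster
print(leaf_spine_counts(200))  # (5 leaves, 6 spines)
```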

[Figure: Layer 3 CLOS Leaf and Spine Network Fabric]

Automation

Managing a Hadoop cluster without automation can quickly get out of hand as the cluster scales. As part of the cluster design, it is important to leverage an automation/orchestration DevOps tool to provision and configure the server and network components of the cluster for converged administration.
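
As a small illustration of feeding such a tool, the sketch below renders a host inventory grouped by Hadoop role and rack, in the style of an Ansible-like inventory file; the host names, rack layout and group names are hypothetical:

```python
# Render a simple inventory grouped by role and rack, the kind of input an
# automation/orchestration tool consumes to provision the cluster.
CLUSTER = {
    "namenodes": ["nn1.example.com", "nn2.example.com"],
    "datanodes": {
        "rack1": [f"dn{i}.example.com" for i in range(1, 11)],
        "rack2": [f"dn{i}.example.com" for i in range(11, 21)],
    },
}

def render_inventory(cluster):
    lines = ["[namenodes]"] + cluster["namenodes"]
    for rack, hosts in cluster["datanodes"].items():
        lines.append(f"[datanodes_{rack}]")
        lines.extend(hosts)
    return "\n".join(lines)

print(render_inventory(CLUSTER))
```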

These are some of my thoughts around designing Hadoop clusters. I would love to hear your thoughts and considerations in building Hadoop clusters at scale; tweet to me @madhumoy.