
Hadoop cluster

Hadoop cluster definition

A Hadoop cluster is a collection of commodity hardware servers, or nodes, interconnected to form a distributed computing infrastructure. The cluster processes data in parallel across its nodes, enabling efficient storage, retrieval, and analysis of large datasets.
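
The classic illustration of this model is the word-count job from the standard Hadoop MapReduce tutorial. The sketch below assumes the org.apache.hadoop.mapreduce API is on the classpath and that the input and output paths point at HDFS directories; it shows how the work is divided, with map tasks running on the nodes that store the input blocks and reduce tasks summing the shuffled intermediate pairs.

// A minimal word-count sketch, adapted from the standard Hadoop MapReduce example.
// Map tasks run in parallel on the cluster's nodes; the framework shuffles
// intermediate (word, 1) pairs to reduce tasks, which produce the totals.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in a line of input.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}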

Hadoop was initially developed by Doug Cutting and Mike Cafarella in 2005 as an open-source implementation of Google's MapReduce programming model and the Google File System (GFS). The project was named after Cutting's son's toy elephant and quickly gained popularity as a scalable and cost-effective way to process large datasets. In 2006, Yahoo adopted Hadoop for its search engine infrastructure, helping drive its widespread use in the industry.

See also: application clustering

Where are Hadoop clusters used?

  • Big data analytics. They are used for processing and analyzing vast amounts of data from multiple sources.
  • Data warehousing. They can serve as cost-effective data warehouses for storing and querying structured and unstructured data, in some cases replacing traditional relational databases.
  • Log processing. Since they can process logs generated by web servers, applications, and network devices, Hadoop clusters help with troubleshooting, performance monitoring, and security analysis (see the mapper sketch after this list).
  • Machine learning. They provide a scalable platform for training and deploying machine learning models on large datasets, supporting advanced analytics and artificial intelligence applications.
  • Genomic analysis. Hadoop clusters can also be used in bioinformatics to process and analyze genomic data.
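
To illustrate the log-processing use case, here is a minimal, hypothetical mapper that reads web server access logs stored in HDFS, assumed here to be in Common Log Format, and emits one count per HTTP status code. Paired with a summing reducer like the one in the word-count sketch above, it produces cluster-wide status-code totals; the class name and field positions are assumptions for illustration only.

// Hypothetical mapper for access logs in Common Log Format:
// host ident user [timestamp] "request" status bytes
// Emits (status code, 1); a summing reducer aggregates the counts.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class StatusCodeMapper extends Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private final Text status = new Text();

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split(" ");
    if (fields.length >= 2) {
      // In Common Log Format, the status code is the second-to-last field.
      status.set(fields[fields.length - 2]);
      context.write(status, one);
    }
  }
}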