Starting at the Basics: What is Hadoop and what problems does it solve?

This post starts with the basics of Hadoop, including its history.

The story starts in the early days of Google. Engineers needed to design new ways to store, process, and retrieve data that would scale to very large sizes. Google published two papers on its design, the Google File System (2003) and MapReduce (2004), and the highly regarded, community-focused Doug Cutting produced an open source implementation of that design called Hadoop.

Along with that open source project came many related open source capabilities, and soon an entire big data ecosystem had grown up around it. New methods of storing, processing, and retrieving data were now available, for free, from the Apache Software Foundation. Innovation continued as a firm called Cloudera was founded to accelerate development of the open source project.

Hadoop is a single data platform that simplifies infrastructure, operates efficiently, and runs on affordable commodity hardware.

Hadoop is designed to handle the three V's of big data: volume, variety, and velocity. First, volume: Hadoop is a distributed architecture that scales cost effectively. It was designed to scale out, so as you need more storage or computing capacity, all you need to do is add more nodes to the cluster. Second, variety: Hadoop allows you to store data in any format, structured or unstructured, which means you do not need to force your data into a single schema before putting it into Hadoop. Third, velocity: with Hadoop you can load raw data into the system and define how you want to view it later. Because the system is this flexible, you avoid many of the network and processing bottlenecks associated with transforming data before loading it, and since data is always changing, it is much easier to integrate those changes. A small sketch of loading raw data appears below.
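As a concrete illustration of loading data in its original format, here is a minimal sketch that copies a raw local file into HDFS using the Hadoop FileSystem API. The file names and paths are hypothetical, and note that no schema is declared at load time; the file lands in HDFS exactly as it is on disk.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoadRawData {
    public static void main(String[] args) throws Exception {
        // Cluster connection settings are picked up from core-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Copy a raw log file into HDFS as-is. No schema is declared here;
        // structure is applied later, at query or processing time.
        Path local = new Path("/tmp/clickstream.log");              // hypothetical local file
        Path remote = new Path("/data/raw/clickstream/part-001.log"); // hypothetical HDFS path
        fs.copyFromLocalFile(local, remote);

        System.out.println("Loaded " + remote + " into HDFS");
        fs.close();
    }
}
```

The same approach works for any file format, which is the point: Hadoop stores the bytes you give it and leaves interpretation to whatever job reads them later.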

Hadoop allows you to process massive amounts of data very quickly. It is a distributed processing engine that leverages data locality, meaning it was designed to execute transformations and processing where the data actually resides, rather than moving the data to the code. Another benefit, from an analytics perspective, is that Hadoop lets you load raw data and then define its structure at query time. This makes Hadoop quick, flexible, and able to handle a wide range of analyses.
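To make the data locality and structure-on-read ideas concrete, below is a minimal sketch following the standard Apache Hadoop word count example. The framework schedules map tasks on the nodes that hold the input blocks, and the raw text only gains structure (words and counts) inside the mapper. The input and output paths are supplied as hypothetical command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // The mapper runs on the nodes storing the input blocks (data locality)
    // and applies structure to the raw text only as it reads each line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // The reducer sums the counts emitted for each word across the cluster.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The raw files loaded in the earlier sketch could serve as input here; the job never required them to match a schema in advance.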

Organizations typically adopt Hadoop when they need faster processing on large data sets, and they often find it saves money as well. Large users of Hadoop include Facebook, Amazon, Adobe, eBay, and LinkedIn, and it is also in use throughout the financial sector and the US government. These organizations are a testament to what can be done at internet speed by using big data to its fullest extent.


Kimberly Kelly

Kimberly Kelly has been involved in entrepreneurial activities on the Internet since she was 8 years old, when she created an organization that raised over $2,000 for charities. Since then she has continued to immerse herself in technology for positive change. She writes at CTOvision.com, DelphiBrief.com, and the new, analysis-focused Analyst One.
