Starting at the Basics: What is Hadoop and what problems does it solve?

With this post I start at the basics of Hadoop, including its history.

The story starts in the early days of Google, where engineers needed to design new ways to store, process, and retrieve data at very large scale. Google published papers on its design (the Google File System paper in 2003, followed by the MapReduce paper in 2004), and the highly regarded, community-focused Doug Cutting produced an open source implementation of the software called Hadoop.

Along with that open source project came many other related open source capabilities, and soon an entire big data framework had taken shape. New methods of storing, processing, and retrieving data were now available, for free, from the Apache Software Foundation. Innovation continued as a firm called Cloudera was founded to accelerate development of the open source project.

Hadoop is a single data platform infrastructure that is simpler, more efficient, and runs on affordable commodity hardware.

Hadoop is designed to handle the three V's of Big Data: volume, variety, and velocity. First, volume: Hadoop is a distributed architecture that scales cost effectively. It was designed to scale out, so when you need more storage or computing capacity, all you do is add more nodes to the cluster. Second, variety: Hadoop lets you store data in any format, structured or unstructured, which means you do not need to alter your data to fit a single schema before putting it into Hadoop. Third, velocity: with Hadoop you can load raw data into the system and define how you want to view it later. Because of this flexibility, you avoid many of the network and processing bottlenecks associated with loading raw data, and since data is always changing, it is much easier to integrate changes.
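To make the variety and velocity points concrete, here is a minimal sketch, assuming Hadoop's standard Java FileSystem API, that loads a raw log file into HDFS exactly as it arrived. The NameNode address and file paths are hypothetical; the point is that no schema definition or format conversion is needed before the data lands in the cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoadRawData {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:8020"); // hypothetical NameNode address

    FileSystem fs = FileSystem.get(conf);

    // The file is stored exactly as it arrived (text, JSON, binary, ...).
    // HDFS splits it into blocks and replicates them across the cluster's
    // DataNodes, so adding capacity is simply a matter of adding nodes.
    fs.copyFromLocalFile(new Path("/var/log/app/access.log"),   // local raw data
                         new Path("/data/raw/access.log"));     // destination in HDFS

    fs.close();
  }
}
```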

Hadoop also allows you to process massive amounts of data very quickly. It is a distributed processing engine that leverages data locality, meaning it was designed to execute transformations and processing where the data actually resides. Another benefit, from an analytics perspective, is that Hadoop lets you load raw data and then define the structure of the data at query time (an approach often called schema-on-read). This makes Hadoop fast, flexible, and able to handle nearly any type of analysis you want to conduct.
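As a concrete illustration of the distributed processing model, below is a minimal sketch of the classic word count job, written against Hadoop's standard MapReduce Java API. The class names and job name are only illustrative; the input and output arguments would point at HDFS paths. Map tasks run on the nodes that hold the input blocks (data locality), and reduce tasks aggregate the shuffled results.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs on the nodes holding each input block and emits (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts for each word after the shuffle phase.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // raw input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // results written back to HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A job like this is typically packaged as a jar and launched with something like hadoop jar wordcount.jar WordCount /data/raw /data/out, with the framework handling the scheduling of map tasks near the data.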

Organizations begin to utilize Hadoop when they need faster processing on large data sets, and they often find it saves the organization money as well. Large users of Hadoop include Facebook, Amazon, Adobe, eBay, and LinkedIn. It is also in use throughout the financial sector and the US government. These organizations are a testament to what can be done at internet speed by utilizing big data to its fullest extent.

