Never ever do this to Hadoop…..unless you are using Isilon

Let me start by saying that the ideas discussed here are my own, and not necessarily those of my employer (EMC). This is my own personal blog.

A great article by Andrew Oliver called “Never ever do this to Hadoop” has been doing the rounds. Andrew argues that the best architecture for Hadoop is not external shared storage, but rather direct attached storage (DAS). The article can be found here:–ever-do-this-to-hadoop.html

I want to present a counter argument to this. Now having seen what a lot of companies are doing in this space, let me just say that Andrew’s ideas are spot on, but only applicable to traditional SAN and NAS platforms. There is a new next-generation storage architecture that is taking the Hadoop world by storm (pardon the pun!). EMC has done something very different, which is to embed the Hadoop filesystem (HDFS) into the Isilon platform. This approach changes every part of the Hadoop design equation.

Here’s where I agree with Andrew. From my experience, we have seen a few companies deploy traditional SAN and NAS systems for small-scale Hadoop clusters. Below roughly 100TB this is a workable solution and brings all the benefits of traditional external storage architectures (easy capacity management, monitoring, fault tolerance, etc). However, once these systems reach a certain scale, the economics and performance of traditional arrays no longer match what a Hadoop-scale architecture needs.

The traditional thinking and solution for Hadoop at scale has been to deploy direct attached storage within each server. While this approach served us well historically with Hadoop, the new approach with Isilon has proven to be better, faster, cheaper and more scalable. It brings capabilities that enterprises need with Hadoop and have been struggling to implement. This Isilon-Hadoop architecture has now been deployed by over 600 large companies, often at the 1, 10, or even 20 petabyte scale. These range from major social networking and web-scale giants to major enterprise accounts. A great example is Adobe (they have an 8PB virtualized environment running on Isilon); more detail can be found here:

There are four key reasons why these companies are moving away from the traditional DAS approach and leveraging the embedded HDFS architecture with Isilon:

Often companies deploy a DAS / Commodity style architecture to lower cost

The traditional SAN and NAS architectures become expensive at scale for Hadoop environments. Often this is related to point 2 below (i.e. more controllers for performance); however, sometimes it is simply that enterprise-class systems are expensive.

So how does Isilon provide a lower TCO than DAS? There are a few factors:

  • The Hadoop DAS architecture is really inefficient. The default is typically to store 3 copies of data for redundancy. What this means is that to store a petabyte of information, we need 3 petabytes of storage (ouch). Even commodity disk costs a lot when you multiply it by 3x. With Isilon, data protection typically needs a ~20% overhead, meaning a petabyte of data needs ~1.2PBs of disk.
  • Storage management, diagnostics and component replacement become much easier when you decouple the HDFS platform from the compute nodes
  • Isilon allows you to scale compute and storage independently. One of the things we have noticed is how widely the compute-to-storage ratio varies between companies (do a web search for Pandora and Spotify and you will see what I mean). One company might have 200 servers and a petabyte of storage; another might have 200 servers and 20PBs of storage. How do you know your ratio when you start? More importantly, with the traditional DAS architecture the two are coupled: to add more storage you add more servers, and to add more compute you add more storage. With Isilon you scale compute and storage independently, giving a more efficient scaling mechanism.
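The overhead arithmetic in the first bullet can be sketched quickly. The 3x and ~1.2x factors come from the figures above; the function itself is just an illustration (real erasure-coding overhead varies with cluster size and protection level):

```python
def raw_capacity_tb(usable_tb, scheme):
    """Raw disk required to hold `usable_tb` of data under a protection scheme.

    'replication3' -- default HDFS triple replication: 3x the data size.
    'isilon_ec'    -- Isilon-style protection at the ~20% overhead quoted
                      above, so treat 1.2x as an approximation.
    """
    factors = {"replication3": 3.0, "isilon_ec": 1.2}
    return usable_tb * factors[scheme]

# Storing 1 PB (1000 TB) of data:
print(raw_capacity_tb(1000, "replication3"))  # 3000.0 TB of raw disk
print(raw_capacity_tb(1000, "isilon_ec"))     # 1200.0 TB of raw disk
```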

It is not uncommon for organizations to halve their total cost of running Hadoop with Isilon. EMC has developed a very simple and quick tool to help identify the cost savings that Isilon brings versus DAS. The tool can be found here:

The DAS architecture scales performance in a linear fashion

Hadoop is a scale-out architecture, which is why we can build these massive platforms that do unbelievable things in a “batch” style. This is counter to traditional SAN and NAS platforms, which are built around a “scale-up” approach (i.e. few controllers, lots of disk).

The unique thing about Isilon is it scales horizontally just like Hadoop. It can scale from 3 to 144 nodes in a single cluster. What this delivers is massive bandwidth, but with an architecture that is more aligned to commodity style TCO than a traditional enterprise class storage system. This approach gives Hadoop the linear scale and performance levels it needs.
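To illustrate what “scaling horizontally” means here, a toy model: aggregate bandwidth grows linearly with node count, because each added node brings its own controller and disks. The per-node throughput figure below is purely illustrative, not an Isilon specification:

```python
def aggregate_bandwidth_gbs(nodes, per_node_gbs=2.0):
    """Aggregate throughput of a scale-out cluster grows linearly with
    node count. `per_node_gbs` is an assumed illustrative figure."""
    if not 3 <= nodes <= 144:
        raise ValueError("an Isilon cluster runs 3 to 144 nodes")
    return nodes * per_node_gbs

print(aggregate_bandwidth_gbs(3))    # 6.0 GB/s at the minimum cluster size
print(aggregate_bandwidth_gbs(144))  # 288.0 GB/s at the maximum
```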

More importantly, Hadoop spends a lot of compute processing time doing “storage” work, i.e. managing the HDFS control and placement of data. With Isilon, these storage-processing functions are offloaded to the Isilon controllers, freeing up the compute servers to do what they do best: run the MapReduce and compute functions.

The net effect is that we generally see performance increase and job times drop, often significantly, with Isilon. We have seen customers literally halve the time it takes to execute large jobs by moving off DAS and onto HDFS with Isilon.

For some data, see IDC’s validation on page 5 of this document:

Once the Hadoop cluster becomes large and critical, it needs better data protection

Typically Hadoop starts out as a non-critical platform. Most companies begin with a pilot, copy some data to it and look for new insights through data science. Because Hadoop is such a game changer, when companies start to productionise it, the platform quickly becomes an integral part of their organization. In one large company, what started out as a small data analysis engine quickly became a mission-critical system governed by regulation and compliance. Because Hadoop has very limited inherent data protection capabilities, many organizations develop a home-grown disaster recovery strategy that ends up being inefficient, risky or operationally difficult.

Isilon brings three brilliant data protection features to Hadoop:

  • The ability to automatically replicate to a second offsite system for disaster recovery
  • Snapshot capabilities that allow a point-in-time copy to be created, with the ability to restore to that point in time
  • NDMP, which allows backup to technologies such as Data Domain

Some other great information on backing up and protecting Hadoop can be found here:

The data lake idea: Support multiple Hadoop distributions from one cluster

One thing I have observed is that while organizations tend to begin their Hadoop journey by creating one enterprise-wide centralized Hadoop cluster, inevitably what ends up being built is many isolated Hadoop “puddles”. A number of the large telcos and financial institutions I have spoken to have 5-7 different Hadoop implementations for different business units. Typically they are running multiple Hadoop flavors (such as Pivotal HD, Hortonworks and Cloudera) and they spend a lot of time extracting and moving data between these isolated silos.

Arguably the most powerful feature that Isilon brings is the ability to have multiple Hadoop distributions accessing a single Isilon cluster. Not only can these distributions be different flavors; Isilon also allows different distributions to access the same dataset. Imagine having Pivotal HD for one business unit and Cloudera for another, both accessing a single copy of data without having to move that data between clusters. This is the Isilon data lake idea, and something I have seen businesses go nuts over as a solution to their Hadoop data management problems.
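Mechanically, this is simple on the compute side: each distribution's cluster just points its default filesystem at the Isilon HDFS endpoint in core-site.xml. The hostname below is an illustrative placeholder (in practice it would typically be an Isilon SmartConnect DNS name):

```xml
<!-- core-site.xml on each compute cluster (e.g. the Pivotal HD one and
     the Cloudera one); both resolve to the same Isilon-backed namespace.
     Hostname is a placeholder; 8020 is the conventional HDFS RPC port. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://isilon.example.com:8020</value>
  </property>
</configuration>
```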

The rate at which customers are moving off direct attached storage for Hadoop and converting to Isilon is outstanding. It is one of the fastest growing businesses inside EMC. At the current rate, within 3-5 years I expect there will be very few large-scale Hadoop DAS implementations left.

Andrew, if you happen to read this, ping me – I would love to share more with you about how Isilon fits into the Hadoop world and maybe you would consider doing an update to your article 🙂

7 thoughts on “Never ever do this to Hadoop…..unless you are using Isilon”

  1. It is fair to say Andrew’s argument is based on one thing (locality), but even that can be overcome with most modern storage solutions. Cost will quickly come to bite many organisations that try to scale Hadoop clusters to petabytes, and EMC Isilon would provide a far better TCO. Not to mention EMC Isilon (amongst other benefits) can also help transition from Platform 2 to Platform 3 and provide a “Single Copy of Truth”, aka a “Data Lake”, with data accessible via multiple protocols.

  2. Let me not agree with you:

    1. Performance. Most Hadoop clusters are IO-bound. IO performance depends on the type and number of spindles. For a given number of spindles, the hardware alone would definitely cost less than the same hardware plus Isilon licenses. So for the same price, the number of spindles in a DAS implementation would always be larger, and thus performance would be better.

    2. Capacity. Isilon claims the same level of data protection as a DAS solution with only a ~20% storage overhead. And this is really so; the technique underneath is called “erasure coding”. Every IT specialist knows that RAID10 is faster than RAID5, and many of them go with RAID10 because of performance. The same applies to DAS vs Isilon: copying the data vs erasure coding it. But now this “benefit” is gone: you can use the same erasure coding with DAS and have the same small overhead for some part of your data, sacrificing performance.

    3. Network. All the performance and capacity considerations above assume the network is as fast as the internal server bus, for Isilon to be on par with DAS. Unfortunately, usually it is not so, and the network has limited bandwidth. Thus for big clusters with Isilon it becomes tricky to plan the network to avoid oversubscription, both between “compute” nodes and between “compute” and “storage”. Also, marketing people do not know how Hadoop really works: within a typical MapReduce job the amount of local IO is usually greater than the amount of HDFS IO, because all the intermediate data is staged on the local disks of the “compute” servers.

    The only real benefit of the Isilon solution is the one you listed, and I agree with it – it allows you to decouple “compute” from “storage”. So Isilon plays well on “storage-first” clusters, where you need 1PB of capacity and 2-3 “compute” machines for the company IT specialists to play with Hadoop. But this is mostly the same as the pure Isilon storage case with nasty “data lake” marketing on top of it. Real-world implementations of Hadoop will stay with DAS for a long time, because DAS is the main benefit of the Hadoop architecture – “bring computations closer to bare metal”.

  3. Good points 0x0fff. If I could add to point #2, one of the main purposes of 3x replication is to provide data redundancy on physically separate data nodes, so in the event of a catastrophic failure of one of the nodes you don’t lose that data or access to it.

    In the event of a catastrophic failure of a NAS component you don’t have that luxury, losing access to the data and possibly the data itself.
