This is Part 2 in a 3-part series on Choosing the Right Shared Storage Solution. You may want to check out Part 1: Bandwidth & Connections.

You can also view the entire 3 part series here: https://www.youtube.com/playlist?list=PLdrhoSWYyu_WJ7QUOLcBALfCHbS0d6fC2

Hard drives are a dime a dozen nowadays. At last check, I think Best Buy and Fry’s had them at the checkout counter next to the latest Star Magazine and Chewlies gum. Despite the seeming overabundance of drives, not all drives (let alone a collection of ’em) are created equal.

Most hard drives, as I’m sure you know, have spinning platters in them where data resides. While spinning, they allow data to be read and written. The faster the platter spins, the quicker a computer can read and write the data. So, wouldn’t the same hold true if you connect a bunch of drives together (hence the storage geek acronym JBOD: Just a Bunch Of Drives)? Why yes, you’ve got a point there. More drives = more capacity and more throughput.
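
For the back-of-the-napkin crowd, here’s that logic as a tiny sketch. The 150 MB/s per-drive figure is my assumption, not a spec sheet number, and real arrays never scale this perfectly:

```python
# Rough aggregate throughput for striped drives.
# Assumes ~150 MB/s sustained per drive and ideal scaling -- real-world
# numbers will be lower once controllers and interfaces get involved.
PER_DRIVE_MBPS = 150  # hypothetical sustained rate for one 7200 RPM drive

for drives in (1, 4, 8, 16):
    print(f"{drives:>2} drives ~ {drives * PER_DRIVE_MBPS:,} MB/s aggregate")
```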

So, a JBOD, with some management software on it, and WHAMMO, you’ve got a large mass of faster, useable storage. Should be pretty cheap, right?

Wrong.

Let’s start at the beginning where this logic falls apart.

All drives are not created equal. Drives are measured in many categories, some of which include:

RPM (Spindle speed)
Essentially, the faster the better. The faster the rotational speed, the faster the drive can be read from and written to, barring any other bottlenecks.
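
To put rough numbers on “faster is better,” here’s a quick sketch of average rotational latency – the time the head waits, on average, for the right sector to spin underneath it – at common spindle speeds:

```python
# Average rotational latency: on average, the head waits half a
# revolution for the data to come around.
for rpm in (5400, 7200, 10_000, 15_000):
    latency_ms = (60 / rpm) / 2 * 1000  # half a revolution, in milliseconds
    print(f"{rpm:>6} RPM -> ~{latency_ms:.2f} ms average rotational latency")
```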

Disk Buffer: (AKA Disk Cache or Cache Buffer)
This is where data is stored on the drive temporarily, before it’s read from or written to the platters. This allows time for the disk to “catch up” if requests for reading and writing cannot be met immediately. Mainly, though, it makes reading from and writing to a drive more efficient and organized. Bigger is usually better (currently, most drives are 8-32MB in size), but there is some debate over the validity of this point. However, since Enterprise drives typically have bigger Caches/Buffers, take the MBs and run.

Interface (SAS, SATA, PATA, IDE, SCSI, TIN CAN AND STRING)
Each drive I/O interface has its own quirks and its own thresholds, but for our discussion, its main limiting factor is how much data the connection allows to flow through it at one time. SATA and SAS are the most common interfaces nowadays for single drives or small arrays. These interface types typically allow for more throughput than a single drive could ever deliver. Thus, a SAS or SATA connection is rarely your bottleneck, until you get into many “striped” drives (an array). Once you get into this realm, we move to other, more robust solutions. However, as we discussed in Part 1 of Choosing the Right Shared Storage Solution: Bandwidth & Connections, a fatter pipeline to and from the array may NOT be what you need.
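
If you want to ballpark when striping starts to outgrow a single link, here’s a minimal sketch. The link rates are the usual approximate usable-payload figures, and the per-drive throughput is an assumed round number – swap in your own:

```python
# How many striped drives does it take to saturate one link?
# Link rates are approximate usable payload rates; per-drive throughput
# is an assumed ballpark -- substitute your drives' real numbers.
PER_DRIVE_MBPS = 150

links = {"SATA 3Gb/s": 300, "SATA/SAS 6Gb/s": 600}
for name, link_mbps in links.items():
    print(f"{name}: ~{link_mbps / PER_DRIVE_MBPS:.0f} drives saturate the link")
```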

MTBF (Mean Time Between Failure)
Muy importante. The lower the MTBF, the less robust the drive is compared to others with “enterprise” branding. This means the drive may fail earlier in its lifetime of expected use, compared to an enterprise-class drive with a higher MTBF. That higher MTBF also comes at a price premium – better parts and stricter QC. If you want the steak, you gotta pay for it, lest you get ground chuck.
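
For a feel of what those hours actually mean, here’s a minimal sketch converting a quoted MTBF into a rough annualized failure rate. The two MTBF figures are illustrative stand-ins for consumer vs. enterprise drives, not numbers from any particular datasheet:

```python
# Rough annualized failure rate (AFR) from a quoted MTBF, assuming the
# drive runs 24/7. MTBF is a statistical population figure, not a
# promise about *your* drive.
HOURS_PER_YEAR = 24 * 365

for mtbf_hours in (600_000, 1_200_000):  # consumer-ish vs. enterprise-ish
    afr = HOURS_PER_YEAR / mtbf_hours
    print(f"MTBF {mtbf_hours:,} h -> ~{afr:.1%} chance of failure per year")
```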

Size
The bigger the better, ladies. More storage = more space for your important media: professionally ripped content for marketing campaigns, and pr0ntube, err, research.

Shades of Drives

Some manufacturers market their drives in easier-to-digest categories, as hard drives can have varying degrees of quality in their components. Let’s take Western Digital’s drive categorization to better illustrate this. WD has Green, Blue, and Black designations. Green drives are usually on the mid to slower side of the speed pool; they tend to spin at lower RPMs and have smaller disk caches. They are called Green because they tend to spin down when not in use. This saves energy, saves money, and saves Gaia. These are usually the least expensive drives. They are horrible for video usage – how can you play back fat video files if a drive decides to take a nap?

Blue drives are meant more for everyday computing and are commonly found in laptops. They are usually priced around the same, if not slightly more, than Green drives. These are the most common of the HDDs out there, and are mid-range in the speed race. Black drives are enterprise class, are usually the fastest, and have the highest MTBF. They can operate in warmer temperatures (drive arrays can get toasty) and, due to these extras, are the most expensive of the bunch. While I use Western Digital as an example, drives with specs similar to WD’s Black classification are what you WANT in a mass storage solution. These are not your father’s hard drives, and are typically NOT the ones you see in the weekly electronics flyer. You need to seek them out.

As we’ll examine later, determining the right combination of the right hard drives, then building software around them is paramount to decent performance.

Assemble the troops

That was just choosing the drives. That, in and of itself, is a pain. Now, we need a chassis to house the drives. When you buy a shared storage SOLUTION, the solution provider (manufacturer or vendor) has already factored in all of these drive variables and incorporated a drive chassis in order to create a turnkey package *just* for you.

When designing these turnkey solutions of drives and chassis, typically the Manufacturer or Vendor will:

  • build or OEM a chassis which holds the drives, and test the aforementioned drives IN the chassis to ensure performance is good and consistent
  • beat up on the drives for failure and performance with various software applications
  • systematically write data across the drives equally to ensure sustained performance as the drives fill up.
  • design management and sharing software for the user to actually USE the data, possibly alongside other users. I will cover this in detail in Part 3 of this 3-part shared storage blog.
  • write software to poll the drives to check for impending failures (health), as well as optimize the usage of their buffer(s)
  • last but most critical, protect the data (using RAID or better technology)

These solutions, if done right, rely on a battle-tested combination of hardware components. Off-the-shelf components, put together because the cables fit, will never deliver the performance a tuned system can. This is yet another reason why consulting a shared storage systems integrator or consultant assures you’re getting a best-of-breed solution – not product scattershot. I’m serious about this and I can’t stress this point enough.

I know, this is fun, right? Are you not entertained?!

We now move on to yet another acronym: RAID.

RAID, RAID, RAID. Oh how you complicate thee.

A RAID is a Redundant Array of Independent Disks. Take this scenario: suppose a drive fails (MTBF) once in 1 million uses. We don’t know *when* it will happen, we just know it probably will – and before 1 million uses. When this happens is up to chance, environment, and usage. Now, let’s say we RAID two drives together as one, because that would yield twice as much space and speed. This obviously increases the risk of a failure. Plus, if ONE (yes, ONE) drive starts to smoke – you’ve lost all of your data, because you striped the disks together. You can’t edit with half of every bit and byte gone. Given this truth, now multiply this by 4 drives. How about 16 drives or more? Russian Roulette, geek-style. Combining drives in this manner is known as RAID 0, the largest combination of throughput and capacity, but absolutely no support for preserving data if one drive dies. This absolutely bites when it comes to DATA AVAILABILITY.
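
Here’s the Russian Roulette math in sketch form: with RAID 0, the first drive to die takes the whole array with it, so the expected time to first failure shrinks roughly in proportion to the drive count (assuming independent, identical drives – the 1 million hour MTBF is illustrative):

```python
# RAID 0: the first drive failure kills the whole array, so the expected
# time to first failure is roughly drive_MTBF / N for N independent,
# identical drives.
DRIVE_MTBF_HOURS = 1_000_000  # illustrative figure

for n in (1, 2, 4, 16):
    array_mtbf = DRIVE_MTBF_HOURS / n
    print(f"{n:>2}-drive RAID 0: expected first failure ~{array_mtbf:,.0f} h")
```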

The loss of one drive in a RAID 0 array could be a massive problem for the video editor (you just lost the entire movie, no problem… right?!?). Since this is not acceptable in most circumstances, other RAID data-protection and performance formats have been developed that ensure there is some REDUNDANT distribution of data across multiple disks. (Ah yes, the R in RAID!) While there are as many RAID flavors as the day is long, let’s examine those you will probably find out in the wild when dealing with video shared storage solutions:

RAID 1: If RAID 0 doubles your storage, then conversely, RAID 1 cuts the cumulative size in half. Why? RAID 1 (AKA “mirroring”) ensures that in the event of the rapture, when half of all the drives in your array blow chunks, you don’t lose any of your data, because a 1:1 copy of the data has been made on that extra storage. This can cause a slight hit in throughput (after all, your data is being written twice), and as outlined, a massive hit in storage space. Those of you who have used Avid’s Unity (not ISIS) have used RAID 0 or RAID 1 for years – it’s all Unity supported. Most other shared storage solutions offer RAID 0 or RAID 1 as well.

RAID 2, 3, & 4: Outdated, or rendered unnecessary by RAID 5. Move along. BTW, whatever happened to Leonard Parts 1-5?

RAID 5 - Kills Data Loss Dead

RAID 5: Probably the most popular out there. It balances throughput and redundancy, with minimal overhead. This is achieved through parity. (Warning! Higher geek content: when data is written, parity data is computed and written alongside your video data across the drives. In the event a drive fails, this parity data is combined with the surviving media to recreate the lost media for your enjoyment. As one can imagine, this decreases performance slightly: not only while the initial parity data is created and written, but also, if a drive dies, while the array recreates the media for usage in real time.) All this being said, RAID 5 allows for the speed benefit afforded by a RAID, along with good redundancy. As a bonus, if a drive dies, most shared storage chassis can rebuild the lost media once a correct drive is inserted into the chassis to replace the dead one – restoring your array to its former glory. Just give it until tomorrow; the rebuild takes a bit of time, and you will see a modest performance hit while it runs. But hey, your data’s not gone!
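
For the higher-geek-content crowd, here’s a toy demo of the XOR parity trick, with single bytes standing in for whole stripes of video data. (Real RAID 5 also rotates the parity blocks across the drives, but the rebuild math is the same idea.)

```python
# Toy demo of RAID 5-style parity: parity is the XOR of the data blocks,
# so any single lost block can be rebuilt from the survivors.
d1, d2, d3 = 0b10110100, 0b01101001, 0b11100010  # three "data blocks"
parity = d1 ^ d2 ^ d3                            # written alongside the data

# Drive 2 dies; rebuild its block from the remaining data plus parity.
rebuilt_d2 = d1 ^ d3 ^ parity
assert rebuilt_d2 == d2
print(f"rebuilt block: {rebuilt_d2:08b}")
```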

RAID 6: Very similar to RAID 5, although the user has one more guard at the gate: 2 drives’ worth of parity are written instead of 1, so the array can survive TWO dead drives and keep on functioning. Same basic performance and storage hits as RAID 5, minus one more drive’s worth of space. RAID 6 is slightly less common when seeking out video RAIDs.
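
Putting the levels side by side, here’s a quick sketch of usable capacity for a hypothetical 8-bay chassis of 1TB drives (before the base-2 haircut and fill-level padding we’ll get to below):

```python
# Usable capacity for common RAID levels, given N identical drives:
# RAID 0 keeps everything, RAID 1 mirrors (halves), RAID 5 gives one
# drive's worth to parity, RAID 6 gives two.
def usable_drives(level: str, n: int) -> float:
    return {"RAID 0": n, "RAID 1": n / 2, "RAID 5": n - 1, "RAID 6": n - 2}[level]

N, DRIVE_TB = 8, 1  # a hypothetical 8-bay chassis of 1TB drives
for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6"):
    usable = usable_drives(level, N) * DRIVE_TB
    print(f"{level}: {usable:.0f} TB usable of {N * DRIVE_TB} TB raw")
```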

I usually ballpark a 12-20% hit on storage space AND throughput for your shared storage solution to handle RAID 5. This varies by manufacturer, but the 12-20% plays heavily into my storage formula at the end of this article. I know; no one wants to lose space, but it’s better than losing half of your space with RAID 1, or having no redundancy at all, a la RAID 0.

Other less popular RAID formats include RAID 0+1, RAID 1+0 (AKA RAID 10), RAID 0+3, RAID 3+0, etc. Consult your local storage geek if you *realllllly* want to delve into these.

It should be noted that a RAIDSET can be created at either the hardware level or the software level. In Windows or on OS X, for example, you can RAID drives (usually RAID 0 or 1, rarely RAID 5) from within the OS (Computer Management–>Storage and Disk Utility, respectively). This is a software RAID, and while it works, it’s usually not as fast or bullet-proof as a hardware RAID, which (if available) is done on the chassis which contains your drives. Most hardware RAID controllers are designed specifically for RAID 1, 5, and 6, and hardware RAID is faster than software RAID at managing the layout of data and parity bits.

Now for my patent pending formula.

Let’s take a 1 TB drive. Small, and easy on my math challenged brain.

As you probably know, a marketing 1TB is not equal to 1TB of useable storage. This hard drive loss is due to base 2 math rather than base 10. (1,000 bytes = 1 kilobyte in marketing; 1,024 bytes = 1 kilobyte in reality.) (Editor’s note: Thanks Michael!) Thus, when you begin to multiply bits, bytes, kilobytes, megabytes, gigabytes, etc., you end up with less than 1TB. And of course, marketing wins: 1TB is easier to sell than 930GB. So, we’re saddled with a 7% loss. Keep that number written down.
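
Here’s that haircut as a two-liner, assuming the OS counts a gigabyte as 2^30 bytes:

```python
# "1 TB" on the box is 10**12 bytes; the OS counts in powers of two.
marketing_bytes = 10**12
reported_gb = marketing_bytes / 2**30  # what the OS reports as GB

print(f"1TB on the box -> ~{reported_gb:.0f}GB in the OS "
      f"(a {1 - reported_gb / 1000:.1%} haircut)")
```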

We now need to initialize the drives and format them into a RAID. Let’s say we go with RAID 5 – the best balance of speed and redundancy. RAID 5 in hardware can cost between 12-20% in lost space due to the aforementioned redundancy. Again, this is different for each manufacturer, so no need for the math hate mail. Let’s use 15%, and subtract that from 930GB. This comes out to approx. 790GB. So, now we’re down 210GB from the advertised size.

As I mentioned earlier, performance (throughput, in this case) can decrease as the drive fills up, if the data is written sequentially on the disk(s). Think of trying to pull a toy from the bottom of a box of cereal: it’s tougher to get to when the box is full of cereal than when it’s empty. Some shared storage manufacturers (Facilis comes to mind) scatter the data around the drive, so a user never sees a performance hit: performance is equal regardless of the amount of free space. That’s the minority, though, so the magic number before a noticeable loss in throughput seems to be around 80% full. Thus, we subtract another 20% from the 790GB. This comes out to 632GB.

That’s right. Of that shiny new 1TB drive, once introduced into a RAID 5 RAIDSET, and given some room for performance, we have nearly a 40% space loss.
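
For the spreadsheet-inclined, the whole walk-through collapses into one little function. The percentages are my ballparks from above, not gospel – plug in your vendor’s real numbers:

```python
# The "patent pending" formula: advertised size -> what you can actually
# fill before performance sags. All percentages are ballpark figures;
# substitute your vendor's real numbers.
def usable_tb(advertised_tb: float,
              base2_loss: float = 0.07,    # marketing math vs. base-2 math
              raid5_loss: float = 0.15,    # parity overhead (12-20% typical)
              fill_ceiling: float = 0.80,  # keep the array ~80% full, tops
              ) -> float:
    return advertised_tb * (1 - base2_loss) * (1 - raid5_loss) * fill_ceiling

print(f"Usable: {usable_tb(1.0) * 1000:.0f}GB of an advertised 1TB")
```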

Useable Storage Space vs. Advertised Storage Space

So concludes Part 2 of our 3-part series. Stay tuned for Part 3: Management, Permissions & Support. Same bat time, same bat channel.

Special Thanks: David Sallak of Isilon.

You can also view the entire 3 part series here: https://www.youtube.com/playlist?list=PLdrhoSWYyu_WJ7QUOLcBALfCHbS0d6fC2