
July 14, 2010



the storage anarchist

There are other arrays that offer similar and/or additional optimizations for efficiently handling zero-blocks.

One such example is VMAX, which tracks the state of blocks via metadata flags that record "should be zero" and "never written by host", for both Thick and Thin devices. Through intelligent use of these flags, operations like "Block Zero" are accelerated by merely updating the metadata, with the actual zeros written asynchronously after the request is acknowledged. Similarly, this metadata enables VMAX to minimize data transfers for Copy/Clone/Replicate requests to only the non-zero blocks that have actually been written, for both Thick and Thin devices.
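The flag-based scheme described above can be sketched in a few lines of Python. This is purely an illustration of the idea, not EMC's actual implementation; the class names, flag names, and structure are all assumptions for the sake of the example.

```python
# Illustrative sketch of per-block metadata flags like those described
# above ("should be zero", "never written by host"). Not EMC's actual
# implementation; all names and structure here are assumptions.

class Block:
    def __init__(self):
        self.should_be_zero = True   # logically zero until proven otherwise
        self.never_written = True    # host has never written this block
        self.data = None             # backing data, populated on write

class Volume:
    def __init__(self, nblocks):
        self.blocks = [Block() for _ in range(nblocks)]
        self.zero_queue = []         # blocks to be physically zeroed later

    def block_zero(self, lba, count):
        """Accelerated 'Block Zero': update metadata only; the physical
        zeroing happens later, asynchronous to the request."""
        for b in self.blocks[lba:lba + count]:
            b.should_be_zero = True
            self.zero_queue.append(b)   # background task writes real zeros

    def write(self, lba, data):
        b = self.blocks[lba]
        b.should_be_zero = (data == b"\x00" * len(data))  # zero detection
        b.never_written = False
        b.data = data

    def clone_to(self, target):
        """Copy only blocks that are non-zero and have been written."""
        copied = 0
        for src, dst in zip(self.blocks, target.blocks):
            if src.never_written or src.should_be_zero:
                continue                 # nothing to transfer for this block
            dst.data = src.data
            dst.should_be_zero = False
            dst.never_written = False
            copied += 1
        return copied
```

Cloning a mostly-unwritten volume then transfers only the few written, non-zero blocks, which is the replication saving described above.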

These are VMAX features that are available today; one can expect even more optimizations in the not-too-distant future (and not only for VMware environments).

marc farley

Thanks Barry. It's interesting that VMAX has thick volumes that can identify unwritten blocks of data. Of course the problem with thick volumes is that they consume unnecessary capacity with or without the metadata. Keep working on that thin provisioning!

the storage anarchist

VMAX Virtually Provisioned volumes also track unwritten and zero blocks. VP volumes can be thin (on-demand) and/or partially pre-allocated, or fully pre-allocated, at the customers' discretion.

As the rate of customer adoption of VP accelerates, the efficiency of traditional "thick" volumes eases the transition and minimizes I/O and replication overhead for unused (or unneeded) data blocks, just as for "thin" (VP) volumes.

I notice that you've never mentioned having similar optimizations, BTW.

marc farley

No, Barry, we do not have a "feature" whereby our thin volumes can actually be thick - what a concept!

I can see where the things you've done to make thick volumes more efficient for replication would have value for some, but it's mostly compensating for the unfortunate reality that VP is a second-tier implementation of TP.

EMC's zero handling is a welcome development here at 3PAR. For starters, people started thinking about this technology as "zero page reclaim" because HDS beat everybody out the door with their special-case feature. What isn't obvious to most people yet (though EMC has apparently seen the light) is that there is actually a lot more potential in zero detection and handling. For instance, the work 3PAR did with Oracle to shrink the footprint of Oracle databases with a safe, real-time (non-bloated) process is something I know got your attention. http://www.storagerap.com/2010/04/3par-countdown-storage-reclamation-with-oracle.html

We don't always like going it alone at 3PAR with new technology. It's harder for us to raise awareness than it is if a big competitor like EMC is also promoting it. We want people to think beyond reclaim - to other things like efficient data copies, cloning, migrations and WRITE SAME.
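For readers unfamiliar with the last item: WRITE SAME is the T10 SCSI command that lets a host send one block of data and ask the array to replicate it across a range of LBAs. A thin array can special-case the all-zero pattern and reclaim space instead of writing zeros. The sketch below is a hypothetical array-side handler, not any vendor's code; the function and parameter names are assumptions.

```python
def handle_write_same(pages, lba, count, pattern):
    """Hypothetical array-side WRITE SAME handler. 'pages' maps LBA ->
    backing page contents (None = unallocated/thin). An all-zero
    pattern frees the backing pages rather than writing zeros
    block by block; any other pattern is replicated normally."""
    if pattern == b"\x00" * len(pattern):
        for i in range(lba, lba + count):
            pages[i] = None          # reclaim: drop the backing page
        return "deallocated"
    for i in range(lba, lba + count):
        pages[i] = pattern           # replicate the single pattern block
    return "written"
```

One zero-filled WRITE SAME from the host thus becomes a metadata-only deallocation on the array - the "beyond reclaim" efficiency the comment is pointing at.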

So thanks for your comments, they are definitely appreciated.

the storage anarchist

Don't break your arm patting yourself on the back - EMC has had as much to do with Oracle reclaim as 3PAR did, as it has with Symantec's WRITE_SAME work and the T10 standardization efforts.

As to your backhanded slight against VP - take care. It could be that VMAX VP alone already has more thin GB under management than does 3PAR. Add in CLARiiON and Celerra VP, and yours is a shrinking slice of the pie...

BTW - pre-allocating a "thin" device is one way to overcome the huge impact on performance that chunk fragmentation has on 3PAR arrays - we actually added it based on direct feedback from 3PAR customers (so thanks!). Avoids having to optimize the devices to regain performance - a task that is reportedly dog-slow on your kit.

Preallocation also allows storage and DB admins to sleep at night knowing that a runaway application isn't going to consume all the available space and crash things should the database need to autoexpand. You decry the notion of pools; customers frequently compliment us for our recognition that all applications are not equal, and thus unrestricted sharing is not always desirable.

And then, we do have customers with 1+PB usable arrays that are 100% VP as a single large pool.

Finally, FAST VP will create the tipping point...we're already seeing the adoption spike starting in anticipation of the first *real* automated sub-LUN tiering.

Thanks for the discussion...it's good to cut through the hype and FUD every once in a while...

marc farley

Oh my! Now that's the Barry Burke we were looking for! That's entertainment!

the storage anarchist

I hope I earned another satirical video episode with that last one!


marc farley

It's close. You never know...

Constancia Fairchild

The reason why VMAX must provide a thickened thin volume may be that only through their thin provisioning (VP) implementation can you get more automated striping. Thus, if a customer wanted the new striping benefit but, for whatever reason, didn't wish to use thin provisioning, then EMC had to provide an option for pre-allocated thin volumes. However, I understand that EMC thin volumes (thickened or not) are so-called 'cache devices' and so, if used extensively, may cripple performance.

