Nate at Techopsguys has put together a comparison of SPC-1 benchmarks with six different bar charts showing the various characteristics of the configurations, performance and cost.
Here's an example of what's in his post.
A couple weeks ago, one of the major storage vendors had two major problems to resolve after one of their arrays suffered a firmware bug-induced failure at one of their cloud (email) service provider customers. They had to:
Meanwhile, their service provider customer had four major problems to resolve:
A vendor employee tried to address their public relations problem this way in his blog:
"OK, I'll take the blame for this -- sort of. We pride ourselves in putting a lot of thought into our customer designs. I'd argue that we're really, really good at it as well.
But not everyone is 100% sure of how their application will grow over time -- unfortunately, we're not psychics. And, let's be honest, not everyone necessarily wants to pay for redundancy we like to put into our designs.
We don't always get to directly engage all the time, either -- with products such as the (blanked out), most of this stuff moves through the channel. Somebody calls up one of our partners, says that they want to buy one of our products, and one gets sold -- and a lot of product gets sold that way."
I understand the desire to explain how messes become messy, but I'm not sure why he felt the need to speculate that his company's business partners or their customer's budget were key elements of the problem. That is tantamount to saying, "All of our (blanked out) customers could have the same thing happen to them too." Anybody who has ever been close to one of these melt-downs knows there are many variables involved - including vendors underbidding each other and shaving elements from their bids in order to win the business.
From a distance, it looks like the vendor's response to the customer was good, although there apparently were some issues with failure notification from the array when the event occurred. I wouldn't call these sorts of things "Perfect Storms", but there are unfortunate scenarios where multiple things go awry. All vendors have these sorts of bad days, which serve as painful learning experiences. Unfortunately for customers, it's one of the ways vendors improve their customer support processes.
The customer also wrote in his blog, explaining the situation to their customers:
"Our SAN vendor analyzed the system logs for the event and determined that the service processor failure occurred due to a unique bug in the specific version of firmware on the system. Our vendor performed an emergency upgrade. The newer version of firmware includes a fix for the bug. We are taking additional corrective actions to make certain that there is enough spare capacity on the SAN. This will assure it performs without performance degradation in the event of a single hardware failure."
The reparation sounds reasonable, but it's not what I would call best of breed either. I'll explain why in the remainder of this post.
The explanation the service provider gave to their customers was only half correct. Yes, the failure in one controller was due to a firmware bug - and yes, all vendors discover some of these bugs at customer sites - but the inability of the surviving controller to handle the workload was another matter altogether.
The major defect of all dual controller designs for service provider applications is the uselessness of write cache when operating in degraded mode on a single controller.
When a dual controller array has a controller failure, all traffic fails over to the surviving controller. However, this controller can't afford to place writes in cache, because if it also fails, any un-flushed writes in cache would be lost - making the recovery process all the more painful. As a result, the throughput of the controller degrades significantly: writes now take several orders of magnitude longer to process, since each write must be completed at the physical disk level instead of in fast cache memory. When you consider the sort of read/write ratios involved with an email application (heavy writes), it's not surprising to hear that it took 32 hours for the system to get caught up. I suspect that if the surviving controller had been able to use write cache, the customer might have experienced some service level problems, but nothing nearly as bad as what they suffered.
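To put rough numbers on the effect, here's a minimal sketch. The latency figures are invented for illustration - not measurements from any particular array - but the shape of the result holds: acknowledging a write from mirrored cache takes microseconds, while waiting for the physical disk takes milliseconds.

```python
# Illustrative sketch: the latency figures below are assumptions for the
# example, not measurements from any real array.

CACHED_WRITE_LATENCY_S = 0.0002   # ~200 us: ack after mirroring to partner cache
DISK_WRITE_LATENCY_S = 0.008      # ~8 ms: ack only after the physical disk write

def max_write_iops(latency_s: float, queue_depth: int = 32) -> float:
    """Rough ceiling on sustained write IOPS for a given per-write latency."""
    return queue_depth / latency_s

normal = max_write_iops(CACHED_WRITE_LATENCY_S)    # write-back (both controllers up)
degraded = max_write_iops(DISK_WRITE_LATENCY_S)    # write-through (one controller left)
print(f"write-back:    {normal:,.0f} IOPS")
print(f"write-through: {degraded:,.0f} IOPS")
print(f"slowdown:      {normal / degraded:.0f}x")
```

Even with these made-up numbers, the degraded mode is dozens of times slower - which is why a heavy-write email workload can take more than a day to catch up.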
Write performance during array component failures is an important point that many customers give insufficient weight to when making their purchases. Public service providers certainly need to understand this. The exact same scenario - controller failure and subsequent drop in service levels - could certainly happen to a traditional data center customer, but the ramifications of this scenario are not as ugly as they are for a multi-tenant public service provider.
This case is a perfect example of how an older architecture is incapable of meeting the requirements of the new cloud service business model. If you are a cloud service provider reading this and wondering if you might have a similar exposure to a controller failure (including 3PAR customers with dual-controller arrays), my advice is to review what you have and start thinking about what you should expect if you have a controller failure and how you might want to deal with it on both a short-term and long-term basis. Best of breed cloud storage should not include dual controller arrays.
One of the identified corrective actions is having "enough spare capacity on the SAN", which in this case involves installing a second array. Without knowing the inside scoop, it looks like the idea is to split the workload across the two arrays so that if a controller failure occurs in either array, the performance drop won't be as noticeable. The array that doesn't suffer the failure will keep working as expected and the array that has the failure will only have half the load to deal with.
There are several problems with this "fix":
You are always going to have performance degradation of some sort when you can't use write caching, unless you are only reading data - which isn't the case here. It is flat out wrong to assume that a performance problem will not occur. Regardless, with the new two-array SAN, whichever system has the controller failure should be able to get caught up much faster than the 32 hours this customer had to wait. Of course, the customer's capacity and I/O load will almost certainly increase over time, and as that happens, the strategy of splitting the load between two arrays loses its effectiveness.
Along with adding the controllers, they are also certainly adding disk drives, and some notion of what "reasonable" utilization limits should be for them. The problem with limiting utilization as a best practice is that it puts the stamp of approval on inefficiency - not only for capacity utilization but also for the power and cooling required to support all those underutilized drives. Most legacy arrays have built-in inefficiencies in the way data is laid out on disks, making it virtually impossible to achieve uniform utilization across all disk resources. The result is uneven consumption of disk capacity, as well as uneven I/O service levels among different disk groups, which is another variable in how much performance degrades following a controller failure in a dual controller array.
Finally, the customer now has two arrays to manage, including multipath connections, SAN zones, and all other aspects of the configuration, which all contribute down the road to change management complexities. The result is a net drag on administrator effort and an increased TCO.
A true best of breed solution would address the root-cause deficiency in the array's design, without creating additional management and cost burdens to the customer. Obviously, more than two controllers are needed. But how many controllers does a cloud service provider need in an array? The answer is at least three. Why? Because when a single controller fails, there can still be two surviving controllers working together, mirroring their cache contents, and performing fast writes to cache memory. That said, controllers are usually packaged in pairs for redundancy purposes, which means that the most likely configurations will have four controllers.
If you compare a single quad controller array with two dual controller arrays there are some key advantages that immediately jump out:
The next question is: "Is there a suitable quad controller array that the customer could have used instead of the two dual controller arrays they have?" Yes, 3PAR's F400 and T400 arrays are both quad controller arrays. The disk drives in these arrays can be either SATA or FC, or a mix of both types if the customer wanted to implement tiering. Product information for the F400 can be found here, and the T400 here.
However, simply putting four controllers in an array does not guarantee that they will be able to sustain write caching if one of them fails. The array must have the ability to remap and re-mirror the write cache contents of all four controllers to the surviving controllers following the loss of a controller. It's an interesting geometric sort of problem: there are four controllers, each with its own cache plus cache that is mirrored from the other controllers in the array. All cache contents, including mirrors, need to be distributed evenly across all controllers to avoid congestion and load imbalances. All cache content, including mirrors, needs to be accounted for within the array so that if a controller fails, the other controllers will be able to identify all the surviving original and mirrored copies of data. For cache data that has lost either a primary or mirrored copy, a second (new) copy needs to be made. Finally, the amount of data in cache may need to be re-leveled (decreased) to fit into the degraded cache capacity (3 controllers instead of 4).
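The re-mirroring step above can be sketched in a few lines. This is a simplified model of my own, not 3PAR's actual algorithm: each cache page has one primary and one mirror copy on distinct controllers, and after a failure any lost copy is replaced round-robin across the survivors.

```python
from itertools import cycle

def remirror(pages, failed, survivors):
    """pages: list of (primary, mirror) controller ids for each cache page.
    Replace any copy that lived on the failed controller, spreading new
    copies round-robin across survivors while keeping the two copies of
    each page on distinct controllers."""
    targets = cycle(survivors)
    healed = []
    for primary, mirror in pages:
        if primary == failed:              # promote the surviving mirror
            primary, mirror = mirror, None
        elif mirror == failed:
            mirror = None
        if mirror is None:                 # make a fresh second copy
            mirror = next(t for t in targets if t != primary)
        healed.append((primary, mirror))
    return healed

# Four controllers (0-3), each mirroring to the "next" one; controller 3 dies.
pages = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(remirror(pages, failed=3, survivors=[0, 1, 2]))
```

After the pass, every page again has two copies on two distinct surviving controllers - which is exactly the property that lets the array keep write caching enabled in degraded mode.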
Last year I made a 9-minute video describing how Persistent Cache works. Here it is again. Thanks for watching.
Posted at 04:40 AM in 3P, 3PAR, bloggers, cloud computing, clustered storage, Compellent, customers, Dell, EMC, enterprise storage, Exchange, green computing, HDS, HP, IBM, mid range storage, multi-tenant storage, performance, SAN, storage companies, storage management, storage services, utility computing, video | Permalink | Comments (6) | TrackBack (0)
Tags: 3PAR, best of breed, cloud, cloud storage, failures, performance, storage, write cache
Technology integration makes computing products much easier to use and significantly drives down the cost and effort of owning them. For instance, technologies such as WiFi that were recently beyond the grasp of most people are now inexpensively integrated into PCs and usable by almost anyone.
The trick with integration is understanding what variables should be exposed - or as my friend Rick Vanover likes to say - how many knobs there are to turn. End user and infrastructure provider requirements differ considerably when it comes to knobs. For instance, Apple computers are great end user machines because they lack knobs, but are not always loved by technology professionals for the same reason. Data center operators need products with knobs in order to accommodate all the cross-purposed requirements that stretch beyond a one-size-fits-all design.
So knobs are generally good - but like so many things - their usefulness depends on how effective they are and their station in FARLEY'S HIERARCHY OF KNOBS, which includes the following levels:
Suicide Knobs: knobs that delete data and make things blow up. A good example of a Suicide Knob is something that formats storage.
Prison Knobs: knobs that make changes that are very difficult or impossible to reverse. Many storage provisioning knobs fall into this category. Once you provision and reserve storage with most storage arrays today, you are stuck with that decision until the array's EOL.
Faux Knobs: knobs that never seem to do anything, no matter how far you turn them. They exist for features past and future, but not now.
Random Knobs: knobs that produce unanticipated results that can go unnoticed for years. These are the knobs that fuel the technical publishing industry.
Slippery Slope Knobs: knobs that start you down a path to ruin through a chain of system dependencies. These are the knobs you spend a lot of money to learn about in vendor classes.
Dumb Ass Knobs: knobs that do things, but not anything useful. Granted there is a LOT of subjectivity in making a call on a dumb ass knob - but we all agree they exist.
Honest Knobs: knobs that actually do something you need them to without having to plan for weeks on how to use them. Most knobs should fall into this category, but alas!
Magic Knobs: knobs that do things so useful it makes you wonder how anybody thought of a knob like that. Most of these knobs are actually Honest Knobs, but we are so accustomed to seeing Suicide, Prison, Faux, Random, Slippery Slope and Dumb Ass knobs that we are blown away by a truly great Honest Knob.
I'd like to say I was surprised yesterday when graphically-challenged Hitachi announced their intention to sell their own Unified Cloud Graphic (complete with Hitachi compute servers!). But it wasn't a big shock considering their marketing strategy of "just copy it".
I really don't know how they expect their graphic to compete with vBlock's graphic, with all the color, multiple font sizes and graphics within graphics.
What's missing from both stack graphics are the knobs that administrators use to get real work done. Yes, knobs tend to be part of the underlying details, but to anybody who actually uses a product, they are very important details. The detail that C-level executives need to understand is that the stack does not have nearly the automation that is being promised today and that administrators will be doing a lot of work, turning the knobs that the stack provides. Again, it's not the number of knobs that matters, as much as it is the quality of those knobs.
Some people have speculated that the vBlock was a knob-less invention that originated in the board rooms of the VCE companies. Some have even suggested that it was the fallout after a failed acquisition bid by Cisco to acquire EMC. I don't know if THAT's true, but there is some evidence that the engineering groups in the companies involved have been scrambling to put meat on the bone.
Maybe someday stacks will be the next big thing, but I don't see it playing out that way unless an awful lot changes in the underlying products that make up the stack. Here's my take on STACK WARS:
STACK WARS give everybody something to write about - me included, right now!
Bloggers that write about stacks have a chance of getting jobs with stack vendors. If you are out of a job, start a stack blog today and twitter your back-stack off!
Stacks are all about packaging. Stacks will be assembled and shipped together (presumably), which could make things easier if your goal is to streamline receiving.
Stack products are actually more services than products. However, if you ever want to make configuration changes in your stack, it might not be economically feasible (think gigantic FRUs). For example, there is not a lot of flexibility in vBlock's configurations.
Due to the limited configuration options, stack resources are not likely to be used very efficiently and the economic return on the investment will lag. However, EMC customers are already accustomed to low storage utilization levels - so poor utilization might not be THAT big a deal. Definitely a weird way to win a point, but I'll concede it grudgingly.
The business advantage of integration should be much lower costs. However, the VCE companies all need to maintain their margins if they want to satisfy investors. It's not clear how they will be able to leverage the integration effort to reduce the cost of vBlock, but then again if STACK WARS turn into PRICING WARS for STACKS, things could get very interesting. IBM must be STACKING up something - after all Hitachi already beat them to the punch.
The C-level view of stacks is that they smooth out purchasing and operations expenses by providing a smaller number of Purchasing Knobs (that would be a Faux Knob). John Nash posted in his blog last week, "The Case for the vBlock":
What is interesting is that, usually, the higher up in an organization you are communicating the better the Vblock conversation goes. Remove the detailed technical questions and the value of the Vblock idea really shines. You get a known “product” from trusted sources. You get known costs today as well as known costs for future expansion. It greatly removes the risk from the organization with unknown infrastructure expenses.
There you have it, vBlocks will be sold from the top down by Cisco and EMC - companies that are good at selling from the top down, which will make it somewhat easier for the VCE companies to justify their price tag. But that won't make the price any easier to swallow.
As Nash wrote, "remove the detailed technical questions and the value of the Vblock idea really shines." That's like saying chapulines (fried grasshoppers) might appeal if Anthony Bourdain is talking about them on TV, but your own personal experience chewing and swallowing them might be different. I'm not talking about price here, I'm referring to the experience of running the vBlock. There is going to be a lot more involved than the knob-less graphics portray.
The weakest link in the vBlock chain today is EMC's contribution. There are far too many Prison (provisioning) and Slippery Slope Knobs in EMC storage. They aren't the only vendor with this problem, but they are the E in VCE. Provisioning storage with a V-Max is about the same as it was with a DMX - despite what EMC employees would have you believe.
Prison Knob provisioning creates a lot of problems for customers as storage ages and as demands shift. Once storage has been reserved for usage in an EMC system, it is pretty much bound to that purpose.
My advice is to buy the products with the most Magic Knobs and avoid those with the most Prison provisioning Knobs. If you have ever felt trapped by a storage configuration that you couldn't live with or afford, you know what I'm talking about. Magic Knobs are those that reduce the effort to manage and change storage, increase the efficiency of storage and provide the most versatility for all applications, workloads and multi-tenancy.
Posted at 01:18 PM in 3PAR, Cisco, cloud computing, EMC, enterprise storage, HDS, Hitachi, IBM, mid range storage, reservationless, storage companies, storage services, utility computing, virtualization, VMware | Permalink | Comments (3) | TrackBack (0)
Tags: Cisco, EMC, Hitachi, knobs, provisioning, Stack, storage, v-block, VCE, VMware, Wars
Yesterday, 3PAR announced Adaptive Optimization (AO), our solution for storage tiering and support for SSD flash drives. Here are the elements of this technology that I believe will have the most impact on customers and the rest of the industry.
1) Tiering works by copying data from lower-cost, low-IOPS storage to high-IOPS storage - and back again. Storage tiering has been associated with ILM, which assumed data is initially located on more expensive, high-IOPS storage and, as it ages and is accessed less frequently, is moved to lower-cost, low-IOPS storage. The perception that tiering implies fast-to-slow data migration was reinforced by Compellent with its early-entrant storage tiering technology, Data Progression.
The economic benefits of tiering are much more compelling if data is originally located on low-IOPS storage and then moved to high-IOPS storage when it becomes useful to do so. This reduces the amount of high-IOPS storage that needs to be purchased and reserves high-IOPS storage for the applications that need it the most. This model of promoting data to high-IOPS storage will replace the old model of data "trickling downhill to cheap storage."
2) Sub-volume tiering means high-IOPS storage can be reserved for high-IOPS work and effectively shared by the applications that need it the most. AO copies data in 128 MB sub-volume regions that contain specific RAIDed volume slices. Many physical and virtual servers can have their volume's most active regions located in high-IOPS storage capacity at the same time.
Data redundancy is accomplished when AO reads data from its source region and restripes it into a region on the target tier - using the RAID level of the target. AO allows data to be protected by whatever RAID is appropriate for the tier and the data. 3PAR's chunklet architecture is maintained for SSDs, which means the SSDs in an InServ array can be used with several different RAID levels simultaneously. Every vendor's sub-volume tiering technology will be different, including the number of ways devices can be combined in RAID and how wide striping can be applied.
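As a toy model of a region migration, here's a sketch using the 128 MB region size from above. The `Region` shape and the `TIER_RAID` policy table are my inventions for illustration (echoing the profile example later in this post), not 3PAR's internal representation.

```python
from dataclasses import dataclass, replace

REGION_SIZE_MB = 128  # AO moves data in 128 MB sub-volume regions

@dataclass(frozen=True)
class Region:
    volume: str
    offset_mb: int    # position of the region within its volume
    tier: str         # "SATA", "FC" or "SSD" (illustrative labels)
    raid: str         # RAID level applied on the current tier

# Invented tier-to-RAID policy for the sketch
TIER_RAID = {"SATA": "RAID6", "FC": "RAID5 (7+1)", "SSD": "RAID5 (3+1)"}

def migrate(region: Region, target_tier: str) -> Region:
    """Restripe a region onto the target tier using that tier's RAID level;
    the region's identity (volume, offset) is unchanged."""
    return replace(region, tier=target_tier, raid=TIER_RAID[target_tier])

hot = Region("mailvol", offset_mb=4096, tier="SATA", raid="RAID6")
print(migrate(hot, "SSD"))  # same volume and offset, now on the SSD tier's RAID
```

The point of the sketch: the data keeps its logical identity while its protection scheme changes to whatever is appropriate for the destination tier.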
3) Tiering does not mean you have to buy SSDs to make it pay off. Tiering is a cost-reduction technology. One of the most obvious ways to reduce the cost of storage is to buy cheaper disks with higher capacity, such as SATA drives.
The regions used by AO are the same on-disk structures that 3PAR uses for its Dynamic Optimization (DO) software that re-levels volumes across disk drives in an InServ array. A customer with all FC drives in an InServ array could take advantage of both AO and DO by increasing the capacity of an array with SATA drives, using Dynamic Optimization to redistribute their volumes across the SATA drives and then using FC drives as their high-IOPS AO tier. This way, they can continue to get the IO rates they expect, but reduce the cost of incremental capacity as they upgrade their system.
4) The system determines what to move and how to move it. I/O density rate is a term that refers to how much data access occurs in a region over a given amount of time. AO recognizes region candidates for tiering by their I/O density rates.
Administrators control the AO participation for each volume by assigning them to an AO Profile and a QoS Gradient. The profile is a short stack of device-RAID levels, such as SATA RAID 6, FC RAID 5 (7+1) and SSD RAID 5 (3+1). AO allows either two or three device-RAID levels in the profile's stack.
The QoS gradient is a relative determinant of how quickly the volume will be acted upon. I like to think of it as something like different viscosities for different fluids, but for storage. AO today has three QoS gradients: performance, cost and balanced.
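Here's a hypothetical sketch of how candidate selection might work, combining measured I/O density with a per-gradient threshold. The threshold values are invented for illustration; 3PAR's actual decision logic is internal to the product.

```python
# Invented per-gradient thresholds; not 3PAR's actual values.
GRADIENT_THRESHOLDS = {     # minimum IOPS per GB before a region is promoted
    "performance": 1.0,     # promote eagerly
    "balanced": 5.0,
    "cost": 20.0,           # promote only the very hottest regions
}

def promotion_candidates(regions, gradient):
    """regions: {region_id: measured IOPS/GB}; return the ids whose I/O
    density justifies moving them up a tier under the given gradient."""
    threshold = GRADIENT_THRESHOLDS[gradient]
    return sorted(r for r, density in regions.items() if density >= threshold)

sampled = {"r1": 0.2, "r2": 7.5, "r3": 42.0}
print(promotion_candidates(sampled, "balanced"))  # ['r2', 'r3']
print(promotion_candidates(sampled, "cost"))      # ['r3']
```

The same sampled densities yield different promotion lists under different gradients, which is the "viscosity" idea: the gradient controls how readily data flows to the fast tier.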
Back in November, Tony Asaro wrote about his discussions with HDS' storage customers regarding storage tiering.
Another discussion was around using policies to automate the process. One group was a bit concerned about automating this process but realized that, again, with PBs of data being stored that the only way to effectively implement intelligent tiered storage is via automation. Additionally, it is not an all or nothing proposition. You can select certain volumes and applications to implement and gain a comfort level before deploying more widely. One of the key tenants of technology is to automate otherwise manually cumbersome processes. We just need to get over that hurdle but we need to do so in a planned, considered and reasoned way.
By applying measured I/O density rates, AO profiles and QoS Gradients, 3PAR has taken the first major steps to automating storage tiering and removing the burden from administrators.
5) Tiering can and should scale out. David Floyer from Wikibon wrote a good piece yesterday on our announcement where, among other things, he discussed how 3PAR is using smaller SSDs spread over more controllers:
....it spreads a small amount of SSD amongst the 3PAR engines so the IO’s aren’t all going to a single drive and sucking up a lot of bandwidth – it’s nicely balanced. Traditional implementations will use larger drive with more IO’s going to that drive. The part of the array with that drive will get more activity.
In practice we don’t think this will matter all that much because, for example, EMC’s V-Max has more bandwidth to play with than 3PAR and EMC uses its cache to transfer data between tiers to avoid bottlenecks. Nonetheless, on paper, the 3PAR implementation looks to be more efficient which means (in theory) it can do more with less flash. But nobody really knows yet.
3PAR storage arrays avoid I/O bottlenecks by incorporating tiny virtual storage elements (chunklets) and spreading the workload over as many devices and controllers as possible. This approach differs from other vendors', where smaller groups of resources are created and then combined into larger constructs that are more cumbersome to manage and tune than a single widely distributed storage span. The same concepts apply to SSD integration, where InServ arrays accommodate many smaller-sized SSDs for scaling out high-IOPS tiers for those customers who may want to expand their use of AO in the future.
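A simplified illustration of the chunklet idea follows. The chunklet size and the plain round-robin placement are assumptions for the sketch, not the actual InServ layout; the point is only that carving volumes into many small pieces lets every drive carry an even share.

```python
CHUNKLET_MB = 256  # assumed chunklet size for this sketch

def lay_out_volume(volume_mb: int, drives: list) -> dict:
    """Stripe a volume's chunklets round-robin across every drive and
    return how many chunklets land on each one."""
    chunklets = -(-volume_mb // CHUNKLET_MB)      # ceiling division
    counts = {d: 0 for d in drives}
    for i in range(chunklets):
        counts[drives[i % len(drives)]] += 1
    return counts

# A 10 GB volume over 8 physical drives: 40 chunklets, 5 per drive,
# so no single spindle carries a disproportionate share of the I/O.
print(lay_out_volume(10_240, [f"pd{i}" for i in range(8)]))
```

Contrast this with placing a whole volume (or a large SSD) behind one controller: that drive and controller become the hotspot Floyer describes.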
Posted at 12:15 PM in 3PAR, Adaptive Optimization, bloggers, cloud computing, clustered storage, Compellent, Dynamic Optimization, enterprise storage, flash, HDS, performance, SSD, storage management, tiering, utility computing, wide striping | Permalink | Comments (3) | TrackBack (0)
Tags: 3PAR, Adaptive, AO, Compellent, EMC, optimization, QoS gradient, SSD, storage, sub-volume, tiering
iKnerd (Greg Knieriemen) broke the story yesterday about Oracle/Sun breaking off their relationship with HDS. That got everybody twittering - with the majority of tweets from the storage universe suggesting Oracle had greedy motives. How unfair! So, the video below attempts to restore balance to the universe and brings Netapp, HP, cloud computing, 3PAR and Larry's toys into the discussion.
If you are a Sun storage customer and think it's time to change, you should check out 3PAR. We have a lot of ex-Sun server engineers who designed our storage cluster. I'm sure you'll appreciate the architecture of our InServ arrays, as well as our 50% capacity reduction guarantee. (Hey, Claus Mikkelson at HDS: I've had a comment in on your blog for a couple days and it hasn't been posted yet. I know things can slip through the cracks sometimes, so I thought I'd bring it to your attention.)
There's been a dysfunctional discussion of capacity guarantee programs over on Chuck's blog. There had been more sensible, independent discussions on the Storage Architect's blog, but that apparently wasn't good enough for EMC - a company without a capacity guarantee program of their own. Unfortunately, Chuck decided to shut down comments on his post, citing an overload of vendor hash - which could continue to go on as long as there is breath left in any bloggers from Netapp.
Chuck's post poses the question: do you want to buy from a doctor or a used car salesman? The suggestion he makes is that EMC is the doctor while 3PAR, HDS and Netapp are the used car salesmen.
The doctor picture he used was this one:
Which reminded me of Scrubs - but of course there are other doctor images he could have used:
In case you've been shunning the news, this is Dr. Conrad Murray.
The used car salesman picture was pretty funny:
I'd suggest Chuck is using classic used car sales tactics: "Who loves ya baby? The warranty them guys offer don't protect you from nuthin'. Your engine will blow up the day after the warranty expires. All they want is your munny!"
Still, seeing as how he was linking this image to 3PAR (in one way or another), I'd have hoped he would have used a picture like this instead:
You might not end up buying that car, but you should at least check it out.
Chuck characterizes capacity guarantee programs as not being in the customer's best interests. That would be true if 3PAR, HDS and Netapp wanted to increase the number of unhappy customers they have, but that is just CRAZY EMC thought diarrheaship:
Instead, I'm pretty sure we all want our customers to be very happy with their storage solution:
Yes, 3PAR's capacity guarantee is a way to attract customers, but it's much more than that - it's a way to back up our efficiency claims by putting our money where our mouths are:
RecoveryMonkey had a post recently about FUD and the ridiculous corner case claims storage vendors sometimes make about each other. 3PAR has been telling customers for years that our products are more efficient than theirs and we are now backing it up with our capacity guarantee. It's not FUD, it's not spin and its definitely not a corner case.
As Mike Riley points out on his Netapptips blog today, I was not exactly a fan of Netapp's guarantee program when it first came out, but now I am an unabashed supporter. Sometimes other vendors come up with excellent ideas.
Yesterday HDS announced their capacity guarantee program and although it depends heavily on the capacity differences between RAID 1 and RAID 5 (which is a little cheesy), they offer a contract and appear to be ready to back the program with more than a hand wave. That leaves HP, IBM and EMC (oh wait - and Oracle Sun) as the major storage players who aren't offering a capacity guarantee for customers making a technology refresh on storage.
The question is - are these programs just marketing ploys? Sure they are, just as any customer satisfaction guarantee for any product is a marketing ploy designed to hook customers - whether it's soap, kitchen knives, bass lures, vacuum cleaners, etc, but these ploys and the products behind them are targeted directly at data center operators that are tired of over-spending on storage. All are serious products from some of the biggest names in the storage industry and proven technology leaders. 3PAR's thin technologies (provisioning, conversion, persistence, reclamation and zero-detection) continue to lead our industry.
Depending on the applications and requirements of your data center some of these products and programs will be a better fit than others. All of them should save you money on capacity purchases, but there are other things to consider, such as required software and services. The details of each vendors' programs are different. That said, if you are offered a contract to reduce your storage capacity costs, that's pretty strong negotiating leverage with any other vendor - whether or not they offer a guarantee. And even if you are not ready to make a purchase now, you might want to know how these programs work so you can be better prepared when the time comes.
To find out more about 3PAR's capacity program, click the image below.
It's all over the news today. STEC's stock is getting hammered because their largest customer, EMC, is delaying orders. I feel for the folks at STEC; it's difficult being a supplier to industry behemoths like EMC. When realities don't meet expectations, things can crater in a hurry.
There is nothing wrong with flash SSD technology; it's simply that market demand hasn't ramped yet. Some of the EMC bloggers have chided me for my position on SSDs, but today's news pretty clearly vindicates what I've been saying all along:
The situation is changing gradually. Prices for SSDs are coming down, but there still needs to be a lot of work done by storage system vendors to flexibly use them - and this is going to continue to take time. EMC has said they will release their first version of FAST this year, but almost everybody is looking at future versions of FAST to get the sort of functionality they can actually use. Compellent's Data Progression software has the basic ability to move data on and off SSDs, but the number of SSDs that can be used per system is small and their sales are far too small to make up for what EMC has failed to deliver on. SSDs have a bright future; it's just going to take longer to flourish.
So where is 3PAR on SSDs? We think they will be an important technology in the future and we are working on integrating them into our products in a way that will allow customers to take advantage of their capabilities. In other words, we're going to have them, but we're not going to sell something that costs a lot of money if our customers can't leverage it.
In the meantime we continue to sell the most efficient storage systems that also happen to have the best implementation of wide striping on the planet, which delivers optimal mixed workload performance for our customers.
Posted at 01:23 PM in 3PAR, bloggers, clustered storage, Compellent, customers, EMC, enterprise storage, flash, green computing, HDS, mid range storage, performance, storage companies, thin provisioning, wide striping | Permalink | Comments (8) | TrackBack (0)
Tags: 3PAR, EFD, EMC, flash, SSDs, STEC
3PAR is introducing four powerful technologies today to help our customers crush their cost of owning and operating storage. Here is the rundown.
Posted at 08:15 AM in 3PAR, cloud computing, clustered storage, Countdown, customers, EMC, enterprise storage, HDS, mid range storage, partners, performance, storage companies, storage management, thin provisioning, video | Permalink | Comments (1) | TrackBack (0)
Tags: 3PAR, array, conversion, persistence, provisioning, reclamation, storage, Symantec, Thin, Veritas
Somebody asked me on Twitter the other day if our arrays allowed a pair of host HBAs to concurrently access a single LUN through two different controllers - telling me a competitor was saying we couldn't. Well, that was pretty whacked because our clustered architecture is designed specifically for that purpose. What they probably meant to say was that their own product couldn't do it. Anyway, it got the juices flowing for a new SWCSA vid!
If you are wondering how to get the most out of VMware's v-Sphere multipathing options, you need to make sure your storage array allows you to access individual LUNs through multiple controllers at the same time.
Why would you limit yourself to one controller per LUN, like Clariion, Netapp and HDS among others, when you can balance the load dynamically across multiple HBAs and controllers with 3PAR InServ arrays with mesh-active controllers? To be clear, this industry-leading clustered storage capability is designed into all our arrays, from our mid-range F Class arrays to our enterprise T Class arrays.
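To make the difference concrete, here is a toy sketch (not vendor code - the path names and I/O counts are made up) of why multiple active controller paths to the same LUN matter: round-robin multipathing spreads the load evenly across every path, while an active-passive design funnels everything through one controller.

```python
from itertools import cycle

def distribute_ios(paths, num_ios):
    """Count how many I/Os a round-robin policy sends down each path."""
    counts = {p: 0 for p in paths}
    rr = cycle(paths)
    for _ in range(num_ios):
        counts[next(rr)] += 1
    return counts

# Active-active: four paths through two controllers share the load evenly.
print(distribute_ios(["C0:P0", "C0:P1", "C1:P0", "C1:P1"], 1000))
# → each path carries 250 I/Os

# Active-passive (one controller owns the LUN): one path carries it all.
print(distribute_ios(["C0:P0"], 1000))
# → {'C0:P0': 1000}
```

The point of the sketch: a host-side multipathing policy can only balance across the paths the array actually allows to be active at once.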
Posted at 01:51 AM in 3PAR, clustered storage, EMC, enterprise storage, HDS, mid range storage, Netapp, performance, servers, SWCSA, virtualization, VMware, wide striping | Permalink | Comments (15) | TrackBack (0)
Tags: 3PAR, Clariion, EMC, enterprise, ESX 4.0, HDS, mid range, Netapp, storage, v-Sphere, VMware
(Author's note: a couple typos in the next to last paragraph have been corrected, which should make things clearer -mf 8-08-2009)
It's funny how things work in the stoblogosphere. The Anarchist goes after my reservationless post and steps in a pot hole of tchit and in the process brings blogging newcomer Enrico Signoretti - a Compellent reseller from Italy - out of the woodwork to howl about the stench.
A couple of things Anarchist and Enrico said brought to light that people sometimes get 3PAR and Compellent confused. Mostly these are financial people - but lately Anarchist seems to be looking for a new career as a financial analyst as the long slow decline of Symmetrix sets in. Anyway, I felt compelled to point out the differences between 3PAR and Compellent, which got Enrico's attention again, and like a true defender of the faith he posted on some things he felt I mis-spun. God, I love blogging. And for my money, it is much, much richer than twittering.
Of course, as things go, I recognized some pretty lame Compellent FUD that they have been feeding their resellers for some time now about 3PAR's hardware, which reminded me to post about it because it's actually a huge advantage for us. FWIW, I'm betting that no one from Compellent shows up to challenge it as they seem content to let their FUD-fed resellers do the dirty work for them. Clever, but in the end it's not a nice thing to do to guys like Enrico (having them carry your FUD for you). To be clear Enrico, you are OK with me; my gripe is not with you.
3PAR has been using Intel processors in its storage controllers since its inception. There are a lot of good things about this and it's heartwarming to see EMC finally move to Intel on V-Max after all these years. It takes them a long time to get caught up and of course, the target continues to move in front of them - despite their hollow declarations of parity. Oh yeah! And implementations don't matter either!
The proprietary stuff in Compellent's FUD has to do with our ASIC, which is a co-processor for storage functions. While Intel platforms have some great advantages (that we recognized 10 years ago), there are some shortcomings to using them - like segregating different types of I/O traffic - especially small database I/O versus long sequential streams. The ASIC in our system does this superbly, which is why our mixed application workload is so freaking good (along with our reservationless true wide striping.) Why would you want to buy a different array for each major application - that's a waste of resources.
But why would Compellent tell their resellers that 3PAR runs on proprietary hardware, when we run on Intel with a kick-ass co-processor to handle the hard stuff? It wouldn't be the usual disingenuous marketing tactics they like to accuse EMC of - would it?
The other thing our ASIC does is compute parity. Why on earth would you want to tie up your main controller processor doing this? Because you don't have the architecture to do it in a co-processor.
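For readers who haven't looked at it in a while, the parity arithmetic itself is simple - it's the volume of it, at wire speed, that makes it worth offloading. A minimal software sketch of RAID-5 style parity: the parity block is the byte-wise XOR of the data blocks, and any lost block can be rebuilt by XOR-ing the survivors. (This is an illustration of the math, not 3PAR's implementation.)

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks - the core of RAID-5 parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(data)

# Simulate losing data[1]: XOR the parity with the surviving blocks.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```

Doing this in a co-processor means the controller CPUs stay free for cache management and host I/O instead of grinding through XORs on every write.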
Which brings me to some of Hu Yoshida's recent posts. Hu has obviously been fed some things from the engineers in Odawara, Japan when he recently wrote about the trade-offs you have to make when you select your allocation size. Now I don't have much beef with Hu personally because he is a pretty good guy, but this post really only exposes the limitations of Hitachi's engineering team. Yes, there are trade-offs in every design decision, but there are an almost infinite number of ways to pay for them. 3PAR's design is still way ahead of the rest of the industry because the people who designed the 3PAR architecture designed high-end clustered server systems for Sun. I don't want to be too snide, but it's pretty clear that Hitachi does not have the same sort of perspective on clustered systems design.
Whew - rant over. I'm going for my first coffee of the day now.
Claus Mikkelsen at HDS wrote in his most recent blog post:
What?? "don't beat me up on this" Claus, what sort of lame disclaimer is that?? If you are going to write something stupid - and you know it when you are writing it, don't beg for mercy like a stoolie. The math checkers in the storage blogosphere will find out and then decide if they want to give you a heaping load of sh%& for it.
But I'm going to cut you slack because you are promoting wide striping, an excellent technology that doesn't get enough attention. So, thanks. We're really good at wide striping in 3PAR arrays and we appreciate the fact that HDS is trying to help people understand how well the technology can work - even if the explanation you provided is screwed up. While I'm at it, let me give you props for this line too:
A page from the 3PAR hymnal! - except you don't have to be using thin provisioning on a 3PAR array in order to get wide striping. Automated provisioning in 3PAR arrays provides wide striping that performs much better than can be achieved with manual tuning. Not only that, but the utilization levels with 3PAR arrays are very high - especially for mixed workloads, where customers often want to store data for different applications on different RAID levels or tiers.
With HBP (HDS Bloated Provisioning) - if application data is stored on different RAID levels you need different pools for each RAID level, which means you have to divide your disk resources between pools, which means your wide stripes aren't as wide and the performance advantages of wide striping are diminished. The other problem with using multiple pools is that utilization tends to be lower because disk resources are leveraged unevenly across different subsets of applications.
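The stripe-width point above is simple arithmetic. Here's a back-of-the-envelope sketch (the disk count and per-disk IOPS figure are assumptions picked for illustration, not measurements): with one wide stripe, every LUN can draw on all the spindles; carve the same disks into per-RAID-level pools and each LUN is boxed into its pool's fraction of them.

```python
# Assume 120 disks and a hypothetical ~150 random IOPS per disk.
disks, iops_per_disk = 120, 150

one_pool = disks * iops_per_disk            # spindles behind any LUN
three_pools = (disks // 3) * iops_per_disk  # LUN capped at one pool

print(one_pool)     # 18000 IOPS available behind every LUN
print(three_pools)  # 6000 IOPS ceiling for a pool-bound LUN
```

And since busy applications can't borrow idle spindles from another pool, the unevenness shows up as stranded capacity and performance - the utilization problem described above.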
Nigel Poulton, who is a pretty good guy for a dyed-in-the-wool HDS cheerleader, and I have been videoing back and forth about the differences between HBP and 3PAR's thin provisioning. I seem to recall him saying that you could set up an HDS array with a single pool. Really Nigel - would that be a serious recommended best practice? All data stored in one RAID level?
Tony Asaro, in his blog, quoted Nigel as saying:
Which was my point - Hitachi's engineers did it to make things easier for themselves as opposed to fitting the design to customer requirements. Which leads me to Nigel's metadata fixation, where he assumes that all metadata processes obey the same immutable rules of coding. They don't. Processes that are core design concepts are usually more efficient than bolt-on afterthoughts.