A lot has been written about HP's acquisition of 3PAR. I see it as a real game changer in our industry. This screencast explains why.
Posted at 09:25 PM in 3PAR, Adaptive Optimization, Autonomic, Efficient, energy, enterprise storage, green computing, HP, multi-tenant storage, performance, reservationless, storage management, storage services, thin provisioning, tiering, utility computing, video, virtualization | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, Autonomic, Converged Infrastructure, Efficient, HP, Multi-tenant
Steve Taylor, one of our SEs, created an animation that shows the multiple layers of virtualization behind the natively wide-striped data layout on a 3PAR storage server. It's the coolest thing I've seen since joining the company: nothing else so quickly summarizes how a 3PAR array virtualizes storage at every layer.
All the functions shown are performed automatically for the customer, with minimal administrative effort. 3PAR customers do not spend time planning the layout of special disk pools or preparing their disk drive configurations for certain functions. All they do is select the drive class and the RAID level for the volume they are creating, and the rest of the data layout work is done for them.
The demo shows how a RAID 5 3+1 virtual volume is created; what it does not show is how other volumes would be created using different RAID levels over the same set of resources. That would be a replay of this animation with a different RAID level applied - everything else would be the same.
Not only does this design provide massive throughput, it also responds very quickly when customers need to add volumes. It's like driving a freight train that can corner. Try doing that with your v-Max on anything but a test track.
Posted at 05:54 AM in 3PAR, clustered storage, enterprise storage, mid range storage, multi-tenant storage, performance, reservationless, storage management, tool talk, utility computing, video, virtualization, wide striping | Permalink | Comments (5) | TrackBack (0)
Tags: 3PAR, array, data layout, RAID, v-Max, virtual storage
Good question, Nigel. One of the biggest problems customers have is being able to fully utilize all their resources. It's not just that the ROI for storage tends to be underwhelming; more frustrating is the fact that their storage was provisioned in a way that makes resources inappropriate or unavailable for the pressing needs at hand.
Pools are used two ways: to reserve storage capacity for certain functions such as snapshots, or to create QoS levels for storage. The difficulty is that resources committed to QoS pools are practically locked into them and cannot easily be redistributed to other pools to meet changing demands. As storage systems age and fill with data, the various pools are consumed unevenly. For example, consider an array with six pools provisioned as follows:
Pool #1: SATA, 60TB (usable), RAID 6 - primary bulk storage: no performance requirements
Pool #2: SATA, 40TB (usable), RAID 10 - primary low-cost storage: capacity over performance
Pool #3: FC/SAS, 30TB (usable), RAID 5 - primary high-performance storage: performance over capacity
Pool #4: FC/SAS, 20TB (usable), RAID 10 - primary highest-performance storage
Pool #5: SATA, 50TB (usable), RAID 5 - secondary snapshot storage
Pool #6: FC/SAS, 60TB (usable), RAID 5 - secondary snapshot storage
The type of problem storage administrators constantly deal with occurs when one pool becomes maxed out, making its associated QoS unavailable. The admin then has three choices: 1) use a higher QoS, 2) use a lower QoS, or 3) add new resources to the pool, if possible. Using a higher QoS may create performance problems for higher-priority applications. Using a lower QoS means performance problems for the application itself. Adding resources might mean interrupting many other applications and taking them offline while workloads are shifted. Then you get the ripple effect of "remodeling the kitchen".
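To make the stranded-capacity problem above concrete, here is a toy model (my own invention, not any vendor's behavior) where the system as a whole has plenty of free space, but no single pool can satisfy a new request:

```python
# Toy illustration of why fixed pools strand capacity. Sizes in TB;
# pool names and numbers are hypothetical, loosely based on the
# six-pool example above.

pools = {  # pool name -> (capacity, used)
    "SATA-R6":  (60, 58),
    "SATA-R10": (40, 35),
    "FC-R5":    (30, 28),
    "FC-R10":   (20, 19),
}

def can_provision(pools, size):
    """A request must fit entirely inside one pool."""
    return any(cap - used >= size for cap, used in pools.values())

request = 8  # TB
total_free = sum(cap - used for cap, used in pools.values())
print(total_free)                     # 10 TB free across the system
print(can_provision(pools, request))  # False - no single pool has 8 TB free
```

A pool-less, wide-striped system would treat that 10TB as one shared resource, so the same 8TB request would simply succeed.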
When you consider the fact that some storage systems force users to establish separate pools for thin provisioned volumes and thick volumes, the number of pools in the system increases and the fragmentation of resources becomes a much bigger problem.
The best practice for managing storage pools is to do away with them entirely so they don't inhibit access to expensive resources and, more importantly, so they don't soak up so much administrative time and create increased risk of downtime and data loss.
Pools of disk drives are just a thin layer above bare disk drives where virtualization is concerned. Considering the transparent nature of system virtualization technology, it is almost incomprehensible that storage systems force customers to create these artificial constructs, imposing hard choices about something as basic as the layout of data on disks. Vendors with pool-based volume management like to distract customers by talking about whiz-bang functionality that doesn't address the core storage problem: the fact that their customers are still doing much of the work that the system ought to do for them.
The best practice then is to replace outdated storage designs with new designs that do not reserve storage resources in pools and do not use pools to create QoS levels. 3PAR InServ storage systems do not use pools and do not reserve capacity for different QoS levels.
3PAR InServ storage systems are used by many of the largest companies in the world, saving them an enormous amount of money by lowering capacity requirements and administrative overhead. For example, Priceline.com has been a 3PAR customer for many years, and they talk about how it has worked for them in this video on YouTube.
The InServ's data layout starts with the subdivision of all disk resources into 256MB "mini-disks" we call chunklets. All the higher-level RAID functions in a 3PAR system are applied at the chunklet level, not at the disk level. RAID in an InServ system is implemented as "micro-RAID" sets, which are then concatenated together and formed into virtual volumes that are exported as LUNs.
FWIW, the term virtual volume was used by 3PAR years before the system virtualization phenomenon became the market force that it is today. I only mention this to reinforce the fact that from its inception, the InServ internal storage architecture was designed to virtualize storage. It makes storage administration transparent by doing the low-level provisioning work on behalf of the storage administrator.
As new storage is provisioned in a 3PAR system, the data is spread across chunklets in small 16KB increments. All the disk drives of the same class (SATA vs. high performance) in the system are used by default, so that data is widely striped for optimal throughput and to avoid hotspots. While there actually are small amounts of capacity pre-allocated for use before storage is provisioned, this is done automatically by the system, in thin slices across all drives.
There are no pools, no constraints, no weeks-long planning efforts needed for storage installations and change management.
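For readers who like to see the idea in code, here is a rough sketch of the chunklet concept: carve every drive into fixed-size mini-disks, then build micro-RAID sets from chunklets on different drives so every volume ends up striped across all spindles. The selection logic is purely illustrative - it is not 3PAR's actual algorithm.

```python
# Sketch of chunklet-based wide striping. Drive counts and sizes are
# made-up toy values; 3PAR's real placement logic is more sophisticated.

CHUNKLET_MB = 256

def make_chunklets(drive_count, drive_gb):
    per_drive = (drive_gb * 1024) // CHUNKLET_MB
    # Each chunklet is identified by (drive, slot on that drive).
    return [(d, s) for d in range(drive_count) for s in range(per_drive)]

def micro_raid_sets(chunklets, set_size):
    """Group chunklets into micro-RAID sets, picking members from
    different drives so no set has two chunklets on the same spindle."""
    by_slot = {}
    for drive, slot in chunklets:
        by_slot.setdefault(slot, []).append((drive, slot))
    sets = []
    for slot_group in by_slot.values():
        for i in range(0, len(slot_group) - set_size + 1, set_size):
            sets.append(slot_group[i:i + set_size])
    return sets

chunks = make_chunklets(drive_count=16, drive_gb=1)  # 16 tiny toy drives
sets = micro_raid_sets(chunks, set_size=4)           # RAID 5 3+1 sets
drives_in_first = {d for d, _ in sets[0]}
print(len(drives_in_first))  # 4 - each set spans 4 distinct drives
```

Concatenating many such sets into a virtual volume is what spreads every volume's data - and its workload - across all drives in the class.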
If you are looking to dump your nagging storage administration problems, why would you ever go back to pool-based storage when that is the root cause of those problems?
Posted at 02:23 PM in 3PAR, customers, enterprise storage, mid range storage, multi-tenant storage, performance, reservationless, snapshots, storage management, utility computing, virtualization, wide striping | Permalink | Comments (12) | TrackBack (0)
Tags: 3PAR, cloud, enterprise, utility, virtual storage
(A quote from Dieter Rams - former Chief of Design at Braun)
It's hard to think of a company that has had more success with its product designs than Apple. When you look into how Apple did it, you find out about Jonathan Ive - Apple's lead industrial designer - and how his designs have followed the philosophy outlined by Dieter Rams, who was the lead designer for many years at Braun. When you compare photos of their designs, it is obvious that Ive has a strong appreciation for Rams' work.
What Ive and others have found compelling in Rams' work is nicely summarized in the design principles Rams used at Braun for many years.
The design goals for consumer products differ considerably from those for industrial products. For example, aesthetics and innovation tend to be less important than reliability and ROI - two characteristics that didn't even make it onto Rams' list of design principles. But there are also principles that certainly belong to both, such as making a product useful and unobtrusive. So, what should the 10 design principles be for information infrastructure products? Here's my list:
Producing this list was much more interesting than I thought it would be. For starters, it took me some time to settle into a customer's perspective - as opposed to my usual vendor-employee perspective (I have this wonderful hammer you need). Also, to clarify a point, the idea of management scalability involves the number of people who can effectively manage and control a system simultaneously. That might not be a concern for smaller IT systems, but it certainly is for large-scale systems.
What would you change? Would you reduce or expand this list?
Technology integration makes computing products much easier to use and significantly drives down the cost and effort of owning them. For instance, technologies such as WiFi that were recently beyond the grasp of most people are now inexpensively integrated into PCs and usable by almost anyone.
The trick with integration is understanding what variables should be exposed - or as my friend Rick Vanover likes to say - how many knobs there are to turn. End user and infrastructure provider requirements differ considerably when it comes to knobs. For instance, Apple computers are great end user machines because they lack knobs, but are not always loved by technology professionals for the same reason. Data center operators need products with knobs in order to accommodate all the cross-purposed requirements that stretch beyond a one-size-fits-all design.
So knobs are generally good - but like so many things - their usefulness depends on how effective they are and their station in FARLEY'S HIERARCHY OF KNOBS, which includes the following levels:
Suicide Knobs: knobs that delete data and make things blow up. A good example of a Suicide Knob is something that formats storage.
Prison Knobs: knobs that make changes that are very difficult or impossible to reverse. Many storage provisioning knobs fall into this category. Once you provision and reserve storage with most storage arrays today you are stuck with that decision until the array's EOL.
Faux Knobs: knobs that never seem to do anything, no matter how far you turn them. For features past and future, but not now.
Random Knobs: knobs that produce unanticipated results that can go unnoticed for years. These are the knobs that fuel the technical publishing industry.
Slippery Slope Knobs: knobs that start you down a path to ruin through a chain of system dependencies. These are the knobs you spend a lot of money to learn about in vendor classes.
Dumb Ass Knobs: knobs that do things, but not anything useful. Granted there is a LOT of subjectivity in making a call on a dumb ass knob - but we all agree they exist.
Honest Knobs: knobs that actually do something you need them to without having to plan for weeks on how to use them. Most knobs should fall into this category, but alas!
Magic Knobs: knobs that do things so useful it makes you wonder how anybody thought of a knob like that. Most of these knobs are actually Honest Knobs, but we are so accustomed to seeing Suicide, Prison, Faux, Random, Slippery Slope and Dumb Ass knobs that we are blown away by a truly great Honest Knob.
I'd like to say I was surprised yesterday when graphically-challenged Hitachi announced their intention to sell their own Unified Cloud Graphic (complete with Hitachi compute servers!). But it wasn't a big shock considering their marketing strategy of "just copy it".
I really don't know how they expect their graphic to compete with vBlock's graphic, with all the color, multiple font sizes and graphics within graphics.
What's missing from both stack graphics are the knobs that administrators use to get real work done. Yes, knobs tend to be part of the underlying details, but to anybody that actually uses a product, they are very important details. The detail that C-level executives need to understand is that the stack does not have nearly the automation that is being promised today, and that administrators will be doing a lot of work turning the knobs the stack provides. Again, it's not the number of knobs that matters as much as the quality of those knobs.
Some people have speculated that the vBlock was a knob-less invention that originated in the board rooms of the VCE companies. Some have even suggested that it was the fallout of a failed bid by Cisco to acquire EMC. I don't know if THAT's true, but there is some evidence that the engineering groups in the companies involved have been scrambling to put meat on the bone.
Maybe someday stacks will be the next big thing, but I don't see it playing out that way unless an awful lot changes in the underlying products that make up the stack. Here's my take on STACK WARS:
STACK WARS give everybody something to write about - me included, right now!
Bloggers that write about stacks have a chance of getting jobs with stack vendors. If you are out of a job, start a stack blog today and twitter your back-stack off!
Stacks are all about packaging. Stacks will be assembled and shipped together (presumably), which could make things easier if your goal is to streamline receiving.
Stack products are actually more services than products. However, if you ever want to make configuration changes in your stack, it might not be economically feasible (think gigantic FRUs). For example, there is not a lot of flexibility in vBlock's configurations.
Due to the limited configuration options, stack resources are not likely to be used very efficiently, and the economic return on the investment will lag. However, EMC customers are already accustomed to low storage utilization levels - so poor utilization might not be THAT big a deal. Definitely a weird way to win a point, but I'll concede it grudgingly.
The business advantage of integration should be much lower costs. However, the VCE companies all need to maintain their margins if they want to satisfy investors. It's not clear how they will be able to leverage the integration effort to reduce the cost of vBlock, but then again if STACK WARS turn into PRICING WARS for STACKS, things could get very interesting. IBM must be STACKING up something - after all Hitachi already beat them to the punch.
The C-level view of stacks is that they smooth out purchasing and operations expenses by providing a smaller number of Purchasing Knobs (that would be a Faux Knob). John Nash posted in his blog last week, "The Case for the vBlock":
What is interesting is that, usually, the higher up in an organization you are communicating the better the Vblock conversation goes. Remove the detailed technical questions and the value of the Vblock idea really shines. You get a known “product” from trusted sources. You get known costs today as well as known costs for future expansion. It greatly removes the risk from the organization with unknown infrastructure expenses.
There you have it, vBlocks will be sold from the top down by Cisco and EMC - companies that are good at selling from the top down, which will make it somewhat easier for the VCE companies to justify their price tag. But that won't make the price any easier to swallow.
As Nash wrote, "remove the detailed technical questions and the value of the Vblock idea really shines." That's like saying chapulines (fried grasshoppers) might appeal if Anthony Bourdain is talking about them on TV, but your own personal experience chewing and swallowing them might be different. I'm not talking about price here, I'm referring to the experience of running the vBlock. There is going to be a lot more involved than the knob-less graphics portray.
The weakest link in the vBlock chain today is EMC's contribution. There are far too many Prison (provisioning) and Slippery Slope Knobs in EMC storage. They aren't the only vendor with this problem, but they are the E in VCE. Provisioning storage with a v-Max is about the same as it was with a DMX - despite what EMC employees would have you believe.
Prison Knob provisioning creates a lot of problems for customers as storage ages and as demands shift. Once storage has been reserved for usage in an EMC system, it is pretty much bound to that purpose.
My advice is to buy the products with the most Magic Knobs and avoid those with the most Prison provisioning Knobs. If you have ever felt trapped by a storage configuration that you couldn't live with or afford, you know what I'm talking about. Magic Knobs are those that reduce the effort to manage and change storage, increase the efficiency of storage and provide the most versatility for all applications, workloads and multi-tenancy.
Posted at 01:18 PM in 3PAR, Cisco, cloud computing, EMC, enterprise storage, HDS, Hitachi, IBM, mid range storage, reservationless, storage companies, storage services, utility computing, virtualization, VMware | Permalink | Comments (3) | TrackBack (0)
Tags: Cisco, EMC, Hitachi, knobs, provisioning, Stack, storage, v-block, VCE, VMware, Wars
I wrote a post yesterday that showed IOPS calculations for a few different native wide striping configurations and I thought I'd add storage tiering to the mix today. Native wide striping places data from all volumes across all drives in the array (or of a certain drive class if you have mixed drives in your array) and randomizes workloads across all resources. The biggest advantages of native wide striping over traditional array designs that rely on multiple pools and workload isolation are:
Although native wide striping can handle complex, mixed workloads of transaction and sequential data access, applications that are either latency sensitive or single threaded can significantly increase their storage performance through the use of SSDs and storage tiering.
3PAR's software for storage tiering is called Adaptive Optimization, or AO. Based on administrator policies and algorithms keyed off QoS gradients, a 3PAR InServ array autonomically copies data from lower-IOPS disk drives onto high-IOPS SSDs.
The 3PAR tiering solution uses STEC MACHIOPS SSDs with a sustainable I/O rate of 10,000 IOPS. These devices have 50GB capacities and are installed as sets of eight SSDs across all mesh-active controllers in a 3PAR InServ array to balance the high-IOPS workload over all controllers as well as drives.
Below are a few calculations for maximum sustainable IOPS from InServ arrays that use both SATA drives and SSDs with AO. I used 5,000 IOPS as the metric for calculating SSD performance, which is a conservative estimate for STEC MACHIOPS performance. Actual performance from an AO-enabled array could be lower due to a number of variables, including the I/O activity levels that can be sustained by both applications and servers, policy settings made by storage administrators, the accuracy of the algorithms that select data for tiering, and the copy operations that populate and de-populate the SSDs.
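The promote-to-SSD decision can be sketched as a simple policy loop. This is my own toy version - the threshold, region granularity and function names are invented for illustration; 3PAR's real AO algorithms are policy- and QoS-driven and considerably more subtle.

```python
# Hypothetical tiering policy: regions whose measured access rate crosses
# a threshold are promoted to SSD, hottest first, capacity permitting.
# Cold regions implicitly stay on (or return to) spinning disk.

PROMOTE_IOPS = 200  # made-up per-region threshold

def retier(regions, ssd_capacity):
    """regions: dict of region id -> recent IOPS.
    Returns the set of region ids that should live on SSD."""
    hot = sorted((r for r, iops in regions.items() if iops >= PROMOTE_IOPS),
                 key=lambda r: -regions[r])
    return set(hot[:ssd_capacity])

regions = {"A": 900, "B": 40, "C": 350, "D": 15, "E": 500}
print(sorted(retier(regions, ssd_capacity=2)))  # ['A', 'E']
```

The point of the sketch: with only a couple of SSD-sized slots, the policy still captures the regions doing most of the I/O, which is why a small SSD tier can move overall IOPS so dramatically.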
Storage tiering is still in its early stages, and the industry is going to learn a great deal about this technology over the next several years. Performance models will certainly evolve as key variables are identified, which will almost certainly include server and application components.
Array 1: 160 SATA disk drives; 80% reads, no SSDs
Total IOPS of all drives in the array: 12,800
IOPS delivered to all servers w/RAID 5: 8,000
IOPS delivered to all servers w/RAID 10: 10,667
IOPS delivered to all servers w/RAID 6: 5,998
Array 2: 160 SATA disk drives; 80% reads - 8 SSDs; 50% reads
Total IOPS of all drives in the array: 52,800
IOPS delivered to all servers w/RAID6 (SATA) & RAID5 (FC): 21,998
Array 3: 160 SATA disk drives; 80% reads - 32 SSDs; 50% reads
Total IOPS of all drives in the array: 172,800
IOPS delivered to all servers w/RAID6 (SATA) & RAID5 (FC): 69,998
Array 4: 480 SATA disk drives; 80% reads - 32 SSDs; 50% reads
Total IOPS of all drives in the array: 198,400
IOPS delivered to all servers w/RAID6 (SATA) & RAID5 (FC): 81,994
Even a relatively small amount of SSD storage can boost performance approximately four times, as shown by Arrays #1 and #2 above, where eight SSDs totaling 400GB were added to an all-SATA configuration. It's also interesting to note the performance differences between Arrays #3 and #4 above: although the number of SATA drives tripled, IOPS performance increased only about 17%.
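For anyone who wants to reproduce the arithmetic, here's the standard delivered-IOPS model with RAID write penalties. The per-drive figures (80 IOPS for SATA, 5,000 for the SSDs) are the post's assumptions, not measurements, and I've left RAID 6 out because its write penalty varies by implementation (the figures above imply a penalty somewhat higher than the textbook 6).

```python
# Delivered (host-visible) IOPS once the RAID write penalty is applied:
# backend_iops / (read_fraction + write_fraction * penalty)

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}  # back-end I/Os per host write

def delivered_iops(drives, iops_per_drive, read_fraction, raid_level):
    backend = drives * iops_per_drive
    penalty = RAID_WRITE_PENALTY[raid_level]
    return backend / (read_fraction + (1 - read_fraction) * penalty)

# Array 1: 160 SATA drives, 80% reads
print(round(delivered_iops(160, 80, 0.80, "RAID5")))   # 8000
print(round(delivered_iops(160, 80, 0.80, "RAID10")))  # 10667

# SSD tier in Array 2: 8 SSDs at 5,000 IOPS, 50% reads, RAID 5
print(round(delivered_iops(8, 5000, 0.50, "RAID5")))   # 16000
```

The SSD-tier figure of 16,000, added to the SATA RAID 6 figure of 5,998, is how Array 2 arrives at 21,998 delivered IOPS.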
Scott Lowe had an interesting post on Friday about creating metaLUNs that has attracted some lively comments - which is where the best parts are.
Given today's computing power, there is no need for stupid software contraptions like metaLUNs; they are a side effect of a dull architecture that puts the onus on the administrator to figure out what to do. It's easy - just figure it out!
3PAR InServ arrays automatically wide stripe data over all the drives in the system, without headaches or "weird science". It's very simple for administrators and the performance is excellent - without requiring an army of professional services people, the way things tend to work with arrays from the Ever Mounting Costs company.
And now for something completely different: Human Tetris!
Another way to understand what 3PAR announced yesterday is to think of the life cycle of storage. 3PAR clustered arrays now offer new methods for removing wasted capacity from storage and recycling it so you can make use of it again for more pressing needs.
First, let's start with Thin Provisioning, which is a way to match the consumption of storage resources with its demand. Non-thin provisioned storage makes you guess during installation what you will need storage resources for; once capacity has been allocated to a volume, it is no longer available to be used by another application. Thin provisioning continues to work as the volume ages, but its effectiveness depends on how efficiently the file system (or database) chooses which blocks to write to.
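The allocate-on-first-write idea behind thin provisioning fits in a few lines. This is a minimal sketch with an invented class and page size bookkeeping, not 3PAR's implementation:

```python
# Minimal thin-provisioning sketch: the volume exports a large virtual
# size but only consumes physical pages as blocks are actually written.

PAGE_KB = 16  # matches the small 16KB increments 3PAR writes in

class ThinVolume:
    def __init__(self, virtual_gb):
        self.virtual_kb = virtual_gb * 1024 * 1024
        self.pages = {}  # page index -> data, allocated lazily on write

    def write(self, offset_kb, data):
        self.pages[offset_kb // PAGE_KB] = data

    def physical_kb(self):
        return len(self.pages) * PAGE_KB

vol = ThinVolume(virtual_gb=100)  # host sees 100 GB
vol.write(0, b"boot sector")
vol.write(4096, b"a database page")
print(vol.physical_kb())  # 32 KB actually consumed, not 100 GB
```

The gap between what the host thinks it owns and what the array actually allocated is exactly the capacity the new thin technologies below work to preserve.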
There - that's the old stuff.
First is a technology 3PAR calls Thin Conversion. This is a way to make a block copy of a volume or system that wasn't thinly provisioned and put it on a 3PAR array so that it is thinly provisioned. In other words, it removes all the unutilized space that was never written to - which can be a lot. This is a one-shot deal that gives customers an immediate payback by not having to buy as much storage when they are refreshing their storage technology.
Second is Thin Copy Reclamation. It sounds a bit boring but storage admins familiar with the problem of trapping storage capacity in snapshot or remote copy pools know that this is actually a very big deal. It can make the difference between continuing to work with what you've got and being forced to buy more storage than you planned for. This is something that storage administrators are likely to use repeatedly, over the life cycle of the volume or array, to maintain the balance of resources in their 3PAR arrays.
Third is Thin Persistence. Like Thin Copy Reclamation, Thin Persistence is something that storage admins will use periodically through the array's life cycle to remove wasted storage. File system tools are used in conjunction with a 3PAR array to identify blocks that were previously allocated but have since been deleted by the file system. Thin persistence is designed to recycle these blocks.
Fourth is Thin Reclamation for Veritas Storage Foundation. This is another full-life-cycle technology, developed by 3PAR and Symantec, where a storage protocol command is used to transfer deleted-block information from a Veritas Storage Foundation file system to a 3PAR InServ array. Symantec is most definitely leading the way with thin-aware file system technology.
So there you have it - three new ways to keep storage thin and one new way to get onto a storage platform that offers you these important storage life cycle options.
When your existing storage approaches the end of its affordable life, you probably wonder if there might be a better way to get the job done. It would be great if you could spend a lot less on storage capacity and not have to do so much planning for your RAID groups, semi-wide stripes, service levels, snapshots and remote copy space. It would be even better if you could avoid the surprising and painful hidden professional services fees that make storage seem more like a luxury item than an infrastructure technology. Best yet, how would it feel to actually decrease your storage footprint during your next technology refresh, as opposed to watching it grow larger than you care to think about? Think you'll get those kinds of options from the Ever Mounting Costs company?
Like most I/O performance demonstrations, the video below is a bit reminiscent of watching paint dry, but if you understand what's going on, it's pretty cool. 3PAR's Thin Conversion is a block data transfer function where empty blocks coming into a thin volume on a 3PAR InServ array are identified as such and not written to the receiving thinly provisioned volume. The result is a significantly faster data transfer and an efficient, thin volume with high storage utilization when the transfer completes.
In this video, block data from a thick (or fat) source volume is being migrated to a new thin target volume using 3PAR's Thin Conversion technology. The demo shows reads from the source in red and writes to the target in blue so you can see the impact of 3PAR's hardware-based zero detection technology. During the demo, zero-detection is turned off and writes spike to a much higher level and then it is turned on again to show writes returning to zero.
Because Thin Conversion only writes actual data on the target and does not copy empty space, it's insanely fast, and because it's implemented in hardware, there isn't any performance degradation in the controller. Also notice that the amount of storage needed on the target volume will be much less than on the source volume, and storage utilization for the array will be much higher.
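The zero-detect logic itself is conceptually simple. Here's a hedged sketch of a thick-to-thin migration - in 3PAR's case this runs in the ASIC at wire speed; this plain-Python version just illustrates the filtering idea, with made-up block sizes and data:

```python
# Zero-detect during a thick-to-thin copy: blocks that are entirely zero
# (never-written space on the thick source) are skipped, so the thin
# target only consumes space for real data.

BLOCK = 4096

def thin_convert(source_blocks):
    """Yield (index, block) only for blocks containing actual data."""
    zero = bytes(BLOCK)
    for i, block in enumerate(source_blocks):
        if block != zero:
            yield i, block

# A "thick" volume: 2 data blocks amid 98 never-written (zero) blocks.
thick = [bytes(BLOCK)] * 100
thick[3] = b"x" * BLOCK
thick[42] = b"y" * BLOCK

written = list(thin_convert(thick))
print(len(written))  # 2 blocks copied instead of 100
```

Skipping 98% of the blocks is why the transfer is so fast and why the target volume ends up so much smaller than the source.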
Thin Conversion is a great way to pare storage costs when you are considering a technology refresh. Instead of buying more new storage than you want and increasing the storage footprint in your data center, you can buy less storage, reducing your storage footprint. Thin Conversion works with both T-Class (Enterprise) and F-Class (Mid range) arrays from 3PAR and is independent of the types of drives in the system. Check it out, you might surprise yourself at how little you actually need to spend.
Posted at 10:09 AM in 3PAR, clustered storage, customers, enterprise storage, mid range storage, performance, remote copy, reservationless, SQL Server, storage management, thin provisioning, video, wide striping | Permalink | Comments (5) | TrackBack (0)
Tags: 3PAR, conversion, provisioning, technology refresh, thin, zero detect
Steve Duplessie clarified his recent 'pot-stirring' post on cloud storage economics, with this new post, which was thoughtful and well written.
He brings up the history lessons of the over-hyped, over-financed bubble and demise of storage service providers (SSPs) during the early years of this decade, arguing that their failure stemmed from an inability to deliver sufficient value. I'd add that the technology available at that time was insufficient to allow the SSPs to deliver much other than expensive bulk storage. The evolution of virtualization technologies since then has made a big difference, changing the cost equations for service providers and allowing them to focus on application specialization and service delivery.
Steve argues that adding application value is the key, which I don't disagree with, but storage applications have always been tricky to define - just as storage technology is always very tricky to develop. I contend that value-add in storage comes from things that often seem very mundane, such as being bulletproof & flexible - a perspective I took in this YouTube video on 3PARTV. Just in time delivery of applications and the corresponding "right-sized", immediately available storage to support those applications makes a huge difference to service providers and their customers. Reservationless, incremental storage provisioning does not sound like much of an application, but it certainly is the way everybody wants storage delivered to their applications.
The ability to right-size storage - and the flexibility to make changes to storage at a moment's notice - is what drives 3PAR's technology development. It's why we have been so successful selling our products to cloud service providers. In the weeks to come, look for more service-oriented innovations from us.
3PAR's Remote Copy software broke all the rules for the cost and effort required to make remote data replication work. Huge savings in cost, time and capacity.
Everybody has an opinion these days about Cloud Computing and Cloud Storage. 3P tells "Store Heads" in his new video below that they ought to start investigating how to do it. The equipment used by cloud service providers matters a great deal to their success. Technologies such as reservationless thin provisioning, full array wide striping and dynamic, load-balancing, active/active controllers make the cloud experience much more satisfying. 3PAR partners with Cloud Service providers through a program called Cloud Agile to bring 3PAR array technology to cloud computing and storage customers.
Posted at 03:05 AM in 3P, 3PAR, cloud computing, clustered storage, enterprise storage, mid range storage, partners, reservationless, snapshots, storage management, thin provisioning, utility computing, video, virtualization, VMware, wide striping | Permalink | Comments (4) | TrackBack (0)
Tags: 3PAR, array, cloud agile, cloud computing, cloud storage, wide striping
(Author's note: a couple typos in the next to last paragraph have been corrected, which should make things clearer -mf 8-08-2009)
It's funny how things work in the stoblogosphere. The Anarchist goes after my reservationless post and steps in a pot hole of tchit and in the process brings blogging newcomer Enrico Signoretti - a Compellent reseller from Italy - out of the woodwork to howl about the stench.
A couple of things Anarchist and Enrico said brought to light that people sometimes get 3PAR and Compellent confused. Mostly these are financial people - but lately Anarchist seems to be looking for a new career as a financial analyst as the long, slow decline of Symmetrix sets in. Anyway, I felt compelled to point out the differences between 3PAR and Compellent, which got Enrico's attention again, and like a true defender of the faith he posted on some things he felt I mis-spun. God, I love blogging. And for my money, it is much, much richer than twittering.
Of course, as things go, I recognized some pretty lame Compellent FUD that they have been feeding their resellers for some time now about 3PAR's hardware, which reminded me to post about it because it's actually a huge advantage for us. FWIW, I'm betting that no one from Compellent shows up to challenge it, as they seem content to let their FUD-fed resellers do the dirty work for them. Clever, but in the end it's not a nice thing to do to guys like Enrico (having them carry your FUD for you). To be clear, Enrico, you are OK with me; my gripe is not with you.
3PAR has been using Intel processors in its storage controllers from its inception. There are a lot of good things about this, and it's heartwarming to see EMC finally move to Intel on V-Max after all these years. It takes them a long time to get caught up and, of course, the target continues to move in front of them - despite their hollow declarations of parity. Oh yeah! And implementations don't matter either!
The proprietary stuff in Compellent's FUD has to do with our ASIC, which is a co-processor for storage functions. While Intel platforms have some great advantages (that we recognized 10 years ago), there are some shortcomings to using them - like segregating different types of I/O traffic, especially small database I/O versus long sequential streams. The ASIC in our system does this superbly, which is why our mixed-application workload performance is so freaking good (along with our reservationless true wide striping). Why would you want to buy a different array for each major application? That's a waste of resources.
But why would Compellent tell their resellers that 3PAR runs on proprietary hardware, when we run on Intel with a kick-ass co-processor to handle the hard stuff? It wouldn't be the usual disingenuous marketing tactics they like to accuse EMC of - would it?
The other thing our ASIC does is compute parity. Why on earth would you want to tie up your main controller processor doing this? Because you don't have the architecture to do it in a co-processor.
Which brings me to some of Hu Yoshida's recent posts. Hu has obviously been fed some things from the engineers in Odawara, Japan, as when he recently wrote about the trade-offs you have to make when you select your allocation size. Now I don't have much beef with Hu personally because he is a pretty good guy, but this post really only exposes the limitations of Hitachi's engineering team. Yes, there are trade-offs in every design decision, but there are an almost infinite number of ways to pay for them. 3PAR's design is still way ahead of the rest of the industry because the people who designed the 3PAR architecture had designed high-end cluster server systems for Sun. I don't want to be too snide, but it's pretty clear that Hitachi does not have the same sort of perspective on clustered systems design.
Whew - rant over. I'm going for my first coffee of the day now.