A lot has been written about HP's acquisition of 3PAR. I see it as a real game changer in our industry. This screencast explains why.
Posted at 09:25 PM in 3PAR, Adaptive Optimization, Autonomic, Efficient, energy, enterprise storage, green computing, HP, multi-tenant storage, performance, reservationless, storage management, storage services, thin provisioning, tiering, utility computing, video, virtualization | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, Autonomic, Converged Infrastructure, Efficient, HP, Multi-tenant
Posted at 08:26 AM in 3PAR, cloud computing, customers, enterprise storage, mid range storage, multi-tenant storage, partners, performance, SAN, storage management, storage services, thin provisioning, tiering, utility computing, video, Virtual Domains, VMware | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, hotspot, virtualization, VMware, VMworld
The twitterverse is busy again today with discussions surrounding EMC's use of spambots to generate views of videos they are trying to make viral. If you are interested in seeing what is being said, check out these people's tweets and you'll be off on a trip down a dark hole.
Here are a couple cartoons I made about it last week from my new cartoon, Ineption:
Netapp's Val Bercovici suggests this viral spamming marks the end of innocence in social media, but innocence exited the social media stage long ago.
I'm much more concerned about how large companies like EMC can use social media to suggest product and customer relationships that stretch the truth well beyond the impressions that a reader might take away from reading suggestive blog posts from respected corporate voices. As "unofficial company statements" that are more influential than press releases, social media pieces can distort things in a way that more-accountable corporate marketing materials are not allowed to.
Last week, Chad Sakac and Chuck Hollis published blog posts that pointed to an EMC white paper about details of a VMAX implementation at Terremark, an excellent 3PAR customer. Readers of these posts would probably think that VMAX was being used as the storage behind Terremark's multi-tenant, Enterprise Cloud service offering. That would be stretching things more than just a little bit. I commented on both blogs and the responses to my comments were interesting. I guess I feel a little kinder towards Chad as a result.
It is possible that somewhere in the world, a VMAX is being used by Terremark. One would expect Terremark to be looking at various storage platforms as a matter of course; it only makes sense for them. After all, VMware made a significant investment in Terremark last year and we all know who owns VMware. There are certain favors that EMC can ask that vendors such as 3PAR can't. But Terremark also has to operate Enterprise Cloud in their major US data centers every day, and the storage they use for that is not in a test lab - it's production - and it is 3PAR storage.
And it's not for lack of trying on EMC's part. Last November, when VCE was announced, Terremark was discussed as a featured customer in both Chad's and Chuck's blogs. That was OK; I understand the excitement that surrounds a big announcement. But nine months later, suggesting that this announcement had given birth to a major production environment for a service it is not supporting stuck in my craw.
Here is a video I made at VMworld last year with Jason Lochhead, CTO of Hosting Solutions at Terremark, where he talks about vCloud Express and Enterprise Cloud. Very cool offerings and definitely on the leading edge of VMware-based service offerings. It's not a viral video, but it has a lot more to say about what people care about than the videos EMC has been chasing with spambots.
How is it that some people possess the gift of foresight and the ability to predict the future? Some say they have dreams or visions, some extrapolate from experience and logic, while others make predictions hoping to fulfill an agenda. Then there is the element of public exposure. Is the prediction public, and do they use their real name or hide behind an alias?
Nicholas Carr was very public and very open when he wrote his breakthrough book "Does IT Matter?". In it, he stated that there are no sustainable advantages to be gained by a company through the implementation of information technology. He argued that any short term gain can be matched by competitors in a relatively short period of time with lower capital investments - effectively punishing companies for innovating. He recognized the necessity of having IT in order to stay competitive, but found it difficult to justify being an early adopter of technology.
Since Carr published his book, we've seen a lot of change in IT markets, including the rapid deployments of virtual systems technology and the expansion of hosted, utility computing and all things "cloud." But the biggest changes have resulted from the global financial crisis, forcing companies to reduce non-essential costs significantly - especially IT costs.
Unfortunately, not every technology implementation intended to reduce costs has been successful. And that's one of the things that makes the information technology business so fascinating and perplexing - intelligent people with deep expertise in technology fail to predict the ways that things can go awry and what the cost of their shortsightedness will be.
The rich history of failed IT projects is exactly why there is so much FUD spread by the competitors in our industry - FUD gets customers thinking about the consequences of their purchase decisions and all the possible problems that can result from an error in judgment. It also contributes to the interest in the machinations of our industry and the "war games" that are played out in traditional and social media. Whether we are predicting changes to the industry through mergers and acquisitions or the development of new business models, it all flows into the river of FUD at purchase time.
With the abundance of FUD, one naturally develops an aesthetic for the stuff to cull the weak from the strong. For example, a piece of weak FUD recently appeared on Silicon Angle titled "Why Netapp Must Seek Acquisition", written by the poser "secretcto". The author starts with the suggestion "let's take a look at the market cap of each of these players" and then neglects to make any comparisons. It goes downhill from there, reaching its lowest point when the article refers to Nicholas Carr as Daniel Carr and then fumbles the transition from whether IT matters to whether it matters to cloud service providers. The tipping point for Carr's logic is that to service providers, IT absolutely does matter, because operating data centers is their core business.
By contrast, you barely notice good FUD: it has a smooth logical flow and subtly builds to a persuasive conclusion based on a key point that usually has its origins in a subjective opinion or bias. Chris Mellor's recent piece about the Storage Array Killing Fields qualifies as good FUD. Chris doesn't have an axe to grind, but he is a journalist and therefore has the responsibility of stirring the pot. It's a well-written piece that uses an analogy comparing the selection of equipment for data centers with the selection of components used in an automobile.
The problem is that automobile manufacturing is a poor analogy for running a data center. When a car rolls off the manufacturing line, it is shipped to a dealer and sold to a customer, who drives it away. There is nothing about the experience of making, selling or buying a car that is even closely related to the constant, ongoing data processing services that are provided by a utility or cloud service provider.
A better analogy is running a restaurant. Restaurants succeed or fail based on the quality of their customer service and that's why chefs like Thomas Keller strive to maintain consistent, excellent quality every minute of every day they are open.
Should we expect the recipe for success in hosting and cloud services to be any different? This recent article in Information Age states that 71% of the 450 CIOs in a KPMG survey want to improve the price to quality ratio of their outsourcing contracts. The dynamics of the business relationship between CIOs and their utility/cloud service providers are going to be the same. Service providers with the best reputations for customer service are going to thrive. Those that don't measure up will fail.
Vendors of consolidated stack solutions of servers, storage and software are trying to convince customers that the "All-in-one" stack solution is the safest way to proceed during the transition period while cloud computing is emerging. They would have you believe that the biggest risk in operating a data center is in ordering the products and getting everything installed initially. But considering that utility/cloud service providers will be measured on how quickly and accurately they respond to the needs of their customers, the lion's share of the risk will come well after the initial installation, during the life of the service engagement.
The weakness of the All-in-one approach is that it does nothing to address the dicier aspects of owning, operating and changing an IT infrastructure after it is up and running. In many cases the stack vendor's answer to change management will be the same as it is today - time-consuming and expensive professional services. There are definitely utility/cloud service providers that will want this sort of service, but many would prefer to do it themselves at much less cost. That's what you do when your primary business is running a data center.
A talented chef can find a way to prepare a gourmet meal on an Electrochef All In One Kitchen, but they would never decide to run their business with one. They are going to select best-of-breed appliances and equipment that best fit their needs and enable them to prepare quality dishes in a quality fashion.
So the question for the utility/cloud data center operator then is - "what is best of breed equipment for my business?"
The classic clash between Best-of-breed and All-in-one solution pits cost against complexity. Best-of-breed technology has traditionally been more customizable to fit a wider range of requirements and therefore has been more complicated and expensive to operate. In contrast, All-in-one technology has traditionally been cheaper, limited to a smaller set of functions and easier to operate.
Unfortunately, neither stereotype works very well for the utility/cloud service provider. They need fully functional products that are also easier and quicker to operate. Fast, accurate change management and operator efficiency are the key elements for utility/cloud infrastructure products. 3PAR's Best-of-breed storage products have these characteristics as well as being extremely space-efficient and high-performing. Customers appreciate the amount of time they do not spend managing their 3PAR storage while they are getting the job done. When a new order comes into a 3PAR kitchen, the system is ready to go right away - including tasks that take a long time to set up on other storage, such as Remote Copy.
And what about the All-in-one stacks in the market? Surprisingly, unlike traditional All-in-one solutions, they are more expensive to install and operate. Change management is complex, which leads to relatively poor operator efficiency and the engagement of professional services, which does not necessarily speed up the process. The traditional benefits that All-in-one solutions typically provide are not part of these stack solutions.
The predictions for stacks taking over the market are all wrong. Sure, there will be stack solutions sold and it will take time for all of this to sort itself out as it always does when an industry is going through major, fundamental changes. The most important changes that will occur in the years to come will be driven by the service demands placed on utility/cloud service providers. Customers of utility/cloud services want their money's worth and the best service providers will do what it takes to give it to them. Stacks add no value in that equation.
Posted at 08:37 AM in 3PAR, bloggers, cloud computing, customers, EMC, enterprise storage, Hitachi, HP, multi-tenant storage, performance, remote copy, servers, storage companies, storage services, utility computing, virtualization, wide striping | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, arrays, best-of-breed, EMC, HDS, HP, stack, storage, vblock
I caught up with Mark Cravotta from Datapipe recently at a 3PAR event in Las Vegas. He's a high energy person who is having a lot of fun growing Datapipe's hosting and cloud computing services as well as helping to manage its expansion around the globe.
Datapipe is a 3PAR Cloud Agile partner and customer who uses our products throughout their line for primary multi-tenant storage, data snapshots, remote replication and all aspects of disaster recovery.
In addition to being customer-driven, Datapipe is also committed to being a leader in green utility computing by reducing the carbon footprint of its data centers through power purchases from green power producer Constellation NewEnergy.
Posted at 01:46 PM in 3PAR, backup, cloud computing, customers, energy, enterprise storage, green computing, multi-tenant storage, partners, remote copy, snapshots, storage services, utility computing, video | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, cloud agile, cloud computing, Datapipe, green
What would my friends at EMC do without my parody of their announcement?
On the day a product is announced it's pretty hard to make a serious analysis - that usually takes more time - but in the case of EMC, there are usually a couple things you can bank on.
The second is an obvious consequence of the first.
Otherwise, I think Storage Federation is a very big deal for our industry and it's great that EMC is bringing attention to it. People interested in reading more about this might want to check out Stuiesav's blog and the article in The Register.
Our belief at 3PAR is that Storage Federation only makes sense if your storage is already autonomic (self-managing) and efficient. Otherwise, the costs will continue to exceed expectations. It's one thing to introduce technology for technology's sake - it's something else completely to put technology together in a way that reduces complexity. The proof will be in the professional services bills that customers will pay to install and maintain VPLEX environments.
A couple weeks ago, one of the major storage vendors had two major problems to resolve after one of their arrays suffered a firmware bug-induced failure at one of their cloud (email) service provider customers. They had to:
Meanwhile, their service provider customer had four major problems to resolve:
A vendor employee tried to address their public relations problem this way in his blog:
"OK, I'll take the blame for this -- sort of. We pride ourselves in putting a lot of thought into our customer designs. I'd argue that we're really, really good at it as well.
But not everyone is 100% sure of how their application will grow over time -- unfortunately, we're not psychics. And, let's be honest, not everyone necessarily wants to pay for redundancy we like to put into our designs.
We don't always get to directly engage all the time, either -- with products such as the (blanked out), most of this stuff moves through the channel. Somebody calls up one of our partners, says that they want to buy one of our products, and one gets sold -- and a lot of product gets sold that way."
I understand the desire to explain how messes become messy, but I'm not sure why he felt the need to speculate that his company's business partners or their customer's budget were key elements of the problem. That is tantamount to saying, "All of our (blanked out) customers could have the same thing happen to them too." Anybody who has ever been close to one of these melt-downs knows there are many variables involved - including vendors underbidding each other and shaving elements from their bids in order to win the business.
From a distance, it looks like the vendor's response to the customer was good, although there apparently were some issues with failure notification from the array when the event occurred. I wouldn't call these sorts of things "Perfect Storms", but there are unfortunate scenarios where multiple things go awry. All vendors have these sorts of bad days, which serve as painful learning experiences. Unfortunately for customers, it's one of the ways vendors improve their customer support processes.
The customer also wrote in his blog, explaining the situation to their customers:
"Our SAN vendor analyzed the system logs for the event and determined that the service processor failure occurred due to a unique bug in the specific version of firmware on the system. Our vendor performed an emergency upgrade. The newer version of firmware includes a fix for the bug. We are taking additional corrective actions to make certain that there is enough spare capacity on the SAN. This will assure it performs without performance degradation in the event of a single hardware failure."
The reparation sounds reasonable, but it's not what I would call best of breed either. I'll explain why in the remainder of this post.
The explanation the service provider gave to their customers was only half correct. Yes, the failure in one controller was due to a firmware bug - and yes, all vendors find out about some of them at customer sites - but the inability of the surviving controller to handle the workload was another matter altogether.
The major defect of all dual controller designs for service provider applications is the uselessness of write cache when operating in degraded mode on a single controller.
When a dual controller array has a controller failure, all traffic is failed over to the surviving controller. However, this controller can't afford to place writes in cache, because if it also fails, any un-flushed writes in cache would be lost - making the recovery process all the more painful. As a result, the throughput of the controller degrades significantly because writes now take several orders of magnitude longer to process: each write must be completed at the physical disk level instead of in fast cache memory. When you consider the sort of read/write ratios involved with an email application (heavy writes), it's not surprising to hear that it took 32 hours for the system to get caught up. I suspect that if the surviving controller had been able to use write cache, the customer might have experienced some service level problems, but nothing nearly as bad as they suffered.
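To put rough numbers on that slowdown, here is an illustrative sketch. The latency figures are assumptions for the sake of the example, not measurements from the incident:

```python
# Illustrative model of write throughput collapse when a surviving
# controller must disable write caching. All latencies are assumed values.

cached_write_latency_s = 0.0001  # ~100 microseconds to acknowledge from mirrored cache
disk_write_latency_s = 0.008     # ~8 ms to complete each write at the physical disk

# Sustainable rate for a single serialized write stream in each mode:
cached_writes_per_sec = 1 / cached_write_latency_s   # ~10,000 writes/s
uncached_writes_per_sec = 1 / disk_write_latency_s   # ~125 writes/s

slowdown = cached_writes_per_sec / uncached_writes_per_sec
print(f"Per-stream write slowdown without cache: {slowdown:.0f}x")
```

With a write-heavy email workload running near capacity, a slowdown of this scale is how a backlog ends up taking more than a day to drain.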
Write performance during array component failures is an important point that many customers give insufficient weight to when making their purchases. Public service providers certainly need to understand this. The exact same scenario - controller failure and subsequent drop in service levels - could certainly happen to a traditional data center customer, but the ramifications of this scenario are not as ugly as they are for a multi-tenant public service provider.
This case is a perfect example of how an older architecture is incapable of meeting the requirements of the new cloud service business model. If you are a cloud service provider reading this and wondering if you might have a similar exposure to a controller failure (including 3PAR customers with dual-controller arrays), my advice is to review what you have and start thinking about what you should expect if you have a controller failure and how you might want to deal with it on both a short-term and long-term basis. Best of breed cloud storage should not include dual controller arrays.
One of the identified corrective actions is having "enough spare capacity on the SAN", which in this case involves installing a second array. Without knowing the inside scoop, it looks like the idea is to split the workload across the two arrays so that if a controller failure occurs in either array, the performance drop won't be as noticeable. The array that doesn't suffer the failure will keep working as expected and the array that has the failure will only have half the load to deal with.
There are two primary problems with this "fix":
You are always going to have performance degradation of some sort when you can't use write caching, unless you are only reading data - which isn't the case here. It is flat out wrong to assume that a performance problem will not occur. Regardless, with the new two-array SAN, whichever system has the controller failure should be able to get caught up much faster than the 32 hours this customer had to wait. Of course, the customer's capacity and I/O load will almost certainly increase over time, and as that happens, the strategy of splitting the load between two arrays loses its effectiveness.
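A quick bit of arithmetic shows why the two-array split loses effectiveness as load grows. The capacity figures below are assumed for illustration, not taken from the actual systems:

```python
# Back-of-the-envelope: splitting load across two dual-controller arrays only
# masks a controller failure while the per-array load stays under what a
# single cacheless controller can sustain. Figures are assumptions.

degraded_capacity = 20.0   # IOPS units one array sustains on a single controller, no write cache

for total_load in (30, 80, 150):
    per_array = total_load / 2   # load split evenly across the two arrays
    status = "falls behind" if per_array > degraded_capacity else "keeps up"
    print(f"total load {total_load}: failed array carries {per_array}, {status}")
```

Once the customer's load grows past twice the degraded capacity, the array that loses a controller falls behind again and the backlog scenario repeats.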
Along with adding the controllers, they are also certainly adding disk drives, and some notion of what "reasonable" utilization limits should be for them. The problem with limiting utilization as a best practice is that it puts the stamp of approval on inefficiency - not only for capacity utilization but also for the power and cooling required to support all those underutilized drives. Most legacy arrays have built-in inefficiencies in the way data is laid out on disks, making it virtually impossible to achieve uniform utilization across all disk resources. The result is uneven consumption of disk capacity, as well as uneven I/O service levels among different disk groups, which is another variable in how much performance degrades following a controller failure in a dual controller array.
Finally, the customer now has two arrays to manage, including multipath connections, SAN zones, and all other aspects of the configuration, which all contribute down the road to change management complexities. The result is a net drag on administrator effort and an increased TCO.
A true best of breed solution would address the root-cause deficiency in the array's design, without creating additional management and cost burdens to the customer. Obviously, more than two controllers are needed. But how many controllers does a cloud service provider need in an array? The answer is at least three. Why? Because when a single controller fails, there can still be two surviving controllers working together, mirroring their cache contents, and performing fast writes to cache memory. That said, controllers are usually packaged in pairs for redundancy purposes, which means that the most likely configurations will have four controllers.
If you compare a single quad controller array with two dual controller arrays there are some key advantages that immediately jump out:
The next question is: "Is there a suitable quad controller array that the customer could have used instead of the two dual controller arrays they have?" Yes, 3PAR's F400 and T400 arrays are both quad controller arrays. The disk drives in these arrays can be either SATA or FC, or a mix of both types if the customer wanted to implement tiering. Product information on the F400 can be found here, and the T400 here.
However, simply putting four controllers in an array does not necessarily guarantee that they will be able to sustain write caching if one of them fails. The array must have the ability to remap and re-mirror the write cache contents of all four controllers to the surviving controllers following the loss of a controller. It's an interesting geometric sort of problem: there are four controllers, each with its own cache plus cache that is mirrored from the other controllers in the array. All cache contents, including mirrors, need to be distributed evenly across all controllers to avoid congestion and load imbalances. All cache content, including mirrors, needs to be accounted for within the array so that if a controller fails, the other controllers will be able to identify all the surviving original and mirrored copies of data. For cache data that has lost either a primary or mirrored copy, a second (new) copy needs to be made. Finally, the amount of data in cache may need to be re-leveled (decreased) to fit into the degraded cache capacity (3 controllers instead of 4).
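The geometry of that re-mirroring step can be sketched in a few lines of code. The controller names, page counts, and round-robin placement policy below are all hypothetical - this is not 3PAR's actual algorithm, just an illustration of the bookkeeping involved:

```python
import itertools
from collections import Counter

def place_pages(controllers, n_pages):
    """Give each cache page a (primary, mirror) pair on distinct controllers,
    rotating through all ordered pairs so load spreads evenly."""
    pairs = list(itertools.permutations(controllers, 2))
    return {page: pairs[page % len(pairs)] for page in range(n_pages)}

def fail_controller(layout, survivors):
    """Re-mirror every page that lost a copy: keep the surviving copy and
    place a new mirror on a different surviving controller, round-robin."""
    healed = {}
    i = 0
    for page, (pri, mir) in layout.items():
        if pri in survivors and mir in survivors:
            healed[page] = (pri, mir)  # both copies survived, nothing to do
        else:
            keep = pri if pri in survivors else mir
            rotation = survivors[i:] + survivors[:i]
            partner = next(c for c in rotation if c != keep)
            healed[page] = (keep, partner)
            i = (i + 1) % len(survivors)
    return healed

layout = place_pages(["C0", "C1", "C2", "C3"], 1200)
healed = fail_controller(layout, ["C1", "C2", "C3"])

# Every page still has two copies, on two distinct surviving controllers.
assert all(a != b for a, b in healed.values())
print(Counter(c for pair in healed.values() for c in pair))
```

The point of the sketch is the invariant at the end: after the failure, every page must again have two copies on distinct controllers, and the new mirrors must be spread across the survivors rather than piled onto one of them.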
I made a 9-minute video last year describing how Persistent Cache works. Here it is again. Thanks for watching.
Posted at 04:40 AM in 3P, 3PAR, bloggers, cloud computing, clustered storage, Compellent, customers, Dell, EMC, enterprise storage, Exchange, green computing, HDS, HP, IBM, mid range storage, multi-tenant storage, performance, SAN, storage companies, storage management, storage services, utility computing, video | Permalink | Comments (6) | TrackBack (0)
Tags: 3PAR, best of breed, cloud, cloud storage, failures, performance, storage, write cache
Technology integration makes computing products much easier to use and significantly drives down the cost and effort of owning them. For instance, technologies such as WiFi that were recently beyond the grasp of most people are now inexpensively integrated into PCs and usable by almost anyone.
The trick with integration is understanding what variables should be exposed - or as my friend Rick Vanover likes to say - how many knobs there are to turn. End user and infrastructure provider requirements differ considerably when it comes to knobs. For instance, Apple computers are great end user machines because they lack knobs, but are not always loved by technology professionals for the same reason. Data center operators need products with knobs in order to accommodate all the cross-purposed requirements that stretch beyond a one-size-fits-all design.
So knobs are generally good - but like so many things - their usefulness depends on how effective they are and their station in FARLEY'S HIERARCHY OF KNOBS, which includes the following levels:
Suicide Knobs: knobs that delete data and make things blow up. A good example of a Suicide Knob is something that formats storage.
Prison Knobs: knobs that make changes that are very difficult or impossible to reverse. Many storage provisioning knobs fall into this category. Once you provision and reserve storage with most storage arrays today you are stuck with that decision until the array's EOL.
Faux Knobs: knobs that never seem to do anything, no matter how far you turn them. For features past and future, but not now.
Random Knobs: knobs that produce unanticipated results that can go unnoticed for years. These are the knobs that fuel the technical publishing industry.
Slippery Slope Knobs: knobs that start you down a path to ruin through a chain of system dependencies. These are the knobs you spend a lot of money to learn about in vendor classes.
Dumb Ass Knobs: knobs that do things, but not anything useful. Granted there is a LOT of subjectivity in making a call on a dumb ass knob - but we all agree they exist.
Honest Knobs: knobs that actually do something you need them to without having to plan for weeks on how to use them. Most knobs should fall into this category, but alas!
Magic Knobs: knobs that do things so useful it makes you wonder how anybody thought of a knob like that. Most of these knobs are actually Honest Knobs, but we are so accustomed to seeing Suicide, Prison, Faux, Random, Slippery Slope and Dumb Ass knobs that we are blown away by a truly great Honest Knob.
I'd like to say I was surprised yesterday when graphically-challenged Hitachi announced their intention to sell their own Unified Cloud Graphic (complete with Hitachi compute servers!). But it wasn't a big shock considering their marketing strategy of "just copy it".
I really don't know how they expect their graphic to compete with vBlock's graphic, with all the color, multiple font sizes and graphics within graphics.
What's missing from both stack graphics are the knobs that administrators use to get real work done. Yes, knobs tend to be part of the underlying details, but to anybody who actually uses a product, they are very important details. The detail that C-level executives need to understand is that the stack does not have nearly the automation that is being promised today and that administrators will be doing a lot of work, turning the knobs that the stack provides. Again, it's not the number of knobs that matters, as much as it is the quality of those knobs.
Some people have speculated that the vBlock was a knob-less invention that originated in the board rooms of the VCE companies. Some have even suggested that it was the fallout after a failed acquisition bid by Cisco to acquire EMC. I don't know if THAT's true, but there is some evidence that the engineering groups in the companies involved have been scrambling to put meat on the bone.
Maybe someday stacks will be the next big thing, but I don't see it playing out that way unless an awful lot changes in the underlying products that make up the stack. Here's my take on STACK WARS:
STACK WARS give everybody something to write about - me included, right now!
Bloggers that write about stacks have a chance of getting jobs with stack vendors. If you are out of a job, start a stack blog today and twitter your back-stack off!
Stacks are all about packaging. Stacks will be assembled and shipped together (presumably), which could make things easier if your goal is to streamline receiving.
Stack products are actually more services than products. However, if you ever want to make configuration changes in your stack, it might not be economically feasible (think gigantic FRUs). For example, there is not a lot of flexibility in vBlock's configurations.
Due to the limited configuration options, stack resources are not likely to be used very efficiently and the economic return on the investment will lag. However, EMC customers are already accustomed to low storage utilization levels - so poor utilization might not be THAT big a deal. Definitely a weird way to win a point, but I'll concede it grudgingly.
The business advantage of integration should be much lower costs. However, the VCE companies all need to maintain their margins if they want to satisfy investors. It's not clear how they will be able to leverage the integration effort to reduce the cost of vBlock, but then again if STACK WARS turn into PRICING WARS for STACKS, things could get very interesting. IBM must be STACKING up something - after all Hitachi already beat them to the punch.
The C-level view of stacks is that they smooth out purchasing and operations expenses by providing a smaller number of Purchasing Knobs (that would be a Faux Knob). John Nash posted in his blog last week, "The Case for the vBlock":
What is interesting is that, usually, the higher up in an organization you are communicating the better the Vblock conversation goes. Remove the detailed technical questions and the value of the Vblock idea really shines. You get a known “product” from trusted sources. You get known costs today as well as known costs for future expansion. It greatly removes the risk from the organization with unknown infrastructure expenses.
There you have it, vBlocks will be sold from the top down by Cisco and EMC - companies that are good at selling from the top down, which will make it somewhat easier for the VCE companies to justify their price tag. But that won't make the price any easier to swallow.
As Nash wrote, "remove the detailed technical questions and the value of the Vblock idea really shines." That's like saying chapulines (fried grasshoppers) might appeal if Anthony Bourdain is talking about them on TV, but your own personal experience chewing and swallowing them might be different. I'm not talking about price here, I'm referring to the experience of running the vBlock. There is going to be a lot more involved than the knob-less graphics portray.
The weakest link in the vBlock chain today is EMC's contribution. There are far too many Prison (provisioning) and Slippery Slope Knobs in EMC storage. They aren't the only vendor with this problem, but they are the E in VCE. Provisioning storage with a v-Max is about the same as it was with a DMX - despite what EMC employees would have you believe.
Prison Knob provisioning creates a lot of problems for customers as storage ages and as demands shift. Once storage has been reserved for usage in an EMC system, it is pretty much bound to that purpose.
My advice is to buy the products with the most Magic Knobs and avoid those with the most Prison provisioning Knobs. If you have ever felt trapped by a storage configuration that you couldn't live with or afford, you know what I'm talking about. Magic Knobs are those that reduce the effort to manage and change storage, increase the efficiency of storage and provide the most versatility for all applications, workloads and multi-tenancy.
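To make the difference between Prison Knob provisioning and the reservationless alternative concrete, here is a toy model of the two approaches. This is an illustrative sketch only - the class names and sizes are made up, and real arrays allocate in fixed-size chunks rather than whole volumes:

```python
# Illustrative model of reserved (thick) vs. reservationless (thin)
# provisioning. All names and sizes here are hypothetical.

class ThickPool:
    """Capacity is reserved at volume-creation time (a Prison Knob)."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.reserved_gb = 0

    def create_volume(self, size_gb):
        if self.reserved_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted by reservations")
        self.reserved_gb += size_gb  # bound to this volume, even if unused


class ThinPool:
    """Capacity is drawn from the pool only as data is actually written."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def create_volume(self, size_gb):
        return {"exported_gb": size_gb}  # no up-front reservation

    def write(self, gb):
        if self.used_gb + gb > self.capacity_gb:
            raise RuntimeError("pool physically full")
        self.used_gb += gb


# Volumes of 100 GB that each hold only 10 GB of real data:
thick = ThickPool(500)
thin = ThinPool(500)
for _ in range(5):
    thick.create_volume(100)   # 500 GB now reserved; pool is "full"
for _ in range(10):
    thin.create_volume(100)    # exports 1000 GB against 500 GB physical
    thin.write(10)             # only 100 GB actually consumed
```

The thick pool is exhausted after five mostly empty volumes, while the thin pool exports twice its physical capacity and has consumed only a fifth of it - which is why reserved capacity feels like a prison once demands shift.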
Posted at 01:18 PM in 3PAR, Cisco, cloud computing, EMC, enterprise storage, HDS, Hitachi, IBM, mid range storage, reservationless, storage companies, storage services, utility computing, virtualization, VMware | Permalink | Comments (3) | TrackBack (0)
Tags: Cisco, EMC, Hitachi, knobs, provisioning, Stack, storage, v-block, VCE, VMware, Wars
InfoSmack podcasters Greg Knieriemen and Yours Truly interview Greg Kleiman (NetApp), Eran Farajun (Asigra), Brad Rooke (JumpPoint) and Daniel Milburn (Consonus) about the current status of cloud storage, the impact CDMI will have, and their thoughts on how this industry will evolve over the next several years. Recorded at SNW 2010 in Orlando.
The show has three parts: 1) intro, early-stage apps, and backup; 2) CDMI; 3) a look into the future and the competitive landscape in cloud storage services.
Ultraspeed describes their hosting business as providing enterprise-level solutions at mid-market prices. In this interview, Jordan Gross and Michael Shanks of Ultraspeed talk about how they do it: from running diskless, DC-powered servers, to implementing virtual systems with 3PAR virtual storage and operating data centers in London and Amsterdam.
Posted at 10:52 AM in 3PAR, cloud computing, customers, enterprise storage, mid range storage, partners, remote copy, snapshots, storage management, storage services, utility computing, video, virtualization | Permalink | Comments (4) | TrackBack (0)
Tags: 3PAR, Cloud-Agile, customers, partners, Ultraspeed
SYSDBA is our excellent 3PAR business partner in South Africa. Some of their team came to 3PAR headquarters recently and I had the chance to record this interview with Nick de Beer, a technology expert with a wide range of experience including Oracle, storage and virtualization. In this interview, Nick talks briefly about 3PAR's Recovery Manager software, which allows customers to recover data from many point-in-time snapshots.
All systems need contingencies - all the better if they can be built in. Unfortunately, failure-mode and disaster operations usually mean performance is sacrificed. In today's world - especially if you are providing cloud services - degraded storage performance means you are going to have serious business issues to contend with too.
Today 3PAR announced three new capabilities for our arrays that maintain high performance levels for our storage systems should you experience a failure or outage. The full story is here:
Posted at 08:22 AM in 3PAR, cloud computing, clustered storage, Countdown, enterprise storage, mid range storage, performance, storage management, storage services, utility computing, video | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, clustered storage, Countdown, disaster recovery, persistent cache, raid mp, synchronous long distance
In today’s Register, Chris Mellor wrote an intriguing piece about the trend in cloud computing and the wave of industry consolidation that is occurring. He posits that the two are linked by a broader consolidation wherein IT equipment purchases will be made by a much smaller number of service-provider customers that sell services to enterprise customers, as opposed to those enterprise customers running their data centers and making their own purchases today. Mr. Mellor suggests that this shift from enterprise to cloud computing is the driver for industry consolidation and writes that service providers will “want to buy integrated and very efficient data centre kit.” In other words, service providers will be inclined to buy vertically integrated solutions from a small set of vendors.
But that leaves the question of how service providers will differentiate their services. A major component of a service provider's business value is the selection, integration, and organization of best-in-class infrastructure, which allows them to create unique services, features and cost advantages. Given this, why would they want to limit themselves to single-vendor solutions that are constrained by their vendors' business models and weaknesses? If, as the cloud computing trend suggests, service providers gain increased purchasing clout, they are more likely to demand that IT vendors provide greater interoperability and standards support in order to allow them the greatest choice in mixing best-in-class elements of the IT stack (storage, servers, hypervisors, OSes, applications, etc.).
Vendor consolidation may very well be motivated by the desire of large vendors to vertically integrate their businesses to take advantage of future cloud-driven customer consolidation. Whether or not this strategy eventually claims an advantage will only be decided years from now.
Senior Director of Business Strategy
How much Kool-Aid can be consumed in the blogosphere? An infinite amount, apparently.
Now that I see Steve Todd, EMC Intrapreneur, jumping on the circle jerk bandwagon of EMC's love for federated systems and the vaporware ideal of their having a unified platform - I have a very simple litmus test to offer:
EMC customers understand the frustration of living with diversification much more than the joys of federation.
Do Clariions work with DMXes or v-Maxes for such basic bread-and-butter data protection applications as remote copy? No. Do administrators have to learn different skills to deploy and manage EMC mid range and enterprise arrays? Yes. You can talk about federated futures all you want, but the past and present tell the real story, and saying it's a simple matter of programming to bridge the gap is - well - saying it's a simple matter of programming.
This is not EMC bashing, this is telling it like it really is - the way customers feel the pain.
If you want examples of storage vendors who have already been delivering on the unified platform concept, check out 3PAR and EqualLogic.
All 3PAR InServ arrays run the same software. Remote copy operations work between Enterprise (T class) and Mid Range (F class) storage clusters. Admins of 3PAR systems learn a single set of skills that are transferable between mid range and enterprise clusters. We do not mix F class and T class nodes in the same cluster - you either build an F class cluster or a T class cluster. 3PAR clusters are something we call Mesh Active, which refers to the uniform distribution of I/O activity and cache resources across all nodes in the cluster.
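The idea behind Mesh Active can be illustrated with a toy sketch: stripe a volume's blocks round-robin across every controller node, so no single node becomes a hot spot. This is a conceptual illustration only, not 3PAR's actual placement algorithm:

```python
# Toy illustration of "mesh-active" striping: every node in the
# cluster services I/O for every volume, so load spreads uniformly.
# Not 3PAR's real algorithm - just the concept.
from collections import Counter

def owner_node(block: int, nodes: int) -> int:
    """Assign a volume's blocks to controller nodes round-robin."""
    return block % nodes

# Distribute 10,000 blocks of one volume across a 4-node cluster:
load = Counter(owner_node(b, 4) for b in range(10_000))
# Every node serves an equal share of the volume's I/O, unlike an
# active/passive design where one node owns the whole volume.
```

With this kind of placement each of the four nodes ends up owning exactly 2,500 blocks, which is the "uniform distribution of I/O activity and cache resources" the Mesh Active term refers to.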
All EqualLogic arrays run the same software base. Remote copy operations work across all of their systems (I'm not sure about their new entry systems - but it would be a licensing restriction and not a code difference if these systems do not). Admins of EqualLogic arrays also learn a single set of skills that are transferable across all their products. You can mix and match different EqualLogic products in a single group. A group is not a cluster and does not have the same performance and availability characteristics as a cluster.
If you want to talk about federated systems in the cloud, look at what 3PAR cloud computing customers are doing. Our Cloud Agile partners have started rolling out ASSURED and SECURED services which demonstrate platform federation with remote copy and virtual private domains for storage in the cloud.
Posted at 02:03 AM in 3PAR, bloggers, cloud computing, clustered storage, customers, EMC, enterprise storage, mid range storage, performance, remote copy, storage companies, storage management, storage services, utility computing | Permalink | Comments (22) | TrackBack (0)
Tags: 3PAR, cluster, EMC, federated, remote copy, storage