A lot has been written about HP's acquisition of 3PAR. I see it as a real game changer in our industry. This screencast explains why.
Posted at 09:25 PM in 3PAR, Adaptive Optimization, Autonomic, Efficient, energy, enterprise storage, green computing, HP, multi-tenant storage, performance, reservationless, storage management, storage services, thin provisioning, tiering, utility computing, video, virtualization | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, Autonomic, Converged Infrastructure, Efficient, HP, Multi-tenant
As 3PAR is integrated into HP, there is a lot of new stuff for us to figure out. One of the most important concepts at HP is Converged Infrastructure (CI). The basic idea of CI is to maximize a customer's investment in technology by consolidating resources in common, modular building blocks. 3PAR customers are already accustomed to the idea from our InServ storage systems, but CI goes far beyond 3PAR's storage vision by including server and network technologies. It's a big idea with huge implications for product engineering, manufacturing, maintenance and support - and it raises the importance of software in data center solutions.
Posted at 08:26 AM in 3PAR, cloud computing, customers, enterprise storage, mid range storage, multi-tenant storage, partners, performance, SAN, storage management, storage services, thin provisioning, tiering, utility computing, video, Virtual Domains, VMware | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, hotspot, virtualization, VMware, VMworld
To a lot of people, especially those who are unfamiliar with the storage industry, one of the obvious questions is "Who are these people and where did they come from?"
The answer is that the company was formed by a group of server-cluster engineers from Sun and has been around for over a decade developing and selling large scale storage products designed for something that used to be called "utility computing" seven years ago, but today is just called "the cloud".
We've been very successful with our cloud strategy and count 7 of the top 10 IaaS (infrastructure-as-a-service) providers as customers. 3PAR products work very hard in the background for a lot of household-name companies. Most people don't know or care.
However, cloud industry vendors know 3PAR because they are also very heavily involved with those same customers, competing with their own products. They see our storage systems in those large data centers and our customers tell them that they need to make sure they work with us. There's nothing unusual about that sort of thing, but we definitely are a player.
Here's what we do very well:
The thing that we didn't completely understand at 3PAR was how quickly the onset of the virtualized data center was going to tilt the storage world in our direction. 3PAR storage systems are based on a highly advanced, granular storage architecture. It's not always the easiest thing for people to understand because it is so different from any other vendor's architecture. However, people familiar with virtualized server features have a much easier time understanding how our technology works. There is nothing like a terrific, relevant analogy for explaining how your different widget works.
3PAR is a relatively small company, competing with much larger companies who use the benefits of their size, global reach and service organizations against us every day in sales opportunities. It hasn't been easy, but we've continued to grow our business in a very hotly contested arena where our competitors like to position us as the "small, new company." Storage purchases in this market are high stakes and careers can be made or lost on the right decision. We certainly don't win all the deals we are in, but we very seldom lose on technical merit. Usually it's because we are lesser known or because we can't match the service offerings of our larger competitors.
It appears that some of those variables will be changing for us relatively soon.
Posted at 09:08 AM in 3P, cloud computing, clustered storage, customers, Dell, enterprise storage, HP, mid range storage, multi-tenant storage, partners, storage companies, storage management, thin provisioning, utility computing | Permalink | Comments (4) | TrackBack (0)
Tags: 3PAR, cloud computing, Dell, HP, multi-tenant, storage, utilization
Yesterday I posted a demo of our new, updated InForm Management Console 4.1 and so I thought today I'd re-post a two-part video showing our VMware vCenter plug-in that was made by 3PAR architect Maneesh Jain. Make sure to pay attention to the Recovery Manager section of the demo that shows how easy it is to recover VMs, directories and files.
Virtualized storage from 3PAR flexibly adapts to mid-range up through enterprise VMware environments because our single software architecture runs the same code on both platforms. The skills used to manage one platform are preserved when switching to the other.
Posted at 06:00 AM in 3PAR, clustered storage, enterprise storage, mid range storage, multi-tenant storage, Recovery Manager, snapshots, storage management, tool talk, utility computing, video, virtualization, VMware | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, DR, integration, Recovery Manager, vCenter, VMware
3PAR designs its systems to provide huge time savings for storage administrators. Below is a video of our new InForm Management Console (IMC) 4.1, announced today, showing how incredibly easy it is to configure and operate 3PAR's Remote Copy application.
Things that the demo didn't show that are advantages of 3PAR's single software architecture are:
Here is a brief description of all the software functions available through IMC 4.1. As you can see, it's a pretty comprehensive list of features:
Posted at 06:00 AM in 3PAR, Adaptive Optimization, clustered storage, Dynamic Optimization, enterprise storage, mid range storage, multi-tenant storage, remote copy, snapshots, storage management, thin provisioning, tool talk, utility computing, video, Virtual Domains, wide striping | Permalink | Comments (3) | TrackBack (0)
Tags: 3PAR, console, GUI, IMC 4.1, remote copy, replication
I've been going slightly nuts since yesterday after Cisco announced the CIUS. It looks like the perfect tablet for the sorts of things I really want a personal screen device for - communicating with other people. This review by Erik Parker of InfoWorld is a pretty good read and it summarizes key advantages and disadvantages of CIUS. If it can make the technology of video conferencing transparent to end users, it will be a big deal.
But the hidden story to this is that Cisco is also making a play to get into the corporate desktop/laptop business with the CIUS. The idea that companies could deploy these with VDI is definitely part of Cisco's grand plan for world domination. Whether or not the CIUS could replace laptop or desktop computers remains to be seen, but there are reasons to think they could eventually if the stars align.
The arguments for VDI are strong, but there are still a lot of hurdles to overcome, such as back end storage performance to support boot storms. By the way, people looking at large VDI implementations might want to look at 3PAR's wide striping storage systems to get the sort of affordable IOPS needed to support large VDI environments. My previous post illustrates our design for massive throughput, which supports a huge number of IOPS without needing SSDs or requiring storage administrators to create special disk pools to isolate the VDI workload from other applications running in the same storage array.
Steve Taylor, one of our SEs, created an animation that shows the multiple layers of virtualization that create the natively wide-striped data layout on a 3PAR storage server. I think it's the coolest thing I've seen since joining the company that quickly summarizes the multiple layers of virtualization in a 3PAR array.
All the functions shown are automatically done for the customer with minimal administrative effort. 3PAR customers do not spend time planning the layout of special disk pools or preparing their disk drive configurations for certain functions. All they do is select the drive class and the RAID level for the volume they are creating, and the rest of the data layout work is done for them.
The demo shows how a RAID 5 3+1 virtual volume is created; what it does not show is the way other volumes would be created using different RAID levels over the same set of resources. It would be a replay of this, but with a different RAID level applied - everything else would be the same.
Not only does this design provide massive throughput, it also responds very quickly when customers need to add volumes. It's like driving a freight train that can corner. Try doing that with your v-Max on anything but a test track.
Posted at 05:54 AM in 3PAR, clustered storage, enterprise storage, mid range storage, multi-tenant storage, performance, reservationless, storage management, tool talk, utility computing, video, virtualization, wide striping | Permalink | Comments (5) | TrackBack (0)
Tags: 3PAR, array, data layout, RAID, v-Max, virtual storage
How is it that some people possess the gift of foresight and the ability to predict the future? Some say they have dreams or visions, some extrapolate from experience and logic, while others make predictions hoping to fulfill an agenda. Then there is the element of public exposure. Is the prediction public, and do they use their real name or hide behind an alias?
Nicholas Carr was very public and very open when he wrote his breakthrough book "Does IT Matter?". In it, he stated that there are no sustainable advantages to be gained by a company through the implementation of information technology. He argued that any short-term gain can be matched by competitors in a relatively short period of time with lower capital investments - effectively punishing companies for innovating. He recognizes the necessity of having IT in order to stay competitive, but finds it difficult to justify being an early adopter of technology.
Since Carr published his book, we've seen a lot of change in IT markets, including the rapid deployments of virtual systems technology and the expansion of hosted, utility computing and all things "cloud." But the biggest changes have resulted from the global financial crisis, forcing companies to reduce non-essential costs significantly - especially IT costs.
Unfortunately, not every technology implementation intended to reduce costs has been successful. And that's one of the things that makes the information technology business so fascinating and perplexing - intelligent people with deep expertise in technology fail to predict the ways that things can go awry and what the cost of their shortsightedness will be.
The rich history of failed IT projects is exactly why there is so much FUD spread by the competitors in our industry - FUD gets customers thinking about the consequences of their purchase decisions and all the possible problems that can result from an error in judgment. It also contributes to the interest in the machinations of our industry and the "war games" that are played out in traditional and social media. Whether we are predicting changes to the industry through mergers and acquisitions or the development of new business models, it all flows into the river of FUD at purchase time.
With the abundance of FUD, one naturally develops an aesthetic for the stuff, to cull the weak from the strong. For example, a piece of weak FUD recently appeared online on Silicon Angle titled "Why Netapp Must Seek Acquisition", written by the poser "secretcto". The author starts with the suggestion "let's take a look at the market cap of each of these players" and then neglects to make any comparisons. It goes downhill from there, reaching its lowest point when the article referred to Nicholas Carr as Daniel Carr and then fumbled the transition to the question of whether IT matters to cloud service providers. The tipping point for Carr's logic is that to service providers, IT absolutely does matter, because operating data centers is their core business.
By contrast, you barely notice good FUD: it has a smooth logical flow and subtly builds to a persuasive conclusion based on a key point that usually has its origins in a subjective opinion or bias. A decent example is Chris Mellor's recent piece about the Storage Array Killing Fields. Chris doesn't have an axe to grind, but he is a journalist and therefore has the responsibility of stirring the pot. It's a well-written piece built on an analogy that compares the selection of equipment for data centers with the selection of components used in an automobile.
The problem is that automobile manufacturing is a poor analogy for running a data center. When a car rolls off the manufacturing line, it is shipped to a dealer and sold to a customer who drives it away. There is nothing about the experience of making, selling or buying a car that is even closely related to the constant, ongoing data processing services that are provided by a utility or cloud service provider.
A better analogy is running a restaurant. Restaurants succeed or fail based on the quality of their customer service and that's why chefs like Thomas Keller strive to maintain consistent, excellent quality every minute of every day they are open.
Should we expect the recipe for success in hosting and cloud services to be any different? This recent article in Information Age states that 71% of the 450 CIOs in a KPMG survey want to improve the price to quality ratio of their outsourcing contracts. The dynamics of the business relationship between CIOs and their utility/cloud service providers are going to be the same. Service providers with the best reputations for customer service are going to thrive. Those that don't measure up will fail.
Vendors of consolidated stack solutions of servers, storage and software are trying to convince customers that the "All-in-one" stack solution is the safest way to proceed during the transition period while cloud computing is emerging. They would have you believe that the biggest risk in operating a data center is in ordering the products and getting everything installed initially. Considering that utility/cloud service providers will be measured on how quickly and accurately they respond to their customers' needs, the lion's share of the risk will come well after the initial installation, during the life of the service engagement.
The weakness of the All-in-one approach is that it does nothing to address the dicier aspects of owning, operating and changing an IT infrastructure after it is up and running. In many cases the stack vendor's answer to change management will be the same as it is today - time-consuming and expensive professional services. There are definitely utility/cloud service providers that will want this sort of service, but many would prefer to do it themselves at much less cost. That's what you do when your primary business is running a data center.
A talented chef can find a way to prepare a gourmet meal on an Electrochef All In One Kitchen, but they would never choose to run their business on one. They are going to select best-of-breed appliances and equipment that best fit their needs and enable them to prepare quality dishes in a quality fashion.
So the question for the utility/cloud data center operator then is - "what is best of breed equipment for my business?"
The classic clash between Best-of-breed and All-in-one solution pits cost against complexity. Best-of-breed technology has traditionally been more customizable to fit a wider range of requirements and therefore has been more complicated and expensive to operate. In contrast, All-in-one technology has traditionally been cheaper, limited to a smaller set of functions and easier to operate.
Unfortunately, neither stereotype works very well for the utility/cloud service provider. They need fully functional products that are also easier and quicker to operate. Fast, accurate change management and operator efficiency are the key elements for utility/cloud infrastructure products. 3PAR's Best-of-breed storage products have these characteristics as well as being extremely space-efficient and high-performing. Customers appreciate the amount of time they do not spend managing their 3PAR storage while they are getting the job done. When a new order comes into a 3PAR kitchen, the system is ready to go right away - including tasks that take a long time to set up on other storage, such as Remote Copy.
And what about the All-in-one stacks in the market? Surprisingly, unlike traditional All-in-one solutions, they are more expensive to install and operate. Change management is complex, which leads to relatively poor operator efficiency and the engagement of professional services, which does not necessarily speed up the process. The traditional benefits that All-in-one solutions typically provide are not part of these stack solutions.
The predictions for stacks taking over the market are all wrong. Sure, there will be stack solutions sold and it will take time for all of this to sort itself out as it always does when an industry is going through major, fundamental changes. The most important changes that will occur in the years to come will be driven by the service demands placed on utility/cloud service providers. Customers of utility/cloud services want their money's worth and the best service providers will do what it takes to give it to them. Stacks add no value in that equation.
Posted at 08:37 AM in 3PAR, bloggers, cloud computing, customers, EMC, enterprise storage, Hitachi, HP, multi-tenant storage, performance, remote copy, servers, storage companies, storage services, utility computing, virtualization, wide striping | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, arrays, best-of-breed, EMC, HDS, HP, stack, storage, vblock
Good question, Nigel. One of the biggest problems customers have is being able to fully utilize all their resources. It's not just that the ROI for storage tends to be underwhelming; more frustrating is the fact that their storage was provisioned in a way that makes resources inappropriate or unavailable for the pressing needs at hand.
Pools are used in two ways: to reserve storage capacity for certain functions such as snapshots, or to create QoS levels for storage. The difficulty is that resources committed to QoS pools are practically locked into them and cannot be easily redistributed to other pools to meet changing demands. As storage systems age and fill with data, the various pools are consumed unevenly. For example, consider an array with six pools provisioned as follows:
Pool #1: SATA, 60TB (usable), RAID 6 - primary bulk storage: no performance requirements
Pool #2: SATA, 40TB (usable), RAID 10 - primary low cost storage: capacity over performance
Pool #3: FC/SAS, 30TB (usable), RAID 5 - primary high performance storage: performance over capacity
Pool #4: FC/SAS, 20TB (usable), RAID 10 - primary highest performance storage
Pool #5: SATA, 50TB (usable), RAID 5 - secondary snapshot storage
Pool #6: FC/SAS, 60TB (usable), RAID 5 - secondary snapshot storage
The problems storage administrators constantly deal with occur when one pool maxes out, making its associated QoS level unavailable. The admin then has three choices: 1) use a higher QoS, 2) use a lower QoS, or 3) add new resources to the pool, if possible. Using a higher QoS may create performance problems for higher-priority applications. Using a lower QoS means performance problems for the application itself. Adding resources might mean interrupting many other applications and taking them offline while workloads are shifted. Then you get the ripple effect of "remodeling the kitchen".
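The fragmentation problem above can be sketched in a few lines of code. This is a toy model, not any vendor's actual allocator: the pool names and sizes echo the six-pool example, and the fallback logic mirrors the admin's "higher or lower QoS" dilemma.

```python
# Toy model of pool-based provisioning and the QoS-exhaustion problem.
# Illustrative only; sizes mirror the example above.

class Pool:
    def __init__(self, name, media, raid, usable_tb):
        self.name, self.media, self.raid = name, media, raid
        self.usable_tb = usable_tb
        self.used_tb = 0

    def free_tb(self):
        return self.usable_tb - self.used_tb

    def allocate(self, tb):
        if tb > self.free_tb():
            return False  # pool is maxed out: its QoS level is now unavailable
        self.used_tb += tb
        return True

pools = [
    Pool("bulk",      "SATA",   "RAID6",  60),
    Pool("low-cost",  "SATA",   "RAID10", 40),
    Pool("high-perf", "FC/SAS", "RAID5",  30),
    Pool("highest",   "FC/SAS", "RAID10", 20),
]

# Nearly fill the high-performance pool, then try one more volume.
pools[2].allocate(28)
ok = pools[2].allocate(5)  # fails: 28 + 5 > 30 TB

# The admin's choices from the text: spill to a higher QoS pool,
# spill to a lower one, or (offline) grow the pool itself.
fallback = next(p for p in (pools[3], pools[1]) if p.allocate(5))
```

Note that the 2 TB left stranded in the high-performance pool can't be handed to any other QoS level; that locked-in capacity is exactly the waste the post is describing.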
When you consider the fact that some storage systems force users to establish separate pools for thin provisioned volumes and thick volumes, the number of pools in the system increases and the fragmentation of resources becomes a much bigger problem.
The best practice for managing storage pools is to do away with them entirely so they don't inhibit access to expensive resources and, more importantly, so they don't soak up so much administrative time and create increased risk of downtime and data loss.
Pools of disk drives are just a thin layer above bare disk drives where virtualization is concerned. Considering the transparent nature of system virtualization technology it is almost incomprehensible that storage systems force customers to create these artificial constructs that force hard choices about something as basic as the layout of data on disks. Vendors with pool-based volume management like to distract customers by talking about whiz bang functionality that doesn't address the core storage problem - the fact that their customers are still doing much of the work that the system ought to do for them.
The best practice then is to replace outdated storage designs with new designs that do not reserve storage resources in pools and do not use pools to create QoS levels. 3PAR InServ storage systems do not use pools and do not reserve capacity for different QoS levels.
3PAR InServ storage systems are used by many of the largest companies in the world, saving them an enormous amount of money by lowering capacity requirements and administrator overhead. For example, Priceline.com has been a 3PAR customer for many years, and they talk about how it has worked for them in this video on YouTube.
The InServ's data layout starts with the subdivision of all disk resources into 256MB "mini-disks" we call chunklets. All the higher-level RAID functions in a 3PAR system are applied at the chunklet level, not at the disk level. RAID in an InServ system is implemented as "micro-RAID" sets, which are then concatenated together and formed into virtual volumes that are exported as LUNs.
FWIW, the term virtual volume was used by 3PAR years before the system virtualization phenomenon became the market force that it is today. I only mention this to reinforce the fact that from its inception, the InServ internal storage architecture was designed to virtualize storage. It makes storage administration transparent by doing the low-level provisioning work on behalf of the storage administrator.
As new storage is provisioned in a 3PAR system, the data is spread across chunklets in small 16KB increments. All the disk drives of the same class (SATA vs. high-performance) in the system are used by default, so data is widely striped for optimal throughput and to avoid hotspots. While there actually are small amounts of capacity pre-allocated for use before storage is provisioned, this is done automatically by the system, in thin slices across all drives.
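The chunklet idea can be sketched in code. This is a simplified illustration of the layout described above, not 3PAR's implementation: the 256MB chunklet and 16KB stripe sizes come from the post, while the disk counts and round-robin placement are assumptions for the example.

```python
# Illustrative sketch of chunklet-style wide striping: disks are carved
# into 256 MB "mini-disks", and writes are spread across all drives of
# one class in 16 KB steps. Simplified; not an actual array's layout engine.

CHUNKLET_MB = 256
STRIPE_KB = 16

def carve_chunklets(disks_gb):
    """Return one (disk_id, chunklet_id) entry per 256 MB chunklet."""
    chunklets = []
    for disk_id, size_gb in enumerate(disks_gb):
        for c in range(size_gb * 1024 // CHUNKLET_MB):
            chunklets.append((disk_id, c))
    return chunklets

def stripe_writes(total_kb, n_disks):
    """Count how many 16 KB increments land on each disk (round-robin)."""
    counts = [0] * n_disks
    for step in range(total_kb // STRIPE_KB):
        counts[step % n_disks] += 1
    return counts

disks = [600] * 8                       # eight 600 GB drives of one class
chunklets = carve_chunklets(disks)      # 2400 chunklets per drive
load = stripe_writes(1024, len(disks))  # a 1 MB write: 64 steps over 8 disks
```

The point of the sketch is the even `load` result: because increments rotate across every spindle, no single drive becomes a hotspot, which is what makes the wide-striped layout fast without manual data placement.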
There are no pools, no constraints, no weeks-long planning efforts needed for storage installations and change management.
If you are looking to dump your nagging storage administration problems why would you ever go back to pool based storage when that is the root cause of your problems?
Posted at 02:23 PM in 3PAR, customers, enterprise storage, mid range storage, multi-tenant storage, performance, reservationless, snapshots, storage management, utility computing, virtualization, wide striping | Permalink | Comments (12) | TrackBack (0)
Tags: 3PAR, cloud, enterprise, utility, virtual storage
Here's a video that TechTarget produced for us with one of our customers, Priceline.com.
Here are a few highlights from the video:
Priceline.com was one of the first e-commerce players to adopt virtualization. That may account for why the company's IT organization is known for its high availability and its ability to adapt quickly to changes in the market. Given that their business has a broad value-based appeal, their IT organization works very hard to get the best rate of return on its capital expenditures.
3PAR storage allowed them to increase their storage capacity over 400% over the last four years while reducing the administrative load required to manage it all. Ron Rose, ex-CIO at Priceline (now on the Sr. Management Team at Dell), said that they were able to decrease the data center footprint 50% during that time. Mr. Rose estimated that they were able to avoid deploying approximately 100 physical servers and their associated footprint costs, which were equivalent to 106 acres of trees and 310 tons of hydrocarbons per year.
Chuck Hollis wrote a blog post earlier this week, titled "Once Upon a Time". I thought it was an excellent post, telling the story of the transition EMC made a decade ago, starting when Joe Tucci replaced Mike Ruettgers. FWIW, I think the diversification that Tucci accomplished at EMC has made all the difference there - especially the acquisition of VMware. You might call it lucky (as I tend to do), but the fact is that their drive to diversify the business took them on a journey that has buoyed the company far beyond what their storage products by themselves would have supported.
At the end, he asks whether history is bound to repeat itself - which appeared to be a nudge toward some of the other companies in the industry. I didn't think this was such an affront - Chuck has been known to tweak competitors from time to time, but for the last six months or so he's restrained himself from doing so.
So I was surprised this morning when I saw some tweets that had me look at the post again. And sure enough, there was a blow-up there involving a cadre of Netapp people who overreacted to Chuck's post.
One of the consequences of this overreaction was that a benign blog post about EMC history became a referendum on Netapp's Secure Multi-Tenancy (SMT). It wasn't what Chuck was driving at in his original post, but the comments from Netapp folks steered the discussion in that direction.
Chuck's main argument is that SMT isn't very secure if your service provider can gain access to a tenant's data. I'd add to that and say, it's not very secure if your service provider can delete volumes and destroy data too. Inadvertent destruction of data by administrators is a larger threat than somebody pulling "an inside job".
But it doesn't just affect service provider scenarios. The issue of multi-tenancy also applies to private data center operations. There have been suggestions that the word "tenant" refer to the legal owner of the data, but the word "legal" is unnecessary and obscures the common understanding that a tenant is the application owner that uses a shared resource, whether it is a physical server or a storage array.
A good example of multi-tenancy within the confines of a private data center is a corporate database that is managed by a DBA that doesn't want anything else to impact their performance and stability. When that database is moved to a virtual environment, the DBA expects to have multi-tenant protection that ensures nothing changes except a decrease in operating costs. The same applies to any application owner who would like, but can't afford the luxuries of dedicated resources.
Role-based administration combined with resource virtualization makes multi-tenant environments safe from administrator errors. Limiting the scope of what an admin can see as well as what actions they can take eliminates the possibility of them making a simple mistake with major consequences. Using the DBA example, if the DBA alone controls their own storage resources, there is no opportunity for a co-worker to screw things up for them.
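The idea of scoping an admin's view and actions to their own resources can be sketched as follows. This is a generic illustration of domain-scoped, role-based administration in the spirit of the DBA example; the class and method names are invented for the sketch and are not any product's actual API.

```python
# Minimal sketch of domain-scoped administration: an admin can only see
# and act on volumes inside their own domain. Names are hypothetical.

class DomainError(Exception):
    pass

class Array:
    def __init__(self):
        self.volumes = {}  # volume name -> owning domain

    def create_volume(self, admin, name):
        self.volumes[name] = admin.domain

    def delete_volume(self, admin, name):
        # The destructive action is refused outside the admin's domain,
        # so a co-worker's mistake can't destroy the DBA's data.
        if self.volumes.get(name) != admin.domain:
            raise DomainError("volume outside admin's domain")
        del self.volumes[name]

    def visible_to(self, admin):
        # Limiting visibility is half the protection: you can't fat-finger
        # a volume you can't even list.
        return [v for v, d in self.volumes.items() if d == admin.domain]

class Admin:
    def __init__(self, name, domain):
        self.name, self.domain = name, domain

array = Array()
dba = Admin("dba", domain="oracle")
web = Admin("web", domain="webfarm")
array.create_volume(dba, "db_data")
array.create_volume(web, "web_logs")
seen = array.visible_to(web)  # the web admin sees only "web_logs"
```

In this model the web admin's `delete_volume(web, "db_data")` call raises `DomainError`, which is the whole point: the blast radius of a simple mistake is confined to the tenant's own domain.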
3PAR's Virtual Domain software (available since 2008) provides a role-based, restricted access system for managing storage resources. This certainly doesn't solve all the security problems for multi-tenant environments, but it's an excellent way to eliminate the most common concerns of application owners.
The technology can be extended to public cloud infrastructures as well if a service provider chooses to make it available. A customer can be given Virtual Domain private control of their storage resources - without the ability to see any other customers' resources - to manage and provision as they see fit. In the service provider model, 3PAR provides the technology to its service provider partners who provide Virtual Domain-based services to their customers. 3PAR Cloud Agile partners who offer these services today are:
It's out there and available for private or public use.
Posted at 12:31 PM in 3PAR, bloggers, cloud computing, EMC, enterprise storage, mid range storage, multi-tenant storage, Netapp, Oracle, performance, storage companies, utility computing, Virtual Domains | Permalink | Comments (2) | TrackBack (0)
Tags: 3PAR, EMC, multi-tenancy, Netapp, secure, storage, virtual domains
I just read an article on the SearchDataCenter site about how the concept of infrastructure blocks is playing out. The article presents several perspectives, but it's a bit confused: the concept is referred to by three different terms (pods, blocks and cells), and the comparison between building your own and buying one is not clearly drawn. Regardless, it's a thought-provoking article.
But it does raise the question: what should we call these things? I think a better generic word for them is iBlock, short for infrastructure block.
I've been speaking to customers about this sort of thing lately and a number of them have expressed the opinion that rolling out their own iBlock would be a lot cheaper, more flexible and more scalable than anything they could buy from a vendor. I'm a big believer in the power of integration, but it's possible to get too far ahead of the curve.
3PAR customers have already been implementing iBlocks for several years using the 3CV design discussed in this ESG Labs report. That's one approach. The question is, if you were going to build an iBlock, how would you do it - and why?
I caught up with Mark Cravotta from Datapipe recently at a 3PAR event in Las Vegas. He's a high energy person who is having a lot of fun growing Datapipe's hosting and cloud computing services as well as helping to manage its expansion around the globe.
Datapipe is a 3PAR Cloud Agile partner and customer who uses our products throughout their line for primary multi-tenant storage, data snapshots, remote replication and all aspects of disaster recovery.
In addition to being customer-driven, Datapipe is also committed to being a leader in green utility computing by reducing the carbon footprint of its data centers through power purchases from green power producer Constellation NewEnergy.
Posted at 01:46 PM in 3PAR, backup, cloud computing, customers, energy, enterprise storage, green computing, multi-tenant storage, partners, remote copy, snapshots, storage services, utility computing, video | Permalink | Comments (0) | TrackBack (0)
Tags: 3PAR, cloud agile, cloud computing, Datapipe, green
London-based Ultraspeed has succeeded in the managed hosting business for 13 years by being smart, opportunistic and service-oriented. A lot has changed during that time, especially customer expectations for uptime and how much customers rely on their hosting providers to respond quickly when needed. Web sites that are lucky enough to "go viral" can be a disaster if the hosting company's infrastructure is unable to adjust rapidly enough to meet demand.
In March, Ultraspeed opened their second data center in Amsterdam implementing a modular infrastructure design including multi-tenant 3PAR storage, VMware, Extreme Networks switches and customized servers. The highlight of their Amsterdam site is the ability to offer bi-directional DR services between London and Amsterdam using 3PAR's Remote Copy software. Ultraspeed is a member of 3PAR's Cloud-Agile program.
In this interview, conducted in February 2010, Jordon Gross, CEO of Ultraspeed, and Michael Shanks, CTO, joined us for coffee near their offices and talked about their company's history, its technology, the challenges they face and how they expect things to shape up in the years to come.