And now, the final installment of SAGA: ZFS. Click on the image below to watch the video page:
Click this link to see the sordid start of SAGA: ZFS
There was a lot written last week surrounding VMware's release of vSphere 4.1. Netapp appeared to have a lot to say, but it was confusing to figure out what they were really talking about. I think I've got it now.
It's unusual for a company to be invited as a centerpiece of high-visibility festivities and then mysteriously decide not to follow through. It would be like getting complimentary tickets and backstage passes from Lady Gaga herself, telling all your friends about it and then not going. It does make one wonder: why wouldn't you do whatever it takes to be included in VMware's big summer announcement party? Well, if you're Netapp, the answer appears to be, "Being there is over-rated. Just make sure everyone thinks you were." Call it Photoshop for PR or call it keeping your poker face; it's a mash-up of a blown opportunity and opportunistic courage.
The excitement for VMware's storage partners was concentrated in two areas: VAAI (vStorage API for Array Integration) and SIOC (Storage I/O Control). The initial release of VAAI includes new SCSI block storage commands that allow arrays to offload redundant, resource-consuming tasks from host systems. SIOC is a method for managing I/O queues to create more fairness in accessing storage resources. Netapp issued a press release last week in conjunction with the vSphere 4.1 release, but it was for their Virtual Storage Console, not for support of the storage enhancements in vSphere 4.1. There was a flag-waving mention of VAAI:
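To make SIOC's fairness idea concrete, here's a toy sketch in Python. This is not VMware's actual algorithm, and the threshold value is made up; it just illustrates the general concept of shares translating into per-VM queue depth once latency signals congestion:

```python
# A toy illustration of share-based I/O throttling in the spirit of SIOC.
# This is NOT VMware's algorithm -- just a sketch of proportional fairness:
# once datastore latency crosses a (hypothetical) threshold, each VM's
# slice of the device queue depth is scaled to its share entitlement.

LATENCY_THRESHOLD_MS = 30  # assumed congestion trigger, purely illustrative

def allocate_queue_slots(shares, total_queue_depth, observed_latency_ms):
    """Divide a device queue depth among VMs in proportion to their shares."""
    if observed_latency_ms < LATENCY_THRESHOLD_MS:
        # No congestion: no throttling is applied.
        return {vm: total_queue_depth for vm in shares}
    total_shares = sum(shares.values())
    return {
        vm: max(1, (total_queue_depth * s) // total_shares)
        for vm, s in shares.items()
    }
```

The point of the real feature is the same as the toy: noisy neighbors get squeezed back to their entitlement only when the datastore is actually congested.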
"Additionally, NetApp is supporting the new VMware vStorage APIs for Array Integration (VAAI) capabilities that offload data management tasks from the host server to the storage system. This can free up host CPU cycles for better performance and increased virtual machine density."
That's not exactly saying anything, but it's more than they had to say about SIOC, which was zilch.
The bottom of the release directs readers to Vaughn Stewart's blog for more info. Apparently, Netapp's PR department left the rest of the innuendo up to Vaughn - a diligent and loyal Netapp employee who understands that sometimes a vendor blogger doubles as a PR bagman. It looks like I need to add a new chapter to Vendor Blogging with Dummies.
You have to dig into the comments to get some of the details, but Vaughn's blog does a decent job explaining that Netapp is working on delivering VAAI functionality in Q4 2010. Now, that's not all that late considering it's only six months or so away, but for a privileged insider to VAAI development, it's not a great showing either. In fact, it wouldn't surprise me if some of the companies that were not in the program, such as Compellent, HP, IBM and Xiotech, come out with VAAI plug-ins before Netapp does. As for 3PAR, we will have our VAAI plug-in available in September as part of a maintenance release. We didn't have a lot of time to develop VAAI functionality after gaining access to the APIs in early 2010, but we fast-tracked the development in order to make the announcement.
As much as I admire Vaughn's chutzpah for stepping in to carry the load that others at Netapp should have, there were a few problems with what he said. First was the absurd statement that "SAN is attempting to be more NAS-like". There is so much wrong with that statement that it's difficult to find a place to start. Who or what is SAN? Is VMware SAN? Is the T10 SCSI standards committee SAN? Is SAN a being that embodies SAN the block protocol? Is there a virtual reality thing going on here? And what is NAS-like anyway? Does it have anything to do with the size of one's beak or the way particular vowels resonate in the sinus cavities? Or is it like racing the back roads in a used Chevy? Whatever Vaughn meant, I tend to dislike the imprecision of technology anthropomorphism.
The second thing Vaughn said was "As for the first release of VAAI... These features ALREADY EXIST in NFS." Really, block zeroing? That is a function developed for EagerZeroedThick volumes, which are only supported on VMFS datastores, not NFS datastores. Perhaps we will see that change in the future, but for now it's SAN-only.
Hardware assisted locking is a way to allow finer-grained locking for VMFS and addresses an issue with VMDK-level operations in a shared datastore. Because NFS puts VMDKs in separate datastores, which are locked independently, hardware assisted locking is unnecessary for NFS. In other words, it's a SAN-only function because the current NFS datastore architecture doesn't need it.
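For the curious, the primitive behind hardware assisted locking (the SCSI COMPARE AND WRITE command, often called ATS) is just an atomic test-and-set on an on-disk lock record, executed by the array instead of through whole-LUN SCSI reservations. A toy model of the idea, not anybody's array firmware:

```python
import threading

class ToyLun:
    """A toy LUN: sector_data maps LBA -> bytes. The lock below simulates
    the array performing the compare and the write as one atomic step."""
    def __init__(self):
        self.sector_data = {}
        self._atomic = threading.Lock()

    def compare_and_write(self, lba, expected, new):
        """Atomically replace the sector at `lba` only if it still holds
        `expected`. Returns True on success (the caller now owns the lock
        record) or False if another host got there first."""
        with self._atomic:
            if self.sector_data.get(lba, b"") == expected:
                self.sector_data[lba] = new
                return True
            return False

lun = ToyLun()
FREE, HOST_A, HOST_B = b"free", b"host-a", b"host-b"
lun.sector_data[100] = FREE
assert lun.compare_and_write(100, FREE, HOST_A)      # host A wins the lock
assert not lun.compare_and_write(100, FREE, HOST_B)  # host B sees it taken
```

Because only the lock record's sector is contended, two hosts can operate on different VMDKs in the same VMFS datastore without serializing against each other.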
The other API in VAAI is Full Copy. This VAAI API appears to be functionally equivalent to a Netapp utility called RCU (Rapid Cloning Utility) that was included as a function in their Virtual Storage Console. It is not, however, something that exists in NFS, unless Netapp wants to give that feature to all its NAS competitors. As a vSphere function, Full Copy will be available to all vendors that implement the VAAI APIs. It will be interesting to see what differences there are as far as programmatic control using the VAAI plug-ins, vendor-specific consoles and PowerShell.
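The mechanics of Full Copy are easy to picture: it turns a host read-then-write loop into a single copy descriptor handed to the array, which moves the data internally. A back-of-the-napkin sketch of the difference in host-side traffic — hypothetical function names, not any vendor's plug-in code:

```python
def host_side_copy(read_block, write_block, src_lba, dst_lba, nblocks):
    """Without VAAI: every block crosses the SAN twice (array->host->array)."""
    trips = 0
    for i in range(nblocks):
        data = read_block(src_lba + i)   # one trip: array -> host
        write_block(dst_lba + i, data)   # one trip: host -> array
        trips += 2
    return trips

def full_copy(send_xcopy, src_lba, dst_lba, nblocks):
    """With VAAI Full Copy: hand the array one descriptor and let it
    move the data internally -- a single command from the host's view."""
    send_xcopy(src=src_lba, dst=dst_lba, count=nblocks)
    return 1
```

For a clone or Storage vMotion of a large VMDK, that difference in round trips is exactly the host CPU and fabric bandwidth the press releases are talking about.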
Chuck Hollis wrote a blog post earlier this week, titled "Once Upon a Time". I thought it was an excellent post, telling about the transition EMC made a decade ago, starting when Joe Tucci replaced Mike Ruettgers. FWIW, I think the diversification that Tucci accomplished at EMC has made all the difference there - especially the acquisition of VMware. You might call it lucky (as I tend to do), but the fact is that their search for ways to diversify their business took them on a journey that has buoyed the company far beyond what their storage products by themselves would have supported.
At the end, he asks whether history is bound to repeat itself - which appeared to be a nudge toward some of the other companies in the industry. I didn't think this was such an affront - Chuck has been known to tweak competitors from time to time, but for the last six months or so, he's restrained himself from doing so.
So I was surprised this morning when I saw some tweets that had me look at the post again. And sure enough, there was a blow-up there involving a cadre of Netapp people who overreacted to Chuck's post.
One of the consequences of this overreaction was that a benign blog post about EMC history became a referendum on Netapp's Secure Multi-Tenancy (SMT). It wasn't what Chuck was driving at in his original post, but the comments from Netapp folks steered the discussion in that direction.
Chuck's main argument is that SMT isn't very secure if your service provider can gain access to a tenant's data. I'd add to that and say, it's not very secure if your service provider can delete volumes and destroy data too. Inadvertent destruction of data by administrators is a larger threat than somebody pulling "an inside job".
But it doesn't just affect service provider scenarios. The issue of multi-tenancy also applies to private data center operations. There have been suggestions that the word "tenant" refer to the legal owner of the data, but the word "legal" is unnecessary and obscures the common understanding that a tenant is the application owner that uses a shared resource, whether it is a physical server or a storage array.
A good example of multi-tenancy within the confines of a private data center is a corporate database that is managed by a DBA who doesn't want anything else to impact its performance and stability. When that database is moved to a virtual environment, the DBA expects to have multi-tenant protection that ensures nothing changes except a decrease in operating costs. The same applies to any application owner who would like, but can't afford, the luxuries of dedicated resources.
Role-based administration combined with resource virtualization makes multi-tenant environments safe from administrator errors. Limiting the scope of what an admin can see as well as what actions they can take eliminates the possibility of them making a simple mistake with major consequences. Using the DBA example, if the DBA alone controls their own storage resources, there is no opportunity for a co-worker to screw things up for them.
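Role-scoped, domain-restricted administration is simple to express in code. Here's a toy sketch with made-up names — not any vendor's actual API — showing the core rule: every admin action is checked against the domain the admin is scoped to before it touches a volume.

```python
class DomainScopeError(Exception):
    """Raised when an admin tries to act outside their assigned domain."""
    pass

class Array:
    """A toy storage array with domain-scoped administration.
    All names here are hypothetical illustrations."""
    def __init__(self):
        self.volumes = {}       # volume name -> owning domain
        self.admin_domain = {}  # admin name  -> domain they may manage

    def create_volume(self, name, domain):
        self.volumes[name] = domain

    def delete_volume(self, admin, name):
        """Refuse the delete unless the admin's domain owns the volume."""
        if self.volumes.get(name) != self.admin_domain.get(admin):
            raise DomainScopeError(f"{admin} may not touch {name}")
        del self.volumes[name]

array = Array()
array.admin_domain.update({"dba": "oracle-domain", "webadmin": "web-domain"})
array.create_volume("oracle-data", "oracle-domain")

try:
    array.delete_volume("webadmin", "oracle-data")  # wrong domain: blocked
except DomainScopeError:
    pass
array.delete_volume("dba", "oracle-data")           # owning domain: allowed
```

The fat-finger delete that destroys someone else's data simply has no code path once the scope check sits in front of every destructive operation.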
3PAR's Virtual Domain software (available since 2008) provides a role-based, restricted access system for managing storage resources. This certainly doesn't solve all the security problems for multi-tenant environments, but it's an excellent way to eliminate the most common concerns of application owners.
The technology can be extended to public cloud infrastructures as well if a service provider chooses to make it available. A customer can be given Virtual Domain private control of their storage resources - without the ability to see any other customers' resources - to manage and provision as they see fit. In the service provider model, 3PAR provides the technology to its service provider partners who provide Virtual Domain-based services to their customers. 3PAR Cloud Agile partners who offer these services today are:
It's out there and available, for private or public use.
Posted at 12:31 PM in 3PAR, bloggers, cloud computing, EMC, enterprise storage, mid range storage, multi-tenant storage, Netapp, Oracle, performance, storage companies, utility computing, Virtual Domains
Tags: 3PAR, EMC, multi-tenancy, Netapp, secure, storage, virtual domains
InfoSmack podcasters Greg Knieriemen and Yours Truly interview Greg Kleiman (Netapp), Eran Farajun (Asigra), Brad Rooke (JumpPoint) and Daniel Milburn (Consonus) about the current status of cloud storage, the impact CDMI will have, and their thoughts on how this industry will evolve over the next several years. Recorded at SNW 2010 in Orlando.
The show has three parts: 1) intro, early-stage apps, backup; 2) CDMI; 3) a look into the future and the competitive landscape in cloud storage services.
This morning Netapp announced plans to acquire Bycast, Inc., a privately held company in Vancouver, BC. I can see wanting an office in Vancouver, so congrats to Netapp on that front. Also congrats for sticking it in the eyes of storage competitor HP - and probably their N-Series partner IBM - both of whom have been acting as OEM sales channels for Bycast.
Here are the main points:
Here is how Bycast describes itself (from their Company Overview page):
Bycast is the leading provider of advanced storage virtualization software for large-scale digital archives and storage clouds. For organizations whose business depends on access to vital data, Bycast protects and preserves digital assets over their lifetime. Bycast StorageGRID® software simplifies the management of massive fixed-content storage systems and enables organizations to optimize their storage infrastructure and ensure the integrity and availability of their valuable data assets. StorageGRID also enables the formation of archives that can scale to petabytes of data across hundreds of sites. StorageGRID is sold globally through OEM relationships with two of the world’s major storage vendors. Bycast Inc. is a privately held company headquartered in Vancouver, BC.
The company's market leadership is illustrated by a global customer base, a vibrant application partner ecosystem, and strategic partnerships with industry-leading storage vendors IBM and HP. Bycast StorageGRID is unique in that it is proven to address the needs of both centralized and distributed organizations, across heterogeneous hardware environments.
StorageGRID has won numerous industry awards including 2006 Storage Product of the Year and the Frost & Sullivan Healthcare Technology Innovation award. By providing a storage virtualization layer that sits transparently between enterprise applications and industry-standard storage hardware, StorageGRID addresses the needs of key, high-growth segments of the digital archiving market:
- Multi-site enterprises with archives distributed across multiple data centers
- Regional archives with independent organizations sharing common data centers
- Small and medium sized businesses that require a robust digital archiving platform
- Service providers delivering long term data archiving as a service
Here is the full text from Netapp's press release today: (Skip past the italics if you've already read it or don't like reading press releases)
Normally, I wouldn't publish another company's press release, but I wanted to make things easier for readers. If you search these two distinct descriptions, you will find Bycast describes itself using the words "archive" or "archiving" seven times. Netapp avoids the word altogether. Conversely, in describing Bycast, Netapp uses the word "object" nine times, whereas Bycast leaves it out completely.
NetApp (NASDAQ: NTAP) today announced that it has entered into a definitive agreement to acquire Bycast Inc., a privately held company headquartered in Vancouver, British Columbia, Canada, in an all-cash transaction.
Bycast is a leading developer of object-based storage software designed to manage petabyte-scale, globally distributed repositories of images, video, and records for enterprises and service providers. Customers whose business depends on access to critical data across geographically distributed locations rely on Bycast to better share and retain content anywhere, any time to quickly respond to their changing business requirements. Founded more than 10 years ago, Bycast has helped more than 250 customers worldwide dramatically improve their operational efficiency and reduce the administrative burden of managing massive quantities of data across multiple geographies.
Bycast extends NetApp's leadership position in unified storage by adding an object-based storage software offering. Object-based storage is a new and emerging approach to storing and accessing data based on object names and rich metadata that describes the content in greater detail, which simplifies the task of large-scale object storage while improving the ability to quickly search and locate data objects.
For example, a media company can use an object-based storage solution to provide its graphic artists around the world with the ability to simultaneously access data and collaborate on common projects. Object-based storage interfaces greatly simplify the administration of the storage used for this purpose. With the acquisition of Bycast, NetApp broadens its capabilities in serving key verticals such as digital media, Web 2.0, healthcare, and cloud services providers and helps customers create even greater efficiencies across data centers around the globe.
"Bycast extends our unified storage strategy and enhances our solution for shared storage infrastructure by adding new capabilities for global data access and mobility," said Manish Goel, executive vice president, Product Operations, NetApp. "The addition of Bycast's products enables NetApp to offer our enterprise customers and service provider partners a complementary solution that enables them to efficiently build and manage a very large-scale global repository of data central to many IT-as-a-service offerings."
Portfolio and People Synergy
Bycast enables NetApp to expand into new opportunities and markets for petabyte-scale, billion-object content repositories. In addition to its products, Bycast brings to NetApp valuable technology and talented employees. Bycast employees' technical expertise, experience, and support of their customers create powerful synergies with the NetApp culture, values, and commitment to customer success. Bycast's Vancouver headquarters will become a technology center for NetApp and will be responsible for existing Bycast products and future product development.
As a proven market leader in the storage industry, NetApp provides Bycast immediate enterprise credibility. In addition, NetApp's global sales organization and partnerships will expand the delivery of the Bycast portfolio and enable broader market reach to enterprise customers, service providers, international markets, and additional vertical markets to drive adoption and success of its products.
"We are excited and look forward to joining the NetApp team," said Moe Kermani, CEO of Bycast. "We share a complementary vision and a common dedication to excellence. Together we will offer customers the best-in-class content repository solutions that further their drive toward a unified storage infrastructure."
The acquisition is expected to close in May 2010, subject to the satisfaction of customary closing conditions.
Here is a little scoreboard for all the storage spin fans out there:
And so the spin win goes to Netapp - the company assuming all the risks.
iKnerd (Greg Knieriemen) broke the story yesterday about Oracle/Sun breaking off their relationship with HDS. That got everybody twittering - with the majority of tweets from the storage universe suggesting Oracle had greedy motives. How unfair! So, the video below attempts to restore balance to the universe and brings Netapp, HP, cloud computing, 3PAR and Larry's toys into the discussion.
If you are a Sun storage customer and think it's time to change, you should check out 3PAR. We have a lot of ex-Sun server engineers who designed our storage cluster. I'm sure you'll appreciate the architecture of our InServ arrays, as well as our 50% capacity reduction guarantee. (Hey, Claus Mikkelson at HDS: I've had a comment in on your blog for a couple of days and it hasn't been posted yet. I know things can slip through the cracks sometimes, so I thought I'd bring it to your attention.)
Last week, the storage anarchist published a virtual talk show featuring virtual me (3parfarley) as the special guest.
In a strange turnaround of events, the 3D cartoon instantiation of the storage anarchist was apprehended recently while sneaking around in 3PARvaTAR's chunklet matrix. Special cameo appearances are made by the Storage Architect, iKnerd and Stephen Foskett, direct from their karaoke concert last Thursday night @ #HPbladesday.
3PAR, EMC, Netapp, IBM, Capacity Guarantee, storage, array, SAN, HDS
Netapp has been on the hot seat ever since Tom Georgens, their CEO, commented that tiering would soon be obsolete. Since then, a number of people have called him out on it, including yours truly (in a steering wheel cam), StorageBod, The Storage Architect, StorageZilla, a storage blogging wannabe, and last but not least, the Storage Anarchist. To be fair, Georgens DID get support from the contrarian Drunken Data.
At the end of his post, the Storage Anarchist asks:
It's tempting to suspect WAFL's snapshot mechanism is the problem, but there is nothing about file-level snapshots that would preclude storage tiering. Storage tiering depends on the ability to redirect block addresses across device classes, which can be done at an abstraction layer below the file system level. In Netapp's case, the issue appears to be an interlock between WAFL and Netapp's underlying RAID layer. So I'd say it's mostly a Netapp RAID problem.
As writes come into a WAFL system, they are first staged to NVRAM in order to eliminate parity RAID write penalties, and then they are written to "nearby" blocks using a tightly coupled relationship between the file system and its underlying RAID subsystem. This design gives the file system an unusually detailed knowledge of disk drive operations and status within the RAID array. In other words, the file system in a Netapp machine is intricately coupled to the physical characteristics of the underlying storage hardware, which makes creating block abstraction layers highly improbable. The text-image below is from Netapp's patent filing 6,138,126, dated October 24, 2000.
Now it's possible that this patent does not indicate the implementation within Filers today, but I'd say there is a good chance it explains Netapp's reluctance to embrace tiering. If this turns out to be Netapp's tiering shortcoming, Netapp would need to virtualize their RAID implementation in order to get to a point where they could start working on tiering.
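To be clear about what "virtualize their RAID implementation" would mean: tiering only requires a remappable translation layer between the addresses the file system uses and the devices that actually hold the data. A toy sketch of such a layer — assuming nothing about any vendor's design, with made-up tier names:

```python
class TieredBlockMap:
    """A logical-to-physical block map sitting below the file system.
    The file system keeps addressing the same logical block forever;
    promote() rebinds that block to a different device class behind
    the file system's back."""
    def __init__(self):
        self.map = {}  # logical block -> (tier, physical block)

    def allocate(self, logical, tier, physical):
        self.map[logical] = (tier, physical)

    def promote(self, logical, new_tier, new_physical, copy_fn):
        """Move hot data to a faster tier; the logical address never changes."""
        old_location = self.map[logical]
        copy_fn(old_location, (new_tier, new_physical))  # migrate the data
        self.map[logical] = (new_tier, new_physical)

    def resolve(self, logical):
        """What the I/O path consults on every access."""
        return self.map[logical]
```

A file system that is welded directly to the physical geometry of its RAID groups has no place to slot a map like this in, and that is the crux of the argument above.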
Holy cow!! Is it possible that Netapp is actually THAT FAR behind in storage virtualization - not to mention the next wave of the technology - tiering?
If this analysis is correct, it may be that the only way to get tiering (not caching) with a Netapp system is to connect their V-Series filer to a third party array that offers tiering. If that's the case, will Netapp support it, seeing as how they don't have it on their home-grown Filers?
The hot seat could get a lot hotter.
No, it's not a SWCSA rap, but it is a steering wheel cam, complete with a surprise ending - inspired by Stu at EMC.
There's been a dysfunctional discussion of capacity guarantee programs over on Chuck's blog. There had been more sensible, independent discussions on the Storage Architect's blog, but that apparently wasn't good enough for EMC - a company without a capacity guarantee program of their own. Unfortunately, Chuck decided to shut down comments on his post, citing an overload of vendor hash - which could continue to go on as long as there is breath left in any bloggers from Netapp.
Chuck's post poses the question: do you want to buy from a doctor or a used car salesman? The suggestion he makes is that EMC treats you the way a doctor would, while 3PAR, HDS and Netapp treat you the way used car salesmen do.
The doctor picture he used was this one:
Which reminded me of Scrubs - but of course there are other doctor images he could have used:
In case you've been shunning the news, this is Dr. Conrad Murray.
The used car salesman picture was pretty funny:
I'd suggest Chuck is using classic used car sales tactics: "Who loves ya baby? The warranty them guys offer don't protect you from nuthin'. Your engine will blow up the day after the warranty expires. All they want is your munny!"
Still, seeing as how he was linking this image to 3PAR (in one way or another), I'd have hoped he would have used a picture like this instead:
You might not end up buying that car, but you should at least check it out.
Chuck characterizes capacity guarantee programs as not being in the customer's best interests. That would be true if 3PAR, HDS and Netapp wanted to increase the number of unhappy customers they have, but that is just CRAZY EMC thought diarrheaship:
Instead, I'm pretty sure we all want our customers to be very happy with their storage solution:
Yes, 3PAR's capacity guarantee is a way to attract customers, but it's much more than that - it's a way to back up our efficiency claims by putting our money where our mouths are:
RecoveryMonkey had a post recently about FUD and the ridiculous corner case claims storage vendors sometimes make about each other. 3PAR has been telling customers for years that our products are more efficient than theirs, and we are now backing it up with our capacity guarantee. It's not FUD, it's not spin and it's definitely not a corner case.
EMC can't help themselves. Given what appears to be a new corporate mandate to deny competitive threats through their massive propaganda machine, they look pretty stupid when one of those threats involves their new BFF Cisco and their trophy-acquisition raison d'être, VMware.
Yesterday, Netapp, Cisco and VMware made a planned, well-orchestrated announcement of their secure multi-tenancy architecture. Trust me - everyone in the industry knew it was coming, and it was the main reason we chose to time our vSphere and vCenter Server announcement for Monday this week. Chuck Hollis, EMC's StorageMonkeys poll winner, responded with an incredibly ignorant post that does not even mention the companies involved, nor does it link to any of the numerous online postings discussing it. For as many words as La Bombast put into this blog entry, you would think he would have at least mentioned Cisco in his discussion of network security. Granted, this announcement puts EMC in a compromised position, seeing as how their most strategic partners have chosen to develop an advancement in data center technology with one of their most dangerous competitors, Netapp. Nonetheless, EMC's inability to deal with the reality of it underscores the difficulties they have holding an objective, honest and open public discussion.
I like what Netapp, VMware and Cisco have done with their Secure Multi-Tenancy architecture. The ability to segregate and align applications, data and IT resources for management purposes is a very big deal. It is also one of the key concepts in virtualization technology. Sharing resources among multiple applications and users provides powerful economic leverage (doh!) and aligning security and access controls on virtual boundaries is not only desirable - it is essential. Role-based administration is certainly nothing new, and bringing that into the virtual data center is a necessary evolution in the technology's development.
So what about 3PAR? Been there, done that, and we continue to lead in utility storage infrastructure technologies. 3PAR developed our Virtual Domain technology more than two years ago, bringing role-based administration and firewalls to utility storage. It's also why we developed our Cloud-Agile program for utility computing service providers and designed our Cloud-Agile SECURED solution into it. We have done many things over the years that have been ahead of their time, and Virtual Domains was one of them. We're happy to see Netapp, Cisco and VMware shine a light on this important technology area. Too bad for EMC that they missed on this one. Guess they have some 'splainin and fudding to do!
Check out our capacity guarantee -
the program EMC doesn't want you to know about.
Posted at 06:59 AM in 3PAR, bloggers, Cisco, cloud computing, customers, EMC, enterprise storage, Netapp, partners, servers, storage companies, storage management, utility computing, virtualization, VMware
Tags: 3PAR, Cisco, cloud, EMC, multi-tenancy, Netapp, secure, storage, virtualization, VMware
Storagebod posted recently about the foibles of EMC not certifying Netapp's V-Series Filer heads with their storage. That certainly is not a decision made in the best interest of EMC's customers, no matter how they might try to spin it. FWIW, 3PAR is a Netapp V-Series partner and we're happy to help customers get a best-of-breed SAN+NAS solution installed.
Anyway, it occurred to me how much less bickering there would be between Netapp and EMC if EMC would only certify the V Series....