Project Honeycomb, a.k.a. the Sun StorageTek 5800, received the scrutiny of Mario Apicella, senior analyst at the InfoWorld Test Center, in a recent test.
According to Apicella, the Sun StorageTek 5800 addresses fixed-content archiving needs with a resilient, cell-based solution that scales from 8 to 16 nodes per cell (half a rack), or up to 32 nodes in a single rack, with the possibility of further expansion through the addition of more cells.
What distinguishes Honeycomb from the competition, in Apicella's view, is its openness, which allows users to define their own metadata schemas consistent with the specifics of their respective applications.
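To make the idea concrete, different applications could attach very different searchable attributes to the objects they archive. The following Python sketch is purely illustrative of that concept; the field names, the store_object helper, and the validation logic are assumptions for this example, not the actual StorageTek 5800 metadata API.

    # Hypothetical illustration of application-defined metadata schemas.
    # Field names and store_object() are assumptions, not the StorageTek 5800 API;
    # they only show the idea of attaching app-specific, queryable attributes.

    medical_schema = {
        "patient_id": str,
        "modality": str,      # e.g. "CT", "MRI"
        "study_date": str,    # ISO-8601 date
    }

    finance_schema = {
        "account_id": str,
        "document_type": str,  # e.g. "statement", "trade-confirmation"
        "fiscal_year": int,
    }

    def store_object(data: bytes, metadata: dict, schema: dict) -> None:
        """Validate metadata against the application's own schema before archiving."""
        for field, expected_type in schema.items():
            if not isinstance(metadata.get(field), expected_type):
                raise ValueError(f"metadata field {field!r} missing or wrong type")
        # ... hand data plus metadata to the archive here ...

    store_object(b"<scan bytes>",
                 {"patient_id": "P-1042", "modality": "CT", "study_date": "2008-03-14"},
                 medical_schema)

The point of the openness Apicella highlights is exactly this: the archive can index and query whatever attributes the application declares, rather than forcing every user into one fixed schema.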
In his test of Project Honeycomb, Apicella used a fully populated, 16-node cell connected to three client machines running the Solaris 10 Operating System (Solaris OS), Red Hat Enterprise Linux, and Windows Server 2003. In this setup, each Honeycomb node is essentially a server running Solaris 10 and the Sun StorageTek 5800 application software, and each server mounts four 500GB SATA drives that connect via redundant links to two GbE switches. The redundant switches are integral components of the cell and, of course, provide protection against a failure of either one, he writes.
He notes that, of the 16 nodes, one is elected master and coordinates the activities of the other nodes, but the system has a mechanism to quickly and automatically replace a failed master with another node, enhancing the solution's reliability.
Apicella also has praise for the simplified administrative interface of the Sun StorageTek 5800, which can be accessed via an SSH connection. "The whole CLI boils down to fewer than 20 commands...that cover setting the configuration of a cell, monitoring the physical health of the system, displaying I/O statistics from performance counters, and performing basic tasks such as rebooting, changing passwords, and setting the date and time."
Apicella's testing protocol used a set of scripts that let him choose the number of client machines in use during the test and the type of operation to perform: storing, reading, or deleting objects, or running a query. "One of the parameters of the script was the object size, which allowed me to crank up the number of operations per second when using small objects, or to push the limits of the system's transfer rate when working with large objects," he writes.
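A rough sketch of how such a driver script might be structured appears below; it is an assumption based on Apicella's description, not his actual scripts, and the client host names and do_operation placeholder are hypothetical.

    # Illustrative load-driver sketch, not Apicella's actual test scripts.
    # Client names, the operation mix, and do_operation() are assumptions.

    import time

    CLIENTS = ["solaris10-host", "rhel-host", "win2003-host"]  # hypothetical names
    OPERATIONS = ["store", "read", "delete", "query"]

    def do_operation(client: str, operation: str, object_size: int) -> None:
        # Placeholder: in a real test this would ask `client` to store, read,
        # delete, or query an object of `object_size` bytes against the cell.
        pass

    def run_load(num_clients: int, operation: str, object_size: int, duration_s: int) -> float:
        """Drive one operation type from num_clients machines; return ops/second."""
        active = CLIENTS[:num_clients]
        ops_done = 0
        deadline = time.time() + duration_s
        while time.time() < deadline:
            for client in active:
                do_operation(client, operation, object_size)
                ops_done += 1
        return ops_done / duration_s

    # Small objects stress operations per second; large objects stress transfer rate.
    print(run_load(num_clients=3, operation="store", object_size=4 * 1024, duration_s=10))

Varying the object-size parameter is what let Apicella probe both ends of the workload spectrum with the same scripts.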
Following the test, Apicella concluded, not surprisingly, that conventional NAS simply isn't designed for long-term archiving: a typical NAS would choke under the load of storing multiple large objects at the same time, and it would die with a third consecutive drive failure. Project Honeycomb, on the other hand, addresses the performance and resilience requirements of content archiving with a new architecture and a command set that is both intuitive and powerful. He cites the example of typing "sysstat" to get a concise status summary in just a few lines, which showed him that all 16 nodes and all 64 drives of the test cell were working properly and ready to go.