Hubert Yoshida, Hitachi Data Systems’ CTO, predicts that storage volumes are about to cross an important threshold. “We expect that within the next three to five years we will have a customer with an exabyte of data,” Yoshida said. “Today we have customers with close to 100 petabytes.”
What can you realistically do to manage a million terabytes or more of information (or 100 petabytes, for that matter)? Whatever the answer, it’s unlikely to be found in current systems.
“Even though in storage we have been doubling in capacity almost every year, the basic architecture of storage systems is in many cases 20 years old.” That approach won’t be sustainable in the exabyte era, Yoshida suggests: “It requires a new approach and architecture. We have to have a fundamental change in the way we do architectures and implement technologies. It’s not just a matter of getting bigger disks; we have to change the way they work together.”