Data backup is the buzzword of the year, for sure. But are you unknowingly saving corrupted data to your storage systems?
While data corruption in storage networks is rare, it can be expensive if the problem goes undetected for some time. The recovery process can be slow and costly, depending on when the database was last backed up.
Hitachi and Oracle say they have a plan to prevent this from happening. The two companies said Tuesday that they have developed a new data protection technology designed specifically for Oracle9i Database that prevents corrupted data from being written into any of Oracle’s internal file types: Oracle database files, REDO log files, and control files.
Named “DB Validator,” the technology is applied at the microchip and microcode level. It grew out of Hitachi’s decision to support the Oracle Hardware Assisted Resilient Data (HARD) Initiative, a global effort to build resilient data management and storage solutions that Oracle launched in November 2001.
Engineering development groups from Hitachi and Oracle worked to integrate Oracle’s data integrity check algorithms into Hitachi storage systems. The companies said the technology will ship in Hitachi’s new high-end storage subsystem, the Lightning 9900V series (sold in Japan as the SANRISE 9900V series), and will be available later this month.
“We are very happy with the results of Hitachi’s successful implementation of DB Validator as part of Oracle’s HARD Initiative,” said Oracle vice president of Platform Alliances Doug Kennedy. “The data validation functionality Hitachi developed with Oracle will further ensure the integrity of the data and provide more secure systems to our joint customers. We plan to continue to work closely with Hitachi to deliver service-ready solutions.”
Last year, Redwood Shores, Calif.-based Oracle worked on similar software with storage leader EMC Corp. as part of its HARD Initiative.
Kennedy said data corruption that occurs outside the database is very difficult to detect and can be very expensive and time-consuming to fix after the fact. Corruption can occur in any of the layers data passes through on its way to storage, such as the operating system, channel adapter, or network. In these cases, because the corrupted data is written to storage without error, the companies say the database cannot detect it until it tries to read the data back, at which point a read error occurs and the system stops.
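To make that failure mode concrete, here is a minimal sketch in Python. The block contents, the CRC32 checksum standing in for the database’s integrity checks, and the variable names are all assumptions for illustration, not Oracle’s actual algorithms.

```python
import zlib

def checksum(data: bytes) -> int:
    """Stand-in integrity check; the database's real algorithms are not described here."""
    return zlib.crc32(data)

# The database prepares a block and records its expected checksum.
block = b"order #1138: 42 units"
expected = checksum(block)

# Somewhere between the database and the disk (operating system, channel
# adapter, or network), a bit flips. The write itself still completes cleanly,
# so the storage system has no reason to complain.
corrupted = bytearray(block)
corrupted[5] ^= 0x01
stored_on_disk = bytes(corrupted)   # written "without error"

# The problem only surfaces later, when the database reads the block back.
if checksum(stored_on_disk) != expected:
    print("read error: checksum mismatch discovered only at read time")
```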
DB Validator uses a data integrity checking circuit mounted on the interface adapter of the Lightning 9900V and SANRISE 9900V. Microcode running on the 9900V controller then checks the data against those integrity rules before it is written, helping prevent potentially disastrous data corruption and minimizing risk and potential costs in backup, restore, and recovery operations.
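For illustration only, the sketch below models the general idea of validating a block at the storage boundary before it is written, which is the behavior the article attributes to DB Validator. The block size, the appended CRC32 checksum, and the function names are assumptions; they do not describe Hitachi’s microcode or Oracle’s HARD interfaces.

```python
import zlib

BLOCK_SIZE = 8192        # assumed block size for illustration
CHECKSUM_LEN = 4         # assumed: 4-byte CRC32 appended to each block

class CorruptBlockError(Exception):
    """Raised when a block fails validation at the storage interface."""

def make_block(payload: bytes) -> bytes:
    """Pack a payload into a fixed-size block and append its checksum."""
    body = payload.ljust(BLOCK_SIZE - CHECKSUM_LEN, b"\x00")
    return body + zlib.crc32(body).to_bytes(CHECKSUM_LEN, "big")

def validate_before_write(block: bytes) -> None:
    """Re-verify the checksum where the write lands, so corruption picked up in
    intermediate layers is rejected instead of being silently persisted."""
    body, stored = block[:-CHECKSUM_LEN], block[-CHECKSUM_LEN:]
    if zlib.crc32(body).to_bytes(CHECKSUM_LEN, "big") != stored:
        raise CorruptBlockError("checksum mismatch: block rejected before write")

if __name__ == "__main__":
    good = make_block(b"customer row data")
    validate_before_write(good)            # intact block passes

    bad = bytearray(good)
    bad[100] ^= 0xFF                       # simulate a bit flip in transit
    try:
        validate_before_write(bytes(bad))
    except CorruptBlockError as exc:
        print("write rejected:", exc)
```

Rejecting a bad block on the write path keeps the corrupted version off disk entirely, which is what makes the later backup, restore, and recovery operations the article mentions less risky and less costly.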
“The advanced technologies of both companies combined seamlessly to realize one of the most robust database platforms for customers. As a result, Hitachi wants to expand collaborative solutions with Oracle,” said Mikito Ogata, general manager of Hitachi’s Disk Array Systems Division.
The DB Validator was tested at the Hitachi-Oracle SAN Solution Technology Center (SSTC), established at Oracle Japan in May 2000 to perform collaborative verification testing of actual customer operations and showcase integration and validation of advanced Hitachi storage capabilities with the Oracle database. The companies said the solutions and technical data verified at SSTC would be translated and provided worldwide.