Disaster recovery plans crucial for PACS

With the final security provisions of the Health Insurance Portability and Accountability Act (HIPAA) expected to be enacted by November, hospitals need to evaluate their disaster recovery and data backup plans for their PACS networks.

"Disaster recovery planning and testing are essential to PACS implementations of any scale," according to Dr. David Avrin of the Laboratory for Radiological Informatics at the University of California, San Francisco.

PACS recovery from an off-site location involves six phases, according to Avrin (a scripted outline of the sequence follows the list):

  • Availability of a computer platform.
  • Reloading of system, database, and application software, if necessary.
  • Re-establishment of network connectivity to the remote site.
  • Recovery of the database.
  • Recovery of the image data set.
  • Re-establishment of on-site local area network connectivity.
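
These phases lend themselves to a scripted, repeatable drill. The Python sketch below is only an illustration of that idea: the phase names follow Avrin's outline, but the placeholder steps are hypothetical and do not reflect UCSF's actual tooling.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Phase:
    name: str
    run: Callable[[], bool]  # returns True when the phase completes


def placeholder(step: str) -> Callable[[], bool]:
    # Stand-in for site-specific procedures (restore scripts, link checks, etc.)
    def _run() -> bool:
        print(f"[recovery drill] {step} ... done")
        return True
    return _run


RECOVERY_PLAN: List[Phase] = [
    Phase("Provision a replacement computer platform", placeholder("platform")),
    Phase("Reload system, database, and application software", placeholder("software reload")),
    Phase("Re-establish network connectivity to the remote site", placeholder("remote link")),
    Phase("Recover the database", placeholder("database restore")),
    Phase("Recover the image data set", placeholder("image restore")),
    Phase("Re-establish on-site LAN connectivity", placeholder("local LAN")),
]


def run_drill(plan: List[Phase]) -> None:
    # Phases are ordered; a failure stops the drill so it can be investigated.
    for phase in plan:
        if not phase.run():
            raise RuntimeError(f"Drill halted at phase: {phase.name}")


if __name__ == "__main__":
    run_drill(RECOVERY_PLAN)
```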

UCSF recently completed disaster recovery testing of its PACS network, which uses the university’s internally developed hierarchical storage management (HSM) system. In addition to online, short-term archiving on redundant arrays of inexpensive disks (RAID), the HSM scheme includes an off-site storage component.

For the off-site archive, image data undergoes lossless wavelet compression at a ratio of 2.5:1 and is then transmitted to a StorageTek tape archive via a virtual private network (VPN). For reference images older than a few months, a truncated, lossy version is also kept on-site on a magneto-optical disk (MOD) jukebox, allowing UCSF to maximize its on-site storage capacity, Avrin said. The wavelet encoding is provided by LizardTech’s MrSid software.
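
As a rough illustration of the tiering, the sketch below maps a study's age to the storage tiers described above. The 90-day window is an assumption, since the article says only that on-site copies become lossy for images "older than a few months"; nothing here models the actual codec, VPN, or tape hardware.

```python
from datetime import date, timedelta

# Storage tiers in the HSM scheme described above (labels are illustrative).
RAID_ONLINE = "on-site RAID (short-term, full fidelity)"
MOD_JUKEBOX = "on-site MOD jukebox (truncated lossy copy for reference)"
OFFSITE_TAPE = "off-site tape archive (lossless wavelet-compressed copy)"


def tiers_for_study(study_date: date, today: date, raid_window_days: int = 90) -> list:
    """Return where a study's images would be found under this tiering.
    The 90-day RAID window is an assumed value, not UCSF's actual policy."""
    tiers = [OFFSITE_TAPE]                     # every study gets a lossless off-site copy
    if today - study_date <= timedelta(days=raid_window_days):
        tiers.append(RAID_ONLINE)              # recent studies stay on short-term RAID
    else:
        tiers.append(MOD_JUKEBOX)              # older studies keep only a lossy on-site copy
    return tiers


if __name__ == "__main__":
    print(tiers_for_study(date(2000, 3, 1), date(2000, 9, 19)))
```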

Avrin presented the institution’s findings in June at the Symposium for Computer Applications in Radiology in Philadelphia.

To test UCSF's disaster-recovery capabilities, the researchers prepared a test database of approximately 62,000 patients and 200,000 studies, linked to approximately 4 million images stored in a separate image archive (Journal of Digital Imaging, May 2000, Vol. 13, No. 2, Suppl. 1, pp. 168-170).

Over a fast Ethernet connection to its off-site data repository at the University of California, Davis, recovery of all database information except image data was completed in two hours and 30 minutes, Avrin said.

UCSF also tested recovery of about one day’s worth of image data: 260 studies with an average of 35 images per study, amounting to 2.18 gigabytes with lossless compression. Recovery took approximately two hours and 27 minutes, at a transfer speed of 240 kilobytes per second.
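
Those figures are internally consistent, as a back-of-the-envelope check shows (decimal units are assumed here, i.e. 1 GB = 10^9 bytes and 1 KB = 10^3 bytes; the article does not state which convention it uses).

```python
# Sanity check of the quoted recovery figures.
data_bytes = 2.18e9          # ~2.18 GB of losslessly compressed image data
rate_bytes_per_s = 240e3     # ~240 KB/s observed transfer speed

seconds = data_bytes / rate_bytes_per_s
print(f"{seconds / 3600:.2f} hours")   # ~2.52 hours, i.e. roughly 2 h 31 min,
                                       # in line with the reported 2 h 27 min
```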

The recovery process wasn’t completely uneventful, however. Some image data still queued for transmission to the off-site archive could not be retrieved, nor could the database entries made since the last incremental backup, Avrin said. UCSF performs a full backup of the database once a week, with incremental backups each night. Between those incremental backups, UCSF maintains a disk that mirrors every transaction.
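
That cadence sets the worst-case exposure for an off-site-only recovery: if the on-site mirror disk is lost along with everything else, the recoverable state goes back only to the last incremental backup that reached the off-site archive. The sketch below illustrates that relationship; the clock times are assumptions, not UCSF's actual schedule.

```python
from datetime import datetime, timedelta


def worst_case_offsite_loss(disaster_time: datetime, last_offsite_incremental: datetime) -> timedelta:
    """Database activity exposed to loss when only the off-site archive survives:
    everything since the last incremental backup that was shipped off-site, plus
    any images still queued for off-site transmission."""
    return disaster_time - last_offsite_incremental


if __name__ == "__main__":
    loss = worst_case_offsite_loss(datetime(2000, 9, 19, 16, 0),   # assumed disaster time
                                   datetime(2000, 9, 19, 2, 0))    # assumed nightly incremental
    print(f"Worst-case exposure: {loss}")   # 14 hours of activity in this example
```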

"That works fine for a technical disaster, but not for a fire," Avrin said. "You might think about logging those events and doing more frequent incremental backups to your off-site (archive)."

Speedier recoveries can also be achieved by applying reasonable lossy compression ratios to the image data, such as 10:1 for CT, 5:1 for MR, and 3:1 for computed radiography, Avrin said.

"That (two hours and 27 minutes) could be shortened to 40 or 50 minutes, if you’re willing to accept some lossy compression," he said. "We’ve demonstrated that it’s possible to use the off-site lossless, wavelet-encoded component of HSM to recover lossy encoded image data of diagnostic quality at a rate of approximately 1.5 days of image data per hour."

By Erik L. Ridley
AuntMinnie.com staff writer
September 19, 2000

Copyright © 2000 AuntMinnie.com
