Healthcare IT systems are not plug and play, and they never will be. As a result, healthcare imaging and IT professionals need to master the use of tools such as test systems, simulators, validators, sniffing software, and test datasets to ensure data integrity.
For a variety of reasons, many healthcare imaging and IT professionals find themselves troubleshooting, testing, and validating connectivity and interoperability.
There can be multiple objectives for this testing:
- A software engineer testing newly developed software
- An integration engineer testing connectivity between different devices
- An application engineer testing for interoperability
- Service and support staff trying to determine why something does not work or has stopped working
- A system administrator who needs to cut through finger-pointing between vendors and locate the source of the problem
Another important test activity is acceptance testing by a user, often represented by a consultant, to determine whether the system works as specified and meets initial requirements.
Test tool categories
Common test tool categories include test systems, simulators, validators, sniffing tools, and test datasets, as mentioned. Many of these are available for free or as open source; some require a modest licensing fee. Test data are generated by either standards organizations or trade associations. The characteristics of these tools are presented below, followed by a list of where to download them and where to find tutorials on how to use them.
Test systems
Test systems are either a copy of the system to be diagnosed or a system with equivalent or very similar behavior. For example, if you have a PACS, you might have a "test server," which is another license of the same database used for the production PACS. The test system could run on a high-powered, standalone miniserver with enough capacity to store images for a week or so.
Many users purchase or negotiate a test system as part of a new system acquisition; a recent OTech survey showed that about 40% of PACS users have one. Another option, if you don't have a test system, is to use a free or open-source PACS application such as Conquest or dcm4che.
In addition to the PACS "backbone," users should always have at least one and preferably two additional test viewers from different vendors for displaying images. There are a number of freely available viewers, including K-PACS, ClearCanvas, OsiriX, and several others.
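As a first sanity check of such a test setup, a DICOM verification (C-ECHO) confirms that the test PACS is reachable before any images are sent. Below is a minimal sketch using the open-source pynetdicom library; the host, port, and AE titles are assumptions that need to match your own test PACS configuration.

```python
# Minimal DICOM C-ECHO ("DICOM ping") against a test PACS.
# Assumes the open-source pynetdicom library (pip install pynetdicom)
# and a test PACS (e.g., dcm4che or Conquest) listening on the
# host/port/AE title below -- adjust these to your own setup.
from pynetdicom import AE

VERIFICATION_SOP_CLASS = '1.2.840.10008.1.1'  # DICOM Verification SOP Class

ae = AE(ae_title='TEST_SCU')                  # our (calling) AE title
ae.add_requested_context(VERIFICATION_SOP_CLASS)

# Placeholder address of the test PACS
assoc = ae.associate('127.0.0.1', 11112, ae_title='TESTPACS')
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print('C-ECHO status: 0x{0:04X}'.format(status.Status))
    assoc.release()
else:
    print('Association rejected, aborted, or never connected')
```

A status of 0x0000 simply means the test PACS accepted the association and answered the verification request; it says nothing yet about storage or worklist behavior.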
For electronic medical records (EMRs), I have found it uncommon for users to have a test system available, which is somewhat surprising. In many cases, a production server is loaded with test data at installation time, but as soon as user training is complete and the system goes operational, this information is typically wiped to make room for the production data.
EMRs also differ considerably from one another in functionality and interfaces; therefore, a free or open-source EMR might not be as useful for testing as a test PACS. One could, however, use the Veterans Health Information Systems and Technology Architecture (VistA) Computerized Patient Record System (CPRS) EMR, which was developed by the U.S. Department of Veterans Affairs (VA) and is available as open source.
Mirth is the best-known freely available interface engine. It maps among several different interface protocols, but it is at its best when used for HL7 version 2 message mapping. I found it somewhat hard to use, but paid support is available for anyone who needs help configuring the mapping rules.
One can use a test system to test new modality connections to a PACS, to test new interfaces (e.g., lab or pharmacy) to an EMR, or to reproduce certain errors. In the case of a new image acquisition modality connection, one could create test orders that show up on a test worklist (the dcm4che PACS has this capability) and query the worklist from the test system.
This allows the mapping from the orders to the DICOM worklist to be tested, and any additional configuration to be tuned so that the worklist contains neither too many nor too few entries. The same applies to external interfaces, e.g., from the lab or pharmacy to an EMR. It is usually better to test connectivity before actually going live.
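As an illustration, here is a minimal sketch of a worklist query as a test modality might issue it, again using pynetdicom (2.x-style syntax); the scheduled station AE title, host, and port are placeholders that would need to match your test worklist provider (e.g., dcm4che).

```python
# Query a test modality worklist (DICOM C-FIND) the way a modality would.
# Assumes pydicom/pynetdicom; addresses and AE titles are placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE

MWL_FIND = '1.2.840.10008.5.1.4.31'  # Modality Worklist Information Model - FIND

ae = AE(ae_title='TEST_MODALITY')
ae.add_requested_context(MWL_FIND)

# Build the worklist query: ask for MR procedures scheduled for station MRI01
query = Dataset()
query.PatientName = ''                       # empty value = return this attribute
query.PatientID = ''
item = Dataset()
item.Modality = 'MR'
item.ScheduledStationAETitle = 'MRI01'
item.ScheduledProcedureStepStartDate = ''    # empty value = match anything, return it
query.ScheduledProcedureStepSequence = [item]

assoc = ae.associate('127.0.0.1', 11112, ae_title='DCM4CHEE')
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, MWL_FIND):
        # 0xFF00/0xFF01 are "pending" statuses, i.e., one matching entry each
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.PatientName, identifier.PatientID)
    assoc.release()
```

Counting the returned entries against the orders you created is a quick way to spot a worklist that shows too many or too few items.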
There are those who use these test systems as the basis for their production environment, i.e., as their primary clinical system. For example, it is not inconceivable to use VistA as an EMR, the Mirth interface engine as the HL7 router, dcm4che as a PACS and modality worklist provider, and ClearCanvas for image viewing.
However, there are potential liability issues in using software that is not approved by the U.S. Food and Drug Administration (FDA) and/or not certified for medical purposes, especially if it is used for primary diagnosis in humans. For veterinary use, on the other hand, these test PACS applications are relatively widespread in clinical practice. I would not recommend using any of them in a production environment unless you have a strong IT background or can rely on a strong IT department or consultant.
Simulators
A simulator is a hardware and/or software device that appears to the receiver to be identical or very similar to the device it is simulating. An example would be a modality simulator that issues a worklist query to a scheduler, such as the one provided by a RIS, and can send images to a PACS. If the simulator assumes the same addressing (application entity title, port number, and IP address) as the actual modality, such as an MRI scanner, and sends a copy of the same images, the receiver treats the data exactly as if the transaction had come from the actual device.
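The essence of such a modality send can be sketched in a few lines with pynetdicom: read a sample DICOM file and store it to the PACS under the modality's AE title. This is only an illustration, not a full-featured simulator, and the file name, addresses, and AE titles are assumptions.

```python
# Simulate a modality sending an image to a (test) PACS via DICOM C-STORE.
# Assumes pydicom/pynetdicom; file name, host, port, and AE titles are placeholders.
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

ae = AE(ae_title='MRI01')                       # pose as the actual modality
ae.requested_contexts = StoragePresentationContexts

ds = dcmread('test_mr_image.dcm')               # a copy of a real study or test image

assoc = ae.associate('192.168.1.10', 104, ae_title='TESTPACS')
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print('C-STORE status: 0x{0:04X}'.format(status.Status))
    assoc.release()
```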
The same can be done for a lab simulator to an EMR, exchanging orders and results, and for a computerized physician order-entry (CPOE) simulator for sending orders and arrival messages. The advantage is that these simulators provide a "controlled" environment while providing extensive logging.
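The HL7 side can be simulated just as simply: the sketch below frames a hand-built HL7 version 2 order (ORM) message in MLLP and sends it over a TCP socket. The message content, host, and port are illustrative placeholders only, not a clinically correct order.

```python
# Simulate a CPOE system sending an HL7 v2 order (ORM^O01) over MLLP.
# Message content, host, and port are illustrative placeholders.
import socket

# MLLP framing: <VT> message <FS><CR>
VT, FS, CR = b'\x0b', b'\x1c', b'\x0d'

segments = [
    r'MSH|^~\&|TEST_CPOE|TEST_FAC|EMR|HOSP|20240101120000||ORM^O01|MSG0001|P|2.3',
    r'PID|1||TEST123^^^HOSP||DOE^JOHN||19600101|M',
    r'ORC|NW|ORD0001',
    r'OBR|1|ORD0001||71020^CHEST XRAY 2 VIEWS',
]
message = '\r'.join(segments).encode('ascii')   # HL7 segments end with <CR>

with socket.create_connection(('127.0.0.1', 2575)) as s:
    s.sendall(VT + message + FS + CR)
    ack = s.recv(4096)          # the receiver should answer with an HL7 ACK
    print(ack)
```

The returned ACK (or NAK) and the receiver's log are exactly the "controlled environment with extensive logging" that makes simulators so useful.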
These simulators are typically used to test connectivity before an actual operational system is available, to simulate and resolve error conditions, and to troubleshoot connectivity issues. They can also be used for stress testing and for evaluating performance issues.
One should note, however, that a simulator does not exactly reproduce the behavior of the device it is intended to simulate. If there are timing-related issues or semirandom problems, one would try to keep the original configuration intact as much as possible and use sniffers instead to find out what is going on.
I use OT-Send, an HL7 CPOE simulator, and OT-DICE, a DICOM modality simulator. Both are available from OTech.
One could also use the various DVTk open-source simulators, but these are not trivial to use and are therefore employed almost exclusively by test and integration engineers. The DVTk simulation tools are also programmable using a proprietary scripting language, which makes them very useful for exception, performance, and error testing and simulation.
Validators
A validator is a software application that validates a protocol or data messaging format against a standard set of requirements or good practices. These are extremely useful for testing by development and integration engineers, especially for new releases and new products.
I am amazed by how many errors I find when running a simple DICOM image against a validator. I personally believe there is no excuse for these errors, as the tools are freely available in the public domain. DICOM protocol and data formats can be validated using DVTk.
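DVTk performs full protocol and object validation, but even a crude first-pass check scripted with pydicom will catch surprisingly many problems, such as missing required identifiers. The attribute list below is a small, illustrative subset, not a complete statement of DICOM requirements, and the file name is a placeholder.

```python
# Crude first-pass sanity check of a DICOM file with pydicom.
# This is NOT a substitute for a real validator such as DVTk;
# the attribute list is an illustrative subset only.
from pydicom import dcmread

REQUIRED = [
    'SOPClassUID', 'SOPInstanceUID', 'StudyInstanceUID',
    'SeriesInstanceUID', 'PatientID', 'Modality',
]

ds = dcmread('suspect_image.dcm')
for name in REQUIRED:
    value = getattr(ds, name, None)
    if value in (None, ''):
        print('Missing or empty attribute:', name)
```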
Another useful tool provided by DVTk is "file compare." If there is any suspicion about data integrity, e.g., whether a vendor adds or removes information from a header, which could cause problems, one can simply compare the original file with the "processed" one to see the differences.
In addition, the compare tool can be configured to filter certain attributes and highlight the ones the user is looking for. I have used it to determine whether a software change affected the data format by running it against the same image before and after the upgrade.
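A comparable header "diff" can also be scripted with pydicom when DVTk is not at hand; the file names below are placeholders, and the pixel data element is skipped so that only header attributes are compared.

```python
# Compare the headers of an original and a "processed" DICOM file.
# File names are placeholders; PixelData is skipped on purpose.
from pydicom import dcmread

original = dcmread('original.dcm')
processed = dcmread('processed.dcm')

for elem in original:
    if elem.keyword == 'PixelData':
        continue
    if elem.tag not in processed:
        print('Removed:', elem.tag, elem.keyword)
    elif processed[elem.tag].value != elem.value:
        print('Changed:', elem.tag, elem.keyword,
              repr(elem.value), '->', repr(processed[elem.tag].value))

for elem in processed:
    if elem.tag not in original:
        print('Added:  ', elem.tag, elem.keyword)
```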
For information exchanges between EMRs, the Clinical Document Architecture (CDA) data format is emerging as the standard. This is an area where we might expect a lot of potential issues in the near future as these EMRs are being rolled out. Data format and compliance with the required templates can be verified on the National Institute of Standards and Technology (NIST) website.
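Before submitting a document to the NIST validator, a quick way to see which templates a CDA document claims conformance with is to list its templateId elements; the sketch below uses Python's standard XML library and a placeholder file name.

```python
# List the document-level templateId identifiers a CDA document claims
# to conform to. The file name is a placeholder; full validation should
# still be done against the NIST tools.
import xml.etree.ElementTree as ET

NS = {'cda': 'urn:hl7-org:v3'}               # HL7 CDA R2 namespace

root = ET.parse('sample_cda.xml').getroot()  # <ClinicalDocument> root element
for template in root.findall('cda:templateId', NS):
    print('Document-level template:', template.get('root'))
```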
Sniffing software
Sniffing software requires access to the information being exchanged, which means it has to be placed somewhere along the connection to be monitored: for example, on the device sending or receiving the information, at a network switch, or on the link itself by intercepting it with a simple hub.
The use of sniffing software can be somewhat of an issue, however, as many institutions lock down their networks and do not allow a "listening" device to be connected, for fear that it will compromise network integrity. The de facto standard for sniffing and analyzing DICOM connections is Wireshark, which used to be called Ethereal.
In that case, one does not have to do the sniffing oneself: you can ask the network engineer to provide the so-called .cap file, which can be captured with any of the available commercial sniffer and network management applications. The analysis can then be done separately using Wireshark.
Sniffers can be deployed to detect semirandom, not easily reproduced errors; to troubleshoot when error logs are incomprehensible or inaccessible; or to prove that data are being changed before the information is sent. A combination of a sniffer and a validator is especially powerful: for example, one can load a capture file into the DVTk analyzer/validator and analyze both the protocol and the data format.
Using a sniffer is often the last resort, but it is an essential tool for those hard-to-diagnose problems. For example, I have used one to diagnose a device that randomly issued an abort, causing part of the study to fail to transfer; to determine the errors exchanged in the status codes of the DICOM responses; to find query responses that did not quite match the requests; and to resolve many other semirandom problems. One can easily configure the sniffer to capture all of the traffic from a certain source or destination, store it in a rotating buffer, and, when the problem occurs, start analyzing the information.
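One way to set up such a rotating capture is with Wireshark's command-line companion, tshark; the sketch below launches it from Python with a ring buffer of ten roughly 50 MB files, filtered to one device's IP address and the standard DICOM port. The interface name, address, port, and output path are assumptions to be adjusted for your own network.

```python
# Start a rotating ("ring buffer") capture with tshark, filtered to the
# traffic between one modality and the DICOM port. Interface name, IP
# address, port, and output path are assumptions -- adjust as needed.
import subprocess

cmd = [
    'tshark',
    '-i', 'eth0',                                 # interface to listen on
    '-f', 'host 192.168.1.20 and tcp port 104',   # capture filter (BPF syntax)
    '-w', '/tmp/dicom_capture.pcap',              # base file name for the ring buffer
    '-b', 'filesize:51200',                       # switch files after ~50 MB (value in kB)
    '-b', 'files:10',                             # keep at most 10 files, then overwrite
]
subprocess.run(cmd, check=True)                   # runs until interrupted (Ctrl+C)
```

When the problem reappears, the most recent .pcap files can be opened in Wireshark or loaded into DVTk for protocol and data-format analysis.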
Test datasets
If a problem occurs with clinical data, it's often hard to determine whether the problem is caused by corrupt or incorrectly captured data, or whether it is a result of the communication and processing of the information. Therefore, it's essential to have a "gold standard" of data.
Imagine a radiologist complaining that an image looks "flat," "too dark," "too light," or just does not have the characteristics he or she is used to seeing. In that case, it's invaluable to be able to pull up a reference image.
In addition to sample images, sample presentation states, structured reports, and CDA documents are available. Most of the test-data objects created by the Integrating the Healthcare Enterprise (IHE) initiative are used to test conformance with its profiles. For example, extensive datasets are available to test the proper display of all of the different position indicators (and there are quite a few) on digital mammography images, together with the correct mapping of computer-aided detection (CAD) marks.
The same applies to testing the imaging pipeline, for which there are more than 100 different test images encoded using almost every possible combination and permutation of pixel sizes, photometric interpretations, and presentation states. The nice thing is that the data are encoded such that the displayed result should always be identical.
For example, one image might have a header specifying that the pixel data are to be inverted for display, with the stored data inverted accordingly, so that the net effect is an image that looks the same as the noninverted test sample.
It is easy to load all of these images onto a workstation, where you will see almost immediately for which images the pipeline is broken. This is a great test to run when performing an acceptance test or after a new software upgrade is installed on your workstation. You would be surprised how many systems do not render all of these images correctly.
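When a particular test image renders incorrectly, it helps to dump the handful of header attributes that drive the display pipeline and compare them against what the viewer actually did; a minimal pydicom sketch follows, with a placeholder directory name.

```python
# Dump the header attributes that drive the display pipeline for a set of
# test images. The directory name is a placeholder.
from pathlib import Path
from pydicom import dcmread

for path in sorted(Path('display_test_images').glob('*.dcm')):
    ds = dcmread(path)
    print(path.name,
          ds.get('PhotometricInterpretation'),   # e.g., MONOCHROME1 vs MONOCHROME2
          ds.get('BitsStored'),
          ds.get('PixelRepresentation'),         # signed vs unsigned pixel data
          ds.get('RescaleSlope'), ds.get('RescaleIntercept'),
          ds.get('WindowCenter'), ds.get('WindowWidth'),
          'has VOI LUT' if 'VOILUTSequence' in ds else '')
```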
For verifying display and print consistency, the American Association of Physicists in Medicine (AAPM) has created a set of recommendations, as well as both clinical and synthetic test images. These are invaluable for determining whether your display or printer supports the DICOM Grayscale Standard Display Function, also referred to as "the DICOM curve," and, if so, whether it is properly calibrated to that standard.
A simple visual check of whether certain parts of the test pattern are visible will indicate compliance or the potential need for recalibration. Even if nonmedical-grade displays are being used, there is no reason not to calibrate a monitor or printer to this standard (aftermarket devices and software are available to do this) and to make sure it stays in calibration.
In conclusion, I am convinced that any connectivity issue can be visualized, located, and resolved with the right set of test, simulation, and validation tools and a wide variety of test data. It is just a matter of learning how to use these tools and applying them in the appropriate circumstances. They are also invaluable for acceptance testing and for preventing potential issues.
Tutorials are available on the OTech YouTube channel for how to install and use most of these tools.
Herman Oosterwijk is president of OTech, a healthcare imaging and IT company specializing in EMR, PACS, DICOM, and HL7 training.
The comments and observations expressed herein do not necessarily reflect the opinions of AuntMinnie.com, nor should they be construed as an endorsement or admonishment of any particular vendor, analyst, industry consultant, or consulting group.