
A thesis submitted in partial satisfaction of the requirements for the degree Master of Science in Computer Science at University of California, Los Angeles
by Cho-Nan Tsai


2008

DEDICATION

For Father, Mother and Sister, who have given their unconditional love and support throughout my life.


For those who believe technology leads to good health and prosperity.


ABSTRACT OF THE THESIS

Although modern medicine can treat diseases that were deemed incurable centuries ago, medication errors remain a problem that has never gone away. Research studies have shown that errors occur most frequently during medication prescription. Since the dawn of the information age, graphical user interface (GUI) computerized medication order systems have appeared in hospitals to facilitate medication prescription. However, fewer than 5% of the hospitals in the U.S. have adopted such a system [1]. The main reason for the low adoption rate is the lack of consideration for existing clinical workflow and user environment. As a result, health care information technologies have yet to produce a “killer app” that can tackle this problem effectively.

Instead of another GUI system, I propose a voice user interface (VUI) system to reduce medication errors that might occur during medication prescription. Voice is by far the most natural means for communication. A voice system, if carefully designed with automated validation and user confirmation, can increase productivity during medication order captures.

The goal is to create a physician speech model that can recognize physician lingo, track conversation context, provide relevant information from the hospital IT system and capture medication orders. The system employs grammars in a speech recognition engine to recognize phrases and generate proper responses. Data mining on historical medication order data can create rules to validate medication order components. With much potential, this system is an alternative to GUI systems for effectively capturing medication orders, which ultimately helps reduce medication errors.

INTRODUCTION


Medication errors have caused serious issues. A comprehensive study released in 2006 reported that medication errors harm at least 1.5 million people every year [2]. As described in a published article from the U.S. Food and Drug Administration (FDA), people have suffered further complications and died as a result of these errors [3]. The extra medical cost of treating these drug-related errors exceeds 3.5 billion dollars annually [2]. Although preventable, these errors persist regardless of advances in modern medicine. In order to deal with these issues, it is important to understand how and why medication errors occur.

Background


Before we can discuss causes for and reasons behind medication errors, it is vital to understand how medication orders are being given in a hospital setting today. There are two common approaches with which a physician can order medications: verbally communicating orders to nurses or scribbling medication orders on patient charts. During patient rounds, physicians travel in a team of several residents, doctors and nurses, visiting patients from bed to bed. Typically, one physician starts off by presenting a patient, citing vital signs, analyzing the patient with the team. It is also at this time that medication orders can be given verbally. On the other hand, physicians also write medication orders on patient charts after patient visits. During emergencies, verbal medication orders are also given. Regardless of how orders are given, they are always handled by authorized clerks or registered nurses at a later time. Sometimes, written orders are not entered into a computerized system at patient wards. Instead, orders on patient charts are faxed directly to the pharmacy, where they are then processed by a pharmacy computerized system before ordered medications are dispensed for administration. If medication errors are found by nurses, clerks or pharmacists, a follow-up call is made to the physicians who prescribed the medications.

Having described medication order workflow in a hospital setting, we now have the context to understand the nature behind these medical errors in more detail.

Medication errors can happen in a number of ways. Miscommunication can occur when a drug order is being transcribed. Medication orders scribbled by physicians onto patient charts can be illegible, leading to the wrong medications being dispensed. A misplaced patient chart can mean no medication is ordered at all. The fact that some orders are given verbally and transcribed later also leaves a lot of room for error. Mislabeled medications or medications with erroneous administration directions can cause fatalities, and prescribing drugs based on inaccurate data from patient-monitoring devices can lead to errors as well. A study of voluntarily reported medication error types reveals that errors creep in mostly during the drug prescription phase, which includes prescribing errors, improper doses and omissions in the medication order (see Figure 1).

Figure 1. Breakdown of medication error types


On the other hand, external factors can also contribute to medication errors. A clinical researcher attributed errors to the multicultural work environment and the emergency nature of these orders [4]. In the United States, many hospitals have residents, physicians and nurses from different ethnic and cultural backgrounds; accents and varying proficiency in English are main reasons behind miscommunication. As a result, understanding verbal orders can be challenging at times, not to mention other factors such as noise, work overload and frequent interruptions. Furthermore, physicians need to order drugs verbally during emergencies to deal with extreme situations. Under pressure, nurses, who are responsible for transcribing and sending orders to the pharmacy, may not perform as well due to nervousness and other psychological factors. These issues are common and can affect the services delivered by health care providers.

Knowing the drivers behind medication errors, we are in a strong position to propose a system that can reduce them.

Objective

This work investigates a VUI approach to medication order systems, which allows medication orders to be captured efficiently, thereby reducing medication errors originating in doctors’ prescriptions. Efforts will also be made to validate medication orders for missing components or unusual dose ranges. Specific objectives are:

  • to create a virtual agent with whom users can have natural dialogs using any regular telephony device
  • to create a system that can mount onto existing IT infrastructure in hospitals
  • to use hospital databases as a primary repository of knowledge for the virtual agent
  • to create a system that can capture medication orders through voice and deliver the orders electronically to a pharmacy
  • to create a system that has a minimal impact on physicians’ existing workflow
  • to perform validation on medication order captured
  • to provide this service to any user without prior voice training
  • to build a medical knowledge bank which provides definition of medical terms
  • to produce a session summary (useful for billing purposes)
  • to evaluate the effectiveness of the system

Scope

This work addresses medication errors that typically occur during medication prescription and order communication. Medication errors caused by mistakes in drug labeling, drug administration, drug dispensing and patient monitoring are not addressed here. Since the majority of errors can be attributed to prescription errors, it makes sense to focus on the area where error reduction will have the greatest impact.

Benefit analysis of a voice application

An intelligent verbal medication ordering system is effective at capturing medication orders in inpatient scenarios. Physicians in major hospitals do not have sedentary jobs, so the idea of simply using a phone interface to prescribe medication for patients is very appealing. The most opportune moment for using this voice application is during patient rounds. A dedicated physician from the rounding team can speak medication orders into a phone while the rest diagnose patients. With a noise-canceling headset or earpiece, the physician can tell the system which patient a medication order is being prescribed for, and the details of the order, without interference from the environment. This has minimal impact on the existing clinical workflow, and it captures verbal orders at the source, which reduces miscommunication and negligence. Furthermore, voice recognition of drug orders eliminates errors caused by illegible scribbles. Another great advantage of VUI is that, since interactions are modeled after physician speech, users can verbalize an entire order in one utterance, which is more efficient than filling out forms in a GUI. The system also removes intermediary staff such as nurses or clerks from the transcription path, reducing room for errors and freeing them to spend more time with patients. Most importantly, an interface modeled closely after physician-nurse interactions (using common medical jargon during medication orders) in a clinical environment would appeal to physicians.

In addition, a verbal medication order system can be adopted easily in most modern hospitals. The system can conveniently connect to existing hospital databases in order to receive and send data electronically through Health Level 7 (HL7), a common protocol used by medical equipment and medical applications to exchange data. Since speaking comes naturally to almost everyone, learning to use a VUI system requires little training time. Unlike a GUI system, which requires physical computer clients, a VUI system can be maintained centrally in the backend. Therefore, no resources are needed for installing, updating and repairing client software on computer terminals, since a phone (mobile, landline or VOIP-based) is all that a physician needs to connect to the proposed system. In general, adopting the proposed system puts minimal strain on hospital resources.

A voice application is not a panacea for prescribing problems caused by medication orders. People are known to be irritated by long computer-generated speech, usually from automated customer-support call centers. Since the fidelity of voice capture is never 100% (not even between humans), it is mandatory to confirm and validate captured phrases from users from time to time. In medication ordering especially, the accuracy of verbal orders must be taken seriously. With the proposed system, it is probably not optimal to prompt for user confirmation by verbally reciting a long list of drug prescription details. An alternative approach to this is addressed in the conclusion section.

Computerized Physician Order Entry (CPOE) is a GUI-based computerized system that addresses medication errors. A CPOE system allows physicians to key in medication orders, which are then sent electronically to nurses or pharmacies over a computer network. Features of CPOE include standardized medication orders to select from, an order entry workflow that closely mimics the paper-based system, and patient safety features [5].

CPOE systems have great advantages over paper-based systems. [6] reports that CPOE systems can decrease transcription, increase accuracy and completeness, and offer the ability to enter orders from multiple locations using computer terminals. Data captured by typical GUI-based CPOE systems have a high fidelity rate: a user can instantly verify data entry on a monitor, and data captured by a standard input device such as a keyboard or mouse is as good as the input given by the user.

However, there are barriers to adopting CPOE systems. Physicians have a hard time adapting to the new clinical workflow induced by such systems, especially given the extra time they need to issue a medication order [6]. IT systems in most hospitals today are built on top of legacy platforms, which often use different technology standards [7] [8]. For a CPOE system to work, these disparate systems must be integrated first. At the moment, there are few CPOE vendors to choose from, and decent CPOE systems are usually homegrown [8]. However, hospitals may not have the human and financial resources to build such systems. Finally, these systems cost millions of dollars to build [9]. Therefore, CPOE systems continue to pose a real adoption challenge for hospitals.

Furthermore, CPOE systems have not delivered in the health care industry as promised. [10] reports that these systems have problems with “fragmented CPOE displays that prevent a coherent view of patients’ medications, pharmacy inventory displays mistaken for dosage guidelines, ignored antibiotic renewal notices placed on paper charts rather than in the CPOE system, separation of functions that facilitate double dosing and incompatible orders, and inflexible ordering formats generating wrong orders.” A CPOE system implementation costing 34 million dollars was shut down three months into deployment at Cedars-Sinai Medical Center [11]. So far, fewer than 5% of hospitals in the U.S. are using some kind of CPOE system [1].

Among VUI-based applications in health care, a common technology called interactive voice response (IVR) has been popular. Examples of IVR applications in health care include appointment scheduling, appointment reminders, automated “attendants”, personalized medical surveys, and drug prescription refills [12] [13] [14]. As in automated call centers, the exchange of information is almost entirely one-way: users simply receive information from the system. One interesting research project deployed in a rural community in Pakistan offered health care information through computer-generated speech to low-literate users [15]. That application was built on Microsoft Speech Server, which is similar to the platform used by the proposed system. The main difference is that the proposed system offers a balance between information retrieval and data capture, and it targets educated physicians who speak English rather than low-literate community volunteers who speak Urdu.

There is one research undertaking that focuses on data capture by voice: a case study of speech interaction design for documenting anesthetic procedures [16]. During an anesthetic procedure, orders for anesthetic drugs are given and administered. Architecturally, that system uses IBM ViaVoice for speech recognition, SWI-Prolog for logic processing, and separate software modules for text-to-speech synthesis and visual confirmation. Because the set-up requires user voice training, it achieved an impressive recognition rate between 72% and 92.4%. However, that system handled only one- to three-word phrases in its experiments, whereas the proposed system can handle medication orders ranging from single-word phrases to multi-component orders involving administration route, start date, end date, administration frequency, order reasons, administration times, etc., all in one utterance. The proposed system also supports a vocabulary of about 1600 commonly prescribed hospital drugs, which renders it suitable for practitioners of general medicine.

Technical plan

A telephony infrastructure that can handle inbound and outbound calls is established. This infrastructure includes a server that hosts a spoken dialog system with the ability to perform speech recognition and text-to-speech synthesis.

A database is required to store the data needed for this application. A relational database stores medication orders, physician data and patient data. Through the HL7 protocol, data is pulled from medical applications that are part of the existing hospital IT system. The HL7 standard is an international protocol for communication among computer applications in the medical space. The main application of the proposed system has read/write permission to this database.
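As a sketch of the kind of data pull involved, the fragment below extracts patient data from the PID segment of an HL7 v2 message in Python. The segment name and field positions follow the HL7 v2 standard, while the sample message content and function name are illustrative, not part of the actual system.

```python
# Sketch: extract patient data from the PID segment of an HL7 v2 message.
# PID-3 (identifier list) and PID-5 (patient name) follow the HL7 v2 standard;
# the sample message itself is hypothetical.

def parse_hl7_pid(message: str) -> dict:
    """Return patient ID and name from the PID segment."""
    for segment in message.strip().split("\r"):  # HL7 v2 separates segments with CR
        fields = segment.split("|")
        if fields[0] == "PID":
            patient_id = fields[3].split("^")[0]    # PID-3: patient identifier list
            last, first = fields[5].split("^")[:2]  # PID-5: patient name
            return {"id": patient_id, "last": last, "first": first}
    raise ValueError("no PID segment found")

sample = ("MSH|^~\\&|ADT|HOSP|ICIPS|HOSP|200811200830||ADT^A01|123|P|2.3\r"
          "PID|1||447788^^^HOSP||Robbinson^Mary||19501104|F")
print(parse_hl7_pid(sample))  # {'id': '447788', 'last': 'Robbinson', 'first': 'Mary'}
```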

To capture medication orders, dynamic and static grammars which describe the structure of various phrases that a user might verbalize are loaded onto the recognition engine. Natural language generation and named entity recognition are used to provide annotation of medication order components once they are recognized. A designated C# class processes the annotated text, directing the captured data to other software layers as needed.
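A minimal sketch of this annotation step is shown below in Python (the actual system uses C# with the speech engine’s named entity recognition). The category names mirror those used later in the grammar builder (Figure 6); the keyword lists are a small illustrative subset.

```python
# Sketch: classify phrases of a recognized utterance into medication-order
# component categories by keyword matching. Category names mirror the
# grammar builder; the keyword lists here are illustrative only.

CATEGORIES = {
    "DrugName": {"tylenol", "potassium chloride"},
    "DrugAdministrationRoute": {"per oral", "intravenous"},
    "Frequency": {"whenever required", "daily"},
}

def annotate(utterance: str) -> dict:
    """Return a mapping of component category -> matched phrase."""
    text = utterance.lower()
    found = {}
    for category, phrases in CATEGORIES.items():
        for phrase in phrases:
            if phrase in text:
                found[category] = phrase
    return found

print(annotate("tylenol six hundred fifty milligram per oral whenever required"))
```

The annotated dictionary plays the role of the annotated text handed to the processing class: each recognized component is labeled with its category so later layers can route it appropriately.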

To validate captured orders, market basket analysis (a data mining approach) is used to create association rules among medication order components. These rules are then used to check orders for component omissions. In addition, a semantic approach to order capturing is also explored.

SYSTEM DESCRIPTION

System architecture

The system architecture of the proposed system is depicted in Figure 2. A Dell PowerEdge server hosts the main application and the speech engine. This server has a telephony interface that permits receiving inbound calls and making outbound calls. The main application is written in C#. The server connects to a database managed by Microsoft SQL Server. The speech engine is Microsoft Office Communications Server 2007 Speech Server; its most relevant components here are speech recognition and text-to-speech synthesis. Through the local intranet, the database server receives periodic data updates from medical devices and applications in the hospital IT system. The system is also connected to the Internet, so information or data from the system can be pushed to various devices or applications such as pagers, email, soft phones, desktop applications, etc. The proposed system is called Integrated Clinical Information Phone Service, or ICIPS.

Figure 2. System architecture of Integrated Clinical Information Phone Service (ICIPS)

Software architecture

In order to ensure reusability, flexibility and maintainability of the proposed system, a model-view-controller (MVC) architectural pattern was used as the blueprint for the overall design of the main application. This software engineering pattern separates business logic from data and from the presentation of data. In the context of a voice application, the model layer is an abstraction layer that manages data in the database; through it, the pattern also facilitates tapping into aggregated data from disparate application databases in a hospital IT system. Data retrieved through the model layer can be used to generate grammars for speech recognition or voice responses for users. The view, or presentation, layer formats how those data are spoken. For instance, the number “120408” can be spoken as a date object or as a simple number object; speech pauses or emotions may also be applied during formatting. The controller layer manages the logic and control flow of the application according to user input and speech context. Each layer consists of object-oriented C# classes that perform functions within that layer and cross over to other layers only through limited interfaces when necessary. This pattern provides an efficient methodology for building and maintaining the user model needed to track speech context and conversation history, and it helps build robust code that effectively manages control flow from layer to layer (Figure 3).

Figure 3. Software design follows Model-View-Controller Pattern
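As an illustrative sketch of this layering (in Python rather than the system’s C#; the lab data and method names are hypothetical), the three layers and the “120408” date-formatting example might look like:

```python
# Hypothetical MVC sketch for a voice application: model abstracts data,
# view formats speakable text, controller routes requests between them.

class Model:
    """Model layer: abstracts data access (an in-memory stand-in for the database)."""
    def __init__(self):
        self._abnormal_labs = {"447788": "potassium two point nine"}

    def abnormal_labs(self, patient_id: str) -> str:
        return self._abnormal_labs.get(patient_id, "none on file")

class View:
    """View layer: formats raw data into speakable text."""
    MONTHS = ["January", "February", "March", "April", "May", "June", "July",
              "August", "September", "October", "November", "December"]

    @classmethod
    def speak_date(cls, mmddyy: str) -> str:
        # "120408" spoken as a date object rather than a plain number
        m, d, y = int(mmddyy[:2]), int(mmddyy[2:4]), int(mmddyy[4:6])
        return f"{cls.MONTHS[m - 1]} {d}, 20{y:02d}"

class Controller:
    """Controller layer: applies logic and control flow to a user request."""
    def __init__(self, model: Model, view: View):
        self.model, self.view = model, view

    def handle_lab_request(self, patient_id: str) -> str:
        return f"Abnormal labs: {self.model.abnormal_labs(patient_id)}"

print(View.speak_date("120408"))  # December 4, 2008
```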

METHOD

User model

One of the most important supporting functions of the main application is its ability to maintain a user model for each user. The user model is part of the model layer described in the software architecture of the system. Its purpose is to dynamically constrain the data relevant to the user, keep track of speech context and record conversation history. The user model is designed to match the clinical workflow of physician users. Since clinical workflow differs in every hospital, the user model can easily be tailored to meet each hospital’s specific needs: a programmer creates a new user model by inheriting from the user model base class. For instance, if the proposed system needed to be extended to support the clinical work of registered nurses, a new user model class for nurses could be created by inheritance. For this research, the physician class is inherited from the user model base class.
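A sketch of this inheritance scheme is shown below, in Python rather than the system’s C#; the class and attribute names are hypothetical but follow the modules described in the text.

```python
# Hypothetical sketch of the user-model base class and its physician subclass.
# A nurse subclass illustrates how the model could be extended by inheritance.

class UserModel:
    """Base user model: identity, current patient, topic and conversation history."""
    def __init__(self, name: str, user_id: str):
        self.name = name
        self.user_id = user_id
        self.current_patient = None
        self.current_topic = None
        self.conversation_history = []

class PhysicianModel(UserModel):
    """Physician user model used in this research."""
    def __init__(self, name: str, user_id: str, department: str):
        super().__init__(name, user_id)
        self.department = department     # constrains drug grammar to department drugs
        self.my_patients = []            # assigned patients from the hospital IT system

class NurseModel(UserModel):
    """A later extension for registered nurses could inherit the same base."""
    def __init__(self, name: str, user_id: str, ward: str):
        super().__init__(name, user_id)
        self.ward = ward

physician = PhysicianModel("Chen", "MD1001", "Internal Medicine")
print(physician.department)  # Internal Medicine
```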

The user model is divided into six functioning modules (Figure 4).

Figure 4. User model

The “Identifier” module contains identifying information about the user. In this application, the relevant attributes stored in the module are physician name, password, identification number (used in the hospital IT system), contact information, department, etc. These attributes are used throughout users’ interactions with the proposed system. For instance, when a user logs in, the user’s identity is verified against information contained in this module. As the user requests information on his or her patients, the identification number is used to retrieve the appropriate records from the database. When ordering medications, the proposed system may load only the drugs frequently used in the department with which the physician is affiliated. Besides adding an extra layer of security, this design also boosts the recognition accuracy rate.

The “My Patients” module contains the names of the patients assigned to each physician user (obtained from the hospital IT system). It is vital to locate a patient before medications can be ordered. The “Current Patient” module maintains the name and identification number of the patient once one is located. This module can be expanded to include patient-related information such as the contact details of the patient’s next of kin, which can be useful (e.g. a physician can ask ICIPS to place an outbound call to the patient’s next of kin).

The “Current Topic” module keeps track of speech context. As a user interacts with the proposed system, it helps to know in what context the user is requesting information, as this constrains the set of expected phrases, which in turn improves accuracy rates. For example, when a user is interested in retrieving a patient’s abnormal lab results, the proposed system dynamically loads the grammar containing abnormal lab results for the patient registered in the “Current Patient” module. The system is therefore not burdened with searching through other types of lab results or the abnormal labs of other patients.

“Current Location” module stores the current location of the patient rounding team as they move from patient to patient. Similarly, this helps in constraining the number of patient names or patient rooms that need to be recognized when choosing a patient before a medication order.

“Conversation History” module records interactions between users and the proposed system, which can be delivered as a summary via email for verification or a series of records to the database for billing purposes.

Grammar

A grammar is a set of structured rules that identify words or phrases spoken by users. It also contains the selection of responses that can be returned to users. Grammars inform the speech recognition module in the speech engine of the words or phrases users are expected to say. Grammars can be statically generated off-line and fed to the speech recognition module wherever needed; alternatively, dynamic grammars can be generated in real time as users change context during interactions.

There are two grammar syntaxes in the World Wide Web Consortium Speech Recognition Grammar Specification: an Augmented Backus-Naur Form (ABNF) and an Extensible Markup Language (XML) form. Currently, the Microsoft Speech Application SDK supports the XML-based grammar format.

This XML-based grammar demonstrates how a rule can be established (Figure 5). A grammar must contain at least one rule that defines a pattern of phrases or words. A programmer can also specify wildcards, set repetitions and embed existing rules in a rule. When the user’s voice input matches the pattern described in these rules, appropriate attributes such as the rule id, the actual recognized text or an annotation are returned to the main application for logic processing in the controller layer.

<grammar xmlns:sapi="http://schemas.microsoft.com/Speech/2002/06/
                            SRGSExtensions"
         xml:lang="en-US" tag-format="semantics-ms/1.0"
             version="1.0"
         mode="voice" xmlns="http://www.w3.org/2001/06/grammar">

   <!--Grammar rule for recognizing common drug names-->
   <rule id="CommonDrug" scope="public">
      <one-of>
         <item>potassium chloride</item>
         <item>tylenol</item>
      </one-of>
      <tag>$.DrugName = $recognized.text</tag>
   </rule>
</grammar>
Figure 5. Sample XML-based grammar


The design of grammars is crucial in building a good voice application. A loose grammar expecting many input phrases yields a low recognition rate; a rigid grammar, on the other hand, is not flexible enough to accommodate varying inputs from different users. Therefore, a balance must always be maintained when building these grammars.

With reference to the proposed system, several grammars are generated dynamically to support its features. The functionality that locates patients for physicians uses a dynamic grammar to load the patient names on each physician’s assigned list. When a physician asks for a patient by saying “let’s talk about Misses Robbinson”, the proposed system matches the “Mrs. Robbinson” currently assigned to him, instead of trying to match all “Robbinsons” in the hospital IT database.

Another good use of dynamic grammar is in the validation module of the medication order functionality. The proposed system checks for missing medication order components after the first utterance is captured. If a missing component is found, a prompt and its associated grammar are generated dynamically for that component. For example, if the administration times component is missing from the order, the proposed system will prompt the user “how many times would you like to administer” and the associated grammar will have a rule with items such as “X1” or “times one”, “X2” or “times two”, etc. All in all, dynamic grammars are a powerful means of capturing voice data efficiently and accurately.
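A sketch of how such a component grammar might be generated on the fly is given below in Python. The XML mirrors the SRGS format of Figure 5; the rule id, function name and phrase list are hypothetical.

```python
# Sketch: dynamically generate an SRGS XML grammar for the administration-times
# prompt. Rule id and phrase wording are illustrative, not the system's own.

SPOKEN = {1: "one", 2: "two", 3: "three", 4: "four"}

def build_times_grammar(max_times: int = 4) -> str:
    """Build a one-of rule accepting 'times N' or 'XN' for N up to max_times."""
    items = "\n".join(
        f"      <item>times {SPOKEN[n]}</item>\n      <item>X{n}</item>"
        for n in range(1, max_times + 1)
    )
    return (
        '<grammar xml:lang="en-US" version="1.0" mode="voice"\n'
        '         xmlns="http://www.w3.org/2001/06/grammar">\n'
        '   <rule id="AdminTimes" scope="public">\n'
        '      <one-of>\n'
        f"{items}\n"
        '      </one-of>\n'
        '   </rule>\n'
        '</grammar>'
    )

print(build_times_grammar(2))
```

The generated string can then be handed to the recognition engine in place of a statically authored file, so the accepted phrases track the component being prompted for.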

Microsoft Visual Studio provides a powerful grammar builder called Conversational Grammar Builder, which allows creating grammars without explicitly writing XML. This has been a crucial utility in creating functionalities in the proposed system, including the medication order capture functionality. In the builder, training sentences can be imported and keyword phrases can be added through the GUI. As the main application compiles, these training sentences are parsed and a binary file containing the parsed grammars with annotations is produced. This can be attached to a prompt in the main application.

Conversational Grammar Builder has been particularly useful for building the medication order component capture prompt (Figure 6). On the upper left panel, categories of keywords can be created, from which groups of keywords that a user may be expected to say can be entered. The categories correspond to the components of medication orders. For instance, “DrugAdministrationRoute” is a category containing various possible routes through which a drug can be administered. Some examples of groups that fall into this category are “PO”, “IV”, “IVPB”, etc. For each of this group, the specific keyword phrases or their synonyms can be entered on the right panel. This is useful because users can say many variations of “PO” and they are still recognized as the “PO” group.

Figure 6. Conversational Grammar Builder screen shot for medication order capture


In Conversational Grammar Builder, training sentences are needed for the builder to create grammar structures for the binary grammar file. These sentences can be entered into the lower right panel. Using named entity recognition, the builder parses and classifies elements of the text into the predefined keyword categories (found in the upper left panel), shown as colored blocks of annotated text. For example, “potassium chloride” is classified under the “DrugName” category, “twenty meq” under the “DoseUnitRate” category, and so forth.

Finding a patient

Before any drug prescription, it is necessary to locate the patient to whom the drug will be administered. One method that fits neatly with clinical workflow uses the physical location of the user. During patient rounding, a physician can say “we are now entering 4 ICU” as the team enters the unit. Instantly, the proposed system loads all room numbers and patient names associated with this unit onto the wait prompt. At the same time, the user model keeps track of the user’s current location. At this point, a user may ask “who is in room sixty six thirty four” and the system will retrieve, through the model layer, the patient staying in this room. Another implemented approach loads assigned patients into a dynamic grammar as soon as the user logs in: a user can directly locate his or her own patient by saying “let’s talk about Misses Robbinson” or “let’s talk about patient Robbinson” at the wait prompt, provided the patient is on the physician’s assigned list. A final implemented approach is spelling out the patient’s name (e.g. “a” as in “alpha”, “b” as in “bravo”), which can be tedious and hence is not the preferred method.
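The location-constrained lookup above can be sketched as follows (Python; the census data, class and method names are hypothetical):

```python
# Sketch: once the rounding team announces a unit, only that unit's rooms and
# patients need to be recognized. All data here is made up for illustration.

WARD_CENSUS = {
    "4 ICU": {"6634": "Mary Robbinson", "6635": "John Doe"},
    "5 East": {"5102": "Ann Lee"},
}

class LocationTracker:
    def __init__(self):
        self.current_unit = None

    def enter_unit(self, unit: str) -> list:
        """Record the team's location; return the rooms to load into the grammar."""
        self.current_unit = unit
        return sorted(WARD_CENSUS[unit])

    def who_is_in(self, room: str) -> str:
        """Answer 'who is in room ...' within the current unit only."""
        return WARD_CENSUS[self.current_unit].get(room, "no patient on record")

tracker = LocationTracker()
tracker.enter_unit("4 ICU")
print(tracker.who_is_in("6634"))  # Mary Robbinson
```

Constraining recognition to the current unit’s rooms is what keeps the dynamic grammar small and the accuracy rate high.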

Medication order

To build the medication order functionality of the system, various samples of medication orders are first collected and analyzed. Medication orders can come in several forms and medical abbreviations are frequently used. Nevertheless, a general pattern of medication order components and order types can be observed. In general, medication orders can be grouped into three broad types: basic, extended and complex (Figure 7).

Figure 7. Types of medication order components

Basic orders have standard medication order components such as drug, dose amount, administration route, etc. A full list of the medication order components currently supported by the proposed system can be found in Appendix A.

Extended orders contain instructions on prior orders and involve administration-date components, in addition to the components found in basic orders. The proposed system recognizes dates and days of the week; hence “today”, “tomorrow”, “next Tuesday” and “November 20th” are all valid responses. If a day of the week is given, the proposed system calculates the actual date before writing it to the database.
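The day-of-week calculation can be sketched as below (Python). Interpreting “next Tuesday” as the next occurrence strictly after today is an assumption of this sketch, not a rule stated in the text.

```python
# Sketch: resolve a spoken day of the week to a calendar date before writing
# the order to the database. "Next <day>" said on that same day is assumed
# to mean a week later.

import datetime

DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def resolve_day(spoken: str, today: datetime.date) -> datetime.date:
    """Return the next occurrence of the spoken weekday strictly after today."""
    target = DAYS.index(spoken.lower())
    delta = (target - today.weekday()) % 7
    if delta == 0:
        delta = 7  # same weekday: take next week's occurrence
    return today + datetime.timedelta(days=delta)

# Said on Thursday, November 20, 2008:
print(resolve_day("Tuesday", datetime.date(2008, 11, 20)))  # 2008-11-25
```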

Complex orders are conditional medication orders. Examples of conditions are metrics like blood pressure, temperature, etc. In these orders, physicians want the medication administered only if certain conditions are met, which relies on nurses’ judgment. At the time of writing, this type of order is not supported by the proposed system.

Once the nature of medication orders is understood, Conversational Grammar Builder is used to construct the grammar necessary to capture these orders from physicians. The first step is to build all the medication order components as keyword categories in the upper left panel. Knowledge of the medical domain is needed to fill out the various details that users may utter; many websites for people working towards a registered nurse degree offer tutorials on medication orders and medical abbreviations, which is useful information for constructing specific keyword phrases. Components that contain many entries can be generated programmatically from data sources in an XML grammar format. This XML file can be associated with a category in the builder, which saves a lot of manual entry. One component that needed this treatment is the drug name grammar: the proposed system currently supports more than 1600 common inpatient drugs.
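The programmatic generation of a large grammar can be sketched as follows. This emits an XML grammar in the style of the W3C SRGS format for a few drug names; the exact schema that Conversational Grammar Builder expects may differ, so treat this as an illustration of the idea rather than the tool’s actual format:

```python
from xml.sax.saxutils import escape

def build_drug_grammar(drug_names):
    """Generate an SRGS-style XML grammar listing drug names as
    alternatives under a single rule, replacing manual entry."""
    items = "\n".join(f"      <item>{escape(name)}</item>"
                      for name in drug_names)
    return ('<grammar root="drug" xmlns="http://www.w3.org/2001/06/grammar">\n'
            '  <rule id="drug">\n'
            '    <one-of>\n'
            f"{items}\n"
            '    </one-of>\n'
            '  </rule>\n'
            '</grammar>')

xml = build_drug_grammar(["tylenol", "morphine", "zofran"])
```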

Several challenges from the grammar building phase are worth describing. Because the list of common drug names is obtained from a database that records drug administration through manual entry by nurses, data cleansing is needed to ensure that drugs are spelled correctly. A Python program was written to check these drug names using the Google AJAX Search API: only drug names returning more than 100,000 search results were used, and the rest were assumed to be misspelled. Since the medication orders obtained came in written form, an algorithm was needed to translate them into verbal form before they could be used as training sentences: numbers must be spelled out (including decimals), punctuation must be stripped, and abbreviations or acronyms must be replaced with the original medical terms. To illustrate, the written form “Tylenol 650 mg po X2 prn” translates to “Tylenol six hundred fifty milligram per oral times two whenever required”.
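A minimal sketch of this written-to-verbal translation, assuming a small abbreviation map (the real program handled far more abbreviations, plus decimals):

```python
import re

# Hypothetical abbreviation map; the actual system used a much fuller one.
ABBREVIATIONS = {
    "mg": "milligram",
    "po": "per oral",
    "prn": "whenever required",
    "iv": "intravenous",
}

ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def number_to_words(n):
    """Spell out an integer below one million, as it would be dictated."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return (TENS[n // 10] + (" " + ONES[n % 10] if n % 10 else "")).strip()
    if n < 1000:
        words = ONES[n // 100] + " hundred"
        return words + (" " + number_to_words(n % 100) if n % 100 else "")
    words = number_to_words(n // 1000) + " thousand"
    return words + (" " + number_to_words(n % 1000) if n % 1000 else "")

def to_verbal(order):
    """Translate a written medication order into its spoken form."""
    tokens = []
    for token in order.split():
        token = token.strip(".,;")              # strip punctuation
        m = re.fullmatch(r"[xX](\d+)", token)   # X2 -> "times two"
        if m:
            tokens.append("times " + number_to_words(int(m.group(1))))
        elif token.isdigit():
            tokens.append(number_to_words(int(token)))
        elif token.lower() in ABBREVIATIONS:
            tokens.append(ABBREVIATIONS[token.lower()])
        else:
            tokens.append(token)
    return " ".join(tokens)
```

For example, `to_verbal("Tylenol 650 mg po X2 prn")` yields the spoken form quoted above.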

Validation

Since omission is one of the leading causes of medication errors, market basket analysis is employed to create association rules that check for missing components in medication orders. Association rule induction, a method of market basket analysis, is intended to find sets of products that are usually bought together in a supermarket setting. This is useful for recommending additional items that shoppers might want to buy at check-out, based on what is already in the shopping basket; for instance, a customer who buys wine and bread is likely to buy cheese too. The same idea can be applied to finding component omissions in medication orders: treat each component type in a medication order as an item in a shopping basket. By applying the efficient Apriori algorithm [17], we can create association rules between types of components. The rules are imported into a look-up table in the database, which the proposed system uses to check for missing components (Table 1). If the drug component, the frequency component and administration times are supplied by the user, the proposed system prompts the user for the missing dose amount. Confidence levels and sample percentages are also available to help pick the best rule for a given situation. A sample of the actual association rules used is available in Appendix B.

Table 1. A conceptual explanation of the look-up table
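The rule idea can be sketched in miniature. Instead of a full Apriori pass with support-based pruning, the toy example below computes rule confidence over a handful of hypothetical historical orders (component-type sets are invented for illustration) and flags components that strongly co-occur with the ones supplied:

```python
# Hypothetical history of past orders, reduced to the component types
# each contained (the "items" in the shopping basket).
HISTORY = [
    {"drug", "dose", "unit", "route", "frequency"},
    {"drug", "dose", "unit", "route", "times"},
    {"drug", "dose", "unit", "route", "frequency"},
    {"drug", "route"},                              # discontinue-style order
    {"drug", "dose", "unit", "route", "frequency"},
]

def confidence(history, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent over past orders."""
    matches = [order for order in history if antecedent <= order]
    if not matches:
        return 0.0
    return sum(consequent in order for order in matches) / len(matches)

def missing_components(supplied, history, threshold=0.8):
    """Flag component types that historical orders strongly suggest
    should accompany the supplied ones."""
    universe = set().union(*history)
    return sorted(c for c in universe - supplied
                  if confidence(history, supplied, c) >= threshold)

# A user supplied only drug and frequency: prompt for dose, route and unit.
missing_components({"drug", "frequency"}, HISTORY)
# -> ['dose', 'route', 'unit']
```

In the actual system these confidences are computed offline by Apriori and stored in the database look-up table rather than recomputed per order.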

Another type of validation is the semantic evaluation of captured medication order components. The proposed system should be able to detect overdose situations caused by human error or misrecognition. For instance, “Tylenol” can be verbally ordered in tablets or milligrams. For “tablets”, the norm is 1 to 2 tablets; for orders in “milligram”, the norm is often 650 milligram or 325 milligram (half of a tablet). With the existing approach, however, the proposed system could accept 650 tablets or 1 milligram of “Tylenol”. Such a captured order obviously does not make sense, but there is an effective solution.

By restructuring the medication order grammar, the rules can be constrained to precisely the dose amounts a drug should take. In the case of “Tylenol”, the grammar should only accept an integer of either “1” or “2” before the word “tablet” or “tab”, and should only take exactly “650” or “325” before the word “milligram”. If “650” is recognized as “615”, we can ask the user whether he or she meant “650”. If, for some reason, the recognition engine picks up “IV” or “intravenous” as the administration route (instead of “per oral”), this medication order component should be dismissed automatically because the drug only comes in tablet form.
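A minimal sketch of such per-drug dose constraints, with hypothetical rule data for “Tylenol” only (in the actual system these constraints live in the restructured grammar itself, not in application code):

```python
# Illustrative dose constraints; a real deployment would generate these
# per drug from the grammar or a formulary database.
DOSE_RULES = {
    "tylenol": {
        "tablet": {1, 2},
        "milligram": {325, 650},
        "routes": {"per oral"},
    },
}

def validate_dose(drug, amount, unit, route):
    """Accept a captured order only if it matches the drug's dose rules."""
    rules = DOSE_RULES.get(drug.lower())
    if rules is None:
        return True   # no constraint known for this drug
    if route not in rules["routes"]:
        return False  # e.g. reject "intravenous" for a tablet-only drug
    return amount in rules.get(unit, set())

validate_dose("Tylenol", 650, "milligram", "per oral")  # True
validate_dose("Tylenol", 615, "milligram", "per oral")  # False -> re-prompt
validate_dose("Tylenol", 2, "tablet", "intravenous")    # False -> dismiss route
```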

Medical knowledge bank

Although not directly essential to medication ordering, it is convenient for physicians to look up medical information quickly and efficiently at locations where they do not have easy access to computers or reference materials. This functionality is constructed from the Medical Subject Headings (MeSH) data source. MeSH is the National Library of Medicine’s controlled vocabulary thesaurus; it consists of vocabularies in a hierarchical structure that permits searching at various levels of specificity. Combined with a voice application, this functionality allows users to obtain the definitions, categorization, synonyms, examples and spellings of medical terms in an interactive manner.

To build this functionality in ICIPS, data cleansing and relationship building between tables have to be executed. A flat MeSH text file containing all vocabularies is first downloaded. Using the XML bulk load functionality in Microsoft SQL Server, schemas and relationships are set up for three main tables in the database: one storing all concept names (the vocabulary list), one storing the associated definitions, and one storing examples and synonyms. “DescriptorUI”, a universal identification field used in MeSH, is referenced across all three tables. A left outer join combines these tables to produce the final XML grammar used by the speech engine. Care must be taken to remove punctuation and symbols, or the grammar file cannot be loaded.
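The join can be illustrated in miniature. The snippet below emulates the left outer join over toy stand-ins for the three tables (the real system does this in SQL Server) and sanitizes terms so the grammar loads; the sample rows are invented:

```python
import re

# Toy stand-ins for the three MeSH tables, keyed on DescriptorUI.
concepts    = {"D006973": "Hypertension", "D003920": "Diabetes Mellitus"}
definitions = {"D006973": "Persistently high systemic arterial blood pressure."}
examples    = {"D006973": ["malignant hypertension", "renal hypertension"]}

def sanitize(term):
    """Strip punctuation and symbols that would break the grammar load."""
    return re.sub(r"[^\w\s]", "", term)

def join_mesh(concepts, definitions, examples):
    """Emulate the left outer join: every concept row survives even when
    no definition or example row matches (those fields become None/empty)."""
    for ui, name in concepts.items():
        yield {"DescriptorUI": ui,
               "term": sanitize(name),
               "definition": definitions.get(ui),
               "examples": [sanitize(e) for e in examples.get(ui, [])]}

rows = list(join_mesh(concepts, definitions, examples))
```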

Summary generation

Since the software architecture follows the model-view-controller pattern, it is fairly simple to add this feature to the user model class. The class has three functions: “addHistory”, “collapseHistory” and “getHistory”. As users interact with the proposed system, every speech interaction involves either a C# dialog class (for a voice prompt) or a statement class (for a voice response). The “addHistory” function records the name of the invoked C# class and the timestamp of the event; this data is added incrementally as an object to a dynamic array in the user model. Before the user hangs up, the proposed system executes “getHistory”, which calls “collapseHistory” to flatten the dynamic array into a string containing the interaction summary. A reference translator function can convert the class names of dialogs or statements into more user-friendly, meaningful phrases; for example, the class name “stLogin” can be translated to “Called Me”. Finally, “getHistory” sends the summary to the user by email (the address is obtained from the “Identifier” module in the user model class) and also writes each interaction entry into the database through the model layer.
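A Python sketch of this history mechanism (the actual implementation is C#; the function names follow the thesis, while the timestamp format and translator signature are assumptions):

```python
from datetime import datetime

class UserModel:
    """History-tracking portion of the user model, sketched in Python."""

    def __init__(self):
        self._history = []  # the "dynamic array" of interaction records

    def addHistory(self, class_name):
        # record the invoked dialog/statement class with a timestamp
        self._history.append((datetime.now(), class_name))

    def collapseHistory(self, translate=None):
        # flatten the dynamic array into a single summary string,
        # optionally translating class names to friendly phrases
        lines = []
        for ts, name in self._history:
            label = translate(name) if translate else name
            lines.append(f"{ts:%H:%M:%S} - {label}")
        return "\n".join(lines)

    def getHistory(self, translate=None):
        # the real system also emails the summary and writes each
        # entry to the database through the model layer
        return self.collapseHistory(translate)

model = UserModel()
model.addHistory("stLogin")
model.addHistory("qaVerbalMed")
summary = model.getHistory(lambda n: {"stLogin": "Called Me"}.get(n, n))
```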

EVALUATION

Most experiments revolve around the accuracy rate of medication order capture by voice. To carry out the experiments, about three hundred medication orders were prepared beforehand (i.e. translated into spoken form). A portion of these orders is used as the training set and the rest as the test set. Test phrases are grouped into basic and extended order types; the actual medication order phrases can be found in Appendix C. These experiments are carried out on a working prototype of the proposed system, currently situated in the Brain Monitoring and Modeling Lab at the UCLA Department of Neurosurgery.

While every attempt has been made to cover the important cases, these experiments are non-exhaustive. The conclusions reached here are somewhat subjective in nature, as larger test sample sizes or actual clinical trials would be required to reach solid, quantitative conclusions. Nevertheless, these tests provide a good overview of the potential of the proposed system.

Accuracy rate is the metric used to measure the accuracy of captured medication orders. It is defined as the number of correctly recognized components divided by the total number of components in a medication order, averaged over three repetitions of the order. While word error rate is commonly used to measure the performance of speech recognition systems, it is not used here for good reason: word error rate measures recognition of free text, whereas medication order phrases are made up of structured components. Using word error rate here would likely be overkill.
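The metric is a straightforward transcription of the definition above (the sample counts are invented for illustration):

```python
def accuracy_rate(correct_counts, total_components):
    """Accuracy rate for one medication order: correctly recognized
    components over total components, averaged over the repetitions."""
    return sum(c / total_components for c in correct_counts) / len(correct_counts)

# An order with 5 components, recognized 5/5, 4/5 and 5/5 across
# three repetitions, scores (1.0 + 0.8 + 1.0) / 3 ≈ 93.3%.
rate = accuracy_rate([5, 4, 5], 5)
```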

Generally, the proposed system scores between 80% and 90% in accuracy rate throughout the experiments. The system sometimes has problems with hard-to-pronounce or uncommon drug names (“levophed”). At times, it can be challenging to recognize the intended item from a list of similar-sounding medication order components (“lasix” and “lasik”, “milligram” and “nine gram”). Recognition of numbers and dates is superior, though occasional slips occur with similar-sounding numbers (“six hundred fifty” and “six hundred fifteen”, “two” and “ten”). Otherwise, the system performance is acceptable.

Test data vs. Training data

The purpose of this experiment is to compare the accuracy of capturing medication orders that are randomly selected (and do not appear in the training phrases) against the accuracy of capturing orders that are part of the training set. The outcome is rather predictable: orders that are part of the training data should score higher. The results show an accuracy rate of about 90.4% on the training set and about 82.2% on the test set (Figure 8). The training set and the random test set each contain ten orders. It is noteworthy that even with such a small training set, the system still performs above 80% on medication orders that are not included in it.

Figure 8. Accuracy rate between recognition of randomly selected phrases and training phrases

Training set size

The purpose of this experiment is to evaluate the effect of training set size on accuracy. Training sets are prepared in sets of 1, 10, 50, 100 and 200 medication orders. For the test set, 11 basic medication orders are evaluated against these various training sets. The results show a growing accuracy as training set size increases. However, performance levels off to a range between 86% and 88% after a training set of 50 (Figure 9). Interestingly, Microsoft recommends 50 or more training phrases when using Conversational Grammar Builder.

Figure 9. Accuracy rate by training set size

Basic order and extended order

The purpose of this experiment is to compare the accuracy rates of basic medication order test sets and extended medication order test sets. Training sets are prepared in sets of 1, 10, 50, 100 and 200 medication orders. For the extended order test set, 18 extended medication orders are evaluated against these various training sets; for the basic order test set, the result from the previous experiment is used. The results show that the accuracy rates of basic and extended orders differ by less than 2% for training sizes greater than 50. In some cases (T50, T100), extended orders perform slightly better. The experiment on T1 (a training set of size one) is not carried out because the performance is predictably low.

Figure 10. Accuracy rate by training size, on basic order and extended orders

Gender and language fluency

The purpose of this experiment is to evaluate the impact of users’ gender and language fluency on accuracy. Seven subjects are recruited. Prior to the experiment, each subject is given 11 medication orders (the same set as in prior experiments) to become familiar with them, and subjects are shown once how to give medication orders to the system before beginning on their own. Subjects come from different backgrounds: 3 men and 4 women were recruited, three of whom are native speakers of English. While most do not have a medical background, one subject is a licensed physician in Japan.

The results are striking. There is almost a 10% difference in accuracy between male and female speakers, which suggests that the recognition engine has not been optimized for female voices. Native speakers of English generally score higher than non-native speakers. Subject #7 scores an extremely low 56%, probably due to the monotonicity of her voice and the fact that she spoke unnaturally slowly during speech interactions. Subject #2 has an extremely heavy Japanese accent, hence the lower score compared to the other two male speakers.

Figure 11. Accuracy rate by language fluency and gender

Permutation

The purpose of this experiment is to subjectively evaluate the robustness of the proposed system in recognizing medication orders given in various permutations. Each physician often has his or her own way of dictating or writing medication orders; it is perfectly valid, for instance, to utter drug administration times before administration frequency or vice versa. Hence, it is important to understand how well the proposed system recognizes an order even when its components do not follow the general pattern observed in the training phrases.

To evaluate, a short Python program is written to generate permutations of medication orders. Four examples from the basic medication orders are analyzed closely. Permutations that are irrelevant (phrasings a physician would never say, e.g. “milligram”, “two” instead of “two”, “milligram”) are removed. The 200-phrase training set is loaded for this experiment.
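The generation step is a one-liner with itertools. The component names and values below are illustrative, chosen to match the hydromorphone order from Table 2; the actual generator and its irrelevance filter are not reproduced here:

```python
from itertools import permutations

# Component values from one basic order (names follow the thesis pattern).
components = {
    "drug": "hydromorphone",
    "dose": "zero point two",
    "dose unit": "milligram",
    "administrative route": "IV",
}

def order_permutations(components):
    """Yield every ordering of the order's components as a spoken phrase.
    Irrelevant permutations (e.g. the unit uttered before its number)
    are filtered out manually afterwards, as described above."""
    for perm in permutations(components.values()):
        yield " ".join(perm)

phrases = list(order_permutations(components))  # 4! = 24 candidates
```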

The results are interesting. While medication orders uttered in the permutation generally observed in the training sentences ({drug}, {dose}, {dose unit}, {administrative route}, {administrative frequency}) score well, the system has the most trouble when the administrative route or administration times are spoken before other components (Table 2). In order #1, the entry “IV zero point two milligram hydromorphone” is often recognized as a drug called IVIG, a solution of globulins, and the other recognized components of this order also seldom make sense. Nevertheless, the majority of permutations score high in accuracy rate, which suggests the proposed system can support minor variations of the same medication order.

Table 2. Permutation of order components and accuracy rates

Validation

The purpose of this experiment is to subjectively evaluate the omission detection functionality of the proposed system. Proper and malformed basic medication orders are given to check omission detection. If the proposed system detects a missing component, it prompts the user by loading a dynamic prompt and a dynamic grammar for that component. Judging from the results of validating 20 basic medication orders, the proposed system functions extremely well (Table 3). The system does not yet support validation of extended and complex orders, so these are not tested.

Table 3. Validation results of medication orders and missing components

| Type      | Medication Orders                                                               | Validation Result   | Missing Components Prompted |
|-----------|---------------------------------------------------------------------------------|---------------------|-----------------------------|
| Proper    | morphine two milligram times one iv                                             | Complete            | N/A |
| Proper    | dc hydralazine iv                                                               | Complete            | N/A |
| Proper    | zofran four milligram iv q eight hour whenever required nausea vomiting         | Complete            | N/A |
| Proper    | tylenol six hundred fifty milligram q four hour whenever required               | Incomplete          | Administrative Route |
| Proper    | morphine two milligram im times one whenever required                           | Complete            | N/A |
| Proper    | Normal Saline Bolus IV five hundred c c                                         | Incomplete          | Drug Administration Times |
| Proper    | Start Levophed g t t eight milligram                                            | Complete            | N/A |
| Proper    | one milligram Versed Give Q one hour for severe shivering                       | Complete            | N/A |
| Proper    | Double Concentrate Nitroglycerin two hundred milligram in two hundred fifty c c | Complete            | N/A |
| Malformed | morphine                                                                        | Incomplete          | Dose, Drug Administration Route, Drug Administration Times, Unit |
| Malformed | morphine two milligram                                                          | Incomplete          | Drug Administration Route, Drug Administration Times |
| Malformed | morphine whenever required                                                      | Incomplete          | Dose, Unit, Drug Administration Route |
| Malformed | morphine IV                                                                     | Incomplete          | Dose, Drug Administration Times, Unit |
| Malformed | morphine times one                                                              | Incomplete          | Dose, Drug Administration Route, Unit |
| Malformed | zofran q eight hour                                                             | Incomplete          | Dose, Drug Administration Route, Unit |
| Malformed | zofran four milligram whenever required                                         | Incomplete          | Drug Administration Route |
| Proper    | discontinue hydralazine                                                         | Complete            | N/A |
| Malformed | discontinue                                                                     | Incomplete          | Drug |
| Malformed | bolus iv                                                                        | Incomplete          | Dose, Drug, Drug Administration Times, Unit |
| Malformed | normal saline five hundred                                                      | Recognition failure | N/A |


The semantic approach to validating dose amounts and other components has also been tested. In general, if users speak values that are not present in the grammar, recognition simply fails and they have to retry. A better way would be to inform users of the possible dose amounts for the drug and prompt them to choose one. The only downside to this approach is that the grammar needs to be constructed on a case-by-case basis for each drug.

Medical knowledge bank

The purpose of this experiment is to subjectively evaluate the medical knowledge bank functionality. The user asks about a medical term and the system responds with its definition; examples of the term and the category it falls under can be requested too. The proposed system currently supports 24,000 medical term entries. Occasionally, the system confuses the intended term with similar-sounding terms, but there is always a prompt asking for confirmation. A sample interaction requesting the definition of hypertension is included (Figure 12).

U: What is hypertension?

S: Hypertension is Persistently high systemic arterial blood pressure. Based on multiple readings, hypertension is currently defined as when systolic pressure is consistently greater than 140 mm Hg or when diastolic pressure is consistently 90 mm Hg or more.

U: What are examples of hypertension?

S: Examples of hypertension are malignant hypertension, pregnancy-induced hypertension and renal hypertension

U: What category does hypertension fall under?

S: Vascular disease

                                                                                  U = User, S = System

Figure 12. A sample interaction between a user and the proposed system

Summary generation

This is a sample of the summary generated by the proposed system (Figure 13). This summary can be edited to include physician name, department or patient name by pulling data from the user model.

20:53:32 - Called me (ICIPS 73)

20:53:35 - Logged in as NAME

20:54:01 - Requested his patient names

20:54:13 - qaConfirmMyPatient

20:54:22 - Asked for Pt ROBBINSON

20:54:26 - Patient Confirmed??

20:55:14 - Got End of Shift Summary

20:55:59 - qaAskForBloodPressure

20:56:12 - stReadoutTemperature

20:57:02 - Got abnormal labs

20:57:10 - Thanked me ... huraaaa:-)!!

20:59:17 - potassium chloride, 20, meq, iv, X4,

21:00:02 – qaVerbalMed

21:00:40 - Hung up after 7 min 08 sec.

Figure 13. Summary generated

Miscellaneous functionalities

Currently, the proposed system is also capable of supporting these simple yet useful functionalities.

  • To suspend a conversation. “Hold on ICIPS.”
  • To resume conversation. “C’mon ICIPS.”
  • To connect to a live operator. “Live help please” or “Operator please”
  • To make a phone call to another medical staff by name. “Call Doctor Smith please.”
  • To make a phone call to a specific room. “Call Operating Room 1 please” or “Call room sixty-six twelve.”
  • To interrupt computer speech at any time. “Excuse me ICIPS.”
  • To leave a voice mail. “I would like to leave a voice mail for Doctor Smith.”
  • To playback voicemails. “New messages please.”
  • To change login pin. “I would like to change my pin please.”
  • To order medication by reason and selecting a drug from a list. “Can you give him something for the pain?”

CONCLUSION

The proposed system shows a lot of promise as a verbal alternative to GUI medication order systems. The verbal approach overcomes many of the problems caused by traditional paper-based systems, GUI systems and transcription-based systems (which go through intermediary staff). Compared to GUI systems, VUI systems have less impact on physicians’ clinical workflow, so it makes sense to assign a designated user in a rounding team. Although the recognition rate typically floats between 80% and 90%, work can be done to improve it; if a voice-dependent system (with voice training) is used instead, the recognition rate can be increased significantly. Even though the recognition rate is not 100%, it should be remembered that speech understanding between humans is not always 100% either. Care must therefore be taken to design grammars that properly constrain vocabularies according to conversation or location context. Furthermore, the data mining and semantic approaches to validating medication orders seem promising: validations help catch careless human errors and correct misrecognitions by the speech engine. A medical knowledge bank offers quick access to medical terms, drugs and medical concepts when physicians want to verify definitions or concepts before drugs are prescribed. An interaction summary from the system can be used for billing and auditing purposes, which supports the work of hospital management. Since natural speech is easy to mimic and no prior voice training is needed, any authorized physician can begin using the system with minimal training. All in all, a verbal medication order system can definitely help reduce medication errors.

While the proposed system is promising, many areas have to be improved before it can be deployed in the real world. Security can be enhanced with voice recognition throughout the duration of interactions with the proposed system. Although the exact reason for the low accuracy rate on female voices is not known, it is suspected that the speech engine has not been optimized for them; one way to improve this is to switch to a speech engine platform that uses voice training. Work is also needed to improve the proposed system for non-native speakers of English, which is especially important for hospitals in countries with a multiethnic health care workforce. Further, the system can be extended to support complex medication order types. To increase accuracy, work can be done on picking out the right term from similar-sounding terms; context awareness is important in constraining the right term in this case. Validation of orders can be improved to the drug-specific level, as the system currently supports it only at the order-component level. To reduce user frustration from listening to long voice responses during confirmations, a system that works with phones that have visual displays could be helpful; two types of Cisco phones with visual displays are already used throughout UCLA Ronald Reagan Hospital and are worth exploring (Figure 14) [18]. To prevent drug interactions, the system can implement a voice interface to an off-the-shelf drug interaction database. Analyzing the generated summaries stored in the database is a good way to audit physicians’ work, and these summaries can also be mined for physicians’ usage and speech patterns. Knowing a physician’s pattern, the proposed system may be able to dynamically generate shortened yet relevant speech interactions.
For example, suppose the following usage pattern is observed: the user logs in, finds one of his own patients, and checks the patient’s temperature and blood pressure before ordering medications. Based on this recognized pattern, the proposed system can generate a prompt asking the physician, “would you like to check patient Robbinson’s temperature and blood pressure before ordering the medication?” This would significantly reduce speech interactions and increase productivity. Finally, no deployment is possible unless proper pilot projects and clinical trials are conducted to explore the potential sociological, organizational and technical issues of the proposed system in a hospital setting.


Figure 14. Cisco phones with visual display panel

APPENDICES

A. Keyword category and key words currently supported by ICIPS

Medication Order Component (keyword category)

Keyword groups

Dose unit rate

dose: range <1 to 999999> up to 4 decimal places
unit: mcg, amp, millimol, liter, tablets, cc, gram, meq, ml, mg, inch, unit, units, drop, gtt, percent
rate: per kg, per hour, per month, per day, per sec, per min, per oz, per cc

drug administration duration

H1

drug administration route

po, iv, ivpb, im, sq, jt, gt, ngt, right eye, IM or IV, foley catheter, drip, iv bolus, gtt, ivp

drug administration times

X1, X2, X24H, X3, X4, X5, X6

drug end

<date>, <day of week>, <today, tomorrow, yesterday>

drug name

1600 entries of common drugs

drug release

SR, XL, CR

drug start

<date>, <day of week>, <today, tomorrow, yesterday>

new order actions

start, add, restart, recheck, turnoff

order clearance

prohibition, permission, please

order day time administration

tonight, after midnight, am

order frequency

q day, after meal, before meal, every night, every morning, every day, qhs, biw, qod, four times a day, q pm, q am, three times a day, twice a day, q <number> hour, every <number> hour

order need administration

now, stat, prn, sos, pc

order reason

pain, fever, rule out, agitation, pre med, insomnia, nausea vomiting, nerve test, transportation, indigestion, itching, constipation, protect, not necessary, accucheck, anxiety, cough, drain placement, verify ngt placement, for rua, for ultrasound, shivering

per order

per MD, per pharmacy, per nurse

prior order actions

clarify, received, renew, hold, change, increase, decrease, cancel, double

B. Sample look-up table for checking omissions in medication orders. The columns are truncated due to paper space constraint.

| Suggested Missing Component | Sample % | Conf. Level | drug | Drug% | dose |
|-----------------------------|----------|-------------|------|-------|------|
| dose                        | 100      | 10.8        | NULL | NULL        | NULL |
| Unit                        | 100      | 10.8        | NULL | NULL        | NULL |
| drug                        | 100      | 10.8        | NULL | NULL        | NULL |
| drug                        | 100      | 18.9        | NULL | NULL        | NULL |
| DrugAdministrationTimes     | 83.3     | 16.2        | NULL | DrugPercent | NULL |
| DrugAdministrationRoute     | 83.3     | 16.2        | NULL | DrugPercent | NULL |
| dose                        | 100      | 16.2        | NULL | DrugPercent | NULL |
| Unit                        | 100      | 16.2        | NULL | DrugPercent | NULL |
| drug                        | 100      | 16.2        | NULL | DrugPercent | NULL |
| DrugAdministrationRoute     | 90       | 27          | NULL | NULL        | NULL |
| dose                        | 90       | 27          | NULL | NULL        | NULL |
| Unit                        | 90       | 27          | NULL | NULL        | NULL |
| drug                        | 100      | 27          | NULL | NULL        | NULL |
| DrugAdministrationRoute     | 100      | 45.9        | NULL | NULL        | NULL |
| dose                        | 100      | 45.9        | NULL | NULL        | NULL |
| Unit                        | 100      | 45.9        | NULL | NULL        | NULL |
| drug                        | 100      | 45.9        | NULL | NULL        | NULL |
| DrugAdministrationRoute     | 100      | 54.1        | NULL | NULL        | NULL |
| dose                        | 100      | 54.1        | NULL | NULL        | NULL |
| Unit                        | 100      | 54.1        | NULL | NULL        | NULL |
| drug                        | 100      | 54.1        | NULL | NULL        | NULL |
| dose                        | 93.3     | 81.1        | NULL | NULL        | NULL |
| DrugAdministrationRoute     | 87.5     | 86.5        | NULL | NULL        | dose |
| Unit                        | 93.3     | 81.1        | NULL | NULL        | NULL |
| DrugAdministrationRoute     | 87.5     | 86.5        | NULL | NULL        | NULL |
| drug                        | 100      | 81.1        | NULL | NULL        | NULL |
| DrugAdministrationRoute     | 81.1     | 100         | drug | NULL        | NULL |
| Unit                        | 100      | 86.5        | NULL | NULL        | dose |


C. Medication orders used in experiments

Basic medication orders

  1. turn off lasix
  2. morphine two milligram times one iv
  3. dc hydralazine iv
  4. zofran four milligram iv q eight hour whenever required nausea vomiting
  5. tylenol six hundred fifty milligram q four hour whenever required
  6. morphine two milligram im times one whenever required
  7. Normal Saline Bolus IV five hundred c c
  8. Start Levophed g t t eight milligram
  9. one milligram Versed Give Q one hour for severe shivering
  10. Double Concentrate Nitroglycerin two hundred milligram in two hundred fifty c c

Extended medication orders

  1. change hydralazine twenty mg ivp Q six hrs PRN
  2. change to albumin twenty five percent of hundred c c
  3. Decrease heparin g t t to six hundred units per hour
  4. increase CVVHD predilution to one thousand five hundred ml per hour
  5. albumin five percent of two hundred fifty cc times one iv now
  6. potassium chloride twenty meq iv times one over one hour
  7. give twenty meq potassium chloride per oral times one
  8. give sixteen mmol sodium phosphate iv times one
  9. tylenol six hundred fifty milligram q four hour whenever required starting tomorrow ending october fifteenth
  10. tylenol six hundred fifty milligram q four hour whenever required starting tomorrow ending next Monday
  11. hold next dose of amphetamine
  12. decrease nesiritide drip by zero point zero five milligram per kg per min
  13. increase milrinone drops to zero point two mcg per kg per min
  14. fifteen mmol naphos ivpb times one now
  15. restart dopamine drops
  16. increase heparin drops to one thousand six hundred units per hour iv
  17. please renew vicodin one tab q four hour per oral whenever required for moderate pain
  18. temazepam fifteen milligram p o times one now for insomnia

REFERENCES

  1. D.M. Cutler. (2005). US Adoption Of Computerized Physician Order Entry Systems. Health affairs, 24(6), 1654-1663.
  2. P. Aspden, J. Wolcott, J. L. Bootman, L. R. Cronenwett. “Medication Errors Injure 1.5 Million People and Cost Billions of Dollars Annually” Preventing Medication Errors. <http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=11623>
  3. M. Meadows. “Strategies to Reduce Medication Errors” FDA Consumer Magazine May 2003-June 2003 <http://www.fda.gov/FDAC/features/2003/303_meds.html>.
  4. T. Hendrickson. Verbal Medication Orders in the OR, AORN Volume 86, Issue 4, October 2007, Pages 626-629. <http://www.sciencedirect.com/science/article/B83WR-4PVRFT9-C/2/f215273dc1edac4f0779c4d7db3453e6>
  5. “Computer Physician Order Entry.” Wikipedia: The Free Encyclopedia. 9 November 2008 <http://en.wikipedia.org/wiki/CPOE>.
  6. D.F. Sittig, W.W. Stead. “Computer-Based Physician Order Entry: The State of the Art,” Journal of the American Medical Informatics Association (March/April 1994): 108–123.
  7. C.J. McDonald. “The Barriers to Electronic Medical Record Systems and How to Overcome Them,” Journal of the American Medical Informatics Association (May/June 1997): 213–221.
  8. P.C. Tang, W.E. Hammond. “A Progress Report on Computer-Based Patient Records in the United States,” in The Computer-Based Patient Record: An Essential Technology for Health care, 2d ed., ed. R.S. Dick, E.B. Steen, and D.E. Detmer (Washington: National Academy Press, 1997), 1–20.
  9. American Hospital Association, AHA Guide to Computerized Physician Order-Entry Systems (Chicago: AHA, 2000).
  10. R. Koppel., J. P. Metlay, A. Cohen, et al. Role of computerized physician order entry systems in facilitating medication errors J. Am. Med. Assoc. 2005; 293:1197-1203.
  11. Ceci Connolly. “Cedars-Sinai Doctors Cling to Pen and Paper”, The Washington Post, March 21, 2005.
  12. A. Mouza. “IVR and Administrative Operations in Health care and Hospitals”. Journal of Health care Information Management — Vol. 17, No. 1
  13. M. La Vigne, K. Tapper. “Interactive Voice Response in Disease Management Providing Patient Outreach and Improving Outcomes” Patient-Centered Health care 2000 pp. 46
  14. L. Stammer. “Telecom 2000.” Health care Informatics, 2000, 17(1), 34-44.
  15. J. Sherwani, N. Ali, S. Mirza, A. Fatma, Y. Memon, M. Karim, R. Tongia, R. Rosenfeld. “HealthLine: Speech-based Access to Health Information by Low-literate Users”, In Proc. IEEE/ACM Int'l Conference on Information and Communication Technologies and Development, Bangalore, India, December 2007
  16. A. Jungk, B. Thull, L. Fehrle, A. Hoeft, G. Rau. (2000). A case study in designing speech interaction with a patient monitor. J. Clinical Monit.: 16:295--307.
  17. R. Agrawal, R. Srikant. "Fast Algorithms for Mining Association Rules", VLDB. Sep 12-15 1994, Chile, 487-99
  18. Cisco Homepage. World Wide Web. <www.cisco.com>

LIST OF ACRONYMS

CPOE Computer Physician Order Entry

FDA U.S. Food and Drug Administration

GUI Graphical User Interface

HL7 Health Level 7

ICIPS Integrated Clinical Information Phone Service

IT Information Technology

IVR Interactive Voice Response

MeSH Medical Subject Headings

MVC Model-View-Controller

VUI Voice User Interface

XML Extensible Markup Language

VOIP Voice Over Internet Protocol
