

Features to Evaluate when Purchasing a Pathology Speech Solution

In Part One of this three-part series, I discussed the importance of defining problems before embarking on a search for a new reporting solution.  I also provided real-world examples from the laboratories we do business with.  In this second post I am going to touch on the important feature sets that pathology laboratories should focus on when considering a speech recognition reporting solution.  These features need to be evaluated in the context of solving those problems and delivering the benefits that laboratories need.

As I mentioned in the CAP Today article, "Hear me now? Another audition for speech recognition," the cost savings and benefits of speech recognition technology can only be achieved as part of a complete pathology reporting workflow solution that includes managed templating and other workflow enhancements.  What I meant was that some people get mesmerized by the speech recognition technology and all of its cool bells and whistles, but lack context for how those features impact their reporting workflow or process.  For the most part, those features affect recognition accuracy by fractions of a percentage point, but have little impact on time savings or final report accuracy. 

Speech recognition technology is a great tool to rapidly and accurately create text, but what happens when a potential user is resistant to its use or struggles with the accompanying report creation workflow?  What happens when the features address accuracy, but have no proven way to address laboratory system limitations, workflow, or individual user preference?  Quite simply: failure or limited success, which leads to lost productivity, opportunity cost, and ultimately frustration and regret.  Here are some feature sets to consider and questions to ask...

1. Recognition and Report Editing Tools

When most organizations start their search process, this is almost universally the feature set that receives the most focus.  I am certainly not here to say that recognition accuracy and tools are not important, but the disproportionate focus that advancements receive at this point in time (accuracy levels have been high for over a decade) does not have a proportionate impact on the results.  The fact is that the remaining items on this list, coupled with the proper implementation and training methodologies, will play a much bigger part in delivering the benefits that laboratories are looking to receive from these solutions.

That said, you need to decide what your user tolerance is for initial misrecognition and make sure that the feature set provided allows all users (regardless of accent) to surpass the minimal accuracy level and edit and correct their dictations in a rapid and simple manner. The minimum standard is typically 95%, and most modern solutions provide for 98% or greater accuracy. 

So what does 98% accuracy mean?  It means that, on average, 2 out of every 100 words need to be edited.  These rates are better than those of the average transcriptionist.  If you have the right tools, then editing can be a snap.  Users see the words on the screen while their specimen is still in front of them. They select the words by voice or mouse, then either re-dictate or issue the correction voice command.  The correction command fixes the misrecognition in the report while the user's speech profile learns from the correction.  
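The arithmetic here is simple enough to sketch as a back-of-the-envelope estimate.  This is a hypothetical illustration (the function name and word counts are my own, not tied to any vendor's metrics):

```python
def expected_edits(word_count: int, accuracy: float) -> int:
    """Estimate how many words will need correction in a dictated
    report, given a recognition accuracy rate (e.g. 0.98 for 98%)."""
    return round(word_count * (1 - accuracy))

# For a 100-word dictation at the two accuracy levels discussed:
print(expected_edits(100, 0.95))  # legacy minimum standard -> 5 words to fix
print(expected_edits(100, 0.98))  # modern engines -> 2 words to fix
```

The gap between 95% and 98% sounds small, but on a long gross description it is the difference between a handful of corrections and more than double that, every report, every day.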

Tip: Don't overlook the importance of a Pathology Speech Model.  Recognition is typically dependent on the context of what is being dictated and a Pathology Speech Model ensures that the words and the way that you phrase them are being recognized in the context of a Pathology report.

2. Workflow Tools

Laboratories need to decide whether all users will be dictating and self-editing (provides the most benefits).  If not, then they need a hybrid solution that allows for traditional transcription workflows.  This can include an integrated traditional digital dictation solution or a delegated solution where initial text is transcribed by the speech recognition engine and edited by a third party. 

Laboratories also need to consider whether the system allows for direct dictation into the AP System text editor of choice, and if not, what tools are available to input text where it needs to go in the AP System?  Are there tools available to help navigate the AP System workflow and eliminate unnecessary user steps from the workflow? These are all important questions to ask if the intention is to select a unified platform that addresses the needs of all users without leaving any behind.

Tip: Make sure you consider per diems and residents where the learning curve of teaching self-editing workflows may not be worthwhile.

3. User Customization Tools

Not every user will use the software in the same way, but providing too many choices in an unmanaged environment can also cause more support headaches down the road when every user has their own unique undocumented way of doing things.  In order to balance the needs with the ability to support those users, here are some questions you should ask. 

Does the system allow for user preference of input devices like microphones and foot pedals, and can the associated preference settings for these devices vary and be identified on a user basis?  Does the solution have customization tools that provide users with variable commands for speech navigation and can these commands be centrally managed by an administrator?  Can users customize their vocabulary and can vocabulary customizations be shared across groups of users?  Can voice commands easily be shared by groups of users?

Tip: The importance of user management tools for devices like foot pedals and microphones is magnified when users share workstations.

4. Template Tools

The importance of templating as an accuracy tool is often overlooked in the selection process.  Templates can serve to significantly reduce the number of words users dictate, which in turn lowers overall initial report misrecognition. They also significantly reduce dictation time since much of the report is already created.  Lastly, they serve to provide standardization and formatting that can assist the patient care team in identifying critical areas of the report.

Here are some questions that you should ask about templating.  Does the solution provide pathology-specific templating capabilities?  If so, does the templating solution allow for individual and group templates?  Can the templates be sorted contextually by report type?  Do the templates provided increase the report accuracy and readability, and ultimately reduce report creation time?  Do the template tools allow for both free text and synoptic reporting?  Are there template automation tools that allow for default text and quick navigation?  Do the template creation tools allow for multiple formatting features that align with your AP System formatting options?  Can templates be navigated by voice?  Can the input devices (microphone and foot pedal) be used to navigate templates without voice commands?

5. Pathology-Specific Reporting Tools

Speech technology can be used to manage workflow and call out specific safety features in the underlying AP system.  If implemented properly it can do so without forcing users to take extra manual steps.  Beyond addressing the integration needs of any particular AP System and reporting workflow, can the features offered address other complementary documentation activities of the Pathologists and enhance patient safety? 

6. Management and Downtime Tools

Since most speech technology solutions are directly integrated with the AP System, it is important to ask what happens when the AP System is not available.  Can users still dictate reports, or will this downtime cause reporting delays that impact patient safety?

In addition, managing large sites can be very tricky when each user has their own speech profiles, vocabularies, voice commands, and device settings.  As a result, you have to ask whether the solution provides central management of user speech profiles, vocabularies, device settings, and templates.

As you can see, there are a lot of features outside of recognition accuracy tools that need to be considered when selecting a platform to address the problems that you identified in Part One of this blog series.  While this list covers a lot of ground, it certainly is not exhaustive.  In Part Three of this series I will discuss how to ensure that the systems you are considering will best meet the needs of your organization.

In the meantime, feel free to connect with us if you have any questions or need additional information.

Contact Us ›