

CAP Today Explores Variances in Pathology Speech Recognition Adoption

This Q&A with Voicebrook CEO Ross Weinstein was published in the October 2015 issue of CAP Today. The content builds upon a three-part blog series that Ross wrote back in May.

The reality is that not all sites and solutions are created equal. Pathology laboratories planning for a successful speech recognition reporting implementation need to consider many variables apart from the technology that recognizes spoken words and converts them into text. This Q&A provides a working list of differences between laboratories that must be considered before selecting and deploying the right solution for your lab.

Q (CAP Today): I’ve talked to other pathologists about their experience with speech recognition, and it tends to vary greatly from group to group. Some have an easy time and others find it difficult and just tolerate it. How can this varied experience with the same product be explained?

A (Ross Weinstein): As I have mentioned in past articles in CAP TODAY, speech recognition is not a product or a solution in itself. It is a technology that can be used as part of an effective pathology reporting solution like our VoiceOver product. I can’t speak for other implementations of speech recognition technology, but among our VoiceOver client base we do see variances in initial adoption satisfaction. I would not objectively describe those variances as great, and they tend to narrow with time and experience. As I like to say, “The devil is in the details.”

I recently wrote a three-part blog series intended to help pathology sites determine which solutions are best for them and how they can better predict success at their site. In the third part of the series, “Ways to Make Sure Your Speech Recognition Selection Isn’t a Failure,” I discuss the many variances among pathology laboratories that make it difficult to look at any one user, any one site, and any one AP system and predict your success based on their experience. All users, sites, AP systems, and workflows are not created equal, and each combination brings its own unique implementation challenges.

Because speech recognition technologies are not self-contained reporting solutions, they rely on the underlying AP system and its workflow, which are major variables in the user experience. Specifically, there are variances in:

  • User role (pathologist, resident, PA, others)
  • AP system (Cerner, Epic, Meditech, Soft, Sunquest, others)
  • AP system infrastructure (client/server, virtual, or cloud)
  • Workflow (gross dictation, microscopic dictation, autopsy, other)
  • Organization type (academic hospital, private lab, hospital group, other)
  • Site locations (multisite versus standalone lab)

Different combinations of these factors create many permutations, each with its own challenges that must be addressed to deliver a similarly successful user experience.

Finally, I asked my director of client services, Lindsey Pitsch, what she believed caused the biggest variations in initial user satisfaction and acceptance. She said that in most cases there is a direct correlation between satisfaction and user involvement in the planning stages of the implementation process.

She believes that administrators sometimes take over with the thought that it is better for the pathologists not to “waste” their time on a change that the administrators perceive to be an administrative task. The problem is that they aren’t just replacing a dictation system; the change alters the user’s daily workflow. We always request the presence of PAs and pathologists on project teams. We have historically seen that clients whose users participate fully in the implementation process tend to have a much higher initial satisfaction rate. Their voices are heard and they know what to expect when they go live. When users are not involved, they sometimes receive the wrong message: they think speech recognition will never make a mistake, or that on day one they will be exponentially faster than with transcription. By keeping users involved, you can properly set and manage expectations, which leads to a more successful user experience and perception.

To hear more about matching the appropriate solution to your specific needs, please feel free to schedule an appointment with one of our team members.

Make An Appointment  ›