The reality is that not all sites and solutions are created equal. Pathology laboratories planning for a successful speech recognition reporting implementation need to consider many variables apart from the technology that recognizes spoken words and converts them into text. This Q&A provides a working list of differences between laboratories that must be considered before selecting and deploying the right solution for your lab.
Q (CAP Today): I’ve talked to other pathologists about their experience with speech recognition, and it tends to vary greatly from group to group. Some have an easy time and others find it difficult and just tolerate it. How can this varied experience with the same product be explained?
I recently wrote a three-part blog series intended to help pathology sites determine which solutions are best for them and how they can better predict success at their site. In the third part of the series, “Ways to make sure your speech recognition selection isn’t a failure,” I discuss the many variances among pathology laboratories that make it difficult to look at any one user, at any one site, on any one AP system and predict your success based on their experience. All users, sites, AP systems, and workflows are not created equal, and each combination brings its own unique implementation challenges.
Speech recognition technologies are not self-contained reporting solutions; they rely on the underlying AP system and its workflow, both of which are major variables in the user experience. Specifically, sites differ in:
- User role (pathologist, resident, PA, others)
- AP system (Cerner, Epic, Meditech, Soft, Sunquest, others)
- AP system infrastructure (client/server, virtual, or cloud)
- Workflow (gross dictation, microscopic dictation, autopsy, other)
- Organization type (academic hospital, private lab, hospital group, other)
- Site locations (multisite versus standalone lab)
Finally, I asked our director of client services, Lindsey Pitsch, what she believed caused the biggest variations in initial user satisfaction and acceptance. In her experience, there is in most cases a direct correlation between satisfaction and user involvement in the planning stages of the implementation.
She believes that administrators sometimes take over with the thought that it is better for the pathologists not to “waste” their time on a change the administrators perceive to be an administrative task. The problem is that they aren’t just replacing a dictation system; the change alters the user’s daily workflow.

We always request the presence of PAs and pathologists on project teams. We have historically seen that clients whose users participate fully in the implementation process tend to have a much higher initial satisfaction rate: their voices are heard and they know what to expect when they go live. When users are not involved, they sometimes receive the wrong message, believing that speech recognition will never make a mistake or that on day one they will be exponentially faster than with transcription. By keeping users involved, you can properly set and manage expectations, which leads to a more successful user experience and perception.