As an Implementation Specialist, I work with users of many different backgrounds, personalities, and job functions. It is my responsibility to ensure that each user I work with can continue to work as naturally as possible while adhering to best practices that are proven to benefit all users. Below are the four most common mistakes I have seen new VoiceOver users make, along with some best practices to prevent them.
1. Not Maintaining Proper Microphone Position
The first thing I do when training a new user is help them set up their speech recognition profile. During this step, I show each user where their microphone should be positioned in relation to their mouth. The software then learns how they speak and sound, but it does so in the context of that microphone position.
What I sometimes find is that new users will move their headset around their neck, fail to face their desktop microphone while talking, sit farther from the microphone than they did during training, or constantly change the position of their handheld device. Any of these can cause significant recognition accuracy issues.
Tip: If using a headset, place the microphone element three fingers' width to the left or right of your mouth. If using a desktop microphone, position it nine to twelve inches in front of your mouth for best results. If using a handheld microphone, keep it four to six inches from your mouth.
2. Not Training or Correcting Words
We understand that there is a lot of information to absorb during an initial training session, and in most cases users are asked to apply this knowledge almost immediately. Because of this rapid transition to productive use, many users in the early stages focus on their initial dictation but forget to train or correct words that were misrecognized. Instead, they select the word or phrase and repeat it, hoping it will be understood the next time. This doesn't allow the software to "learn" and adapt to their voice. We recommend instead that users make use of the Dragon correction command to correct words. When used properly, the software learns from its mistakes and is more likely to recognize the correct words the next time you say a similar word or phrase.
Tip: I recommend reading Lindsey Pitsch's blog post, "Word Correction Saves Time and Improves Recognition Accuracy", to understand proper correction techniques for specific situations.
3. Improperly Closing User Profiles When Exiting the Software
As mentioned in the previous tip, users can improve their accuracy by correcting and training words. For these beneficial changes to be recorded, however, they must be saved to your central user profile. There are other types of changes you may make during a dictation session as well: you may add, modify, or delete a template, or change some of your user configuration settings. While these changes are recorded locally in real time, if your user profile is not shut down properly, you risk profile corruption and having to revert to an earlier version of your profile.
Tip: In order to avoid this potentially frustrating circumstance, we recommend the following methodology to close a user profile when completing a session...
- From the VoiceOver toolbar, Click "Users"
- Then Click "Close User"
4. Not Following the Designed Workflow
One of the main benefits of using our software versus generic speech recognition applications is the fact that each solution we provide our customers is designed to work within their AP System environment. We build a standardized series of voice commands and processes that help users rapidly navigate through an AP System while also maximizing patient safety.
As a result, we don't just train our users how to use speech recognition software. We teach them how to use the speech recognition software within the context of their AP System workflow. After all, what good is knowing how to dictate if you don't know how to get to a specific place in the AP System or how each window is designed to accept speech-driven text?
Not all AP Systems are created equal, and not every window in an AP System supports full "select and say" speech recognition functionality. Our team designs workflows with the user's role and AP System capabilities in mind, and following that prescribed workflow step-by-step is necessary to ensure report creation success.
Tip: We always leave each user with a workflow manual. Please refer to this document often in the early stages of use in order to ensure that you are maximizing your use of the system. If you have misplaced this document, please feel free to reach out to our support team and they will forward you an additional copy.
In summary, we recognize that learning any new system can be challenging, and mistakes are to be expected. That said, by following the tips outlined in this blog post, you can reduce the chance of repeating some of the most common mistakes we see from new users, improving your short-term success and easing your transition to our solution. As the saying goes, "no matter how many mistakes you make or how slow you progress, you are still way ahead of everyone who isn't trying." In this case, being ahead equates to saving time and money and improving patient safety, so it is well worth the effort.