Imagine you’re overhearing a conversation between two people and, in the middle of the discussion, one of them says “colon.” If you were busy checking a new text message on your phone or reading the latest tweet or blog from Voicebrook and this was ALL you heard, you might not know whether the person was talking about the punctuation mark, the anatomical structure, or a European colonial settler (seriously). Now, had you also known that the word before “colon” was “ascending,” and that the two speakers were pathologists’ assistants in the gross room, those contextual clues would have made it much easier to determine the correct meaning of the word.
Because we as humans come equipped with a fairly good short-term memory, we have the ability to (very quickly and mostly subconsciously) sort out what every word directed at us means. The same concept applies to speech recognition, although the software's memory is quite a bit shorter than ours: its short-term memory is cleared each time the user pauses for more than about a second.
On a separate but related note, when you’re speaking to a computer, the speech engine has been pre-programmed with the frequency at which certain words appear on their own or in the context of other words. So not only is the speech engine piecing together the appropriate combinations of phonemes (the sound sub-units of language) to form the individual words spoken, it’s also calculating whether those word combinations make sense together, based on its electronic understanding of the English language and the subject (e.g., a pathology report) you are talking about.
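To make that idea concrete, here is a toy sketch of context scoring. The word-pair counts below are entirely made up for illustration; a real engine works with phonemes and vastly larger statistical models, but the principle of preferring word pairs it has seen together often is the same:

```python
# Toy bigram model: a hand-built table of how often word pairs co-occur.
# All counts here are invented for illustration, not from any real engine.
bigram_counts = {
    ("ascending", "colon"): 120,
    ("sigmoid", "colon"): 95,
    ("the", "colon"): 40,
}
# How often each first word appears at all:
unigram_counts = {"ascending": 130, "sigmoid": 100, "the": 5000}

def bigram_probability(prev_word, word):
    """Estimate P(word | prev_word) from the toy counts."""
    pair = bigram_counts.get((prev_word, word), 0)
    total = unigram_counts.get(prev_word, 0)
    return pair / total if total else 0.0

# The same word, "colon," scores very differently depending on context:
print(bigram_probability("ascending", "colon"))  # 120/130, about 0.92
print(bigram_probability("the", "colon"))        # 40/5000 = 0.008
```

The engine's calculation is far more sophisticated, but this is why "ascending" in front of "colon" all but settles which "colon" you meant.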
Understanding all of this, I’d like to share my two most important tips for increasing your accuracy when speaking to your computer:
1. Speak in full sentences (or at least full thoughts)
I have implemented speech recognition software for many doctors across many medical specialties. What I’ve found across the board is that those who plan out their sentences before beginning to speak tend to have better accuracy overall. Those who pause frequently in their dictation, on the other hand, end up spending more time proofreading and editing their text. This often creates a negative feedback loop: the way the user speaks causes mistakes, which leads the user to trust the software less and speak to it the way they might to someone who’s hard of hearing (e.g., “Received… in… formalin…” or worse, “re…ceived…in…for…ma…lin”).
2. Make your vocabulary YOUR vocabulary
All speech engines have an underlying vocabulary that consists of thousands of words and phrases that the engine matches up with the sounds coming from the user. These vocabularies are usually specialty-specific and were built partially by scanning thousands of reports related to that specialty.
At Voicebrook, when we start setting up a pathology project, we ask for a list of common stain names, institution names, and physician names so that we can add site-specific context to your baseline vocabulary. Even with all of that preparation, it may turn out that your own “personal poetry” (a phrase I’ve heard from countless pathologists across the country) includes word combinations the engine wasn’t built to understand. Adding words and phrases to match your speech patterns is a very simple process that can resolve a LOT of accuracy issues. (In VoiceOver, this can be done using the Dragon View/Edit tool in the Vocabulary menu.)
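As a loose illustration of why adding your own terms matters, here is a toy matcher that, like a speech engine forced to choose from its vocabulary, maps whatever it "hears" onto the closest term it already knows. The word list is invented, and the use of spelling similarity (rather than the phoneme matching a real engine performs) is purely for illustration:

```python
import difflib

# A tiny made-up vocabulary; a real engine's has many thousands of entries.
vocabulary = ["keratin", "colon", "formalin"]

def best_match(heard, vocab):
    """Return the closest vocabulary entry to the heard word, if any."""
    matches = difflib.get_close_matches(heard, vocab, n=1, cutoff=0.5)
    return matches[0] if matches else None

# "pancytokeratin" isn't in the vocabulary, so the matcher is forced to
# pick the nearest term it knows:
print(best_match("pancytokeratin", vocabulary))  # "keratin"

# After the term is added to the vocabulary, it matches exactly:
vocabulary.append("pancytokeratin")
print(best_match("pancytokeratin", vocabulary))  # "pancytokeratin"
```

The engine can only ever output words it knows, so the single biggest favor you can do it is to teach it the words you actually use.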
I hope this brief blog post brought focus to these two points, which I believe will have the most significant impact on your accuracy, and that these tips help contribute to a very positive experience with speech recognition. For more information, or for help adding or correcting words so the software understands you better, please contact support at email@example.com.