Machine interpreting, also known as automated interpreting, is a relatively new term. This makes sense, as it is also fairly new technology.
So, what is machine interpreting? If you’re familiar with machine translation, you’ve probably figured it out already.
While machine translation automatically translates written text, machine interpreting converts speech in one language into speech in another.
Does this mean human interpreters are out of jobs? No – at least, not in the near future.
How Machine Interpreting Works
Machine interpreting essentially combines two forms of previously existing technology: voice recognition software and machine translation.
Machine interpreting can be broken down into three basic steps:
1. Voice recognition software picks up a speaker's voice and transcribes it into text.
2. The text is run through a machine translation program.
3. The translated text is converted into speech in the target language via speech synthesis.
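The three steps above can be sketched as a simple pipeline. Everything below is a toy illustration: the function names and the tiny glossary are stand-ins, not any real product's API.

```python
# A toy sketch of the three-step machine interpreting pipeline.
# Each stage is a stand-in: a real system would call a speech
# recognizer, a translation engine, and a speech synthesizer.

def recognize_speech(audio: str) -> str:
    # Stage 1: speech recognition (here the "audio" is already text).
    return audio.lower().strip()

def machine_translate(text: str) -> str:
    # Stage 2: machine translation via a toy word-for-word glossary.
    glossary = {"hello": "bonjour", "world": "monde"}
    return " ".join(glossary.get(word, word) for word in text.split())

def synthesize_speech(text: str) -> str:
    # Stage 3: text-to-speech (represented here as a tagged string).
    return f"[spoken] {text}"

def interpret(audio: str) -> str:
    # Chain the three stages, just as the steps above describe.
    return synthesize_speech(machine_translate(recognize_speech(audio)))

print(interpret("Hello world"))  # [spoken] bonjour monde
```

Note how an error at any stage flows straight into the next one; that compounding is exactly why the glitches discussed below add up so quickly.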
This seems like a simple process. However, because both voice recognition software and machine translation are still error-prone, machine interpreting remains a very unreliable form of communication.
Issues with Machine Interpreting
The core issue with machine interpreting is its reliance on machine translation. While machine translation can often convey the gist of a statement, it is not acceptable in situations that demand accuracy.
Also, as anyone who has ever been frustrated with the voice command features on their phone will agree, voice recognition software is not a perfect product.
Poor connections, loud background noises, heavy accents, mispronunciations and regional dialects all make it difficult for computers to “understand” your voice.
However, neither one of these issues has stopped some companies from attempting to create automated interpreting services.
Current Machine Interpreting Services
In June 2013, an Israeli startup company called Lexifone launched an automated telephone interpreting service. Lexifone is able to interpret between English and seven other languages (French, Spanish, Italian, Portuguese, German, Russian and Mandarin).
According to Lexifone, their service runs audio through four separate translation programs, evaluates the resulting candidates and then selects the best translation. The company states that, unlike web-based translation services (such as Google Translate) that use statistics to analyze translation patterns, its programs attempt to analyze the meaning of speech.
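Picking one output from several translation engines can be sketched as a simple scoring loop. The scoring rule below (favoring candidates that agree with the others) is purely illustrative and is not how Lexifone's system actually works:

```python
from collections import Counter

def pick_best(candidates):
    # Score each candidate translation by how often its words appear
    # across all candidates, so outliers score low and the candidate
    # closest to the consensus wins. (Illustrative only; a real system
    # would use much richer signals than word overlap.)
    word_counts = Counter(w for c in candidates for w in c.split())

    def score(candidate):
        return sum(word_counts[w] for w in candidate.split())

    return max(candidates, key=score)

candidates = [
    "what is the problem",
    "what is the issue",
    "what problem is it",
    "australian pig",
]
print(pick_best(candidates))  # what is the problem
```

Under this toy rule, an outlier like "australian pig" scores lowest and would be passed over; the story below suggests real systems do not always manage even that.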
However, an AP article states that Lexifone “doesn’t offer a major leap forward in translation technology.”
According to the article, the machine interpreting process is very slow: users must wait for prompts both before and after speaking, and calls are interrupted by announcements about the product's commands and other services.
On top of the less-than-friendly user experience, the interpreting is far from accurate.
Apparently the program is best suited to straightforward dialogue involving trade and business terminology (though it reportedly muddled details such as numbers, which is particularly problematic when interpreting for business purposes).
But colloquial speech did not fare well at all.
According to the article, when a Chinese speaker asked in Mandarin, “What’s the issue?” the system rendered in English, “Australian pig.” (We’d love to know what three options were passed over to determine “Australian pig” as the best translation.)
Clearly, it will be quite some time – if ever – before machine interpreting becomes a viable replacement for professional interpreting services.
Machine Interpreting at the 2020 Tokyo Olympics
Then again, Japan seems very optimistic about the future of automated interpreting.
Researchers at the Nara Institute of Science and Technology have been developing software to improve the accuracy and speed of automated simultaneous interpreting from Japanese to English.
There is a major difference between consecutive and simultaneous interpreting. Lexifone’s aforementioned telephone service would fall under the category of consecutive interpreting, but simultaneous interpreting is much quicker: it begins giving an interpretation in the target language even before the source speaker has finished talking.
This becomes very tricky when interpreting from a language like Japanese, which places verbs at the end of the sentence. However, the Nara team has developed a program that can allegedly anticipate a sentence's meaning before it is finished. According to the research team, its performance is comparable to that of a simultaneous interpreter with a year of experience.
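The verb-final problem can be shown with a toy example. In the sketch below, tokens arrive one at a time, and the "interpreter" cannot emit an English clause until the verb arrives, because English puts the verb in the middle. The romanized-Japanese glossary and the sentence are illustrative inventions, and this is not the Nara program, just a demonstration of the reordering delay it tries to overcome:

```python
# Toy illustration of why verb-final word order delays simultaneous
# interpreting: nothing can be emitted until the verb arrives.

GLOSSARY = {"watashi-wa": "I", "ringo-o": "an apple", "tabemasu": "eat"}
VERBS = {"tabemasu"}

def simultaneous_interpret(tokens):
    buffer, output = [], []
    for token in tokens:
        buffer.append(token)  # must hold everything heard so far
        if token in VERBS:
            # Only now can the English clause be built, reordered
            # into subject-verb-object.
            subject, *objects, verb = buffer
            output.append(" ".join([GLOSSARY[subject], GLOSSARY[verb]]
                                   + [GLOSSARY[o] for o in objects]))
            buffer = []
    return " / ".join(output)

print(simultaneous_interpret(["watashi-wa", "ringo-o", "tabemasu"]))
# "I eat an apple" -- nothing could be emitted until "tabemasu" arrived
```

A system that anticipates the verb, as the Nara researchers claim theirs does, could start speaking sooner, but at the cost of being wrong whenever its guess misses.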
While this technology may be sufficient for interpreting formal sports commentary (e.g., scores, finishing times and numbers of medals won), we’re still betting that the use of machine interpreting at the 2020 Olympics will give us plenty of translation fails.
What do you think of machine interpreting and what it means for the future? We’d love to hear your opinions in the comments below.