Table of contents:
- Can I select a different voice for each speaker?
- What languages are currently supported by KUDO AI?
- What is KUDO AI?
- How does KUDO AI work?
- Will this replace human interpretation?
- Shouldn't interpretation specialists like KUDO know better than to dabble in AI?
- Can I mix human interpreters and AI interpreters in the same meeting?
- Will this technology have access to specific glossaries and materials we upload?
- What if the quality is not high enough? Can I get my money back?
- What is the overall level of accuracy we can expect?
- How do I know that the results of the S2S translation are good enough when I do not understand the other languages?
- If this offering is cheaper than human interpretation services, what keeps clients from trying out KUDO AI for events in which they have previously used human interpretation?
- Does KUDO AI recognize dialects or various accents?
Can I select a different voice for each speaker?
Yes. When connecting to a meeting on KUDO using AI interpretation, you can choose between a female or a male voice. This choice is specific to each user logging in and can be updated anytime in the settings.
⚠️ If you are running an on-site or hybrid event, keep this voice setting in mind if your tech team uses a single laptop to send the audio feed into KUDO, since the choice is tied to that one login.
What languages are currently supported by KUDO AI?
The source language is the language presenters will be speaking during the meeting. KUDO AI currently supports the following source languages:
- Brazilian Portuguese
- British English
- Canadian French
- English
- French
- German
- Italian
- Mexican Spanish
- Portuguese
- Spanish
Target languages are the languages your audience can select to listen to the meeting in. KUDO AI currently supports the following target languages:
- Arabic
- Brazilian Portuguese
- Canadian French
- Chinese
- Chinese (Taiwan)
- Croatian
- Czech
- Danish
- Dutch
- English
- Finnish
- French
- German
- Greek
- Hebrew
- Hindi
- Italian
- Japanese
- Korean
- Mexican Spanish
- Norwegian
- Persian (Farsi)
- Polish
- Portuguese
- Romanian
- Russian
- Spanish
- Swedish
- Turkish
- Ukrainian
More languages are currently being developed and tested in beta versions. If you are interested in getting involved in the testing process, please reach out to your KUDO contact.
You can select multiple source languages, if this option is activated in your account, as well as multiple target languages. For the sake of stability and performance, we recommend selecting a maximum of 10 target languages. This recommendation may vary depending on your meeting setup. For tailored advice, please reach out to your KUDO contact.
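For organizers who script their own meeting setup, the check below is a minimal sketch of how these lists and the 10-language recommendation could be applied. The function and variable names are illustrative only and are not part of any KUDO API.

```python
# Minimal validation sketch for a hypothetical meeting-setup script.
# The target-language set mirrors the list above; the 10-language cap is
# the recommendation from this FAQ, not a hard platform limit.

SUPPORTED_TARGETS = {
    "Arabic", "Brazilian Portuguese", "Canadian French", "Chinese",
    "Chinese (Taiwan)", "Croatian", "Czech", "Danish", "Dutch", "English",
    "Finnish", "French", "German", "Greek", "Hebrew", "Hindi", "Italian",
    "Japanese", "Korean", "Mexican Spanish", "Norwegian", "Persian (Farsi)",
    "Polish", "Portuguese", "Romanian", "Russian", "Spanish", "Swedish",
    "Turkish", "Ukrainian",
}
RECOMMENDED_MAX_TARGETS = 10


def check_target_languages(selected):
    """Return a list of warnings about the selected target languages."""
    warnings = []
    unsupported = [lang for lang in selected if lang not in SUPPORTED_TARGETS]
    if unsupported:
        warnings.append("Not currently supported: " + ", ".join(unsupported))
    if len(selected) > RECOMMENDED_MAX_TARGETS:
        warnings.append(
            f"{len(selected)} target languages selected; for stability and "
            f"performance, no more than {RECOMMENDED_MAX_TARGETS} are recommended."
        )
    return warnings


print(check_target_languages(["Spanish", "French", "Klingon"]))
# -> ['Not currently supported: Klingon']
```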
What is KUDO AI?
KUDO AI is a machine-powered, real-time speech translation solution. Developed in-house by a team with joint expertise in language and technology, the pilot version of this ground-breaking product was launched in Q1 2023 on the KUDO platform, making it the world’s first fully integrated, end-to-end solution of its kind.
Given the potential for language access to transform the way we communicate, the product roadmap for KUDO AI—like that of all KUDO’s human interpretation tools and solutions—includes continuous optimizations.
How does KUDO AI work?
KUDO AI's methodology is also unique as machine translation technology goes: it integrates state-of-the-art Natural Language Processing systems, such as speech recognition and synthesis, with innovative KUDO language models trained to support real-time, continuous translation of spoken language. Essentially, KUDO AI goes a step further than the norm by analyzing speech structure in real time and breaking it down in a way that mimics a more natural pace of speaking. The result? A unique user experience that offers clients the ability to customize voices and even control speech fluency, based on KUDO Speedometer technology.
While we are unable to divulge details about the individual components and processes involved—which fall under KUDO's IP—we can share that the solution uses a robust cascading system comprising speech recognition, machine translation, speech synthesis, and modules that analyze the speech in real time and make informed decisions about translation strategy.
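To make the cascading approach concrete, here is a minimal, illustrative sketch of how such a pipeline can be wired together. Every class and function name below is a hypothetical placeholder; none of it describes KUDO's actual components.

```python
# Illustrative sketch of a cascading speech-to-speech translation pipeline:
# ASR -> real-time segmentation -> machine translation -> speech synthesis.
# All names are hypothetical placeholders, not KUDO components.

from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class Segment:
    text: str       # recognized source-language text
    is_final: bool  # True once the segmenter considers the phrase complete


class SpeechRecognizer:
    def transcribe(self, audio_chunk: bytes) -> Iterator[Segment]:
        """Convert incoming audio into partial and final text segments."""
        raise NotImplementedError


class Segmenter:
    def split(self, segments: Iterable[Segment]) -> Iterator[str]:
        """Group partial results into natural phrase-sized units so that
        translation can start before the speaker finishes a sentence."""
        for seg in segments:
            if seg.is_final:
                yield seg.text


class Translator:
    def translate(self, text: str, source: str, target: str) -> str:
        """Translate one phrase from the source to the target language."""
        raise NotImplementedError


class Synthesizer:
    def speak(self, text: str, voice: str) -> bytes:
        """Return synthesized audio for the translated phrase."""
        raise NotImplementedError


def pipeline(audio_chunks: Iterable[bytes], asr: SpeechRecognizer,
             segmenter: Segmenter, mt: Translator, tts: Synthesizer,
             source: str = "English", target: str = "Spanish",
             voice: str = "female") -> Iterator[bytes]:
    """Stream audio through ASR -> segmentation -> MT -> TTS."""
    for chunk in audio_chunks:
        for phrase in segmenter.split(asr.transcribe(chunk)):
            yield tts.speak(mt.translate(phrase, source, target), voice)
```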
Will this replace human interpretation?
No. KUDO AI is not a replacement for our existing solution, but an expansion of language access to market segments that have never been able to afford professional interpretation.
Interpretation is a broad field spanning many use cases, objectives, and methods. Professional interpreters are needed to ensure accurate communication across languages, and they will be for years to come. Technology can be helpful in repetitive, scripted exchanges, however, and for languages that are widely spoken. This is not a zero-sum game.
Ultimately, language is complex, and it is questionable whether AI will ever be able to replace the judgment, flair, presence of mind, and ability of a human interpreter to mediate complex philosophical or highly technical discussions. But as technology continues to evolve, so will interpreters’ roles.
Shouldn't interpretation specialists like KUDO know better than to dabble in AI?
The very fact that we are industry experts makes us the best company to launch an AI solution (best for both interpreters and clients). While AI- and technology-centric companies exist and are growing in number, KUDO AI is simply an addition, or “entry point,” to our existing interpretation offerings – one that, moreover, highlights the superior quality of professional interpreters across the industry by virtue of being marketed as a “basic” solution.
Can I mix human interpreters and AI interpreters in the same meeting?
We are currently evaluating this option, but in the very first version of KUDO AI, it will not be possible to combine human interpreters with AI in the same meeting.
Will this technology have access to specific glossaries and materials we upload?
Yes. KUDO focuses on providing the most accurate interpretation to a specific customer conversation. This is the case with our human interpreters, and it will be the case for our AI solution. We aim to allow clients, if they wish, to integrate their own resources in order to increase the quality of translation for a specific event.
You can add a list of proper names (people's names, product names, etc.) so that the AI can use them to enhance quality.
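As an illustration, such a list might look like the sketch below. The field names and file format are assumptions made for this example; they are not KUDO's actual upload format.

```python
# Illustrative only: what an uploaded proper-name list might look like.
# Field names and structure are assumptions, not KUDO's actual format.

import json

glossary = {
    "meeting_id": "example-meeting-123",  # hypothetical identifier
    "terms": [
        {"text": "KUDO", "translate": False},                  # brand name, keep as-is
        {"text": "Speedometer", "translate": False},           # feature name, keep as-is
        {"text": "Dr. Marisol Ferreira", "translate": False},  # hypothetical speaker name
    ],
}

# Save the list as a file the organizer could upload with other meeting materials.
with open("glossary.json", "w", encoding="utf-8") as fh:
    json.dump(glossary, fh, ensure_ascii=False, indent=2)
```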
What if the quality is not high enough? Can I get my money back?
Extensive rounds of user testing indicate that KUDO AI is sufficiently good to be used in select meetings and events, even in its first version. Existing clients will additionally be provided with a test meeting so they can assess the quality for themselves. Based on what use cases they want to test, their client managers will then recommend whether KUDO AI is a good fit or not.
What is the overall level of accuracy we can expect?
Tests have shown that the accuracy level of KUDO AI – in the specific scenarios for which the solution is intended – is on average 3.4/5, and that quality is 3.8/5. These scores compare surprisingly well with human interpretation.
How do I know that the results of the S2S translation are good enough when I do not understand the other languages?
This is a problem you would have with any language interpretation solution – human or AI.
Feedback from your meeting participants – during or after – should indicate the quality of both spoken and written translation provided by KUDO AI. And bear in mind that the KUDO team are, above all, experts in multilingual communication. Whether it is our human interpretation solution or our AI Speech Translator, we are constantly testing and optimizing our services.
If this offering is cheaper than human interpretation services, what keeps clients from trying out KUDO AI for events in which they have previously used human interpretation?
KUDO has designed a pricing model so that both KUDO AI and Human-powered Interpretation are available to clients. Also, in its present form, KUDO AI has several limitations that make it incompatible with certain use cases, for which human interpretation remains the best fit. For example, KUDO AI can only be used in non-interactive, unidirectional presentations where listeners are passive; for anything else, human interpreters will be needed.
Does KUDO AI recognize dialects or various accents?
We plan to cover mainstream language variants, such as Brazilian Portuguese and Canadian French. As for accents, KUDO AI probably fares better than the average interpreter in accommodating them. Part of the reason is that, unlike a human interpreter, the technology has no bias and attaches no judgment to how a speaker enunciates their words.