Watch Your Language!

WIRELESS VOICE SYSTEMS HELP FILL COMMUNICATION GAPS

Ours is indeed a multicultural society, one that thrives because of its diversity. But we can only reap the benefits that diversity offers if we can communicate with one another, even when we may not all speak the same language. The need for language interpretation across all facets of business and government is great, creating opportunities for AV integrators to provide the technology that facilitates it.

The biggest benefit of language interpretation systems is that they’re designed to support simultaneous translation: for example, a speaker presents in English and, at the same time, a Spanish-speaking interpreter translates and transmits what’s being said to the Spanish-speakers in the audience, who are equipped with receivers. This is much more efficient (and a lot less cumbersome) than consecutive interpretation, a ping-pong approach that requires the speaker to deliver his or her message in chunks, pausing to allow the interpreter to translate what’s been said before continuing with the presentation. Language interpretation systems also use the same technology and frequencies as assistive listening: in 2013, in response to a petition put forth by Eden Prairie, MN-based assistive listening products manufacturer Williams Sound, the FCC modified its rules to allow simultaneous translation devices to operate in the 72 MHz to 76 MHz band.
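For the arithmetic-minded, the efficiency gap is easy to model. Here is a back-of-envelope sketch in Python, with the purely illustrative assumption that consecutive interpretation roughly doubles running time because everything is said twice:

```python
# Back-of-envelope comparison of the two interpretation modes described
# above. The 2x factor for consecutive mode is an illustrative
# assumption, not a measured figure.

def meeting_minutes(presentation_min: float, mode: str) -> float:
    """Estimate total running time for a presentation."""
    if mode == "simultaneous":
        return presentation_min        # interpreter speaks in parallel
    if mode == "consecutive":
        return presentation_min * 2.0  # every chunk is delivered twice
    raise ValueError(f"unknown mode: {mode}")

print(meeting_minutes(45, "simultaneous"))  # 45.0 minutes
print(meeting_minutes(45, "consecutive"))   # 90.0 minutes
```

In other words, a 45-minute talk delivered consecutively occupies the room for about an hour and a half.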

The intensity of simultaneous translation requires interpretation systems to be extremely easy to use. “It’s like doing an extreme sport,” said Marcela Lopez, a Spanish-English interpreter and president of Spanish Solutions Language Services, an interpretation and translation firm based in Miami, FL. “You’re listening and coming out with the translation at the same time—you’re monitoring yourself and the speaker.” Interpreters generally trade off every 20 or 30 minutes, sometimes less if presenters speak particularly fast, or if the presentation is peppered with highly technical information like acronyms, she explained.

“People using this technology want ease of use and portability,” said Cory Schaeffer, vice president of business development at assistive listening systems manufacturer Listen Technologies in Bluffdale, UT. She highlighted the company’s recently launched iDSP RF and IR receivers, which can be programmed to clearly identify the language being interpreted (channels, for example, are labeled “English,” “Spanish,” and “Portuguese,” versus “Channel A,” “Channel B,” and “Channel C”). “They don’t want to be worrying about the technology; they need to be thinking about their job of interpretation. They can’t have this ramp-up time to train on the product—it’s got to be intuitive.”
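Conceptually, that labeling is just a mapping from channel numbers to language names, with a generic fallback for channels that haven’t been programmed. A minimal sketch (the names and structure here are hypothetical, not Listen Technologies’ actual firmware or API):

```python
# Hypothetical sketch of language-labeled receiver channels, as
# described above. Not Listen Technologies' actual API.

CHANNEL_LABELS = {
    1: "English",
    2: "Spanish",
    3: "Portuguese",
}

def display_label(channel: int) -> str:
    """Return the language name a receiver's display would show,
    falling back to a generic label if the channel is unprogrammed."""
    return CHANNEL_LABELS.get(channel, f"Channel {channel}")

print(display_label(2))  # Spanish
print(display_label(4))  # Channel 4 (unprogrammed fallback)
```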

Traditionally, language interpretation systems are based on RF, IR, and digital RF transmission. With the prevalence of BYOD, however, several manufacturers are developing systems that stream directly to smartphones, letting meeting participants use their personal mobile devices as receivers. (In other words: there’s an app for that.) As of press time, Sennheiser’s MobileConnect supported up to 25 users per system, with plans to expand to 50, and then 100, throughout the course of this year. Williams Sound released its mobile solution, Hearing Hotspot, at InfoComm 2014.
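Those per-system user limits behave like any capped streaming channel: the server simply refuses connections beyond its ceiling. A rough sketch of the idea, with hypothetical names (this is not Sennheiser’s or Williams Sound’s implementation):

```python
# Illustrative sketch of a per-system listener cap, such as the
# 25-user limit mentioned above. Class and method names are
# hypothetical, not any vendor's actual software.

class StreamChannel:
    def __init__(self, language: str, max_listeners: int = 25):
        self.language = language
        self.max_listeners = max_listeners
        self.listeners: set[str] = set()

    def connect(self, device_id: str) -> bool:
        """Admit a device if capacity allows; refuse it otherwise."""
        if len(self.listeners) >= self.max_listeners:
            return False
        self.listeners.add(device_id)
        return True

spanish = StreamChannel("Spanish", max_listeners=25)
results = [spanish.connect(f"phone-{n}") for n in range(26)]
print(sum(results))  # 25 -- the 26th device was refused
```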

But while these mobile device-based solutions may cut down on the amount of equipment that facilities need to provide (namely receivers), Paul Ingebrigsten, president and CEO at Williams Sound, cautioned that they are not necessarily suited for all applications just yet. “If your intent is to rely on an existing Wi-Fi infrastructure, that can be a non-starter,” he said. In some cases a dedicated network is necessary; if the system is piggybacking on an existing one, it had better be robust. He also noted that users can run into compatibility issues, especially if their device is several years old. “In many ways, the question for the person who is running the meeting is: what kind of problem do you want to manage? Because the problems don’t just go away—you just get a different set of problems when you’re managing bring-your-own-device as opposed to providing purpose-built devices.”

One of the challenges that multilingual organizations face is related to real estate: interpretation booths take up a significant amount of space. Kevin Stoner, director of operations at Media Vision, a wired and wireless conferencing and interpretation solutions developer headquartered in Oakland, CA, notes that the company has addressed this by incorporating room-combining capabilities via fiber optics. He explained that at the United Nations in New York, NY, there is a regular need for simultaneous translation in six languages, but for some events, that need increases to 10 or 12. “The best way to utilize the space is not to constantly have the maximum number of booths, but to ‘borrow’ those additional booths [that are in] another location,” he said. The same concept could apply to courtroom and corporate applications: “A courtroom can’t have three different interpretation booths regularly in every courtroom,” even if there is a somewhat regular need to interpret three different languages simultaneously. “Providing three that are centrally located becomes critical in designing the space and utilizing it in the best way possible.”
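The “borrowed booth” model amounts to dynamic assignment from a shared pool: a room requests a language, and the system routes a free, centrally located booth to it over the fiber link. A simplified sketch of the concept (all names are hypothetical; this is not Media Vision’s actual design):

```python
# Simplified sketch of assigning centrally located interpretation
# booths to rooms on demand, per the "borrowed booth" concept above.
# Purely illustrative; not Media Vision's actual system.

# booth id -> (room, language) once assigned, or None while free
booths = {"booth-1": None, "booth-2": None, "booth-3": None}

def assign_booth(room: str, language: str) -> str:
    """Route the first free booth in the shared pool to a room."""
    for booth, assignment in booths.items():
        if assignment is None:
            booths[booth] = (room, language)
            return booth
    raise RuntimeError("No free booths; wait for a session to end")

print(assign_booth("Courtroom 2", "Spanish"))   # booth-1
print(assign_booth("Courtroom 5", "Mandarin"))  # booth-2
```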

[Photo captions: (Left) The Sennheiser MobileConnect supports up to 25 users per system, with plans to expand to 50, and then 100, throughout the course of this year. (Right) The Sennheiser tourguide 2020-D system relies on digital RF transmission to connect the guide’s mic to his or her tour group, wearing the HDE 2020-D II headset.]

While healthcare, law enforcement, and courtroom applications—where lives literally hang in the balance—are the most obvious use cases, the need for language interpretation systems touches pretty much every market one can think of. National and multinational organizations must communicate with employees, suppliers, and clients from all over the world, and higher education institutions courting foreign students may incorporate language interpretation into events, such as campus tours. At the K-12 level, schools apply it to parent-teacher meetings in an effort to empower non-English-speaking parents to be involved in their children’s education, and houses of worship use it to address increasingly diverse congregations.

But even though there may be a need for it, language interpretation isn’t always top of mind, resulting in missed opportunities. “I think the attitudes [about it] in different parts of the world differ quite dramatically,” said Ryan Burr, technical sales manager at Sennheiser Middle East in Dubai, U.A.E. In this region, the manufacturer serves 25 different countries, and Burr explained that language interpretation plays a big role because businesses, sports facilities, educational institutions, and the hospitality industry attract people from all over the world. Europe, he noted, is similar. For U.S.-based integrators, he believes that presenting language interpretation systems as an option can lead to opportunities beyond the sale of these products. “They’re going to have some form of assistive listening requirement somewhere within the project, and the [integrator] should try to develop that into a much more comprehensive system than just assistive listening.”

With both public and private sector organizations focused on collaboration, Schaeffer urged AV integrators to include language interpretation functionality in their project proposals. “Just asking the question: ‘do we have a need to include people that may not speak English?’ It’s about understanding what the experience is going to be like for all—not just the English-speakers,” she said. “We make the assumption that everybody speaks English, but that’s just an assumption. True collaboration comes when we can bridge that gap of understanding and work to meet everybody’s needs.”


Bring It Up!

Cory Schaeffer, vice president of business development at Listen Technologies, pointed out that in some cases, the RFPs for assistive listening and language interpretation systems aren’t distributed to AV integrators, largely because clients may not realize that AV firms are positioned to provide these systems. She noted that raising the possibility of incorporating language interpretation into a project is the first step toward harnessing the opportunity this category offers. “Clients come to integrators with an idea of their need; however, a good integrator will ask questions such as, ‘do you collaborate with others who may not speak English?’ To us at Listen, it’s fairly straightforward, as we simply treat the other languages as another audio channel. It’s not black magic; it’s just audio.”
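Schaeffer’s “it’s just audio” point can be made concrete: each language is simply one more entry in the routing table, alongside the floor feed. A minimal sketch (the structures are illustrative only; the example carriers sit within the 72 MHz to 76 MHz band the FCC opened to simultaneous translation devices):

```python
# "It's just audio": each language feed is simply another channel in
# the routing table. Structures and frequencies are illustrative; the
# 72-76 MHz band is the range the FCC opened to simultaneous
# translation devices, per the article.

from dataclasses import dataclass

@dataclass
class AudioChannel:
    name: str             # e.g. "Floor", "Spanish", "Mandarin"
    source: str           # mic or interpreter booth feeding the channel
    frequency_mhz: float  # RF carrier within the 72-76 MHz band

routing = [
    AudioChannel("Floor", "Podium mic", 72.1),
    AudioChannel("Spanish", "Booth 1", 72.9),
    AudioChannel("Mandarin", "Booth 2", 74.7),
]

for ch in routing:
    print(f"{ch.name}: {ch.source} -> {ch.frequency_mhz} MHz")
```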

Carolyn Heinze has covered everything from AV/IT and business to cowboys and cowgirls ... and the horses they love. She was the Paris contributing editor for the pan-European site Running in Heels, providing news and views on fashion, culture, and the arts for her column, “France in Your Pants.” She has also contributed critiques of foreign cinema and French politics for the politico-literary site, The New Vulgate.