As part of the United States Department of Veterans Affairs, the VHA Employee Education System (EES) delivers an extensive range of training content, wellness programming, and departmental communications to VA staff and Veterans nationwide. EES has deployed ENCO's automated captioning solution, enCaption, to achieve the department's goal of increasing productivity and reducing time to market for its products and services while continuing to deliver accurate captioning services. Building on its satisfaction with the initial enCaption pilot and the purchase of its first enCaption unit, EES is now adding two more enCaption units dedicated to real-time live captioning of its two 24/7 satellite program channels.
The VA Knowledge Network (VAKN) satellite uplink center, located in St. Louis, Missouri, distributes two programming channels to 250 VA Medical Centers and Community Based Outpatient Clinics nationwide. The VA-1 channel is dedicated to delivering training and communication to over 400,000 VA staff and contractors. The second 24/7 channel, Veteran News Network (VNN), is also distributed to both internal and external audiences on platforms including YouTube and Roku. VNN offers programming on Veterans' health issues, benefits, nutrition, and other aspects of Veteran care. All VAKN content is closed captioned both for regulatory compliance and to ensure full accessibility for staff and Veterans with disabilities or impairments.
Until recently, EES had exclusively used live captioning service providers with human transcribers. The provider would dial in to the facility's caption encoders and perform real-time transcription of content that was then archived for subsequent playout on the channels. Looking to upgrade its technology and boost productivity, the VA began researching an automated, AI-powered captioning solution to enhance its existing captioning services.
“With our previous workflow, we needed to schedule captioners to dial in, but the timeframes for our Master Control engineers were pretty tight,” explained Hugh Graham, telecommunications specialist, VHA Employee Education System. “The engineer would need to start playback of the original content on one device and record the captioned result on another. It was very labor-intensive, and any technical glitches became a significant bottleneck.”
After extensive research, EES tested an enCaption unit in its St. Louis facility and decided it was a good fit. enCaption’s speech-to-text accuracy proved particularly impressive.
“Human captioners do a great job, but our unique medical terminology can be difficult for them,” explained Graham. “We can’t always send them a script of what will be said, so they are at the mercy of the information we give or don’t give them. enCaption allows us to upload word models for spelling things like medical terms and the names of our leadership, which it can then use for all future captioning. Even without the word models, enCaption has proven to be amazingly accurate for most of our content right out of the box.”
EES’ enCaption workflow mirrors its previous human captioning counterpart. Content is played out from a media server into an enCaption device. The resulting captioning data and video are fed to an EEG 470 caption encoder, where the captions are embedded. The closed-captioned video can then be routed to a record device, an uplink channel, or a streaming media encoder for distribution. EES uses Haivision encoders for streaming delivery and Haivision’s cloud-based delivery service, Haivision Hub, to deliver VNN to YouTube and Roku devices. “This signal flow makes it very simple,” said Graham. “We just embed captions into the SDI signal once, and the embedded results are available to any of our platforms.”
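The "embed once, distribute everywhere" flow Graham describes can be sketched in a few lines of Python. This is purely illustrative: the class and function names below are hypothetical and do not correspond to any actual enCaption, EEG, or Haivision API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SdiSignal:
    """Illustrative model of an SDI video signal."""
    source: str
    captions_embedded: bool = False

def embed_captions(signal: SdiSignal) -> SdiSignal:
    # Models the single caption-embedding step performed by the
    # EEG 470 encoder: captions go into the SDI signal exactly once.
    return SdiSignal(signal.source, captions_embedded=True)

def route(signal: SdiSignal, destinations: list[str]) -> dict[str, SdiSignal]:
    # Fans the already-captioned signal out to every downstream platform;
    # no per-destination captioning work is needed.
    return {dest: signal for dest in destinations}

program = SdiSignal("media-server-playout")
captioned = embed_captions(program)
feeds = route(captioned, ["record", "satellite-uplink", "streaming-encoder"])
```

Because the captions are embedded upstream of the routing step, every destination (record device, uplink, or streaming encoder) receives the same captioned signal, which is the simplification Graham highlights.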
While already enjoying the freedom of not needing to schedule human transcribers, EES looks forward to a further productivity boost with its purchase of two additional enCaption units. The new enCaption systems will be dedicated to real-time, 24/7 captioning of the VA-1 and VNN satellite channels as well as YouTube and Roku distribution. The original enCaption unit will then serve as an offline system for ad-hoc captioning of real-time sources or file-based content.
“The two additional units that we have added for our 24x7 channels enable a whole new workflow, where it’s all done in real-time rather than captioning the content and archiving it for playback later,” explained Graham. “Our first enCaption unit has already made things easier, and our new process is even more automated and ‘hands-off.’ Since we closed caption everything that goes out, having boxes sitting there that just do it without any manual steps or needing to schedule time with a human captioner is a big boost to productivity and a big time-saver. With enCaption, our Master Control will never have to worry about manually intensive offline captioning again.”