Our flagship product, Audimus.Media, is a proprietary automatic captioning system built on VoiceInteraction’s speech processing technology, combining state-of-the-art signal processing and machine learning techniques targeted at live shows. It covers the captioning accessibility requirements of live TV broadcasting, Internet streaming, VOD publishing, and video metadata indexing, using advanced audio signal analysis, large-vocabulary speech recognition, speaker identification, automatic punctuation, and content segmentation to produce highly accurate captions with low latency.
Audimus.Media is not just another AI system but a complete and mature product, recognized as arguably the most reliable automatic captioning solution currently available. For many years there was a default resistance to automatic captioning systems, but repeated formal evaluations have clearly shown that the speech processing technology powering Audimus.Media has reached a stage where it represents unquestionable added value to a TV station across all of its distribution channels.
First deployed at a national TV station 11 years ago to caption all of its live news shows, Audimus.Media incorporates the specific needs of TV producers and includes an automatic update mechanism that allows it to adapt quickly to constantly changing vocabulary.
It was conceived as a transparent addition to the video production workflow, able to push automatic captions not only to the closed caption encoder but also to encode them directly into live Internet streams. As TV stations increasingly broadcast video content to Internet distribution platforms, Audimus.Media also operates as a software closed caption encoder.
The newest feature, making its debut at the IBC Show, allows closed captions to be added to live video streams encoded with the promising AV1 video codec. This new codec joins the set of previously supported codecs: MPEG-2, H.264, and H.265.