This year the EBU BroadThinking Conference felt like a holistic swirl: a milestone in the trend of technologies combining, through creative evolution, into sets greater than the sum of their parts. Where broadcast meets broadband, an interesting fusion effect dilutes the traditional boundaries between screens, with handheld devices becoming part of the big-screen experience, or extending it, rather than trying to supplant it, in an environment where all devices converge towards a restricted set of standards rather than each tracing its own line.
While we tend to assume that standardization kills creativity, events like BroadThinking show the opposite: by pooling energies to solve common problems together, we can both arrive at a more evolved solution and concentrate on what matters beyond the pixel grid: a user experience so consistent across screens that you forget more than one screen is involved.
Android has limited support for HLS (Apple's HTTP Live Streaming protocol), and support varies from one version or device to the next. Android devices before 4.x do not support HLS reliably: versions prior to 3.0 (Gingerbread and earlier) lack it entirely, and the first attempt at support, in Android 3.0 (Honeycomb), suffered from excessive buffering that often caused streams to fail. Devices running Android 4.x and above do support HLS, but there are still inconsistencies and problems.
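Given this inconsistency, players often feature-detect HLS rather than trust the OS version. A minimal sketch in TypeScript, assuming the usual browser approach of probing `HTMLMediaElement.canPlayType()` (the MIME type strings are the commonly probed ones; treat any non-empty answer as tentative, not guaranteed, support):

```typescript
// MIME types commonly probed for HLS; some devices only answer for the
// legacy spelling, so both are checked.
const HLS_MIME_TYPES = [
  "application/vnd.apple.mpegurl",
  "application/x-mpegURL",
];

// canPlayType() returns "", "maybe" or "probably". Because Android builds
// differ, a non-empty answer only means the device *claims* HLS playback.
function interpretCanPlayType(answers: string[]): "unsupported" | "tentative" {
  return answers.some((a) => a !== "") ? "tentative" : "unsupported";
}

// Hypothetical in-browser usage (requires a DOM, so shown as a comment):
// const video = document.createElement("video");
// const answers = HLS_MIME_TYPES.map((t) => video.canPlayType(t));
// const hls = interpretCanPlayType(answers);
```

Even a "probably" answer is no promise that a given stream will play without the buffering issues described above, so production players typically pair this check with a fallback format.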
The world’s first subtitling technology that uses Automatic Content Recognition to identify, sync and serve your subtitles or captions, efficiently and effectively, to any device, in any workflow.
The FP7 HBB-Next project (http://hbb-next.eu) presents a proof-of-concept of multi-source multimedia synchronisation. The demo shows content from four sources.
This presentation was made at the MPEG meeting in Shanghai, China, in October 2012, in relation to input contribution M26906. It gives the details of the demo.
The introduction of caption transmuxing simplifies deployment by minimizing the number of source files required, though the process is slightly different for live vs. on-demand in Wowza Media Server 3.5.
The 2012 FOMS workshop took place in Paris, France. The working meeting consisted of focused sessions dedicated to bridging gaps among browser vendors and ensuring the associated APIs meet the needs of the developers making use of them. Browser vendors represented included Google Chrome, Mozilla Firefox, and Opera. Player platforms such as YouTube, Brightcove / VideoJS, LongTail / JW Player and of course Kaltura were also represented.
Organised by Silvia, the meeting has long served as a great space to hack and ‘get things done‘ in the open media standards and software space. This year was no exception. In this post I highlight some of the sessions that took place and provide a rough summary of the state of the associated web video APIs and components: WebRTC, Media Source Extensions / MediaStream Processing API, WebVTT, new codecs...
Captioning is coming to internet video. Legislation goes into effect in the US during 2012 and 2013 that mandates closed captioning on certain categories of online content – see our earlier post for details on the legislation. But even apart from this legislation, closed captioning is a good thing for accessibility and usability, and is yet another milestone as internet video marches towards maturity.
Unfortunately, closed captioning is not a single technology or “feature” of video that can be “turned on”. There are a number of formats, standards, and approaches, ranging from good to bad to ugly. Closed captioning is kind of a mess, just like the rest of digital video, and is especially challenging for multiscreen publishers.
So if you want to publish video today for web, mobile, and connected TV delivery, what do you have to know about closed captioning? This post will outline the basics: how closed captions work, formats you may need to know about, and how to enable closed captions for every screen.
The EBU has today published EBU-TT part 1, with TT standing for Timed Text. It's a follow-up to the widely used EBU STL specification, which was originally published in 1991 (when the medium for exchange of subtitles was the 3.5" floppy disk!). The new format is XML-based, which makes it 'human readable' and more suited to modern integrated file-based production methods.
EBU-TT is a simplified version of the W3C Timed Text specification, which means it fits well into the broad family that includes W3C TTML and SMPTE-TT, the latter being more focused on the US environment and on distribution. EBU-TT was developed by the EBU's XML Subtitles group, chaired by Andreas Tai of IRT.
Digital Rapids has announced powerful new enhancements to the Digital Rapids Stream software for the company's StreamZ, StreamZHD and Flux encoding solutions. The same new capabilities are also available, where applicable, in the StreamZ Live family of live streaming encoders and in the Digital Rapids Transcode Manager automated, high-volume file transcoding software. Recently released or soon-to-ship features include enhanced closed caption support for adaptive bit rate streaming and support for automated advertising insertion when streaming live with Adobe Flash technologies.
Recently released new software updates add support for Closed Captions with Microsoft IIS Smooth Streaming technology and with HTTP Live Streaming through the optional iPhone/iPad encoding module.
Expanded support for the use of cueing messages with Adobe Flash technologies enhances content owners' ability to monetize their media across multiple screens in fully automated workflows. Cueing messages in live input sources, commonly used in broadcast operations, can be detected to automate the insertion of cue points into outputs targeting Adobe Flash Player and Adobe AIR applications. Additional new features in the latest software updates include DFXP timed text file creation from live and file-based sources.
TitleExchange Pro is a unique tool which makes complicated things simple and fast when juggling subtitles.
It's probably the most advanced tool available for the Apple platform to interchange subtitle information. Using it with Apple's Final Cut Pro makes this NLE the most flexible editing software package in the world when it comes to subtitles.
If you have ever wished to get your titles out of Final Cut Pro into DVDSP (or another DVD authoring application) or into a text file for a translation bureau, to get titles into Final Cut Pro from another source such as DVDSP, STL or a file from the translation bureau, to create a QT movie with all your Final Cut Pro subtitles without the need for rendering, to add captions to your YouTube videos, or just to "equalize" your Final Cut Pro titles, TitleExchange will be your "one stop" tool.
The EBU has published a new Subtitling Format specification (EBU Tech 3350). The new format is called EBU Timed Text (EBU-TT) and provides an easy-to-use method to interchange and archive subtitles in XML.
EBU-TT is based on the W3C Timed Text Markup Language (TTML) specification. The EBU format can be seen as a constrained version of the W3C spec, aimed at providing a solution more tailored to broadcast operation. This is especially relevant as broadcasters are increasingly moving to file-based HDTV facilities, where subtitles are created, edited, exchanged and archived together with the content.
The EBU has published a new specification for the distribution of subtitles: EBU-TT-D (Tech 3380). The XML based EBU-TT-D format is a low-complexity way to combine subtitle text, styling, timing information, and positioning details to allow implementers to provide users with a subtitle experience at least as good as that on current TVs, regardless of the platform on which they are watching the content.
EBU-TT-D was developed in less than a year, by taking into account expertise from users, distribution parties, hybrid TV organizations and CE manufacturers. The work built on the EBU XML Subtitles group’s knowledge gained when creating the EBU-TT subtitle format for production interchange and archiving (EBU Tech 3350). The specification is derived from the base W3C TTML specification. It strongly constrains the feature set of TTML to make it easier for decoder/renderer implementers to add subtitle overlays to video without the complexity that is present in TTML to support other scenarios.
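As an illustration of the family of documents EBU-TT-D constrains, a minimal TTML document looks roughly like the following. This is a generic TTML sketch, not a conformant EBU-TT-D file; the cue text and timings are invented, and Tech 3380 defines the exact namespaces and required attributes:

```xml
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <!-- One subtitle, displayed from 1s to 4s on the media timeline -->
      <p begin="00:00:01.000" end="00:00:04.000">Hello world</p>
    </div>
  </body>
</tt>
```

The appeal of constraining TTML this way is that a decoder only has to implement the small, well-defined subset above plus EBU-TT-D's styling and positioning rules, rather than the full generality of TTML.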
The new project provides a real-time API for closed captions, making it possible to code against what’s being said on TV right now — and enabling a broad range of applications.
I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.
TTML has been specified by the W3C Timed Text Working Group and was released as a v1.0 Recommendation in November 2010. Since then, several organisations have tried to adopt it as their caption file format, including SMPTE, the EBU (European Broadcasting Union), and Microsoft.
Both Microsoft and the EBU looked at TTML in detail and decided that, to make it usable for their use cases, its functionality needed to be restricted.
One of the more exciting developments in HTML5 video is the inclusion of the track element in the newest versions of the desktop browsers. In addition to bringing captioning and subtitle support to HTML5 video, the invisible track element allows publishers to attach a rich array of textual metadata to their videos. In this blog post, we'll look at the different types of tracks that can be used in conjunction with the track tag:
- WebVTT: A New Format for Text Tracks
- Accessibility: Captions, Subtitles and Descriptions
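As a sketch of how the element is wired up, the markup below attaches an English caption track to a video; the file names are hypothetical:

```html
<video src="lecture.mp4" controls>
  <!-- kind can be subtitles, captions, descriptions, chapters or metadata -->
  <track kind="captions" src="lecture.en.vtt" srclang="en" label="English" default>
</video>
```

The `kind` attribute selects which of the track types listed above the file provides, `srclang` declares the language, and `default` asks the browser to enable the track without user action.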
EBU-TT stands for EBU Timed Text. It is an XML-based subtitling format intended as a follow-up to the currently widely used EBU STL format ( EBU Tech 3264).
In January 2012, the EBU published EBU-TT part 1 - Subtitling format definition (EBU Tech 3350) for industry comments. This specification defines an easy-to-use XML structure for the interchange and archiving of subtitles. It builds on the W3C Timed Text Markup Language (TTML) 1.0. To help developers get started, an XML Schema implementation of EBU-TT part 1 is available for download.
Red Bee Media has built a new live subtitling platform integrating speech recognition technology, writes Adrian Pennington. It is also participating in a new EU-funded project aimed at developing automatic transcription and translation technology.
The company, which provides closed captioning services for the BBC’s entire output including iPlayer, is to introduce a new live subtitling platform, Subito (Italian for “immediately”), this summer. The system, which is for internal use only, introduces the ability to auto-align pre-prepared text.
“In a news scenario, for example, quite a high proportion of words are pre-scripted or repeated from previous half hours so it’s possible to use audio and other metadata to automatically repurpose and transmit text that already exists,” explained David Padmore, Director of Red Bee’s Access and Editorial Services. “This will reduce the delay in delivering subtitles on the screen and increase the accuracy of realtime captioning.”
Web video accessibility is a broad term that refers to making videos usable for all types of viewers. Traditionally, it refers to those with impairments, but more recently the definition has broadened. At LongTail Video, we feel strongly about creating the means of equal access to online video content. By building products that support features such as multi-language video captions, we aim to increase viewer accessibility. Though there are many pieces to making a video fully accessible, in this post we focus the discussion on closed captions.
We’ve been able to play video in the browser without a plugin for a couple of years now, and whilst there are still some codec annoyances, things appear to have settled down on the video front. The next step is adding resources to the video to make it more accessible and provide more options to the viewer.
We currently have no means to provide information about what’s happening or being said in the video, which means the video isn’t very accessible and the user can’t easily navigate to a particular section of the video. Thankfully, there’s a new format specification in the works called WebVTT (Web Video Text Tracks). As of now, it’s only in the WHATWG spec, but the recently established W3C Web Media Text Tracks Community Group should introduce a WebVTT spec to the W3C soon.
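To give a feel for the format described above, a minimal WebVTT file is just a header, a blank line, and timed cues; the cue text and timings here are illustrative:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Hello, and welcome to the video.

00:00:04.500 --> 00:00:07.000
Cues are separated by blank lines.
```

Because it is plain UTF-8 text, captions can be authored in any editor and served as a static file alongside the video.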