Language Tech Market News
The Home of Multilingual Intelligence
Curated by LT-Innovate

Announcing AVA: A Finely Labeled Video Dataset for Human Action Understanding

In order to facilitate further research into human action recognition, we have released AVA, coined from "atomic visual actions", a new dataset that provides multiple action labels for each person in extended video sequences. AVA consists of URLs for publicly available videos from YouTube, annotated with a set of 80 atomic actions (e.g. "walk", "kick (an object)", "shake hands") that are spatio-temporally localized, resulting in 57.6k video segments, 96k labeled humans performing actions, and a total of 210k action labels.
LT-Innovate's insight:

Facebook, for one, is trying to use images to teach robots words for things. Next may come words for actions.
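
A minimal sketch of how AVA-style annotations might be consumed, assuming each row pairs a YouTube video ID and keyframe timestamp with a normalized person box, an action ID (one of the 80 atomic actions), and a per-video person ID. The column names and file name below are illustrative assumptions, not the official schema; consult the AVA release for the exact layout.

```python
# Sketch: reading AVA-style action annotations with pandas.
import pandas as pd

COLUMNS = [
    "video_id",    # YouTube video identifier
    "timestamp",   # keyframe time (seconds) within the segment
    "x1", "y1",    # top-left corner of the person box (normalized 0-1)
    "x2", "y2",    # bottom-right corner of the person box (normalized 0-1)
    "action_id",   # index into the 80 atomic action classes
    "person_id",   # links boxes of the same person within a video
]

def load_annotations(csv_path: str) -> pd.DataFrame:
    """Load AVA-style annotation rows into a DataFrame."""
    return pd.read_csv(csv_path, header=None, names=COLUMNS)

if __name__ == "__main__":
    ann = load_annotations("ava_train_sample.csv")  # hypothetical file name
    # Each person in a segment can carry several action labels, so group
    # rows to recover "one person, many actions".
    per_person = ann.groupby(["video_id", "timestamp", "person_id"])["action_id"].apply(list)
    print(per_person.head())
```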


Personalising Video News Using #NLG to Answer Questions

Earlier this year, Automated Insights co-hosted a hackathon with the Amazon Alexa team, drawing 15 teams ranging from startups to Fortune 500 companies. Using our NLG technology together with Alexa's natural language processing (NLP) and speech technology, the teams created applications that let end users receive spoken, personalized news, financial, school, weather, and other information, just by asking a question.
LT-Innovate's insight:

Video data as input to NLG applications for viewers.
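
A minimal sketch of the question-to-spoken-answer pattern described above, assuming a Lambda-style handler that receives an Alexa Skills Kit IntentRequest and returns plain-text speech. The intent name, slot, and stats lookup are hypothetical, and the templated sentence stands in for a real NLG engine such as Automated Insights'.

```python
# Sketch: an Alexa-style handler that answers a spoken question with a
# templated NLG sentence. "GetScoreIntent", the "Team" slot, and the stats
# dictionary are illustrative assumptions.
def generate_summary(team: str, stats: dict) -> str:
    """Toy NLG step: turn structured data into a natural-language sentence."""
    return (f"{team} {'won' if stats['won'] else 'lost'} "
            f"{stats['score']} to {stats['opponent_score']} "
            f"against {stats['opponent']}.")

def lambda_handler(event, context):
    """Entry point using the Alexa Skills Kit request/response JSON format."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "GetScoreIntent":
        team = request["intent"]["slots"]["Team"]["value"]
        stats = {"won": True, "score": 3, "opponent_score": 1,
                 "opponent": "the visitors"}  # stand-in for a real data feed
        text = generate_summary(team, stats)
    else:
        text = "Sorry, I did not understand that question."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```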
