cross pond high tech
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST

Amazon Alexa scientists find ways to improve speech and sound recognition

How do assistants like Alexa discern sound? The answer lies in two Amazon research papers scheduled to be presented at this year’s International Conference on Acoustics, Speech, and Signal Processing in Aachen, Germany. Ming Sun, a senior speech scientist in the Alexa Speech group, detailed them this morning in a blog post.

“We develop[ed] a way to better characterize media audio by examining longer-duration audio streams versus merely classifying short audio snippets,” he said, “[and] we used semisupervised learning to train a system developed from an external dataset to do audio event detection.”

The first paper addresses the problem of media detection — that is, recognizing when voices captured from an assistant originate from a TV or radio rather than a human speaker. To tackle this, Sun and colleagues devised a machine learning model that identifies certain characteristics common to media sound, regardless of content, to delineate it from speech.

Philippe J DEWOST's insight:

Alexa, listen to me, not the TV!

Scooped by Philippe J DEWOST

MIT's Cheetah robot moves by feel to approximate how humans and other animals navigate - without any visual sensor

In a turn away from vision, a team at MIT has created a feline robot that attempts to better approximate how humans and animals actually move, navigating stairs and uneven surfaces guided only by sensors on its feet.

Why it matters: Many ambulatory robots rely on substantial recent improvements in computer vision, like advanced cameras and lidar. But robots will be nimbler and will interact more practically with humans with the addition of "blind" vision — a sixth sense of feeling for their surroundings that most living things have.

What's going on: Computer vision alone can result in a robot with slow and inaccurate movements, says MIT's Sangbae Kim, designer of the Cheetah 3.

  • "People start adding vision prematurely and they rely on it too much," Kim tells Axios. Vision, he says, is best suited for big-picture planning, like registering where a stairway begins and knowing when to turn to avoid a wall. So his team built a "blind" version in order to focus on tactile sensing.

How the blind version works: Two algorithms help the Cheetah stay upright when it encounters unexpected obstacles (a toy sketch follows the list below).

  • One determines when the bot plants its feet, by calculating how far a leg has swung, how much force the leg is feeling, and where the ground is.
  • The other governs how much force the robot should apply to each leg to keep its balance, based on the angle of the robot's body relative to the ground.
  • The controllers can also compensate for external forces, like a researcher's friendly kick from the side.
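
Those two rules can be made concrete with a toy sketch. The thresholds, gains, and sign conventions below are invented for illustration and are not MIT's controller; they just show how a foot plant can be detected by fusing three cheap signals, and how balancing reduces to redistributing force across four legs.

```python
import numpy as np

def expects_contact(swing_phase, leg_force, foot_height, est_ground_height):
    """Foot-plant detection: fuse how far the leg has swung (0..1),
    how much force the leg is feeling (N), and where the controller
    believes the ground is (m)."""
    phase_cue = swing_phase > 0.9                          # swing nearly complete
    force_cue = leg_force > 20.0                           # sudden load on the leg
    ground_cue = foot_height <= est_ground_height + 0.01   # within ~1 cm of ground
    # Require two of three cues so one noisy signal (say, a mid-swing bump)
    # does not trigger a premature foot plant.
    return phase_cue + force_cue + ground_cue >= 2

def balance_forces(body_pitch, body_roll, weight_n=400.0, gain=150.0):
    """Balance control: decide how hard each of the four legs should push,
    based on the body's angle relative to the ground (radians)."""
    forces = np.full(4, weight_n / 4.0)   # [FL, FR, RL, RR]: even share of ~90 lb
    # Push harder with the legs the body is tipping toward; the same rule
    # absorbs external shoves such as that friendly kick from the side.
    forces += gain * body_pitch * np.array([-1.0, -1.0, 1.0, 1.0])
    forces += gain * body_roll * np.array([-1.0, 1.0, -1.0, 1.0])
    return np.clip(forces, 0.0, None)     # legs can push, never pull
```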

The result is a quick, balanced robot: The researchers measure the force on each of the Cheetah's legs straight from the motors that control them, allowing it to move fast — at 3 meters per second, or 6.7 miles an hour — and jump up onto a table from a standstill. These tricks make the 90-pound bot look surprisingly nimble.

Cheetah's design emphasizes "sensors that you and I take for granted," said Noah Cowan, director of the LIMBS robotics lab at Johns Hopkins University.

  • Humans unconsciously keep track of where their arms and legs are — and the forces acting on them — to help stay balanced and move smoothly. MIT’s Cheetah “feels” its legs in a similar way.

The Cheetah's capabilities resemble some of the robots produced by the ever-secretive Boston Dynamics, which in May released a video of its four-legged SpotMini navigating autonomously through its lab with the help of cameras.

  • It's not clear whether Boston Dynamics' robots use tactile technology like Kim's, and the company did not respond to an email.
Philippe J DEWOST's insight:

It "looks" like machine vision is not necessarily mandatory when it comes to designing efficient "walking" machines.

Scooped by Philippe J DEWOST

Babel phish: In which languages are internet passwords easiest to crack?


Despite entreaties not to, many people choose rather predictable passwords to protect themselves online. "12345"; "password"; and the like are easy to remember but also easy for attackers to guess, especially with programs that automate the process using lists ("dictionaries") of common choices. Cambridge University computer scientist Joseph Bonneau has recently published an analysis of the passwords chosen by almost 70m (anonymised) Yahoo! users. One interesting result is shown below. The chart shows what percentage of accounts could be cracked after 1,000 attempts using such a dictionary. Amateur linguists can have fun speculating on why the Chinese do so well and the Indonesians do not. But one particularly interesting twist is how little difference using language-specific dictionaries makes. It is possible to crack roughly 4% of Chinese accounts using a Chinese dictionary; using a generic dictionary containing the most common terms from many languages, that figure drops only slightly, to 2.9%. Speakers of every language, it seems, have fairly similar preferences.
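
The arithmetic behind these figures is worth spelling out: if you rank passwords by frequency, the share of accounts cracked after 1,000 attempts is simply the combined probability mass of the 1,000 most common choices. A minimal sketch with a made-up sample (the real numbers come from Bonneau's roughly 70m-account Yahoo! dataset):

```python
from collections import Counter

def crack_rate(passwords, guesses=1000):
    """Fraction of accounts broken by trying the `guesses` most common
    passwords in the sample (an optimal-dictionary upper bound)."""
    counts = Counter(passwords)
    covered = sum(n for _, n in counts.most_common(guesses))
    return covered / len(passwords)

# Toy sample: 1,000 accounts, three predictable passwords, the rest unique.
sample = (["123456"] * 50 + ["password"] * 30 + ["qwerty"] * 20
          + [f"unique{i}" for i in range(900)])
print(f"{crack_rate(sample, guesses=3):.1%} cracked after 3 guesses")  # 10.0%
```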
Scooped by Philippe J DEWOST

The first AI-generated textbook shows how robot writers are good at compiling peer-reviewed research papers

Academic publisher Springer Nature has unveiled what it claims is the first research book generated using machine learning.

The book, titled Lithium-Ion Batteries: A Machine-Generated Summary of Current Research, isn't exactly a snappy read. Instead, as the name suggests, it's a summary of peer-reviewed papers published on the topic in question. It includes quotations, hyperlinks to the works cited, and automatically generated references. It's also available to download and read for free if you have any trouble getting to sleep at night.

“a new era in scientific publishing”

While the book’s contents are soporific, the fact that it exists at all is exciting. Writing in the introduction, Springer Nature’s Henning Schoenenberger (a human) says books like this have the potential to start “a new era in scientific publishing” by automating drudgery.

Schoenenberger points out that, in the last three years alone, more than 53,000 research papers on lithium-ion batteries have been published. This represents a huge challenge for scientists who are trying to keep abreast of the field. But by using AI to automatically scan and summarize this output, scientists could save time and get on with important research.

“This method allows for readers to speed up the literature digestion process of a given field of research instead of reading through hundreds of published articles,” writes Schoenenberger. “At the same time, if needed, readers are always able to identify and click through to the underlying original source in order to dig deeper and further explore the subject.”
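
Springer Nature has not published the pipeline itself, but the core move, extractive summarization that keeps pointers back to the underlying papers, can be sketched in a few lines. The TF-IDF sentence scoring below is a generic stand-in assumed for illustration, not the book's actual method:

```python
import math
import re
from collections import Counter

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def summarize(papers, top_k=3):
    """Pick the top_k most salient sentences across paper abstracts,
    keeping a pointer back to the source so readers can click through.
    `papers` is a list of (source_url, abstract_text) pairs."""
    sentences = [(url, s) for url, text in papers for s in split_sentences(text)]
    docs = [Counter(re.findall(r"[a-z']+", s.lower())) for _, s in sentences]
    n = len(docs)
    doc_freq = Counter(w for d in docs for w in d)  # sentences containing each word
    def tfidf(d):
        # Rewards terms frequent in this sentence but rare across the corpus,
        # a crude proxy for salience.
        return sum(tf * math.log(n / doc_freq[w]) for w, tf in d.items())
    ranked = sorted(zip(sentences, docs), key=lambda pair: tfidf(pair[1]), reverse=True)
    return [(s, url) for (url, s), _ in ranked[:top_k]]
```

Keeping the (sentence, source URL) pairing is the load-bearing design choice: it is what lets a reader "click through to the underlying original source," as Schoenenberger puts it.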

Although the recent boom in machine learning has greatly improved computers' capacity to generate the written word, the output of these bots is still severely limited. They can't yet match the long-term coherence and structure that human writers produce, so endeavors like AI-generated fiction or poetry tend to be more about playing with formatting than creating compelling reading that's enjoyed on its own merits.

Philippe J DEWOST's insight:

Artificial Intelligence can now write research books. When shall we expect a book about #AI itself?

Scooped by Philippe J DEWOST

In Changing News Landscape, Even Television is Vulnerable

From Pew Research Center — Trends in News Consumption: 1991-2012

While traditional news platforms have lost audience, online news consumption has been undergoing major changes as well. Nearly one-in-five Americans (17%) say they got news on a mobile device yesterday, with the vast majority of these people (78%) getting news on their cell phone. Among smartphone owners, nearly a third (31%) got news yesterday on a mobile device.

The second major trend in online news consumption is the rise of news on social networks. Today, 19% of the public says they saw news or news headlines on social networking sites yesterday, up from 9% two years ago. And the percentage regularly getting news or news headlines on these sites has nearly tripled, from 7% to 20%.
