Computational Music Analysis
New technologies and research dedicated to the analysis of music using computers. Towards Music 2.0!

Music's Data Problem May Be Depriving Artists Of Significant Revenue - hypebot

For nearly 100 years, performing rights organizations have tracked the music played on the radio, then the television, and now the internet. Their goal: to figure out who should get paid.

These organizations - ASCAP and BMI are the big ones - have traditionally relied on the radio, television, and internet music companies they monitor to report what they played and to how many people, and then cross-referenced those reports with random sampling.

 

In 2012, there is no longer a need for either of those ancient approaches. Back when I did college radio, we used to write down each song we played to submit them to these PROs, and to a great extent, that is still how they work. To borrow a phrase from the old Six Million Dollar Man television show, “we have the technology” to fix this: audio fingerprinting, which can identify every song and snippet of a song that plays on every radio station, television channel, and streaming radio company. Why guess when you can know?
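
To make the fingerprinting idea concrete, here is a minimal sketch of the spectral-peak hashing scheme such monitoring services are generally built on: find prominent spectrogram peaks, hash pairs of them, and match the hashes of a broadcast snippet against a reference database. Everything below (parameters, thresholds, function names) is illustrative, not TuneSat's or any PRO's actual system.

```python
# Minimal spectral-peak fingerprinting sketch (illustrative only; not TuneSat's
# or any PRO's actual system). Hashes pairs of spectrogram peaks so that short
# broadcast snippets can be matched against a reference database.
import numpy as np
from scipy import signal
from scipy.ndimage import maximum_filter

def fingerprint(samples, sr=11025, fan_out=5):
    """Return a list of (hash, offset_frame) pairs for one audio snippet."""
    _, _, spec = signal.stft(samples, fs=sr, nperseg=2048, noverlap=1024)
    mag = np.abs(spec)
    # Keep only local maxima that stand out from their neighbourhood.
    peaks = (mag == maximum_filter(mag, size=(15, 15))) & (mag > np.median(mag) * 5)
    freq_idx, time_idx = np.nonzero(peaks)
    order = np.argsort(time_idx)
    freq_idx, time_idx = freq_idx[order], time_idx[order]
    hashes = []
    for i in range(len(time_idx)):
        for j in range(1, fan_out + 1):      # pair each peak with a few later peaks
            if i + j >= len(time_idx):
                break
            dt = time_idx[i + j] - time_idx[i]
            if 0 < dt <= 64:
                h = hash((int(freq_idx[i]), int(freq_idx[i + j]), int(dt)))
                hashes.append((h, int(time_idx[i])))
    return hashes

# Matching: count hash collisions between a broadcast snippet and each reference
# track; many collisions at a consistent time offset identify the song.
```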

 

This is why we’ve been intrigued by TuneSat, which sets up televisions and computers and feeds their output into other computers. Those computers identify what is actually being played, rather than counting on broadcasters and webcasters to report things accurately.

I saw TuneSat’s Chris Woods explain what his company does at a MusicTech Meetup in Brooklyn last month, after which I posed a question: “Why don’t ASCAP and BMI use this technology, or simply buy TuneSat outright?” My question was met with knowing guffaws. Someone else in the audience piped up, “Where do we start?”

Woods went on to explain that those organizations are too slow, too mired in the past, and “not nimble enough.”

 

Identifying music on broadcasts would seem to be a perfect application of “big data” — analyzing all media to find the songs and pay the pipers. But to Woods, it clearly wasn’t being used properly.

“I can tell you for a fact that they have never used technology to report the use of my music on any of the broadcasts,” he said. “They have had the technology to do so since 2005, and it’s now 2012, so something’s not right here. It doesn’t take a rocket scientist to figure that out. I don’t know what the real issue is — maybe they’re too big, or slow to adapt to new technology, or maybe it represents exposing their formulas or how they collect and distribute royalties…


TechCrunch | The Echo Nest CEO On What Big Data Means To The Music Industry


(featuring a 10-min video interview)
“The Echo Nest is possibly the hottest music data company around right now. They've signed deals with Nokia, EMI, Clear Channel, Spotify, and most recently, Vevo. 

So chances are if you enjoy music, The Echo Nest has something to do with what songs you’re recommended.

Knowing this, TechCrunch's Jordan Crook couldn’t resist sitting down with CEO Jim Lucchese to chat about what the music industry will look like in the next couple of years, and how The Echo Nest may shape it.

Lucchese believes that the songs you listen to say something about your identity, and that music services have a huge problem ahead of them in the form of millions of listeners and millions of digital music titles. Being the middle man between such huge pools of information is nearly impossible without a deep understanding of the music itself.

But Lucchese believes that the real shift will come by way of understanding the listener, too. We’re getting to a point now where music can be analyzed and categorized in a number of different ways, but little is known about why someone would enjoy Nicki Minaj and Florence + The Machine at the same time. That’s what The Echo Nest is trying to figure out, and it would seem that the company is doing so ahead of the rest of the industry.”


Midemlab 2012 Finalists: Music Discovery, Recommendation, and Creation (Part 1) | Evolver.fm

Among other finalists:

 

“You know how the likes of Shazam only recognises recorded music? WatZatSong claims to recognise songs you sing or hum into your computer, notably by asking its online community.”

 

“WhoSampled.com “allows music fans to explore the DNA of their favourite music”, by tracking songs over the past thousand years, no less! Direct comparisons of, say, how Kanye West sampled Daft Punk are just a click away.”

 

“All of the finalists will present their services to an expert panel featuring SoundCloud, MOG, Sony, Music Ally and more — at midem Saturday January 28. The winners will be revealed at midem’s Visionary Monday, January 30.”


Pop Hit Prediction Algorithm Mines 50 Years of Chart-Toppers for Data

Machine-learning engineers from the University of Bristol think they might have the master equation for predicting the popularity of a song.

 

They use Echo Nest musical features (tempo, time signature, song duration, loudness, how energetic it is, etc.). Using a machine-learning algorithm, the team mined the official U.K. top-40 singles charts over the past 50 years to see how important these 23 features are to producing a hit song.

Musical style doesn’t stand still, and the weights have to be tweaked to match the era. In the ’80s, for example, low-tempo, ballad-esque musical styles were more likely to become a hit. Plus, before the ’80s, the “danceability” of a song was not particularly relevant to its hit potential.

Once the algorithm has churned out these weights it’s simply a case of mining your proposed song for these exact same features and working out whether they correspond to the trends of the time. This gives you a hit-prediction score.
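
The article doesn't spell out the model, but the workflow it describes (learn era-specific weights over roughly 23 Echo Nest-style features from chart history, then score a candidate song) maps naturally onto a simple classifier. The sketch below is an illustration under that assumption; the feature names, era windows and the top-5 / below-30 labelling are guesses, not the Bristol team's exact setup.

```python
# Hedged sketch of the described "hit potential" workflow: fit era-specific
# weights over Echo Nest-style audio features, then score a candidate song.
# Feature names, the era window and the top-5 / below-30 labelling are
# illustrative assumptions, not the Bristol team's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = ["tempo", "time_signature", "duration", "loudness", "energy",
            "danceability"]  # ... up to the 23 features used in the study

def fit_era_model(chart_rows, era_start, era_end):
    """chart_rows: dicts with the audio features plus 'year' and 'peak_position'."""
    rows = [r for r in chart_rows if era_start <= r["year"] < era_end]
    X = np.array([[r[f] for f in FEATURES] for r in rows])
    # Label: 1 = reached the top 5, 0 = never climbed above position 30.
    y = np.array([1 if r["peak_position"] <= 5 else 0 for r in rows])
    model = make_pipeline(StandardScaler(), LogisticRegression())
    return model.fit(X, y)

def hit_potential(model, song):
    """Score a proposed song against the trends of the chosen era (0..1)."""
    x = np.array([[song[f] for f in FEATURES]])
    return float(model.predict_proba(x)[0, 1])
```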

The team at Bristol found they could predict, with an accuracy rate of 60 percent, whether a song would make it into the top five or never climb above position 30 in the chart.

 

Predicting pop songs through science and algorithms has certainly been done before, with varying levels of success.

Researchers at Tel Aviv University’s School of Electrical Engineering mined the popular peer-to-peer file-sharing network Gnutella for trends, and achieved a success rate of about 30 to 50 percent in predicting the next music superstar. The secret? Geography.

Meanwhile, Emory University neuroscientists went straight to the source and looked at how teenage brains reacted to new music tracks.

 

And then there’s Hit Song Science. It uses an idea similar to Bristol University’s equation, applying algorithms to analyze the world of popular music and look for trends, styles and sounds that are popular amongst listeners. At the website Uplaya, wannabe hit-makers can upload a track and get a score. The higher the score, the better your song is.

Well, the catchier it is, at least. The algorithm gives “I Gotta Feeling” by The Black Eyed Peas a hit score of 8.9 out of 10, for example.

Bristol’s study differs from previous research because of its high accuracy rate and its time-shifting approach, which accounts for evolving musical taste. Tijl De Bie, senior lecturer in Artificial Intelligence, said: “Musical tastes evolve, which means our ‘hit potential equation’ needs to evolve as well.”

He added: “Indeed, we have found the hit potential of a song depends on the era. This may be due to the varying dominant music style, culture and environment.”


Software that spots patterns of emotional speech


Researchers are teaching computers how to spot deception in people’s speech. Cues include loudness, changes in pitch, pauses between words, ums and ahs, nervous laughs and dozens of other tiny signs that can suggest a lie.

 

A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech — talk that reflects deception, anger, friendliness and even flirtation.

 

The technology is becoming more accurate as labs share new building blocks, said Dan Jurafsky, a professor at Stanford whose research focuses on the understanding of language by both machines and humans. “The scientific goal is to understand how our emotions are reflected in our speech,” Dr. Jurafsky said. “The engineering goal is to build better systems that understand these emotions.”

 

But homing in on the finer signs of emotions is tougher. “We are constantly trying to calculate pitch very accurately” to capture minute variations, he said. His mathematical techniques use hundreds of cues from pitch, timing and intensity to distinguish between patterns of angry and non-angry speech.
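
As a rough illustration of the cue-then-classify approach described here, one could extract pitch, timing and intensity statistics from an utterance and feed them to an off-the-shelf classifier. The features and the librosa-based extraction below are assumptions made for the sketch, not any of these labs' actual pipelines.

```python
# Illustrative prosodic-feature pipeline for angry vs. non-angry speech, in the
# spirit of the cue-based approach described above (not any lab's real system).
# Assumes librosa for audio loading and pitch tracking.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def prosodic_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # pitch track
    rms = librosa.feature.rms(y=y)[0]                               # intensity
    voiced = f0[voiced_flag]
    pauses = np.mean(rms < 0.01)          # rough fraction of near-silent frames
    return np.array([
        np.mean(voiced), np.std(voiced),  # pitch level and variability
        np.mean(rms), np.std(rms),        # loudness level and variability
        pauses,
    ])

def train_emotion_classifier(paths, labels):
    """Train on labelled utterances (e.g. 'angry' / 'not angry')."""
    X = np.vstack([prosodic_features(p) for p in paths])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```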

His lab has also found ways to use vocal cues to spot inebriation, though it hasn’t yet had luck in making its computers detect humor — a hard task for the machines, he said.


System that recognizes emotions in people's voices could lead to less phone rage

An emotion-recognizing computer system has been designed to make the use of automated telephone services less stressful.

 

A team of scientists has created a computer system that is able to recognize the emotional state of a person speaking to it, so that it can alter its behavior to make things less stressful.

The system analyzes a total of 60 acoustic parameters of users' voices, including tone, speed of speech, duration of pauses and energy of the voice signal. The scientists designed the system to look in particular for negative emotions that would indicate anger, boredom, or doubt.

 

Down the road, it might be combined with a system being developed at Binghamton University that identifies computer users' emotional states by looking at their faces.


Intelligent Audio Technology | Imagine Research


Imagine Research adds a set of ears to cloud computing and mobile devices. They create software that hears, understands, and labels sounds, making media files searchable and enabling innovative workflows.


Echofi helps you find artists on Spotify that fit your tastes


Echofi uses the Echo Nest API, as does the Twitter Music Trends app. 

The Echo Nest platform is a music intelligence platform that currently has 220 apps built on top of its API and aggregates data about popular music. The company says that it has collected five million data points on thirty million songs and 1.5 million artists.
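
For context, building on the Echo Nest API at the time looked roughly like the REST call below (here for similar artists, the kind of query an app like Echofi would issue). The endpoint path, parameters and response layout are reconstructed from memory of the v4 API and should be treated as assumptions; the API key is a placeholder.

```python
# Rough sketch of an Echo Nest v4 "similar artists" request, the kind of call
# apps like Echofi were built on. Endpoint, parameters and response layout are
# recalled from the 2012-era API docs and may not be exact; the key is a placeholder.
import requests

API_KEY = "YOUR_ECHO_NEST_API_KEY"   # placeholder

def similar_artists(name, results=10):
    resp = requests.get(
        "http://developer.echonest.com/api/v4/artist/similar",
        params={"api_key": API_KEY, "name": name,
                "results": results, "format": "json"},
    )
    resp.raise_for_status()
    return [a["name"] for a in resp.json()["response"]["artists"]]

# e.g. similar_artists("Arcade Fire") -> a list of artist names to seed a playlist
```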

Music has gone social, and companies like Spotify are just getting started.


Discovering Talented Musicians on YouTube with Acoustic Analysis

We wondered if we could use acoustic analysis and machine learning to pore over videos and automatically identify talented musicians.

First we analyzed audio and visual features of videos being uploaded on YouTube. We wanted to find “singing at home” videos -- often correlated with features such as ambient indoor lighting, head-and-shoulders view of a person singing in front of a fixed camera, few instruments and often a single dominant voice. Here’s a sample set of videos we found.

Then we estimated the quality of singing in each video. Our approach is based on acoustic analysis similar to that used by Instant Mix, coupled with a small set of singing quality annotations from human raters. Given these data we used machine learning to build a ranker that predicts if an average listener would like a performance.
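
Google doesn't publish the ranker, but the recipe described (acoustic features per video plus a small set of human quality ratings) can be sketched as a pairwise ranking model: learn which of two rated videos should score higher. The pairwise linear-SVM formulation below is an assumption chosen for illustration, not the actual system.

```python
# Hedged sketch of a "would an average listener like this singing?" ranker:
# learn from pairs of human-rated videos which acoustic feature vector should
# score higher. The pairwise linear-SVM formulation is an assumption; the
# actual ranker and features are not detailed in the post.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def train_ranker(features, ratings):
    """features: (n, d) acoustic features per video; ratings: length-n human scores."""
    X_pairs, y_pairs = [], []
    for i, j in combinations(range(len(ratings)), 2):
        if ratings[i] == ratings[j]:
            continue
        X_pairs.append(features[i] - features[j])
        y_pairs.append(1 if ratings[i] > ratings[j] else -1)
    svm = LinearSVC().fit(np.array(X_pairs), np.array(y_pairs))
    return svm.coef_.ravel()          # weight vector: score = w . x

def rank_videos(weights, features):
    scores = features @ weights
    return np.argsort(scores)[::-1]   # most promising candidates first
```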

While machines are useful for weeding through thousands of not-so-great videos to find potential stars, we know they alone can't pick the next great star. So we turn to YouTube users to help us identify the real hidden gems by playing a voting game called YouTube Slam. We're putting an equal amount of effort into the game itself -- how do people vote? What makes it fun? How do we know when we have a true hit? We're looking forward to your feedback to help us refine this process: give it a try*. You can also check out singer and voter leaderboards. Toggle “All time” to “Last week” to find emerging talent in fresh videos or all-time favorites.

Our “Music Slam” has only been running for a few weeks and we have already found some very talented musicians. Many of the videos have less than 100 views when we find them.


Tonara's iPad app for musicians

"Playing a musical instrument can be challenging, especially for beginners. Trying to follow along with sheet music and having to turn the pages while playing only adds to the difficulty. Tonara has developed an iPad app that helps eliminate this burden while providing additional innovative features to assist musicians.

"This technology is called acoustic polyphonic score following," explains Yair Lavi, CEO of Tonara. "Score following is the ability, given the scores of a certain piece or song and the live performance of this piece, to follow with very high precision the exact location in the score of the performance. Acoustic means that it works with acoustic instruments like the piano or violin. It doesn't have to have a digital connection. And polyphonic means that it works with polyphonic instruments. A polyphonic signal is a signal that is composed of several notes being played together. A piano is a polyphonic instrument, and several instruments being played together is a polyphonic setting."

As a musician plays music from a score, a cursor follows his progress through the song and moves from page to page automatically. It works regardless of tempo and filters out external noise. There are currently 250 pieces of sheet music in the system, and Tonara is adding to the library at a rate of approximately 100 to 150 pieces per month.
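
Tonara's engine is proprietary, but score following in the research literature is often approached by matching live audio features against templates rendered from the score and advancing the position monotonically. The greedy chroma-matching sketch below only illustrates that general idea; it is not Tonara's method.

```python
# Generic score-following sketch (not Tonara's proprietary engine): match live
# chroma frames against chroma templates derived from the score and advance a
# cursor monotonically. A real system would use online DTW or a probabilistic
# model; this greedy local search only shows the idea.
import numpy as np
import librosa

def follow(live_audio, score_chroma, sr=22050, search=8):
    """score_chroma: (12, M) templates, one column per score event."""
    live_chroma = librosa.feature.chroma_stft(y=live_audio, sr=sr)  # (12, N)
    position = 0
    path = []
    for t in range(live_chroma.shape[1]):
        frame = live_chroma[:, t]
        # Only look a little ahead of the current position (tempo-robust, monotonic).
        window = score_chroma[:, position:position + search]
        if window.shape[1] == 0:      # reached the end of the score
            break
        sims = window.T @ frame / (
            np.linalg.norm(window, axis=0) * np.linalg.norm(frame) + 1e-9)
        position += int(np.argmax(sims))
        path.append(position)          # estimated score event per audio frame
    return path
```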

Because it works well even for beginners, it can function as a great educational tool, showing progress from playing one page correctly, to two, to three and so on. In future versions, the software will be able to display the notes that were played correctly and those that were played incorrectly, so students will have a clear picture of where they need to improve.

Musicians can download each part of a piece of music but choose which part they want to synchronize on the screen. For example, the trumpet player might choose to sync only his part, while the pianist might choose to see everything. In addition to instrumental syncing, the app also works with vocalists.

"Tonara has the only polyphonic score following technology today," says Lavi. "We actually developed it [over] a couple of years, even before the iPad. During the last year, we ported it into the iPad, and we plan to port it into different tablets. The tablet is the ideal platform for the musician, because they can place it on the piano or the music stand."


How the Ear Distinguishes Sweet Sounds From Sour Notes - ScienceNOW

"A mathematical model may explain how the nerves in your ear sense harmony, a team of biophysicists reports. The model suggests that pleasant harmonies cause neurons to fire in regular patterns whereas discordant notes stimulate messier neuron activity."

 

It seems that this journalist needs some basic music theory lessons... :-x


Apple to teach the world to sing - Telegraph

The iPod and iPhone could soon be helping to teach people how to sing along to their favourite tracks, under plans drawn up by Apple to turn the devices into mobile karaoke machines.

New Algorithm Captures What Pleases the Human Ear—and May Replace Human Instrument Tuners | 80beats | Discover Magazine

Imprecision, it turns out, is embedded in our scales, instruments, and tuning system, so pros have to adjust each instrument by ear to make it sound its best. Electronic tuners can’t do this well because there has been no known way to calculate it. Basically, it’s an art, not a science. But now, a new algorithm published in arXiv claims to be just as good as a professional tuner.

 

The new study replaces the human ear’s ability to detect “pleasingness” with an algorithm that minimizes the Shannon entropy of the sound the instrument produces. (Shannon entropy is related to the randomness in a signal, like the waveform of a sound, and is unrelated to the entropy of matter and energy.) Entropy is high when notes are out of tune, say the researchers, and it decreases as they get into tune. The algorithm applies small random changes to a note’s frequency until it finds the lowest level of entropy, which is the optimal frequency for that note. Setting tuners to follow this algorithm instead of the current, simpler formula would be a simple fix.
The paper has a graph comparing the results of human (black) and algorithmic (red) tuning as proof of the latter’s effectiveness. Not bad, but entropy-based tuning hasn’t passed the real test yet: a musician’s ear.
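
To make the described loop concrete, here is a hedged sketch: compute a Shannon entropy over a smoothed spectrum of all the notes' partials, nudge one note's frequency at random, and keep the change if the entropy drops. The partial synthesis and acceptance rule are assumptions, not the paper's exact procedure.

```python
# Illustrative entropy-minimizing tuning loop, following the description above:
# nudge note frequencies at random and keep changes that reduce the Shannon
# entropy of the combined spectrum. The partial synthesis and step sizes are
# assumptions, not the paper's exact procedure.
import numpy as np

def spectrum_entropy(freqs, n_partials=8, grid=np.linspace(20, 5000, 4096), width=2.0):
    """Shannon entropy of a smoothed magnitude spectrum of all notes' partials."""
    spec = np.zeros_like(grid)
    for f0 in freqs:
        for k in range(1, n_partials + 1):
            spec += np.exp(-0.5 * ((grid - k * f0) / width) ** 2) / k
    p = spec / spec.sum()
    return -np.sum(p * np.log(p + 1e-12))

def tune(freqs, iterations=5000, step_cents=1.0, rng=np.random.default_rng(0)):
    freqs = np.array(freqs, dtype=float)
    best = spectrum_entropy(freqs)
    for _ in range(iterations):
        i = rng.integers(len(freqs))
        trial = freqs.copy()
        trial[i] *= 2 ** (rng.normal(0, step_cents) / 1200)   # small random detune
        e = spectrum_entropy(trial)
        if e < best:                 # keep only changes that lower the entropy
            freqs, best = trial, e
    return freqs
```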


Targeted Audio Ads Based On… Human Emotion? - hypebot


“Imagine being able to target your advertising messages based on the parameters of a listener’s emotions during a particular song.

 

Moodagent is a service that combines digital signal processing and AI techniques to create music profiles that take into account characteristics such as mood, emotion, genre, style, instrument, vocals, orchestration, production, and beat / tempo. From these characteristics, playlists are created. Moodagent has an enormous database of music in the cloud, in which every track is scored on five attributes: Sensual, Tender, Happy, Angry, and Tempo.

 

Using the advertising capabilities of Mixberry Media’s audio ads technology, coupled with Moodagent’s knowledgebase of the emotional and musical aspects of songs, advertisers can now target their message to distinct emotional profiles.

 

Brands will be able to select a specific song to embody the essence of their message and, as a result, have their ads heard when the listener is enjoying other tracks with the same emotional data and characteristics – allowing advertisers to communicate the core value of their brand as they perceive it and deliver it to users when they’re in a similar mood or state of mind.”
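
The piece doesn't say how profiles are actually matched; one straightforward reading of "target ads to tracks with the same emotional data" is a nearest-profile lookup in the five-attribute space, sketched below purely as an assumption.

```python
# Hedged sketch of emotion-profile ad targeting as described above: represent
# both tracks and ad campaigns as points in the five Moodagent attributes and
# serve the campaign whose profile is closest to the track now playing.
# The distance-based matching rule is an assumption, not Mixberry/Moodagent's method.
import numpy as np

ATTRIBUTES = ["sensual", "tender", "happy", "angry", "tempo"]

def profile(scores):
    return np.array([scores[a] for a in ATTRIBUTES], dtype=float)

def pick_campaign(track_scores, campaigns):
    """campaigns: dict of campaign name -> attribute scores (same 0-1 scale)."""
    track = profile(track_scores)
    return min(campaigns,
               key=lambda name: np.linalg.norm(profile(campaigns[name]) - track))
```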


Music Tech Continues Lead Role At Midem 2012


Tech will continue its takeover of the music industry at this year's midem conference taking place January 28 - 31 in Cannes. Music tech startups, sponsors, hackers and bloggers will all be in attendance and capturing as much attention as possible.

 

This year's midem gathering seems designed to emphasize that music tech is not simply a bunch of new communication tools but a transformational force in the industry.

 

Tech-related attendees are expected to include tech companies as diverse as Amazon, Microsoft, Spotify and Facebook as well as The Echo Nest, Webdoc, Soundcloud and ReverbNation.

- Visionary Monday will feature such participants as GigaOm's Om Malik, Angry Birds creator Mikael Hed and Facebook's Dan Rose.

- midem Hack Day will gather 30 hackers for a 48-hour process of creation sponsored by BlueVia. Many will be exploring ideas suggested by midem attendees.

- midemlab will present 30 music tech startups pitching their companies and services to a panel of expert judges.

On the final day, a Bloggers' Wrap featuring Eliot Van Buskirk of Evolver.fm and Sansom Will of Contagious Communications will give the "lowdown on midem’s hottest industry trends."


TechCrunch | The Echo Nest To Power New Spotify Radio (Which Begins Rolling Out Today)


Last week, the Spotify music service announced that it was redesigning its radio experience from the ground up, offering unlimited stations and unlimited “skips”. And today, Spotify will begin officially rolling out “Radio” to its users on top of its new app platform. But what Spotify hasn’t been talking about until today is what kind of technology is powering its awesome redesigned Radio functionality.

 

Enter: The Echo Nest, a music intelligence startup whose technology powers many music apps from media companies and independent developers. The Echo Nest is now providing its music intelligence technology to power intelligent radio and radio playlisting within the new Spotify Radio app as it rolls out today across the country.

Given The Echo Nest’s relationship with app developers and record labels (it recently partnered with EMI to open its catalog to app developers), this relationship makes a lot of sense. The Echo Nest will now essentially be powering Spotify Radio, allowing users to create personalized radio stations based around songs or artists in Spotify’s roster of over 15 million tracks.

Partnering with The Echo Nest allows Spotify to enable users to build playlists dynamically around any song or artist for a far deeper radio experience than Spotify has offered previously. As The Echo Nest has one of the more sophisticated playlist engines out there, combining this playlist intelligence with Spotify’s huge catalog and deep social integration should definitely give Pandora pause.
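
Neither company details the playlisting algorithm; a common textbook approach to seed-based stations, sketched below purely as an assumption, is to walk a track-similarity space out from the seed while limiting artist repetition.

```python
# Generic seed-based station sketch (an assumption about how such playlisting
# can work, not The Echo Nest's engine): repeatedly pick the most similar
# unplayed track to the last one, while capping repeats per artist.
import numpy as np

def radio_station(seed_id, track_vectors, track_artist, length=20, max_per_artist=2):
    """track_vectors: dict id -> feature vector; track_artist: dict id -> artist name."""
    playlist = [seed_id]
    artist_count = {track_artist[seed_id]: 1}
    while len(playlist) < length:
        last = track_vectors[playlist[-1]]
        best, best_sim = None, -np.inf
        for tid, vec in track_vectors.items():
            if tid in playlist or artist_count.get(track_artist[tid], 0) >= max_per_artist:
                continue
            sim = np.dot(last, vec) / (np.linalg.norm(last) * np.linalg.norm(vec) + 1e-9)
            if sim > best_sim:
                best, best_sim = tid, sim
        if best is None:
            break
        playlist.append(best)
        artist_count[track_artist[best]] = artist_count.get(track_artist[best], 0) + 1
    return playlist
```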


Computer System Recognizes Human Emotion [VIDEO]

A team of scientists has created a computer system that can recognize human emotion as part of voice recognition.

 

Scientists have created a computer system that attempts to recognize human emotions such as anger and impatience by analyzing the acoustics of one’s voice. Such a system would have obvious implications for perennially frustrating interactive voice response systems, but could be applied to other areas as well.

 

Learn more about the technology in the video.


BBC World discusses MediaMined, sound recognition, and audio search

How do you teach computers to recognise and classify over a million different sounds, often unrecognised and unlabelled before? Click talks to Jay LeBoeuf about sonic search engines. Instead of typing a search term in and seeing a load of returns in text, you could instead play in a sound or tune and it would find you sounds that either match it or resemble it. Jay LeBoeuf discusses how his technology might come to the aid of musicians and filmmakers especially.

 

(starts at 6:45)


Sound (and music), Digested


Audio engineers have developed a novel artificial intelligence system for understanding and indexing sound, a unique tool for both finding and matching previously unlabeled audio files. Imagine Research of San Francisco, Calif., is now releasing MediaMined™ for applications ranging from music composition to healthcare.


"MediaMinedTM adds a set of ears to cloud computing," says Imagine Research's founder and CEO Jay LeBoeuf. "It allows computers to index, understand and search sound--as a result, we have made millions of media files searchable."

For recording artists and others in music production, MediaMined™ enables quick scanning of a large set of tracks and recordings, automatically labeling the inputs.

"It acts as a virtual studio engineer," says LeBoeuf, as it chooses tracks with features that best match qualities the user defines as ideal. "If your software detects male vocals," LeBoeuf adds, "then it would also respond by labeling the tracks and acting as intelligent studio assistant--this allows musicians and audio engineers to concentrate on the creative process rather than the mundane steps of configuring hardware and software."


The technology uses three tiers of analysis to process audio files. First, the software detects the properties of the complex sound wave represented by an audio file's data. The raw data contains a wide range of information, from simple amplitude values to the specific frequencies that form the sound. The data also reveals more musical information, such as the timing, timbre and spatial positioning of sound events.

In the second stage of processing, the software applies statistical techniques to estimate how the characteristics of the sound file might relate to other sound files. For example, the software looks at the patterns represented by the sound wave in relation to data from sound files already in the MediaMined™ database, the degree to which that sound wave may differ from others, and specific characteristics such as component pitches, peak volume levels, tempo and rhythm.

In the final stage of processing, a number of machine learning processes and other analysis tools assign various labels to the sound wave file and output a user-friendly breakdown. The output delineates the actual contents of the file, such as male speech, applause or rock music. The third stage of processing also highlights which parts of a sound file represent which components, such as when a snare drum hits or when a vocalist starts singing lyrics.
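
Read back as a pipeline, the three stages might be prototyped along the following lines; the feature choices, nearest-neighbour comparison and classifier are stand-ins for MediaMined's proprietary stages, not the product itself.

```python
# Heavily simplified three-stage pipeline in the spirit of the description above
# (MediaMined itself is proprietary; every concrete choice here is an assumption).
import numpy as np
import librosa
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

def stage1_features(path):
    """Stage 1: low-level properties of the waveform (timbre, loudness, tempo)."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1),
                      rms.mean(), rms.std(), tempo])

def stage2_neighbours(feature_db, query, k=5):
    """Stage 2: statistically relate the new file to files already indexed."""
    nn = NearestNeighbors(n_neighbors=k).fit(feature_db)
    dist, idx = nn.kneighbors(query.reshape(1, -1))
    return idx[0], dist[0]

def stage3_labels(classifier: RandomForestClassifier, query):
    """Stage 3: machine-learned labels, e.g. 'male speech', 'applause', 'rock music'."""
    return classifier.predict(query.reshape(1, -1))[0]
```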


One of the key innovations of the new technology is the ability to perform sound-similarity searches. Now, when a musician wants a track with a matching feel to mix into a song, or an audio engineer wants a slightly different sound effect to work into a film, the process can be as simple as uploading an example file and browsing the detected matches.

"There are many tools to analyze and index sound, but the novel, machine-learning approach of MediaMinedTM was one reason we felt the technology could prove important," says Errol Arkilic, the NSF program director who helped oversee the Imagine Research grants. "The software enables users to go beyond finding unique objects, allowing similarity searches--free of the burden of keywords--that generate previously hidden connections and potentially present entirely new applications."


While new applications continue to emerge, the developers believe MediaMined™ may aid not only with new audio creation in the music and film industries, but also help with other, more complex tasks. For example, the technology could be used to enable mobile devices to detect their acoustic surroundings and enable new means of interaction. Or, physicians could use the system to collect data on such sounds as coughing, sneezing or snoring and not only characterize the qualities of such sounds, but also measure duration, frequency and intensity. Such information could potentially aid disease diagnosis and guide treatment.

"Teaching computers how to listen is an incredibly complex problem, and we've only scratched the surface," says LeBoeuf. "We will be working with our launch partners to enable intelligent audio-aware software, apps and searchable media collections."


EMI Music opens its catalog up to developers to create apps for its artists

The initiative provides access to music from the famous Blue Note Records jazz label, and a catalog of thousands of songs from acts such as Culture Club, Shirley Bassey and The Verve. Developers will be able to make use of The Echo Nest’s vast database of information about songs, from simple things like tempo and genre, to complex data about the ‘mood’ of songs. There’s also access to dynamic playlist APIs, open source audio fingerprinting, audio analysis, and remix software.


Instant Mix for Music Beta by Google

Instant Mix uses machine hearing to characterize music attributes such as timbre, mood and tempo.

 

Music Beta by Google allows users to stream their music collections from the cloud to any supported device, including a web browser. It’s a first step in creating a platform that gives users a range of compelling music experiences. One key component of the product, Instant Mix, is a playlist generator developed by Google Research. Instant Mix uses machine hearing to extract attributes from audio which can be used to answer questions such as “Is there a Hammond B-3 organ?” (instrumentation / timbre), “Is it angry?” (mood), “Can I jog to it?” (tempo / meter) and so on. Machine learning algorithms relate these audio features to what we know about music on the web, such as the fact that Jimmy Smith is a jazz organist or that Arcade Fire and Wolf Parade are similar artists. From this we can predict similar tracks for a seed track and, with some additional sequencing logic, generate Instant Mix playlists from songs in a user’s locker.

Because we combine audio analysis with information about which artists and albums go well together, we can use both dimensions of similarity to compare songs. If you pick a mellow track from an album, we will make a mellower playlist than if you pick a high energy track from the same album.
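
A minimal way to read "both dimensions of similarity" is a weighted blend of an audio-feature similarity and an artist/album co-occurrence similarity, as sketched below; the blend and its weight are assumptions, not Google's Instant Mix formula.

```python
# Minimal reading of "both dimensions of similarity" as a weighted blend of an
# audio-feature similarity and an artist/album co-occurrence similarity. The
# blend and the 0.5 weight are assumptions, not Google's Instant Mix formula.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def combined_similarity(seed, candidate, audio_vecs, artist_vecs, alpha=0.5):
    """audio_vecs: track id -> audio feature vector (timbre, mood, tempo, ...);
    artist_vecs: track id -> embedding of which artists/albums go well together."""
    audio_sim = cosine(audio_vecs[seed], audio_vecs[candidate])
    cultural_sim = cosine(artist_vecs[seed], artist_vecs[candidate])
    return alpha * audio_sim + (1 - alpha) * cultural_sim

def instant_mix_playlist(seed, candidates, audio_vecs, artist_vecs, length=25):
    ranked = sorted(candidates,
                    key=lambda c: combined_similarity(seed, c, audio_vecs, artist_vecs),
                    reverse=True)
    return [seed] + ranked[:length - 1]
```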


Web Audio API


"The Web Audio API introduces a variety of new audio features to the web platform. It can be used with the canvas 2D and WebGL 3D graphics APIs for creating a new generation of games and interactive applications. The API is capable of dynamically positioning/spatializing and mixing multiple sound sources in three-dimensional space. It has a powerful modular routing system, supporting effects, a convolution engine for room simulation, multiple sends, submixes, etc. Scheduled sound playback is provided for musical applications requiring a high degree of rhythmic precision. Realtime analysis / visualizer support and direct JavaScript processing is also supported."


TechCrunch | Tonara’s iPad App Looks To Reinvent Sheet Music For The Digital Age


"If you’ve ever played an instrument, there’s a very good chance you’re familiar with the annoyances involved with sheet music. For one, it can be tough to find the piece you’re looking for, and then there’s the matter of actually playing the piece — every few stanzas, you find yourself pausing to flip to the next page (or you have to recruit a friend or family member to turn the pages for you.

Now Tonara thinks it has a fix: it’s launching a new iPad application that will display your sheet music, then listen to which notes you’re playing, turning the page automatically at exactly the right moment.

And the app appears to be quite impressive from a technical perspective — it supports polyphonic note recognition (so it could recognize notes being played by both your left and right hand simultaneously on a piano, for example), and can adjust to tempo and missed notes. The app will also ignore ambient noise, and it’s possible to set up multiple iPads with multiple instruments playing, and not have them interfere with each other."


How to process a million songs in 20 minutes

"The recently released Million Song Dataset (MSD) is huge (around 300 gb) which is more than most people want to download. Processing it in a traditional fashion, one track at a time, is going to take a long time. Luckily there are some techniques such as Map/Reduce that make processing big data scalable over multiple CPUs. In this post I shall describe how we can use Amazon’s Elastic Map Reduce to easily process the million song dataset."

 

NOTE: Such a parallel-for loop is also available in the Matlab Parallel Computing Toolbox, which can be used with MIRtoolbox.
