Data is becoming increasingly important in the music industry. Many streaming services rely on it to power their music recommendations and some companies are even using data to try to predict the next big thing in music. However, with something as personal as music, is more reliance on data always a good thing?
When I log on to Spotify, it recommends I listen to T. Rex because I listened to Marc Bolan. Bolan was the frontman of T. Rex, so the suggestion is redundant. I roll my eyes and click away.
Recommendation algorithms increasingly suggest everything we may want to watch, read or buy: who to follow on Twitter, which New York Times article to read next, what home goods to order from Amazon.
What happens when that logic is applied to something as personal, unexplainable and previously unquantifiable as music?
Data is trendy right now, and the music industry is catching on. Samsung just launched a mobile personalized radio app called Milk, Lyor Cohen is tapping Twitter metrics, Gracenote is analyzing BitTorrent data and Warner Music inked a label deal with Shazam. A forthcoming Cone speaker promises to really get to know you by using contextual information like what room you are in and the time of day to tell you exactly what it thinks you want to hear.
It’s not a totally new concept. Pandora’s Music Genome Project launched in 2000 to analyze and catalog a web of musical attributes. Apple launched its Genius feature in iTunes in 2008, using purchase history to recommend what you might like. Most digital music services today come with suggested artists and some sort of autoplay function — and most of those services are, or were, powered by Echo Nest data. But as the T. Rex example shows, music recommendation is not an exact science yet.
In a conversation after his presentation, Echo Nest co-founder Brian Whitman likened what the company does to Google search: “It’s a way to browse and discover things. It’s like a Google search — we’re not forcing anything on you, it’s a way to explore.”
Since streaming services like Beats, Spotify and Rdio offer essentially the same catalog of music, with a few notable exceptions, the real differentiators between services are going to be the user experience and the quality of those recommendations.
Most casual music listeners want an easy button, like a radio dial, that offers a finite number of options rather than a seemingly infinite flood of choice. As the Echo Nest’s director of developer platforms Paul Lamere said at SXSW: “You have to find a way to engage the people that are going to be intimidated by a search box [sitting] in front of 30 million songs.” Humans tend to lean back and trust what a computer suggests as correct; that’s why three-fourths of all viewer choices on Netflix come from the recommendations on the home screen.
“Taste profile is a huge portion of the Echo Nest business these days,” Whitman said. “We are tracking tens of millions of people’s music listening history on our systems. This powers all of our personalization, so when you log on to one of the services that use the Echo Nest Taste Profile, they’ll know about you right away. They’ll say, ‘Well, I know this person, what kind of music they like and what kind of stuff they want to listen to in the morning.’ If sometimes they listen to kids’ music, that’s a different thing than listening to metal music at another time of day.”
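The idea Whitman describes — a listening history bucketed by time of day, so kids’ music in the morning doesn’t bleed into metal at night — can be sketched in a few lines. This is a hypothetical toy model, not the actual Echo Nest Taste Profile API; the class and method names here are invented for illustration.

```python
from collections import Counter, defaultdict
from datetime import datetime

def time_bucket(hour):
    """Map an hour (0-23) to a coarse part of the day."""
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    return "evening"

class TasteProfile:
    """Toy taste profile: per-time-of-day genre counts (illustrative only)."""

    def __init__(self):
        # bucket name -> Counter of genres played during that bucket
        self.history = defaultdict(Counter)

    def log_play(self, genre, played_at):
        self.history[time_bucket(played_at.hour)][genre] += 1

    def suggest_genre(self, now):
        # Recommend the genre this listener plays most at this time of day
        counts = self.history[time_bucket(now.hour)]
        if not counts:
            return None
        return counts.most_common(1)[0][0]

profile = TasteProfile()
profile.log_play("children's", datetime(2014, 4, 1, 8))
profile.log_play("children's", datetime(2014, 4, 2, 8))
profile.log_play("metal", datetime(2014, 4, 1, 21))

profile.suggest_genre(datetime(2014, 4, 3, 8))   # morning -> "children's"
profile.suggest_genre(datetime(2014, 4, 3, 22))  # evening -> "metal"
```

A production system would weight recency, blend in collaborative signals across millions of listeners and smooth over sparse buckets, but the core contextual idea — the same listener is effectively different people at different hours — is just this lookup.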
With all this data to crunch and tastes to triangulate, what about human recommendation?
Though there is no data point for musical serendipity, there may be a sweet spot between rockism (humans) and technological determinism (computers) when considering algorithmic discovery. It can be easy to frame the choice as human versus machine, but those algorithms were created by humans, for humans. A computer-generated recommendation is not necessarily always better than a human’s, but perhaps better than a human mind alone. People ultimately care about the music, not the technology that delivers it.