"If there's one thing that we've done a terrible job of, it's driving more mainstream awareness to the awesome shit that people are doing on our platform," says Lucchese, an attorney. "Maybe it's because we're founded by two MIT PhDs and run by a lawyer. In the developer community, we're past being considered the 'smart little guys,' and we're being recognized as a best-in-class platform. Granted that it's still kind of the early days for [everybody else] to really understand what we do, but now it looks like mobile apps are changing that."
AGREE TO DISAGREE
In the late 1990s, Brian Whitman was a frustrated computer scientist and electronic musician living in New York City and performing under the name Blitter. He played regular gigs and even dropped some vinyl, but spent most of his time imagining ways to get his music in front of potential fans. At the time, he noticed that online message boards were increasingly populated by people eager to discuss — and, more often, argue about — music trends and stylings. These conversations, Whitman suspected, were more than disparate musings.
"In hindsight," he says, "this is all very obvious. But when these communities were forming, the people who were doing music recommendation and retrieval weren't looking at [blogs and message boards] — they were just looking at the audio signals. . . . I set out to prove that the more you know about a community, the more you understand peoples' preferences."
His interest in the conversation around music eventually brought Whitman to the world-renowned MIT Media Lab, where he met his philosophical nemesis and future business partner, Tristan Jehan: a soft-spoken, French-born computer scientist, amateur keyboardist, and researcher who cut his teeth at UC Berkeley's Center for New Music and Audio Technologies.
Jehan's view of music analysis was the opposite of Whitman's: he thought that relationships between songs should be derived by extracting and analyzing acoustic data from the recordings themselves. Jehan came to MIT to prove that sounds — as opposed to the worldwide dialogue about music — were the best barometers of listener taste.
"I'd been working on how to make computers better understand music," says Jehan. "Brian was looking at how computers could understand music in the context of how people speak about it on the Web." Adds Whitman: "You can't just look at the audio signals, and at the same time you can't ignore the audio — you have to know what the song sounds like, and understand the conversation around it. You need to do both."
By the time Whitman and Jehan earned their doctorates (in machine listening and media arts and sciences, respectively) in 2005, both were considered all-stars in their parallel fields. So it was fitting that they agreed to disagree and partnered to launch the Echo Nest out of a small office in the same building where the company now occupies several suites.
Using their complementary research as a foundation, they wrote programs that crawl the Internet (and streaming services like last.fm), analyzing everything from comments and discussion about songs and artists to the rhythm, harmony, and timbre of millions of actual tracks. Within two years, they'd built the most powerful interactive music database ever indexed as a single platform — an API they would come to call the "Musical Brain."
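To make the founders' hybrid idea concrete, here is a minimal, purely illustrative Python sketch of blending a text-based similarity score (word counts mined from discussion about a track, Whitman's side) with an audio-based one (a few acoustic descriptors, Jehan's side). Every track name, word count, descriptor, and function here is invented for illustration; the actual Musical Brain works on millions of crawled documents and analyzed tracks with far more sophisticated models.

    import math

    # Invented example data: each track pairs a bag of words scraped from
    # online chatter (the "cultural" signal) with a short vector of audio
    # descriptors, e.g. normalized tempo, energy, brightness (the acoustic
    # signal). A real system derives both automatically and at scale.
    TRACKS = {
        "track_a": {"words": {"shoegaze": 3, "dreamy": 2, "reverb": 4},
                    "audio": [0.42, 0.18, 0.77]},
        "track_b": {"words": {"dreamy": 1, "reverb": 2, "ambient": 3},
                    "audio": [0.40, 0.15, 0.70]},
    }

    def text_similarity(a, b):
        # Cosine similarity between word-count vectors from music chatter.
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def audio_similarity(a, b):
        # Map Euclidean distance between descriptor vectors into (0, 1].
        dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return 1.0 / (1.0 + dist)

    def hybrid_similarity(t1, t2, weight=0.5):
        # Blend the cultural and acoustic views: "you need to do both."
        return (weight * text_similarity(t1["words"], t2["words"])
                + (1.0 - weight) * audio_similarity(t1["audio"], t2["audio"]))

    print(round(hybrid_similarity(TRACKS["track_a"], TRACKS["track_b"]), 3))

The hypothetical weight parameter is the editorial knob in this sketch: push it toward 1.0 and recommendations follow the conversation; push it toward 0.0 and they follow the signal.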