Modulate This Interview with Imagine Research CEO Jay LeBoeuf


I recently had a chance to meet Jay LeBoeuf, the CEO and founder of the San Francisco-based company Imagine Research, and learn more about his past and current work. Imagine Research is working on next-generation intelligent signal processing and machine learning technologies. I thought the work was fascinating, and Jay graciously agreed to take time out of his busy schedule to share some insights on his work and the field in general. He also has some suggestions on how you can get involved with helping to solve real-world problems in the digital audio and music realm.

_____________________________________________________

Mark Mosher:  How long have you been involved in R&D work and how did you get started?

Jay LeBoeuf: I've always had a passion for music and technology – in undergrad (Cornell University), I was an electrical engineer with a minor in music, and gigged with my band on weekends.  Everything suddenly made sense when I did a Master's at CCRMA (Stanford University).  If you understand audio software and technology at its lowest levels, you have this immense appreciation for the tools that our industry uses.  You also develop this urge to make new tools and help bring new experimental technologies to market… which is how I ended up at Digi.

MM: Prior to founding Imagine Research, you were at Digidesign doing R&D on Pro Tools. What Pro Tools features that Modulate This readers might use daily did you have a hand in creating?

JL: Digi was such an amazing place and opportunity – I was one of the first team members on Pro Tools' transition from OS 9 to OS X.  I was on design and test teams for the D-Control / ICON mixing console, the HD Accel card, the integration of M-Audio into the Pro Tools product line, and Pro Tools software releases 5.1.1 through 7.4.  In my later years, I researched techniques for intelligent audio analysis – the field that I'm most excited about.

MM: Do you feel that being an independent research firm allows you to work more on the "bleeding edge" than if you were doing the research from within a company?

JL: Absolutely.  Imagine Research was founded because this "bleeding edge" technology needs a helping hand into industry.  Most companies, especially in the MI (musical instrument) space, keep their focus on incremental features, compatibility, and bug fixes – and applied research is inherently difficult and risky to productize.

The U.S. National Science Foundation has been a great partner in helping us bring innovative, high-risk-high-reward technologies to market.  We've received several Small Business Innovation Research (SBIR) grants to address the feasibility and commercialization challenges of music information retrieval / intelligent audio analysis technologies.  I encourage all entrepreneurs to look into the SBIR program.

MM:  How does Imagine Research help companies leverage emerging and disruptive technologies yet build practical solutions?

JL: Close collaborations are key during the entire technology evaluation process.  We focus on end-user problems and the workflows enabled by technology.  The solution is what's important, and we try not to geek out and use unnecessarily sophisticated technology when a simpler solution works fine.  That said, the more disruptive technologies tend to spawn new ideas, features, and products – and you need a long-term partnership to capitalize on them!

MM: According to your web site,  Imagine Research is working on a platform for “machine learning”. Can you briefly tell us what machine learning is and offer some examples of how machine learning could be applied to change how composers and sound designers create?

JL: In short, machine learning algorithms allow a computer to be trained to recognize or predict something.  One way to train a machine learning algorithm to make predictions is to provide it with lots of positive and negative examples.  You can then reinforce its behavior by correcting it, or having your end-users correct its mistakes. 
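To make that train-then-correct loop concrete, here is a minimal sketch in Python using scikit-learn. Everything in it (the feature dimensions, the labels, and the logistic-regression model) is a placeholder of my own choosing to illustrate the idea, not a description of how Imagine Research's system is built.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a feature vector computed from a short audio clip
# (spectral statistics, say); label 1 = "snare hit", 0 = "not a snare hit".
# A real system would extract these from audio; here they are random stand-ins.
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Later, end-users flag clips the model got wrong. Folding those corrected
# examples back into the training set and refitting is one simple way to
# "reinforce" the model, as described above.
X_corrections = rng.normal(size=(10, 16))
y_corrections = rng.integers(0, 2, size=10)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_corrections]),
    np.concatenate([y_train, y_corrections]),
)

print(model.predict(rng.normal(size=(1, 16))))  # predicted label for a new clip

The point of the toy example is only the workflow: label examples, train, let users correct mistakes, retrain.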

In our case, we use machine learning to enable machine hearing.  Our platform, MediaMined™,  listens to a sound and understands what it is listening to – exactly as human listeners can identify sounds.   

When software or hardware is capable of understanding what it is listening to, an enormous array of creative possibilities opens up: DAWs that are aware of each track's contents, search engines that listen to loops and sound effects and find similar-sounding content, and intelligent signal processing devices.  I'm confident that this will enable unprecedented access to content, faster and more creative workflows, and lower barriers to entry for novice musicians.
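As a rough illustration of the "search engine that listens" idea, here is a toy query-by-example search in Python: each loop is summarized as a feature vector, and similar-sounding loops are found by nearest-neighbour lookup. The random vectors and the scikit-learn NearestNeighbors index are stand-ins I chose for illustration; this is not MediaMined's actual pipeline, whose details are still under wraps.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Imagine each row summarizes one loop in a sound library (averaged MFCCs,
# tempo, spectral centroid, etc.). Random placeholders are used here.
library_features = rng.normal(size=(5000, 20))
loop_names = [f"loop_{i:04d}.wav" for i in range(5000)]

index = NearestNeighbors(n_neighbors=5).fit(library_features)

# Query by example: "give me loops that sound like this one."
query = rng.normal(size=(1, 20))
distances, neighbor_ids = index.kneighbors(query)

for dist, idx in zip(distances[0], neighbor_ids[0]):
    print(f"{loop_names[idx]}  (distance {dist:.2f})")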

MM: Are there non-musical applications for your platform?

JL: Absolutely.  Our platform was designed for sound-object recognition – so while I frequently discuss analyzing music loops, music samples, and sound effects, we can also understand any real-world sounds.  We're working on applying our techniques to video analysis, as well as exploratory projects involving biomedical signal processing (heart and breath sound analysis), security/surveillance, interactive games, and more than enough to keep us busy!

MM: How can app developers leverage your platform?

JL: While the specific platform details are still under wraps, I'd really enjoy talking with desktop, mobile, and web-based app developers.  We really welcome input at this early stage.  I'm happy to discuss at "info at imagine-research dot com".  For general information, announcements, and updates, please follow us on Twitter (@imagine-research).

MM: Imagine Research also creates "intelligent" algorithms for consumer audio and video products. Can you give us some examples of products that might be utilizing your algorithms?

JL:  Sure – check out JamLegend (think: Guitar Hero but online, free, social-networked, and it's one of the only music games where you can upload and play YOUR OWN music).  We developed the technology for users to play interactive Guitar Hero-style games with any MP3s.  So far, over 1.1 million tracks have been analyzed. 
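For readers curious how a game can build a playable chart from an arbitrary MP3, here is one simple approach sketched in Python with librosa: estimate the tempo, detect note onsets, and treat each onset as a candidate note. This is my own illustrative guess at the general technique, not JamLegend's actual analysis engine, and "my_track.mp3" is just a placeholder path.

import librosa

# Placeholder path; decoding MP3s also requires ffmpeg or audioread on the system.
audio, sr = librosa.load("my_track.mp3", mono=True)

tempo, beat_frames = librosa.beat.beat_track(y=audio, sr=sr)
onset_frames = librosa.onset.onset_detect(y=audio, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)

print("estimated tempo (BPM):", tempo)

# Each detected onset becomes a candidate "note" for the player to hit.
for t in onset_times[:10]:
    print(f"note at {t:.3f} s")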

We have a number of exciting partnerships with our MediaMined platform to be announced.  These applications directly aid musicians and creative professionals. 

MM: How do you think that the growth in cloud computing and the explosion of Smartphone processor power will change the landscape of digital audio?

JL: The most exciting thing to me is unparalleled access to content – we'll be able to access Terabytes of user-generated content, mash-ups, and manufacturer/content-provider material (loops, production music, samples, SFX),  online from any device. 

Music creation can now occur anywhere.  Smartphones provide a means to record / compose wherever and whenever the muse strikes.  With cloud-based access to every loop, sample, sound effect, and music track ever created, how do you begin to find that "killer loop" or sample in a massive cloud-based collection — and — on a mobile device?!?  Don’t worry, there’s some disruptive technology for that. 

MM: Do you have any words of advice you can give to Modulate This readers who might want to pursue a career in audio R&D?

JL: Full-time corporate R&D gigs typically require a graduate degree in music technology or music/audio signal processing (such as Stanford's CCRMA, UCSB's MAT program, NYU, etc.).  But let's talk about the most untapped resource for research: industry-academic collaboration.  The academics have boundless creativity and technical knowledge, but might not know the current real-world problems that need solving.  I'd encourage readers to reach out to professors and graduate students doing audio work that they find interesting.  Think big – the hardest problems are the ones worth solving.

____________________________________________________________

Links:

Mark Mosher
Electronic Musician, Music Tech & Technique Blogger, Boulder CO
www.MarkMosherMusic.com
www.ModulateThis.com


1 Comment

  1. Industry-academic collaboration is key. But I am finding that academia is generally too slow to move (no agility) and is approaching fatal technological latency. This latency is so bad in many cases that my desire to collaborate and bring new information to the classroom feels thwarted by the tenured power brokers inside the institutions.
    I was planning on a degree in music/audio technology, but it was clear to me that I would never learn the skills I needed/desired for a realistic professional career in the industry. That was 10 years ago. My re-interfacing with higher education now is showing me that the skills I was seeking to learn in the late 1990s are only now being introduced – that is, 10 years behind what I was realistically hoping for.
    Considering the speed of technology in our era, these avenues are having a negative impact on the industry as a whole. Rogue certifications, overpriced technical schools, and various levels of unprofessionalism in instruction and information stand to continue to flood the market with people and practices that hurt everyone in the long run.
    Great interview. I still have hope, but my focus is on collaboration that produces positive results. Just like this interview, we all need to reach out and talk more about what we are doing!
