Appearance on the Gig Gab Podcast

I was fortunate to get an invitation to chat with Paul and Dave over at the Gig Gab Podcast yesterday, and we had a great time talking about Capo, by-ear learning, and some of the challenges I face as an independent developer working on an app that is partially powered by Machine Learning.

Gig Gab is a great show that’s aimed at working musicians, and is definitely worthy of a subscription in your podcast player of choice. Be sure to check it out!


Thanks for sharing your Capo journey on Gig Gab; I really enjoyed your story.
I was floored when you explained that you trained the ML models in Capo yourself; that can be super tedious. I was thinking it was a third-party product. Nice work!


Tedious is an understatement, especially in my case! :slight_smile:

I’ve written many thousands of lines of code that aren’t shipped alongside Capo, purely to build my own custom training environment that is tailored to run specifically (and fast!) on Apple’s hardware and SDKs.

One of the key elements in my system is that the “front-end” components (the stuff that chews up the incoming audio and sets it up for the neural network) and the “back-end” components (the stuff that spits out chords) are shared between the training environment and the Capo application code.

Why’s that important? Spending hours (days, really) optimizing my training environment to process thousands of hours of audio in a couple of minutes is not wasted effort. This is one of the many reasons why Capo’s chord recognition isn’t just accurate [1], but super duper fast: most songs finish chord detection in less than a second on most modern iPhones!
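
If it helps to picture the “shared front-end and back-end” idea, here’s a minimal Swift sketch. All of the names and the toy feature step are hypothetical, not Capo’s actual code; the point is only the structure: one module defined once, then linked into both the training tool and the shipping app.

```swift
import Foundation
import Accelerate

// Hypothetical shared "front-end": chews up raw audio into features the
// neural network can consume. Training and the app use this SAME code.
struct AudioFrontEnd {
    func features(from samples: [Float]) -> [Float] {
        // Toy normalization standing in for the real feature pipeline:
        // subtract the mean so the network sees zero-centered input.
        var mean: Float = 0
        vDSP_meanv(samples, 1, &mean, vDSP_Length(samples.count))
        var negMean = -mean
        var centered = [Float](repeating: 0, count: samples.count)
        vDSP_vsadd(samples, 1, &negMean, &centered, 1, vDSP_Length(samples.count))
        return centered
    }
}

// Hypothetical shared "back-end": turns the network's output into a chord label.
struct ChordBackEnd {
    let labels: [String]
    func chord(from networkOutput: [Float]) -> String {
        let best = networkOutput.enumerated().max { $0.element < $1.element }!
        return labels[best.offset]
    }
}

// Both the training environment and the app depend on these two types, so the
// audio processing used during training matches what runs on users' devices.
```

Because both build targets depend on the same types, there’s no drift between what the model sees at training time and what it sees at run time, and any speed work done on that shared code pays off in both places.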

It took a lot of time, but the results speak for themselves.


  1. Sure, there are still songs that cause trouble, but it’s almost always because the recordings are so far out of tune. ↩︎
