From flamenco rhythms to decoding stock market patterns, Nadine Kroher’s expertise is wide-ranging. Her unique background serves her well in her role as Co-Head of Research and Development at Pyrsos, TMC²’s in-house AI Lab.

She specialises in machine learning and pattern recognition and uses her skills to apply fresh thinking to a variety of use cases for artificial intelligence (AI).

1st Feb 2024

Explore Nadine’s unique path into content-based music analysis and her interdisciplinary role at TMC² and Pyrsos

From Beats to Bytes: Nadine Kroher’s Journey from Flamenco to Fintech at TMC²

What kind of path did you take into content-based music analysis and working for TMC² and Pyrsos?

I was always interested in music, but I wasn’t a particularly talented musician. I was also interested in engineering, so I tried to find something that married the two. I did a bachelor’s and a master’s in electrical and audio engineering, working on what was then known as signal processing for audio and music. I then obtained a second MSc, in Sound and Music Computing, in which I focused on the computational analysis of music.

In 2015, I embarked on a PhD in Applied Mathematics at the University of Seville in Spain, on a project involving signal processing and machine learning. The thesis focused on the computational analysis of flamenco music and music information retrieval, and I designed an algorithm to transcribe flamenco singing. In music information retrieval you can focus on tasks like detecting beats and bars, segmenting tracks, or detecting mood and genre. I looked at these tasks for flamenco, where the technology built for pop music doesn’t transfer well - it’s a completely different philosophy of music, and even the definition of genre is complex. As I immersed myself in this research, I found it similar to jazz: the more you understand it, the more you enjoy it!
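
For readers curious what that transcription step looks like in practice, here is a minimal sketch of frame-wise pitch estimation using the open-source librosa library - a generic illustration with assumed parameters and a placeholder file name, not the algorithm from the thesis.

```python
# Toy illustration of the pitch-estimation step behind singing
# transcription, using librosa's pYIN tracker. Not the thesis algorithm;
# the audio file and frequency range are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("flamenco_clip.wav", mono=True)  # placeholder file

# Frame-wise fundamental frequency over a typical vocal range
f0, voiced, _ = librosa.pyin(
    y, sr=sr,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
)

# Snap voiced frames to the nearest semitone - a crude first pass
# towards a note-level transcription of the melody
midi_notes = np.round(librosa.hz_to_midi(f0[voiced]))
print(midi_notes[:20])
```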

When I finished my PhD in 2018, I started working for Mashtraxx - one of TMC²’s portfolio companies, which uses artificial intelligence for audio and music editing - and built machine learning models for music analysis. From there, I branched out into other kinds of projects in the TMC² portfolio. Some of my work with DAACI, which develops next-gen smart AI creative music tools, is similar to the mix of tools I used for my PhD: detecting the presence of instruments in recordings and how they interact with each other.

“As I immersed myself in this research, I found it similar to jazz: the more you understand it, the more you enjoy it!”

Nadine Kroher | Co-Head of Research and Development at Pyrsos

During your time at TMC²/Pyrsos, what kinds of projects have been the most intriguing?

We have built a lot of recommendation engines - in the broadest sense. I find the content feeds for apps really interesting because there will never be one recommendation engine to rule them all - it’s just not going to happen. It’s a very individual, customised job because it depends on so many factors: the user base, the type of content, and how users interact with that content. If it’s a music platform, the recommendation engine will function differently from one for text or images.

I find fintech really interesting, which I would never have expected coming from my music background. For Sigma Financial AI, which provides traders with a set of AI tools, we’re working on algorithms that detect patterns in stock market data and candlestick charts, which show the highs and lows of a stock over a given period. This is interesting because it’s not a straightforward machine learning problem; it’s more of a signal processing problem - and not an easy one. The Pyrsos Lab team and I treated the stock data as if it were a signal coming from a sensor or instrument, and I think my signal processing background helped to provide new ideas and a different way of approaching the problem.
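
As a flavour of that framing - and purely as a toy example, not Sigma Financial AI’s actual method - a price series can be treated like any noisy one-dimensional sensor signal: denoise it, then locate the swing highs and lows that chart patterns are built from. The data, filter settings and thresholds below are all assumptions for illustration.

```python
# Toy sketch: treat a close-price series as a noisy 1-D signal, smooth it,
# and extract swing highs/lows as candidate anchor points for
# chart-pattern detection. Synthetic data; not a trading system.
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(42)
close = 100 + np.cumsum(rng.normal(0, 1, 500))  # synthetic daily closes

# Denoise exactly as you would a sensor reading
smooth = savgol_filter(close, window_length=21, polyorder=3)

# Local extrema of the smoothed curve become swing highs and lows
highs, _ = find_peaks(smooth, distance=10)
lows, _ = find_peaks(-smooth, distance=10)

print(f"{len(highs)} swing highs and {len(lows)} swing lows detected")
```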

What about the most impactful projects? 

We also have many projects that relate to music, such as similarity engines that can find a track that sounds like a particular song. That’s where our core intellectual property sits, and I think it’s important for the world of music because most music streaming revolves around a very small share of the available music. This means we’re all listening to the same stuff, while there’s a long tail of music that has never been heard. What we do targets applications where you don’t just get the music that everyone else listens to - being able to discover new music is really important.
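
For intuition only - this is a deliberately naive sketch, not the proprietary similarity engine described above - one common approach is to reduce each track to a feature vector and rank a catalogue by cosine similarity. The file names and the feature choice (mean MFCCs) are placeholders.

```python
# Naive content-based similarity: reduce each track to a mean-MFCC
# "timbre fingerprint" and rank by cosine similarity.
# Not the proprietary engine; file paths are placeholders.
import numpy as np
import librosa

def embed(path: str) -> np.ndarray:
    """Crude timbre fingerprint: the mean MFCC vector of a track."""
    y, sr = librosa.load(path, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("query_track.wav")
catalogue = ["track_a.wav", "track_b.wav", "track_c.wav"]

# Most similar-sounding tracks first
ranked = sorted(catalogue, key=lambda p: cosine(query, embed(p)), reverse=True)
print(ranked)
```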

As someone who has co-authored and developed the technology for multiple patents, what advice would you give to aspiring investors in the field?

It makes sense for investors to have technical knowledge, because we are in a world where technology is very advanced. In machine learning, for example, a lot of things look impressive even when the product doesn’t contain any real technological innovation. It is very hard to distinguish something that is truly innovative from something that can be built relatively quickly and replicated by pretty much anyone; knowing the difference takes skill and expertise.

What advice would you give to innovators who are pursuing patents?

In some ways, patent applications are similar to writing for a scientific journal. With those publications you need to show the ‘how’: you have to explain what the innovation is and prove that it works better than anything else that exists. With patents, it’s a bit different. Firstly, because these are legal documents that use legal language, you need to involve lawyers to write them for you. You don’t have to show how well your innovation works, but you do have to show distinctly how it differs from every other technology that exists. It makes sense to research the field thoroughly and understand what is already available before even attempting to build or patent something.

How do you envision AI transforming the music industry in the future?

The machine learning community has been focusing a lot on generative models that produce multimedia content - text, in the case of ChatGPT. For me, the question is whether this is good or bad for music. Having said that, I don’t think engineers should decide - it’s society as a whole that should discuss and grapple with this question.

There are probably applications in music where generative AI would make sense, but do we all want to listen to computer-generated music in the future? I don’t know, but I don’t think so. You could generate background muzak for retail stores and lifts, for example, but there are composers who earn a living creating that - and they are often the same people who make the interesting music I enjoy listening to.

I think it would be a mistake to restrict machine learning research for music, because there are various legitimate use cases for it. For example, DAACI, our portfolio company, makes tools that remove the difficult aspects of creating music while fostering musicians’ creativity. That’s a very different kind of application from a black box where you write a text prompt, click, and get a new piece of music. When assessing these technologies we have to be very analytical, and there probably won’t be a simple black-and-white answer.