Magenta, Google’s research project dedicated to understanding how machine learning can be used in the arts, has just released an open-source experimental instrument named NSynth. The group aims to build machine learning tools that give musicians new ways of expressing themselves through music.
The NSynth algorithm learns the core acoustic qualities that make each sound what it is, then combines those characteristics to generate a new sound – one that is not simply a blend of the two originals (despite its XY-pad lookalike interface).
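To illustrate the distinction, here is a toy sketch in Python. The `encode` and `decode` functions below are hypothetical stand-ins for NSynth’s real WaveNet autoencoder (which learns its representation from data); the point is only the structure: an ordinary XY pad mixes raw waveforms, whereas NSynth interpolates the learned representations and synthesises a new sound from the blend.

```python
import numpy as np

# Hypothetical stand-ins for NSynth's WaveNet autoencoder. The real model
# learns a compact latent embedding of audio; here we fake one with a
# coarse spectral summary, purely for illustration.
def encode(audio):
    # "Encoder": keep only the first 16 spectral magnitude bins.
    return np.abs(np.fft.rfft(audio))[:16]

def decode(latent):
    # "Decoder": rebuild a waveform from the toy representation above.
    spectrum = np.zeros(513)
    spectrum[:16] = latent
    return np.fft.irfft(spectrum)

def naive_crossfade(a, b, t):
    # What an ordinary XY pad does: mix the raw waveforms directly.
    return (1 - t) * a + t * b

def nsynth_style_morph(a, b, t):
    # What NSynth does, conceptually: interpolate the *learned features*,
    # then synthesise a new sound from the blended representation.
    z = (1 - t) * encode(a) + t * encode(b)
    return decode(z)
```

The morph is not a crossfade: even at the midpoint, decoding the blended representation produces a waveform that differs from the sample-by-sample average of the two inputs.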
Another group at Google, Creative Lab, took the sounds from NSynth and built a musical instrument named ‘NSynth Super’, which is the device you see in the above video. The directions for building your own NSynth Super are available for free on GitHub, and they have also released a free Max for Live device for Ableton.
On the surface, NSynth Super looks pretty innovative, perhaps even an advancement in music technology, particularly in its relationship to AI.
However, upon learning of the device, I couldn’t help but be reminded of the German-made Hartmann Neuron synthesiser, created 15 years ago, back in 2003. To my knowledge, the technology in this form has lain dormant in commercial synthesisers and instruments ever since. The Neuron was a polyphonic synthesiser that used what was, at the time, a new form of synthesis and sound modelling based on neural-network technology. The Neuron analysed audio files or samples and created digital models of those sounds, which could then be re-synthesised and processed using an extensive set of onboard data wheels and joysticks controlling more ‘musical’ sonic aspects such as instrument shape, size and acoustic behaviour. The joysticks allowed real-time tweaking of up to three parameters at once – 3D modelling in a true 5.1 surround sound environment.
Google’s research project is fully open source, but I can’t help but feel that they should have at least given the Neuron a nod, since its technology was truly revolutionary at the time. This is underlined by Magenta’s efforts: designing an innovative instrument with the best AI technology available today, and arriving at something very similar to the 15-year-old Neuron.
I don’t want to sound like I’m undermining their efforts. The NSynth algorithm uses AI to learn the characteristics of each sound, which is a substantial advance over the Neuron – as it should be, given Google’s resources and funding. But the core idea remains the same, and the result is much closer to a reinvention, or enhancement, of the wheel than a genuinely cutting-edge innovation.