Wherever you go in the music production space, be it Reddit threads or YouTube videos, the conversations are alive with the familiar talk of saw waves, saturation, filter cutoffs, and more. But beyond the familiar buzzwords and marketable features lies a rich tapestry of components and concepts that rarely get the spotlight and yet are the core features that separate the sonics and timbres of one synth from another. In conversation with the experts at Novation, this piece peels back the layers of synth anatomy to reveal the essential yet often overlooked elements that make up the backbone of these musical powerhouses.

In our latest exclusive interview, Novation’s lead hardware designer, Danny Nugent, who helped develop blockbuster synths like the Peak and Summit alongside Nick Bookman and Chris Huggett, sheds light on the transformative impact of digital and analog technologies, from the more complex waveforms shaping the future of sound to the nostalgic warmth of analog circuits. Get ready to uncover the secret sauce of synths that gives musicians and producers their unique edge in an ever-evolving musical landscape.


How do digital oscillators’ capabilities for complex waveforms and wavetables expand the range of sounds compared to traditional analog oscillators?

Analog oscillators sound great, but you are limited to simple shapes like triangles, pulses, and saws. These classic shapes will always create specific harmonics; for example, a saw wave will give you every harmonic, each quieter than the last (the nth harmonic at roughly 1/n of the fundamental's level), perfect for sculpting further with filters.
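That harmonic recipe can be sketched with simple additive synthesis. This is a minimal illustration of the textbook saw-wave series, not anything specific to Novation's oscillators, and the eight-point resolution is arbitrary:

```python
import math

def saw_additive(phase, n_harmonics=20):
    """Approximate a saw wave additively: every harmonic n, at amplitude 1/n."""
    return (2 / math.pi) * sum(
        math.sin(2 * math.pi * n * phase) / n for n in range(1, n_harmonics + 1)
    )

# One cycle sampled at 8 points: the more harmonics, the closer to an ideal ramp
samples = [round(saw_additive(i / 8), 3) for i in range(8)]
```

Raising `n_harmonics` sharpens the ramp, which is exactly why a saw is such rich raw material for a filter: there is energy at every harmonic to carve away.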

This is where digital and wavetable oscillators excel. A digital oscillator can play back any waveform, allowing the production of any set of harmonics. Wavetables take this further, letting you scan through different waveforms so the harmonics change over time, which is perfect for glacial pads or harmonically rich basses.
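Scanning a wavetable amounts to crossfading between stored single-cycle waveforms as a position control moves. The sketch below is a toy model of that idea (the four-sample tables and linear crossfade are illustrative assumptions, not how any particular synth implements it):

```python
import math

def scan_wavetable(tables, position, phase):
    """Read a morphing wavetable: `position` (0..1) crossfades between
    adjacent single-cycle tables; `phase` (0..1) indexes within a cycle."""
    idx = position * (len(tables) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(tables) - 1)
    frac = idx - lo
    size = len(tables[0])
    sample = int(phase * size) % size
    # Linear crossfade between the two neighbouring tables
    return (1 - frac) * tables[lo][sample] + frac * tables[hi][sample]

# Two toy tables: a sine cycle and a square cycle, 4 samples each
sine = [math.sin(2 * math.pi * i / 4) for i in range(4)]
square = [1.0, 1.0, -1.0, -1.0]
mid = scan_wavetable([sine, square], 0.5, 0.0)  # halfway morph at phase 0
```

Sweeping `position` with an envelope or LFO is what produces those slowly evolving harmonics the interview describes.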

Digital oscillators allow for more control over the waveform. For example, on Peak and Summit, it is possible to get oscillator sync without needing an additional oscillator or to do linear FM between the oscillators, so the pitch remains stable as the FM increases. Combining these techniques can create vibrant and complex sounds before you filter and add FX.
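The pitch-stability point can be seen in the textbook phase-modulation form of linear FM: because the modulator offsets the carrier frequency symmetrically in hertz, the average pitch stays at the carrier. This is a generic sketch, not Peak's actual implementation, and the frequency values are arbitrary:

```python
import math

def linear_fm_sample(t, carrier_hz=220.0, mod_hz=220.0, index_hz=100.0):
    """Linear (through-zero) FM: the modulator offsets the carrier frequency
    in Hz, so the perceived pitch stays centred on carrier_hz.
    Phase is the integral of instantaneous frequency; for a sine modulator
    that integral gives the classic phase-modulation form below."""
    return math.sin(2 * math.pi * carrier_hz * t
                    - (index_hz / mod_hz) * math.cos(2 * math.pi * mod_hz * t))
```

Increasing `index_hz` adds sidebands (brightness) without shifting the centre pitch, which is what makes linear FM playable across the keyboard.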

How do multitimbral engines offer more versatility in sound shaping than single-timbre engines?

Multitimbral engines may sound complex, but the idea is simple: they run multiple synth patches simultaneously. You can approximate this on a single-timbre synth within a DAW by layering up recordings, but the absolute joy comes from performing the layers live together.

I see multi-timbral as designing two musical systems that interact with each other. It can be as simple as having one layer do an attack sound while a pad slowly fades in, or panning two sounds in a stereo space but being able to play them in sync. More advanced techniques include using LFOs to create exciting polyrhythms between the two patches.

One of my favorite tricks on Summit is to layer two similar patches with different Arp settings. So, on one layer, the Arp could be going up; on the other, an Arp going down at a different rate and rhythm. Playing chords then creates these beautifully intertwined musical patterns with evolving harmony and polyrhythms, which I would never be able to come up with myself.    

Splitting the sounds on the keyboard also brings benefits. Being able to play two separate parts live can allow for more musicianship on stage, but it is excellent in the studio as well, as you can figure out a bass and chord idea at the same time while staying focused on the same instrument.

What do you appreciate most about the different sonic qualities of analog signal paths, such as transistor-based vs. tube-based circuits?

Analog signal paths impart character to the sound. This character all comes down to the non-linear responses, especially when you drive them. Guitarists always have the debate between solid-state transistor amps and tube amps. The difference between them is all about how they break up when driven. Tube amps tend to break up nicely, providing a lot of additional harmonics. The break-up also responds well to dynamics, allowing you to dig in as a player and control that response.

Unlike an electric guitar, a synth already uses very harmonically rich waves, so we go for a more straightforward approach with drive using a diode hard clipping circuit. This also means we don’t need tubes for every voice in the synth (which would be a nightmare to service!).

What is interesting about Peak and Summit is where these distortions are placed. There are three of these circuits: two on the voice, pre- and post-filter, and one after the voices are summed. The pre-filter circuit lets you beef up your harmonic content and make the sound thicker while still cutting the high end with a low-pass, for example. The post-filter distortion affects the filter’s resonance, allowing that resonance to scream. The final distortion comes after all the voices are summed, which means you get a different response as the voices stack together.
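The "diode hard clipping" mentioned above can be pictured as simply flattening the waveform beyond a threshold, which adds odd harmonics. This is an idealised sketch of the concept, not Novation's actual circuit, and the 0.7 threshold is an arbitrary assumption:

```python
def hard_clip(x, threshold=0.7):
    """Idealised hard clipper, loosely analogous to a diode clipping stage:
    any signal beyond +/- threshold is flattened, generating odd harmonics."""
    return max(-threshold, min(threshold, x))

# Quiet signals pass through untouched; loud peaks get squared off
clipped = [hard_clip(x) for x in (-2.0, -0.3, 0.3, 1.5)]
```

Where such a stage sits relative to the filter is the interesting part: clip before the filter and you thicken the raw harmonics, clip after and the filter's resonance itself gets driven.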

How does the physical interface design of a synth, like knob placement and menu structure, influence a user’s sound design workflow?

The physical interface is the key to the instrument and why you want to use it over a plugin. Having direct 1:1 control in a clear layout is critical. For Novation, we always lay out the pots to describe the signal path: oscillators before the mixer, followed by the filter and modulation.

This is a big part of the process when designing a synth, and we often go through many variations of layouts on paper first, emulating that sound design experience before finding the right one.

It’s not just the placement of the pots that matters. Each control needs to be fine-tuned to give the player the best experience. As synth designers, we want to create an instrument that makes it easy to find those sweet spots but also lets you push it to extremes when you want to.

Menu diving is always an interesting debate; with how complex synth engines are, it can be unavoidable at times. For the Peak, we wanted people to create a wide range of sounds without touching a menu, so we put parameters in the menu that are not performative, the kind you usually set and leave for a patch. Some hidden elements, like Peak’s FM features, were put in the mod matrix menu and were not immediately discoverable. Adding the FM controls to the front panel on Summit made this much more accessible and expressive.

Discuss the impact of high-resolution digital-to-analog converters in modern synths on sound fidelity and character compared to the old-school, original all-analog synths.

In a digital system, limits are based on the sample rate and the Nyquist frequency. The Nyquist frequency is the highest frequency you can recreate without aliasing artifacts. Playing a saw wave high up the keyboard will easily create harmonics that surpass this frequency in a standard digital system. Anti-aliasing filters must be implemented, or wavetables must be band-limited to counteract this. Analog synths do not have this problem.
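A quick back-of-the-envelope calculation shows how fast a saw runs out of headroom in a conventional digital system. This is a generic Nyquist sketch (the 44.1 kHz rate and the example note are assumptions for illustration, unrelated to Peak's 24 MHz engine):

```python
def aliasing_harmonics(fundamental_hz, sample_rate_hz=44_100):
    """Return (highest harmonic that fits under Nyquist, Nyquist frequency).
    In a naive digital oscillator, harmonics above Nyquist fold back as
    aliasing artifacts unless the waveform is band-limited."""
    nyquist = sample_rate_hz / 2
    highest = int(nyquist // fundamental_hz)  # last non-aliasing harmonic
    return highest, nyquist

# A high note around 2093 Hz (roughly C7): only 10 harmonics fit under
# 22.05 kHz, while an ideal analog saw carries the full series.
fits, nyq = aliasing_harmonics(2093)
```

Raising the sample rate pushes Nyquist up and lets more of the harmonic series through, which is the motivation behind running oscillators at extreme rates.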

In Peak and Summit, an FPGA was used to create the oscillators. Building DACs directly from the FPGA meant we could run the oscillators at 24 MHz, pushing that Nyquist frequency well beyond what the synth could produce. This allowed all the digital oscillators to be purer, without the band-limiting a conventional digital synth would need. In in-house tests, the Peak and Summit saw wave is indistinguishable from its analog counterpart on the Bass Station II.

Lastly, can you explain the paraphonic versus polyphonic synthesis concept and its impact on a synth’s chordal capabilities?

In polyphonic synths, you have the concept of a voice. A voice is a single mono synth with oscillators, filters, envelopes, etc. When you play multiple notes into the synth, the voice manager distributes those notes across the voices, giving you true polyphony and independent control of each voice’s amplitude, filter, and modulation.

In a paraphonic synth, you only have one voice. Multiple notes can be played by controlling the pitch of each oscillator with the notes on a keyboard. These oscillators will then share the rest of the voice, usually the same envelopes, LFOs, and filters. Chords can be played, but the notes will not have independent control.
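The practical difference can be sketched as a toy voice model: in a paraphonic architecture, every held note rides the same envelope, so triggering a new note restarts it for all of them. This is a conceptual illustration under assumed names (`ParaphonicVoice`, `note_on`), not any real synth's voice-allocation code:

```python
class ParaphonicVoice:
    """Toy paraphonic model: several oscillators get independent pitches,
    but they all share ONE envelope (and, in a real synth, one filter)."""

    def __init__(self, n_oscillators=2):
        self.pitches = [None] * n_oscillators  # one pitch per oscillator
        self.envelope_stage = "idle"           # a single shared envelope

    def note_on(self, pitch):
        # Assign the new note to a free oscillator, if one exists
        for i, p in enumerate(self.pitches):
            if p is None:
                self.pitches[i] = pitch
                break
        # Shared envelope retriggers, affecting every note already held
        self.envelope_stage = "attack"
```

A polyphonic synth would instead give each note its own `envelope_stage`, which is exactly the independent per-note control described above.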

From this, it would be easy to say that polyphonic is simply better than paraphonic, but the paraphonic behavior does open up some interesting sound design techniques. For example, on Bass Station II, it is possible to sync the oscillators together, put them in paraphonic mode, and then control the pitch of the synced oscillator from the keyboard. It’s an exciting sound, allowing you to play these ghost melodies hidden within the oscillator’s harmonics. In the same way, it is possible to play the ring modulation between the oscillators creatively. Both these effects would be complex to recreate on a polyphonic synth.

By Will Vance
Will Vance is a professional music producer who has been involved in the industry for the better part of a decade and has been the managing editor at Magnetic Magazine since mid-2022. In that time, he has published thousands of articles on music production, industry think pieces, and educational articles about the music industry. Over the last decade as a professional music producer, Will Vance has also run multiple successful and highly respected record labels, including Where The Heart Is Records, as well as launching a new community-focused label through Magnetic Magazine. When not running these labels or producing his own music, Vance is likely writing for other top industry sites like Waves or the Hyperbits Masterclass, or working on his upcoming book on mindfulness in music production. On the rare chance he's not thinking about music production, he's probably running a game of Dungeons and Dragons, for which he has been the dungeon master for many years.