What’s a synthesizer?
It all begins with an Oscillator. That’s an electronic device that emits a waveform at a defined frequency, generating a sound. Think of it as a physical device (because that’s what it is in a modular synthesizer), or, as the WebAudio API calls it, a node. That node has an input and an output: the input is the waveform and frequency, the output is the sound signal. That signal can then be connected to other devices. An actual synthesizer would have speakers as the last device in the chain, but the WebAudio API foregoes that; the last node simply acts as a speaker.
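That node chain can be sketched roughly like this with the WebAudio API. Passing the AudioContext in as a parameter is just a convenience of this sketch, and the option names are illustrative:

```javascript
// Minimal sketch of the oscillator-to-speaker chain: one oscillator node,
// connected straight to the destination, which acts as the speaker.
// `ctx` is expected to be a Web Audio AudioContext.
function playTone(ctx, { type = "sine", frequency = 440 } = {}) {
  const osc = ctx.createOscillator();
  osc.type = type;                 // waveform: sine, square, triangle, sawtooth
  osc.frequency.value = frequency; // pitch in Hz
  osc.connect(ctx.destination);    // the last node in the chain
  osc.start();
  return osc;                      // call osc.stop() later to end the note
}
```

In a browser you’d call it as `playTone(new AudioContext(), { type: "square", frequency: 220 })`.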
My oscillator is a very simple one. It supports the four basic waveforms (sine, square, triangle, sawtooth) and a frequency. The different waveforms generate different kinds of tones: A sine wave sounds a bit warm and soft, a sawtooth is more industrial and harsh. The frequency determines the pitch. 440Hz corresponds to the concert pitch A. Wikipedia has a full table of all the frequencies.
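Those frequencies follow a simple pattern in equal temperament: each semitone multiplies the frequency by the twelfth root of two, anchored at A = 440Hz. A small helper, using the MIDI convention that concert pitch A is note number 69:

```javascript
// Frequency of a note in twelve-tone equal temperament,
// using MIDI note numbers (A4 = note 69 = 440 Hz).
function noteToFrequency(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// noteToFrequency(69) → 440 (concert pitch A)
// noteToFrequency(81) → 880 (one octave up)
```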
Attack, Decay, Sustain, Release
Next, I want some controls to further manipulate the character of my sound signal. Give it a more roomy tone, or a pluck, or a swell. On a real synthesizer, those controls are called the envelope, contour, or ADSR, which is short for Attack, Decay, Sustain, and Release.
It’s an established system for giving a wide gamut of characteristics to a relatively simple signal, by manipulating its volume (y-axis) over time (x-axis). The four keywords each stand for a specific aspect:
- Attack: The time the volume takes to swell from zero to its peak at the beginning of the tone
- Decay: The time the volume takes to fall from that peak to the sustain level, like the pluck of a guitar
- Sustain: The volume level the note is held at for as long as the signal comes
- Release: The time the volume takes to fade out after the signal has gone, like a hall effect
There are some more specific envelopes and there are also lots of effects and methods to manipulate the sound further, but I’ll do just fine with those four basic controls.
Here’s a rough sketch of how the logic works.
Each audio node gets options to configure it. Pressing a key triggers the sound generation in the Oscillator and the subsequent Attack, Decay, and Sustain steps. Letting go of a key triggers the Release step and then terminates the note.
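A minimal sketch of that envelope logic, assuming a GainNode sits between the Oscillator and the destination. The gain AudioParam is passed in so the scheduling stays testable, and the `noteOn`/`noteOff` names are my own:

```javascript
// Key pressed: schedule Attack, Decay, and Sustain on the gain AudioParam.
function noteOn(gain, now, { attack, decay, sustain }) {
  gain.cancelScheduledValues(now);
  gain.setValueAtTime(0, now);                                 // start silent
  gain.linearRampToValueAtTime(1, now + attack);               // Attack: swell to peak
  gain.linearRampToValueAtTime(sustain, now + attack + decay); // Decay: fall to Sustain level
}                                                              // Sustain: level holds until noteOff

// Key released: schedule the Release fade, then the note can be stopped.
function noteOff(gain, now, { release }) {
  gain.cancelScheduledValues(now);
  gain.setValueAtTime(gain.value, now);          // pin the current level
  gain.linearRampToValueAtTime(0, now + release); // Release: fade to silence
}
```

In the browser, `gain` would be `gainNode.gain` and `now` would come from `audioContext.currentTime`.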
What “pressing a key” actually means depends on the input device. Ideally, I’d wish for something like
<input type="piano">, but realistically, I had to implement handlers for mouse, touch, and keyboard events myself.
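Here’s roughly how such a keyboard handler could look. The key-to-frequency map and the `startNote`/`stopNote` callbacks are placeholders for the synth’s own logic; the guard against repeated keydowns matters because holding a key fires `keydown` over and over:

```javascript
// Illustrative mapping of a keyboard row to note frequencies (C4..F4).
const KEY_FREQUENCIES = { a: 261.63, s: 293.66, d: 329.63, f: 349.23 };
const activeNotes = new Map(); // currently sounding notes, keyed by keyboard key

function handleKeyDown(event, startNote) {
  const freq = KEY_FREQUENCIES[event.key];
  if (freq === undefined || activeNotes.has(event.key)) return; // unmapped key or auto-repeat
  activeNotes.set(event.key, startNote(freq));
}

function handleKeyUp(event, stopNote) {
  const note = activeNotes.get(event.key);
  if (note === undefined) return;
  stopNote(note);
  activeNotes.delete(event.key);
}
```

In the browser these would be wired up with `window.addEventListener("keydown", …)` and `"keyup"`, with analogous handlers for mouse and touch.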
So I wound up with a functional synthesizer. To top it off, I added the usual PWA niceties and a logo, and called it JSSynth:
Chrome limits how many oscillators can play simultaneously: on Windows and macOS that appears to be 50, but it’s much lower on Android. Firefox doesn’t impose such a limit at all.
Safari doesn’t include the AudioContext yet (but it’s in the Technology Preview for version 14), so this synthesizer won’t work in Safari on macOS or in any iOS browser until Apple opens its OS up to third-party browser engines.
I think there’s a bug somewhere in the key release mechanism, because keys tend to get stuck when played frantically. Until I can be bothered to fix that, a reload is the only thing that helps.