Building a Mini-DAW in JUCE
This post summarizes a sequence of JUCE/C++ assignments that gradually assemble the spine of a basic DAW. Instead of a long code dump, I describe what each unit does, how the pieces interact, and which functions are pivotal. Short code fragments are included only where they clarify a concept.
1) HelloWorld — custom component and mouse interaction
This first unit establishes the core GUI loop: a component draws itself in paint(juce::Graphics&), reacts to input via mouseDown(const juce::MouseEvent&), updates a small bit of state, and then asks JUCE to schedule a redraw with repaint(). I render a greeting string centered in the window, change the text and background color on every click, and mark the last click location. The work happens entirely on the message thread: JUCE calls paint whenever repaint() is requested, and layout is handled with getLocalBounds() so the text remains centered on window resizes. Conceptually, this demonstrates JUCE’s event-driven model: events (like a click) mutate state and trigger rendering.
// Click -> change state -> repaint.
void MainComponent::mouseDown (const juce::MouseEvent& e) {
    lastClick = e.getMouseDownPosition();    // remember where the click landed
    idx = (idx + 1) % greetings.size();      // cycle to the next greeting
    repaint();                               // schedule an async redraw on the message thread
}
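For completeness, the paint side of that loop is just as small. Here is a minimal sketch, assuming the component stores bgColour, greetings, idx, and lastClick as members (those names are illustrative, not taken from the original code):
// Sketch only: bgColour, greetings, idx, and lastClick are assumed member names.
void MainComponent::paint (juce::Graphics& g) {
    g.fillAll (bgColour);                          // background picked on the last click
    g.setColour (juce::Colours::white);
    g.setFont (24.0f);
    g.drawText (greetings[idx], getLocalBounds(),  // getLocalBounds() keeps it centered on resize
                juce::Justification::centred);
    g.fillEllipse ((float) lastClick.x - 4.0f, (float) lastClick.y - 4.0f,
                   8.0f, 8.0f);                    // mark the last click location
}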
2) GUI Basics — wiring buttons, sliders, and combos
The second unit focuses on composing UI and reacting to parameter changes. Widgets are created as members, attached to the component with addAndMakeVisible(...), and laid out in resized() so their positions adapt to window size. Instead of formal listener interfaces, I use modern callback hooks like juce::TextButton::onClick, juce::Slider::onValueChange, and juce::ComboBox::onChange, which keeps logic local to the owning component. This pattern is essential for audio tools: a slider might represent gain or cutoff, a combo box might select a waveform or device, and a button might toggle transport. The function flow is simple and robust—construction wires callbacks, resized() handles geometry, and callbacks update parameters that other parts of the app consume.
// Minimal, readable callback wiring.
playButton.onClick = [this]{ startOrStopTransport(); };
gainSlider.onValueChange = [this]{ currentGain = (float) gainSlider.getValue(); };
modeCombo.onChange = [this]{ modeId = modeCombo.getSelectedId(); };
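The geometry half lives in resized(). Here is a sketch of the bounds-slicing idiom I mean; the margins and heights are illustrative:
// Sketch only: slice the local bounds top-down; sizes are arbitrary.
void MainComponent::resized() {
    auto area = getLocalBounds().reduced (10);
    playButton.setBounds (area.removeFromTop (30));
    gainSlider.setBounds (area.removeFromTop (30));
    modeCombo.setBounds  (area.removeFromTop (30));
}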
3) Media Player — transport, position, time display, and gain
Here I introduce file I/O and real-time audio playback using JUCE’s transport classes. Audio formats are registered with juce::AudioFormatManager::registerBasicFormats(), a file is chosen with juce::FileChooser, and a juce::AudioFormatReaderSource feeds samples into juce::AudioTransportSource. Playback is controlled with start() and stop(), and an on-screen position slider reflects the current playhead. A small time updater runs in juce::Timer::timerCallback() to keep labels and sliders synchronized without burdening the audio thread. Actual audio samples are pulled in getNextAudioBlock(const juce::AudioSourceChannelInfo&), where I also apply gain to demonstrate a simple signal path. The division of labor is the crucial point: transport and UI live on the message thread, while audio stays in getNextAudioBlock, lock-free and fast.
// Source + transport control
transport.setSource (readerSource.get(), 0, nullptr, reader->sampleRate);
playPause.onClick = [this]{ transport.isPlaying() ? transport.stop() : transport.start(); };
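The audio callback that completes the picture is deliberately tiny. A sketch, assuming currentGain is a float member updated from the gain slider as in the earlier onValueChange snippet (a production app would make it atomic or smoothed):
// Sketch only: let the transport fill the buffer, then scale it in place.
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& info) {
    if (readerSource == nullptr) {       // nothing loaded yet
        info.clearActiveBufferRegion();
        return;
    }
    transport.getNextAudioBlock (info);  // transport pulls from the reader source
    info.buffer->applyGain (info.startSample, info.numSamples, currentGain);
}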
4) Wave Lab — a compact wavetable oscillator
The fourth unit moves into sound generation with a CPU-friendly wavetable oscillator. A single cycle is precomputed into a buffer; setFrequency(float f, double sampleRate) calculates a fractional phase increment, and getNextSample() linearly interpolates between adjacent table entries, wrapping the phase accumulator back to the start of the table once it runs past the end. This approach keeps aliasing manageable at moderate frequencies and provides a clean foundation for a full synth voice (add ADSR envelopes, filters, and polyphony via voice management). In a real app, the oscillator’s getNextSample() is called inside your rendering routine (either a renderNextBlock(...) helper or directly inside getNextAudioBlock(...)) to fill the output buffer without allocations or locks.
float WavetableOsc::getNextSample() {
    auto i0 = (unsigned) idx;                    // integer part of the phase
    auto i1 = (i0 + 1) % tableSize;              // neighbour, wrapping at the table end
    float frac = idx - (float) i0;               // fractional part drives the interpolation
    float out  = table[i0] + frac * (table[i1] - table[i0]);  // linear interpolation
    idx += tableDelta;                           // advance by the per-sample increment
    if (idx >= (float) tableSize)
        idx -= (float) tableSize;                // wrap the phase accumulator
    return out;
}
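The matching setFrequency(...) is essentially a one-liner. A sketch, using the same tableSize and tableDelta members as above:
// Sketch only: the per-sample increment is the table length scaled by f / sampleRate.
void WavetableOsc::setFrequency (float f, double sampleRate) {
    tableDelta = (float) (f * ((double) tableSize / sampleRate));
}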
5) MIDI Connect — device setup, message logging, and sampler readiness
This unit establishes audio/MIDI device configuration and real-time MIDI input. juce::AudioDeviceManager centralizes I/O, and the built-in juce::AudioDeviceSelectorComponent provides a ready-made dialog to choose devices. I enumerate inputs with juce::MidiInput::getAvailableDevices(), open one via openDevice, and start it so messages arrive on handleIncomingMidiMessage(...). Inside that callback I parse events—MidiMessage::isNoteOn(), isNoteOff(), and isController()—and log their parameters (note number, velocity, controller values). This is the natural handoff point to a sampler or synth: route the parsed events to an AudioProcessor hosted by juce::AudioProcessorPlayer, so MIDI can trigger sounds in the audio graph. The pattern is the same across instruments: device setup, message callback, then event routing to audio.
// MIDI callback parses and forwards events.
void handleIncomingMidiMessage (juce::MidiInput*, const juce::MidiMessage& m) {
    if (m.isNoteOn())   append ("NoteOn "  + juce::String (m.getNoteNumber()));
    if (m.isNoteOff())  append ("NoteOff " + juce::String (m.getNoteNumber()));
    if (m.isController())
        append ("CC " + juce::String (m.getControllerNumber()) + "=" + juce::String (m.getControllerValue()));
}
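The setup that precedes this callback follows the sequence described above. A sketch, assuming the owning class implements juce::MidiInputCallback and keeps the opened device alive in a std::unique_ptr<juce::MidiInput> member named midiIn (that member name is illustrative):
// Sketch only: enumerate inputs, open the first one, and start receiving.
auto devices = juce::MidiInput::getAvailableDevices();
if (! devices.isEmpty()) {
    midiIn = juce::MidiInput::openDevice (devices[0].identifier, this);
    if (midiIn != nullptr)
        midiIn->start();  // from here, handleIncomingMidiMessage fires on a background MIDI thread
}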
Closing notes
The overarching pattern across all units is separation of concerns: the message thread owns UI, device dialogs, and transport state; the audio thread focuses on predictable, lock-free sample generation and processing. JUCE’s function touchpoints (paint, resized, callbacks like onClick/onValueChange, timerCallback, getNextAudioBlock, and handleIncomingMidiMessage) provide a clear, reusable structure. From here the logical next steps are recording with a ring buffer, building a simple synth voice with ADSR and a filter, and persisting project state via ValueTree to JSON.