It adjusts while you play
1Take v1.4 adds AI Auto-Optimization. It runs during recording and adjusts EQ, compression, input trim, and saturation on the fly. No pausing to tweak settings. No remembering to fix things after the fact.
It's not a loudness normalizer. It reads the actual character of your recording and picks parameter values for that specific take.
What it adjusts
Four processing stages, each tuned to the live signal:
EQ
FFT analysis finds the problem bands — low-mid muddiness, harsh upper-mids, room resonances — and sets corrective filter values. Both frequency and gain are set per band, so cuts and boosts go exactly where the signal needs them.
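The band-energy idea can be sketched in a few lines. This is a simplified illustration, not 1Take's actual engine: the band ranges, the 6 dB "sticks out" threshold, and the cut-half-the-excess rule are all assumptions made for the example.

```python
import numpy as np

def suggest_eq_cuts(samples, sr=48000):
    """Sketch: find bands with excess energy and suggest corrective cuts.
    Band limits and thresholds are illustrative, not 1Take's values."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1 / sr)
    bands = {"low-mid": (200, 500), "upper-mid": (2000, 5000)}
    # Reference level: mean energy across the whole spectrum, in dB
    ref_db = 20 * np.log10(spectrum.mean() + 1e-12)
    cuts = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        band_db = 20 * np.log10(spectrum[mask].mean() + 1e-12)
        excess = band_db - ref_db
        if excess > 6:  # band sticks out: cut roughly half the excess
            center = freqs[mask][np.argmax(spectrum[mask])]
            cuts[name] = {"freq_hz": round(float(center)),
                          "gain_db": -round(excess / 2, 1)}
    return cuts
```

The point is the shape of the decision: measure where energy piles up, then set both frequency and gain per band rather than applying a fixed curve.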
Compression
Dynamic range gets measured throughout the take. The optimizer picks threshold, attack, and release based on the source — loose and transparent for vocals that are already in a natural range, tighter for instruments with sharp transients.
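One common way to quantify "sharp transients vs. already in a natural range" is crest factor, the peak-to-RMS distance. The sketch below uses that; the breakpoints and the returned parameter values are illustrative, not what 1Take actually picks.

```python
import numpy as np

def suggest_compressor(samples):
    """Sketch: map measured dynamics to compressor settings.
    Crest-factor breakpoints and returned values are illustrative."""
    peak = np.max(np.abs(samples)) + 1e-12
    rms = np.sqrt(np.mean(samples ** 2)) + 1e-12
    crest_db = 20 * np.log10(peak / rms)  # peak-to-RMS distance
    if crest_db > 15:    # spiky transients, e.g. percussive sources
        return {"threshold_db": -18.0, "attack_ms": 3.0, "release_ms": 80.0}
    elif crest_db > 9:   # moderately dynamic material
        return {"threshold_db": -14.0, "attack_ms": 10.0, "release_ms": 150.0}
    else:                # already controlled: stay loose and transparent
        return {"threshold_db": -8.0, "attack_ms": 25.0, "release_ms": 250.0}
```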
Input Trim
Noise floor and LUFS measurement together determine whether the level is too hot or too quiet. The trim gets adjusted to put the signal in a good spot for downstream processing — enough headroom, no clipping.
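The trim decision can be approximated as "push the average level toward a target, but never let the peak reach clipping." In this sketch a plain RMS measurement stands in for a true LUFS meter, and the -18 dBFS target and -1 dB ceiling are assumptions, not 1Take's values.

```python
import numpy as np

def suggest_trim(samples, target_rms_db=-18.0, ceiling_db=-1.0):
    """Sketch: gain that moves the signal toward a target level while
    keeping the peak under a ceiling. RMS stands in for LUFS here."""
    rms_db = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-12)
    peak_db = 20 * np.log10(np.max(np.abs(samples)) + 1e-12)
    gain_db = target_rms_db - rms_db              # push toward the target
    gain_db = min(gain_db, ceiling_db - peak_db)  # but never clip the peak
    return round(gain_db, 1)
```

A too-quiet take gets a positive trim, a too-hot one a negative trim, and the ceiling term guarantees headroom either way.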
Saturation
Some recordings just sound thin. A little harmonic saturation helps. The AI checks spectral density and harmonic content before applying any — the goal is presence, not distortion.
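A common way to add harmonics without obvious distortion is gentle tanh waveshaping; feeding it a sine produces odd harmonics that thicken the tone. The sketch below is one possible shaper, with an illustrative drive amount — not 1Take's saturation stage.

```python
import numpy as np

def apply_saturation(samples, drive=1.5):
    """Sketch: gentle tanh waveshaping that adds odd harmonics.
    The drive amount is illustrative, not a 1Take setting."""
    # Normalize so a full-scale input still maps to full scale
    return np.tanh(drive * samples) / np.tanh(drive)
```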
You can see every change
Every parameter the AI touches shows up inline in the recording detail view, with the before and after values right next to each other. Don't like a change? Override it manually. All four stages are shown, so nothing happens behind your back.
I built it this way on purpose. AI processing that hides what it does isn't useful.
It learns from your previous takes
The optimizer doesn't treat every recording as a blank slate. When a certain EQ approach consistently works well under a given preset, that pattern carries forward into the next session's starting point.
In practice this means accuracy improves the more you use 1Take. Early sessions run on the general analysis engine. Later ones also draw on what worked in your specific room and with your specific setup.
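The "carries forward" idea can be pictured as seeding each new session from past results under the same preset. The per-preset averaging below is a deliberately simple stand-in, not 1Take's actual learning model, and the field names are hypothetical.

```python
def starting_point(history, preset, default):
    """Sketch: seed a new session from what worked before under this
    preset. Plain averaging stands in for the real model; the
    'eq_gain_db' and 'preset' keys are hypothetical."""
    past = [h["eq_gain_db"] for h in history if h["preset"] == preset]
    if not past:
        return default  # no history for this preset: general engine only
    return sum(past) / len(past)
```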
Exported data: markers.json
When you export a take, markers.json includes a full log of every parameter change the AI made — the file that was analyzed, each parameter, the value before, and the value after.
This is handy when you bring the audio into a DAW. You can see exactly what 1Take did and match your plugin chain to it, or just use it as a reference for what the room sounded like.
| Field | Description |
|---|---|
| analyzedFile | Name of the recording file that was analyzed |
| parameter | The processing parameter that was adjusted |
| previousValue | The value before optimization |
| newValue | The value after optimization |
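Given those fields, a log entry might look something like this. The file name, the parameter naming scheme, and the wrapping array are illustrative guesses, not 1Take's actual schema:

```json
{
  "optimizations": [
    {
      "analyzedFile": "Take_012.wav",
      "parameter": "eq.lowMid.gain",
      "previousValue": 0.0,
      "newValue": -3.5
    }
  ]
}
```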
How it works under the hood
Everything runs on-device, during recording. Three measurement systems drive the decisions:
- Spectral analysis — Continuous FFT reads the energy distribution across the spectrum and feeds EQ decisions in real time
- Noise floor detection — Quiet passages get analyzed to establish an ambient baseline for gain and trim
- LUFS measurement — Integrated loudness is tracked against broadcast and streaming references, which drives compression and trim
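The noise-floor measurement above can be sketched as "take the quietest frames as the ambient baseline." The frame size and the 10th-percentile choice here are illustrative assumptions, not 1Take's values.

```python
import numpy as np

def estimate_noise_floor(samples, sr=48000, frame_ms=50):
    """Sketch: estimate the ambient noise floor from the quietest
    frames. Frame size and percentile choice are illustrative."""
    frame = int(sr * frame_ms / 1000)
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    quiet = np.percentile(frame_rms, 10)  # the quietest passages
    return 20 * np.log10(quiet + 1e-12)   # dBFS
```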
No audio leaves your device. Everything is local.
Turning it on
- Open Settings in 1Take
- Tap Recording
- Enable AI Auto-Optimize
Once enabled, it runs automatically for every new recording. Old recordings aren't touched retroactively — but if you want to run optimization on a past take, open its detail view and use the manual option there.