I wrote this for you, before I knew you might need it.
I did it because I’ve been through the same journey — different paths and streams, but we share a common belief.
And although the specifics of the craft may differ from what I’m about to tell you, I intend the broad strokes to be relatable.
So let’s start with this:
Yes, you too can mix and master in headphones.
What seems like a face-value observation about audio quickly gets kneejerked, rebutted, and quite frankly, spat upon. Usually, it’s by people so used to doing things their own way that they don’t consider that, well, there might be others out there doing awesome with their own approach. In other words, they’re selfish, not genuinely interested in helping you advance your craft, and came here to voice an opinion at you, not uplift a collaboration with you. So, that’s not useful.
First: I have a profound respect for talented specialist audio roles and “golden ears”. I also have a hearing disability called hyperacusis, which is one of those weaknesses that’s also a strength, because I hear things differently from most. The built-in EQ in my ears is already imbalanced, so any external compensation I’m doing is also overcompensating for my natural deficiencies! But, like how more and more have challenged the dogma of why universities can bill so much for tuition and how the Industrial Revolution’s fossils still lord over us, I question a lot of past beliefs that were true then, but aren’t now.
This extends to audio practices: in earlier decades, there was more doubt that you could make expressive electronic music, because there was a perception that “machines do all the work and take all the soul out of it”. (HINT: even Kraftwerk got some wicked melodies out of the bargain.) Also: remember when the “home studio” was scoffed at, because you needed to be in a “professional environment” to get the job done? And how, based on that limited worldview, “doing it on your laptop in your bedroom/hotel” couldn’t possibly be valid, just because you didn’t pay your dues and the whole price of admission to do things the hard way? (Some odd codependency with “you get what you pay for”, I suppose.)
This is not a technical how-to article. (There are lots of those.) It might not even be a philosophical if-why one. (There are fewer of those.) But it speaks to the metaphors of musicmaking, and it sure as heck is a reassurance, to share my experiences. And maybe, just maybe, if they match up with yours and where you want to go, it’ll be useful.
So again, for emphasis: YES! YOU TOO CAN MIX AND MASTER IN HEADPHONES!
There have been decades of opposition to this. Some of it is based on science; a lot of it is based on fear, more specifically, “being afraid to do things differently”. Opponents trot out the same points, such as: you can’t judge the bass, stereo imaging will be wack, you need the crosstalk, the frequency spectrum will be doubly wack, etc. But consider this: do you think a lot of people buy headphones and earbuds because they listen to music in them? And why do various games, films, and media advise you to “Listen in headphones”, with all their surround and binaural goodness? Certain experiences even have adaptive settings for headphones vs. soundbars. Could it, then, perhaps be a valid end goal?
Cutting out a lot of the chaff along the way, let’s make some stellar presumptions…
One of them being: before any of this, you need a “decent” audio signal path. And by decent, I mean that it’s relatively transparent, that even if your audio interface isn’t an RME, you’re able to hear results from others that sound good to you. And of course, that you have a decent pair of headphones to begin with. To remix a saying from Centaurworld:
You are able, but are you comfortable?
Another of these presumptions being: you’ve used a pair of headphones so much that you know what all manner of other music sounds like on it. You’ve listened to talky podcasts, headbanging metal, thumpin’ EDM, and maybe even the occasional slice of a particular string quartet playing over the lowercase rustling of library pages being flipped. You’ve established a range, and for lack of a better way to put it, are intimate with your headphones. You know them inside-out. They are thus a reference. A safe space for your ears.
And that gets us to this: say you make music in headphones — including “mixing” and “mastering” — and you put it on your phone, and you play it in your car, at a friend’s house, heck, on a club soundsystem. Maybe you even have another friend with a “proper” setup you can audition. The point being, this is a feat of ambassadorship, and you know how what gets output to your headphones will translate, precisely because you’ve been down this road many times.
If you’ve come this far, good.
Reading further implies you have a general sense of what “mixing” and “mastering” mean, how the definition has changed over time, and so on. But as a basis for discussion, you understand this much.
There’s also an inferred curtness about what certain words mean, where no, you aren’t getting the full-on Sylvia Massy or Bob Ludwig treatment via DIY. No way. If you’re going to get heart surgery, you need a cardiovascular specialist. But what it does signify: simply by being hands-on, you’re learning about what makes “mixing” and “mastering” important to the process, and gaining a greater respect for the craft and those who came before you. I’ve been exploring this very thing for over a quarter of a century, and still have more questions.
Used to be a given that once you do the recording, the separate phases happen sequentially, with mastering being the final one, handed off to someone else you trust. But now, with the power technology imbues you with, with the minimized cost and maximized convenience at your fingertips, you can do it all at the same time. (That’s what I do: I fly in melodies, mix as I go along, and “mastering” is often using a template that I tune for each track.) Should you? It varies. Yes, it can help to have a fresh pair of ears listen and improve on what you did. But it’s just as valid for you to play “auteur” and control the whole flow. Just like in visual media, where certain directors are content to help other storytellers find their way, while others need to imprint their own voice on everything underneath them.
It may also be that while you can get part of the way there, you eventually discover that you prefer someone else to mix and/or master your music with whatever tools they have. But just like the voiceover director who mimes the characters’ crazy intonations so that the actual VO artists have a reference track to shoot for, some is better than none. Plus, it’ll give you a better appreciation for how it all works.
My backstory, in brief, is that due to a combo of being short on money and time, having hyperacusis, and preferring an isolated “in my head” immersion, that’s how I started trying this full production workflow in headphones. I began because I was curious. I had help along the way, although a lot of my experiments are in isolation. While I’ve gotten to a relatively stable state, I’m nipping and tucking as-needed. If Voxengo Elephant comes out with a new limiting algorithm, I’m going to try it out, same as any virtual instrument and effect. All in pursuit of getting closer and closer to translating what’s in my head into your ears.
I’ve had various headphones along the way. For a long stretch, I stuck with Sony MDR-XB700s. I found my impressions from almost a decade ago, and I still stand by them. I made so many tracks on them that garnered me awards in music competitions, and heaps of praise! (Yay, social proof.) They were thought to be “too colored with that boomy bass”, but they sounded so fun, and I was able to get to where people asked me what speakers I monitored my mixes on. That felt pretty gratifying. Although, revealing to these peeps that it was, in fact, headphones… made the resulting fallout awkward and unpleasant. It was like an evil version of the Pepsi Challenge, where someone tells you they prefer Pepsi, but it’s actually Coke all along.
Also consider that on your journey, the results you’re getting are on-target for (or at least getting closer and closer to) your actual intent. In other words, whatever criticism you receive is compatible with that end goal, and external feedback isn’t veering you off of the path you chose to travel. To put it more colorfully: if you want a loud, brash, brickwalled-to-the-max track and someone else has you earmarked for something with a lot more dynamic range circa Squarepusher or Hybrid of today, their feedback might be wrong for you in this moment.
I’ll lay this out preemptively. People like to defend what they have. Maybe they put a lot of money into a conventional setup, and have too much of a sunk cost (in their own mind) to change it up. Maybe they tell you to send them your music and prove it, but they aren’t doing anything for you in return — I’ve gotten that, I get it. (And you’re in the right place, torley.com, to listen to my music.) There are a number of scenarios that replay themselves across spacetime, variations on a theme. And if you’ve played this game like I have, your brain’s keen pattern detector will pick up on these, too. You won’t even have to think about it, there’ll just be — blink! — that’s what it is.
I routinely re-examine my own prejudices. More recently, it got so hot that I had to turn on air conditioning to stay cool. It makes a lot of whooshy pink-esque noise, so I pitched myself the idea: “What if I got active noise-canceling headphones?” which is taboo upon taboo, if you’ve read various forums out there: one step beyond the usual naysaying about headphones. I picked up a refurb pair of the Sony WH-1000XM4s, and within a week, I had figured out how to translate what I did on the MDR-XB700s to the WH-1000XM4s, to where I’d wear them back and forth and be satisfied. And stay cool. Different profiles, but still good. (And thank goodness RME allows individual workspace/EQ settings as a one-click change.)
This still seems fairly unorthodox, but it’s obvious to me in hindsight. Colors of noise are the enemy in a properly soundproofed studio, with extra care being paid to acoustic foam to minimize reflections, getting a computer that’s quiet under load, and such. But certain things — like mechanical keyboards and music controller keyboards — can’t get any quieter past a certain point, due to physics. However, with active noise canceling, you can operate more gear with the outside world shut out, and make the world of music what you’re hearing — in headphones.
So why not give it a go… what if?
You’re going to trade your time, energy, and funds for it. You’ll learn something coming out the other side. In my experience, it was absolutely worth it, but it’s also frustrating because of those who keep harping on hating headphones for mixing/mastering, without having invested much of their own ear-meat to literally hear it out.
A lot of emotions carried me through this. The act of listening, after all, engages those. I think about other fields and counterintuitive wisdom in them, like how Harold McGee and Heston Blumenthal debunked ages-old “wisdom” about washing mushrooms by applying science. Those findings led to further developments from Jim Fuller. How do we even get here without being curious? We can’t. And even with the science, we can’t necessarily intellectualize why we might be curious. All I know is that I heard a lot of things repeated, and wanted to test them for myself.
Be kind and patient with yourself. If you already have audio monitors but haven’t been able to make them work for you, it doesn’t mean you throw them away. Maybe you switch between monitors and headphones, not unlike having salty and sweet tastes together can bring out the flavors in each other. Listen to what you want to get out of this, and what your priorities are. Do you want to reduce the amount of physical space your setup uses? That’s often a compelling part of using headphones more. Or perhaps your own hearing is better tailored towards hearing tiny details in the cans, right next to your ears, and you want others to share that experience.
I encourage you: if you are inclined towards this path, it’s legitimate. You aren’t alone, because I’ve traveled here before, and I’ll walk alongside you. What a weird thing to be advocating for, but here’s where we find ourselves.
In the end, none of this is to say that everyone should mix & master on headphones. My point is that you can on cans!
Delight in your musicmaking, and the rest will follow.
Now I can say I’ve had that experience.
Flashback to a record store in 1997 — some 20+ years ago, when I asked my Mom if I could pick out several CDs. She found objections to some of the cover art, but was otherwise agreeable and supportive. I had just sailed past the nascent cusp of discovering electronic music, so these were all “genre” picks. I remember Karma by Delerium, the Lush 3 EP by Orbital — or was it In Sides this time? And did I also get Beyond the Mind’s Eye from Jan Hammer during this trip? Some of that blurs together, but I surely did request Oxygène 7-13 by Jean-Michel Jarre.
How’d I hear of it? Flashback a bit earlier: I was tuned into MTV Amp, which bookmarked the “stateside electronica invasion”, and the airwaves opened me up to the synthesis possibilities beyond. There was this amazingly kinetic, animated video for “Oxygène Part 8” that I recorded to VHS, and kept playing. It would set in motion one of the pillars of my musical philosophy to this day: that music can be both adventurously experimental (weird noises!) AND irresistibly hummable (catchy tune!). They aren’t mutually exclusive.
I’d heard of Jean-Michel Jarre even before that moment, though. Rewind even further, and I’ll recall that the first time I was ever exposed to his music was through some skillfully-transcribed MIDI files that used all manner of trickery to simulate some of his distinctive signatures, like his burbling LFOs and spacey sweeps that wrapped the core melodic sequences in an all-encompassing bliss. The files had odd names (to me at the time) like “EQUINOX4.MID” (8.3 amirite?), and in my naive and unbounded mental meandering, I’d sometimes wonder why you’d give every song on an album the same name, just divided in parts.
I’ve long admired JMJ for his longevity and consistency (across ANY kind of music, five decades onward!), and for his more recent, overt ambassadorship in collaborating with other electronic music greats and building bridges across styles in electronic music. The sad thing with myopic “EDM” is the transient “flavor of the month” churn and accelerated thrust that burns its practitioners out quickly — you need to take care of yourself, and as I write this, I see the tragic news about Avicii. There’s this commercialized drive to “chase trends” or to “outdo your last” without being clear on what the purpose of it is, or how it comes at the expense of one’s inner voice, with oddly homogeneous tracks sitting atop Beatport and 100 things sounding like they could be made by a single producer.
Striving to be mindful of that, I’d composed a number of homages to what I’ve learned from JMJ’s works — a branch from the same tree that birthed Zoolook’s creative sampling techniques, a vibe which I think has been neglected even amidst so many other explorations.
So it was fitting that when JMJ came and visited in my neck of the woods for the first time, I’d see him in concert, live, after all this time. I played out how it’d happen in my mind, and he delivered beyond expectations. He’s the apex of a showman, more energetic than most performers half his age (and far more visually engaging than any “I see this DJ tapping buttons but not sure what it’s doing or if he’s even playing live”) complete with guitar/keytar/laser harp solos and even “micro cam” performances — yes, the latter was during “Oxygène Part 8” and I went wild!
My other favorite among favorites is his fairly recent collaboration with Armin van Buuren, “Stardust”, which sounds both comfortingly familiar yet strikingly daring, and has one of the most elegantly constructed “drops/breakdowns” I’ve ever heard, punctuated by crisp tom fills and an essential melody.
Can you spot me?
At this moment, specifically 1:55, I was reminded of why I got into electronic music in the first place, why I love exploring so many sonic possibilities across an always-unfolding palette of playful tools. Whether it’s done in the solace of the studio or delighting a crowd in realtime, I believe, given the vast canvas we have to paint layers on, it’s imperative we producers be as starships, soaring forth to discover emotionally resonant realms of the aural arts.
In a world that is so much hustle and bustle, it’s difficult to find a contemplative moment to sit still and remember why you’re doing what you’re doing. And besides the sheer entertainment value here, I found a profound yet simple truth that brought me back to my roots.
UPDATE: thank you to matrixsynth for capturing this video, where my enthusiasm is also apparent. You should check out his writeup too!
You never outgrow the things you love — more specifically, the unfinished questions from childhood and youth that live on, as recurring motifs throughout the rest of your life. My first “serious” synthesizer was an Alesis QS8. Over recent months, incrementally (and perhaps insidiously), I dreamt more about revisiting those sounds I’d grown up with. The QS series is quite underrated to this day — carrying knocks for lack of a resonant filter, among other limitations that would shape my creative growth. I was too naive to let any of that stop me, and I still vividly recall how my parents supported my first steps into further electronic music with this particular model.
So, come 2018, coupled with other fortuitous timing, I acquired a QSR, which is the rackmount version of the QS8. Same sounds, much more compact. Thanks to some helpful folks with legacy knowledge that’s otherwise been lost to the dunes, I’m back up and running. In those 1997-98 salad days — 20 years ago! — I became attached to the presets, though clamped by my lack of experience and technological developments that would not arrive for years to come.
Here I am now.
As I plugged in the Alesis QSR and prayed, amidst other skittering thoughts… would it turn on?… what would I hear after all these years?… there was the increasingly prominent question I kept asking myself as the mainline throughout, resounding like the drums in the Master’s head on Doctor Who:
“What if I could play those sounds I grew up on, with everything I’ve learned since?”
There’s such beauty in essentiality.
For it was Clara Rockmore who made the Theremin — nothing more than a flavor of sine wave! — sing to its utmost potential. And it’s within those simplest of sounds that, if we can transform them across realtime to produce emotional changes of dynamic magnitude, then that, to me at least, is a transcendent measure of music. Imagine this unfolded across many layers, and the effect can be even more exponentially extended, then contrasted in the arrangement itself as we switch between “stripped down, raw” bits vs. a “litany of layers”.
I considered those early Alesis QS programs within banks, with names like “’74 Square” (such a tone for soloing wildly, as I did on the jazzy “bad acid square”) and “DSP Violin” (with its delay-flange, closest thing I could find at the time to the solo on Deep Forest’s “Marta’s Song”), with their percussive counterparts such as “UFO Drums” (whose LFO-swooshed hi-hats are utterly unique) and “Industro” (early NIN-esque abrasion).
At the time, I had to make heavy trade-offs: although the Alesis branched beyond the General MIDI spec and allowed drums per se to live on any channel — not just #10! — I was still limited to 16 channels in all, with 64 voices of polyphony being split further amongst them. And the effects bus: while it bathed single patches in glorious flangy washes and other colorful treatments, it became glaringly clear how dependent many of these patches were on said FX, as soon as I was forced to choose from a limited amount (part of the “4 independent stereo multieffect processing busses”, as the literature put it) within a full mix.
This drove me towards further things. I wanted to be fully unshackled. I’d obtain more racks of hardware synths (whose ungainly physical bulk would send me in the opposite direction for my 2000s in-the-box phase). Those racks were before the Golden Age of VST, y’see, and I’d specifically seek out those with powerful DSP FX capabilities (like the Novation Nova, which allowed a whole complement for each of its parts with no sacrifice, even if the reverb had a characteristic metallic tinge!) Habits built upon each other over the years, until I’d finally come full cycle, in an era when Jean-Michel Jarre (trivia: my fave of his is still “Oxygène 8”) revisits his Chronologie. “Sky — no, alternate universes are the limit”-type freedom in Ableton Live, nudging against the paradox of “analysis paralysis” when we have 500 piano sounds to choose from. The impulse to place the first note and keep jamming, fighting, oh-ever-so-hard, against that conjoined demon, “writer’s block”.
While expressive controllers haven’t reached mainstream embrace as I’d hoped they would — and indeed I draw parallels between space-loving Vangelis’ championing of the never-mass-market Yamaha CS-80 and the “Why aren’t we on Mars already?” with shades of respect to Elon Musk — I have some fantabulous tool-toys I enjoy very much. Such as the Expressive E Touché, which verily must be rubbed to be believed — I was a skeptic until I wiggled the wood and took my signature pitch-swooping to those next levels, compleat with bonus stages! And Palette Gear’s live-reconfigurable sliders/buttons/dials, which I’ve created a profile for, to map to my old QS8’s MIDI CC 12/13/91/93 sliders. To summarize, newer inventions that bring out the best in what I had before. The reincarnation of a long-lost family, the homecoming of aural comfort food, all sharing a meal at this same sonic table.
I’m also taking advantage of this opportunity to pick up sounds I didn’t hear the first go ‘round. The QSR has two slots, like a toaster, to put in expansion QCards containing more samples ’n’ sounds, and the World Ethnic QuadraCard is one whose cover art conjured up a lot of wistful speculation of what it might sound like. Now I know.
Scrolling through the preset banks shows me many old friends I haven’t seen in so long.
Anyway. It’s so, so very awesome to be back.
There’s a lot of bad advice about electronic musicmaking out there. Facade trash that seems like it should be worthwhile, but apply a little oil-o’-critical-thinking and it falls apart like a cheap suit.
Here’s one: “It doesn’t matter what you use, it matters how you use it.” Utterly mindboggling how often this is repeated, sometimes in a misguided attempt to encourage rookie producers who don’t (yet) have the money to buy the gear they desire.
I’ve seen contortedly hamfisted apologetics try to explain this one away to no one’s benefit, but it can’t be done.
Here’s a suggestion: instead of being an “or” person when it comes to such a situation, why not be an “and” person? Meaning:
It does matter what you use and it matters how you use it.
To dismiss (or dilute, even with generous figurative-ness) what you use also impugns the why, and is grossly disrespectful.
In electronic music — especially — the relationship between musicians and toolmakers is vital to the progress and evolution of the field. What we use includes synthesizers/samplers, DJ decks, personal computers, and a realm of other mechanical devices.
There are very good reasons why you may prefer one tool over another. Perhaps it has fewer annoyances (they all do, even the best of ‘em), the interface is a lot more robust, or it simply makes you happier. I know that’s true of me. Even when I’m making music that doesn’t sound happy, there is an implicit delight that manifests when I’ve begun to layer one element atop another, and it makes me increasingly excited to hear the crest of the drums and bass begin to coalesce with the pitch-swooping solo that I’ve just thrown down.
And none of this is possible without the what: all the hardware and software that amazes me to no end. Why, we live in an age of miracles — in 2016, I realize how many of my longtime process dreams have come true. SSDs are becoming increasingly affordable (speeding up access to my projects), I can almost instantly travel through time and recall many of my fave synth sounds throughout history — like a living museum! — and although I have gripes about things like Ableton Live’s freeze speed and myriad other nitty-gritties… I recognize how far we have come, thanks to this continued dialog between the musicians and toolmakers… and even those who are both.
I recall the late, great Dr. Bob Moog’s words on various artists who honored his creations, and took them to places he didn’t expect. From Wendy Carlos to ELP to Jan Hammer, as this interview reminisces:
“Both the players and instrument designers have to learn something about really getting control of a lead synthesiser. To me, it’s a big difference between just playing a keyboard and playing it with pitch-bending and vibrato so that it’s expressive. Playing the keyboard is OK but there are very few people who can do something like Jan Hammer does.”
Still relevant words decades later, where tools that are designed a certain way introduce biases that may discourage a musician from being wildly playful. The counterpoint of this is that limitations can remove choices and thus impose resourcefulness.
On a specific note, I believe strongly in pitch-bending, vibrato, and the entire pantheon of expressive nuances. There are few instruments that continue to push those next levels, among them the ROLI Seaboard models — and the new ROLI Blocks which makes some of my fave sounds even more accessible on-the-go! (Earnest disclosure: I’m affiliated with them by way of initially being a customer and fan, then being sponsored.)
ROLI has taken “multidimensional polyphonic expression” to new heights pointing at the mass market, and while it’s still viewed as a newfangled novelty (to parallel the VR trend), there is substantial value in these forms: being able to wiggle your finger directly on the keywave surface and imbue the music with some of your direct energy is light years ahead of needing a separate hand to grab the pitch wheel, and light years even further from the piano (no pitch-bend whatsoever).
The Seaboard is one of my whats specifically because it fulfills a how I want to do something, and to credit Simon Sinek, I start with why I want to make music like this in the first place — more on that is a story for another time. 😀
Writing about music really is like dancing about architecture, and Frank Zappa’s observation remains lucidly correct. I continue to wrestle with the “right” words to describe what I do: sometimes I choose those terms because of a pleasant alliteration, or because I see them as underappreciated in the broader world.
So then: why “sonic science fiction”?
Since I was a tiny tot, I’ve been enamored with visions of the future and possibilities of “What if?” The spectrum has ranged from the counterfeit worlds and dystopian cyberpunk of Philip K. Dick, to Ray Bradbury’s wildly passionate tales that speak from the heart, to the succinct and consistent gems that Ted Chiang crafts… each and every one a priceless, life-impacting memory. And not just literary — I’ve been fixated on the visual component too, be it Syd Mead’s thoughtful worldbuilding or Jacek Yerka’s Mind Fields (his supreme collab with Harlan Ellison). Plus, “genre” anthologies like The Twilight Zone and The Outer Limits, and who can forget The Mind’s Eye computer-animated voyages?
Alongside all of this, there is music associated with science fiction, as in film soundtracks, as in concept albums inspired by those unspooled yarns. My own (present) take on the matter is based on my love of the brief form, the short story. The intent to build a self-contained world, let the characters get loose, and get out. Hence, another related phrase I’m fond of: “sound short story”. The rules are simple:
- Come up with a specific idea.
- Explore it in two minutes.
Why that length? It came about a while back due to the stipulations of a Samplephonics “create a genre” contest I entered. (You can hear my results here.) Around the same time, I was reading about brief yet great songs, and it helped propel me to complete tracks in rapid, startup-style succession. In continuing hindsight, it’s been a creative limitation that has bred productivity — I now have an abundance of 120-second critters roaming around, and so many more on the to-dream-to-do list! I also find that with an economy of time, I make stronger decisions about sequencing/structure. At times, it’s akin to warping onto a picturesque planet where the air is toxic and your containment suit has a ruptured seal — love the sights, can’t stay for dinner.
Of course, I can have my proverbial watermelon and eat it too: I continue to create longer songs (that are also based on specific ideas, such as an alternate history of progressive house). But for the bulk of it, my sonic science fiction is very much about creating the kind of music I want to birth and thrive, ziplining across imaginary nodes that neither of us have heard before. This is why I liken it to finding treasures (aurelics, or “aural relics”) in alternate realities, parallel timelines, other universes, etc.
Like its sci-fi ancestors, my music serves as allegory for how things might have developed divergently. One track might represent a world where accordions are taken more seriously; another is an étude involving classic drum machines; yet another might be a reasonable excuse to learn a virtual effect I’ve just bought. (I also believe creatives and toolmakers should collaborate closely to improve the overall environment of self-expression, but that’s a topic to expand on another time…)
And now you know. All that remains is for you to dance about architecture with me: I invite you to put on a quality pair of headphones with rich bass (as I make quite abundant use of it), get into a comfy position, and let’s travel together into these other spaces.