The Modular Atmospherics of Robert Aiki Aubrey Lowe | Interview

Robert Aiki Aubrey Lowe. Photo by Desdemona Dallas. Used with permission from the artist.

Robert Aiki Aubrey Lowe's creative odyssey with modular synthesis should inspire any musician striving to reach the top of their game. His capacity to merge his voice with a variety of electronic devices has carried him beyond the usual synth festivals and into spaces such as art and science museums, where, through synthesis, he challenges conventional perceptions of what is possible with sound.

Robert Aiki Aubrey Lowe at a modular synth.
Photo by Desdemona Dallas. Used with permission.

Lowe—who has made many records under the moniker Lichens—has also composed music for a number of film projects, but his most well-known work to date is the original soundtrack for Nia DaCosta's 2021 horror film, Candyman. This exceptional work landed Lowe on the Academy Awards shortlist for Best Original Score.

The aim, according to Lowe, was to create an aural terrain that complemented the film, while preserving the narrative's cinematic history and story arc. To that end, he was on set during the filming, at the site of Chicago's Cabrini-Green housing projects, where the film takes place.

We got the chance to speak with Lowe, and in this interview, he goes deep into his approach to composing the well-received soundtrack, as well as his creative methods for making an immersive soundscape. We also discuss how modular synthesis helped him develop a distinct voice, and why it's so crucial for Lowe, a Black American, to raise awareness of other Black composers, both of yesteryear and of the modern era.

Preorder the 2xLP vinyl release of the Candyman OST now.

Where did your musical journey begin?

So I started actively playing music when I was about 14 years old. A lot of that came from my love of all types of music as a kid. But when I really started getting deeply into music and really investigating what I could do potentially to have some sort of a creative voice in that world, it was a lot of punk records and thrash records that I was listening to. Bad Brains was huge for me, Black Flag was gigantic as well. The Minutemen, Lungfish, actually—like, a few years in, I heard Lungfish for the first time, and that was something that really stood out to me, because it was so beyond. All of the groups that I mentioned, I really was tethered to those types of groups due to the fact that they weren't trying to play by any sort of a rule.

I think that whenever you get into this concept of genre, you will have signifiers that go along with it, that sort of fit inside of this classification. I think the way that all of those groups were doing things was sort of outside of what I'd seen other groups doing, sticking to a path. So I just really enjoyed the idea of being able to explore and experience things in as many different ways as possible, and to really try to find my own voice inside of that: not to be derivative, not to lean too heavily into influences of other things. Obviously there will be reference points inside of everyone's creative work, because that's how it is. It's not possible to step away from that, but I wanted to avoid this idea of being derivative or doing something that sounds like something else. It was about sounding like myself.

So that's really where it got started. I think very quickly after I started to make music, within the first couple years, I really became enthralled and very interested in 20th century music, more extreme music or harsh noise. Those were also things that sort of pushed me along to find different pathways. I was a kid that was very curious. So I would often go to record stores, and I would shuffle through the bins and find things that looked compelling to me, things that looked very interesting, having no concept of what it actually was that that thing sounded like. So it was about exploring that and really taking a risk and listening to something, and being open to figuring out what exactly that thing was.

Well, with searching for your own voice, is that what compelled you to get into modular synthesis?

Yeah, it is. So the first time I heard music on a modular synthesizer would have been Switched-On Bach. That was the first thing that I heard. Then in pretty short order after that, I heard Morton Subotnick. I don't even think it was Silver Apples of the Moon. I think The Wild Bull was the first Subotnick record that I heard. So, that was the entry point. Then from there, I started to discover things that were all generally mid-20th century, maybe a little later. There wasn't a lot of contemporary music I was hearing at the time that was using modular synthesizers. It was a lot of stuff that labels like Nonesuch would put out in their new music series, or Deutsche Grammophon. Then eventually I started to hear records that were done in the '80s and then on up. So it took some time until I really started to hear contemporary modular synthesizer music.

Your music sounds to me like you were more curious about the West Coast style—using complex oscillators, using low pass gates. Is that what attracted you more, rather than the East Coast style?

I definitely was more interested in additive synthesis. I was more interested in the pathways. Even though it's a modular system, the mindset of East Coast [synthesis] seemed a little more fixed and a little more reliant on a black-and-white key controller. The West Coast systems didn't have that. I was definitely more interested in the multitimbral aspects of the things you could get out of Buchlas and Serges—the pluckiness or almost acoustic nature or natural sounds that you could get out of those sorts of synthesizers—and also, there was a really raw electricity that you could get specifically out of the Serge. There was something about the presence of those particular synthesizers that I thought was really incredible.

Lowe's June 2014 performance at the Exploratorium, featuring the MIDI Sprout

You were one of the first musicians that I've seen incorporate living plants into your performance. How did that idea come about?

I was commissioned by the Exploratorium, which is a science museum in San Francisco, for work in their residency series that would also utilize their Meyer Constellation sound system. Being that it was a science museum, I thought it would be interesting to investigate how non-human energy would be able to interact with an electronic system. So I was loaned a prototype of the MIDI Sprout, which is a little box that converts biodata into MIDI signals. So it wasn't the modular at all. It was just this little box that was converting biodata to MIDI. Then I would have to take that into a MIDI interface and then convert that into voltage.

So what I decided to do was to use two plants to ultimately determine how the sound moved in the room. The Meyer system in their theater is a 100-speaker array. Unfortunately, I didn't have time to really program the array. What I did instead was: I broke the array into quadrants. I was able to send signals to each of these quadrants, and it was moving around on a vector plane. I wasn't determining how the sound was moving in the room—the plants were doing that.

So what I would do is: I would take the biodata from the plant, send it into the MIDI Sprout, send the MIDI Sprout into... I think at the time, I had a [Mutable Instruments] Yarns, which broke that into four CV and gate outputs, and then sent the voltages to modulation sources that would move on an XY axis. Then [I was] using attenuverters to adjust the amount of voltage, because I didn't really have control of how much or how little was coming right out of the gate. I used attenuverters so I could actually fine-tune that a little bit. In that video, it's hard to discern what's happening, because you really had to be there in real time to understand how the sound was moving.
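The chain Lowe describes—biodata arriving as MIDI control values, converted to voltages, then tamed by attenuverters before modulating position—can be sketched in a few lines. This is a hypothetical illustration, not the MIDI Sprout's or Yarns' actual firmware; the function names and the 0–5 V range are assumptions for the sake of the sketch.

```python
def cc_to_cv(cc_value, cv_range=5.0):
    """Map a 7-bit MIDI CC value (0-127) onto a 0..cv_range volt control voltage."""
    return (cc_value / 127.0) * cv_range

def attenuvert(cv, amount):
    """Scale and optionally invert a CV, like a hardware attenuverter.

    amount runs from -1.0 (full level, inverted) through 0.0 (off)
    to 1.0 (full level, unchanged).
    """
    return cv * amount

# Raw biodata-derived CC values arrive "too hot" to patch directly into
# the XY modulation sources, so each one is attenuated before use.
raw_ccs = [127, 96, 12, 64]
xy_mod = [attenuvert(cc_to_cv(cc), 0.4) for cc in raw_ccs]
```

With the attenuverter at 0.4, a full-scale CC of 127 becomes a gentle 2 V of modulation instead of the full 5 V, which is the kind of fine-tuning Lowe mentions.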

I had been workshopping this idea. I had an artist residency at EMPAC in Troy, New York, which is a performing arts institution that does incredible residencies and really has state-of-the-art equipment. I was an artist in residence there for about five weeks. I started working through these ideas about moving sound, basically creating clouds of sound in which the motion or the direction of the sound was indeterminate. So if you have four speakers, sometimes it'll go in a circle; sometimes it pops here and then here. I think anyone would generally, subconsciously, start to detect how things were moving. I wanted to be able to randomize that.

So when I started to move the sound in random ways in this quadraphonic system, it wasn't whirring around the room. It was sort of here, and then it would stop, and then it would pop up here. After a while, you stop trying to anticipate where the sound was coming from and really settle into it. So for that whole year, every performance I did was workshopping this idea further and further. Basically I wanted to be able to create a cloud of sound in which the directionality of the sound was inconsequential, and it would settle you into a hypnagogic or a dream-like state and just let you exist in the sound.
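That indeterminate placement can be modeled as a toy: each sound event is dropped into one randomly chosen quadrant of a four-speaker system rather than swept between them. The quadrant names and the one-speaker-at-full-level gain scheme are illustrative assumptions, not the Constellation system's actual interface.

```python
import random

QUADRANTS = ["front-left", "front-right", "rear-left", "rear-right"]

def place_events(n_events, rng=None):
    """Assign each sound event to one random quadrant.

    Returns a list of gain dictionaries, one per event, with full level
    in the chosen quadrant and silence everywhere else, so the sound
    "pops up" in a new place rather than whirring around the room.
    """
    rng = rng or random.Random()
    placements = []
    for _ in range(n_events):
        chosen = rng.choice(QUADRANTS)
        placements.append({q: (1.0 if q == chosen else 0.0) for q in QUADRANTS})
    return placements
```

Because the choice is independent on every event, listeners can't build up an expectation of the sound's trajectory, which is the effect Lowe says settles an audience into the cloud of sound.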

That general immersion was a true testament to your work, especially now with the official soundtrack to Candyman. Why was it important for you to be on set and be in what's left of Cabrini-Green in the making of the soundtrack?

The main reason for me to be on set was to capture field recordings, especially throughout Cabrini-Green and what currently exists there. I wanted to be able to capture the energy of the space. [Ed: The site of the Cabrini-Green housing projects has undergone extensive changes in recent years, with former residents being cleared out to make way for high-end housing and retail complexes.] I wanted to be able to record the locations naturally and inject those recordings with that energy, or those ghosts—it was basically like capturing apparitions—inject that elementally into the score in certain places so it would carry that energy of the actual physical space. It was also important for me to be on set to see how Nia worked as a director. She and I had had multiple conversations. We had worked together or talked together in person, but I wanted to actually see her work.

Lowe in his home studio.
Photo by Desdemona Dallas. Used with permission.

I thought that being able to see her working in realtime, how she was directing, how she was moving the actors around to get the take, the decisions that she was making on the vantage points—all of these things were really important to me to know how she was working.

I thought that we would be able to work much better together due to the fact that, inherently, I would have this knowledge of how she moved in that space, and I could accompany her or accent what she was doing with sonic elements in that sonic landscape.

What were you using in your field recording setup?

At the time, I had a Zoom H6. I was using the onboard microphones that come with it, as well as a pair of these LOM microphones, from a small company based, I believe, in Bratislava. [They] make these really tiny electret microphones that are incredible for micro-sound. I also had a pair of Line Audio CM3 small-diaphragm condenser mics. I was using all of those with the Zoom.

What types of things were you recording on set?

I was recording the insects outside, just the wind blowing through the row houses. I was recording inside of some of the abandoned buildings, just basically recording the silence or any air that would move through. There were old electrical boxes on the outside of some of the buildings that I would put the microphone inside and record the wind hitting them, and also record the movement of the creaky doors for the electrical boxes that were being moved by the wind, or I would agitate them myself.

There are scenes that take place in a laundromat in the film. So I was also doing recordings inside of the laundromat in between takes, where I was recording the washers and the dryers just running. All sorts of stuff, like cars driving by, helicopters going overhead, and then manipulating all of these sounds and breaking them down in a way, granularizing them so they are not necessarily the natural sounds anymore.

Some of the insect sounds remained fairly pure, but some of the other sounds, I wanted to make them into a different texture that would be unrecognizable as the actual thing that it was—because it was less important for me to have the actuality of those recordings, and more to use those recordings in a way that they were processed, so I could create other landscapes or enhance compositions that I was working on.

What are you using in the granular process?

It was all the Make Noise Morphagene. It's such an incredible module. It's such an incredible sampler, and there's so many variables involved with it. You can get such rich, incredible sounds out of it. I also use that for some of the string sounds. Like, these sort of buried, distant, orchestral string sounds that I was using in the film were not actually strings—they were the Make Noise Mysteron that I would record into the Morphagene, and then adjust the Gene size and slide the start time to create these ocean-like shifts of these big-sounding orchestral movements, but they were very, very far off in the distance.
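The two controls Lowe mentions—gene size and the sliding start point—can be illustrated with a bare-bones granular sketch: cut a sample buffer into fixed-size grains starting from a movable splice point. This is a loose analogy for how those Morphagene parameters behave, not the module's actual DSP; the function below is a hypothetical illustration.

```python
def granularize(buffer, gene_size, start, n_grains):
    """Slice a sample buffer into consecutive grains ("genes").

    gene_size is the grain length in samples and start is the slide
    position; grains wrap around the buffer like a looping splice.
    """
    grains = []
    for g in range(n_grains):
        offset = (start + g * gene_size) % len(buffer)
        grains.append([buffer[(offset + i) % len(buffer)] for i in range(gene_size)])
    return grains

# Sliding the start point shifts every grain at once, a rough stand-in for
# how slowly moving the slide control produces "ocean-like shifts" in the
# recorded source material.
samples = list(range(10))  # stand-in for a short recorded audio buffer
grains = granularize(samples, 4, 8, 3)
```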

What's more obvious throughout the score is your use of other instruments, especially with the reprise of the main theme "Music Box." Could you talk about other instruments used in production?

I used a Fender Rhodes and a Hohner Pianet T, so, two different electric pianos to get different sounds. The "Music Box" theme, which was me sort of vamping on or reimagining that Philip Glass composition, was all done with the Pianet, because I wanted to give it the sort of energy that you might have heard from the original score, but have it a little more twisted and spare. But it almost sounds like a celesta, or a glockenspiel could almost get to that point. Actually, for the live performance of the score, which I just did a few weeks ago—I premiered the score in Poland and had to transcribe the entire score for an ensemble—I had the orchestral percussionist play that. Instead of playing it on an electric piano, he played it on a glockenspiel, and it worked very, very well.

Could you just talk a little bit more about that transcription process? How many pieces were in the band that played at the Poland show?

So there were, I think, a total of 13 on stage: two cellos, concert bass, violin, viola. So five strings. Bassoon and contrabassoon, orchestral percussion, a four-piece choir, and electronics.

The conductor was this fellow who's a collaborator of mine, who is a composer and a cellist in his own right, Brent Arnold, because I can't do that. That's one thing that is completely out of my wheelhouse. I can't even get close to it. But he also did the transcription for me. Transcribing it for an ensemble was interesting due to the fact that, initially, I didn't notate anything. So any of the performances that are on the record by other players, like from Hildur Guðnadóttir or Matthew Morandi—I was directing them through. I was giving them things to listen to, and then I was directing them, or I would play out a motif on a piano and have them play that, and then direct them, and have them play through multiple times, and then take those recordings of those performances. Then I would do the arrangements after the fact, so nothing was notated at all.

In order to have the ensemble not tuning after every single composition, there were a few times where concessions had to be made or things had to be re-tuned in the playback—the electronic playback that was going to be used for the performance—so that the ensemble could stay in concert pitch, and they wouldn't have to continue to tune. But a lot of the time what I was doing was a note might sound like a G that was a little sharp or a little flat, but in actuality, what you're hearing is two different tones or two different notes that are stacked on top of each other, sometimes multiple notes that are stacked on top of each other to get this really multitimbral and multiphonic sound. A lot of the things that I was doing with my voice, with all of the vocal and choral recordings, was doing a similar thing.

So it was actually most difficult to direct the choir. Or I wouldn't say difficult, but I had to spend the most time with the choir, due to the fact that it's very much outside of what most would call traditional concert music. But it had to do with the fact that there are certain ways that things can be written or notated. So if the shape is a letter, like a vowel, it's almost that vowel, but it's not quite. So I would have to say, "Okay, it says an Ohhh, but it's more like an Ohhhhwaaaoohhh." That sort of thing.

But it was very easy. All of the players were super professional and at the top of their game. It was a real pleasure to work with all of them and to actually see that I could do it, because in my mind, the score was never meant to be performed live one-to-one. It's impossible to do. There's no way you could do it. I wanted to keep it that way, because I wanted that score to be specifically that thing that lived in that world and existed as a character in that landscape.

But I do like the idea of re-investigating works or pieces and giving them a different life. It's sort of this concept of—even though it's a documented, fixed work—I still consider it a work in progress, because if I want to reinvestigate that thing, it's not going to be what you hear as a recording; it's going to be something different. It will carry the same energy and same tonalities and same arcs, peaks, and valleys—all that—but it's going to be slightly augmented. That's more interesting to me than trying to figure out how to recreate something one-to-one. But the funny thing was, when the transcription was done, the score was 200 pages long. It's a lot of music.

Robert Aiki Aubrey Lowe - "Spell Casting"

That's interesting that you spent a lot of time with the choir, because I know in your work you use your voice a lot. It's like you kind of place your voice within drones to where you can't even really distinguish what's your voice and what's the synth.

I would say that I like to play with illusion. I like to play around. One of the reasons that I really landed on utilizing modular synthesizers a long time ago was the fact that I had been deeply involved in investigating my voice as an instrument. I wanted to have a companion to my voice that had the same flexibility and organic nature and variables attached to it that the human voice has. I think that the modular synthesizer is best suited for that. So it's more interesting to me. Even in live performance, I've had people talk to me afterwards and say, "I was watching what you were doing, and I still couldn't necessarily discern what was your voice and what was the synthesizer, even though I was watching you singing."

There's certain moments where I got lost and I couldn't understand—I couldn't translate what that thing was. That's something that's really interesting to me. I like to be able to play with that, because at the end of the day, it's not really that important how you got there. It's most important that the process and the work is shown through, and it's compelled someone to pay attention.

Speaking of illusions, the film Candyman deals with that idea. In fact you wrote in the liner notes for the soundtrack that you wanted to enter this zone of interacting with illusion and sort of represent everything that the movie is about. But how does that translate musically?

I wanted to be able to utilize a bunch of different instruments. The score is an electro-acoustic score. That's important to me, because I like the natural and physical aspects of acoustic instruments as much as I like the electronic and conversational movements of a modular synthesizer. I like to be able to trick the ear and have it not understand. I basically wanted to turn electronics into acoustic sounds and acoustic sounds into electronic sounds. So it was moving back and forth, where you're hearing a cello, but it doesn't read to you as a cello. Then you're hearing electronics, but it doesn't read to you as such. So it was about moving back and forth.

A lot of that has to do with: any sort of physical modeling I do with modular synthesizers is generally very spare. I use very few modules, and I don't like to necessarily use modules that… make it easy. Like I said, I used the Mysteron quite a lot, because I think it really does create incredible string-like, bowed, plucked string and piano soundboard sounds. But then it can have another life all its own. I love Karplus-Strong and I love those circuits. So that was one element. The Odessa from Xaoc Devices was another module that I used. It's an incredible spectral oscillator—the sound is bananas from that thing. It is such an incredible instrument on its own, with so many ways it can move. I used that a lot for more percussive sounds and stranger strings that would happen.

A lot of the elements in the score are actually delay feedback that I've then manipulated and moved into this zone where I can make these melodic elements happen just with delay feedback, like delay feeding back on itself. So not using any sort of original sound source and just playing with feedback. Those, and obviously low pass gates, the Optomix and the LXD, were huge in creating these natural sounds.

Then I was using acoustic instruments, like cello, contrabass, and then processing those things. Sometimes I would use plugins in Pro Tools. Sometimes I would process them through the synthesizer—the human voice I was processing through the synthesizer, [and] through the Morphagene. I also used the Make Noise Erbe-Verb a lot.

Seems as if, even in the process of creating an official soundtrack for a feature film, you're still constantly tinkering with your system. It seems like you let experimentation guide your approach. But is that accurate?

No, I think I definitely have a defined approach. I think there's always a consideration and always an intention. I utilize things in very specific ways. I will play around with them and see what I can eke out of them, but I know going into it what I want to use. Or even in the process of doing it, I have in my mind the idea and the intention. I could say, "Oh, I'm missing this thing, so I'm going to grab this and put this into the equation."

Another one I used, which I use all the time, is the Omiindustriies Dual Digital Shift Register—having a shift register in which it's spitting out pseudo-random gates. I wanted to have as much of a human element and as little of a machine element as possible. So any of the percussive movement that was happening was happening through the pseudo-random gates that I was using, with the TEMPI from Make Noise and the ALM/Busy Circuits Pamela's New Workout to move the clocking information around in the dual digital shift register. I would have these movements that weren't always repeating in the same way, moving in a way that seemed a little less mechanical and a little more human.
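The idea of a shift register spitting out pseudo-random gates can be sketched generically: bits march through a register on each clock tick, and a feedback bit (here the XOR of the first and last stages) keeps the pattern from repeating trivially. This is a textbook linear-feedback arrangement offered purely for illustration, not the circuit of the actual module.

```python
def shift_register_gates(seed, steps, length=8):
    """Clock an n-bit shift register and emit its last bit as a gate.

    The feedback bit pushed in on each clock is the XOR of the first and
    last stages, so the stream is deterministic but hard to anticipate,
    i.e. pseudo-random rather than truly random.
    """
    bits = [(seed >> i) & 1 for i in range(length)]
    gates = []
    for _ in range(steps):
        gates.append(bits[-1])         # gate output on this clock tick
        feedback = bits[0] ^ bits[-1]  # pseudo-random feedback bit
        bits = [feedback] + bits[:-1]  # shift everything one stage along
    return gates
```

Clocking such a register irregularly, as Lowe describes doing with TEMPI and Pamela's New Workout, then turns this bit stream into rhythms that drift rather than loop.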

But it seems like you do still use some generative aspects to where you're defining where you want things to go, but you still have machines to sort of randomly take you to that destination. Is that so?

Right. I think you can still have a defined approach and utilize random or pseudo-random information. I think that having an aleatoric process… Like, I use chance a lot, but the way in which I utilize chance, I'm defining certain parameters with certain instruments that have very specific functions, and then incorporating this mode of chance—this aleatoric mode—it's giving me the opportunity to witness things that I may not have considered. I want to be able to remain as open as possible, because the intention will always remain, but the pathway to that particular destination may change.

It's not necessarily upon myself to course-correct it. I am more interested in the fluidity of that moment and that motion, and I think it's really important to be able to relinquish control at a certain point. You are continually having a dialogue with this instrument. It is an extension of you, but it could take you literally down a path that you had not thought about. So being able to give into the possibility of new considerations, I think, is something that will only expand your vernacular and advance your process.

Lowe at his home studio.
Photo by Desdemona Dallas. Used with permission.

Could you speak to what it's been like being one of the few Black people in this realm of film composing and how important it is to you to start expanding these conversations?

It's very important. I was always into film when I was a kid. I was into film more than I was into music at a certain point, but the sound in film is always important. Even the negation of sound or exploring silences—all of these things are important. It all helps to tell a story. It all helps to create narratives. So early on, the composers that I would really lean into and perk up at—it was a lot of white male composers. But then when I was a teenager, I started to discover more African American composers that were working in film, whether it be Bill Lee or Terence Blanchard, or more recently Michael Abels, who's worked with Jordan Peele on Get Out and Us. Sam Waymon, who is Nina Simone's brother, did the scores for all the Bill Gunn films, like Ganja & Hess. It's really lovely to see that and really lovely to be able to engage, because those are actually scores that I'm relating to on a level that I wouldn't necessarily with some of the other composers that I would hear.

It is very important, and it is very important to be able to investigate sound outside of what has traditionally been handed down as film music, because I think all of those composers definitely step outside of that. I think one composer who was always very good about it, who's not a Black composer, was Morricone. Ennio Morricone was an incredible film composer, but he also came from a very avant-garde background, being part of Il Gruppo in the '60s and producing a lot of, I think, fairly difficult music for a larger audience. But for me, it's important to be able to represent the African American avant-garde, because it is very much a thing. It is very much there, and it is ultra-important and doesn't get talked about as it should.

Take Shadowgraph by George Lewis, from 1977—these are avant-garde records that should be canon, but people [only] talk about them in the context of jazz. They should be talked about as much as [Krzysztof] Penderecki or [George] Crumb or [John] Cage or any of these other composers that were doing things around the same general time, like Charles Dodge.

Also, a composer that no one ever seems to talk about is Olly Wilson, Olly Woodrow Wilson. Incredible composer, was teaching at Oberlin. He established TIMARA [Technology in Music and Related Arts], which was the first electronic conservatory music program in any university in the US. There was obviously Columbia, Princeton—but it was the first program that was implemented into the conservatory. That had never happened before. There was always this delineation between electronics and avant-garde composition and conservatory music. He was the first one to put them together in 1968 at Oberlin.

His work also was investigating the diaspora. He was doing these classical compositions while really, truly thinking about traditional African music or folk music from different regions in Africa. The electronic compositions he did were incredible. The electro-acoustic works that he would do for electronics and clarinet—nobody ever talks about this stuff—and what he did was so awe-inspiring.

Would you care to share any general patching advice, whether it be something that helped you break through creatively when you first got into modular or something that's new and fresh on your mind currently?

Well, I would say when I first started using modular synthesizers, I started very slowly, and I would get one module. I would play around with it. I would figure out how it worked. I would play around with it without looking at the manual and see what I could get out of it. Then I would read the manual and say, "OK, well, that's why this was happening or wasn't happening, because this does this and this does this." I think it's really important to have a couple of different modules that you can play with, and have them play together so you can understand how they move. I think it's always best to start slowly, one module at a time, because if you start out with a whole system, you're going to get totally lost, and you're going to miss a lot of the nuances that some of these modules have the ability to provide. So that's one thing.

Then the other thing is when I had... I think I had about five modules total, not nearly a complete system, but I was able to figure out how I could perform with this very small amount of modules. I was like, OK, so now I want to figure out how I can use these things in a way that will keep it interesting if I do a performance. I started doing live performances very early on. I understood signal path, and I understood a lot of electronics and synthesis already, but the amount of variables that were possible, the tonalities that were possible were very different. So for me, it was more interesting to start workshopping ideas and improvising in real time in front of an audience and throw myself into the lion's mouth, and then just learn on the fly. That's what I did very early on. I think that that was very beneficial for me.
