Modal Electronics Backstage: Tim Wright, Part One
In this two-part installment of Modal Electronics Backstage, we catch up with sound designer and composer Tim Wright / CoLD SToRAGE. From his origins in the world of video games through to his thoughts on how changes in technology have reshaped his creative process, we gain exclusive insight into one of the most influential video game composers of all time.
Hi Tim! Can you tell us how you got into music and sound for games?
My initial exposure to music was watching “Top of the Pops” back in the early 70s. Nobody in the family was particularly musical, except my aunt who played piano and guitar. So of course, when I’d visit her, I would always pick her guitar up. I would wonder why those six strings were tuned to those specific notes when they don’t really sound that great together. You have to prod your fingers on the fretboard to get something that’s more interesting. I was noodling around with that and then singing my own pop songs, though with nonsense lyrics. At the age of 3 it just sounded like something vaguely “Eurovision-esque” from a country where you don’t know the language.
I had the desire to create music from an early age, and I did eventually have piano lessons. It was a case of my father offering to buy me a piano, but only if I took said lessons, as I was already playing my grandmother’s piano a lot.
After a couple of years, I thought it might be worth accepting his offer. So, I did just that. I started taking lessons and in about a year, I had started composing my own music. I would go in and say to my piano teacher ‘before we begin can I play you something I’ve written?’ He was a fairly staunch and authoritarian teacher. I’d play this thing I’d written and he’d totally ignore it and then just get to the piece that I’d need to play for him. So, piano was my first real instrument, the place where I started to create anything that could (maybe) be reproduced by somebody else.
Then my father bought me a student guitar that I’ve since given to my son. It still has a free Nelson Mandela sticker on it. That’s how old it is! I wrote 20 or 30 pop songs using this guitar. The best moments were when my mother would say we’re going to decorate your bedroom. I wasn’t excited to get different coloured walls or wallpaper. I was excited the room was going to be empty, because that meant I could sit on a ladder in the middle of my empty bedroom, with the guitar on my knee and get great reverb. So, that’s where my composing began with a couple of instruments and some singing.
Did you study music or sound in college or was it something that you picked up by yourself?
The latter mostly. I would listen to things like Axel F – that hit from the 80s. I sat at the piano and worked out both parts by ear.
As for formal training, I had piano lessons between the ages of about 7 and 15, and got up to Grade 5. I detested musical notation. I found it frustrating and I didn’t like sight reading. I was much more of a hands-on, play-it-by-ear musician. A lot of people asked how I was going to get my ideas across without notation. Well, I was massively into computers when I was a kid and I suspected they’d allow me to do away with standard notation.
Are we talking about Commodore 64 or pre that?
The first computer I actually got my hands on was a Commodore Pet.
In the back of my mind I was thinking, ‘I’m going to be using synthesizers not pianos. I want to be Howard Jones, Nik Kershaw or any of these kinds of electronic pop stars’. Now I think back, they were minimalist geniuses.
Back in the day, I just wanted to be on “Top of the Pops” with a couple of synthesizers. My thought was to have an Atari ST driving the MIDI playback, a Roland TR-707 drum machine, a Yamaha DX7 keyboard and maybe a JX-3P. I had a piece of paper with all this kit sketched out and connected together.
But it never happened of course. I was just some young kid, sat in my bedroom, with very little money. Then I started writing music on a Commodore VIC-20 and then eventually a Commodore 64. But my life took a massive turn when I bought my Amiga 500. That was the pivotal point. Up until then, none of the music I’d composed had made it beyond my bedroom.
How did you begin to get into creating sound effects and become featured in games?
I wrote my first game on the Commodore 64 when I was 15. Actually, no… wait… I wrote a game on the VIC-20 before then, but just for fun. But the Commodore 64 game, which I called “Spider Chase”, was good enough that I sold it to various school friends in the playground. I would record it onto a cassette, and hand-draw each and every cassette insert so each one was individual. It was painstaking, but it had to look nice and be coloured in – crazy now I think about it.
There wasn’t any music in that game, but I’d created some zap sounds and explosion sounds. I had to get my head around how the sound chip was going to produce those noises and at the right time. I didn’t really bother much with computer sound chips after that, until I’d completed an Electronics and Communications degree in London. That involved a ton of radio stuff and building a little 6502 based micro-computer. I was getting into the whole electronic side of things again.
When I finally had to go out into the big wide world to earn a crust, I went to work for a company called Littlewoods, based in Liverpool. I worked in their computer programming department largely for Index – a big rival to the Argos Catalogue Shop at the time.
I’d been working there maybe a year or two before Commodore brought out the Amiga 500. Aside from the amazing graphics, it had four channels of 8-bit sampled sound. That was amazing. It was basically a sawn-off Emulator or a Synclavier, but for £500! I bought one as soon as I could afford it.
The next piece of the puzzle was a friend of mine who was into the Commodore 64 demo-scene. That’s basically where you create a piece of living artwork – with moving visuals and great sound. But it’s not a game. It’s a piece of animated art. And over the top of this, you’ll have a scrolling message saying “hi” to everybody in the demo scene, and text telling you who created it, who did the graphics, who did the programming and music. Or it might be more cinematic with scrolling credits like you would have in a film.
I wanted a slice of that action, but I didn’t have the skills back in my teenage years. Anyhow, he had access to all these amazing software tools that were used to make these demos – art packages, music sequencers and so on.
So when the Amiga 500 came out, with much better graphics and incredible sound, he gave me a disc with Soundtracker, which was literally a full-on music sequencer. It came with two discs of samples from various synthesisers – I think the DX-100 was used a lot – plus various drum sounds, and a few bits of vocals and other stuff. So, I used these at first and then I bought a little add-on that went into the back of the Amiga so I could record my own sounds – a hardware sampler. Then the sky was the limit!
When did you begin to have your music featured in published games?
After many years of playing games, I’d really gotten into the music more than anything. Some of the music was really good, and I thought I was probably just as good as these guys, so I should be able to get some work too.
I wrote a letter and included a floppy disc with my best tunes and sent it out to various software houses. A couple of them replied that they had all the music they needed for now, but it sounded good. One company was very keen and asked for a bit more to gauge my ability.
But before anything in this kind of “shotgun approach” could bear fruit, I met a couple of guys who were in the demo scene, and I composed the music and sound effects for their latest demo, “Puggs in Space” which eventually gestated into a game for Amiga and Megadrive.
I love it, and you’ll have to claw it out of my cold, dead hands if you want it back!
Tim Wright about COBALT8
So yeah, the Amiga 500 really was my chance to light the blue touch paper. Everything came together, my desire to create music, my love of computer games, my love of computers, the demo scene.
So, we showed the Puggsy demo to Psygnosis – a game publishing house. The CEO loved it, and wanted to create a game – and that was the main thrust, the main deal. But he also took me to one side to talk about the music and sound effects, and said he had even more work for me. I was like, “Wow!”
Shadow of the Beast II was the first game I was asked to do the music for, which is insane. Apart from the fact I was cheap because I was starting out, I kept asking why choose me? It’s a risk, right? You’ve got all these existing and proven musicians to choose from, and they wanted me to do this?
Wow. I’m curious about your workflow back then. Being able to do things like plug a sampler into the back of the Amiga sounds like a world opening up, but what was it like for your creative process, where did you draw inspiration from?
I was quite fortunate in that my brothers were all into computers. So, if we rewind just a little, when we got the Commodore 64, we all played the games. And then I started writing games. I began by creating a little vertical scrolling shoot-em-up. I was coding it all in BASIC and it was awfully slow. It needed to be faster, so I taught myself 6502 Assembly Language, which meant that everything was now super-fast. I got the screen scroll, a little ship moving, and I got to a point where I wasn’t quite sure how to proceed. My younger brother Lee thought all this assembly language was fascinating, so he got himself a little cartridge you can plug into the Commodore 64, which makes programming so much easier. He optimized my game and made it a lot quicker.
That’s when I realised he was gonna be the clever programmer guy of the family. So instead, I started doing the graphics for him… creating the little ships and the rockets. And then I went away for a bit. I came back and my next brother down, Jez, had created all these amazing ships and stuff too, and it sank in that was going to be his specialty (sighs).
But we still needed some sounds for this thing, right? So that’s how I ended up being the musician brother, haha! I got hold of a music sequencer for the Commodore 64 and started learning how to make the SID chip (the world-famous analogue synth chip in the C64) sound good. Back then, my music wasn’t so great… nothing compared to the people who still compose for the SID chip even to this day – for example, LMan is an amazing C64 composer.
Fast forward to when we all got an Amiga 500 each, and I didn’t even bother to try to program or create graphics. I went straight to creating audio. Lee became the coder for the music play routine, the essential bit of code you would shove into the game to play my music. So we were symbiotic in that respect… games companies would hire us both as a team. When you looked at a game on the TV screen, you could see how many raster lines Lee’s player would take up, and his play routine would be super-fast and super-tiny memory-wise.
We also created a custom sound effects player. So you’ve obviously got the music player, but you need a compatible sound-effects player. I sat down with Lee and I said, OK, we need to develop something that you can edit various parameters and you can have memory efficient source samples too – short, little waveforms. By manipulating these small looping samples in code, we could create zaps, explosions, wind noises and whatever else we needed, but saving loads of valuable memory space in the process.
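The core trick Tim describes translates neatly into a few lines of modern code. Below is a hypothetical Python illustration (the names and numbers are my assumptions, not the Wrights’ actual player): a tiny looping waveform, a few dozen samples long, is read back with a swept pitch and a decaying envelope, producing an explosion-style effect while only the small source loop occupies memory.

```python
import random

def make_noise_loop(length=64, seed=1):
    """A tiny looping source 'sample' -- just a few bytes of noise."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(length)]

def render_effect(loop, frames=8000, start_pitch=2.0, end_pitch=0.3):
    """Read the loop at a falling pitch with a decaying envelope,
    turning one short waveform into an explosion-style effect."""
    out = []
    phase = 0.0
    for i in range(frames):
        t = i / frames
        pitch = start_pitch + (end_pitch - start_pitch) * t  # pitch sweeps down
        env = (1.0 - t) ** 2                                 # amplitude fades out
        out.append(loop[int(phase) % len(loop)] * env)       # loop wraps via modulo
        phase += pitch
    return out

loop = make_noise_loop()
effect = render_effect(loop)
```

Changing only the sweep and envelope parameters on the same 64-sample loop gives zaps, rumbles or wind, with no extra sample data stored.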
I think the creative process changes depending on the hardware that you’re playing with…
Tim Wright about technology influences
These were all synthesized waveforms, we’re not yet in the realm of samples or found sounds?
Yes. You actually start off with literally just white noise. On some computers and synthesizers you can then filter that. As people reading this might already know, low-pass filters can help when you’re creating sound effects, but you had nothing like that on the Amiga.
To do that in real time, like you can on any PC now, there just wasn’t enough horsepower. You can’t even play an MP3 on an Amiga, it’s not fast enough to decode it. So, it was a case of having various little samples. One that’s like a full open white noise waveform, one that’s a bit more low-pass, and another that’s a bit more low-pass, and then you can switch between them, vary the pitch, combine them, and you do all this within a little editing system.
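The pre-filtered variant trick can be sketched in code too. This is a speculative Python illustration (the one-pole filter and all names are my assumptions, not the actual Amiga tooling): several progressively low-passed copies of one noise sample are computed up front, so playback only ever has to switch variants and vary the pitch.

```python
import random

def one_pole_lowpass(samples, alpha):
    """Offline one-pole low-pass; smaller alpha removes more highs."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(512)]

# Three variants from bright to dark -- the cost of filtering is
# paid once, offline, not during playback.
variants = [noise,
            one_pole_lowpass(noise, 0.5),
            one_pole_lowpass(noise, 0.1)]

def play(variant_index, pitch=1.0):
    """Pick a variant and resample it; stepping the index and pitch
    per frame fakes a filter sweep with no real-time filtering."""
    src = variants[variant_index]
    return [src[int(i * pitch) % len(src)] for i in range(len(src))]
```

Stepping `variant_index` from 0 towards 2 over time approximates a closing low-pass sweep, which is essentially the switching scheme described above.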
Everything was memory constrained.
So, after Shadow of the Beast II, did your career really explode?
Yeah. It was never ending. It was insane.
Tell us more about the composition process with SOTB II.
One good thing about SOTB II was the fact that the guy at Reflections, the company that was developing the game, had worked with David Whitaker, the famous musician who had created the music for the first game.
He was a bit of a muso himself. He played guitar and he loved his Korg M1. So consequently, when he worked with David and then me, he’d recommend I try patch 38 for the percussion patch, 76 for the lead sound, etc. I mean down to that level of micro-management, if you’re looking at it from that perspective.
From my perspective, this guy had created Shadow of the Beast which was epic. I mean the gameplay, maybe not so much, but a great showpiece. So, to be working on the follow-up, I wasn’t going to say, “I don’t think that’s a good idea”. In fact, I didn’t even have a Korg M1, but I had a friend who did. With the Amiga you can sample anything, so if you know someone with a stack of great synthesizers it’s a good place to start!
I would say they went easy on me to be honest. They knew that I could write music and they had a style that they had already kind of worked out with the first game.
I took that on board and then worked outwards musically from there. I added some of my own sounds, maybe not doing exactly what he’d asked. I just kept sending them demos, which, ‘back in the day’ was insane. You’d have to go to the train-station, and they operated a service called Red Star where you’d pay an inordinate amount of money for it to go on a train and arrive the same day. Then he’d be near a train station in the North of England, and ready to get the floppy disk. In fairness, he did pay for all those expensive shipments.
It wasn’t the baptism by fire it could have been, they were very kind and helpful. That doesn’t mean it wasn’t stressful though; the deadline was short-ish and I’d not developed for a game before. It was a case of knowing you have this much memory for this part of the game, and less or more for the next part, and just working with that.
The other thing is that when you painstakingly create this music, some of it might not make it into the game, because of memory constraints. Let’s say we’ve got four disks in total, the most important thing is the gameplay, the levels, the intro, because it says Psygnosis on the label. So, if some of your music gets cut because they’ve run out of space, or had to add another level to the game full of graphics, then don’t take it personally. You still get paid. So that was okay.
How has your creative process changed over the years, from Shadow of the Beast II, the Commodore 64, the Amiga, and then to WipEout and modern day games?
I think the creative process changes depending on the hardware that you’re playing with, and any constraints that are put upon you like timescales. Clearly the process can also change depending on what the game is about and what genre it is. The process can also change depending on the people or the team you’re working with too. All of these things feed into the big question that is, “How am I going to approach this project?”
Do you sit down with a team to explore the visual side of things to inform the sense of sound, or do you go away with a script or an idea or things written down?
I’m very visually led. I guess it’s always a case of watching a demo or a video. Is there a work in progress? Is there a YouTube video I can look at? If I was playing this game, what would I like to hear now? Then, after I’ve absorbed that, I ignore it and just go into composing mode, play around with some ideas and sometimes go off in a bizarre direction and waste the whole day! If I find I’ve composed something that’s actually okay, but it’s not gonna work for this project, I’ll put it in the unused pile.
When I’m doing racing game stuff, you know, WipEout, Pacer, Slipstream, all that anti-gravity stuff, then the visual side is very important to me. All of these racing games have a different pace too, different speeds, different sense of scale. I have to try and say to myself, WipEout was a certain flavour, don’t go there with this new game. But it’s still a little bit of a tightrope, because people know what WipEout should sound like coming from CoLD SToRAGE.
For example, when I was working on Sodium 2 (or Project Velocity as it was also called in PSN Home), it was a bit more gritty and gravelly, and wasn’t quite as fast. More importantly, it was set on an alien planet too – not on Earth. So I had to keep the pace of the racing game, and make sure to add those nice melodic hooks that people enjoy but also add a different flavour.
Then with Pacer, it was more of an open vista and less constrained, compared to WipEout where you’re going down this tunnel and that tunnel, and there’s a tight turn with very little to be seen in the distance. So, I decided it had to be a little more ethereal. You’ve still gotta have the driving beat, but it’s less tightly formed, with space for the music to breathe – a good example of where I’m trying to keep it so that it sounds like me without forcing a style upon myself. Each one of these games has to be identifiable, not only as me, but as the product itself. It can be tricky!
It’s not just your standard sine, triangle and square…
Tim Wright about the COBALT8 Sound Engine
I imagine that you’re perhaps reaching for different hardware synths or do you also use a lot of software instruments too?
Until 2017 I had been completely PC-bound with Propellerhead Reason. I fell in love with Propellerhead right from the early days. They made a PC program (ReBirth) that emulated the Roland 808 and 909 drum machines, along with the TB-303 bass synth – back in the acid house era. And from that they created a fully featured DAW called Reason. I just love the fact that it has a virtual rack and you can spin it around and cable things up and then go from CV to, you know, something else, modulate this and modulate that. It’s so flexible.
The Reason DAW was enough. I didn’t need any hardware synths. So it’s been a strange journey for me, from the SID chip in the C64 through to the Amiga and its sampling side of things and then I was fed up with this and wanted the proper kit. So, when I worked at Psygnosis, after the Sony buyout and the PlayStation was launched we had money to splurge. So I got the Roland JD-800, some Akai sampling synths, just basically a whole collection of physical gear because the sound quality is way better. All this was going through a Yamaha ProMix-01 digital desk with motorised sliding faders. I had a Lexicon reverb too – I was like a kid in a sweet shop. So that was how I was creating music for a lot of games for a while, back in the mid 90s.
When Reason 3 or 4 came out I thought there was no point having all these hardware synths, because they’re kind of limited. And if I want to hear two JD-800s at the same time in a piece of music, then I’d have to multi-track. I’ve got to record the first bit, then the second bit and if I realized I didn’t quite like the first bit, I’d have to re-record it… it was slow and frustrating.
But with Reason on a PC, I had an infinite number of synths, drum machines, and stuff in theory. And, if I needed a chorus or a reverb on anything, it was just there in an instant. But then you start to go crazy, adding all these software generated synths and effects and it just can’t cope, so I needed a faster PC!
I just lived in that little insular world, with everything inside my PC, that is until analog started to make quite a comeback a few years ago. It was then that I thought I should maybe get back into the outboard gear. It might sound counterintuitive given what I’ve just said, but it’s kind of exciting to have a limited world to play with again.
I got myself a Behringer Deepmind 6 on my birthday a few years ago, and was wowed. It sounded really good. I did some crazy things with it. I never really used it to write any music, well not so far, but I certainly enjoyed it just as a noodling thing. It gave me some musical inspiration and some cool ideas.
So, I can’t really say that I’m now a convert of using outboard MIDI sequencers again, but I will create some sounds and sample it or maybe do a riff and then have that riff sampled in one of my songs.
Which are your preferred synthesizers at the moment?
I still noodle with the Deepmind 6. Obviously the Modal COBALT8 has endeared itself to me to the extent where I was doing a project the other day, and I needed sound effects. Normally I would load Reason, and use the Subtractor Synth, because it’s the most basic synth in Reason, and because I know it so well. But on this occasion I didn’t, I used the COBALT8 because in my mind I thought I need ‘that sound’ and in my brain I knew I could do that on the COBALT8 really easily. I was quite surprised to suddenly be thinking that way – a real shift.
Actually, the first day I received the COBALT8 I was wondering if I’d really like it or not, as it has a bit of a learning curve. But by day three, it was a case of, “I love it, and you’ll have to claw it out of my cold, dead hands if you want it back!”. It was staying with me. Then I went crazy creating a bunch of presets – and to actually use it on a project was a joy.
I hear you’ve already used it to create a demo track?
Yeah! After a few weeks I’d created over 140 presets. The idea behind creating these was that there would be enough variety to create several sonic landscapes. There’s percussion, bass, pads and leads, and even some sound effects too.
The demo track was built up in layers. I sat down and just played the drums by hand, and gradually built up the demo track by track and just played everything manually. Then I added a slight tweak with EQ, but no extra effects and no major mastering. It was simply created as a demo piece, rather than an actual finished song. I mean, it could be turned into a proper song, but I was just trying to showcase what the COBALT8 can do.
For those who don’t know as much about synthesis, what is it that you prefer about the COBALT8 over other synths? What features really attract you to it?
Modulation. I mean, using it as it stands, just twiddling with the knobs and the dials and sliders is all well and good. But, when you’ve got it rigged up to a PC where you’ve got the full editor, it’s a joy to use that and control what’s going on in the synth. Then, once you’ve got a sound, you can just reach for the keyboard and tweak the parameters a little bit. It’s kind of like a dual process. I’ll do a lot of it using the editor on screen and then as I’m playing, I’ll just tweak using the knobs on the synth itself.
And by modulation, I mean you’ve got certain waveforms and then you’ve got mutations of those waveforms. To make a compound sound and something that sounds like it’s alive and sweeping, then you’ve got the oscillators. For example, you can say, this particular modulation is going to affect the pulse width on a pulse waveform, but there are so many other waveforms to choose from that change in other ways. They’ll morph from, say a sine to a triangle, to a square and so on.
Then it has some algorithms that contort the waveform in other ways. So, if you just change one of those oscillators you’ll get one style of effect – but then you can also modulate a modulator, which sounds odd, but it’s a thing! You could also do something to an extreme where you’re changing the pitch rapidly, or anything else you can think of. It’s really a case of, what happens if I do that?
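“Modulating a modulator” is easy to show in code. Here is a generic Python sketch (my own illustration, not the COBALT8’s engine): a fast sine LFO whose rate is itself swept by a slower LFO, with the instantaneous rate integrated into a running phase so the output stays continuous while its speed ebbs and flows.

```python
import math

SR = 1000  # control-rate samples per second (arbitrary for this sketch)

def modulated_lfo(seconds=4.0, base_rate=5.0, rate_depth=3.0, slow_rate=0.25):
    """A fast sine LFO whose *rate* is swept by a slower sine LFO.
    Integrating the instantaneous rate into a phase accumulator
    keeps the waveform continuous as its speed changes."""
    out, phase = [], 0.0
    for i in range(int(seconds * SR)):
        t = i / SR
        # the slow LFO modulates the fast LFO's rate between 2 and 8 Hz
        rate = base_rate + rate_depth * math.sin(2 * math.pi * slow_rate * t)
        phase += rate / SR
        out.append(math.sin(2 * math.pi * phase))
    return out

sig = modulated_lfo()
```

Routing `sig` at a pulse width, a pitch, or another LFO’s depth is the “what happens if I do that?” experiment Tim describes.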
It’s got several effects built in, like chorus, delay, reverb, phaser and a rotary-style effect to name a few. It’s also got a bit-depth effect, so you can get really grungy, dirty sounds. You can even modulate the effects as well, using one of the LFOs. So doing that, you can get some very interesting sounds indeed.
It’s got a single main filter with different settings, but there’s also filters built into some of the effects, so I use those sneakily too in some of my patches.
It’s got such a rich sound canvas, because there are so many waveforms to choose from. It’s not just your standard sine, triangle and square… there are some weird and wonderful waves; some are kind of resonant, some have a kind of multi-sinusoidal vibe, a bowing kind of sound, and some really crunchy AM-modulated sounding waveforms.
Modal do a sneaky trick where each oscillator isn’t just an oscillator. If you go to the extreme, you get octaves, octave plus sub, chords and so on. I’ve used these in a few patches to get a sort of arcade machine sound, it’s almost like a tune in itself.
CoLD SToRAGE Signature Sound Packs
Messij for COBALT8
Prepare yourself! This is an epic artist library from the man himself, giving you some insight into his unique approach to sound design – everything you could dream of is included here, from hyper-realistic FX to expansive pads, and everything in between.