Welcome to my blog.

If you’re reading this, the odds are not in your favor and you have been chosen to grade my assessment. Please help yourself to some cookies. My name is Esteban, and although I’m originally from Ecuador, I currently live in Indonesia, on the island of Borneo. God knows why. Anyway, I've been fooling around with the guitar for more than 15 years, but I'm still an amateur. Really.

Speaking of odds, you probably know a lot more than I do. If you are grading my assessment, please do leave a comment with any suggestions, constructive criticism, or insults about the lesson. Thanks!

Monday, April 8, 2013

Week 5: Modulated Short Delay Effects (Flanger, Phaser, and Chorus)

This week we explored filter and delay effects. It was probably the most important week for practical knowledge. Understanding how filters work depends mainly on our ability to experiment with these plug-ins and to stay aware of how the properties of the signal change.

However, the theory behind that knowledge is fundamental to understanding it. While you probably had the opportunity to explore the characteristics of your specific plug-in, I think the theory can get a little bit ignored. So I'll talk about the modulated short delay effects, and describe what they do and how they work.

Modulation effects are very helpful when trying to add dimension and depth to a signal. These effects are, actually, delays, but their delay time is only a few milliseconds, so short that our ears can't perceive it as a distinct echo. A low frequency oscillator (LFO) continuously varies that delay time. That means these effects do not feature audible repetitions of the signal; rather, they combine the original signal (dry) with the altered one (wet). That's what modulation means: by duplicating a signal with a particular, constantly changing alteration, the effect "modulates", changes over time, a particular property of the original signal.
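To make that concrete, here is a minimal sketch of a modulated short delay, assuming NumPy is available and that `x` is a mono float signal. The function name and all parameter values are mine, for illustration, not taken from any particular plug-in:

```python
import numpy as np

def modulated_delay(x, fs, base_delay_ms=3.0, depth_ms=2.0,
                    lfo_rate_hz=0.5, mix=0.5):
    """Mix a dry signal with a copy whose delay time is swept by an LFO."""
    n = np.arange(len(x))
    # The LFO sweeps the delay between (base - depth) and (base + depth) ms
    delay_ms = base_delay_ms + depth_ms * np.sin(2 * np.pi * lfo_rate_hz * n / fs)
    read_pos = np.clip(n - delay_ms * fs / 1000.0, 0, len(x) - 1)
    # Fractional delays need interpolation; linear is enough for a sketch
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, len(x) - 1)
    frac = read_pos - i0
    wet = (1 - frac) * x[i0] + frac * x[i1]
    return (1 - mix) * x + mix * wet  # dry/wet mix
```

The whole family of effects in this post lives inside this structure; what separates one from another is mostly the length of the base delay, the depth and speed of the sweep, and whether there is feedback.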

I want to explore two of those modulation effects: Flanger and Chorus.

Flanger:
I did say this was a theory post. But to understand a good use of flanging, you should probably start by listening to the best Beatles album (yeah, the best, you heard me!): Revolver. It is said that it was John Lennon who named the effect. "Tomorrow Never Knows" is probably one of the best songs to demonstrate an awesome use of a flanger effect:



Do you hear it? A flanger could be described as the sound of a jet engine. It has a certain metallic quality to it...
Flanging is a comb filtering effect set in motion. That means a delayed copy of a signal is added to the signal itself; as a result, the frequency response consists of a series of evenly spaced notches that resemble the teeth of a comb, which is exactly what you see through a spectrum analyzer.
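As a sanity check on that comb shape, here is a small sketch, assuming NumPy and SciPy, that computes where the notches of a static comb filter fall; the numbers are illustrative:

```python
import numpy as np
from scipy.signal import freqz

fs = 48000   # sample rate in Hz
D = 48       # delay in samples: 1 ms at 48 kHz

# y[n] = x[n] + x[n - D]: one tap now, one tap D samples later
b = np.zeros(D + 1)
b[0], b[-1] = 1.0, 1.0

w, h = freqz(b, worN=8192, fs=fs)
magnitude_db = 20 * np.log10(np.abs(h) + 1e-12)  # the comb-shaped response

# Notches sit at odd multiples of fs / (2 * D)
print([(2 * k + 1) * fs / (2 * D) for k in range(4)])
# [500.0, 1500.0, 2500.0, 3500.0]
```

Sweeping D up and down with the LFO is what turns this static comb into the jet-engine whoosh.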



Figure. Flanger effect window.

Flanger controls: In order to be a flanger, the delay time has to be very short, and the Rate knob in the Logic effect panel controls how fast the LFO sweeps it. Being a modulated effect, both the original and the processed signal should be present, so the Dry/Wet control should feature both (a 100% mix in the example means all wet, so 50% could be a standard). The amount of Feedback, how much of the signal is sent back into itself, can give the effect more tonal presence, or make the sweeping more pronounced.
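Here is how the feedback control fits into the picture, as a sketch under the same assumptions as before (NumPy, `x` a mono float signal, illustrative parameter values, not Logic's actual internals). Part of the delayed signal is written back into the delay line, scaled by the feedback amount:

```python
import numpy as np

def flanger(x, fs, base_delay_ms=2.0, depth_ms=1.5,
            lfo_rate_hz=0.25, feedback=0.5, mix=0.5):
    y = np.zeros_like(x)
    buf = np.zeros(int(fs * 0.02))  # 20 ms circular delay line
    write = 0
    for n in range(len(x)):
        lfo = np.sin(2 * np.pi * lfo_rate_hz * n / fs)
        delay = (base_delay_ms + depth_ms * lfo) * fs / 1000.0
        read = (write - delay) % len(buf)
        i0 = int(read)
        frac = read - i0
        delayed = (1 - frac) * buf[i0] + frac * buf[(i0 + 1) % len(buf)]
        # Feedback: part of the delayed signal re-enters the delay line
        buf[write] = x[n] + feedback * delayed
        y[n] = (1 - mix) * x[n] + mix * delayed
        write = (write + 1) % len(buf)
    return y
```

Raising `feedback` toward 1.0 makes the notches deeper and the sweep far more dramatic; at 0.0 you are back to the plain feedforward comb.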

Chorus:


Have you ever experienced listening to a choir? Well, that is literally what happens when you use this effect: there is more than one similar sound source, sounding almost at once, with almost the same timbre. The little discrepancies between a group of singers are what make you hear something and say, OK, that's a choir. Let's get out before communion starts. Chorus can give depth to a clean guitar, lending it a certain doubled, echoing quality. It is featured, for example, in Nirvana's "Come as You Are":





Sorry to remind you of your 15-year-old self. Yeah, this song has been more overplayed than O.J. Simpson's race card, but still, such a good song. And the chorus effect helps with that: the opening rhythm guitar has some depth to it, as if there were several guitars playing at the same time. The effect is also heard when that high, piercing chord explodes just before Kurt starts the "Memori--a" bit, right under the minute mark of the video.

Chorus works by passing the dry signal through the effect, which duplicates it and continuously varies the copy's delay time, very slightly changing its pitch and playback speed. Again, being a modulated effect, you need the original signal present; running the effect with only the wet signal might just give you an out-of-tune sound.

Chorus controls:
The Delay should be longer than a flanger's, but not long enough to be heard as an echo; somewhere between 15 and 30 ms is a good idea. Again, as a modulated effect, the Dry/Wet control should feature both signals. The frequency of the LFO is determined by the Rate knob. In the Logic plugin, the Intensity slider sets the modulation amount of the chorus.
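Under the same assumptions as the `modulated_delay` sketch near the top of this post (NumPy, `x` a mono float signal at sample rate `fs`), a chorus is just that same structure with a longer base delay and a gentle sweep; these values are illustrative:

```python
# Longer delay (~20 ms), subtle depth, slow LFO: chorus territory
chorused = modulated_delay(x, fs, base_delay_ms=20.0, depth_ms=3.0,
                           lfo_rate_hz=0.8, mix=0.5)
```

Push `base_delay_ms` down to a few milliseconds and add feedback and you drift back into flanger territory; push it well past 50 ms and you start hearing a distinct echo instead.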

Figure. Chorus effect window.



There you have it. These effects add a very interesting character to your signals, and are usually very helpful for creating a vivid environment. If you want to explore them, open your music library. I bet that, if you pay attention, you'll find cases where you can hear both of these effects in action.

Thanks for reading! 





Monday, March 18, 2013

Week 2: Preparing to Record a Project in Adobe Audition


Hi, Esteban here. This last week Loudon rocked his mohawk talking about DAWs, their main characteristics and possibilities. Given the hypermongous amount of audio software out there, not every aspect mentioned in the videos is applicable to each DAW, but the main ideas remain. I’ll talk today about preparing a project using Adobe Audition 3.

Adobe Audition's interface
Is this guy watching Rudy on a DAW?

Why Audition? Mainly because of two things: first, it’s the continuation of the popular Cool Edit Pro, the audio editor I used to record with during my golden years (oh, the glorious decade of the 2000s!). I got disconnected from music, and when it was time to get back, Jojo, Audition was there. And secondly, Audition 3 is currently available in its entirety from Adobe’s website, which makes it a good option for people on a budget (note that I do not know if downloading it without previously owning a version is legal). Audition 3 is outdated and not that great, but these are more than good reasons to use it. Great reasons. So, yeah, don’t judge me!

Pre-production steps:

DO NOT use your illusion just yet. First of all, it’s time to get organized.

a.       Set a location folder. Before even opening Audition, you should create a project folder, and name it appropriately.

b.      Set your sample rate and bit depth. When opening a new project, Audition will prompt you with a window asking what sample rate you wish to use. Remember that the recommended rate is 48 kHz. To check the bit depth, go to Edit > Preferences > Multitrack.

c.       Check file types. Audition is set to record your audio as WAV. You should use this as your standard.

d.      Check your hardware settings and set the buffer size. Go to Edit > Audio Hardware Setup… and, in the Multitrack tab, make sure the correct driver for your interface is displayed, and that your Input and Output ports correspond to your desired, um, input and output, um, ports. Also, set your buffer size in the same window: click on the Control Panel… button to open your driver's audio panel, where you can change the buffer size. For recording, a low number such as 128 samples is recommended to avoid latency (see the quick math after this checklist).

Audio Hardware Setup Menu
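As a quick sanity check on why a small buffer matters, here is the arithmetic, assuming a 48 kHz session as in step b; the buffer sizes are illustrative:

```python
# Each buffer must be filled before you hear it, so buffer size
# translates directly into delay.
sample_rate = 48000
for buffer_size in (128, 512, 2048):
    latency_ms = buffer_size / sample_rate * 1000
    print(f"{buffer_size:4d} samples -> {latency_ms:5.1f} ms per buffer")
# 128 -> 2.7 ms, 512 -> 10.7 ms, 2048 -> 42.7 ms
```

A couple of milliseconds is imperceptible while you play; tens of milliseconds is enough to throw your timing off.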


Now it's time to get down to business. If business means a new checklist for recording.
Recording steps:

Two tracks from Audition's Multitrack view.

Step 1. Check your settings and save your session. Make sure everything is in order. Name and save your session.

Step 2. Create a track. Tracks are shown in the Multitrack view. Use the arrow pointing right to determine the input of the track and the one pointing left for the output.



Step 3. Name the track. This is pretty straightforward. Click on the track name (“Track 1” by default) and change it to something useful (e.g., lead guitar... or, if you want, Bill, or George, anything but Sue.)


Step 4. Record-enable the track. Click on the red button to the right of the track name, the one with the “R” on it. I think “R” stands for either “Rock ‘n’ roll” or “record”, I’m not sure.


Audition's session properties
Step 5. Set your levels. It is best to set your levels using your preamp or audio interface. Also, trust your ears. Audition has a horizontal bar at the bottom of the screen that will show you the levels. The color red is a no-no.


Step 6. Enable your click and count-off. Set the metronome by going to Session Properties; there is a shortcut at the lower right corner. Change to the desired tempo there, and press the metronome button to activate it.




Transport options. To the right: Japan's flag. 
Step 7. Record! It’s a kind of magic! Press the Record button on the lower left corner of the screen, in the Transport options (Ctrl+Space bar is the shortcut). If you don’t know what the button looks like, search for the one that looks like Japan’s flag. If you don’t know what Japan’s flag looks like, well, it looks like a record button.

DOMO ARIGATO! You are ready for the Rock part. Time to get sex and drugs (but these are optional and kind of overrated, really).


Again, Audition 3 might not be your best option as far as DAWs go, but it gets the job done. It’s a pretty straightforward piece of software, and I find it easy to navigate. The preparation process is easy to figure out once you know where everything is. Also, Audition will warn you if you are doing something wrong; for example, you won’t be able to arm a track if you haven’t saved your session. Get your free (?) copy at Adobe.


Thanks for reading guys! Good luck on the rest of the class!

Sunday, March 10, 2013

Week 1: Visualizing sound





An object vibrates. Its vibration releases energy. This energy transforms the air while travelling. Behold: sound! Sound is the longitudinal wave that travels through the air after that vibration. This means that the particles of air affected by the wave oscillate parallel to the direction in which the wave travels.

An example of a longitudinal wave:


Via physics classroom

But being a display of energy in air, sound is not perceptible to our eyes. However, humans have felt compelled to represent these sounds through their most trusted sense. Arguably the most important innovation in music representation is the staff:

Figure. A musical staff.


And the staff was a magnificent idea because it helped people to "record" their musical creations. However, it was not a precise way to represent sound visually. In reality, the staff is not so much a visual representation of sound as an extended symbolic language that encodes and decodes the pitches an instrument is supposed to make. First, the staff is only useful for representing music; and secondly, it is useless without written language, because it doesn't allow us to pinpoint the instrument making the sound, the amplitude of the notes, or the tempo of the "recording" without words.

In a staff, x = time and y = a representation of pitch.

In modern music production, the visualization of sound needs to show different elements to capture the complicated process of audio recording. One of the most popular ways to visualize audio these days is through an oscilloscope display. This is what such a display looks like:


The oscilloscope display is arguably the most common display in a music studio, because it allows an easy interaction between the producer and the sound wave. It allows you to move easily and quickly through an audio track, and locate any part that needs attention.

The representation consists of x = time, and y = amplitude. Time in music can be measured in seconds, beats, or other units. The amplitude, which corresponds to loudness, is represented in decibels relative to full scale (dBFS). It's common to see the sound wave with a similar second audio channel right under it. This means it's a stereo recording, the top usually being the left channel and the bottom the right channel of the sound.
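If you want to draw this kind of display yourself, here is a minimal sketch, assuming a 16-bit stereo WAV file and Python's standard `wave` module plus NumPy and matplotlib; the file name is hypothetical:

```python
import wave
import numpy as np
import matplotlib.pyplot as plt

with wave.open("take1.wav", "rb") as wf:  # hypothetical file
    fs = wf.getframerate()
    frames = wf.readframes(wf.getnframes())

# Interleaved 16-bit samples -> two columns: left and right channels
samples = np.frombuffer(frames, dtype=np.int16).reshape(-1, 2) / 32768.0
t = np.arange(len(samples)) / fs  # x axis: time in seconds

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(t, samples[:, 0])     # left channel
bottom.plot(t, samples[:, 1])  # right channel
bottom.set_xlabel("Time (s)")  # y axis is amplitude, full scale = 1.0
plt.show()
```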

As practical as the oscilloscope display is, it is really a very simple representation of sound, for it doesn't give you an idea of the frequency of the sound. Frequency is an indicator of the pitch of a sound, and it measures the number of times a sound wave vibrates in a given period of time. To see it in an oscilloscope, you would have to zoom in (a lot) and count the number of waves that a given track produces in a set amount of time. Well, that's just ridiculous. As in an infomercial: there has to be a better way!


Figure. A voice waveform (oscilloscope view) and its spectrum. Via Wikipedia.

The Spectrum analyzer shows the range of frequencies of a sound (a spectrum, duh!). Each frequency component of a sound is decomposed in the spectrum to show its level at the different rates, in hertz, of said sound. A hertz is the unit of measurement for frequency: 1 vibration per second = 1 hertz. In the graphic above, a Spectrum analyzer is shown compared to an oscilloscope. They are very different, because their horizontal axes show different elements. The spectrum analysis is actually showing the frequency content of the sound, the spectrum of elements that forms it, and it can help us study the timbre of a particular sound.

The Spectrum analyzer shows y=amplitude (in decibels) and x=frequency (in hertz).
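This is, at its core, what a spectrum analyzer computes. Here is a small sketch with NumPy, using a synthetic 440 Hz tone as an illustrative input:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)  # one second of an A440 tone

windowed = x * np.hanning(len(x))            # window to reduce leakage
spectrum = np.fft.rfft(windowed)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)    # x axis: hertz
magnitude_db = 20 * np.log10(np.abs(spectrum) / len(x) + 1e-12)  # y: dB

print(freqs[np.argmax(magnitude_db)])  # 440.0: the tone's frequency
```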

However, this display is not very dynamic, because it doesn't show time.

One of the most complete representations of sound is the Spectrogram analysis. It resembles a Spectrum analyzer, but lying down. The main difference: it also shows us time. And this difference is huge, because the Spectrogram can show us the variations in the frequency content of a sound and the way it changes over the entire length of the signal.

U of Wisconsin

In this visualization, x = time, either in seconds or beats; y = frequency, showing the timbre of the sound and how it changes through time; and z = amplitude. In this case, z is the color of the signal.
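And here is the spectrogram version of the same idea, assuming SciPy and matplotlib, reusing the illustrative `x` and `fs` from the sketch above:

```python
from scipy.signal import spectrogram
import matplotlib.pyplot as plt
import numpy as np

f, t_seg, Sxx = spectrogram(x, fs=fs, nperseg=1024)
plt.pcolormesh(t_seg, f, 10 * np.log10(Sxx + 1e-12))  # z = amplitude as color
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```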

The Spectrogram analysis is a very important tool for the study of sound, and it is not used only by musicians or producers, but also by anyone interested in the properties of sound. Physicists may conduct experiments on the properties of frequencies; linguists may study the differences between syllables; zoologists can compare the different sounds made by animals.


As I said before, the oscilloscope display is probably the most common way to visualize music in the context of music production. But as you progress in your development as a producer, you may find the need or the curiosity to explore the way the frequency of a sound is displayed. Sooner or later you'll find yourself using either the Spectrum analyzer or the Spectrogram display.


Visualization of sound through a Spectrogram display is an interesting experience, because it really gives you a visual sense of the sound's properties. It has become so popular that some artists have created pictures with their sounds. Like Aphex Twin, in a song people just call "Equation" because its real name is a long... um... equation.
Aphex Twin's "equation" through a
Spectrogram display

Also, check out below the Venetian Snares track "Look", from the aptly named album Songs About My Cats. Notice the difference in visualization between the Spectrogram analysis, the main component of the video, and the tiny oscilloscope display shown at the bottom of the video.






Thanks for reading!


E.