Yesterday’s AIMS meeting was a great experience for me, offering some insight into how Toronto’s marketing community is getting into the game, experimenting with formats and seeing great results.
It was a pleasure meeting people at the networking event that followed, and interesting to learn about the various levels of experience in the room.
One thing I was most happy to hear from several people was the idea of improving the quality of podcasts. I am still a firm believer that in a sea of media, the most interesting and well-produced content will achieve longevity, even if it lives in the long tail, as Mitch Joel pointed out.
Chandra Bulucon of Puppy Machine and I are currently in the design and research stages of what we’re calling the Sound Education Program.
Aimed (currently) at schools in the GTA, the program seeks to add value to learning through the use of sound. Projects like audio book reports and podcasts are part of the potential agenda. Just as visual images can help some students better absorb material, we feel sound can do the same. It worked for us!
Anyone with links to similar projects, or contacts they feel would benefit such a project, is encouraged to contact me.
After listening to a few podcasts, though, I admit I was slightly miffed to find that the musical examples given throughout the lecture series were edited out for “copyright reasons”.
When will big (and small) music learn that we now live in a digital age? If someone wants to rip that song out of a lecture series by grabbing the video and taking out the audio, they will!
Then again, is this perhaps one of the better ways to lure people back to your site? What do you think? Marketing touchdown or fumble?
It’s not as though we all have a LOT of time on our hands these days; I think everyone understands that. But how many “ums” and “uhhs” are too many, and how many are too few?
When editing a podcast, try to listen to it from an audience perspective, taking the vantage point of someone who is hearing it for the first time. Try backing up your playback by about 3–5 seconds to hear whether that pause sounds natural or rushed.
If you NEVER edit out those ‘thoughtful speech fragments’, do consider those times you’ve had to listen to a politician’s sound bite on the radio and how inane and boring it can be (though the linked example’s not THAT bad). Then multiply that feeling by 5 or 10.
Try to find a balance between those two extremes for a natural sound.
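For those editing digitally, that “back up and listen” habit can be roughly automated. Here’s a minimal sketch that scans raw audio samples for quiet stretches long enough to consider trimming; the threshold and minimum-length values are assumptions for illustration, not standards.

```python
def find_long_pauses(samples, sample_rate=10, silence=0.02, min_seconds=1.0):
    """Return (start, end) sample indices of quiet stretches at least
    min_seconds long. `silence` is an assumed amplitude threshold."""
    min_len = int(min_seconds * sample_rate)
    pauses, start = [], None
    for i, x in enumerate(samples):
        if abs(x) < silence:
            if start is None:
                start = i  # a quiet stretch begins here
        else:
            if start is not None and i - start >= min_len:
                pauses.append((start, i))
            start = None
    # catch a pause that runs to the end of the recording
    if start is not None and len(samples) - start >= min_len:
        pauses.append((start, len(samples)))
    return pauses

# 5 loud samples, 12 silent ones, 5 loud (at a toy 10 Hz sample rate)
speech = [0.5] * 5 + [0.0] * 12 + [0.5] * 5
print(find_long_pauses(speech, sample_rate=10))  # [(5, 17)]
```

A tool like this only flags candidates; the “does it sound natural?” judgment still has to be made by ear.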
Whether you’re producing something for radio or the internet, or just having a speaker at a conference without recording at all, some form of what’s known as limiting is useful.
A peak limiter essentially controls the volume of the audio signal so it never goes above a certain level. Typically, limiters are set just below maximum because their method of controlling volume can be quite aggressive and may actually make the sound worse.
In our signal chain, a limiter could be placed either between the main output of the mixer and the speakers (black path below) or between the record output of the mixer and your recording device (dark green path below). Either method will help to eliminate distortion (crackling, noise, etc.) from your final destination. Some mixers incorporate a limiter before the final output stage.
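To make the “never goes above a certain volume” idea concrete, here’s the crudest possible sketch: a zero-attack limiter clamping raw samples to a ceiling. Real limiters smooth their gain changes over time (attack and release), which is exactly why the aggressive instant version below can make things sound worse; treat it as an illustration, not production code. The 0.9 ceiling is an arbitrary assumption.

```python
def peak_limit(samples, ceiling=0.9):
    """Clamp samples so no value exceeds +/- ceiling.

    A real limiter eases its gain reduction in and out over time;
    this zero-attack version simply clips, which is the harshest case.
    """
    out = []
    for x in samples:
        if x > ceiling:
            out.append(ceiling)
        elif x < -ceiling:
            out.append(-ceiling)
        else:
            out.append(x)
    return out

signal = [0.2, 0.95, -1.3, 0.5]
print(peak_limit(signal))  # [0.2, 0.9, -0.9, 0.5]
```

Setting the ceiling just below maximum, as mentioned above, leaves a little headroom so the limiter only grabs the occasional peak rather than working constantly.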
Okay, I’m not going to criticise people for not giving me much feedback on this site. Chances are if you’re actually reading it, you don’t have much knowledge of audio in the first place, so what would you comment about?
Besides, I’m talking about feedback in the audio sense: when the sound playing out of your monitoring source, typically speakers, reaches back to the microphone and gets re-amplified. That process then repeats until you get that awesome ringing sound.
I bring this up because it’s a common issue with conference audio and can be helped substantially with a very simple solution: put the speakers IN FRONT of the microphone.
While early reflections (the first sound waves bounced off the walls) may still arrive back at the mic position, in general moving the speakers in front of the microphone position (podium, panel table, etc.) will dramatically decrease the amount of sound coming back. And if you’re using a cardioid microphone, it should reject most sound arriving from behind the mic.
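The reason speaker placement works comes down to loop gain: each trip around the speaker-to-mic-to-amplifier loop multiplies the level by some factor. If that factor is below 1, the ringing dies out; above 1, it runs away. A toy sketch (the gain values and pass count are illustrative assumptions):

```python
def feedback_level(loop_gain, passes=20, start=1.0):
    """Signal level after `passes` trips around the
    speaker -> microphone -> amplifier loop."""
    level = start
    for _ in range(passes):
        level *= loop_gain  # each pass re-amplifies the leaked sound
    return level

# Loop gain below 1: each pass loses energy, the system settles.
print(feedback_level(0.5))
# Loop gain above 1: runaway ringing.
print(feedback_level(1.2))
```

Moving the speakers in front of the mic (and pointing a cardioid’s rejection at them) is simply a way of pushing that loop gain safely below 1 without turning the system down.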
Equalization is likely the most well known audio effect processor as it’s used in everything from your car stereo to iTunes or Windows Media Player.
The short description of what it does: it boosts or cuts certain frequencies in the audio signal to get more or less bass, mid or treble in your sound.
There are various types of eq designs, and each can be used to particular effect to enhance or detract from the character of your sound source.
For example, the typical radio announcer voice might have a boost in the area of 250–500Hz. Or, if you’re using a cardioid microphone, the proximity effect will often give you too much in the low frequency range, and you’ll have to lower some of those frequencies to achieve a smooth sound.
In upcoming posts, I’ll begin addressing ways in which eq can help to alleviate problems in a public address situation, such as conferences.
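As a concrete example of cutting low frequencies, here’s a minimal one-pole high-pass filter working on raw samples. Real mixers use steeper filters and shelving eq for this job, and the 100Hz cutoff is an assumption on my part, but the idea of rolling off the proximity-effect boost is the same.

```python
import math

def high_pass(samples, cutoff_hz=100.0, sample_rate=44100.0):
    """One-pole high-pass filter: rolls off frequencies below cutoff_hz.

    A crude stand-in for a mixer's low-cut switch; useful for taming
    the low-end boost a cardioid mic's proximity effect can add.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # filter time constant
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)                       # smoothing coefficient
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

# A constant (0 Hz) signal is pure low frequency: the filter removes it.
dc = [1.0] * 2000
print(abs(high_pass(dc)[-1]))  # close to 0
```

Note this is a gentle 6dB-per-octave slope; a channel eq’s low band gives you the same kind of cut with more control over where and how hard it works.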
The eq in a mixer typically works on a channel-by-channel basis and is used to boost or cut the frequencies of whatever the input signal (microphone, CD player, instrument, etc.) happens to be.
In the midst of my transition from London to Toronto, I am of course reminded of how important backup procedure is in audio as with everything else in the digital sphere.
Just a reminder that with digital audio, as I’m sure with other digital media, it is recommended to make at least 2 copies in addition to the original, backed up in some hard format like DVD or CD. My new personal method, however, uses a larger hard drive partitioned into business and personal. My biggest concern with writable media is the unknown shelf life. Any comments?
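One way to ease that worry about copies silently going bad is to checksum them: hash the original once, then re-hash each copy whenever you like and compare. A minimal sketch (the file names in the usage are made up for illustration):

```python
import hashlib

def checksum(path):
    """SHA-256 hash of a file, read in 1 MB chunks so large
    audio files never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(original, copies):
    """Return the copies whose contents no longer match the original."""
    want = checksum(original)
    return [c for c in copies if checksum(c) != want]

# Hypothetical usage:
# bad = verify_copies("session.wav", ["backup1/session.wav", "dvd/session.wav"])
# if bad: print("re-burn these:", bad)
```

This doesn’t extend the shelf life of a DVD-R, but it does tell you a copy has rotted while you still have a good one to re-burn from.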
On a related topic, LaCie has come up with a novel approach, if nothing else, for multiple-hard-drive users like us audio engineers: a drive known as the Brick, which patterns itself after peg-and-hole Lego bricks.
From conversations with the fine folks at Carbon Computing in Toronto, I gather they’re pretty but no less noisy than any other drive. Something a rather long FireWire cable could remedy.
Hopefully, the downtime from the move won’t be too long. More to come about the audio signal chain.
I’ve drawn up two typical audio signal chain diagrams to illustrate how most people would be recording podcasts. Before getting into the finer points of signal processing, I thought it would be useful to discuss the use of a mixer in this chain.
1) MIXER BASED MONITORING
2) MIXER BASED RECORDING
In the monitoring diagram (1) shown, the audio signal goes from points A/B to Y/Z via the mixer. The mixer is used to amplify the individual input signals (microphone, CD/DVD, etc.) and “mix” them together to go to the main outputs. In many cases, there will be an additional stereo output (left and right channels) used to get the mixed signal to a recording device (2). That record output path will mirror what is going to the speakers (as shown in the diagram).
Mixers are an excellent way to simplify the recording and editing process, as the resulting audio file will only be stereo, although this also limits the ability to edit any individual microphone signals in post-production.
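To illustrate what the mixer is doing before that stereo file hits the recorder, here’s a toy sketch that sums a few input channels, each with its own gain and pan, into one left/right pair. The linear pan law and the sample values are simplified assumptions, but it shows why the individual channels can’t be separated again afterwards: they’ve been added together.

```python
def mix_to_stereo(channels):
    """Sum per-channel (samples, gain, pan) inputs into one stereo pair.

    pan 0.0 = hard left, 1.0 = hard right (simple linear pan law).
    Like a mixer's main/record outputs, the result is a single stereo
    mix; the original channels are no longer recoverable from it.
    """
    length = max(len(s) for s, _, _ in channels)
    left = [0.0] * length
    right = [0.0] * length
    for samples, gain, pan in channels:
        for i, x in enumerate(samples):
            left[i] += x * gain * (1.0 - pan)
            right[i] += x * gain * pan
    return left, right

mic = ([0.5, 0.5], 1.0, 0.5)      # centred microphone, full gain
music = ([0.2, -0.2], 0.5, 0.0)   # quiet music bed, panned hard left
L, R = mix_to_stereo([mic, music])
```

If you want per-microphone control in post, you record each input separately (multitrack) instead of, or alongside, this summed stereo feed.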