Another week and another assignment for my Intro to Music Production class with Coursera.org. I’ve really enjoyed this class, but I must admit it’s been a real challenge trying to get all the information to stick in my head.
This presentation discussed the “Usage of the most important synthesis modules.” Just click on the Presentation Art below to make the jump to view my prezi. I hope you enjoy it!
Greetings music lovers! This week my assignment with Coursera for the Introduction to Music Production class is to “Compare and contrast an algorithmic and convolution reverb. Demonstrate the difference and the important features in both types of reverb.”
I’ve tried on several occasions to embed my presentation but have had no luck. Please visit my prezi by clicking the Prezi Artwork below to view the assignment. And, thanks for visiting and taking the time to read through my work.
This is the fourth assignment in a series of posts I’m writing for the online Music Production class I’ve been taking. For this week’s assignment I’ve chosen to prepare a presentation to “Explain distortion and give examples where it can be both musical and problematic.”
Click on the Prezi Artwork below to enjoy my presentation and perhaps learn a little something too.
God bless and thanks in advance for any input you might like to add.
In my last post I briefly took a look at Digital to Analog Conversion. Today I’d like to discuss effects. Not guitar pedal effects, which in my case would probably make more sense to those of you who know me well, but the Digital Audio Effects used when configuring a digital mixing board: their categories, the plugins in each category, and the property of sound each relates to when using a DAW (Digital Audio Workstation).
This is the third post in a series devoted to completing assignments for an online Introduction to Music Production class. I hope you enjoy reading about what I’m learning and perhaps learn something along the way. Any input on your part is appreciated. Thanks in advance for taking the time to read through the material.
Categories of effects: Teach the effect categories including which plugins go in each category and which property of sound each category relates to.
Categories of Effects: Plugins and Properties.
The process of recording, mixing and editing music has come a long way. Those who have gone before us paved the way to great music production by giving us some pretty awesome tools, or plugins, that help us get the sound we’re hearing in our heads into the airwaves and into the ears of our audience. The complex spectrum of audio effects at our fingertips is simplified a great deal when we understand their categories and the most appropriate way to configure them into a signal flow based on their uses.
Digital Audio Effects fit into three basic categories in digital processing that relate directly to some basic elements of sound itself. These three categories are
Category 1: Dynamic Effects
Category 2: Delay Effects
Category 3: Filter Effects
Dynamic effects plugins modify amplitude over time. You may recognize these effects as gates, compressors, expanders and limiters. They can give the listener a sense of emotional intensity or help the music “tell the story” by increasing or decreasing the dynamics.
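To make that concrete, here is a minimal compressor sketch in Python. This is my own toy illustration, not code from any real plugin; the threshold and ratio values are arbitrary, and a real compressor would also smooth its gain changes with attack and release times.

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce the level of samples above the threshold.

    Anything louder than `threshold` is scaled down by `ratio`,
    shrinking the dynamic range, which is what a compressor does.
    """
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # gain reduction applied only to the portion above threshold
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

# Quiet samples pass through; loud peaks are pulled down toward the threshold
print(compress([0.1, 0.6, 1.0]))
```

Set the ratio very high and you have a limiter; invert the logic so quiet material is reduced instead and you are in gate and expander territory.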
Delay effects plugins simulate sound propagation, the way sound travels through and around objects, to give us a sense of space. Delay effects like chorus, phaser, flanger and reverb make a recording sound as though it were played in a large or small space. If you want your audience to get the feeling they are in a concert hall or perhaps outdoors, delay effects can accomplish it.
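As a rough sketch of the idea, assuming nothing about any particular plugin, a delay effect just mixes the signal with a time-shifted, quieter copy of itself:

```python
def echo(samples, delay_samples=3, feedback=0.5):
    """Mix each sample with a delayed, quieter copy of the signal.

    A long delay reads as a distinct echo; very short delays,
    modulated over time, are the basis of chorus and flanger effects.
    """
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += out[i - delay_samples] * feedback
    return out

# A single impulse produces a quieter repeat three samples later
print(echo([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```

Because the loop reads from the already-processed output, each echo feeds back into later ones, which is roughly how repeating echoes and the dense reflections of a reverb build up.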
Filter effect plugins control something called timbre (ˈtambər), the particular sound quality of an instrument such as a trumpet, a violin or a voice. When you adjust highs and lows in the DAW you are using filters. The most common filters are the parametric and graphic equalizers, or EQs. Other filters include high-pass, low-pass and band-pass filters.
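Here is a minimal sketch of the simplest filter of all, a one-pole low-pass. It’s only an illustration of the principle; a real EQ plugin is far more sophisticated, but the core idea of letting some frequencies through and holding others back is the same.

```python
def low_pass(samples, alpha=0.2):
    """One-pole low-pass filter: smooths out rapid (high-frequency)
    changes while letting slow (low-frequency) changes through.

    `alpha` sets the cutoff: smaller values darken the sound more.
    """
    out = []
    prev = 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

# A rapidly alternating (high-frequency) signal is heavily attenuated
print(low_pass([1.0, -1.0, 1.0, -1.0]))
```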
My first assignment was to discuss signal flow in a home production studio set-up. Part of the signal flow which I did not discuss in depth included the flow through the DAW itself. Knowing where to position which effects can help a lot when producing music, especially when mixing multiple tracks.
For instance, let’s assume you’re mixing several background vocals. You’ve equalized them carefully, but now you want your listeners to feel as though the singers had performed in a great cathedral, so you’d want to add a delay effect plugin. Trying to mix delay into each singer’s track individually and keep it consistent between the tracks would take some time to accomplish, but if you routed those tracks into one sub-track you could process them all at the same time, equally, and get that cathedral sound without all the fuss of mixing that plugin into each track.
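Here is a toy Python sketch of that bus idea. The `attenuate` function is just a hypothetical stand-in for whatever delay or reverb plugin you’d put on the sub-track; the point is that one effect applied to the summed bus treats every contributing track identically.

```python
def mix(tracks):
    """Sum parallel tracks sample-by-sample into one sub-track (bus)."""
    return [sum(samples) for samples in zip(*tracks)]

def attenuate(bus, gain=0.5):
    """Hypothetical stand-in for a bus effect; applied once to the
    sub-track, it affects every routed track equally."""
    return [s * gain for s in bus]

vocal_1 = [0.2, 0.4]
vocal_2 = [0.1, 0.1]

# One pass over the bus instead of one pass per vocal track
print(attenuate(mix([vocal_1, vocal_2])))
```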
So, you see, having an understanding of when and where to use which effect can make a huge difference in time management in the studio as well as improve accuracy and efficiency in the processing stages.
In reflection, I’ve learned so much as I’ve contemplated and researched this topic, and my appreciation has grown for those who have a great knowledge and understanding of it. Learning these categories and knowing where the plugins fit helps me get my head around some complexities that would otherwise be out of my reach! And, in the end, it’s not so overwhelming.
Thank you again for taking the time to read through my topic and for sharing your knowledge with me!
Hello musicians and friends! My name is Cosima and this is my second assignment for Intro to Music Production online at Coursera.org. For this assignment I’ve chosen to discuss the analog to digital conversion process. I spent some time reading up on the process and enjoyed learning something new. I hope my post will spark some interest in this topic for my readers. Thanks for visiting my blog and reading my post.
Analog to digital conversion process
In my last post I indicated that the source of an audio signal in my studio generally is a voice. The sound of that voice affects the air and creates longitudinal pressure variations that are picked up by a microphone, which converts those variations into voltage variations known as an analog signal. That’s great for live performance, but if we want to send that signal into a computer’s digital audio workstation (DAW) we’ll need to convert the analog wave signal to a digital signal, or data.
The only thing the computer can deal with is strings of numbers: things represented in ones and zeros, called binary information. So there’s a process to go from a continuously variable sound into a stream of ones and zeros, and that process is called sampling. An analog signal is a waveform, a continuous stream of data that the computer can’t recognize, whereas digital data is discrete, individually separate and distinct. To convert the analog wave into digital data of ones and zeros I’ll need to use the analog-to-digital converter in an audio interface device.
The audio interface uses a common method of converting analog to digital that involves three steps: Sampling, Quantization and Encoding.
The analog signal is sampled at a regular interval, making many, many measurements per second. The most important factor in sampling is the rate at which the analog signal is sampled: it must be over 40,000 times per second to accurately represent the continuously variable signals in the air in digital form. The higher the sampling rate, the higher the frequency that can be represented accurately in the digital domain. That highest representable frequency is known as the Nyquist frequency, and it is just half the sampling rate. So a sampling rate of 44,100 hertz can accurately represent frequencies up to half of that, 22,050 hertz. The human ear can hear up to about 20,000 hertz, so the CD standard sampling rate of 44,100 hertz will accurately represent everything we hear as human beings.
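A quick Python sketch can show the Nyquist relationship and what the sampling step amounts to mathematically. This is just an illustration of the idea, not how any real converter chip works.

```python
import math

SAMPLE_RATE = 44100          # CD-standard samples per second
NYQUIST = SAMPLE_RATE / 2    # highest representable frequency: 22050 Hz

def sample_sine(freq_hz, n_samples):
    """Measure a sine wave at evenly spaced instants, as sampling does."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

print(NYQUIST)               # 22050.0
print(sample_sine(1000, 4))  # first few measurements of a 1 kHz tone
```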
Sampling yields a discrete, individually separate and distinct, form of the continuous analog signal. Each discrete sample shows the amplitude, the extent of a vibration or oscillation, of the analog signal at that instant. Quantization is done between the maximum amplitude value and the minimum amplitude value: it is the approximation of the instantaneous analog value to the nearest available step.
In encoding, each approximated value is converted into a binary format of 1s and 0s that the computer can recognize, which we can then manipulate in our DAW for the purpose of music production.
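Here is a minimal sketch, in Python, of what 16-bit quantization and binary encoding might look like. The step count (65,536 levels) matches CD audio, but the code is just my illustration of the idea, not the actual algorithm inside an audio interface.

```python
def quantize_16bit(sample):
    """Approximate a continuous sample in [-1.0, 1.0] as one of
    65,536 integer steps, as 16-bit quantization does."""
    sample = max(-1.0, min(1.0, sample))     # clamp to the valid range
    return round(sample * 32767)

def encode(value):
    """Encode the quantized integer as a 16-bit binary string."""
    return format(value & 0xFFFF, '016b')    # two's complement bits

level = quantize_16bit(0.5)
print(level)          # 16384
print(encode(level))  # 0100000000000000
```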
Thanks again for taking the time to read my post! Please feel free to leave comments. Your input is appreciated.
(sources include http://www.tutorialspoint.com and Wikipedia)
Hello, my name is Cosima Ybarra and I live in Southern California. I’ve written this post to fulfill the first assignment for the Introduction To Music Production class I’m taking online at Coursera.org. This assignment will cover what a simple recording signal flow looks like in my “home studio,” which really is a little workstation in my office at home. My hope is that I could share this information in such a way that someone could gain a bit of understanding regarding using signal flow in a low cost but effective home recording studio.
As a voice teacher I generally record my students’ semester project songs for their semester final. In most cases I just run an Apogee cardioid condenser microphone into my iPad with a USB cable, use the Garageband application and then run a cable out to some commercial home theater speakers. This simple setup works fine for my needs but it’s certainly not very professional.
iPad with Garageband
Logitech Home Theater Speakers
In this assignment I hope to explain signal flow in a bit more depth. Thanks for reading my post. I appreciate constructive input so please share your thoughts with me… Thank you.
Introduction to Music Production – week 1
According to Wikipedia, “Audio signal flow is the path an audio signal takes from source to output, including all the processing involved in generating audible sound from electronic impulses or recorded media.”
The gear from source to output in my demonstration includes:
A Digital Reference DR-VX1 Dynamic Cardioid Vocal Microphone
A standard XLR microphone cable
The M-Audio DUO (2×2 audio interface) Pro USB Mic Preamp with S/PDIF
A USB cable with a device end
A MacBook Pro Computer
A set of Bose noise canceling headphones
The source of an audio signal in my studio generally is a voice. The sound affects the air and creates longitudinal pressure variations that are picked up by the microphone, which converts those variations into voltage variations. The dynamic cardioid mic is designed to respond to that sound and convert it at a low amplitude, so I need to use the preamp to boost that up to line level once the variations move through the balanced XLR cable and into the audio interface. Then I adjust the gain, making sure the indicator light doesn’t go into the red zone.
Once that’s set, the signal can continue to flow through the audio interface, where it is converted into a digital signal. This stream of ones and zeros is sent to my computer via a USB cable with a device end to be processed in the Digital Audio Workstation, which in my case is Garageband. I’ve not explored the limits of Garageband, but in the digital audio workstation we can make adjustments to the timbre and dynamics, mix and edit. Once that’s completed, the signal is almost ready for listening, either through my computer’s audio output to my headphones or by sending the signal back to my audio interface. In either case the signal is processed through a digital-to-analog converter, from the stream of ones and zeros back to analog, then out to my headphones for my enjoyment!
Thanks again for taking the time to read through my assignment.
I recently got a call from Ric Flauding, one of the musicians without whom my CD would not have been possible. Ric composes, orchestrates, arranges and plays guitar beautifully. His knowledge and experience far exceed my own. And in the few hours that I’ve spent with him and his wife Denise I’ve grown as a musician and person, and come to truly appreciate them as friends and fellow worshippers.
Anyways, Ric called me the other day about an article he was writing on the capo. We had a great conversation about using capos and about one of my pet peeves: offering our best to God as musicians. I’ve said it on my blog before, “If we are created in the image of God and He is the creator of the universe, then why does the Christian music scene seem to be following behind the secular?” And, “I wonder what the great writers of the music of the church, like Handel, Vivaldi, Mendelssohn, Bach and others, would have to say about the quality of some of the music we call great.” (OK, so that’s another post)
So, as I said we had a great conversation. Ultimately, we talked about capos and the article. To read what Ric has to say visit Soniccontrol.com. It’s informative and covers some very good, very practical instruction.