The MOOC – *cheesy music* – Education for the Futureeeeeeee

Education to the masses

One of the more interesting and out-there class projects I completed recently was to design a method of recording classes and distributing them to students at home and abroad. That project was well suited to the University of Salford and its MediaCityUK campus, which has the equipment and the student talent on hand that would be required: television students to record the class with a couple of cameras and edit the footage, and a sound department to make sure the audio goes well too. What a perfect way to combine a practical project and assessment towards a student's grade while also vastly improving the reach of the University.

But I suppose there are problems associated with that. Modules change year by year, or at least they should if a University takes student feedback in any way seriously! Content will change, assessments will be adjusted and delivery tweaked. Add to that the interesting challenges associated with a live recording of a class! In the quest to provide the best education, these are problems that will get sorted, I am sure.


Listening and the Modern World

Our world has never been louder. Cities are full of loud engines and angry motorists making good use of their horns. MP3 players and smartphones are widespread so our music or radio can follow us anywhere.

How loud is our world, though?

Music Production – Back to Basics

I am left a bit inspired and overjoyed after watching the Sound City documentary, which was written and directed by Dave Grohl.

The documentary follows the Los Angeles based Sound City Studios from its birth in the late 60s through to the present day and its recent closure. It talks about the various musicians, producers and engineers who have worked at and been touched by the studio. The list of albums to come out of this studio is amazing. The Kyuss album “Welcome to Sky Valley” and the Queens of the Stone Age album “Rated R” take special places in my heart, and since I found out that their sound comes from the Sound City attitude and ethic of doing things, I am left motivated in my own decisions regarding music production workflow. I do not want to give too much away, but there is one point I want to talk about in this blog: that attitude and ethic.

24 Tracks

The most thought-provoking point the documentary raises for me is the development and modern use of digital audio equipment compared to the Sound City days. The studio itself revolved around a very special Neve mixing console.

Having no real knowledge of the Sound City story before this documentary, I sighed and rolled my eyes when Pro Tools was mentioned. Happily, the film went on to criticise the DAW. I am not going to start slagging Pro Tools off, because that is not my point. That line was one of the first indications of the theme of the documentary: that live, emotional and powerful music should be the aim of production, and that the digital revolution can have, and has had, some very negative impacts on this through misuse. It outlined what digital audio workstations as a whole have allowed anyone to do, and to do cheaply. This could be a factor in the production issues we have had to deal with over the years, such as the loudness wars and overly produced, polished and edited music.

This is something I have believed for quite a while. It has never been easier to do some tracking and then edit or generally manipulate the material until it is deemed perfect. In essence, we are opening the floodgates for this concept to be pushed to an extreme.

Newbies, through no fault of their own, can be sucked into the “fix it in the mix” mindset. Word processors and laptops do this too; I can now say to people that I am a writer. I am a blogger; I write important opinion pieces and post them on a fantastic server. Well, I am not that great. I don’t have perfect grammar, and I am sure the spell checker has made some confusing corrections for me in this regard. And I am not for one moment going to tell anyone I am an authority on anything! Where I think my blog is decent, at the other extreme you can have utterly horrendous blogs which are just a means for someone to massage their ego, or which simply write about things badly. With fancy themes, they can appear to be great.

To bring this back to an audio context: it has never been easier for people to write what they feel and post it to an audience of potentially millions. Digital has likewise revolutionised the audio industry, allowing anyone to become a music producer. This dilutes the first principles which many people have worked extremely hard to develop and work with. This dilution is, in some ways, something every single one of us contributes to. We all have phases of not knowing what we are doing, so we will always produce something which is simply not done the way it should have been.

In the audio world, vocals can be heavily auto-tuned, and bass and drum takes can be edited to the point of technical perfection, but all these examples result in musically deficient music. What the Sound City documentary told me was not to let the possibilities of digital pull me into a place where I mix and produce music without the music in mind. Record live, overdub only what you really need to, and keep the energy intact.

A lot of people say click tracks kill the musicality of a piece. I would argue that recording music track by track does this and, as it happens, clicks are used a lot in that process and get the blame. Something I have done a lot is use the least number of tracks I can. In the ideal recording setup I would love to record live, live and live again, with minimal and only necessary overdubs. I’d keep the editing to a reasonable amount, learn when enough is enough, and tell the musician we need to record it better. Okay, maybe that is just so my job is easier, but I can’t help thinking that a song which could sound amazing spread across 24 tracks would sound much better than a 40-track traffic jam.

Does it have to be Analogue?

The documentary talks about how the Neve mixing board contributes to the music. Where I sometimes felt I was being told that the mixer was the reason, I ended up appreciating that it is the analogue way of recording that stitches musicians together. It is not because it was recorded on tape, nor because that particular mixing board was being used. For me, it is because the limitations of a fully analogue system meant bands were recorded live, overdubbing only when really necessary, and all I can say is that the music we hear being recorded this way in the documentary is just fantastic. It may not float everyone’s boat, but as an engineer I can feel the feel. I can sense the energy, the spontaneity, the music. What needs to be realised is that this production is not about DAW bashing, and it is not an anti-technology group of people whining about the digital domain. It is an attempt to get the audience to appreciate the methodologies that the tape medium allowed, or probably forced upon us. Even though digital has made things extremely easy, we should not let these methodologies go.

Conclusion
So, for anyone reading: please don’t get too caught up in the possibilities of digital. It is only going to offer us an ever-increasing number of possible track counts and plug-in instances. This documentary shows us all how it used to be done, and I think what we have to face is that maybe we reached the peak, and musically best, way to record and mix just as the digital revolution began. Maybe all the digital revolution has done is allow us to build on and streamline aspects of the old way of doing things, at the expense of letting more misuse and technique abuse in.

The title of this blog post is “Back to Basics”, though I hope you can appreciate that these are not basics at all. They are extremely complex recording and mixing techniques driven by experience and genius, developed over years of work which demanded certain technological improvements that digital has since delivered.

Who are we to ask for unlimited track counts and millions of plug-in instances with surgical editing capabilities when we are clueless about recording and mixing in a fashion which has produced some of the best-sounding music ever? Who knows, maybe fighting the noise floor was a much more significant and positive constraint than any increase in track count or auto-tuner could ever hope to be.

Where does this change our development focus? Well, readers, where do you think?

Thanks for reading!

www.soundcitymovie.com

[youtube=http://www.youtube.com/watch?v=HQoOfiLz1G4]

Why Listening Matters

I am not a heavy YouTube user. At most, I will watch chunks of concerts by my favourite musicians or possibly a long interview. That said, I have been glued to YouTube for about two and a half hours listening to a round-table discussion with Steve Berkowitz (Senior Vice President of Sony Legacy), Greg Calbi (Mastering Engineer), Evan Cornog (Audiophile), Michael Fremer (Editor of Stereophile Magazine), Kevin Killen (Record and Mix Engineer), and Craig Street (Record Producer). The panel spans the music production process from inception to playback.

Music as a Rewarding Experience

First, they talked about how people need to set aside time to listen to things. In our modern listening world, everything is done in a rush; probably more so since this video was recorded. Services like Spotify and TuneIn are amazing for allowing us to listen to what we like, when we like. However, the chances are that we will mostly listen while we do something else. Music is not seen as the enriching experience it should be.

Technology and Music

Interestingly, the question was raised of whether technology would advance and the standard of low-quality MP3 would be replaced by something better. The panel were mainly positive that it would; however, one person felt that there is no demand for anything better because no one knows that better exists, which is a fascinating thing to think about. The term “sonic junk food” was extremely apt. I would be confident in saying that the breakthrough of being able to store more music than you would ever want in your pocket greatly outweighed any inherent sonic disadvantages. One good sign is that since this was recorded, Spotify and its premium 320 kbps streaming service have taken quite a hold, which is definitely a great place to start. Hopefully, this will become the norm for free accounts too.
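To see why storage won that trade-off, a quick back-of-the-envelope sketch helps. The figures below (128 kbps for an early low-quality MP3, 320 kbps for a premium stream, 1411 kbps for CD-quality PCM) are my own illustrative assumptions, not numbers from the panel:

```python
# At a constant bitrate, minutes of audio per gigabyte is just
# capacity divided by bytes consumed per minute.

def minutes_per_gb(bitrate_kbps: float) -> float:
    """Minutes of audio that fit in 1 GB at a constant bitrate."""
    bytes_per_second = bitrate_kbps * 1000 / 8
    return (1_000_000_000 / bytes_per_second) / 60

for kbps in (128, 320, 1411):  # low MP3, premium stream, CD-quality PCM
    print(f"{kbps:>5} kbps: ~{minutes_per_gb(kbps):.0f} minutes per GB")
```

Roughly a thousand minutes per gigabyte at 128 kbps against about ninety-five at CD quality: it is not hard to see which one sold pocket music players.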

The panel asked whether the digital recording process could recreate old analogue recordings. For me, this is one of the audio myths. If a single performance is simultaneously recorded with analogue and digital (properly), I believe both recordings will capture the performance adequately. In fact, I would go as far as to say that the democratisation of audio production is what gives digital systems a more negative reputation than they deserve, as productions are more often than not low quality. That same democratisation is also the most positive development in terms of promoting music which may not otherwise have gotten out into the world.

An interesting point touched on in the video is how DAWs nowadays give users almost limitless possibilities, with huge track counts and processing that would have been considered magic in the olden days! Perhaps we have reached a point where we need to combine the working methodologies of analogue and digital, which is what I do every day as a sound engineer. If a song can’t be put down in 24 tracks, then chances are, there needs to be a rethink!

The Video

The video has been muted by YouTube because it played real songs as musical examples. I have re-uploaded the video without these musical pieces in the hope that YouTube does not mute it again, so apologies if you are disappointed by the lack of musical examples.

 [youtube https://www.youtube.com/watch?v=lAz1aObRkjM&w=560&h=315]

First World Problems – Tinnitus



As technology advances, it becomes possible to play music through ever more efficient amps and loudspeakers. The introduction of neodymium magnets to headphones is one of the big leaps in clarity, which I noticed in a not-so-scientific way by upgrading my Sony MDR 150s to the 300s. Loud environments like city centres and public transport now offer no resistance to my enjoyment of music. Is that really a good thing?

These headphones offer more volume and better quality of sound. In my opinion, better quality doesn’t come from louder music but from a fundamentally clearer sound, which these headphones offered; it’s the reason I have had them for so long. However, in today’s world of rapidly developing tech, and with a near-total absence of education from manufacturers to consumers, I feel we are rapidly deafening ourselves.

With it ever easier to get hold of a portable music player and cheap-as-chips headphones, we only have to walk around town to hear music blasting into people’s ears; to my dismay, I have witnessed this at the University of Salford’s MediaCityUK campus. Technology has to advance, and I don’t blame manufacturers for developing more efficient means of reproducing sound.

To give a minor comparison, bright screens are the visual equivalent for me. However, phones, laptop monitors and probably some televisions offer automated brightness which reacts to where you are and what the screen is being asked to display. For example, a dark-coloured page in a dark room will display quite a bit brighter than a sheer white page in the same room. This really helps solve the problem. Is there an audio equivalent?

No. Music is inherently so transient that we can’t “automate the brightness”. Compression kills the musicality, for example. There is no real way to flatten the volume of music that won’t anger lovers of music like me.

One great feature I have seen is in the music library program MediaMonkey which is a fantastic iTunes and Windows Media Player alternative. This has the option to:

1) Analyse each audio file in your library and apply an amplification or attenuation during playback. This feature is very handy, as it stops you from raising the volume of a quiet track and then getting your ears blown off when something loud comes on after it. It is similar to Apple’s Sound Check on iPods; both raised a Damien Rice song by about 3 to 6 dB while attenuating a Foo Fighters song by a colossal 12 dB. Can you imagine listening to the Damien Rice song on a loud bus and being greeted by “Good Grief” by the Foos? Though to be honest, I don’t notice Apple’s Sound Check doing all that much compared to the cheeky monkey.

2) MediaMonkey also allows you to physically write this adjustment to the file, replacing it with a version with the attenuation or amplification applied, which means it’s “safe” on anything you play it on. Basically, it re-saves the file louder or quieter than it used to be. The Monkey can also level the volume this way when you copy a CD to your computer.

3) A further feature, which is the basis of 1 and 2, is that you can set a target dB level for it to attenuate or amplify towards. We have to remember that it is essentially normalising the track: it finds the loudest part and raises or lowers it to the target. A badly mixed song with one very loud snare hit, for example, could be lowered, but the rest of the song would get quieter too, bringing back the need to turn quiet songs up and then being blasted when a loud bit comes on.
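The core of all three features can be sketched in a few lines. Note this is a bare peak-to-target illustration; MediaMonkey and Apple’s Sound Check actually use a loudness analysis rather than a raw peak, and the -12 dBFS target here is just an assumed example value:

```python
import math

def normalisation_gain_db(peak_sample: float, target_dbfs: float = -12.0) -> float:
    """Gain (dB) needed to move a track's loudest sample to the target level.

    peak_sample: absolute peak of the track, on a 0.0-1.0 full-scale axis.
    """
    peak_dbfs = 20 * math.log10(peak_sample)
    return target_dbfs - peak_dbfs

# A quiet acoustic track peaking at -18 dBFS gets boosted ...
quiet_peak = 10 ** (-18 / 20)
print(f"{normalisation_gain_db(quiet_peak):+.1f} dB")  # +6.0 dB

# ... while a track slammed to 0 dBFS gets pulled down.
print(f"{normalisation_gain_db(1.0):+.1f} dB")         # -12.0 dB
```

The drawback from point 3 falls straight out of the maths: the gain is set entirely by the single loudest sample, so one rogue snare hit drags the whole song down with it.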

Is this a reason FOR the rebel alliance (audiophiles) LOSING the Loudness Wars to the mighty Federation (idiotic music labels who are responsible for modern pop)? This is where essentially everything gets squashed into a song with little or no dynamics or “soul”. Not that anything produced by xFaktaaa is good in the first place. I’m not going to tackle this here, but it’s a consideration.

Back on topic: on paper this solves a lot of the problems we encounter, but it does not stop the slab of flesh and bone (the silly person) from simply turning the volume up. Feature 2 may even things out, but I’ve already mentioned its drawbacks.

How about a standard?
Drawbacks aside, we have standards for sampling, frame rates, timecodes and surround arrays, proving we can write to a formula, even if the resulting papers make for difficult reading (see the tinnitus link).

Why can we not have a headphone standard? MediaMonkey offers flexible normalisation of tracks so volume becomes fairly level, so why can’t headphones be produced to a standard where:

if an audio file has been normalised to a target dBFS value, then a set of headphones which meets this new standard, connected to a media player playing at full volume using Apple’s Sound Check or similar, will reproduce sound at a safe SPL.

In other words, with these special headphones and a music player designed in a similar way, the loudest it could ever get is a safe level for our hearing.

This would mean music players need to be calibrated to a certain spec to make sure they output a signal that cooperates with the idea of calibrated headphones. No big problem in my eyes.
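A back-of-the-envelope calculation shows what that calibration would pin down. All the figures here (1.0 V maximum player output, 32 Ω / 100 dB-per-mW headphones, 85 dB SPL as the “safe” ceiling) are illustrative assumptions of mine, not part of any real spec:

```python
import math

def max_spl_db(sensitivity_db_per_mw: float, impedance_ohm: float,
               player_vrms: float) -> float:
    """Worst-case SPL when the player is turned all the way up."""
    power_mw = (player_vrms ** 2 / impedance_ohm) * 1000
    return sensitivity_db_per_mw + 10 * math.log10(power_mw)

# A typical uncalibrated combination lands well past safe listening levels.
print(f"{max_spl_db(100, 32, 1.0):.0f} dB SPL")   # 115 dB SPL

def player_voltage_for(target_spl: float, sensitivity_db_per_mw: float,
                       impedance_ohm: float) -> float:
    """Maximum output voltage a calibrated player could be allowed."""
    power_mw = 10 ** ((target_spl - sensitivity_db_per_mw) / 10)
    return math.sqrt(power_mw / 1000 * impedance_ohm)

print(f"{player_voltage_for(85, 100, 32) * 1000:.0f} mV max output")  # 32 mV
```

In other words, the standard would only need to fix two numbers per pairing: the headphone’s sensitivity and the player’s maximum output voltage into that load.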

This approach would cause problems for people who love their dubstep, etc., but I think we have to admit that our modern world is so loud there are places where we simply can’t listen to music safely.

Let’s do it!