
Joe Finlan

Members
  • Posts

    78
  • Joined

  • Last visited

Everything posted by Joe Finlan

  1. Doing some audio post workflow tests with an editor using DaVinci Resolve 11 and Final Cut Pro. Production shot some tests using an SD 664 and a BM Ursa. Fed tc from the SD to the slate and the Ursa. I was supplied with a copy of all the original source audio. In Resolve the editor synced the 664 source audio files to the Ursa's RAW video clips. Everything synced as it should. He then exported the video (as proxies) and synced audio for use in FCP. On the FCP timeline everything is in sync, but the audio clips have been renamed with their associated video clips' titles rather than retaining the source audio files' names (scene#, take#). They sent me an xml file of a rough edit but of course the names of the audio clips no longer match the source audio files, so I can't match them. Does anyone have any idea how the editor can keep the source audio file names when they export the synced files from Resolve to FCP so the xml file will show the correct file names that I have? I've perused the BM forums and searched around the net but haven't found an answer. Any insights would be greatly appreciated. (One possible script-based workaround is sketched after this list.)
  2. Well what do you know, there was a 2nd page!
  3. So, the director is expecting you to record what would otherwise be the sound of the actor singing & playing the piano live in the room with a boom mic. That's as simple as finding the best spot to place the boom mic to give you the best possible balance between voice, piano and room. However, in this instance, a speaker will be replacing the piano as the piano sound source. The same technique applies. Ideally, if the shot allows, place the speaker flat on its back (facing up) on the floor directly behind the piano bench. Adjust the volume of the speaker to give the desired balance between the voice and the piano sound and record it. Take a dry feed of the electric piano and use a lav to record the voice to separate tracks for possible use in post. If you use a high quality monitor speaker it should come close to sounding much like the real thing. You might also consider patching an eq between the EP and the amp to give you some tone control. Knowing the level of productions you work on, Jeff, they should be able to find some time and dollars to allow you, the actor and the pianist to get together in a room with a real piano and run through the scene in advance. This will allow you to hear how the balance should sound between voice and instrument. Maybe even make some test recordings of the voice-and-piano and voice-and-speaker setups so that you and the director know what to expect.
  4. I just finished mixing a low budget film using the D8's for LCR, D5's for the surrounds and a JBL sub for the LFE channel and checking bass management. I quite like them and the mix held together at both a theatre showing for cast and crew as well as on a home 5.1 system. I found them quite easy to mix on. Excellent imaging and clarity and the bottom end sounded fine without the sub though the sub does open up the bottom 2 octaves. They really are a steal at their price.
  5. I recently finished a film shot on the Red Epic and I had insisted the tri-level sync be plugged into the genlock input to ensure the tc was synced to the start of each frame even though we were shooting single camera. By chance I was talking to the editor today and he actually mentioned how well everything was syncing up. This may be the secret of successfully putting tc on the Red.
  6. Calvin Russell - Crossroads http://youtu.be/xLUMmp0tLJA
  7. Ah, my age is showing, lol. It has been some years since I last did post for tv. That or the stations here in Canada hadn't yet moved into the digital age with their transmitters.
  8. It still surprises me how many misconceptions and misunderstandings concerning recording levels exist within our industry. While it is true that there are differing operating procedures between film and eng/tv work, as production sound recordists/mixers we should be well aware of the whys and wherefores of what is good practice when recording for a particular end medium and how these affect the post production chain. Having had a background in music recording and audio post production before moving into production sound, I knew what kind of levels I needed to record in the field in order to make the post process work cleanly and efficiently. If I'm doing tv, eng or corporate video I'll record at 48K/16 bit with my average RMS levels in the -24 to -20 dBFS range and peaks averaging somewhere in the -12 to -10 dBFS range, with the occasional louder peak. Why do I do this? Because when I put my audio post hat on that is exactly where I want the levels to sit in the final mix. Every tv delivery spec I've ever seen stated that the maximum peak level must not exceed -10 dBFS and that average RMS levels should not exceed -20 dBFS. What's the benefit of doing this? I, or any other post engineer, don't have to waste time in post either increasing or decreasing levels that were recorded too low or too hot for the project. I've never had a video editor tell me my levels were off and I've had feedback from posties complimenting me on my tracks because they didn't have to spend time "fixing them in the mix". If I'm working on a film project then I will adjust the way I work. I record at 48K/24 bit with the average RMS dialogue levels in the -34 to -30 dBFS range and the peaks around -24 to -20 dBFS. Why do I do this? Once again it comes back to the final mix. Mix stages are calibrated to -20 dBFS = 85 dB SPL at the mix position. In the real world the SPL of conversation is around 65 dB, or about -40 dBFS if you think of your meters as an SPL meter (the arithmetic is worked through in a short sketch after this list). A dialogue track recorded with an average RMS level of -20 dBFS is going to sound unnaturally loud on the mix stage if played back at unity gain. Tracks recorded 10 to 15 dB lower tend to "sit" at a volume level that sounds more natural in the film mix. (Not only that, but I've now got more headroom for that unexpected change in performance delivery.) Once again in post the mixer doesn't have to spend extra time adjusting overall levels up or down prior to mixing; they're already in the ballpark. And to those who think they need to record dialogue as close to the top of the dBFS scale as possible I'll say this: "you've gained nothing and lost something". You've gained nothing because the level you record at does not affect the frequency response of the sound in any way, and the noise floor of your mic/preamp and recorder is already well below the signal you are recording. The days of tape hiss are far behind us. What you've lost is the headroom necessary to handle that unexpected loud exclamation or sound in the background that now causes your limiters to clamp down excessively and become audible or, worse yet, to distort. You're walking a tightrope without a net. Though, as production sound recordists, we are at the beginning of the audio production chain, how we do our job has an effect on the entire chain. The better we understand that chain and its requirements the better we can provide tracks that need the least amount of attention throughout the post process. And that keeps everyone happy, even Henchman.
  9. I use a Browning Camping Directors Chair XT. Light, sturdy and cheap.
  10. Best response I've seen so far re PT 11 and its features: "So 2007".
  11. Can we say "Pro Tools"? Back in the early days of DAWs they bought their market share by advertising everywhere and in everything. You couldn't open a mag without seeing their ads. Eventually it became an almost generic term for a DAW. Unfortunately, as a result, far better systems fell by the wayside as non-techy types insisted it had to be PT. Proof that advertising can sell an inferior product.
  12. Having spent the better part of the first decade of my professional audio career in recording studios (with mic closets full of some of the finest mics made and a proper monitoring environment) I came to learn not only what "good" sound was but also the sonic characteristics of different microphones. This "ear training" allowed me to determine which mic's "colouration" best fit a particular application. When I eventually moved into production sound work I had a base reference as to what I wanted to hear, sonically, and a good understanding of how polar patterns would affect the direct to ambient sound ratio and how the patterns could be used to reduce unwanted sounds in the recording environment. During my time in production sound I've used Sennheiser 416's, AKG hypers and short shotguns and Sanken short shotguns. Each had their own "sound", some of which I liked and some of which I didn't. However, the first time I put a Schoeps on the end of a boompole and opened the pot my eyes widened and my jaw dropped. (I've seen the same reaction from other soundies on their first listen to a Schoeps.) You see, the Schoeps had no "sound", it was totally transparent. I was literally astounded that a microphone could be that linear in its response and that the linearity held through most of its off-axis response. Because the 41 is a supercardioid, its pattern is a little broader than a hyper pattern, which is why it works well in picking up somewhat off-axis dialogue more accurately while still maintaining the direct to ambient sound ratio of a hypercardioid. I immediately understood why it was the microphone of choice for interior dialogue recording by the top mixers in the industry. The one other thing about a Schoeps is how dead quiet the preamps are. I've done A/B tests in the studio and at the point where you could hear the hiss of the other mics' circuitry the Schoeps was so quiet we thought the channel was turned off. Expensive, yes. But that's the cost of perfection.
  13. ...because you can be guaranteed that the cameraman didn't.
  14. To truly keep your feet warm in extreme cold you probably won't do better than these http://www.furcanada...miks-boots.html . Refined and tested over the past 2,000 years, they are lightweight and breathe well, keeping moisture from building up inside the boot. Available in caribou for use in snow and waterproof sealskin for use on ice and in wet conditions. I've had the good fortune to have worn them in -30 degree weather and at worst my feet were comfortably cool, while others on the crew, wearing military grade arctic boots, complained of feet so cold they were worried they were getting frostbite. A bit on the pricey side if you don't have an Inuit guide to loan you a pair.
  15. Audio Tool by JJBunn - SPL, decibel & RT60 meter, spectrum analyzer, signal generator and polarity checker.
  16. Yes, I used Reaper's video track/window only. I was using an ASUS laptop with dual 7200 rpm drives and a dedicated Nvidia 9800 card connected to a 28" monitor. I would run picture on the laptop display and the multitrack & mixer on the external monitor. We were using the H.264 codec and had no sync issues. Reaper has since improved their video capabilities and there is a thread in the forums dedicated to getting video playback set up. Unlike Pro Tools, where all audio must be the same file type/sampling freq/bit depth, Reaper allows you to use just about any audio file type/sampling freq/bit depth simultaneously in a project. Because audio was coming from various sources the Reaper project ended up with 44.1/16, 48/16, 48/24 wavs, 48/16 & 48/24 aifs and even a few mp3's. I consolidated the Reaper projects to 48K/24 wavs and then created Pro Tools session files with AATranslator. At the time AAT could only output PT5 files but there were no issues when the studio converted them to run on their PT9 system. Just recently AAT released an update and they can now convert to PTF files.
  17. I'm PC-based myself but there is a Mac-based version of Reaper and from what I know there should be no issues for you. On the Reaper forums site http://forum.cockos.com/index.php there is a Mac-specific forum that should provide you with the information you seek.
  18. Hi Phil, The film was cut on FCP and video files and OMF's were made and sent to me. I used AAT to convert the OMF's of the different reels into Reaper project files. From that point I did all the dialogue editing first. Very little adr was required - about 30 words/lines in total. Once that was finished I moved on to sfx editing. Ambiences, hard fx and foley for the English soundtrack and the M&E were pulled from sound effects libraries or recorded as needed and edited/compiled/submixed as required. At this point I ended up with 16 tracks of audio. While I was doing this the score was composed and a local sound designer created the elements for the "demon" voice. The score was delivered as 12 stereo tracks broken down by instruments (strings, horns, etc.) and the "demon" voice as 3 stereo pairs to allow flexibility during the final mix. Naturally, as I was compiling the reels, changes would come in from the editor. Fortunately they were all edits to tighten up the film so nothing was being rearranged or added. A new video and an OMF of the changes would come to me, I would convert it to a Reaper project, copy and paste the tracks from the previous version into it and conform to the new picture. The only tricky part of this was having to edit the music tracks, as the editor was more concerned with how the picture edit worked than whether it worked musically. I've had many years' experience editing music so was able to make everything work. Once ready to mix I consolidated all the tracks in Reaper and then used AAT to export the reels to Pro Tools files. I had been working up a good rough mix and all fades, pans, automation data and clip levels were converted with no issues. Everything imported into Pro Tools with no problems and from there we went on to polish the mix into its final form and output the necessary stems and M&E's for final delivery. I should say I'm not a Pro Tools user. Back in the '90s I had used the Studer Dyaxis system and then the Spectral Audio Engine. By the time the studio I had been freelancing in moved to Pro Tools I had already moved into production sound work. I will say that when we had to do a number of last minute edit changes during the mix I was not impressed with the speed at which we were able to do them in Pro Tools - it took a couple of hours more than it would have in Reaper. And the fact that everything had to be mixed out in real time was another time waster. In Reaper an off-line render of my reel mixes took 5-6 minutes as opposed to the 18-20 minutes real time with Pro Tools. I'm not knocking Pro Tools. It's a mature system, well accepted, and does what is needed, but it is getting long in the tooth and software like Reaper is doing the same job in less time, for far less money, with a smaller footprint (under 20 megs), fewer crashes and continuous updates and advancements. And with tools like AATranslator to easily convert from/to some 20 different project file formats I'm pretty much compatible with everyone.
  19. Go the Reaper + AATranslator route and use the money you would have wasted on Pro Tools 10 to buy a decent pair of small monitor speakers like the Equator Audio D5's. I use Reaper/AATranslator to do post jobs for my corporate video production sound clients and have used Reaper to do all the audio post for a full-length film ( http://devilseedmovie.com ), using AATranslator to bring in OMF's and to output PT files for the final mix at a proper mix studio. If you're doing under $20K a year in business the Reaper licence is $60.00 and is good for 2 full version upgrades. While PT may be entrenched as a standard in professional facilities, once you get into Reaper you will find that you can do more, faster and more efficiently.
  20. For this type of recording the mic will have been set to an omnidirectional pattern so everyone within 360 degrees facing the mic is "on mic". To do this style of recording the musicians must balance themselves with each other. In essence they are creating the mix as they perform. It's the way records were made originally. Some of the clues that this is a live recording: 1) it's in mono 2) there's no reverb or delays used on anything 3) listen to how the trumpet player restrains himself to keep in balance with the others 4) notice that you don't hear much of the cymbal or its top end (playing lightly and at a distance from the mic) 5) the vocal sound is what I hear when doing production sound outdoors, not indoors 6) the bg ambience sounds the way it should for an outdoor, live recording 7) all the things that Jeff mentioned. I spent the first nine years of my audio career in music recording studios and can't see why anyone would pay for studio time to record a song with multiple mics and great gear in order to mix it in mono, with no effects, in order to make it sound like it was recorded with one mic on a sidewalk. Sometimes the simplest answer is the right answer.
  21. I've done a few "gather round the omni" recordings and that first video is definitely "live on the street".
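A hedged sketch related to post 1 above: if Resolve/FCP can't be made to keep the original 664 file names, one possible workaround is to remap the clip names in the exported XML back to the source audio names before the edit reaches post, assuming you can build a mapping from video clip name to original audio file name (e.g. from the sound reports). The file names ("edit.xml", "name_map.csv") are made up for the example, and the assumption that clip names live in <name> elements follows FCP 7-style xmeml exports; the thread itself doesn't confirm any of this.

    # Sketch only: rewrite <name> entries in an exported XML using a CSV mapping
    # of video_clip_name,original_audio_file_name (both file names are hypothetical).
    import csv
    import xml.etree.ElementTree as ET

    with open("name_map.csv", newline="") as f:
        name_map = {row[0]: row[1] for row in csv.reader(f) if len(row) >= 2}

    tree = ET.parse("edit.xml")
    renamed = 0
    for elem in tree.iter("name"):      # xmeml-style exports keep clip names in <name> nodes
        if elem.text in name_map:
            elem.text = name_map[elem.text]
            renamed += 1

    tree.write("edit_renamed.xml", encoding="UTF-8", xml_declaration=True)
    print(f"renamed {renamed} clip references")

This only changes the names shown in the XML; the underlying media still has to be relinked on the post side.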
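A minimal sketch of the level arithmetic in post 8, assuming the -20 dBFS = 85 dB SPL mix-stage calibration mentioned there (so dB SPL is roughly dBFS + 105 at unity gain); the helper names are made up for illustration.

    # Sketch of the dBFS <-> dB SPL relationship under a -20 dBFS = 85 dB SPL calibration.
    import math

    CAL_OFFSET = 85 - (-20)  # 105 dB: dB SPL ~= dBFS + 105 under this calibration

    def dbfs_to_spl(dbfs):
        """Approximate playback SPL at the calibrated mix position, at unity gain."""
        return dbfs + CAL_OFFSET

    def spl_to_dbfs(spl):
        """Approximate recorded level that plays back at a given SPL at unity gain."""
        return spl - CAL_OFFSET

    def rms_dbfs(samples):
        """RMS level in dBFS of a block of float samples (full scale = +/-1.0)."""
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(rms) if rms > 0 else float("-inf")

    print(spl_to_dbfs(65))    # ~-40 dBFS: conversational dialogue, as in the post
    print(dbfs_to_spl(-20))   # 85 dB SPL: a -20 dBFS average plays back unnaturally loud
    print(dbfs_to_spl(-32))   # ~73 dB SPL: the -34 to -30 dBFS range sits more naturally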