Showing results for tags 'Post'.

Found 14 results

  1. I threw this together to accompany my Producing Great Sound book. It uses convolution modeling to create random room tone. Load a clean, breath- and noise-free sample from the production track (15 frames or so). The sample isn't repeated, just used as a model for timbre, so the output doesn't sound looped. I've used this (or a similar process) on a bunch of projects over the past year or two. Sounds fine, even in a theatrical mix. Free, open-source VST or AU plug-in. Runs on Macs only, and requires you to load the free open-source SonicBirth first. Full instructions with the download. More info and download links at GreatSound.info/roomtone Comments and modifications welcome.
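For anyone curious about the general idea, here's a minimal sketch of the same family of technique - shaping white noise to the average spectrum of a short clean sample so the output matches its timbre without ever looping. This is not the plug-in's actual SonicBirth implementation (which I haven't seen); the function name, frame sizes, and mono-numpy-array assumption are all mine.

```python
import numpy as np

def room_tone(sample, out_len, frame=2048, hop=512, seed=0):
    """Generate non-looping room tone by shaping white noise to the
    average magnitude spectrum of a short, clean production sample."""
    rng = np.random.default_rng(seed)
    win = np.hanning(frame)
    # Average magnitude spectrum over overlapping windows of the sample
    frames = [sample[i:i + frame] * win
              for i in range(0, len(sample) - frame, hop)]
    target_mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    # Overlap-add fresh noise frames, each forced to the target spectrum
    out = np.zeros(out_len + frame)
    for start in range(0, out_len, hop):
        noise = rng.standard_normal(frame)
        spec = np.fft.rfft(noise * win)
        shaped = spec / (np.abs(spec) + 1e-12) * target_mag
        out[start:start + frame] += np.fft.irfft(shaped, frame) * win
    out = out[:out_len]
    return out / (np.abs(out).max() + 1e-12)  # normalize to full scale
```

Because every frame uses new random noise, nothing repeats - only the spectral envelope is borrowed from the sample.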
  2. I came across a new product, VoiceGate from Accentize. The name is a misnomer: it's not a gate, but a NN-driven dialog vs. noise separator. Same category as iZotope's Dialog Isolate module and Audionamix's IDC, but with some major differences. Runs very efficiently as a channel insert plug-in, or in an offline window. They've been fine-tuning the beta - and I've found them very flexible at taking user suggestions (plus adding features I hadn't thought of). Should be shipping in a week or so. I just posted a hands-on of the beta with before/after audio samples, in a thread at Geekslutz. Worth knowing about. (Just don't tell any DPs... or else it'll be one more reason for them to say "Don't worry about dialog during production because there's a new magic plug-in.") It ain't magic, but I'm adding it to my arsenal that already includes Rx7Adv and IDC, because it's a useful tool. 😉
  3. (Note on subject line: I'm referring to this current posting. General comments about my sanity may also be accurate, but I already know I'm cuckoo.) A producer that I like came to me with a semi-freebie: a two-hour theatrical documentary about a 50-year-old piece of American history, with lots of contemporary interviews with the folks involved plus historic clips. He's also licensed some scoring from a 1960s mainstream feature as a contribution. I like the guy; he's given me real projects in the past, with real budgets and real schedules, and I want to do this one. Along with very little money, he has very little time. I'll get about eight days for dialog edit, premix, M&E, and remix. Then it gets one very visible screening. After that, there'll probably be time and bucks to take it apart and tweak. (Even if not... as I said, I want to do this one.) Here's the rub: 1) He's reluctant to do a DCP right now, mostly because of the lab time involved. He's expecting to grab my final mix, hop on a plane, and be doing compression while he's flying to the venue. I know from bitter experience that anything shy of a DCP can be mangled by a theater's DVD or similar playback. 2) He doesn't want sweetening in the historic clips; just original footage as best I can clean it. He doesn't want scoring under the interviews; just the folks' voices. There's very little narration. It appears the only music, other than main title and credit, will be during interstitials and chapter break titles. In other words: very little of this show is stereo. None of it is surround. Since this scoring is all archive, probably nothing will hit LFE. What I'm thinking about is mixing the dialog as 3-track mono. Same material on all 3 tracks, maybe -3dB on the center one. The only time L&R will be different is probably during the title and credit. I'm figuring this will give me the best chance of everybody in the front row of the theater hearing a decent track. 
If I just mix in stereo with phantom center, I'm worried some decoder will decide to suppress the dialog (it's happened before)... or the theater's L&R mains will never be on because everything will be matrixed to the center (ditto). If I encode for the guy's DVD from this LCR master, maybe there's a chance all three front speakers will be talking. And, yes, I'll still try to convince him to pull a DCP. If there's not time for it at the premiere, at least for future exhibitions. Thoughts?
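As a sanity check on the numbers, the 3-track mono fold described above is just a duplication with a gain trim on the center leg. A tiny numpy sketch (the function name is mine, not any standard tool; -3 dB works out to a linear gain of about 0.708):

```python
import numpy as np

def dialog_to_lcr(mono, center_trim_db=-3.0):
    """Duplicate a mono dialog mix onto L/C/R stems, trimming the
    center copy (default -3 dB) so the summed acoustic level in the
    room stays sane. Returns an (n, 3) array ordered L, C, R."""
    center = mono * 10 ** (center_trim_db / 20)  # dB -> linear gain
    return np.stack([mono, center, mono], axis=1)
```

With identical material on all three fronts, even a theater that only powers one of L/C/R still plays the full dialog.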
  4. Every post suite I've ever worked in has had aggressive air conditioning. (Dub stages? Not as much. I'm talking about video-style pix or audio rooms.) Clients have often complained about the temperature, and would ask me to make the room warmer. But mostly the female clients. (After the session I'd have to turn the 'stat back down to make the facility's CE happy.) Some theorists maintain it's because corporate men usually have to wear suits - or at least buttoned collars and ties - while corporate women (even highly professional ones) expose more skin. But in a post environment, most of us - male and female - wear slacks or jeans and an open-collar shirt of some kind. The other likely cause is that men and women tend to have different metabolic rates, so different temperatures are comfortable for each. But now there's a study showing it's more than comfort. I've always wondered why there were so few female audio post engineers, when we know women usually have much better hearing than men. Could our facility practices be part of the reason? (Yes, plenty of other reasons are in play, probably dating from a kid's earliest playtime exposures to STEM and reinforced by cultural norms throughout their education. This appears to be yet one more.)
  5. Fascinating article in today's NYT about neural networks generating still images of faces with no 'uncanny valley'. But buried in that article is a reference to work at the University of Washington last summer... that automatically edits lips to match a different track! Literally puts new words in someone's mouth. On a computer screen, the sync looks absolutely realistic. The resolution might not be enough for a big screen... but these things tend to leap forward quickly. Here's a link just to the UW demo. They took some real Obama speeches, and put them into multiple other Obama faces. Same speech, many different visual deliveries. The article doesn't mention what could happen when you edit the source speech to say something new. But heck, good dialog editors have always been able to change what someone says, on the track. Now a computer can make the target individual appear on-camera, saying the edited version! NYTimes full article link.
  6. Hi all, a post-production audio question: I've come across some post audio workflows involving submixes, buses, and loudness normalization. It's a side of post audio I haven't worked with before, and it does produce cleaner, fuller audio, but I have some questions about the workflow. For the EQ, noise reduction, de-essing, etc.: is it better to route each audio clip to its own submix bus and process there, or to make one submix bus for the whole track and process that? And after the submix bus, do you then do the loudness normalization? Any answers and suggestions are welcome. Thank you.
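One common ordering, for what it's worth: print the per-clip EQ/NR first, then apply a single loudness correction on the final summed bus, since loudness is a property of the whole program. A toy numpy sketch of that last step, using plain RMS as a crude stand-in for a real BS.1770/LKFS meter (which adds K-weighting and gating - don't deliver off this):

```python
import numpy as np

def rms_dbfs(x):
    """RMS level in dBFS - a rough proxy for program loudness."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def normalize_to(x, target_dbfs=-24.0):
    """Apply one static gain so the summed mix bus hits the target.
    Done last, after all per-clip processing has been printed."""
    gain_db = target_dbfs - rms_dbfs(x)
    return x * 10 ** (gain_db / 20)
```

The point of doing it on the bus, not per clip, is that a single gain can't change the balance you already mixed.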
  7. My PT session for a short film has several audio tracks, and at some point I imported an OMF from the foley department. I worked directly from this OMF (maybe the problem started here?) with editing, processing, etc. When I saved a copy of the session to another location, I was surprised to see that it required around 250 GB, while the Audio Files folder alone was only 16 GB. Not knowing where the issue was, I still made the copy. Later on, checking the Audio Files folder of the copied session, I found over 400 instances of the OMF, all with the same creation date and most of them at the complete size. It seems there's an instance for every clip that was not processed. Trying it out, I moved all the OMF files except one to another folder and tried to open my session. Pro Tools still needed to locate the OMFs, or else the audio clips associated with them would be flagged as missing files. I restored the OMFs to the Audio Files folder, but Pro Tools is still not finding them. Why is Pro Tools doing this, and could I have avoided it by duplicating the OMF tracks and working from the copied audio clips? I have Pro Tools 10. Thanks!
  8. There is a scene in Batman V Superman in Lois Lane's apartment bathroom. I noticed taxi horn noise underneath the dialogue, and was just checking to see if there was a discussion on that decision. Whoever made that choice set the scene really well: you didn't have to say it was Metropolis, it sounded just like New York. Was there any back and forth about laying more noise under the dialogue? Or was that one of many honks from the location, where more noise needed to be added to match unavoidable set issues?
  9. Hi all, I have a "small" problem. I have quite a few polyphonic audio files (22 shooting days!) which have no production mix tracks (boom mix and lav mix). I have a long way around using Wave Agent and Pro Tools (bounce new mix files in PT, then combine them with the originals in Wave Agent later). The problem is that the editors want the tracks in a specific order, which I can't seem to change when making the files in Wave Agent. Although the option seems to be there, the order always stays the same. So I guess my questions are: has anyone had to do this as well? Can you think of any other programs that could do this without wiping out track names and other metadata? Thank you! Björn Viktorsson.
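For the bare channel-reorder step, here's a hedged stdlib sketch - with the big caveat that Python's `wave` module only carries the basic fmt chunk, so the BWF/iXML track names and metadata (exactly what you want to keep) would be lost; this only demonstrates the permutation itself, assuming 16-bit PCM polys:

```python
import wave
import numpy as np

def reorder_channels(src, dst, order):
    """Rewrite a multichannel (polyphonic) WAV with its channels
    permuted, e.g. order=[2, 0, 1] puts channel 3 first.
    NOTE: stdlib `wave` drops BWF/iXML metadata and track names."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        assert params.sampwidth == 2, "sketch handles 16-bit PCM only"
        data = np.frombuffer(w.readframes(params.nframes), dtype="<i2")
    # Deinterleave into (frames, channels), permute the columns
    frames = data.reshape(-1, params.nchannels)[:, order]
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.writeframes(frames.astype("<i2").tobytes())
```

A metadata-preserving version would need a tool or library that round-trips the bext and iXML chunks, which is probably why Wave Agent is the usual answer here.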
  10. Just wondering how to handle the 60 seconds of tone (-20 dBFS / roughly -20 LKFS) and 30 seconds of silence before the program starts when measuring/correcting loudness to a target of -24 LKFS. The 1770-3 standard gates out silence but doesn't ignore tone, so the integrated loudness will be skewed by that tone if it's included in the file being measured/corrected. I'm mixing close to -24 LKFS by ear, doing what I've always done, but I want to include a render for loudness to get as accurate as possible using RX5's loudness tool. If I include tone in the file (like I always did in the past) it'll turn down the audio, because there's 1 minute of "content" (tone) that is 4 dB too loud over time - not good. On the other hand, the guys doing the layback want tone included in the file. There has to be a standard way of dealing with this by now. Thoughts?
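One approach that keeps the layback folks happy: measure and correct only the program portion, then deliver it with the 60 s tone and 30 s silence head untouched. A toy numpy sketch of that idea (plain RMS standing in for a real BS.1770 meter, and the head layout hard-coded to the format described above - both simplifications):

```python
import numpy as np

TONE_S, SILENCE_S = 60, 30  # head layout from the post: tone, then silence

def correct_program_only(audio, rate, target_db=-24.0):
    """Gain-correct only the program portion of a file laid out as
    60 s tone + 30 s silence + program, leaving the head untouched
    so the delivered file still carries reference tone."""
    head = (TONE_S + SILENCE_S) * rate
    program = audio[head:]
    rms_db = 20 * np.log10(np.sqrt(np.mean(program ** 2)) + 1e-12)
    corrected = program * 10 ** ((target_db - rms_db) / 20)
    return np.concatenate([audio[:head], corrected])
```

In practice you'd do the same thing in RX by selecting only the program before running the loudness tool, then bouncing with the head reattached.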
  11. Cross-post from the DUC - Hi all - hoping some of the helpful folks here can help me wrap my head around an issue I'm up against. The breakdown: Location: I'm working on a doc that has some unique qualities. 28 tracks of music (live performance) were recorded live - I fed a mono mix to the main camera during the shoot, and I also received a mono mix of the Dx mixer's tracks during the performance. Post: I was given an OMF from FCP 7 that has a stereo music guide track, with each "song" clip having been renamed by the editor (sequential numbers at the end of the file name - the editor was also provided a stereo mixdown of all 30 tracks to edit to - it appears everything was cut and renamed during the edit). There is also some Dx from the live recording, but a lot of VO has been added in as well (along with some other music stuff). The question: is there any hope of using the field recorder workflow to get the 30 tracks of Mx or the 8 Dx tracks from the field recorder into this project from the OMF? The last few times I've used the field recorder workflow, I've ended up pulling in 100+ tracks for each "clip" in the timeline and had to go through manually and delete based on clip scene/take name. And unfortunately that isn't even an option in this case, as no meaningful metadata exists - everything has been renamed. So far I have been unsuccessful using the field recorder workflow to get this to work. Happy to provide more details as needed - hoping someone has some good news for me other than manually spotting all of my individual music and Dx tracks against the OMF. Many thanks for your time in reading this.
  12. http://www.youtube.com/watch?v=O8PQ1vcTaoI
  13. Has anyone heard of this software from Adobe? The demos sound too good to be true. I'm with Rainn Wilson, "I don't believe it." Mark O.
  14. Hey guys, just wondering who among you does sound reports manually with pen and paper, who does them electronically with a laptop, iOS device, etc., and who doesn't do them at all (e.g. OMB, reality shows, etc.). Also wondering how often you put a reference tone file (-20 dBFS) in the daily folder for post. Is this still usually needed? Thanks, guys!
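Since the -20 dBFS reference tone came up: rolling one yourself is trivial. A stdlib-only sketch that writes a mono 16-bit 1 kHz sine at -20 dBFS (the duration, bit depth, and sample rate here are my assumptions, not any delivery spec):

```python
import math
import struct
import wave

def write_ref_tone(path, level_dbfs=-20.0, freq=1000.0,
                   seconds=30, rate=48000):
    """Write a mono 16-bit WAV sine reference tone (default 1 kHz
    at -20 dBFS) suitable for a daily-folder lineup tone."""
    amp = 10 ** (level_dbfs / 20) * 32767  # dBFS -> peak sample value
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        frames = bytearray()
        for n in range(int(seconds * rate)):
            sample = amp * math.sin(2 * math.pi * freq * n / rate)
            frames += struct.pack("<h", int(round(sample)))
        w.writeframes(bytes(frames))
```

Note the dBFS here is sine-peak referenced; some post houses read tone on an RMS meter, so confirm which convention your deliverables expect.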