
The Documentary Sound Guy

Members
  • Posts

    64
  • Joined

  • Last visited

Profile Information

  • Location
    British Columbia, Canada
  • About
    I am a location sound recordist in B.C., Canada. I specialize in sound for documentaries.

    I’ve been recording sound professionally since 2006. I’ve worked on everything from giant Hollywood blockbusters to your brother’s neighbour’s short student film, and my favourite is documentary. There is nothing else I’d rather do.

    I’ve hiked half a day to a pristine alpine meadow for a shoot. I’ve stood waist deep in the ocean waves to record dialogue in a kayak. I’ve plugged in to helicopter comms at an active heli-logging operation. I’ve recorded a daredevil waterskier as he skied on and off the shore of the Squamish River. I love documentary because it takes me places that normal people don’t go.
  • Interested in Sound for Picture
    Yes


  1. Johnny is absolutely right about color. This means you need to be careful with red mini-speakers, because they are always distorted due to clipping. They're not like cars where the red ones go faster. And some of the newer, AI-based mini-speakers have developed collectivist tendencies and will only play communist propaganda. So, your best bet to avoiding this problem with red mini speakers is to buy our competitor's product.
  2. Actually, I change my answer, the best choice for a mini speaker is our competitor's product.
  3. The most important feature of a mini speaker is the price. If it costs more than $10, it's just not worth it.
  4. Things are different where you live! 'Round here, our bus drivers are instructed to let fare jumpers be as a matter of policy and public safety. The majority of people continue to pay the fare. I'm not sure where that goes other than making the analogy fall flat. I think we can agree that production sound mixers aren't a public service, and producers aren't homeless vagrants! I also think we can agree that tech scouts should be mandatory and paid for sound. I'm just not sure I'm on board with the idea that giving needy (or drunk) riders a break on bus fare is so important to avoid because of precedent...
  5. I had the same thought actually ... at the very least, it seemed like whoever asked it wasn't a working production sound mixer.
  6. Short answer is no. I have a 5.1 setup with a dead tweeter in the centre speaker. I routinely make my dialogue more audible by switching to a stereo downmix. A default downmix mixes C into both L & R if C isn't present.
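For what it's worth, the downmix behaviour described can be sketched like this; the ~0.7071 (-3 dB) centre gain is the common default in ITU-style downmix matrices, an assumption on my part rather than something the post specifies:

```python
import numpy as np

def stereo_downmix(left, center, right, center_gain=0.7071):
    """Fold a 3.0 front stage (L/C/R) down to stereo.

    center_gain of ~0.7071 (-3 dB) is the usual ITU-style default;
    the exact value depends on the playback device's downmix settings.
    """
    # The centre channel is mixed equally into both outputs,
    # which is why a dead centre tweeter stops mattering in stereo.
    return left + center_gain * center, right + center_gain * center
```

So dialogue panned to C survives a stereo downmix, just attenuated by the centre gain and spread across both speakers.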
  7. THANK YOU for providing an answer that addresses the substance of my question! It sounds like the header storage format is the missing piece of information I needed. If that part is sample-precise (i.e. samples since midnight), that gives me confidence that it will be possible to correctly align them in post. Though, I'm still unclear what the relationship between the header and the TC is ... I'm guessing the header corresponds to the first sample in the frame that is listed in the TC stamp (and, I guess, also the first frame of audio in the file)? And then I'm counting on the fact that the recorders will only start recording on a sample that matches the first sample of a frame? It sounds to me like this may be device specific, and there's nothing in the TC spec (or header format) that specifies precision. I still have a question about the precision of jamming TC, which I'm also guessing will be device specific. Namely, when I jam timecode, what degree of precision is guaranteed? The impression I've gotten is that there is no guarantee better than frame-precision, but most recorders can probably manage considerably better than that. Incidentally, I've managed to track down a test that specifically references the F8: The test notes that there is an offset of "a few samples" between the Zoom F8 and the MOTU 16A (which was the master clock), which means the F8, at least, isn't totally sample-precise in its TC jam. It does suggest that the F8 manages to do *some* kind of word clock syncing that derives sync from the TC signal, which isn't quite perfect, since it drifted by 1.6ms after an hour. My guess is that it's not a true WC jam, but perhaps it periodically re-clocks things by adding or dropping samples ... I can't think of another explanation. I'm inclined to agree with the test's conclusion that this is probably "close enough" for most purposes — certainly, I'm not expecting to roll anywhere close to an hour.
But it also tells me I can't count on true sample-precision, since even the TC timestamp was offset a few samples. So ... I guess I have my answer: It's device specific, and I have to do a test with my two specific recorders and then decide if that level of precision is good enough for my purpose. And, probably, the "correct" way to do this is by only using recorders of the same model that can sync both TC and WC (Tascam HS-P82 and SD 7 Series seem to be the only ones I've seen that fit this criteria). Thanks for all your help everyone! I've learned a lot ... not least that sync is way harder to implement with precision than I thought.
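To put the numbers in that test in perspective (assuming 48 kHz / 24 fps, the rates used elsewhere in this thread): the reported 1.6 ms of drift is tiny next to a frame, but large next to a sample.

```python
SAMPLE_RATE = 48_000   # Hz, assumed project rate
FPS = 24               # assumed frame rate

# One frame of timecode spans this many audio samples:
samples_per_frame = SAMPLE_RATE // FPS   # 2000

# The 1.6 ms/hour drift reported for the F8, in samples:
drift_samples = 1.6e-3 * SAMPLE_RATE     # about 77 samples per hour
```

So after an hour the F8 was still within about 4% of a frame, which is why "close enough" holds for picture sync but not for phase-coherent music recording.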
  8. I'm confused as ever, since the answers given by Vincent and borjam seem to contradict: "Time code can be used as an initial phase reference for jam sync using the word clock as the frequency reference." "Well, aligning using timecode achieves a precision of 1/24th or 1/25th of a second (a frame). While it is sufficient in order to align audio files from a macroscopic point of view (ie, aligning audio to video or, for instance, different instruments so that you won't perceive a discrepancy while listening to it), at the microscopic level consider the number of samples in a frame." So, is timecode precise enough to offer a time reference that keeps multiple recorders in phase or isn't it? At this point, I feel like I understand the differences between WC and TC pretty well, and I like Vincent's tool analogy. But, my question isn't what the tools are used for, it's how straight is the straight edge? What is the error tolerance of the straight edge? Or, is there any reason to believe timecode provides better than frame-level precision? Because, if timecode *doesn't* provide sample-level precision (and I'm more inclined to believe borjam over Wikipedia here), even if the two recorders are both timecode-jammed, and have their word-clocks phase-locked, that still doesn't appear to provide enough information to recreate phase-accuracy across two different recorders in post. Sebi mentioned manually syncing the two recordings in post, but I still don't understand the process for finding which samples match if the time-reference provided by timecode isn't sample-precise. Anecdotally, my experience syncing with timecode is that recordings from multiple cameras are often out of sync at a sub-frame level (and also with the audio recorder), which makes me think borjam is right: Timecode only achieves precision of 1/24th of a second, which is 2,000 times less precise than we need to sync audio sampled at 48kHz.
So, now I've got two questions: What's the workflow for phase-aligning multiple recorders in post? and Does that workflow even exist? Is what I'm chasing here even possible: Recording music across multiple recorders in such a way that preserves phase coherence across all tracks? I'm kind of shocked to find that the answer might be no. I had kind of assumed there was a good way to do this, and the more I learn, the less confident I am that it can be done. The only thing I can think of is locking the word clocks, recording a common signal to both recorders (with enough variability that a common frame can be identified), and then manually aligning the waveforms at a sample level in post. This isn't a workflow I want to recommend to a post-supervisor, so at this point, I'm pretty committed to a single-recorder solution (probably Dante+DAW). But I'm kind of stuck on the theory of it, since I'm shocked that there's no easy way to keep two recorders in sync with phase-precision. I'm also happy to have my intuition confirmed that maybe phase-coherence across the two recorders doesn't matter as long as I'm careful to keep my stereo pairs (and, more broadly, sources with multiple mics) on a single recorder.
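The "record a common signal and manually align the waveforms" workaround described above is essentially a cross-correlation search. A minimal sketch with NumPy (the function name is mine; it assumes both recorders ran at the same sample rate and both captured the common slate/tone):

```python
import numpy as np

def find_offset(ref, other):
    """Return the sample offset of `other` within `ref`.

    Cross-correlates the shared slate/tone captured on both recorders;
    the peak of the correlation is the best alignment. Shift `other`
    right by the returned lag to line it up with `ref`.
    """
    corr = np.correlate(ref, other, mode="full")
    # mode="full" lags run from -(len(other)-1) to len(ref)-1
    return int(np.argmax(corr)) - (len(other) - 1)
```

In practice you'd run this on a short window around the slate rather than the whole file, and only after word-clock lock has removed drift: correlation recovers one constant offset, not a slowly changing one.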
  9. I don't imagine post on this particular project will notice ... but it's certainly a conversation I'll be having. Thanks for clarifying Sebi's post ... I missed the fact that they were manually phase-aligned in post. So, if I'm understanding correctly, the word clocks need to be synchronized while they roll in order to get phase alignment across recorders. That makes sense to me. But bringing the two word clocks in sync doesn't offer any time reference, which is the job of timecode. Where I'm still lost is what the workflow is for aligning the files from the two recorders in post: Can time-code align files from two separate recorders with sample-level precision? How can it do this if timecode only provides a frame-level degree of precision and there are ~2,000 samples per frame? What is the magic that lets post know which sample in each file represents the same moment in time? The only plausible explanation I can think of is that the rising edge of the timecode clock tick in each recorder would need to be synchronized with sufficient precision that both clocks rise and fall at exactly the same time ... and what I know of timecode clocks and their varying precision makes me skeptical that this is possible. In other words, even with both word clocks and timecode synced across recorders, I don't fully understand the post workflow that lets the files from multiple recorders be aligned without a lot of tedious manual labour and a common signal across both recorders that would allow the same point in time to be identified. The only workaround I can think of is if both recorders start recording on exactly the same sample, which (I think?) is the point of C.link on the 788T. But obviously that only works with multiple 788Ts. From what I know of my rental options 3x788Ts might be my easiest option for a rental, but I think the Dante console might win out. I can always feed a timecode signal into a spare channel to make up for the lack of timecode support. 
I'm also starting to think maybe I don't need phase-accuracy across recorders if I keep all the music tracks on one recorder and use the second for lavs & dialogue tracks. From a practical point of view, I don't imagine phase alignment would be very important for sources that far apart. But I still have the issue of my music tracks overflowing my available tracks on that recorder. Thanks for all the support. I didn't realize I was opening such a can of worms.
  10. I'm thinking a console could work. My only previous experience with this workflow was plugging an X32 into my laptop (X32 has its own interface card, so I was able to just use USB). That experience didn't give me confidence though ... it dropped samples without warning every few seconds. Could have been the hard drive was too slow, but the workflow didn't give me confidence. Good to hear TC kept things phase-aligned for short takes. That's all I need ... maybe a workflow test is in order.
  11. Ok, thanks for confirming. I'll consider my options with a DAW setup, I think you're right, jamming together two recorders of different brands and quality is asking for trouble. Since I clearly don't own any recorders that support word-clock sync, it probably makes sense to rent a recorder with enough tracks (I think I'm at 23 at the latest count). But, failing that, what recorders are out there that *do* support word-clock sync? Sound Devices? I'm guessing the 788. Any others?
  12. Beyond that, I suppose there is also a post workflow question: Even if the word clocks are synchronized, how do I sync the tracks from each recorder with word-clock precision in a DAW if the only timecode attached to the files is SMPTE, and therefore only frame-level precision?
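To make that frame-level limit concrete: turning a SMPTE stamp into a sample position necessarily quantizes to frame boundaries. A sketch (non-drop-frame, 24 fps / 48 kHz assumed; the function name is mine):

```python
def tc_to_samples(hh, mm, ss, ff, fps=24, sample_rate=48_000):
    """Convert a non-drop-frame SMPTE timecode to a sample position.

    SMPTE resolves only to whole frames, so every event inside a
    frame maps to the same sample position; the best guaranteed
    precision is one frame (2,000 samples at 24 fps / 48 kHz).
    """
    total_frames = ((hh * 60 + mm) * 60 + ss) * fps + ff
    return total_frames * sample_rate // fps
```

Anything finer than that has to come from the recorder's own behaviour (e.g. a sample-accurate "samples since midnight" field in the file header), not from the SMPTE stamp itself.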
  13. Thanks. Can you elaborate? Can I count on being able to synchronize word clocks on any recorder that accepts timecode input, or is this a special feature I need to look for? Is it normal for recorders to synchronize their word clocks based on jamming a timecode signal? I'm trying to synchronize a Zaxcom Nomad and a Zoom F8. The F8 has an option called "Ext Audio Clock Sync", which sounds vaguely like a word clock to me, but it is jammed with regular timecode. The Nomad doesn't refer to any sort of clock, but it does allow for external jamming. So, I'm kind of back to the question I began with: Will jamming SMPTE timecode synchronize word clocks? Or is it only frame accurate? Or does it depend on the recorder?
  14. If I sync two recorders with SMPTE timecode (i.e. not genlock), how precise is the synchronization? I take it for granted that it's precise within a frame or it wouldn't be reliable for picture sync. What I'm wondering about is audio sync: If I'm trying to maintain phase coherency between tracks for a musical recording, is syncing with timecode sample-accurate? I kind of assume it wouldn't be (since there's no precision below frame level). And, if that assumption is correct, what's the best way to sync two recorders with sample-level precision? Genlock (but isn't that for synchronizing camera clocks)? A shared input recorded on both recorders? Is sample-level precision even possible between recorders? What's the workflow I need to maintain phase sync across recorders?
  15. "How often does lightning strike" seems like a Google question to me. I think it's the lightning rod principle. I started my answer with the idea that electricity takes the shortest path across the highest voltage differential. Generally, elevated, metallic objects attract lightning strikes, and the idea of a lightning rod is to attract the lightning to the rod where it can be dissipated harmlessly to ground rather than strike something else. "Elevation" addresses the "shortest path" part because it's closer to the thunderhead than the ground surface, and "metallic" addresses the "voltage differential" part (indirectly) because it provides a lower resistance path to ground than the air. If I'm not mistaken, lightning rods are sometimes (usually?) given a positive charge to help make them more attractive targets (assuming my memory is correct and thunderheads are full of negative ions). Applying that to on-set generators, generators would be potential strike targets because they are slightly higher than the surroundings, and some of the internal components may be slightly more positively charged than the surroundings. If you live in a mountainous region like me, or if you are set up in an urban environment where there's lots of buildings around, presumably the generator isn't a very attractive target, but if you are filming on the open prairie where lightning storms are common, it seems plausible to me that a strike could be more likely.