Everything posted by nickreich

  1. Probably a bit OT here, but has anyone come across a Mac app that works with QuickTime video files similar to the way Wave Agent creates sound reports from a folder of BWAVs? I'd like to dump a folder of H.264 QuickTime .mov files into it and end up with a list of the file names and at least the timecode in and out points from the QuickTime TC track - preferably in .csv format. Cheers, nick.
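For anyone inclined to roll their own, something like this could be scripted around ffmpeg's ffprobe - a rough sketch, assuming ffprobe is installed and the files carry a QuickTime timecode track (the folder_report name, the 25 fps default and the CSV layout are my own choices, not a known tool; where ffprobe reports the timecode tag can vary between files):

```python
import csv
import json
import subprocess
from pathlib import Path

def tc_to_frames(tc: str, fps: int) -> int:
    """Convert an HH:MM:SS:FF timecode string to a total frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames: int, fps: int) -> str:
    """Convert a total frame count back to HH:MM:SS:FF."""
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def probe(path: Path) -> tuple[str, float]:
    """Ask ffprobe for the start timecode tag and duration of one file.
    Note: some files expose the timecode on the tmcd stream's tags
    rather than the format tags; this only checks the format tags."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", str(path)],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    tc = info["format"].get("tags", {}).get("timecode", "00:00:00:00")
    return tc, float(info["format"]["duration"])

def folder_report(folder: str, fps: int = 25, out_csv: str = "report.csv") -> None:
    """Write filename, TC in and TC out for every .mov in a folder."""
    with open(out_csv, "w", newline="") as fh:
        w = csv.writer(fh)
        w.writerow(["file", "tc_in", "tc_out"])
        for mov in sorted(Path(folder).glob("*.mov")):
            tc_in, dur = probe(mov)
            # TC out = TC in plus the duration expressed in frames
            tc_out = frames_to_tc(tc_to_frames(tc_in, fps) + round(dur * fps), fps)
            w.writerow([mov.name, tc_in, tc_out])
```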
  2. I do mainly large track-count recording (64 channels and over), mainly MADI through a variety of RME and DiGiGrid interfaces, and use Boom Recorder, Reaper and Pro Tools. Firstly, as caleymw says, I've also heard RME admit the UFX+ can only reach full track count reliably with the Thunderbolt connection, so try that first. I happily do 194 channels from an RME MADI FX card in a Thunderbolt chassis to a MacBook Pro a few years older than yours, recording to a single SSD for hours. I've been a Boom Recorder user since the early days, and have to say that while it can in theory do 256 channels now, I haven't had reliable luck with it over about 40 channels and now call 32 channels my limit with it. I've tried working it through with Take Vos, who wrote it, to no avail. For anything more than 32 tracks I use Reaper (as Pro Tools is limited to 32 channels of non-Avid hardware). I've also tried Nuendo, Nuendo Live, Waves Tracks Live (which is just a skinned version of Ardour) and others, and Reaper is by far the most solid. I note you say "65 channels, 130 Files, 1 Folder" - why are you recording two files per track?
  3. Time code audio and video issue, any help appreciated

    So with the scientific/logging USB interface you are using, as with most more typical 'musician' USB interfaces, the interface's internal clock is the source of timing for the computer doing the recording. It has a proprietary sync system so you can link multiple units for multi-channel logging, but this does not seem at initial glance to be able to accept an industry-standard sync signal such as Wordclock, and as Bash notes, the available samplerates do not include those typically used in the film / music industries. This makes any suggestion of using wordclock / genlock between the camera and audio recorder impractical - although I'd contact Avisoft for comment. I note in their LTC tech note they warn of a possible DRIFT, for the reasons I've outlined, of somewhere in the vicinity of 300ms (0.3 sec) per 10 mins recording time - certainly in the ballpark of what I'd expect of decent camera and audio gear running un-sync'd (timecode or not). If the drift is greatly more than that, then Bash's suggestion that Premiere Pro is incorrectly assuming the audio is a particular samplerate, and playing it as such, may be a cause. However, my reading of your workflow description is that you are not actually importing the audio into the video editing software - you are simply using the timecode display on the video software to then go find a matching audio event to investigate, using the timecode display in the Avisoft playback application - rather than trying to play them in sync? Here's where another complication raises its head. I assume the Avisoft recorder software will happily play back audio through the computer's headphone socket without the recording interface attached. In that case, you need to know how the software handles its timecode display on playback.
Is it reading a recorded timecode track in real time, or doing what most film-industry recorders do: reading a start time stamp at the head of the file, then extrapolating based on the sample count from there? If it's the latter, the stability and accuracy of the timing 'clock' generated by the computer will always be different to that generated by the USB audio interface when plugged in. If you have the interface, and the software will play back with it connected, maybe try that, and see if there's a difference in the TC location of an easily locatable event towards the end of a long recording. These scientific logging recorders are something most of us here don't come across. The KiPro Mini video recorder will be getting its sync from the camera via the SDI or HDMI signal carrying the pictures, so camera and KiPro can be considered a single unit. If that camera happens to be a DSLR stills-type camera, don't expect much in the way of stability / accuracy from its internal clock either.
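To put numbers on that Avisoft figure: 0.3 s of drift per 10 minutes is a 500 ppm clock error, and it compounds linearly with take length. A quick sketch (the function names are illustrative, not from any tool):

```python
def drift_ppm(drift_seconds: float, elapsed_seconds: float) -> float:
    """Express a measured drift as a clock error in parts per million."""
    return drift_seconds / elapsed_seconds * 1e6

def projected_drift(ppm: float, elapsed_seconds: float) -> float:
    """Expected sync drift (seconds) after a given recording length."""
    return ppm * 1e-6 * elapsed_seconds

# Avisoft's quoted figure: 0.3 s per 10 minutes...
ppm = drift_ppm(0.3, 600)           # 500 ppm clock error
# ...projected over an hour-long take: 1.8 s out of sync
hour_drift = projected_drift(ppm, 3600)
```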
  4. Time code audio and video issue, any help appreciated

    The Timecode coming into each device (video and audio) isn't actually labelling EACH frame - the device just looks at it at the exact moment a recording is started, reads the time data (in the format Hours:Minutes:Seconds:Frames, repeating as many times a second as suggested by the frame rate you chose to use) and timestamps the file with that as its start time. The vast majority of modern file-based recorders never look at the incoming timecode again during the recording of that take - whatever's playing them back just extrapolates from the start time. It's like the driver and conductor of a train both setting their watches to the station clock when they leave LA, then writing down their arrival time in NYC from their watches - the numbers will probably be different. Note that in the 'olden days' of linear recording media (ie Tape and actual Film), things were different - older texts can add to the confusion here. To give any more advice than we already have would need more specific information about your setup - we know the KiPro Mini on the video side, but what specifically are you recording the sound on, and what editing software are you trying to combine them with?
  5. Time code audio and video issue, any help appreciated

    Hi inkedotly, in a nutshell, the problem is this: Timecode in the modern world of file-based recorders is a positional reference only, and applies to the start of the file on each recorder only (just like the LED that lit / sent a TTL pulse to the audio - which we'd call a "Bloop Box" - an alternative to the film-style clapperboard). The advantage of timecode over these other positional references is that each take has a unique start reference that may make it easier in Post Production to sort through all the matching files, and allows a degree of automation in the process. After that, you rely on the two recorders (audio and video) to maintain EXACTLY the same speed, or you get drift over time. In normal drama film production, for example, the individual takes or shots are quite short, and drift doesn't have time to manifest itself if you are using decent equipment - but I suspect your research applications mean continuous takes, maybe of an hour or more. Even the best equipment will have slight differences in the calibration of its internal 'clock' or electronic timing rates, meaning each unit's individual idea of how long an hour is will give you a drift in sync. I'm guessing that when you say you sent timecode to "the microphone", this was a Zoom recorder or similar, with the timecode going onto an audio track? While there are ways to extract that using software to sync up the start points of the files (as an alternative to the LED flash and pulse method you used), its existence on the recording does nothing to correct the timing of the audio recorder - and even the most expensive audio field production gear will have the same issue, to a lesser extent. So - what to do about it? It's a really common problem, and any reasonably competent Audio Post Production person will be able to get your audio back in sync as long as there are obvious visual / audio events at each end of the material to match up.
The techniques are either time-stretching (or shrinking) the audio a tiny bit, or cutting it into smaller chunks and slipping the head of each of those chunks into sync, then patching the holes if required. Note that this needs to be done in audio editing software rather than video editing software, as on the whole audio editing software has a higher degree of time resolution. The reason it's a commonly seen problem is, as I mentioned above, that with modern professional gear the drift is negligible in the normal short shot lengths (say a couple of minutes) most common in film production where separate audio and video recorders are used - and in the Documentary world, where longer takes are common, recording audio onto the camera's recording medium has been typical, which alleviates the problem. Thus many camera operators are simply unaware it'll be problematic over longer takes. I personally work in long-form Concert filming and similar, and am constantly having to explain this to professional camera crews who are at the top of their game, so don't be surprised that whoever captured your material got caught out. How to avoid the problem in the future? The simplest way of course is to record video and audio to the same machine (camera or KiPro) - it can't drift with itself. There are plenty of reasons you'd prefer to use a separate recorder, such as needing more audio tracks, but if long takes are a reality for you, it's going to get expensive if you want drift-free results. Basically what you need to do is 'lock' the internal clocks of the camera and audio recorder together in perfect sync - and as we have learned, Timecode doesn't do that, being a positional reference, not a sync reference. What you need is what the camera types call Genlock, and the audio types call Wordclock. They are not the same signal, but they serve the same purpose.
This means you need a camera and audio recorder that can accept these signals and sync to them, and some sort of scheme to generate the Genlock / Wordclock. There are many options (and even some high-end cameras, such as those made by Arri, can indeed extract and sync / genlock to the timing information within a Timecode signal, if it's coming from an extremely stable source), so I can't really offer solutions here. You should find a good, experienced local film sound professional to help you out. For most researchers / documentary producers this is outside their budget, so the other option is to accept that drift is a reality and budget to correct it in post production as described above. All the best with it!
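For the person doing the fix, the arithmetic behind both techniques is simple: measure the drift between a sync event at the head and one at the tail, then either stretch the audio by the ratio of the two durations, or slip each chunk head by a proportional offset. A rough sketch (function names are mine, not from any editing package):

```python
def stretch_ratio(video_duration: float, audio_duration: float) -> float:
    """Factor by which to time-stretch the audio so it matches the
    picture length (both durations measured between the same two
    head/tail sync events, in seconds)."""
    return video_duration / audio_duration

def chunk_offsets(total_drift: float, duration: float, chunk_len: float) -> list:
    """Alternative to stretching: cut the audio into chunks and slip
    the head of each one. Returns the slip (seconds) for each chunk
    head, assuming the drift accumulates linearly over the take."""
    n = int(duration // chunk_len)
    return [total_drift * (i * chunk_len) / duration for i in range(n + 1)]

# Example: audio runs 3601.8 s against 3600 s of picture -> stretch by
# a factor just under 1, or slip successive 10-minute chunks by 0.3 s each.
ratio = stretch_ratio(3600.0, 3601.8)
slips = chunk_offsets(1.8, 3600.0, 600.0)
```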
  6. DIY camera snake?

    The current Canare foil-shielded material is very good - huge amounts of the MR202 in various channel counts are used in touring sound systems. I've been using it for many years in Theatre, Outside Broadcast and similar fields without issue - the only caveat being that, to make it so thin, the jacket around each core is not really rugged enough to use on its own as tail-ends or fan-outs. You need to cover them with either Heatshrink or, preferably, Techflex if you want to make fan-outs.
  7. DIY camera snake?

    Certainly the Canare MR202-4AT cable fits in an NC7-XX connector without difficulty, but it's 7.6mm OD. I run AES through 20m lengths of this cable all the time, although it's not sold as such.
  8. DIY camera snake?

    Canare MR202-4AT 4-pair multicore cable. If you want breakaways, the easiest is to use Neutrik 7-pin XLRs (meaning you'd only use three of the four cores in the cable - balanced Left and Right plus unbalanced monitor return). I have a short adapter to convert from the 12-pin Hirose connector that Sound Devices use on some of their mixers to the Neutrik, and my little SD302 bag has a small box under the mixer with a panel-mount 7-pin on it, with short tails coming out to the individual connectors on the mixer. Rugged, inexpensive, and even camera operators know how to unplug an XLR without damaging it. You can then make different camera-end tails (fan-outs) to suit the cameras you work with (i.e. 5-pin or mini-TRS returns, 5-pin in for Arris, whatever stupid connector Red or Blackmagic are using this year).
  9. PDR Lectrosonics

    It's basically the same mic input as all their current Radio Mic transmitters, so it will take any mic wired for Lectrosonics (either electret capsule or Dynamic), or you can wire up a cable to input a Dynamic mic or Line Level. It cannot provide 48v Phantom Power for a full-size mic or shotgun - you'd need to add an external P48 power supply box connected with a suitably wired jumper cable.
  10. Spring Clean

    As you're in Australia - the Australian Sound Recordists and Boom Ops WTB and For Sale page would be best. https://www.facebook.com/groups/982542505103878/?ref=group_header For more specialty stuff like the Soundfield, the for sale section of this site (JWsound) would be good but I think you need a minimum number of posts for it to become visible - maybe PM Jeff, the Admin.
  11. Delivering files from multiple recorders

    Hi Bouke, I'm curious how BWFmerge would behave if the 'Master' folder contained several short takes, and the other folder contained TC-sync'd longer takes that overlap several of the 'master' takes. This is typically what happens when one is using one of the new body-pack-sized recorders (such as a Lectro PDR, the little Tascams or similar) on some talent whom you may be having trouble covering with a wireless mic, while recording other talent with a boom or wireless into your main recorder - such as a Sound Devices 788. You normally jam-sync the PDR recorder before fitting it to the actor, after which you may do several takes and setups of a scene before getting the PDR back from them and stopping it (other readers, please refrain from mentioning the Zaxcom solution to this - it's not relevant to my situation). Would BWFmerge be able to slice chunks out of the PDR recorder's single long file to merge with the master Poly BWF individual takes from the main recorder? If not, it might be worth seeing if you can add this function. It would be a big deal in making the use of these little recorders something that requires less work in Post, as while it's pretty easy for a proper audio post person to handle, it's a big task for a picture edit assistant, and they may need that performer's track for the edit long before a Sound Professional is involved.
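The slicing logic itself is just interval overlap between the master takes and the long file. A sketch of what I mean (hypothetical names; times are start TCs converted to seconds):

```python
from typing import NamedTuple

class Take(NamedTuple):
    name: str
    start: float  # TC start, in seconds
    end: float    # TC end, in seconds

def slices_for_masters(masters: list, long_file: Take) -> list:
    """For each master take that overlaps the long body-pack file,
    return (take name, offset into the long file, slice length),
    all in seconds. Non-overlapping takes are skipped."""
    out = []
    for m in masters:
        start = max(m.start, long_file.start)
        end = min(m.end, long_file.end)
        if end > start:
            out.append((m.name, start - long_file.start, end - start))
    return out
```

So a PDR file running from TC 50 s to 400 s against master takes at 100-160 s and 200-290 s would yield two slices, cut 50 s and 150 s into the long file.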
  12. In the Live Sound world, with Powered Speakers (speakers with their amps built in) becoming ubiquitous, combined power+signal cable is very common, carrying line-level balanced audio and AC power together for long distances. Induced hum is rare and considered a sign of bad equipment design. Your application, however, is likely to be carrying mic-level signals and unbalanced (though shielded) video, so I'd stick with DC power - and don't use the overall cable shield (if there is one) as one of the DC conductors, or share grounding within the cable 'bundle'. Then there should be no more risk of grounding issues than with individual cables. Techflex is great for short looms, but I'd avoid it for longer runs as cables tend to twist and kink inside under heavy use. There are cable manufacturers who will make up custom cables containing power, balanced audio and Coax in one overall jacket to your specs, or you might find the right combination in one of the manufacturers' ranges, and just use the color-coded Techflex to make the fan-outs at each end.
  13. Lectrosonics SMWB coming soon?

    maybe it could be made to switch between record and transmit 96,000 times a second to keep the Lawyers happy...
  14. As Alexas, Alexa Minis and Amiras can Genlock or sync from the incoming timecode stream, they can be genlocked to a Tentacle Sync box just as they would any other brand. I'm wondering if anyone has experience doing this successfully as I have a gimbal cam operator who'd prefer one, but a client who's a little unsure of the tiny Tentacles, and I have no direct experience with syncing from it. We can't get the cameras in advance for a meaningful test. These are long-take concert recordings where sync is required to reduce TC drift.
  15. Recording applause in Germany - Advice

    I record a lot of audience reaction stuff on the jobs I do - the one rule is you always need more 'air' between you and the closest audience members than you'd think. Otherwise you'll have identifiable single clappers in the foreground. Also, steer clear of Coincident stereo mic techniques for most uses - a spread either side of stage gives a much less coherent sound, which is more useful to post - especially if they want to put it in Surrounds. If this is a sound-for-picture project where you can see the audience in shot, and if you can get a 4-preamp recorder, I'd go for a very tall stand on each side of stage, with a Hyper (like the MKH50 you suggest) covering the near half of the crowd, and a shotgun covering the rear half of the crowd, both on the same stand (I use a very short Stereo bar for each side). If you want the more diffuse (polite) applause popular with Classical Music recordists, a wide-spaced Omni pair would give you that sound, but still up at least 3m. If you have the venue to yourself simply to do "Applause Group" FX, many venues like you describe will have a bar or winch lines over the front of the stage suitable for mic rigging, too.
  16. NEW: Zoom LiveTrak L-12

    Just read the manual online - the faders are not motorised; as you guessed, it uses the channel meters to show you where to move the fader to, and you have to pass through that point for the fader to 'grab' control - very old-school! It's not just for Scenes though - the faders do the 5 monitor sends as well, using the A-E layer buttons, so this is somewhat of an annoyance.
  17. Wave Agent

    There's a freeware Mac one called SoundFilesMerger which works well, but only for merging - not with all the timecode and metadata functions we need for Film work. In the WIN world, BWF Widget from Courtney Goodin comes with a merger/splitter utility.
  18. Wave Agent

    Combining different start times isn't possible in Wave Agent. Other Poly mergers can do it, but they'd align the files from the front edges, which would be out of sync - so yes, you'd need to spot them to timecode (or waveform-match them) in a DAW and re-export.
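Spotting to timecode in the DAW amounts to converting each file's start stamp into a sample position relative to the session start - something like this sketch (illustrative names; 25 fps and 48 kHz defaults are just examples):

```python
def tc_to_samples(tc: str, fps: int, sample_rate: int) -> int:
    """Convert an HH:MM:SS:FF start stamp to an absolute sample position."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    seconds = (h * 3600 + m * 60 + s) + f / fps
    return round(seconds * sample_rate)

def spot_offset(file_tc: str, session_tc: str, fps: int = 25, sr: int = 48000) -> int:
    """How many samples into the session this file's front edge belongs."""
    return tc_to_samples(file_tc, fps, sr) - tc_to_samples(session_tc, fps, sr)
```

So a file stamped one second after the session start lands 48,000 samples in, and each file gets its own offset regardless of when it started recording.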
  19. Multi-bay NP-50 charger

    yes, that's the one I saw. The Lectro one looks good too, wasn't aware it existed. Thanks all.
  20. Multi-bay NP-50 charger

    I thought I saw a photo either here or on one of the FB pages of a multi-bay slot-loading charger for 6 or 8 NP-50 (Fuji) batteries as used in Lectro SSM transmitters (and the new little Zaxcom one, I think). Search as hard as I can - I can't find it again. Does anyone have a link? Thanks, Nick
  21. Low-End Saramonic Wireless

    a great way to test the quality of processing (particularly the compander) in a radio mic system. Jangle a ring full of metal keys about a foot away from the mic and listen to the result - then compare it to another brand / model. It's a very complex and edgy sound source that can be a bit unfair on gear, unless you are working on a cooking or renovation show, when such sounds are common!
  22. The other option to interface to a PIX 260 or 970 is external Analog-to-Dante conversion; the only practical 12V DC-capable ones I've seen so far are the Ferrofish ones. The 16-channel one is MADI only, the 32-channel one is available as MADI or Dante. Both are in a 1-rack-unit case. I was looking at these for a colleague - I use a 01v96 myself and love it, but always have access to mains power.
  23. Yamaha QL1 Soundcart

    Hi PJ, while my own console in Australia is an 01v96, which I use with SD970s via Dante as you describe, I did use a QL-1 on a job in LA with rented gear last year. Location Sound provided the rig. I found it quite fine to use for the work I do, but could see that its digital input gain adjustment (in 1dB steps) could be an issue for recordists who are used to adjusting input gain over the top of dialog. The stepping might be audible. It's quite a bit taller than an 01v96, but not as tall as a Digico SD11, which I also use a lot. Other than that, I liked it - the big screen is good to use. Sounds nice and clean.
  24. BWF.P vs BWF.M

    Poly files are certainly more editor-friendly in normal production situations - only one file per take. Most of my work, however, is very large track count (64 or 128 tracks) and sometimes takes of 90 minutes' length in concert filming, and in that case Poly BWAVs have a few problems. Firstly, it's still considered wise to limit file sizes to 4GB maximum, and on a 64-track recorder like a Sound Devices 970 that means the recorder will automatically split your take into a new file every 7 minutes or so. These re-join seamlessly, but freak out post people who are not used to it. Secondly, the available apps that might be used to extract/split out specific tracks - so you can give a mix track only to the picture editor, for example - only work up to 32 tracks (including Wave Agent, even though Sound Devices make one of the three available 64-channel recorders). You can dump a 64-track Poly BWAV into Pro Tools and it will split the tracks out, but that isn't quick and simple enough for end-of-day location use really. I don't know what would happen if an assistant editor tried importing a 64-track BWAV into an Avid or FCP. Thirdly, unless your recorder pads out tracks that aren't record-armed, each Poly BWAV take could have a different number of included tracks, and dropping them into Pro Tools can be a little more thought-intensive for an edit assistant as regards getting all the material on the right tracks. For normal 8-16 track film shoots, though, Polys are nowadays more accepted and less susceptible to post losing tracks.
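For anyone wondering where the "every 7 minutes or so" comes from, it's just the 4GB cap divided by the data rate. A quick sketch (this ignores the small BWAV header/metadata overhead, so real recorders split slightly earlier):

```python
def split_interval_seconds(tracks: int, sample_rate: int, bit_depth: int,
                           max_bytes: int = 4 * 1024**3) -> float:
    """How long a poly BWAV can run before hitting the file-size cap.
    Ignores header overhead, so this is an upper bound."""
    bytes_per_second = tracks * sample_rate * (bit_depth // 8)
    return max_bytes / bytes_per_second

# 64 tracks at 48 kHz / 24-bit: roughly 7.8 minutes of audio per 4GB file
minutes = split_interval_seconds(64, 48000, 24) / 60
```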
  25. Samplitude's "O-Ton-Modus"

    Well, I use QLab for this. Easy to make cues from one or many source files and drag them around in the cue list stack, it's way deeper than it looks at first glance, it's the predominant playback system in Theatre nowadays. Not a timeline like a DAW. It's also quite possible to make ProTools stop-and-recue from clip to clip using a MIDI Track sending via an IAC driver to Keyboard Maestro to convert a specific MIDI note to a spacebar-tab message if you really need to use a DAW. That will work with any DAW (ie Reaper) that can do MIDI.