nickreich · Members · Posts: 274 · Days Won: 6

Everything posted by nickreich

  1. With headsets, always tape the cable to the centre of the neck with surgical tape (I prefer something more flexible like Blenderm to Transpore (called Leukofix here in Australia)). Loop the cable a bit from where it comes off the headset so it crosses the centreline of their neck at a 45-degree angle back towards the side the mic is on, so they can fully turn their head each way without it going tight. Definitely Omnis only unless you have PA feedback issues (and even then the advantage of Cardioids is not as great as you'd imagine). Headset mic capsules are often susceptible to wind noise from the capsule moving through the air on a fast-moving performer/gym instructor - much more so than the forehead-mounted lavs typical in Theatre, probably because the capsule is in free space. So while they don't look great, the 'add-on' windsock is a good idea. If they are still moving around, a bit of tape behind each ear helps. If the boom is sitting tight to the cheek of male talent, make sure they shave just before the shoot, as the boom rubbing on facial stubble is very audible.
  2. The fun thing with the old 37MHz Sennheisers was the length of the transmit aerial on the body pack. I did Theatre shows with them back in the day, and with the transmitter in a belt around the performer's hips, we'd run the aerial up and over the shoulder and pin it to the upper chest through a little rubber band taped to the end of the aerial, to keep it tight as they moved!
  3. I've used a rented QL-1 on carts built for specific shoots (both reality shows and Theatre show film shoots shot 'narrative style'). I don't run a permanently built cart as my work is too variable - though my own 01v96 still does the bulk of the work. I'm more than happy to use a QL-1 for these uses - though the 32-channel limitation on the Dante IO is annoying. I also use QL and CL series consoles a lot in the Live Sound work I still do a bit of (though I strongly prefer DiGiCo for that). Happy to have a go at answering any questions you might have though.
  4. It bi-directionally copies transport control (and Metadata in one direction) between a Sound Devices 970 recorder (using its built-in web server) and Boom Recorder software on a Mac (being used as a secondary or backup recorder), so you don't have to enter the metadata in both places. They coded it for a Reality Show client and then made it available on the web. I just borrowed the bit that relays play and stop messages from the 970 to Boom Recorder so it makes 'files' in sync - for the purposes of making the 'sound report' as described.
  5. Hi Bouke, Livelog looks interesting - I look forward to trying it. I've been using an iPad app (Logster) for this, but in some setups having something that can run on a Laptop works better, so I've been using Boom Recorder (actually recording one track's files, which I may then discard) simply as an LTC Logger / report generator - using a cut-down version of Gotham Sound's AppleScript to trigger it from the SD970s via PIXNET. Yes, I did try out "LTC Reader's" video slave mode; it works great, but as I already own licences for the software from NLE called "Video Slave", which does a similar job and which I'm familiar with, I've stuck with that (and there hasn't been any real work for almost a year in my Industry anyway due to COVID!).
  6. Further to this, though I suspect it was sarcastic... I choose Poly vs Mono file capture based on the following: 1) Is the material going to be Posted by someone else (about 50/50 for me) - and if someone else, are they an Audio Mixer or a Video Editor? Video Editors are more used to Polys nowadays, as that's what they'd see from a 'normal' narrative film sound recordist. 2) Are the lengths of the takes going to cause an 'auto-split' in the Poly files (every 7.5min for a 970 recording 64 tracks of 48/24 - see the worked example after this list), which is confusing to Editors and in my experience freaks them out more than getting Mono files. Also, in some Editorial situations the Picture Editor only wants certain tracks in their NLE (mixes, LTC, specific ISOs), whereas the rest are only of interest to the person subsequently doing Audio Post. Mono files mean the Assistant Editor can pick which ones to ingest and not clog up their NLE project. 3) How many takes in a delivery day, and am I using multiple recorders to capture larger numbers of tracks - which increases the ingest time to the DAW or NLE session for the Assistant Editor. Poly is generally easier/quicker with lots of takes. 4) Do I feel I need to use the Metadata Notes facility in the 970s WHILE ROLLING, and am I recording lots of tracks to 2 drives? If so, DON'T USE MONO - you are very close to a crash due to the data transfer overhead of writing the metadata to 128 separate files (64 on each drive) while still trying to maintain record (especially if the filename scheme is one that gets changed based on some other Metadata entry like take number). If you were recording Polys, that's only one file (and set of metadata) to write to each drive, so it can easily handle the extra load. I'll use an external Sound Report app if rolling Mono WAVs and expecting to need to make notes.
  7. I don't delete files from recording SSDs - I treat them like tape. They only get re-formatted when the Project is fully delivered (or on a Reality-type show, when the Data Wrangler has ingested them to multiple storage drives).
  8. Both approaches work. The Drive 3 and Drive 4 ports are eSATAp ports - I have some basic eSATAp to SATA-3 cables so I can plug in 'naked' SSD drives and power them from the 970. Personally, I haven't tried actually recording to an externally powered drive from these ports. As I do mainly long-form Concert recording, I tend to record Mono BWAV files more often than Poly BWAVs, and in that case one can only record to two drives at once anyway. I find that recording to two Caddies in the internal slots creates too much heat for my liking, so if I'm required to record to two drives per machine, I'll do one Caddy and one 'external' SSD on Port 3. Some of my 'Reality show' colleagues happily use two Caddies, so don't let me stop you! The advantages of Caddies are primarily the built-in USB-3 and FW800 ports for offload. I've had a couple of the Thunderbolt docks that take the Caddies and found them totally unreliable on a variety of Mac computers for some reason or other - Caddies often didn't mount - so I gave up on them. The only other thing with Caddies is that the 970 (not sure about the others) can only save/load setup files and load firmware from Drive 1 (one of the Caddy slots).
  9. Might be more useful if you let us know: 1) the nature of the project itself (film/TV, recording a live performance, is it voice or musical instruments or a whole band, how many sources at once) 2) where the recording is taking place (Studio or Venue, On Location, travelling around the country) 3) what's going to happen to the audio recording afterwards (pass on to an Editor, or you are going to mix it yourself) 4) specifics of the sources to be recorded (eg: two people talking, a singer who also plays Banjo and Mouth Organ) and the expected length of recording sessions 5) what the requirement is for 'wireless' - is it shared with a separate live sound system, for example? Then folks here can let you know how professionals would approach the same task.
  10. We do color-matching (painting) of mics all the time in Live Theatre sound - but generally only for lav mics mounted on the head, not so much for "Headset" mics like the 6066. A couple of reasons why: Firstly, the "headset" style mic is always going to be overtly visible, whatever the color (of course, picking a Beige or Black unit to suit the talent is a start). In fact, quite often the choice to use a 'headset boom' style of mic rather than a head-rigged lav is more for the look. Secondly, with lav mics the cable can be colored quite easily in a number of ways (apart from the metal end of the 6000-series capsule). The two most popular in 'Broadway' type shows are Copic Markers (a sort of marker used in Graphic Art) and Shoe Paint (the spray-can type used to re-color shoes). Don't use Sharpies - the color isn't stable on the lav cable material and goes purple. With "Headset" mics, the boom is also metal, and in the case of the DPA booms, won't hold Copic marker at all, and Shoe Paint will peel off it in a day or two. You'll get a better result on Headset mic booms by roughing them up with super-fine sandpaper then using Tamiya or similar modelling paint intended for metal. Note DPA have just announced white paintable capsule 'caps' or sleeves for the metal capsule end of 6000-series mics - I'm not sure how available these are yet. I have seen many attempts to use Makeup products to color mics, usually when a Pro Theatre crew used to color-matching finds themselves working on a short-run show with rented mics. No-one I know has ever found a product that stays on in use but comes off cleanly after. Of course in Professional Theatre 'running' shows, the mics are an expendable, sold to Production, so coloring them irreversibly is not a problem. However, if this is a short project or shoot and re-prepping the mics every time they are fitted is acceptable, one trick that might work for you is eyeliner pencils. You'll never get a smooth coating, but diagonal strokes of an eyeliner color that's darker than the skin, on a headset or cable that's a bit lighter than the skin, can do a good job of breaking up the continuous line of a mic and camouflaging it a bit.
  11. This is great - I'm wondering if there's any chance of an audio back-channel (talkback from remote viewers to Set) - even if restricted to one remote user. Ideally with a momentary 'push to talk' button at the remote end on the browser window. Audio can be lower-resolution than the outgoing channel.
  12. What you are wanting is more complicated than you might first think - but it was very common in the past, on larger Theatre shows for example, to integrate Walkies into wired headset party-line comms systems before the specifically-designed wireless headset systems came on the market. The problem is that on most film sets the walkies (ie Motorolas) are being used in 'simplex' mode - one frequency per Channel, and only one can be transmitting at a time. Getting outside audio into such a system is difficult, as you need some way to 'key' the transmitter that the Zoom audio is coming into when (and only when) folks on the Zoom side of the system need to talk. There is really no way with Zoom or similar systems to provide a 'Push To Talk' function to key a hardware transmitter. People often try the 'VOX' (voice operated switch) mode on a walkie to make this work, but it's rarely reliable. The trick is you need to be running all your walkies in 'duplex' mode. This is the mode used by mobile radio systems to talk through a 'repeater' on a tall building or hilltop for more range. All the walkies transmit on one frequency and receive on another - specifically allocated pairs per channel. The 'Repeater' is a Receiver feeding a permanently-keyed Transmitter - using the opposite frequency allocation. In the link between the two parts of the Repeater, it's possible to inject other audio (ie the chatter from Zoom) and extract the audio coming from the walkies (to send back to Zoom) - but you need additional circuitry (a mix-minus) to keep the audio from each side (Radio and Zoom) from getting sent back to where it came from - see the sketch after this list. Most of the major Comms system manufacturers, like Clearcom, make such gear, and Radio Comms hire shops that are used to supplying live performance users should have systems. All commercial-grade Walkies on the market will handle Duplex mode, but as you need two allocated frequencies per channel, it's more expensive to rent or buy - plus you need to set up your own local Repeater (more commonly called a Duplex Base Station in this application) on location before anyone can talk to anyone else. Make sure to ask for at least one Simplex channel as well as the Duplex channel so walkies can be used 'stand-alone' if the Base Station hasn't been powered yet.
  13. Check out the video on Gotham Sound's YouTube channel about the actor spacesuit comms system they supplied for 'Lost In Space' a year or two ago. Most live or theatre sound hire shops in London should have suitable DPA or Voice Technologies or Countryman headsets and the adapters to Sennheiser 3.5mm TRS - all the major brands make 3.5mm adapters. Try Autograph Sound or Orbital. Assuming you get proper earpieces intended for IEM use, or the Telex plastic eartube type earpieces, you should not get any appreciable spill from IEM to Headset mic. If you try and use 'vented' earpieces, like generic iPod earpieces, you might have issues. Ideally you'd get custom IEM moulds done for the talent, but that's probably cost-prohibitive for an indie film.
  14. Your natural instinct would be right - except for the fact that Sound Devices (either intentionally or not) didn't correctly implement this feature of the Brooklyn II Dante card inside the Scorpio and 970 (and presumably the 888) - so even though Dante Controller allows you to tick the 'sync to external' checkbox, it will subsequently throw up regular Dante clocking errors. If they had intentionally chosen not to support this feature, they could have prevented that checkbox from showing in Dante Controller. Sound Devices said to another user on the JW FB group, where we were discussing this earlier this year, that they are looking into fixing this for the 8-series down the track - but for now, it's not an option. I don't own an 8-series, but that user did and reported the same error on his Scorpio - which SD acknowledged. They most certainly DID NOT put out firmware that fixed the problem in the 970 - I own two, and operate another four, all on the latest firmware, and the sync-to-external problem persists. Maybe you are thinking of the firmware update when they added the ability to select Dante as a sync source for the 970?
  15. No, I don't muck about with offsets - that would indicate Sound taking ownership of a Camera issue. I do like to get a TC slate shot at least at the top of the performance, as if it and the sound TC agree, it's easier to convince unaware editorial staff that it's a Camera thing (and it makes it dead easy for them to see what offset to apply when dropping the clips in their timeline). With the setup I suggested, Reaper is at least running off the Dante Network Clock, so it will be pretty well calibrated - better than your average Camera's internal clock without genlock. In tests I've done, you might see as little as a couple of frames up to maybe 6 frames of drift in half an hour (see the drift arithmetic after this list). Certainly easily fixable in the edit if you have to use it - but it still looks bad to a professional Post dept that might not know you personally.
  16. Hi nevo, Filming Theatre plays and Musicals is a specialty of mine. Your thinking is correct, but there are a few issues you are going to run into with the gear you have that will need some work-arounds. First, yes, using the Scorpio as the master audio recorder and Reaper as the backup is a good idea; let's leave Reaper for a moment and look at the primary chain Sync plan. When I do these projects, I use Ambient TC boxes, but I'm sure the Ultrasync Ones (USOs) will be fine, and like you, I use sync boxes as the TC and WC source for my recorders (in my case, Sound Devices 970s). I am guessing you are planning to route the individual audio inputs to the recorders from the TIO interface via Dante directly, rather than patching through the X32, and that the X32 will be making mixes that will also go to the recorders via Dante or analog (in the case of the Scorpio)? Here's where you have to be a little careful. Firstly, the Scorpio, like the SD970, does NOT currently allow you to sync the Dante Card inside it to the word clock from the Scorpio's circuitry or to external word clock fed into the Scorpio. In the setup you describe, the Scorpio or the X32 are the only Dante devices capable of being the Clock master for the Dante network (unless you are using an RME Dante interface for your Reaper computer, as opposed to Dante Virtual Soundcard - in which case you are in luck). Therefore, you really have to treat the entire Dante system (TIO, X32 and the Dante card within the Scorpio) as if it were just an Analog mixing console and cables for the purposes of planning your Sync system. You sync the Scorpio off one of the USOs (more about this in a minute) and Genlock the cameras to other USOs, and your audio will stay in sync with the cameras for the duration of an Act just fine. LTC from the USOs lines up the files. For some reason, SD have chosen with the Scorpio to put the WC In and Out on the same Lemo that does TC In and Out - as a software-switchable option. Therefore, you cannot input BOTH Wordclock and LTC to a Scorpio at the same time. Realising this was an issue, they have allowed LTC as a choice in the Clock Source menu - ie it uses the timing information inherent in the LTC stream as a Wordclock source. You would choose this option to sync from the USO dedicated to your recorder. I'm used to the Ambient system, which uses an RF network to continuously 'tune' all the TC boxes to be in perfect sync - I'm not sure if the USOs do this as well or just jam once over their network, but even so, using the USO on your recorder is likely to give you a better drift result in these long-take projects than using the internal clock and LTC from the Scorpio as master (but you knew that already). The Dante network will just elect either the X32's Dante card or the Scorpio's Dante card or the TIO as its master - whatever - the Scorpio re-clocks the Dante channels internally to match the chosen WC source, and this process does not cause any drift. The bigger issue is your backup Reaper system. Assuming you are using Dante Virtual Soundcard rather than a hardware Dante to USB / Thunderbolt interface, it will be locked to the Dante Network clock - which bears no relationship to your USOs (and therefore the Cameras). You must set the X32 to clock from the 'slot' (ie the X-Dante card) for Dante to work at all in that console without glitching, and as Philip has pointed out, the X32 doesn't have Wordclock (or even AES) In anyway.
This means your Reaper recording can only be an emergency backup, and be aware that if you need it, it will drift from the cameras and need fixing in the Edit. The way around this, if desired, is to either use the aforementioned RME Dante interface or insert any other Dante Device you can get your hands on with a Wordclock In connector (and the ability to sync the Dante Network to that) into the Dante Network - even if you don't need it for Audio IO purposes. Just set that device as Preferred Master and Sync To External in Dante Controller, and feed its WC input from the WC Out of your 'recorder' Ultrasync - as it'll be unused by the Scorpio. Reaper can chase Timecode in recording, but it's not very straightforward - lots of menu settings to make it work. As for the Cameras, it depends on which model of Arri: Alexas (full size) DO NOT HAVE GENLOCK. This is why my regular Theatre-filming clients do not allow them for this type of work. I have other clients that do use them (they want the LF version, typically) - in this case we usually insist the Cam Ops button-off every 10min or so under co-ordination from the Director, so only one is off at a time, in a place that doesn't cost them a desired shot. The Cam Ops need headset comms for this. This creates new timestamped files and reduces drift. Not ideal, really. Alexa Minis & Amiras - these both have Genlock, and you have the choice (which I choose) to Genlock from the LTC input from the USO sync box. With the Amira, you also have to set the TC BNC to be an input rather than an output in a menu. Be aware (if you aren't already) that the TC mode you want is Free Run, not Jam Sync. In Arri-speak, Jam Sync, a relatively new addition, is where the camera samples the incoming LTC/Genlock speed for about 30sec and 'trains' its internal clock to match - after which time it ignores the sync box. It works OK for a Steadicam operator who doesn't want the sync box left on, but if you can leave the box there it's safer to do it the 'old way'. I pretty much guarantee with any Arri taking external LTC and Genlock that there will be a 1 to 2 frame Offset on the timestamped LTC as compared with the Recorder or a TC slate. This is an offset, not drift; it sometimes freaks out less experienced DITs or Edit Assistants, but it is very easy for them to fix in Post.
  17. If a TC Slate is impractical, and assuming your GoPros are the newer ones without the built-in 3.5mm audio input jack, maybe get one of the USB-C External Audio Adapters they sell and connect one of the cheaper timecode boxes to it - such as a Tentacle. Then you can briefly plug it into each GoPro as you put them into record, and capture a few seconds of timecode (LTC) onto the audio track. This of course needs to be done each time you stop and start recording. After the shoot, drop all the files into Tentacle Sync Studio (or most 'pro' NLEs - eg Resolve) and it'll timestamp the files from the audio timecode.
  18. I like TwistedWave on iPhone / iPad, though it might be a little intimidating to a really non-technical person as it shows several editing tools (not that they need them). File Export (sharing) is easy and comprehensive.
  19. If we are blue-sky dreaming of the ideal A2 or Reality Producer monitoring device - I'd add optional Bluetooth pairing to a phone / tablet app with nameable touch buttons (similar to Wavetool or Nemesis Insight) to make changing presets easier. The same app could be used for device-setting cloning, as you mention.
  20. Yes - here in Australia too. I'm in pre-production for filming a large outdoor Opera. The Cast are in their last day of four weeks in the rehearsal rooms today, due on-set on Monday (in theory), and the Government has banned events of over 500 persons (this one seats over 2000). Haven't got the cancellation email yet - but no show, no film. This is the third in a row now, and ALL my bookings into the future are going to be subject to this, given the niche of the industry I work in, for as long as it lasts.
  21. Thanks Constantin, I thought that was the case, but was hoping it had been enabled since I last used one.
  22. Yes, as I said in my original post, the Minis can use either separate tri-level sync OR the incoming Timecode to Genlock the camera. The Amiras can do the same. It appears that the Alexas (the full-size ones) don't have the ability to Genlock in any way. That's what I'm looking for confirmation of (hoping I'm missing something).
  23. Hi All, I usually see Arri Alexa Minis on my jobs, but have a shoot coming up with three Alexa XTs. It's been years since I've seen a full-size Alexa, so can anyone confirm that they still can NOT take external Genlock from a Lockit box (unlike a Mini or Amira, which can take either Tri-Level sync or Genlock-to-TC)? This is a Concert shoot, so I'd normally recommend Genlock. thanks, nick
  24. Most likely to block 48V Phantom Power from blowing up the audio output devices in the Receiver, should it accidentally get plugged into a Mixer or Camera input that previously had a condenser mic in it. I suspect that the reason they are in the cable and not on the Printed Circuit Board is either that there wasn't enough room, or more likely that they only thought of it when units started to be returned for repair. (See the quick corner-frequency check in the notes after this list.)
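
A worked version of the auto-split arithmetic in point 6 above - a minimal sketch assuming the split is driven by the roughly 4 GB WAV/BWF file-size ceiling (the exact threshold the 970 uses is an assumption here):

```python
# Rough estimate of how often a 64-track poly BWAV auto-splits, assuming the
# recorder rolls a new file at (approximately) the 4 GB WAV size ceiling.
tracks = 64            # track count in the poly file
sample_rate = 48000    # Hz
bytes_per_sample = 3   # 24-bit

bytes_per_second = tracks * sample_rate * bytes_per_sample   # ~9.2 MB/s
file_limit = 4 * 1024**3                                     # assumed ~4 GiB ceiling

split_seconds = file_limit / bytes_per_second
print(f"{bytes_per_second/1e6:.1f} MB/s -> split roughly every {split_seconds/60:.1f} min")
```

That lands at about 7.8 minutes; a recorder presumably splits a little before the hard limit, which fits the ~7.5 minute figure quoted above.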
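
On point 12: the 'additional circuitry' that stops each side hearing its own audio back is just a mix-minus. A conceptual sketch in plain Python (not any particular comms product's implementation), where each side's return feed is the sum of every source except its own:

```python
def mix_minus(sources: dict[str, float]) -> dict[str, float]:
    """For each source, return the mix of all the *other* sources."""
    total = sum(sources.values())
    return {name: total - level for name, level in sources.items()}

# Illustrative instantaneous sample values from each side of the bridge
feeds = {"walkie_rx": 0.2, "zoom_rx": -0.1}
returns = mix_minus(feeds)
print(returns["walkie_rx"])  # fed to the repeater TX: Zoom audio only
print(returns["zoom_rx"])    # sent back into the Zoom call: walkie audio only
```

In hardware this is typically handled by a small matrix mixer or the mix-minus buses built into party-line comms frames.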
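
For point 15, the quoted drift figures convert to a clock-accuracy number like this (assuming a 25 fps project; substitute 24 or 30 as appropriate):

```python
# Convert "N frames of drift in half an hour" into an approximate clock error (ppm).
fps = 25              # assumed frame rate
duration_s = 30 * 60  # half an hour

for frames in (2, 6):
    drift_s = frames / fps
    ppm = drift_s / duration_s * 1e6
    print(f"{frames} frames in 30 min = {drift_s*1000:.0f} ms = ~{ppm:.0f} ppm")
```

That's roughly 44 ppm for 2 frames and 133 ppm for 6 frames - the sort of error a free-running camera clock can show, and well above what a device disciplined by the Dante network clock or genlock should exhibit.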
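
On point 24: a series capacitor blocks the 48 V DC while passing audio, and its main side effect is a high-pass corner set against the downstream input impedance. A quick sanity check with illustrative (assumed) values - not the actual parts any manufacturer puts in their cables:

```python
import math

# High-pass corner of a series DC-blocking capacitor into a mixer/camera input:
# f_c = 1 / (2 * pi * R * C)
C = 47e-6                  # assumed 47 uF series capacitor
for R in (2_000, 10_000):  # assumed typical input impedances, in ohms
    fc = 1 / (2 * math.pi * R * C)
    print(f"R = {R/1000:.0f} kOhm -> corner at about {fc:.2f} Hz")
```

Either way the corner sits well below the audio band, so the caps go unnoticed in normal use.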