
realsnd

Members
  • Content Count

    12
  • Joined

  • Last visited

About realsnd

  • Rank
    Member

Profile Information

  • Location
    Belgium
  • Interested in Sound for Picture
    Yes
  • About
    documentary sound

Recent Profile Visitors

689 profile views
  1. Pretty cool: after training on wide-angle images plus sound from an artificial stereo head plus mono sound, a deep neural network learned to (a) generate binaural sound and (b) isolate sound sources from video + mono audio alone. See https://arxiv.org/abs/1812.04204 http://vision.cs.utexas.edu/projects/2.5D_visual_sound/ This is only the beginning... In the future you may imagine, for example, an AI that focuses on and extracts the relevant sound when you zoom into a high-resolution image in post, etc.
  2. Thanks all for the useful information. The audio handle problem is solved, but another issue has emerged... We are dealing with about 100 hrs of 4K footage, with more to come, so the editor will work on reduced 1080p transcoded material. All the procedures above set the image TC stamps from the LTC on the audio track. Hence the TC of the original 4K and the transcoded material won't match anymore, compromising the final 4K conform of the project. Any suggestions? Any way to batch-transfer TC from the 1080p to the 4K files, knowing that we can match them by name?
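One possible approach to the batch TC transfer, sketched below on the assumption that ffprobe/ffmpeg are installed and that proxy and original share the same basename (the directory names `proxies_1080p` and `originals_4k` are hypothetical placeholders). The idea: read the start-timecode tag from each 1080p proxy with ffprobe, then remux (no re-encode) the matching 4K file with ffmpeg's `-timecode` option.

```python
# Sketch: batch-copy start timecode from 1080p proxies to matching 4K originals.
# Assumes ffprobe/ffmpeg on PATH and matching basenames; paths are hypothetical.
import json
import subprocess
from pathlib import Path

def match_by_name(proxies, originals):
    """Pair proxy and original paths that share a basename (stem)."""
    by_stem = {Path(o).stem: o for o in originals}
    return [(p, by_stem[Path(p).stem]) for p in proxies
            if Path(p).stem in by_stem]

def read_timecode(path):
    """Read the start-timecode tag with ffprobe (e.g. '01:02:03:04')."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", str(path)],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    for section in [info.get("format", {})] + info.get("streams", []):
        tc = section.get("tags", {}).get("timecode")
        if tc:
            return tc
    return None

def stamp_timecode(src, dst, tc):
    """Remux without re-encoding, writing a new start timecode."""
    subprocess.run(["ffmpeg", "-y", "-i", str(src), "-c", "copy",
                    "-timecode", tc, str(dst)], check=True)

if __name__ == "__main__":
    proxies = sorted(str(p) for p in Path("proxies_1080p").glob("*.mov"))
    originals = sorted(str(p) for p in Path("originals_4k").glob("*.mov"))
    for proxy, orig in match_by_name(proxies, originals):
        tc = read_timecode(proxy)
        if tc:
            out = Path(orig).with_name(Path(orig).stem + "_tc.mov")
            stamp_timecode(orig, out, tc)
```

Whether the NLE and conform tool pick up the remuxed timecode tag depends on the container and workflow, so a test on one clip pair before running the whole batch would be prudent.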
  3. PluralEyes does not trim audio clips, but it is not as reliable as LTC sync when dealing with noisy tracks.
  4. Dear all, I need to sync video with audio WAV files (scratch audio on channel 1 and LTC on channel 2). I have been advised on this forum to use DaVinci Resolve. It works, but unfortunately the audio file is trimmed to the exact duration of the video clip. This is not appropriate in our documentary setting, where audio starts before and ends after video, providing very valuable material. Does anyone know how to circumvent this problem, i.e. to sync while keeping the full-length audio, in Resolve (v12) or other tools?
  5. Same problem here: ERX + Nomad = no ERX upgrade
  6. Perhaps I am asking the obvious (I'll take the grumbling if I am!), but the information I got from other posts/forums is messy, perhaps misses simpler or more recent solutions, and never describes the full workflow. So, if anyone here could help clarify: - Say I want to sync in FCP a multi-track timecoded audio BWF with the matching video file, which has scratch sound on the right and TC on the left audio channel. What is the best workflow? (I have heard of the auxTC reader http://www.videotoolshed.com/product/26/fcp-auxtc-reader; are there alternatives?) - How will the audio-TC-track approach compare, in terms of reliability and efficiency, with PluralEyes if my scratch camera audio track is a nearly perfect mirror of one of the BWF tracks (transmitted via Zaxnet/ERX1)? Any experience with this solution for rather complex ambience recordings (e.g. distant action within a crowd, a forest, ...)?
  7. After an inquiry to Schoeps about capsule positions within the mics: ---- The distances from front grill to diaphragm are: CMIT: 92.5 mm CCM8: 14.5 mm CCM41: 4 mm ---- This helped me and will possibly help others to precisely position the mics (the perspective in the photo above gives a slightly distorted picture of the optimal position, in my view).
  8. Long takes are often a must if the crew is to be forgotten in delicate, sensitive setups. I'm thinking of medical consultations for major diseases, counselling, etc. You don't want to distract people by stopping or starting filming, and you don't want to give them the impression that their actions determine whether you film or not. You want people to forget your presence, and the only way to approach this is to be there all the time, filming all the time. Besides this, key moments are absolutely unpredictable unless you stage them, which I find questionable in docs. If you don't do that, you need to roll for long hours. You can deal with several hundred hours of footage with appropriate transcription and time. Yes, it can be exhausting. I use the K-Tek KA113 articulated boom, which allows more relaxed postures and single-handed handling (using your armpit) while keeping the pole parallel to the frame line. I also find it convenient in fast action, because you can fold it and thus significantly reduce/increase your reach instantly while the cameraman is focusing. It's a compromise between the rather cumbersome support systems discussed above (suppose you suddenly have to jump in a car...) and a regular boom.
  9. That must be a limiting factor, indeed. According to the Schoeps polar diagram, the rear lobe is -10 dB compared to the front below 2 kHz; I don't know if that's the reason for the cancellation failure. (See http://www.schoeps.de/en/products/ccm41/graphics) Regarding coincidence, I have assumed the mic placement displayed on the Cinela site was correct (see attached image), but I am not 100% sure of where exactly the capsule is inside the CMIT. Is anyone expert enough to confirm this is the optimal placement? (I can satisfactorily decode MS on both rear and front, so it is at least OK, but perhaps not optimal.) Since I am on the topic of mic placement: does anyone see any specific reason why the CCM8 is on top and the CCM41 below? It seems odd, because the CMIT suspension gets in the way of the CCM41 in this setup, which would be better placed on top, no?
  10. As a matter of fact, my tests suggest you're right, but there is something that bothers me. If the capsules' coincidence is good enough for 5.1 decoding (which involves some cancellation), why isn't it good enough for rear cancellation? In this setup, the CMIT and CCM41 capsules are on the same plane, approximately orthogonal to the mic-to-rear-source axis. The wavelength at 1000 Hz is about 34 cm, and the distance between the capsules along this axis is about 100 times smaller. So it seems negligible, considering we are talking about a quite muffled diesel engine sound, 100 m away.
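The phase argument in the post can be sanity-checked with a back-of-the-envelope calculation. If two capsules are offset by a distance d along the source axis, subtracting one channel from the other (to cancel that source) leaves a residual of amplitude 2·sin(π·f·d/c). The 3.4 mm spacing below is simply 1/100 of the 34 cm wavelength quoted in the post; c ≈ 343 m/s is assumed.

```python
# Residual level after subtracting a copy delayed by d/c (plane-wave model).
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def residual_db(f_hz, d_m):
    """Residual (dB re: original source level) for capsule spacing d at frequency f."""
    r = 2.0 * math.sin(math.pi * f_hz * d_m / C)
    return 20.0 * math.log10(r)

# 3.4 mm spacing, i.e. 1/100 of the 34 cm wavelength at 1 kHz:
print(round(residual_db(1000.0, 0.0034), 1))  # roughly -24 dB
```

So even a "negligible" 3.4 mm offset caps the achievable rejection at roughly 24 dB at 1 kHz (less at higher frequencies), which is consistent with cancellation being useful but never complete.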
  11. Thanks for the replies. The plugin I was referring to was actually the Double MS Tool BF from Schoeps (wrong link in my previous post; the correct link is http://www.schoeps.de/en/products/dms_plugin_bf). Cancellation of the CMIT's rear lobe was not very effective, and some phase-related distortions appeared with the most extreme settings of the "Focus" parameter, which was the most effective one. So, I'd appreciate alternative ideas, other plugins, insights on the theoretical problem, etc.
  12. Dear all, I've recorded some tracks in double MS and wondered if anyone has ever attempted to use the rear and fig-of-8 channels to increase (in post) the lateral and rear rejection of the front mic. The end result would be a super-directional mono (virtual) mic. This would be useful for adjusting, in post, tracks recorded in uncontrolled environments, e.g. a documentary setting, direct cinema style, with no time to change mics or lav people. Does the idea make sense at all? If it does, any suggestions to make it work? My setup was a CMIT (front), CCM41 (rear), and CCM8 (fig-of-8) inside a Cinela PIA-3, recorded on a Nomad with the three inputs at identical fader and trim settings, no compressor/limiter involved. A typical track: a quiet river surrounded by open fields, a man bathing 3 m in front, an engine 100 m behind, kids babbling on the sides at various distances. The three channels are quite contrasted, but the front mic still got too much engine and kids. The goal is to get a mono track with the kids, and more importantly the engine, attenuated. I played around with Schoeps' DMS plugin. http://www.schoeps.de/en/products/dms_plugin/overview The displayed polar patterns suggest that I could achieve complete rejection of the rear and attenuation of the sides with some settings, using only the center channel (from DMS decoded as 5.1). But the actual result is unimpressive, and there is some distortion if I push it too far. So, what's your take on this?
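For what it's worth, the core of the "virtual super-directional mic" idea can be illustrated outside any plugin. The sketch below is NOT the Schoeps DMS algorithm; it is a minimal least-squares version of the principle: subtract a scaled copy of the rear-facing channel from the front channel, choosing the gain that best cancels the unwanted source. All signal names and mix coefficients are synthetic toy values, just to show the mechanics.

```python
# Toy sketch (not the Schoeps DMS algorithm): attenuate a rear source by
# subtracting a least-squares-scaled copy of the rear channel from the front.
import numpy as np

def cancel_rear(front, rear, g=None):
    """Return (front - g*rear, g); if g is None, fit g by least squares."""
    if g is None:
        g = float(np.dot(front, rear) / np.dot(rear, rear))
    return front - g * rear, g

# Synthetic one-second demo: a wanted "voice" in front, an unwanted "engine" behind.
fs = 48000
t = np.arange(fs) / fs
engine = np.sin(2 * np.pi * 100 * t)   # unwanted rear source
voice = np.sin(2 * np.pi * 440 * t)    # wanted front source
front = voice + 0.5 * engine           # front mic leaks some engine
rear = 0.9 * engine + 0.05 * voice     # rear mic: mostly engine

out, g = cancel_rear(front, rear)      # engine strongly attenuated, voice kept
```

On real double-MS tracks the two capsules are not perfectly coincident and the rear pickup is not a pure scaled copy, so a single broadband gain can only cancel so much; the phase errors you heard at extreme "Focus" settings are the same limitation showing up in the plugin.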