Everything posted by Jay Rose

  1. How about the fact that you posted a 90-second ad-supported clip, rather than trimming it to the desired sound, telling us where in the clip the sound is located, or (most preferable) posting just the piece of audio you wanted to emulate?
  2. Please add a basic understanding of acoustics, particularly the inverse square law, and a sensitivity to the real echoes in practical rooms.
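To put the inverse square law in concrete terms: each doubling of distance from a point source drops the direct sound by about 6 dB, while the room's reverberant field stays roughly constant. A minimal sketch (the function name is mine, not from any library):

```python
import math

def spl_drop_db(d1, d2):
    """SPL change in dB when the distance to a point source goes
    from d1 to d2 (inverse square law: about -6 dB per doubling)."""
    return 20 * math.log10(d1 / d2)

# Doubling the mic distance costs ~6 dB of direct sound, while the
# room's reverberant field stays about the same -- so the ratio of
# direct sound to real-room echo gets worse.
```

So pulling a mic from 1 ft to 4 ft over the same talent costs roughly 12 dB of direct sound, which the room's echoes are happy to fill in.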
  3. We are generating commercial story-telling to keep an audience entertained so they either pay cash or pay attention to commercials. Some of us, above and below the line, manage to do some art at the same time. But that's not what brings in the bucks, either to pay for location shoots (or vfx), or to buy us new gear. From a post POV, I just hope they treat the LCD-equipped studios acoustically so you can boom and get a sense of perspective. Otherwise, I'd argue that lavs in a controlled studio need a lot less fixing than lavs in a randomly scouted location.
  4. YouTube has a fascinating piece about a new virtual set technology demoed at SIGGRAPH. Actors work in front of a high-res, large LCD like an old-fashioned rear projection... But instead of projecting a pre-shot still or moving plate, the system renders in real time... with real perspective, based on the lens and camera position! Move the camera, and elements on the plate shuffle around with it to keep the background realistic from the camera's POV. Meanwhile, the actors get more of a sense of working in an environment, rather than against a screen. On top of that, the director can sit at a laptop with virtual viewfinders, exploring the look of any kind of lens from anywhere in the shooting area. And the art director can move individual elements, such as shifting a car to the other side of the street if it looks better... As I understand it - and I'm not a vfx guy - the system can respond to only one camera position at a time. (Shades of the "tech snafu" pinch in Mission Impossible: Ghost Protocol.) So the director can't use two cameras. And, hopefully, you can get a boom in there. Prediction: this will get cheaper, stop being a 'special effect', and because of lower cost will replace a lot of location work. Other prediction: they'll come up with some way to encode multiple images for two or three cameras - maybe using polarization - and we'll be back where we started with tight and wide.
  5. It's old hat, but might bail you out: pack a couple of dynamic mics as backup. In my experience, externally polarized capsules (like the 6080) can be very sensitive to humidity and condensation; the electret elements in most modern lavs, less so. And if there's a problem, even an SM58 might save the day.
  6. What's the room like? Does it have acoustics worth saving? -- FWIW, I've had very good luck in this kind of situation with a kind of pseudo M/S. Cardioid or hyper pointing at the choir (from whatever distance is appropriate for the room and grouping), plus a co-located omni or even a PZM on the floor. Treat the omni like an S channel. Biggest advantages: 1) Absolutely mono compatible - the omni disappears - which may be handy if people are listening to the company server on their desktop computers. 2) Width (in the stereo-decoded version) without firm L/R locations, so any MOS cutaway against it still seems right.
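The mono-compatibility claim in 1) is easy to verify numerically. A toy sketch of the decode, assuming the standard M/S sum-and-difference matrix (the function name and width value are illustrative, not from any library):

```python
def decode_pseudo_ms(mid, side, width=0.5):
    """Decode a cardioid 'mid' plus omni-as-'side' rig into L/R.
    width scales the omni contribution; 0.0 collapses to pure mono.
    (Hypothetical helper -- the matrix is the standard M/S decode.)"""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

# Mono fold-down (L + R) / 2 returns exactly the mid signal: the
# omni cancels, which is why the decoded mix stays mono compatible.
mid = [0.1, 0.4, -0.2]
side = [0.05, -0.1, 0.3]
L, R = decode_pseudo_ms(mid, side)
mono = [(l + r) / 2 for l, r in zip(L, R)]
```

The design choice: since the omni lives entirely in the difference channel, a mono listener hears only the cardioid, and the "width" is a free parameter you can dial in (or out) at the mix.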
  7. ...maybe a mandatory plug for Nuendo as well...? -- Anyway, OP, the biggest issue is that roomtone edits (or any really clean dialog or music edits) seldom fall on frame lines. 1/24th of a second is long enough to totally miss some common phonemes such as /d/ or /t/, which can be as short as 1/100 second. Ditto, you can miss a 16th note in music at a moderate 120 bpm tempo. Audio programs, from the incredibly powerful Pro Tools and Nuendo to the simplest free open-source Audacity, let you edit with much more precision. There's a lot more about this in my book (which Jim was kind enough to mention) and the tutorials on my website.
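The timing argument is simple arithmetic, sketched here (the 10 ms phoneme figure is the post's own example):

```python
FPS = 24
frame = 1.0 / FPS            # one film frame: about 41.7 ms
phoneme = 0.010              # a short stop like /t/: ~10 ms
sixteenth = 60.0 / 120 / 4   # a 16th note at 120 bpm: 125 ms

# A frame is roughly four /t/-lengths long, so a frame-accurate cut
# can land an entire phoneme (or more) away from the sound it was
# supposed to catch; a 16th note spans only about three frames, so
# frame-quantized edits can't place a cut cleanly inside it either.
```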
  8. I haven't used it. Based on their website, however, I'd have a couple of concerns: 1) If you attempt to "flatten" a speaker/room combination that isn't very flat to begin with, it'll have to add some fairly heavy eq. This can introduce two problems: 1a) Sharp eq in realtime - like to tame a room mode - introduces phase distortion. 1b) Extreme boost eq - which can be necessary in the mids, as well as the extremes, of mediocre speakers - can drive distortion. So you end up with something flat, but not the track viewers will hear in a theater. 2) Depending on the room size and your practices, you might want to apply a standard or modified curve to make your monitors sound more like a theater. I don't know if Sonarworks lets you modify the eq curve it measures. 3) It appears to be a stereo solution. What do you do with the other speakers? As I said, this is just from the website. It might address all these issues and just not publicize the fact.
  9. Article in today's New York Times about projection levels, reporting on a paper in the Journal of Personality and Social Psychology. (The journal is paywalled, so I'm going by the Times reporter's summary.) Apparently, the research found that someone appears more confident and persuasive when they project louder, which is certainly intuitive. But it went further, showing that you're even more persuasive when you break things up with softer-than-normal volumes as well. Trained actors probably know this -- think Richard Burton -- and know how to modify it to keep the variety while filling a theater or filming a line. The great film actors of the past could vary their projection a lot, even while respecting the requirements of a boom and optical recorder. Not-so-trained actors might know it as well, but might not have learned to seem loud and soft while keeping levels good for the track. There's a wonderful spoof somewhere on YouTube of a Richard Burton wannabe constantly blowing out the mic and then dipping into the mud during a speech. To emphasize the point, his on-camera boom is constantly swinging up and down. Any experiences to share?
  10. +1 John. Mixers work in production. They have the headaches of getting a usable sound under less than ideal circumstances, and the tools to help them do that job. They don't have the calibrated monitors, precision eq tools, and UNDO buttons we have in post.
  11. I missed the 😄. Of course, there are also lots of folks who read this forum and might not have a good technical background. (You don't need a computer background to be a good production mixer... or at least, you didn't when I started... and that's not quite a joke.)
  12. It's available only as an offline AudioSuite plugin for PT. There are lots of professional posties on Nuendo plus composer/editors on music DAWs, as well as PT users who have other platforms on the same machine. I asked iZotope when they're going to offer either a VST or a desktop version (a la RX 7), and they said they can't talk about it yet. Apparently they're still deciding if it's worth it! There's a popup on iZotope's product page where you can 'vote' for a non-PT version. Please do so.
  13. Here's one I saved for my office, when I sold my studio... Hey, let's have a block party!
  14. See my 10/22 post. In all probability, MPEG and other psychoacoustic formats break the hidden malware. That's what we found in a multi-year, well-financed project with a different audio steganographic format, where I was one of the devs and also wrote the docs.
  15. FWIW, the audio payload in a BWF is identical to that of a .wav.
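One way to see this: a BWF is an ordinary RIFF/WAVE file with an extra "bext" metadata chunk; the "data" chunk holding the samples is untouched. A toy sketch that builds both flavors and walks the chunks (the helper names are mine, and the bext payload here is just a zeroed placeholder, where real BWFs carry a defined metadata structure):

```python
import struct

def riff_chunks(buf):
    """Yield (chunk_id, payload) for each chunk in a RIFF/WAVE file."""
    assert buf[:4] == b"RIFF" and buf[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(buf):
        cid, size = struct.unpack_from("<4sI", buf, pos)
        yield cid, buf[pos + 8:pos + 8 + size]
        pos += 8 + size + (size & 1)  # chunks are word aligned

def make_wav(samples, bext=None):
    """Build a toy 48 kHz mono 16-bit WAVE in memory; optionally
    add a 'bext' chunk (zeroed placeholder, not real BWF metadata)."""
    fmt = struct.pack("<HHIIHH", 1, 1, 48000, 96000, 2, 16)
    body = b"WAVE"
    body += b"fmt " + struct.pack("<I", len(fmt)) + fmt
    if bext is not None:
        body += b"bext" + struct.pack("<I", len(bext)) + bext
    body += b"data" + struct.pack("<I", len(samples)) + samples
    return b"RIFF" + struct.pack("<I", len(body)) + body

audio = struct.pack("<4h", 0, 1000, -1000, 0)
plain = dict(riff_chunks(make_wav(audio)))
bwf = dict(riff_chunks(make_wav(audio, bext=b"\x00" * 32)))
# The 'data' chunk -- the audio payload -- is byte-identical in both.
```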
  16. Amazing, Eric. We burned through Editall blocks about once every 6 to 9 months per studio. The 45-degree groove would get so wide from repeated use that it wasn't precise any more... and the center would be scratched up from all the times we made longer-than-normal crossfades. The old-but-still-usable ones went into the dubbing or prep room, primarily for splicing leader on tapes. We also figured a box of 100 blades every couple of weeks per studio. (FWIW, my place was known for our editing. And we were doing ads, which have a lot of edits.)
  17. Did you ever use the Studers' scissors? How well did that work in actual practice?
  18. A friend of mine runs MBT, a broadcast technology museum. It derives a lot of its income by renting equipment to filmmakers doing period pieces. First time I walked into the physical museum, I looked around and said "this is my whole career in one building". Including an early 1960s radio room. This guy's 400 analog recorders is the other half of the equation: in there are all the studio recorders I've had to fix.
  19. So why not go all the way? Record on a 'portable' Magnasync. Or even better, record optically. And don't forget to tell post to use multiple analog generations. Oh, and if you really want to capture that classic sound: tell post that all exteriors need reverb. Because everybody knows there are echoes outdoors*. -- * Not kidding. Listen to some very early talkies. To make dialog "sound like an exterior", they used the studio's hard-walled (indoor) echo chamber.
  20. Capitalism is 'way ahead of politics. Many modern cellphones use vocoding: analyzing speech into critical component bands, sending just the data for each band, and re-synthesizing it at the other end. There are fewer bands than in mp3, chosen to reflect the acoustic filters in the human vocal tract, and data is also sent about fundamental pitch and high-end noise (such as sibilance). The technique is almost a century old; it was developed to compress phone calls over analog lines but wasn't practical with the equipment of the era, so it remained a lab and creative tool. Modern DSP gave us the technology to build it into practical phones.
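A toy illustration of the analysis half of a channel vocoder, assuming numpy (the band count and frame size are arbitrary choices of mine; real codecs add pitch, voicing, and entropy coding on top):

```python
import numpy as np

def band_energies(frame, n_bands=16):
    """Toy channel-vocoder analyzer: reduce a frame of audio to a
    handful of band-energy parameters -- the kind of data a vocoder
    transmits instead of raw samples. Band layout is arbitrary."""
    spec = np.abs(np.fft.rfft(frame))
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    return np.array([spec[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

sr = 8000
t = np.arange(160) / sr              # one 20 ms frame
frame = np.sin(2 * np.pi * 440 * t)  # stand-in for voiced speech
params = band_energies(frame)
# 160 samples in, 16 numbers out: roughly a 10x reduction before
# pitch, voicing, or entropy coding even enters the picture.
```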
  21. I can say that in the audio steganography I actually know something about -- Nielsen's Portable People Meter for station tracking -- data compression does serious damage to the hidden signal. PPM relies on details below the threshold of hearing, which changes in each narrow band depending on what the loudest content is. Most brains will ignore softer content in the same narrow band, because there are only so many neural pathways available. Consider just the fundamentals of a flute and a trumpet in the middle of a sustained C... you can hear that there are two instruments, but only because the harmonic patterns are different. These bands aren't limited to a single frequency. Many mp3 algorithms break the spectrum into 384 bands, chosen based on lots of tests of listeners. So a loud "A440" will mask a softer A441 playing at the same time. And it'll completely blast out any very soft random noise (or harmonics from bass instruments) that might be around. Fewer bits are needed to encode that 440-and-neighbors band than if there were full resolution. (mp3 then applies a zip-like compression, and AAC adds some other tricks, but you get the idea.) Nielsen's audio steganography relies on the same trick. If a station's program has energy in a specific mid-range band (overlapping filters with a Q of ~15, IIRC) for long enough, then a single frequency is generated at the center frequency and mixed in ~6 dB softer (how soft becomes a station's choice... too loud is perceived as a harsh distortion, which can turn off listeners). If a burst lasts 480 ms, IIRC, it becomes a data bit. So both systems rely on the same phenomenon: listeners generally can't hear softer signals in the same narrow and momentary band. Nielsen puts the station ID code down there; mp3 kills anything down there to save data. Seems like they'd fight. -- We found that to be true in actual tests, as well...
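The shared phenomenon can be caricatured in a few lines. This is a deliberately crude model, not mp3's or Nielsen's actual psychoacoustics; the band width and masking margin are invented numbers, purely for illustration:

```python
def masked(tones, band_hz=100.0, margin_db=24.0):
    """Crude masking sketch: group (freq_hz, level_db) tones into
    fixed-width bands; anything more than margin_db below the
    loudest tone in its band is judged inaudible. Both numbers are
    made up for illustration, not taken from any real codec."""
    bands = {}
    for f, db in tones:
        bands.setdefault(int(f // band_hz), []).append(db)
    return [db < max(bands[int(f // band_hz)]) - margin_db
            for f, db in tones]

tones = [(440.0, 0.0),     # loud A440
         (441.0, -30.0),   # soft tone in the same band: masked
         (1000.0, -30.0)]  # same level, different band: audible
# mp3 spends almost no bits on the masked tone; PPM hides its
# station ID in exactly that kind of slot -- hence the conflict.
```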
  22. Jim and Borjam, Yes, thanks for Jim's second quote. It does appear to be steganography in the audio stream itself. Which means it would also live in AIFF translations. Next question: since it's using tiny changes in the audio, would the payload disappear if the stream was then run through common psychoacoustic data compressors?
  23. Is it steganography, or just the reporter's shortcut to an idea? The .wav format allows all sorts of nonstandard chunks in the header, for things like broadcast commercial codes. It would be a lot easier to hide instructions to a compromised machine there than to try to bury them in the audio. (FWIW, I've done a lot of work with People Meter radio listenership tracking, which is actual steganography: a station ID code and time stamp are hidden under program audio and picked up acoustically from any nearby radio by a tiny gadget the sample listeners wear. It's a very dicey system, because it depends on masking energy that's in some pop music formats but not in most jazz, classical, or talk.)
  24. Assuming the band is actually moving around while they play, and you're the entire department... can you rig two mics on your boom? The SM57 to one track, gain set for the band; your normal gun or hyper (exterior, right) at a decent level for dialog on another. So the dx track will be overmodded during performance and the mx track will be mostly noise during dialog, but post should be able to sort them out.