
Jay Rose


Profile Information

  • Location
    Boston US
  • About
    Sound designer and industry author. Member CAS and AES. Humor, articles, and studio info at www.dplay.com.
  • Interested in Sound for Picture


  1. I had a strung-together TRS-80 doing studio chores like the sfx database and dub labels/tracking, and my wife hated it... and by extension, all computers. (I think the 80 hated her as well; it seemed to crash whenever she came into the room. But it also had dicey ribbon connectors, and software I'd written, so it crashed a lot.) Then I dragged her to a "computer store", where she could see this new gadget. She later described it: "It had this little bar of soap with a wire, and I could move it on the desk and see something get drawn on the screen. I was hooked!" She got to describe it that way in an interview, after she'd written some two dozen books about Photoshop!
  2. It’s a terminology thing. Countryman calls their E6 an “Earset”. It has a boom, and a tiny mic element. Maybe someday AES or SMPTE or CAS will standardize “earset” and ... “cheekset”(?)
  3. Embrace's website says it's an omni. It may be hard to implement any other pattern: the unit is so small there wouldn't be much difference between the front and rear entrances, and one side is so close to the head it's virtually blocked. I worked for the late Carl Countryman on a project involving his earsets. These were different, of course, with a semirigid boom intended to be molded to the actor's face, and the element much closer to the mouth than Embrace's. But Carl made both an omni and a cardioid version. He told me that buying the cardioid was almost always a mistake. Its proper use was for performances with large speaker stacks behind the singer. For dialog or speech s/r (preachers were a big market), any advantage from directionality was cancelled by the difficulty of getting a consistent sound from both near and distant sources. The omni seemed incredibly directional, rejecting almost everything else in the room... but that was because of inverse-square.
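The inverse-square effect above is easy to quantify: sound pressure from a point source falls about 6 dB per doubling of distance. A minimal sketch (the distances are invented for illustration, not Countryman specs):

```python
import math

def level_change_db(d_near, d_far):
    """Relative level (dB) of a point source heard at d_far instead of d_near,
    by the inverse-square law: -6 dB per doubling of distance."""
    return 20 * math.log10(d_near / d_far)

# Earset element ~5 cm from the wearer's mouth vs. a noise source ~2 m away:
# the room arrives roughly 32 dB below the voice, so even an omni
# "seems" directional.
distant_drop = level_change_db(0.05, 2.0)
```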
  4. The rally issues were a problem with the venue’s surround: they had only Rt. Bernie’s have only Lt. (I’m old enough to remember when political gatherings had an actual C. Not even Dolby can derive good Dialog from just one side.)
  5. If you run out of absorbers, try diffusing the slap between parallel walls. Anything that'll aim it in different directions - like round fiber tripod cases, or even PAs and other bodies leaning against the wall - will help. Not as much as absorbers, but every bit counts. The other thing, of course, is inverse-square. The closer the mic is to the actor's mouth, the less reverb by comparison. Earsets or hair mics can be very helpful... if production is willing to cooperate. I'd rather add verb and ambience to something that's too dry when we get to post, than try to get rid of big-room reverb in an intimate close-up. (I'm waiting for some plug-in company to invent a Neural Network reverb-killer, rather than the algorithmic expander-oriented ones we have now. But I can't figure out how they'd ever derive a useful training set. Polluting dry recordings with artificial reverb would just train the NN to reject that artificial reverb, not the incredibly complex early reflections of the real world.)
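A toy sketch of why that training-set objection is real: the obvious way to build training pairs is to convolve dry recordings with an artificial impulse response, which only teaches the network about that particular family of IRs. Everything below (the IR shape, the signal) is invented purely for illustration:

```python
import random

def convolve(dry, ir):
    """Naive convolution: "add a room" to a dry signal by smearing it
    with an impulse response. Output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

random.seed(0)
# A crude "artificial reverb" IR: direct path plus a decaying random tail.
ir = [1.0] + [0.5 ** (k / 8) * random.uniform(-0.2, 0.2) for k in range(1, 64)]
dry = [random.uniform(-1, 1) for _ in range(256)]

# (dry, wet) would be one training pair - but the net only ever sees
# synthetic tails like this one, never real early reflections.
wet = convolve(dry, ir)
```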
  6. Purcell's Dialog Editing is an excellent book. He's a very good writer, and covers every aspect of turning the ransom note of edited production audio into something that'll work smoothly and quickly on the dub stage. I recommend it highly, and have a copy in front of me right now. But he's primarily an editor, not a rerecording mixer. While he walks you through just about every possible editing scenario with lots of pro tips, his book has less than a dozen pages on processing. I wrote Audio Postproduction to fill that gap. There are sixty pages just on equalization, dynamics control, and noise reduction, plus chapters on time domain (including reverb) and other processing. I cover dialog editing - but nowhere near as deeply as Purcell - plus editing music and sfx, and recording VO and ADR (which are postproduction operations). It also comes with a one-hour audio CD of tutorials, examples, and diagnostics. Ideally, you should have both books. They've also both been out long enough that there are used copies around. But if you buy a used copy of mine, make sure it has the CD. A lot of its content was cleared only for single-CD-with-book, so I can't send you replacement files.
  7. Not witchcraft, just a totally different way of dealing with noise, one that wasn't practical until we had today's powerful host computers and cloud services like AWS. When the iZotope and Audionamix software first came out, I wrote a CAS Quarterly article about how it thinks. Cedar's realtime units, just about every NR plugin prior to these new ones, and even the old solution of running a Cat 22 decoder on production track, all rely on narrow-band expansion with carefully chosen thresholds. So does mp3, in how it simplifies PCM audio streams to make them smaller.
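For anyone curious, the narrow-band downward expansion those older tools share can be sketched in a few lines. Each band gets its own copy of this gain law; the threshold and ratio values below are invented for illustration, not any product's actual settings:

```python
def expander_gain_db(level_db, threshold_db=-50.0, ratio=2.0):
    """Downward expander for one frequency band.
    Above threshold: unity gain (dialog passes untouched).
    Below threshold: for every 1 dB the band falls under threshold,
    output falls `ratio` dB - so low-level noise is pushed down further."""
    if level_db >= threshold_db:
        return 0.0
    return (level_db - threshold_db) * (ratio - 1.0)

# A band carrying dialog at -30 dB is untouched; a band idling at -70 dB
# (mostly noise) gets pulled down an extra 20 dB.
dialog_gain = expander_gain_db(-30.0)
noise_gain = expander_gain_db(-70.0)
```

The whole art, as the post says, is in choosing those per-band thresholds; set them wrong and the expander chews on quiet dialog instead of noise.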
  8. I am glad I learned ProTools. That way, I was even more impressed when I saw how well Nuendo improves my workflow. Tried to run parallel for a few years, then stopped paying for upgrades on the PT I wasn’t using. YMMV. I cut my teeth long before DAWs were practical. So I already had a reputation and client base who trusted me. But if you’re in LA and just starting out, knowing ProTools is probably essential to finding a studio job.
  9. I came across a new product, VoiceGate from Accentize. The name is a misnomer: it's not a gate, but a NN-driven dialog-vs-noise separator. Same category as iZotope's Dialog Isolate module and Audionamix's IDC, but with some major differences. Runs very efficiently as a channel-insert plug-in, or in an offline window. They've been fine-tuning the beta - and I've found them very responsive to user suggestions (plus adding features I hadn't thought of). Should be shipping in a week or so. I just posted a hands-on of the beta, with before/after audio samples, in a thread at Geekslutz. Worth knowing about. (Just don't tell any DPs... or else it'll be one more reason for them to say "Don't worry about dialog during production because there's a new magic plug-in." It ain't magic. But I'm adding it to an arsenal that already includes Rx7Adv and IDC, because it's a useful tool. 😉)
  10. Danish, which clips that you've done -- even on your own, for fun -- are you proudest of? You can still show them to clients with the caveat "this is my demo, not the real film". If you don't have anything you're proud of yet, you're not ready to pitch yourself to someone else. But... don't circulate them on the web. That can hurt your reputation in the long run. And it's infringement... if you ever strike it big, somebody's going to find your old demo and sue you. And re-do your demo as soon as you've got real projects, even pro bono or tiny-budget ones. Get rid of the fake ones, and replace them with something that faced real-world challenges.
  11. If you don’t have a sense of humor, don’t go into sound. Or maybe a sense of the ridiculous. Or a sense of pathos.
  12. This mummy isn't a movie, but an actual mummified Egyptian priest from 3000 years ago. Researchers wheeled his remains into a CT scanner, mapped his vocal tract, and 3D-printed a replica of his throat and mouth. They bolted it to a compression horn driver, and claim it reproduces the guy's authentic voice. It makes a good story in today's NYTimes along with a brief audio sample and a link to the research paper in Nature. It's also only a story. The mechanism used to create "speech" -- a complex vowel waveform generated by a computer and controlled by a joystick, then sent through a few fixed low-Q resonators in the 3D-printed "mouth" -- is nothing like the way human voices work. We have high-Q resonators, constantly moving to form different filters on the wideband buzz from the vocal folds. Essentially, they've put a non-linear horn on the output of a conventional speech synthesizer. And it's just guesswork, because we have no idea what sounds (or resonances) were used in the priest's language. The real breakthrough is mapping and printing an ancient mouth, even if it doesn't have the muscles essential for speech. It's a bit more sophisticated than getting a new set of false teeth... but how they made this mouth "talk" is just window dressing. So why am I posting this admittedly speech-nerd story at JWSoundGroup? Aside from the scientific interest (yes, I do get off on this stuff)... Some producer is going to glance at the Times' article, and then demand we create authentic voices for their next horror film or biblical epic!
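The source-filter idea described above - wideband buzz from the vocal folds, shaped by moving high-Q resonators - can be sketched crudely. The formant frequencies and Q values below are rough illustrative guesses, not measurements from the Nature study:

```python
import math

def resonator(signal, freq, q, sample_rate=16000):
    """Two-pole resonant filter: a crude stand-in for one vocal-tract formant.
    Higher Q means a narrower, more vowel-defining peak; the printed mouth's
    fixed low-Q tubes can't do what moving high-Q formants do."""
    w = 2 * math.pi * freq / sample_rate
    r = 1 - w / (2 * q)          # approximate pole radius from Q
    a1 = -2 * r * math.cos(w)
    a2 = r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

sr = 16000
# Wideband "buzz" from the vocal folds: a 100 Hz impulse train.
buzz = [1.0 if n % (sr // 100) == 0 else 0.0 for n in range(sr // 10)]

# Two cascaded formants (values loosely in the range of an "ah" vowel).
# Real speech sweeps these constantly; the mummy rig can't.
vowel = resonator(resonator(buzz, 700, 10, sr), 1100, 10, sr)
```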
  13. I'm a few years ahead of you, understand what you're facing, and am in no position to give financial advice. But one thing that's kept me sane: When I moved an hour away from downtown (Boston) three years ago, I made sure to get a place that would accommodate a small but workable in-the-box mixing setup. I let enough people know I still had two ears and ten fingers, and was open to anything that (a) interested me, and (b) was being done by someone I liked and/or respected. I haven't made much money this way (and don't have much overhead), but the creative low-$$ indies and pro bonos are just enough to give me ongoing energy and pay a few bills, while letting me sleep late when I want to. YMMV. But I hope it turns out to be close.
  14. "Half the Movie" ...whoops, sorry; show cancelled due to lack of support.
  15. 1970s band: Electric Fudge. Which is what we do in post all the time...