About Bouke

  • Rank
    Hero Member

Profile Information

  • Location
  • About
    I'm a developer of software tools, both 'off the shelf' and custom work.
  • Interested in Sound for Picture
    Not Applicable

  1. Hi all, Since some (well, a lot) of you are responsible for Timecode: QT change has had a major update. It is no longer dependent on QuickTime being installed, and is 64-bit (Big Sur / Catalina compatible). It now also works on MP4 and BRAW files. Besides that, I've recently added an option to offload cards and set TC based on system time, but with a non-drop option. This was done per feature request from Top Gear USA. The logic behind it: they shoot 23.976, but have a couple of drones with no TC nor sound input. Thus the sound guy jams all cams, and the drones get thei
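The 'set TC from system time, non-drop' idea above can be sketched roughly like this. This is an illustrative Python sketch, not the actual QT change code; the function name, rounding, and frame-rate handling are my assumptions:

```python
from datetime import datetime

def tod_to_nondrop_tc(now: datetime, fps: int = 24) -> str:
    """Stamp a time-of-day as a non-drop timecode string.

    Note: at 23.976 a non-drop counter and the wall clock drift apart
    (about 3.6 s per hour), so this is only a 'jam' value taken at the
    moment of offload, not a running clock.
    """
    # Map the sub-second fraction onto a frame number 0..fps-1.
    frames = int(now.microsecond / 1_000_000 * fps)
    return f"{now.hour:02d}:{now.minute:02d}:{now.second:02d}:{frames:02d}"

print(tod_to_nondrop_tc(datetime(2024, 1, 1, 14, 30, 5, 500_000)))
# 14:30:05:12
```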
  2. Whose? If you mean my software, you have to be a tad more specific, as I can't read minds over this distance. Bouke
  3. Hi Nick, If you toy with LiveLog, keep in mind that it was intended to log with minimum effort, hence the customizable comments / shortcuts. I've used the previous version myself for logging just markers / comments in very long lectures, where I was busy controlling and switching 2 cams / PowerPoint, so not much time for writing notes. (I had to post it myself, and did not want to go through the whole shebang again.) But that was with an Avid Marker TXT file. Now the same client (the local university) is going to record the lectures themselves, and I've built a lot of stuff in to accomm
  4. Speaking of, have a look at this: https://www.videotoolshed.com/product/livelog/ (I did not (yet) build in a 'sound report' option, but that would be a breeze.) And that reminds me, did you ever toy with the video slave mode of my LTC reader?
  5. Sorry, but IMHO the most important thing is writing. Both image and sound can tell a 'story', but if the story isn't written, meaning 'defined on what it should be', it's just 'as it is', like taking a snapshot of 'anything'. That does tell a 'story', but it is not 'telling'.
  6. Oh dear... Does this mean that if you output stems and print masters, the latency can be different between them? (That would mean the stems won't add up to exactly the same as the PM later on, potentially introducing phasing...) Next, does this also mean that the latency is 'unknown' until after the QC? My LoudnessChange app now has 'latency compensation' built in, but that assumes you know the latency, and that it's the same for PM and stems. (Toy with it: https://www.videotoolshed.com/handcrafted-timecode-tools/loudness-change/ ) I could add a routine to find the first
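One way such a 'find the offset' routine could work is brute-force cross-correlation between the reference and the delayed file. This is only a sketch of the general technique, not anything in LoudnessChange; the function name and sample-list interface are assumptions:

```python
def find_offset(ref, delayed, max_lag=2000):
    """Estimate how many samples `delayed` lags behind `ref`.

    Brute-force cross-correlation: try each lag up to `max_lag` and
    keep the one where the two signals line up best. Fine for short
    windows; a real tool would use FFT-based correlation for speed.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = min(len(ref), len(delayed) - lag)
        if n <= 0:
            break
        # Dot product of ref against delayed, shifted by `lag` samples.
        score = sum(ref[i] * delayed[i + lag] for i in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Running this on a stem shifted by, say, 20 ms worth of samples against the print master would report that shift directly, so it could then be compensated before summing.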
  7. I would think any half-decent DAW would have a 'find silence' routine... But I already have the code to do so. (Toy with my Transcriber app; you can set it to 'jump over silence'.) Now, I could make something that lets you select files / folders, select channel(s), set a threshold and a minimum silence / low-level duration, and it will do a fine job. BUT, I have no clue what would be a decent way of returning the found data. I could add XMP to the files, but no clue if ProTools accepts that. Or markers / cue points old style to Wave? CSV? It's definitely not rocket science, just making
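The threshold + minimum-duration logic described above is straightforward. A minimal sketch (illustrative only; the function name and the normalized-sample interface are assumptions, and a real tool would read the WAV and pick channels first):

```python
def find_silences(samples, threshold=0.01, min_len=4800):
    """Return (start, end) sample indices of runs where the absolute
    level stays below `threshold` for at least `min_len` samples.

    `samples` is one channel of audio, normalized to -1.0..1.0.
    At 48 kHz, min_len=4800 means silences shorter than 100 ms
    are ignored.
    """
    spans, run_start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if run_start is None:
                run_start = i          # a quiet run begins here
        else:
            if run_start is not None and i - run_start >= min_len:
                spans.append((run_start, i))
            run_start = None
    # Close out a quiet run that lasts until end-of-file.
    if run_start is not None and len(samples) - run_start >= min_len:
        spans.append((run_start, len(samples)))
    return spans
```

The returned index pairs could then be written out in whichever form turns out practical: CSV, old-style Wave cue points, or XMP.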
  8. For the ones that do / know about audio post: When I got back a mix from audio post (a few years ago; mixes came from ProTools), sound was always late by some 20 ms. Audio post told me this had to do with latency introduced by processing. (I can remember from VERY long ago that a 'render' in PT was in fact a playback through DSPs with simultaneous recording; I can't imagine this is still the case...) But the fact is (well, was, question follows...) that sound was not properly in sync. (Don't bitch about 20 ms being no issue, that's not the point.) Question: is this still the case no
  9. Bouke

    What's wrong?

    https://www.youtube.com/watch?v=qFfnlYbFEiE Fun, but it sounds too good to be true.
  10. How about getting everyone involved and making an educated decision? Don't take my word for it; make it so everyone in your group is happy.
  11. Yeah, I know. I worked on Avatar. I was the one who wrote the software to decode the VITC / remove pulldown and transcode part of the rushes on good takes :-) But the video assist back then was recorded on DV from a composite (ugly, noisy) signal. (SD of course...) Things have moved forward since then. I've bitched about this being redundant a long time ago, but as a backup, why not. I don't think this is a wise idea. TOD is ideal for media management. You know what time it was during the shoot. It's also a good reminder ("that take was after lunch"). If post
  12. Reaper will be as accurate as the system clock; that could be off by a few seconds over a day, hardly noticeable over the duration of an average take. Why make it so difficult? Make a file with LTC on one channel and playback on the other, use 'any' player, and split the signal. I have no clue what a 'motion control' thingy is, but I have several clients that do motion capture. They typically have a TOD master TC generator and feed that to all devices (either to the TC input, or as AUX on a normal sound channel for cheap devices). One gotcha I can think of: if you have duplicat
  13. Hi all, For those who need to deliver conforming to specs: I've been working my butt off to make things easier. My LoudnessChange can now measure mono, stereo, 5.1 and 7.1, and it can correct if the loudness is wrong. But wait, there is more! It accepts mono or poly inputs, and you can output mono or poly (thus interleave / de-interleave). For non-interleaved output, you can decide the new track names, and auto-place them in a subfolder if desired. It can also remap if you go from L C R xxxx to L R C xxxx or alike, no matter the interleave on input / output. Then, if you have stem
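The channel remap described above (L C R ... to L R C ...) boils down to reordering samples within each interleaved frame. A minimal sketch, not the actual LoudnessChange code; names and the flat-sample-list interface are assumptions:

```python
def remap_interleaved(samples, n_channels, order):
    """Reorder channels of an interleaved sample stream.

    `samples` holds frames of `n_channels` interleaved values.
    `order` lists input-channel indices in their new output position:
    L C R -> L R C is order=[0, 2, 1].
    """
    out = []
    for f in range(0, len(samples), n_channels):
        frame = samples[f:f + n_channels]
        out.extend(frame[i] for i in order)
    return out

# Two frames of a 3-channel (L C R) stream, remapped to L R C:
print(remap_interleaved([1, 2, 3, 4, 5, 6], 3, [0, 2, 1]))
# [1, 3, 2, 4, 6, 5]
```

De-interleaving is the degenerate case: pick one index per output file instead of reordering within a frame.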