
Bouke

Members · Posts: 373 · Days Won: 15

Everything posted by Bouke

  1. Hi Nick, If you toy with LiveLog, keep in mind that it was intended to log with minimum effort, hence the customizable comments / shortcuts. I've used the previous version myself for logging just markers / comments in very long lectures, where I was busy controlling and switching 2 cams / PowerPoint, so not much time for writing notes. (I had to do the post myself, and did not want to go through the whole shebang again.) But that was with Avid Marker TXT files. Now the same client (the local university) is going to record the lectures themselves, and I've built a lot of stuff in to accommodate that. (Hence the PP / image viewer that you will probably never use...) For your type of work, you probably want to put it in 'Marker' mode rather than In - Out. For output (besides the CSV that's already there) I could build in an option to put cues in Wave, but I'm not sure if that is still used. It can do XMP to MP4, MOV and MXF; I have not tested Wave, but that 'should' not be an issue. What does Gotham Sound's AppleScript do?
  2. Speaking of, have a look at this: https://www.videotoolshed.com/product/livelog/ (I did not (yet) build in a 'sound report' option, but that would be a breeze.) And that reminds me, did you ever toy with the video slave mode of my LTC reader?
  3. Sorry, but IMHO the most important thing is writing. Both image and sound can tell a 'story', but if the story isn't written, meaning 'defined as to what it should be', it's just 'as it is', like taking a snapshot of 'anything'. That does tell a 'story', but it is not 'telling'.
  4. Oh dear... Does this mean that if you output stems and print masters, the latency can be different between them? (That would mean the stems won't add up to exactly the same as the PM later on, potentially introducing phasing...) Next, does this also mean that the latency is 'unknown' until after the QC? My LoudnessChange app now has 'latency compensation' built in, but that assumes you know the latency, and that it's the same for PM and stems. (Toy with it: https://www.videotoolshed.com/handcrafted-timecode-tools/loudness-change/ ) I could add a routine to find the first sound (assuming that is the pop / beep / whatever you call it), with a routine to round it / decrease it to the rounded TC calculated from the BWF timestamp. (Assuming you feed it the video FPS of course...) Or, add a routine where you can state 'pop is on TC XX:XX:XX:XX, with FPS = xx.xxx'. Mind you, BWF does NOT carry TC! It uses a frame number, where a frame is a sample. Math is done based on that to calculate TC (see the sketch below). Bouke
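    A minimal sketch of that math, assuming the bext time_reference (samples since midnight) and sample rate are known and the TC is non-drop-frame; the function name and example values are illustrative, not from any shipping app:

```python
# Sketch only: turn a BWF time_reference (samples since midnight) into a
# timecode string. Assumes non-drop-frame TC; names/values are illustrative.

def bwf_samples_to_tc(time_reference: int, sample_rate: int, fps: float) -> str:
    seconds_total = time_reference / sample_rate        # seconds since midnight
    frames_total = int(round(seconds_total * fps))      # total frame count
    fps_int = int(round(fps))
    frames = frames_total % fps_int
    seconds = (frames_total // fps_int) % 60
    minutes = (frames_total // (fps_int * 60)) % 60
    hours = (frames_total // (fps_int * 3600)) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# 172,800,000 samples at 48 kHz is 3600 s, i.e. 01:00:00:00 at 25 FPS
print(bwf_samples_to_tc(172_800_000, 48_000, 25.0))
```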
  5. I would think any half decent DAW would have a 'find silence' routine... But, I already have the code to do so. (Toy with my Transcriber app, you can set it to 'jump over silence'.) Now, I could make something that lets you select files / folders, select channel(s), set a threshold and a minimum silence / low-level duration, and it will do a fine job (see the sketch below). BUT, I have no clue what would be a decent way of returning the found data. I could add XMP to the files, but I have no clue if Pro Tools accepts that. Or, markers / cue points old style to Wave? CSV? It's definitely not rocket science; just making the interface / exporting the found values would take some time.
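    As a rough idea of the detection itself (not taken from any existing app), a minimal sketch for a mono 16-bit PCM WAV; the threshold, block size and function name are illustrative assumptions:

```python
# Sketch only: find regions below a peak threshold that last at least
# min_dur seconds, in a mono 16-bit PCM WAV. Analyses 10 ms blocks.
import array
import wave

def find_silence(path: str, threshold: int = 300, min_dur: float = 1.0):
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        data = array.array("h", wf.readframes(wf.getnframes()))

    block = rate // 100                        # 10 ms worth of samples
    regions, silent_since = [], None
    for i in range(0, len(data), block):
        peak = max(abs(s) for s in data[i:i + block])
        t = i / rate
        if peak < threshold:
            if silent_since is None:
                silent_since = t
        else:
            if silent_since is not None and t - silent_since >= min_dur:
                regions.append((silent_since, t))
            silent_since = None
    end = len(data) / rate
    if silent_since is not None and end - silent_since >= min_dur:
        regions.append((silent_since, end))
    return regions                             # list of (start_sec, end_sec)
```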
  6. For the ones that do / know about audio post: When I got back a mix from audio post (a few years ago, mixes came from ProTools), the sound was always late by some 20 msecs. Audio post told me this had to do with latency introduced by processing. (I can remember from VERY long ago that a 'render' in PT was in fact a playback through DSPs with simultaneous recording; I can't imagine this is still the case...) But, fact is (well, was, question follows...) that the sound was not properly in sync. (Don't bitch about 20 msecs being no issue, that's not the point.) Question: is this still the case nowadays? Reason: I'm altering my LoudnessChange app. (It can nowadays do way more fun stuff, like altering stems based on the PrintMaster.) It can also set new TC / timestamps. So it's a breeze to subtract some samples from the timestamp to compensate for latency (see the sketch below). But if it's no issue, why make the interface more complicated with something outdated... And then, what to do when the start TC = 00:00:00:00 / AKA BWF SAM 1? Trim the start of the file instead of setting the timestamp X samples lower?
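    The latency math itself is trivial; a minimal sketch of the two cases described above (timestamp can simply be lowered vs. file starts at midnight and would have to be trimmed), with illustrative names:

```python
# Sketch only: convert a known latency in ms to samples, then decide whether
# the BWF time_reference can simply be lowered, or whether the file start
# would need trimming (the time reference cannot go below zero).

def compensate(time_reference: int, sample_rate: int, latency_ms: float) -> dict:
    offset = round(latency_ms / 1000.0 * sample_rate)   # 20 ms @ 48 kHz = 960 samples
    if time_reference >= offset:
        return {"new_time_reference": time_reference - offset, "trim_samples": 0}
    return {"new_time_reference": 0, "trim_samples": offset - time_reference}

print(compensate(time_reference=0, sample_rate=48_000, latency_ms=20))
# {'new_time_reference': 0, 'trim_samples': 960}
```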
  7. Bouke

    What's wrong?

    https://www.youtube.com/watch?v=qFfnlYbFEiE Fun, but it sounds too good to be true.
  8. How about getting everyone involved and making an educated decision? Don't just believe me; make it so everyone in your group is happy.
  9. Yeah, I know. I worked on Avatar. I was the one who wrote the software to decode the VITC / remove pulldown and transcode part of the rushes on good takes :-) But the video assist back then was recorded on DV from a composite (ugly, noisy) signal. (SD of course...) Things have moved forward since then. I've bitched about this being redundant a long time ago, but as a backup, why not. I don't think this is a wise idea. TOD is ideal for media management. You know what time it was during the shoot. It's also a good reminder ("that take was after lunch"). If post is using Avid MC, you can use TOD for normal media management / logging / script girl, and AUX (LTC on a sound channel) for syncing / multicam etc. As always, if you can, talk to post. (I AM post, besides being a developer.) Yes. This means at least two humans (script girl / logger) and an AE re-typing the metadata, with all kinds of opportunities for mistakes. I'm not an IT guy out to take away people's work; I'm doing this to make everyone's life easier, and to automate away boring stuff that is bound to produce mistakes. For motion capture, I've automated some poor soul's five hours a day of stupid, tedious subclipping down to a few minutes. He did not lose his job, he got a better / more challenging one. I don't know much about motion rigs (I know, robotic crane / cam), but the principle is the same as making a video clip with a boombox on set, doing short takes instead of the entire song, then multicamming the bunch together for a fast rough first cut. hth, Bouke
  10. Reaper will be as accurate as the system clock; that could be off by a few seconds over a day, hardly noticeable over the duration of an average take. Why make it so difficult? Make a file with LTC on one channel and playback on the other, use 'any' player, and split the signal. I have no clue what a 'motion control' thingy is, but I have several clients that do motion capture. They typically have a TOD master TC generator and feed that to all devices (either to the TC input, or as AUX on a normal sound channel for cheap devices). One gotcha I can think of: if you have duplicate TCs due to multiple takes on the same clip, media management needs file names or alike to keep track of good and bad takes. That can be a challenge. I would suggest two TCs, one TOD, one clip-position TC. (Yes, on cheap devices that means losing your scratch track, or an A/B switch to record clip TC for a couple of seconds, then switch to scratch sound.)
  11. Hi all, For those who need to deliver to spec: I've been working my butt off to make things easier. My LoudnessChange can now measure mono, stereo, 5.1 and 7.1, and it can correct if the loudness is wrong. But wait, there is more! It accepts input mono or poly, and you can output mono or poly (thus, interleave / de-interleave). For non-interleaved output, you can decide the new track names, and auto-place them in a subfolder if desired. It can also remap if you go from L C R xxxx to L R C xxxx or alike, no matter the interleave in / output (see the sketch below). Then, if you have stems, it can auto-correct the stems based on the printmaster. (That's not all, but I'll keep it short here.) I think the accuracy is on par with the competition (or better), for a fraction of the cost. But, to make it really great, the interface needs a bit of work. It's mayhem with all the different formats to expect / output. I could use some others to have a look at that. If you feel like helping, drop me a PM. Thanks, Bouke
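    For what the remap boils down to, a minimal sketch; the film-order to SMPTE-order mapping and the names are illustrative assumptions, not how LoudnessChange is actually implemented:

```python
# Sketch only: reorder interleaved channels,
# e.g. L C R LFE Ls Rs (film order) -> L R C LFE Ls Rs (SMPTE order).
import numpy as np

def remap(frames: np.ndarray, order: list[int]) -> np.ndarray:
    """frames: (n_samples, n_channels); order[i] = source channel for output i."""
    return frames[:, order]

film_to_smpte = [0, 2, 1, 3, 4, 5]            # output L, R(=src 2), C(=src 1), LFE, Ls, Rs
frames = np.zeros((48_000, 6), dtype=np.float32)   # one second of silent 5.1 as a stand-in
remapped = remap(frames, film_to_smpte)
```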
  12. Do you have an email address? (The site is horrible...)
  13. My SD Pix 240 has trouble (nothing physical), and went to the local agency / dealer / importer. They can't fix it, so it has to go to the mothership. I'm told that it will cost at least 800 euros to fix. Seems a bit steep IMHO... Opinions?
  14. Ok, I'll step out, but I do feel you guys really need to innovate a bit more. (Ok, you can't do that by yourselves, you need manufacturers to give you new toys.) You are all a hardware / black-box-loving crew, while the rest of the world has moved on. And no, I do NOT want to start a war around this! Peace... /bouke
  15. Seconded, but I still miss it...
  16. I'm missing the fun. Elaborate, or tell me to shut up. (Since I like you, I would accept that, but it does not show much respect...)
  17. Ok, fair enough. (I'm an IT tech / editor, this is not my line of work. I'm a hacker / developer who makes the impossible not so impossible.) Of course the metadata mentioned could fit into the stream, but one would need to get it out again. A monkey patch would be to put a cheap webcam on the devices (if they display all status at once); that would set you back another 100 bucks.
  18. I feel left out... (Or, I fail to see the problem...) 50 meters of SDI is no problem. SDI cable is like, 1 euro a meter? SDI can carry 8 channels of 24-bit sound. Cat 5 (or, go crazy, Cat 6) should do the trick as well. So, what is the problem with getting some SDI embedders / de-embedders? The whole setup should be some 500 euros per 8 channels... (Unless BlackMagic stuff ain't good enough.) Then if there is video assist / cams, they have SDI embedders as well... (A cam is one too, same as my Pix recorder.) Fibre is normally packed in Kevlar shielding. At a festival I worked, a big forklift drove into our fibre cable, taking down lots of fences and nearly tearing down a tent. The cable survived. (The glass fibre itself is quite flexible, and the Kevlar does not stretch at all. I would not worry about that.) So, what am I missing here?
  19. Hi Ed, Another Australian member also bitched about this. (And had other needs.) My LTCreader nowadays can play back video in sync with incoming TC. (From PT it can be LTC or MIDI.) https://www.videotoolshed.com/handcrafted-timecode-tools/ltc-midi-readerconverter/ Under the hood it uses VLC for playback, and it can run in the background. It likes ProRes or DNxHD files; long-GOP does not work well for accurate sync. Now, it's still a work in progress, but do toy with it. (There is a free demo.) Let me know if you have questions / feature requests. Bouke
  20. https://www.videotoolshed.com/product/ltc-convert-auxtc/ You can set new (sample accurate) timestamps on your BWF files, with an offset to your liking. hth, Bouke
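    For what such a sample-accurate timestamp offset amounts to at the file level, a minimal sketch of patching the bext time_reference in place; it assumes a plain RIFF layout with a standard bext chunk, a real tool would validate far more, and the path / offset are illustrative only:

```python
# Sketch only: nudge the BWF time_reference (bext "samples since midnight")
# by a fixed number of samples, directly in the file.
import struct

def shift_time_reference(path: str, offset_samples: int) -> None:
    with open(path, "r+b") as f:
        head = f.read(12)                       # "RIFF" <size> "WAVE"
        assert head[:4] == b"RIFF" and head[8:12] == b"WAVE"
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no bext chunk found")
            ckid, cksize = header[:4], struct.unpack("<I", header[4:])[0]
            if ckid == b"bext":
                pos = f.tell() + 338            # 256+32+32+10+8 bytes into bext
                f.seek(pos)
                (old,) = struct.unpack("<Q", f.read(8))
                f.seek(pos)
                f.write(struct.pack("<Q", max(0, old + offset_samples)))
                return
            f.seek(cksize + (cksize & 1), 1)    # skip chunk (word aligned)

# e.g. pull the timestamp 960 samples (20 ms @ 48 kHz) earlier:
# shift_time_reference("take_01.wav", -960)
```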
  21. Please send me more test files! (15 seconds is plenty.) And, please tell me what (audio) settings were used! Well, normal TC output is quite loud; I take it you selected 'line' input on the GoPro? Then, I'm not quite sure if ProTune will affect external input... Please use my WeTransfer: https://videotoolshed.wetransfer.com/ I'm pretty sure there will be 'some' magical settings, and using the .wav instead of the AAC sound is not that hard, so I should be able to make this fly. If you have a clap, please use it, and also send the corresponding normal BWF files.
  22. Well, it also makes a nice real-world test case. (Ok, a nightmare one, but still fun.) What I don't get is why it works here and not on your systems. Do both of the Wave files work as expected? (The first one is a bit soft, but readable; the second one took me some time to adjust the code for.) Worst case, I can always fall back to the Wave files, but I would need to write some extra code to make that happen.
  23. Hmm, 6.0.6 did work for me... However, I've altered the routine again, and it now works (at least here) on both files. The 25 FPS you see comes from your settings. The column displays the source FPS, NOT the LTC FPS. (After scanning, you can see the LTC FPS by double-clicking a clip.) 6.0.7 is now on my site, but even if it does work for you now, I would not do production this way. Your second test clip is strange. The signal is a reversed sawtooth. That normally indicates a wiring, impedance or hardware problem. Or, the cam's electronics kicked in. The Wave starts OK, fades out quite fast, then fades back in with the sawtooth. That looks like the cam detected a 'problem' and tried to fix it....