Bouke

Members
  • Posts

    317
  • Joined

  • Last visited

  • Days Won

    14

2 Followers

About Bouke

Profile Information

  • Location
    Netherlands
  • About
    I'm a developer of software tools, both 'off the shelf' and custom work.
  • Interested in Sound for Picture
    Not Applicable

  1. I have no clue what the NATO-BRICS yadda is about, please elaborate.
  2. No clue how much, but LTC 'should' have a low-pass filter applied to help here, so I don't think it's a big issue. DC blocking: probably all devices do it. AGC varies, but that should also be no issue at all, since in both LTC and IRIG the gain is not important. IRIG will have no fight with a DC-blocking system; LTC 'could', but I have never seen it. (DC blocking should not be an issue, or AC/DC would not sound good. A distorted guitar is close to a square wave :-)
     Why? At 48,000 Hz you would be as accurate as 0.02 milliseconds, and with (to keep it simple) oversampling (like for true-peak finding) you could get 4 times more accurate. And then, if you have the exact frame rate, you could do some math and average things out to get even further accuracy. My 2 ms is a guess; I have no means of real-world testing. I can test my own software-generated files, and then I indeed get 0.02 ms accuracy, but that has nothing to do with the real world. (What my routine does is generate lists of sample values and sample positions, omitting samples where no zero crossing occurred, then find bits, then find the sync word, and thus the exact sample where a new LTC frame starts is known.) I don't bother with oversampling; although it wouldn't be difficult, the theoretical 0.02 ms is more than enough.
     While we're talking about this (fun stuff imho), others are talking about phase issues between boom and lav. Keep in mind, I am NOT an audio engineer, but some math: to get two mics 'really' in sync, sound must arrive at both at the same moment. Assuming a very close boom and a lav, there is still 30 cm of difference. Sound travels at roughly 340 m/s, so 0.3 / 340 ≈ 0.9 ms of offset already. (I can't understand how you could ever work out all phase issues, as each frequency has its own wavelength and thus a different phase angle, and sound never comes from one direction / source alone, but that's another issue.)
I don't bother with more accuracy, nor with exact measurements, as the typical cheap cams / setups involved introduce so much 'unexpected' that it is 'good enough'. (And I provide a user-custom offset, if the offset is constant, which, funny enough in a digital world, is not always the case.) Then, the high accuracy for syncing sound to picture is ONLY possible if the end user either lets me render new video files with the audio embedded, or changes the BWF timestamp. BUT, stupidly enough, EVERYONE changes the video TC to match the LTC instead of altering the audio timestamps, thus rounding the found values to whole frames and introducing a potential 20 ms error instead of 0.02 ms accuracy. (Still good enough for rock 'n roll most of the time, but that's beside the issue.)
Fun! Can you point me to the project? For science, I can totally understand it. For indie work, please don't, as even the simple LTC workflows cause enough trouble already. Of course there will be a few smart kids who get it running, but it won't be for the masses. The issue remains that you need to enter some parameters for your recording devices / setup (wireless introduces delay), but for measuring delta T it probably always works. What I don't understand (not knowing your signal encoding / error handling / checksum yadda) is how it comes that you are only 0.083 ms accurate. (Yeah, of course, if you have a boombox on set and let the cams walk around without position info it will be hell, but even that could be worked out with two boomboxes and Pythagoras :-)
Thank you! (I wish I could do more open source, but I'm fighting for a living already. I can give you a better Python Timecode module than the current one on PyPI, which is filled with bugs; e.g., it floors values instead of rounding them, resulting in being off by a whole frame every now and then.)
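To make the zero-crossing approach from the post above concrete, here is a minimal Python sketch. It is my own illustration, not the actual routine from the software being discussed: it lists the sample positions where the signal changes sign, then decodes the gaps between crossings as biphase-mark bits (LTC's encoding: every bit boundary has a transition, and a '1' bit adds an extra transition in the middle).

```python
def zero_crossings(samples):
    """Return the sample indices where the signal changes sign."""
    crossings = []
    for i in range(1, len(samples)):
        if (samples[i - 1] < 0) != (samples[i] < 0):
            crossings.append(i)
    return crossings

def crossings_to_bits(crossings, half_bit):
    """Biphase-mark decode: two short gaps (~half_bit samples each)
    make a '1', one long gap (~2 * half_bit) makes a '0'."""
    bits = []
    i = 1
    while i < len(crossings):
        gap = crossings[i] - crossings[i - 1]
        if gap < 1.5 * half_bit:   # short gap: first half of a '1'
            bits.append(1)
            i += 2                 # skip the matching second short gap
        else:                      # long gap: a '0'
            bits.append(0)
            i += 1
    return bits

# Gaps of 20, 20, 10+10, 20 samples (half_bit = 10) decode as 0,0,1,0.
print(crossings_to_bits([0, 20, 40, 50, 60, 80], 10))  # [0, 0, 1, 0]
```

From the resulting bit stream you would then scan for LTC's 16-bit sync word; the sample index of its last transition pins the frame boundary to within one sample period, exactly the 1/48000 s ≈ 0.02 ms figure quoted above.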
  3. This is a bit of a bullshit sales quote. IRIG-B transmits time and date once a second, and the whole 'the time / date is now exactly: yadda' message takes one second. As I've explained earlier, that does NOT mean it can't be highly accurate. (Like in music, the metronome dictates the beats, but NOT the exact moment a note is played.) But I don't see a place for IRIG-B in the syncing world.
  4. The only IRIG I've seen is on a second track of a BWF in aviation communications recordings. (I wrote an IRIG-B decoder for MakeTranscriberFiles.)
  5. Of course it can be measured, but how accurate it is depends highly on the software / algorithms / sample rate. You also need to take into consideration how accurate your reference clock is, or whether you take the signal itself as the clock. Then, how accurate is the signal itself? (It could jitter a bit.) The exact position of the sync word (the end of it) is defined by the end of the last bit. That is, in audio terms, a polarity change (the end of a half-cycle of the square wave). If you don't write any 'smart' algorithm around it, that is thus as accurate as the sample rate used to digitize the signal. In the case of 48 kHz, that means 1/48000 of a second, which is way more accurate than 2 ms of course. But jitter, latency etc. will mess things up. (Note, dedicated hardware probably will not 'sample sound', just see high/low changes and clock on those.) But why do you ask?
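To put numbers on "as accurate as the sample rate", here is a trivial sketch (my own illustration) of the timing resolution of an edge detected on sampled audio: without interpolation the uncertainty is one sample period, and oversampling divides it accordingly.

```python
def resolution_ms(sample_rate_hz, oversample=1):
    """Worst-case timing uncertainty, in milliseconds, of an edge
    located on audio sampled at the given rate."""
    return 1000.0 / (sample_rate_hz * oversample)

print(resolution_ms(48000))     # ~0.0208 ms per sample at 48 kHz
print(resolution_ms(48000, 4))  # ~0.0052 ms with 4x oversampling
```

That 0.0208 ms is the 0.02 ms figure quoted elsewhere in this thread, and it is indeed two orders of magnitude better than 2 ms.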
  6. First of all, my apologies, I had indeed no clue about the cost of living in LA. Over here the market in film / television has become rotten: the Netherlands has the second-lowest price per hour of TV in Europe (the absolute lowest is Albania, at least that is what I was told by a famous presenter over here). I hardly do editing nowadays, but as a coder I'm lucky if I get 80 dollars an hour, and that's always for 'short' jobs (say 20 hours tops). I will keep this in mind for the next Hollywood job. (I once quoted 15,000 USD for a job that would have paid for itself within 3 months and had a lifespan of several years, but that did not fly, no clue why, as no one else could do it, or at least no one else did afaik...)
  7. So you are doing a job for 900 bucks for one hour, and you are discussing rates with poor smeggers. Either I am a total nutcase, or you are a total arrogant SOB. (Both is possible too, but get real: an hourly rate of 80 is reasonable for a plumber, and as we all know, plumbers dive into sewers, thus they earn more.)
  8. Let me elaborate on that. What I did (this was the tape era) was fast-forward through the footage looking for where people were making large gestures. In that case, I hit play (or play 1.5x, I forget, but I can listen faster than real time), and used parts of sentences that I could understand instead of fuzzy jargon. (I had no clue what they were talking about, and I'm well educated, but still, not my line of work...) I did get compliments on 'how well I had covered the event'; this was one of the first events where I made a decision about the rest of my life. (Note, this was the same company that asked me to edit a visual piece based on a soundtrack without images: 'Here is the voice-over, there is a caption studio, here are 4 bottles, make 20 exciting, thrilling minutes of unbelievable video.' Having a cuts-only U-matic system and a caption cam.)
  9. This makes no sense. I'm an editor, not a sound guy. If you say 'there will be a real post production', how much time does post get? Is it only video post, or is there sound post as well? If there is no sound post, do include a GOOD mixdown (mono!), as your precious multitrack probably will not be used. (There is simply no time.) I know what I'm talking about: I've been given 6 hours of footage to make a 'flashy, exciting compilation of the best parts', and I got 2 hours to do the entire piece, without ANY notes on what the footage should / could include. Just 'footage'. Without further information, do whatever you think is best to get a good mono mix. (A boom is always too late, a gooseneck per 2 people is always aimed at the wrong person; pick your poison.)
  10. No, he's holding it right! He's recording a seagull. As you know, some birds know when you are looking at them. (Seagulls are famous for trying to steal food.) So he's not looking at the bird, so as not to scare it off, and obviously the bird is circling above / behind them. Bottom line, as always in film / TV production: never let the truth get in the way of a good story.
  11. This is new information to me, and is the actual practical answer to the question I originally asked about how precise timecode (meaning LTC) is. Thank you! Do you have a source for further reading?
      You totally did NOT understand a word I've written; please start over. This is NOT the case with BWF files. BWF files DO NOT have timecode, but a sample label, where each sample is a 'frame'. This has been written in this thread several times by several people. Please start over from the beginning; you have not grasped even a bit.
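For the readers still confused about the point above: BWF stores its timestamp in the bext chunk as TimeReference, a 64-bit count of samples since midnight, not as hh:mm:ss:ff timecode. A small sketch (names are my own, hypothetical) of turning that count into a wall-clock label plus a leftover sample count shows there are no video frames involved at all:

```python
def bwf_time_label(time_reference, sample_rate):
    """Convert a BWF TimeReference (samples since midnight) into
    hh:mm:ss plus the remaining sub-second sample count."""
    seconds, samples = divmod(time_reference, sample_rate)
    minutes, sec = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{sec:02d}+{samples}smp"

# 10:00:00 at 48 kHz plus 1200 samples (25 ms into the second):
print(bwf_time_label(10 * 3600 * 48000 + 1200, 48000))  # 10:00:00+1200smp
```

Only when somebody converts that sample count to video timecode does rounding to whole frames enter the picture, which is where the potential whole-frame errors discussed in this thread come from.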
  12. I've done enough bitching for one day; let's all not forget that it's the person who is important. Borjam has in his sigline: "Don't worry about the drums. I'm a percussionist, I can play on cardboard boxes" - Peer Wyboris. So true: Zappa when he was young. (Do click the link!) A must-see, and not just 'cat puking' funny! Then there was a phrase (I forgot by whom): 'I don't care how many tracks you have, how does it sound?' One of our local rock 'n roll heroes was also a painter; he once (for a stupid TV show) painted with shit on the cheapest paper he could find, using a piece of chewed wood as a brush, making the statement that an artist should be able to work with whatever he has. I once wrote here about Robby Müller, who worked with whatever equipment was available. It's about mastering the craft and adapting to situations, not the equipment. /rant
  13. I feel left out. How does the CALM Act make mixes worse? The whole idea was to make the overall loudness equal, so TV commercials are 'as loud as' normal programs. Thus the whole bag of compression / harmonic tricks is useless for 'standing out from the rest'. This gave mixers back the ability to use high dynamics, overcoming the 'peak limits'. This is, IMHO, a GOOD thing. What am I missing? I do understand that 'smart' devices might interfere. (Like the 'auto scaling' on images: changed aspect ratios with or without letterboxing / scaling, 'smart' scaling, whatever, did horrible things to original framing.) About Netflix: Netflix is (afaik) the only major player that has loudness requirements with 'outdated' specs. I don't care if the specs are public; the implementation of the specs is not, and there is NO open-source software available for their specs. That is plain arrogant IMHO.
  14. If you are a very slow reader... I did read it all, and it's not really interesting for normal production, as we all rely on stuff that is available today. Most stuff nowadays just works, and if you need enhancements in your workflow, people like me are available to improve it. But with off-the-shelf products it IS possible to create something. (And that has been the case since the stone age, see cave paintings.) The article isn't that interesting to me, and on some points plain wrong. (Don't get me started on how to use math to solve 'issues'.) The interesting part is the TLX project, where some smart people are trying to solve problems before they exist. (I don't think that has ever fully worked; there are a lot of examples where things have been made 'quite' future-proof, but sadly a lot of other examples where a new standard was outdated the day it was introduced.) What we all know: we have two standards, that's not practical, let's introduce a new one to overcome that; now we have three standards...
  15. Not really. Imagine the following scenario: you're standing on a track, and a high-speed train is coming towards you, traveling at 100 meters per second. The train is 100 meters away. When will you get hit? Does it matter how long the train is?
      Now, unless you have digitized the timecode (assuming LTC; timecode can also travel as metadata, or over old-fashioned RS-422), a bit has a length measured in time, not in samples. Samples are only relevant here if you know the sample rate. Normal LTC has a 'sample rate' of 80 per frame (80 bits...). Thus, in theory, timecode COULD be used for syncing like word clock, as every piece of equipment knows exactly when a new 'frame' starts (note, a 'video' frame; an audio frame in the digital world would mean a sample), like the engine of the train: that's the part that will hit you. I discussed this some years ago with Ambient, and we agreed that this indeed 'could' be implemented, but afaik no one ever built something like this in hardware.
      My software LTC Convert however does this. Since it knows highly accurately what the relation is between video (which always starts at a full frame) and audio, it is possible to do subframe-accurate syncing. But only if you either go over XML or change the BWF timestamp instead of the video timestamp. Following this logic, it is also possible to sync two files and correct for speed changes, by measuring the time between the first and last found LTC frames and doing a tiny bit of math.
      But there is no need for this. If you don't have long recordings, with any half-decent piece of equipment you will stay in sync. Speed differences without word clock / genlock will only show after half an hour, or way more. For very long recordings, just bring the good stuff. More important for phase / timing issues are your precious wireless thingies. A typical wireless mic will introduce 19 ms of delay, and that is about half a video frame, thus A LOT.
(What I don't get is why not everyone is shooting Mono Wave and giving a -19 ms BWF offset to the tx/rx files, and, while we're at it, compensating for the distance of the boom to the subject.) As said, there are plugins that will take care of this in post. And post WILL normally apply an offset every now and then. (Blowing up a building is normally filmed from far away, say 300 meters. Sound travels at roughly 340 m/s, so it arrives almost a second late; to get the sound in sync with the image you have to give it an offset. Is that logical? Yes and no: it's not how it is perceived in the real world, but it is in movies. Like spaceships exploding: space is a vacuum and carries no sound, but go to the theater...) Conclusion: make friends with some post guys, ask what they like, and indeed, test, test, test. (And then test more :-)
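The speed-change correction mentioned in the post above ("measure the time between the first and last found LTC frames, then a tiny bit of math") boils down to one ratio. A sketch under my own assumptions, with made-up example numbers:

```python
def speed_ratio(first_frame_sample, last_frame_sample,
                ltc_frames_elapsed, fps, sample_rate):
    """Ratio between how long a stretch of LTC *should* last (counted
    in LTC frames) and how long it actually lasted in the recorder's
    own samples. Resample the audio by this ratio to correct drift."""
    expected_samples = ltc_frames_elapsed / fps * sample_rate
    actual_samples = last_frame_sample - first_frame_sample
    return expected_samples / actual_samples

# 25 fps at 48 kHz: 1500 LTC frames should span exactly 60 s, i.e.
# 2,880,000 samples. If the recorder counted 2,880,288 instead, its
# clock is ~0.01% off and the audio needs a matching speed correction.
r = speed_ratio(0, 2_880_288, 1500, 25, 48000)
print(round(r, 6))  # 0.9999
```

As the post says, with half-decent gear this only matters on very long recordings; over a short take the ratio stays indistinguishable from 1.0.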