
Why use time code if timestamps are precise enough?


lutzray


5 minutes ago, Jim Feeley said:

I defer to people here who've worked with ACN & Zaxnet; how well have they worked for you?

I can speak to Zaxnet: It syncs pretty much continuously (I believe Zaxnet embeds some sort of LTC signal, but the details aren't publicly known as far as I know). All my Zaxcom devices sync with my master recorder (which is transmitting Zaxnet) as soon as they are turned on. If they go out of range (which does happen), they'll hold sync about six hours, which is a very long time for a transmitter to be completely out of range and unable to re-jam. In practice, I spot check to make sure that things aren't drifting, and I can't think of a time I've had a problem.

I have some Deity timecode gear that does something similar:  They'll operate as a mesh network, so as long as they are reasonably in range of each other, they'll stay in sync indefinitely.  All I have to do is keep track of one "master" box which is permanently attached to my recorder.

Long story short, even though I have far more timecode-aware devices than I've ever used, jamming is mostly automatic in my workflows except for spot checking once in a while to make sure things are working as expected.


49 minutes ago, The Documentary Sound Guy said:

What I was actually thinking of was a music video workflow where an LTC track is laid down on the same timeline as the source music and used as the timecode source for every take (which allows multiple takes to be overlaid in sync very easily). If the audio / LTC source is fed to the camera as a scratch track, you have a useful reference for variable drift relative to video frames. That doesn't necessarily make it easy to fix, but hopefully it makes it possible to use the scratch audio as production audio if something gets screwed up. It's also another scenario where UTC timestamps aren't able to duplicate what timecode can do (UTC timestamps can't be forced to repeat every time the music is reset).

 

Thanks for giving this use case, it's instructive. And just to check: the idioms "scratch track" and "scratch audio" don't share the same meaning for scratch, right? (English isn't my mother tongue.)

 


1 hour ago, Jim Feeley said:

lutzray, at this point perhaps it would be helpful to tell everyone your real name and your involvement with syncing video and audio devices.  Who are you, why is next-gen TC so important to you (ie- were you burned on a few projects) and who are you working for or with? 

🤣 I'm using almost my real name (Raymond Lutz). I'm a soon-to-be-retired physics teacher (college level) and I build GNSS-based syncing devices as a pastime. Seriously. The goal was to use them for my coming YouTube channel. A channel that never took off because I have a full-time teaching job, three kids, a wife, an old house, and this stupid pastime.😏


1 minute ago, The Documentary Sound Guy said:

Now I'm second-guessing myself.  I'm pretty sure I intended them to mean the same thing.

hmm... "LTC source is fed to the camera as a scratch track" = scratch as in sacrificial, and "possible to use the scratch audio as production audio if something gets screwed up" = scratch as in of lesser quality?


34 minutes ago, lutzray said:

hmm... "LTC source is fed to the camera as a scratch track" = scratch as in sacrificial, and "possible to use the scratch audio as production audio if something gets screwed up" = scratch as in of lesser quality?

I don't understand scratch as in sacrificial.  Nothing is being sacrificed that I can think of.  In both cases, I meant scratch as "included as a reference without the intention of using it for publication".  So, it's implied that a lesser quality is acceptable, though it's often perfectly usable as production audio (and what editors do with it after it leaves my hands is up to them).  The difference for me is that I don't monitor scratch audio to make sure it is good quality, whereas I do monitor the tracks on my recorder.


The "scratch" track, AKA "reference track" is an audio feed sent to cameras from the sound dept so that A: playbacks with sound can happen immediately on-set (without syncing up the audio recordings) and B: so there is a reference for the editor to "eye-match" or auto-sync by audio waveform to in the audio syncing phase of post.  A timecode-based autosync is perferred and has fewer issues, but many productions that do not use any form of timecode in the field rely on audio-match syncing (ie "Pluraleyes" and the Premiere onboard version of same) to sync their field audio with dailies picture.  Thus having both a "scratch track" and providing timecode to a camera (from a TC generator that has been jam-synced to the TC generator in the audio recorder) is a very reliable method for making post syncing possible and straightforward.


On 9/30/2023 at 6:43 PM, lutzray said:

🤣 I'm using almost my real name (Raymond Lutz). I'm a soon-to-be-retired physics teacher (college level) and I build GNSS-based syncing devices as a pastime. Seriously. The goal was to use them for my coming YouTube channel. A channel that never took off because I have a full-time teaching job, three kids, a wife, an old house, and this stupid pastime.😏

 

Thanks Raymond!

 

Again, the current TC tools and approaches are mostly working for me. But there is something that could make GNSS syncing appealing: If you and your cohorts can get camera manufacturers to finally put stable clocks and systems into their cameras, then I'd be really interested in GNSS... 🙂 Truly: That would be a win.   


8 hours ago, Jim Feeley said:

If you and your cohorts can get camera manufacturers to finally put stable clocks and systems into their cameras, then I'd be really interested in GNSS... 🙂 Truly: That would be a win. 

This touches an interesting point: how tech innovation happens nowadays in our financialized capitalist economy... Only small, privately owned companies seem to cater to the needs of their user base. If an IPO occurs, shareholder value takes priority and enshittification is not far away. In the beginning, RED was the cool new kid on the block; now it pisses off everyone. Big (and medium) tech plans obsolescence, fights right-to-repair, and enforces vendor lock-in with proprietary ecosystems, ignoring users and interoperability.


To be fair, RED has always pissed people off, even when it was a startup. But maybe for different reasons now that they have financial power. And you don't have to be big to piss people off with patent trolling and vendor lock-in ... just ask Zaxcom. I would add Sound Devices to the list of companies that are being enshittified by corporate priorities.

I agree 100% with Jim that industry standardization is really what it would take for me to be interested in a new timecode solution ... but that seems almost impossible to coordinate.

On the other hand ... our industry has come a LONG way in terms of innovation in specialized gear. It's pretty impressive how much purpose-built kit there is out there to solve almost any problem that we face. I feel like if there's a reason why small, scrappy companies aren't as competitive as they once were, it's because a lot of the problems we used to face adapting equipment built for other purposes to our particular niche have been solved. Perhaps there just isn't as much room for innovation as there was a couple of decades ago...


1 hour ago, The Documentary Sound Guy said:

I agree 100% with Jim that industry standardization is really what it would take for me to be interested in a new timecode solution ... but that seems almost impossible to coordinate.

I understand that for professionals to switch to new gear and/or workflow, the pros have to be way more attractive and more numerous than the cons: my low-cost TicTacSync project is more relevant to hobbyists and DIY YouTubers. How can you go more low-tech than that: I've even implemented acoustical coupling for Point & Shoot consumer cameras without mic inputs😏 (and that killed LTC right off the bat: my limited tests showed that biphase mark encoding doesn't play well with acoustical coupling).
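For the curious, biphase mark in a nutshell (a minimal sketch of the encoding rule only, not my actual implementation): every bit cell starts with a transition, and a one gets an extra transition mid-cell.

```python
# Biphase mark encoding, as used by LTC: a transition at every bit-cell
# boundary, plus a mid-cell transition for each "1". The result is a
# DC-free square wave: great on a line input, fragile over speaker-to-mic.
def biphase_mark(bits, level=1):
    half_cells = []
    for bit in bits:
        level = -level          # transition at the cell boundary
        half_cells.append(level)
        if bit:
            level = -level      # extra mid-cell transition encodes a 1
        half_cells.append(level)
    return half_cells           # two half-cells per bit

print(biphase_mark([0, 1, 1, 0]))  # [-1, -1, 1, -1, 1, -1, 1, 1]
```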

 

[Attached images: CANONacousticSmall.jpg, CanonWord.png, term.png]


You should have led with that.  That's very cool ... scratches my nerd itch like crazy.

My criticisms about this not being a full replacement for regular timecode stand, but what this does look like is a better replacement for laying down LTC on an audio track onto cameras & other devices that don't support timecode at all.  Am I correct in understanding that each TicTac box outputs an audio signal that is then recorded on each device?  Then, the post-processing step reads the GNSS audio track from each file and syncs everything else to the timestamps?

I have to admit, I don't understand what you mean by "acoustic coupling".  Are you embedding the audio GNSS signal into a video signal somehow?  How do you get better-than-frame precision if that is what's happening?

A major downside of this approach is that it requires an extra audio track on each device, which is then unusable for audio.

I think you are right that this is of more interest to amateurs and YouTubers than "professional" film & TV types, but that is still a large and growing userbase!  You need to know a bit more about professional workflows and post-production to understand why it's not well suited to (most) of the work that people here are doing.

For starters, very few productions would want to work from single files that contain both video and audio. Picture and sound are handled separately, by different departments, and remain separate until the very end of the process. A picture editor will work off of either a camera scratch track or may sync to the on-set mix, but is unlikely to listen to or want to deal with all the iso tracks that we record.

Once picture is locked, a sound editor will sync the original audio to the picture edit, not to the original raw footage. In many cases, the production audio *does* end up in sync with the original footage, but there may also be audio clips used that aren't intended to be in sync with the picture as it was originally recorded (e.g. off-screen dialogue taken from alternate takes compared to what is shown on screen). All this sync information is relative to the timebase of the sequence, not to any absolute or clock time. It's the responsibility of the editing software to keep track of the relationship between the timecode of the original audio and video and their relative positions on the timeline.

This timeline counts in SMPTE frames, not in "clock" time. An edited sequence is represented as an EDL (Edit Decision List), essentially a table of reference points that match in and out points in the original footage to specific timecode points on the timeline.
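As a rough illustration (values invented, format simplified), here's one CMX3600-style event and the source-to-timeline mapping it encodes:

```python
# One simplified CMX3600-style EDL event: event number, source reel,
# channels, cut transition, then source in/out and record (timeline) in/out.
EVENT = "001  TAPE01  AA/V  C  01:00:10:00 01:00:15:12 00:00:00:00 00:00:05:12"

def parse_event(line: str) -> dict:
    num, reel, channels, transition, src_in, src_out, rec_in, rec_out = line.split()
    return {
        "event": int(num),
        "reel": reel,               # which original tape / file the media comes from
        "channels": channels,       # AA/V = two audio channels plus video
        "src": (src_in, src_out),   # in/out points in the original footage
        "rec": (rec_in, rec_out),   # where the clip lands on the timeline
    }

print(parse_event(EVENT))
```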

 

Long story short, treating audio and video separately is actually an important function of video editing software, and it's not really advantageous to handle them in a single file. And, because editing mostly involves cutting video, being frame-rate agnostic is actually not an advantage. When keeping video clips in sync with other video clips, being able to identify a specific frame is critically important. Replacing SMPTE would mean re-writing just about every editor and post-production software on the market.

Secondly, as audio mixers and recordists, we are responsible for making sure the audio we record on set is in sync with picture, but we are *not* the people who actually do the syncing. That is usually the responsibility of an assistant editor (or just the editor on an indie film). Thus, an important part of my job is having a discussion with post-production before I start and making sure that my plan for keeping things in sync on set will work with their workflow in post, or, if no plan for post-production exists, making sure that what I do on set can be straightforwardly synced when post does look at it. That puts an "unusual" sync method such as your GNSS audio tracks at a severe disadvantage. I very much do not want to be in a position of having to pitch post-production on a "better" way of syncing. I want to give them tracks that they can use in a way they already understand, and that will suit what they intend to do in post. Most of the time, that means using timecode; on smaller / documentary shoots, it means sending an audio scratch track to camera, and in many cases, it means using multiple methods of sync in case one fails (e.g. supplying a slate and making sure it is used in addition to sending both a scratch track and a timecode signal).

Lastly, I'm a professional, and a significant part of how I earn my living is by maintaining a certain mystique as an audio "expert". That means maintaining a professional image and using tools that fit that image. As much as I like the idea of messing around with TicTac sync boxes and building my own tools, I would never bring them to a professional job because they compromise that image. I don't want to give the impression that I'm cutting corners by being cheap, and I don't want to give the impression that my employers could "do it themselves".

That said, there *are* a lot of workflows where integrating audio and video in a single file makes sense.  They just don't exist much in the professional film & TV world that hires professional audio recordists.  Freelance news shooters, corporate videographers, and, yes, YouTubers all frequently work solo or in small, integrated teams, and they edit quickly and simply.  Notably, they also don't tend to make much use of timecode either; for the most part, they will record audio direct to camera, and if they do use a separate recorder, waveform sync is "good enough" for most of what they are doing in post.  They aren't trying to coordinate footage across many different departments and versions, which is where timecode really becomes necessary.

I think the TicTacSync could make sense for "in-house" production where the people doing the filming are the same people editing in post.  That sort of production can certainly benefit from better, more automated tools to keep single takes of prosumer equipment in sync, and they are more likely to want to edit single files that contain all the video and audio in one place.  It might only be an incremental improvement over waveform sync or the Tentacle app, but I do see the benefit, both in terms of speed and precision.  I do think GNSS is potentially a better way to sync individual takes of video and audio into single files.  Just don't claim it's a full replacement for SMPTE timecode ... SMPTE does a LOT more than just sync individual takes, and it's evolved into the beast it is for good reasons that can't be replaced by UTC timestamps.


4 hours ago, The Documentary Sound Guy said:

Am I correct in understanding that each TicTac box outputs an audio signal that is then recorded on each device?  Then, the post-processing step reads the GNSS audio track from each file and syncs everything else to the timestamps?

 

Yes, you're correct. My gizmo is, actually, a timecode generator but not LTC. And acoustic coupling is a way of transmitting a signal where no direct electrical connection was initially designed: a small speaker is put in front of the mic (that's how computer modems worked in the 70s: Bell didn't want anyone to mess with their circuitry, but was eventually forced by regulators to open up).
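As for the post step: in Python-ish terms, the alignment boils down to something like this (a minimal sketch; file names, timestamps, and the decoding step are invented stand-ins):

```python
from datetime import datetime

# Once each recording's sync track is decoded to a UTC start time,
# alignment is just a sample-offset calculation against the earliest clip.
SAMPLE_RATE = 48_000

clips = {  # hypothetical decoded start times
    "camera.mov":   datetime(2023, 10, 4, 14, 3, 12, 250_000),
    "recorder.wav": datetime(2023, 10, 4, 14, 3, 11, 0),
}

origin = min(clips.values())  # earliest start defines the timeline origin
for name, start in clips.items():
    offset = (start - origin).total_seconds()
    print(f"{name}: starts {offset:.6f} s in, shift by "
          f"{round(offset * SAMPLE_RATE)} samples")
```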

 

4 hours ago, The Documentary Sound Guy said:

Long story short, treating audio and video separately is actually an important function of video editing software,

 

Indeed: for now my software simply strips off the AUX track once video and sound are merged, but I plan to code the option to keep it for mixdowns in DAWs (as illustrated in this master plan on Hackaday; back then I called the sync signal YaLTC).

 

4 hours ago, The Documentary Sound Guy said:

A major downside of this approach is that it requires an extra audio track on each device, which is then unusable for audio.

 

Well,  LTC was designed as an audio signal precisely for that. But I concede it's kind of a hack and wasteful: manufacturers should implement it in their hardware 😏

 

4 hours ago, The Documentary Sound Guy said:

You need to know a bit more about professional workflows and post-production to understand why it's not well suited to (most) of the work that people here are doing.

 

That's why I'm in this forum! To learn about the pitfalls of my project regarding 'real' production settings. It's not an easy task: having to hang out with three kinds of crowds... camera users, sound engineers AND editors... So, thanks for your lengthy reply, chock-full of data!


2 hours ago, lutzray said:

Well,  LTC was designed as an audio signal precisely for that.

I'm not sure if that's historically true. I think it's coincidental that it happens to be readable as audio ... all the older equipment I can think of used dedicated (non-audio) ports for LTC. Prior to digital systems, it took special circuitry, not just software, to read an LTC signal, so I think running timecode over circuitry designed for audio has always been a hack, not a design consideration. Because of that, recording LTC on audio circuitry wasn't that useful ... you still needed hardware to decode the LTC after recording. Plus, older audio circuitry being what it was, LTC would have been very prone to bleeding into adjacent channels. I have trouble imagining it being an "intended workflow".

The oldest use of "audio" timecode that I'm aware of is using Comtek receivers to distribute an LTC signal to timecode slates (again, a music video workflow, and maybe useful before there were timecode boxes?)  As far as I know, the modern workflow that records LTC on an audio channel is fairly recent:  Tentacle suggested it (and built software to support it) as a means of getting timecode into cameras that didn't support it electronically. 

But, maybe that hack has more history than I'm aware of.  Perhaps some old-timers can step in and correct me if I'm wrong.


Each frame of LTC is coded by 80 bits.

At 24 fps, that's 24 × 80 = 1920 bits per second: a square wave sitting right in the audio band.

That's a perfect signal to record on an audio track, whether the center track of the Nagra IV-S or a specific track of an analog VCR (1", 2", U-Matic and Betacam were all analog recorders).

Even Digital Betacam used 3 analog tracks to record the CTL (control signal for linear editing), the LTC, and a scratch audio track for fast forward and rewind.
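The same arithmetic for the common frame rates (a quick sketch; with biphase mark encoding, the square wave's fundamental sits between half the bit rate, for all zeros, and the full bit rate, for all ones):

```python
# 80 bits per LTC frame; the bit rate scales with the frame rate, and the
# resulting square wave sits comfortably in the audio band.
BITS_PER_FRAME = 80

for fps in (23.976, 24, 25, 29.97, 30):
    bit_rate = fps * BITS_PER_FRAME
    print(f"{fps:>6} fps: {bit_rate:7.1f} bit/s "
          f"(fundamental {bit_rate / 2:6.1f}-{bit_rate:6.1f} Hz)")
```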

 


LTC on analog audio and video equipment was very much an "intended workflow", and was a pain in the ass.  It did bleed, and when recorded the edges of the LTC word would get rounded off so recovering usable TC from an analog recording was a serious challenge.  Since this TC was absolutely vital to the synchronization process a great deal of effort was put into the recording and playback of LTC on the machines of that time.  I used to include instructions and suggestions for telecine operators syncing my 1/4" CTTC audio recordings, as well as my phone number in big letters on the outside of the tape box.  It seemed that about 1 in 6 jobs would have some sort of issue with my tapes, usually (it would turn out) inflicted by the post-syncers on themselves.  Since post-syncing usually  occurred on the graveyard shift I got used to late-night or early AM phone calls, sometimes resulting in me having to go down to the post facility to coach the operator.


In mixes at that time we could be running several analog machines in sync at once via LTC, including multiple VTRs, multitrack tape decks, 2 and 4 track tape decks as well as devices like CD players or outboard signal processors that were synced to or triggered by LTC.  Making an analog machine run in dead sync to LTC required extremely sophisticated electro-mechanical designs, and the setting up of synchronizers that could talk to all these machines was a highly fraught task requiring a lot of time.  LTC continued to be a major factor in audio post operations well into the early digital era, including DTRS tape decks and the early versions of ProTools, WaveFrame, Dyaxis, Sadie, etc etc.  Facilities were often judged in part on how seamlessly all these devices could work together during a mix.  In the Bay Area the patron saints and ascended masters of this were Bob Berke and Kelly Quan.


9 hours ago, lutzray said:

My gizmo is, actually, a timecode generator but not LTC.


So this is actually a cool project you've got there, and you found a nice way to bring TC (or a sync signal) into a camera that doesn't even have an audio input other than a mic. You didn't need to build your own box for that, but why not. But I still can't see the advantage of the GNSS application here. OK, it may be very precise, and it may be easier and cheaper to build than a regular TC box, but in the end it generates a sync signal that all parties involved will record in some way, and later it will help with automatic syncing. Cameras and editing software still tend to think in frames, so your ultra-precise sync signal will still only end up being precise down to a frame (see the sketch below). It's a good idea to record the signal at the beginning and at the end of the recording, but that can easily be achieved with regular TC as well. And you're wasting an audio track when cameras don't have TC capabilities, just like with regular TC boxes.
The only advantage to your system that I can see is if manufacturers really would integrate it into their hardware. Then it could theoretically all happen automatically, although someone would still need to make sure that all devices have actually received a valid signal. And again, the same would be true for any TC box integrated into a camera. They all now have some sort of remote control feature.
Camera makers couldn't be convinced for the past ... I don't know how many years ... to integrate proper TC circuits into their cameras, with a few exceptions. So I'm not sure how you will fare.
Lastly, the amount of time I spend each day thinking about and actually working on timecode is on average about 45 seconds, including recharging the boxes at night. It may be a bit more for the assistant editor, but all in all the system works pretty well.
So how can you improve my day by 45 seconds?
I really don't mean to stifle your enthusiasm, I'm just genuinely curious what it is you think you will bring to the table.
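As a toy illustration of the frame-granularity point above (numbers invented, assuming a 25 fps timeline):

```python
# However precise the sync source, editors snap clips to frame boundaries,
# so sub-frame accuracy gets quantized away.
FPS = 25
FRAME_MS = 1000 / FPS  # 40 ms per frame at 25 fps

for offset_ms in (0.3, 5.0, 25.0, 60.0):  # hypothetical measured offsets
    print(f"{offset_ms:5.1f} ms offset -> {round(offset_ms / FRAME_MS)} frame(s)")
```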


39 minutes ago, Constantin said:

So how can you improve my day by 45 seconds? 

You have nothing to gain here, because you're a pro. But I'll reaffirm the argument that I gave here: TC is complicated for neophytes; the learning curve is steep. You don't recall this because you learned it decades ago, but the myriad of primers, introductory videos, forum posts, etc. explaining TC and its quirks confirms this fact.

 

Even seasoned experts here can feel some pain points, writing: "I'm not convinced that is necessarily any simpler than the mess that we have right now."

 

And there is the cost: my BOM is 45 USD because I'm using off-the-shelf boards, but if this gets traction (or if I speed-learn KiCad) some hacker in Shenzhen will whip up a one-board solution for $20. For the rare occurrences where GNSS satellites aren't visible (buildings with large steel roofs, or deep in a mine), GNSS signals can be simulated with a $330 box.


On 10/4/2023 at 3:15 AM, The Documentary Sound Guy said:

I feel like if there's a reason why small, scrappy companies aren't as competitive as they once were, it's because a lot of the problems we used to face adapting equipment built for other purposes to our particular niche have been solved.


Thank you. I was the one who forced Canon into giving the DSLRs 'real' timecode. (Well, just the system clock embedded as TC.)

Now, how can I quote from multiple posts here? I don't feel like making 20+ new ones...

 


On 10/4/2023 at 4:44 PM, The Documentary Sound Guy said:

I think it's coincidental that it happens to be readable as audio

I agree, it's a TTL signal. But the Phil Reese site (mentioned here before) gives info on a low-pass filter to make it 'suitable' for audio tracks, taking off the sharp edges. (No one except me does that nowadays, but it's good practice.)
 

 

On 10/4/2023 at 6:08 PM, Philip Perkins said:

and was a pain in the ass.  It did bleed,

No, it did not, if you set it up correctly. Most of the time it is / was WAY TOO LOUD. Volume is not important for LTC; it's just 'on / off' for a specific period. The rounding error as mentioned only exists in digital systems, and is just a sample here and there, not really relevant for the system to work.
 

 

On 10/4/2023 at 4:53 AM, lutzray said:

I've even implemented acoustical coupling for Point & Shoot consumer cameras without mic inputs😏 (and that killed LTC right off the bat: my limited tests showed that biphase mark encoding doesn't play well with acoustical coupling).


You might want to read up on IRIG.
This is amplitude modulation, sorta kinda like what you do, but it's a standard (a set of standards, actually). It's still used in aviation traffic control recordings; I have two clients using that for transcription work on communications when accidents / incidents have happened. They get 2-channel wave files with the communication on the left, IRIG on the right, I think to make sure no one can mess up the time stamps. (And no, it does not bleed.)
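For a flavour of how it works, a rough IRIG-B-style sketch (simplified; the 10 ms bit cells, 1 kHz carrier, and 2 / 5 / 8 ms pulse widths follow the published convention, the rest is approximate):

```python
import numpy as np

# IRIG-B-style amplitude modulation: a 1 kHz carrier whose amplitude is
# high for part of each 10 ms bit cell (2 ms = 0, 5 ms = 1, 8 ms = marker)
# and low for the remainder.
FS = 48_000
CARRIER_HZ = 1_000
CELL_S = 0.010                          # 100 bit cells per second
WIDTH = {0: 0.002, 1: 0.005, "P": 0.008}

def irig_cell(symbol) -> np.ndarray:
    t = np.arange(int(FS * CELL_S)) / FS
    envelope = np.where(t < WIDTH[symbol], 1.0, 0.3)  # ~10:3 mark/space ratio
    return envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

signal = np.concatenate([irig_cell(s) for s in ["P", 1, 0, 1, 1, 0]])
```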

 

On 10/5/2023 at 12:46 AM, lutzray said:

And there is the cost: my BOM is 45 USD because I'm using off-the-shelf boards

 

Don't forget the housing / connectors, let alone marketing stuff...
(Funny story: the outside sensor of my central heating broke. List price is €45.50.
https://www.warmteservice.nl/Verwarming/Thermostaat/Buitenvoeler/Buitenvoeler-weersafhankelijke-regeling-voor-Mod-30-400/p/11850325?gclsrc=aw.ds&gclid=CjwKCAjw4P6oBhBsEiwAKYVkq6qGd6-BSr8UJhfZ8P8h2wPtLZhO_ZnMSPSKBs-5lHqilRxZozvPjBoCtKUQAvD_BwE

That's a 30-cent plastic housing, a 5-cent terminal block, plus a 20-cent temperature resistor...)
 


Then, I totally support the idea of a universal clock. Not just for syncing; TOD is always nice to have. It makes the life of editors way better.
Not everyone is doing feature work...
BUT, timecode is a mechanism to always be able to conform, so each frame MUST have a unique ID.
Most of the time TC is written as a start frame number, with a frame rate. (And that does NOT have to be the same as the video frame rate, but if it is not, it gets complicated.)
To get a 'sorta kinda' UUID, add reel name and date (user bits).
The rest is just stupid math.
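A minimal sketch of that math (assuming non-drop-frame TC; the ID scheme is only illustrative):

```python
# Non-drop-frame timecode to an absolute frame number, plus reel name and
# date to make the ID (sorta kinda) unique across a production.
def tc_to_frames(tc: str, fps: int) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frame_uid(reel: str, date: str, tc: str, fps: int) -> str:
    return f"{reel}/{date}/{tc_to_frames(tc, fps):08d}"

print(frame_uid("A001", "2023-10-05", "01:02:03:04", 25))  # A001/2023-10-05/00093079
```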

 


Bouke...TC DID bleed, sometimes at any level.  It wasn't just a head stack issue, it was also a cable+connector+chassis+other electronics and routing issue as well.  A great many things had to be done really well to avoid TC bleed, especially on the way IN to the recording device.  We spent SO many hours chasing TC gremlins in audio post studios and on multicam video shoots.

