
Why use time code if timestamps are precise enough?


lutzray


The use of GNSS syncing has been discussed here before, but in the context of SMPTE linear time code...

 

Ignoring the LTC format completely for a moment: what if camera and sound recorder manufacturers used the GNSS 1PPS signal to simply write the start and finish times into metadata tags? UTC timestamps can be written to the nearest hundredth of a microsecond. Couldn't that information be used in post-production to solve syncing and drifting issues? Sound tracks could be padded or chopped before being merged with video (and <hack!> time-stretched if needed). Frame rate settings on devices would be irrelevant, and no more jam sync needed!

 

When satellite signals are unavailable (e.g. in a steel building), an RF 1PPS repeater could be used. No need to rebroadcast the whole date and time, just the 1PPS tick: that would greatly simplify circuitry and FCC requirements, maybe using the ISM band (since no data would be transmitted?).

 

Device manufacturers could progressively phase out the outdated SMPTE standard, offering systems with both solutions: legacy SMPTE LTC and GNSS microsecond UTC timestamps.

 

I know some contributors here have heavily invested in the development of LTC solutions (and obtained US patents for methods to shoehorn UTC into LTC), but DF vs NDF? integer vs rational FPS??? This is giving headaches and nightmares to too many people...🙂

 

Sure, LTC and/or MTC would still be useful for syncing live events (synths, light effects, etc.), but not for shooting...


So you want us to replace our timecode sync box with another box that can relay sync from a satellite? To me as an end-user, what’s the advantage? 
 

FPS, NDF, DF, etc. have nothing to do with your chosen method of syncing.
 

Genlock and possibly wordclock are also things that need to be considered.
 

Someone did actually propose exactly what you are proposing now, and we had a discussion about that here. I believe he did start development, but I'm not sure where it's at.


Well, I did some research years ago.
A GPS receiver is dirt cheap, and coding for it is incredibly easy. (Just connect to it, and data comes in in human readable form.)
But the satellite clocks do NOT run on earth wall time.
They carry an offset from UTC that grows each time a leap second is inserted (I forget the schedule...).
So, to get the real time, you need to know the offset. But that is only broadcast every hour or so.

Also, it can take a long time for the unit to find a satellite, which is not nice for run-and-gun work.


 



The product that started on Kickstarter was created by
Iterative Features, LLC
https://www.facebook.com/people/DishTC/100057616712042/
https://dish.tc/
which offers two products:
Dish DM and Dish Pro.
I heard you can get the Dish Pro by asking Gotham Sound for a price quote and ordering it directly from them, as they don't stock it in house.

 

Dish.TC says: 

Quote

if you lose signal, an onboard TCXO holds you over until you reacquire it.


The issue with GPS-derived sync is that you need an almost clear view of the sky to get a lock on one GPS signal for the UTC stamp. As you know from using GPS in your car, you need a minimum of 3 satellite signals to triangulate for navigation and 4 signals to show height. If your show is indoors in a basement and you put the batteries in inside the location: no GPS signal = no timecode at startup. The workaround would be to put in the batteries and make sure all the GPS receivers see the sky first at the start of the day.

 

The replacement for SMPTE ST 12 TC is the SMPTE ST 2059 family of standards for synchronization using the Precision Time Protocol (PTP): see ST 2059-1:2015 - SMPTE Standard - Generation and Alignment of Interface Signals to the SMPTE Epoch | SMPTE Standard | IEEE Xplore,
and ST 2120. It's all moving to IP for professional broadcast video.

Things like 60 fps and 120 fps are real in 4K and 8K UHD2 formats, and a SMPTE standard needs to give audio a way to sync to these frame rates. (No, not overcranking.)

 

As Constantin mentioned, wordclock and genlock sync are totally related and should be considered in any new product that moves away from our current sound-for-picture workflow on film/TV sets. You may not need genlock for 10-minute takes on a narrative film set, but others will, e.g. when shooting with multiple camera manufacturers' models and recording a concert or a 3-hour event in-camera: TC sync with a stamp is only accurate on the first frame, and the drift comes from the internal clocks being different.


Thanks to all, but this forum would be a great place if regular members didn't take for granted that the occasional poster is a noob. Yes, I know about TAI vs UTC vs UT1, genlock vs TC, cold fix vs hot fix, etc. And I mentioned this was discussed before, so no need to refer to Ari Krupnik's project: he was using LTC (see his US patent).

 

And nobody answered the original post's question: why would one use time code if UTC timestamps are precise enough? With, as stated, the assumption that manufacturers provide said information via their internal hardware (so, no little box added).

 

It's really disappointing to see manufacturers (Nikon, TASCAM) incorporating new tech into their hardware (ATOMOS AirGlu™) when simpler, cheaper designs could be implemented.

 


Ok, here’s one simple answer: time code and regular clock time are not the same. You can easily see this if you try jamming your device to time of day. To best illustrate this, compare it to your phone’s clock, which should be accurate because it’s pretty much continuously updated when you have a cell or WiFi connection. Already after an hour you’ll be 10 seconds out.


On 9/28/2023 at 9:58 PM, lutzray said:

And nobody answered the original post question: Why would one uses time code if UTC timestamps are precise enough?


In fact everyone who replied answered your question by giving you reasons why it isn't as simple as you make it sound.
You do seem like a noob, though, for finding TC complicated and for thinking that we won't need to worry about frame rates anymore if only we could figure out how to sync our various devices.
 


2 hours ago, Constantin said:

In fact everyone who replied answered your question by giving you reasons why it isn’t as simply as you make it sound. 

Those reasons were based on outdated information and misconceptions. Nowadays GNSS modules can get a fix in less than 10 seconds and don't need a clear view of the sky (mine keep a fix in the basement of a two-story brick house without any problem). And those modules output UTC time, which is civil time (what people here seem to call wall time) modulo your time zone.

 

2 hours ago, Constantin said:

You do seem like a noob, though, for finding tc complicates

I understand LTC thoroughly and still find it convoluted and outdated. A whole lot of people do struggle to grasp it, though (not me).

 

2 hours ago, Constantin said:

You do seem like a noob, though for thinking that we won’t need to worry about framerates anymore if only we could figure out how to sync our various devices. 

You wouldn't have to sync your devices:

  • your devices would simply receive the date+time continuously from the GNSS module,
  • start a 1 MHz counter (counting microseconds) on each rising edge of the 1PPS square pulse emitted by the same module,
  • and when the user starts a recording (audio or video), write down this time as metadata for the file.

The syncing itself is done in post-production, knowing the precise start time of the video recording and the start time of the audio recording. Then you import your synced video+audio into your NLE. Et voilà, syncing without TC! It is simple and doable.
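As a minimal sketch of that post-production step (a hypothetical function, nothing standardized: it assumes microsecond UTC timestamps in the file metadata and 48 kHz audio):

```python
def align_audio(audio_start_us: int, video_start_us: int,
                sample_rate: int = 48000) -> int:
    """Samples to chop from the head of the audio file (positive)
    or to pad with silence (negative) so the first remaining sample
    lands on the first video frame."""
    offset_us = video_start_us - audio_start_us
    return round(offset_us * sample_rate / 1_000_000)

# Audio rolled 0.5 s before the camera: chop 24000 samples.
print(align_audio(1_000_000, 1_500_000))  # 24000
```

That's the whole "sync" operation when only the start needs to match: one subtraction and one rounding.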


What keeps the devices running at exactly the same rate after the timestamp? Each device will have its own time-based drift. After a certain amount of time, each device will have drifted by a slightly different amount. That is why there was a whole thread on genlock: that IS a method of time sync over a long period of time. No starting timestamp will do that. Am I missing something?

 

D.

 

By the way, any pro knows that TC drifts as well.


I think what the author is suggesting is that with start and end times known with a sufficient degree of precision, the audio could be conformed / reclocked / time warped such that sync would be possible in post.

I'm not convinced that is necessarily any simpler than the mess that we have right now.  But I get the impulse behind it, and I can kind of see how it would work.

But ... it can't replace live sync mechanisms (word clock sync / genlock), and, for extremely long takes, it makes the assumption that footage would drift at a constant rate, which probably isn't valid (but maybe wouldn't make much difference in real life?).  I can imagine a "reality" scenario where multiple cameras are shooting indoors and outdoors in varying temperatures that would drift noticeably even if the start and end times were aligned.  Plus, there isn't a good technical solution for reclocking video frames in the same way you can time-warp audio, so trying to sync multiple *cameras* would fail if only the start and end times are known.

Long story short ... I don't buy that we can replace timecode with satellite time in all the scenarios we currently use it, nor do I think it's desirable to add yet another method of sync to the pile.  I don't see why we would want to make our workflows dependent on a satellite service.
 


53 minutes ago, tourtelot said:

What keep the devices running exactly at the same rate after the time-stamp?

Nothing

 

53 minutes ago, tourtelot said:

At the end of a certain amount of time, each device will have drifted a slightly different rate.

For short takes, syncing only the beginning is sufficient. If the drift between the camera and the audio is significant, the timestamps written at the end of each file (video and audio) can be used to time-stretch the audio to fit the video file at both ends.
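To make the endpoint idea concrete (again a sketch, not an existing tool): with start and end timestamps on both files, the stretch factor for the audio is just the ratio of the two durations.

```python
def stretch_factor(audio_start: float, audio_end: float,
                   video_start: float, video_end: float) -> float:
    """Ratio by which to time-stretch the audio so that, after the
    head has been padded or chopped, both endpoints line up with the
    video. All arguments are timestamps in the same unit (e.g. UTC seconds)."""
    return (video_end - video_start) / (audio_end - audio_start)

# Audio clock ran 100 ppm fast: its "600 s" take spans 599.94 s of video.
print(round(stretch_factor(0.0, 600.0, 0.0, 599.94), 6))  # 0.9999
```

A factor that close to 1 is well within what audio time-stretchers handle transparently.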

 

53 minutes ago, tourtelot said:

By the way, any pro knows that TC drifts as well.

I'm not a pro, but I can affirm that every clock drifts except atomic ones based on cesium (because that's how the length of a second is defined).


5 minutes ago, The Documentary Sound Guy said:

I think what the author is suggesting is that with start and end times known with a sufficient degree of precision, the audio could be conformed / reclocked / time warped such that sync would be possible in post.

Yes! But as said, for short takes only the beginning of the audio needs to match the first video frame (this is done by padding or chopping the sound file).

 

9 minutes ago, The Documentary Sound Guy said:

Plus, there isn't a good technical solution for reclocking video frames in the same way you can time-warp audio, so trying to sync multiple *cameras* would fail if only the start and end times are known.

Interpolating video frames is, for now, exclusively a research topic in AI (like resolution enhancement). So perfectly keeping multiple cameras in sync with subframe precision over long takes (as in stereo videography or full solid-angle 3D for VR) indeed calls for genlocked cameras.

 

14 minutes ago, The Documentary Sound Guy said:

I don't buy that we can replace timecode with satellite time in all the scenarios we currently use it, nor do I think it's desirable to add yet another method of sync to the pile. 

Manufacturers could offer both at no cost, without any extra settings, cables or knobs.

15 minutes ago, The Documentary Sound Guy said:

I don't see why we would want to make our workflows dependent on a satellite service.

Well, more and more people on sets rely on RF communications, Bluetooth, cell phones, and whatnot. And they're all exposed to failure (crowded radio airspace, empty batteries, etc.), so it's not clear-cut.

 

And with this system, no more jam sync! No frame rate to set on each device!


30 minutes ago, lutzray said:

But as said, for short takes only the beginning of audio needs to match the first video frame

I've had a 3 minute song drift against a single camera so badly that it was unusable, and time-warping the song didn't help because the drift was variable throughout the song.  I never did figure out what was at fault:  Either a consumer iPod that we used for playback, or possibly bad mains power were my guesses.  It was a lesson that, if you need guaranteed sync, matching the first frame doesn't give that guarantee.  It's why I always record the source audio in a playback scene just in case.
 

 

33 minutes ago, lutzray said:

Manufactures can offer both at no cost

Adding an IC to receive the satellite signals and the logic to implement it is not zero cost.

 

34 minutes ago, lutzray said:

Well, more and more people on sets rely on RF communications, bluetooth, cell phones, and what not. And they're all exposed to failure (crowded radio airspace, empty batteries, etc), so it's not clear cut.

"Everyone is doing it" is sometimes a practical reason for adopting something (i.e. a form of vendor lock-in), but it's not a good reason to add extra points of failure.  As I understand it, you are trying to make a case for the engineering benefits of satellite sync, which have to be balanced against the drawbacks.  Dependence on an external service is a drawback.

 

42 minutes ago, lutzray said:

 

And with this system, no more jam sync! no framerate to set for each device!

This is already the case with many modern devices that incorporate some variety of wireless control protocol (Zaxnet, Ambient ACN, etc.).  The next step is for the industry to standardize around one control protocol, not create yet another one.  This means adopting across brands and across sound and video devices.  I think this is already happening to a small extent with ACN and certain cameras, but it's a long way from universal.


1 hour ago, lutzray said:

any clock drifts except atomic ones based on cesium (because that's how the length of a second is defined

This is what my earlier comment is based on. I was on a semi-amateurish shoot years ago where the script supervisor was looking at an atomic clock they had and thought something was wrong with my timecode slate, because it drifted apart severely from the atomic clock. But of course, the slate and recorder were in perfect sync.

 

You can jam a TC device from your phone, and check and compare it after an hour or two and you’ll definitely see several seconds of drift. 

 

This has been discussed before, and while it seems simple enough on paper, I do believe there is a reason why SMPTE came to a consensus and decided on the standards we now have. 
 

Different frame rates… broadcast standards in different regions…. Differences in electrical grids… etc 


I am definitely, absolutely not against innovation and new ideas, but unfortunately there is (usually) no simple solutions to complex problems. 


2 hours ago, Johnny Karlsson said:

You can jam a TC device from your phone, and check and compare it after an hour or two and you’ll definitely see several seconds of drift. 


ONLY if your TC is 23.976 or 29.97 NDF. Then you have a speed difference of 0.1%, meaning 3.6 seconds per hour.
For 24, 25, 30, 50, 60, 120 and drop-frame rates there will be no noticeable difference.

As for time-warping video, mentioned a couple of times: that is actually VERY easy.
A long take is almost never used in full, so it's easy to cut out or slip a few frames where that angle is not used.
An offset of 1 to 2 frames is often barely noticeable. (If you're at the back of a movie theater, the slow speed of sound already puts you 2 frames out of sync...)
The main trouble is that it takes time, so a clap or flash at the end of the recording, to check how many frames to add or lose, would be handy.

 


Thanks, @The Documentary Sound Guy, for your reply, but you can't argue against my proposal by writing

 

Quote

Dependence on an external service is a drawback.

 

and a few words later praise an external solution (or imply it's a good one):

 

Quote

This is already the case with many modern devices that incorporate some variety of wireless control protocol (Zaxnet, Ambient ACN, etc.). 

 

And the proprietary Ambient ACN uses the crowded 2.4 GHz band; talk about adding extra points of failure...

 

Your story of an iPod doing vibratos is interesting but tangential to a discussion about syncing the output of recording devices.

 

Quote

It was a lesson that, if you need guaranteed sync, matching the first frame doesn't give that guarantee.

 

I was talking about short takes with a few ppm of drift, not filming a scene with the audio source going haywire. For variable drift on a recording device during looong takes (e.g. with electronic components cooling down), the camera could produce a calibration file, dumping frame counts and GNSS timestamps at regular intervals.
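Such a calibration file could be nothing more than pairs of (frame count, GNSS timestamp); in post, variable drift is then handled by interpolating between samples. A hypothetical sketch:

```python
import bisect

def frame_at(utc_ts: float, frame_counts: list, timestamps: list) -> float:
    """Fractional frame number at a given GNSS/UTC timestamp, by
    linear interpolation between calibration samples. frame_counts
    and timestamps are parallel lists, sorted by time."""
    i = bisect.bisect_right(timestamps, utc_ts) - 1
    i = max(0, min(i, len(timestamps) - 2))  # clamp to a valid segment
    t0, t1 = timestamps[i], timestamps[i + 1]
    f0, f1 = frame_counts[i], frame_counts[i + 1]
    return f0 + (f1 - f0) * (utc_ts - t0) / (t1 - t0)

# Nominal 24 fps; the second segment has drifted slightly slow.
print(frame_at(5.0, [0, 240, 479], [0.0, 10.0, 20.0]))  # 120.0
```

With enough samples, this captures drift that is not constant over the take, which a single start/end pair cannot.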

 

8 hours ago, Johnny Karlsson said:

This has been discussed before, and while it seems simple enough on paper, I do believe there is a reason why SMPTE came to a consensus and decided on the standards we now have. 

SMPTE TC standards (VITC and LTC) were designed in 1967 for tape transport, are archaic, and are forced on users by inertia. Time to break free.


1 hour ago, lutzray said:

Your story of an iPod doing vibratos is interesting but tangential to a discussion about syncing the results of recording devices.

If you are only interested in syncing recording devices, sure. My point is that the use cases for the SMPTE system we have now are a lot broader than the ones you are offering a new solution for. They encompass live sync, playback, and long-form live television (e.g. a sports event). If you want to get rid of SMPTE/LTC, those use cases aren't tangential: you have to propose a solution robust enough to replace everything timecode is used for, otherwise you risk ending up with two competing systems and even more of a mess.

 

If it was indeed the iPod that was at fault, the solution that guarantees sync is a playback device capable of locking sync with my camera, which requires an ongoing timecode signal. Start and end times do not provide that guarantee.

 

1 hour ago, lutzray said:

For variable drift on a recording device during looong takes (eg with cooling electronic components), a calibration file could be produced by the camera, dumping at regular interval frame counts and GNSS timestamps.

Yes, this would work.  But now you are back in the realm of an LTC-style timecode where the sync signal is embedded on an ongoing basis, not just the top and tail of the take.  You are reinventing linear timecode, and making it more complicated by adding UTC timestamps.

 

1 hour ago, lutzray said:

praise an external solution (or imply it's a good one):

Not sure what you are talking about here? Maybe the word "external" is being confused? By external, I mean relying on a service that someone else has to maintain on an ongoing basis (someone has to keep the satellites operational). Nothing I suggested requires relying on someone else keeping anything online to work.


Interesting idea.  If Sony, Arri and RED adopt this then I guess it's what we'll do.  If producers get sold on it as a new cheaper+more efficient workflow component then ditto.  Until then this is pretty speculative, since this sort of change to established methods doesn't generally come from users like us, they are invented by the big manufacturers of gear and then sold to filmmakers through advertising and demos to their senior tech advisers.  Standing by for more info!


7 hours ago, The Documentary Sound Guy said:

Yes, this would work.  But now you are back in the realm of an LTC-style timecode where the sync signal is embedded on an ongoing basis, not just the top and tail of the take.  You are reinventing linear timecode, and making it more complicated by adding UTC timestamps.

Except that for this edge case (variable drift during a long take), LTC won't give you any useful information about the variable drift, but regular GNSS timestamps will. So it's not back to square one as you assert. And if it's embedded by the manufacturer, it will be simpler.

 

7 hours ago, The Documentary Sound Guy said:

playback device that was capable of locking sync with my camera ... which requires an ongoing timecode signal.  Start and end times do not provide that guarantee.

What does locking sync mean? Are you saying that your hypothetical iPod, with its DAC clock oscillating between, say, 44.104 kHz and 44.096 kHz, would force its variation onto your camera's frame rate, between 30.03 and 29.97 fps, in a master-slave configuration? No, that would be a kind of genlock.

 

So what does the camera do with the incoming TC values? Are they used solely to sync its internal TC counter, but not the system clock regulating frame capture? And when the user starts a recording, the device writes its current TC value into the metadata as a timestamp, is that right? So this whole TC tool chain is used as a timestamp generator? Correct me if I'm missing something; that's why I started this thread.

 

 


11 hours ago, lutzray said:

And the proprietary Ambient ACN uses the crowded 2.4GHz band, talk about adding extra points of failure...

 

I guess, but my understanding is that Ambient's ACN only uses 2.4 GHz to re-jam devices every six seconds. The devices all have stable clocks, so if they miss a few (or probably dozens of) rounds of synchronization, it's not a big deal. Perhaps 2.4 GHz isn't ideal, but the way ACN works, it doesn't seem like a huge flaw. I think Zaxcom's ZaxNet works in a similar manner: there's TC going over 2.4 that can re-jam devices, but the devices still have their own accurate clocks. I defer to people here who've worked with ACN and ZaxNet; how well have they worked for you?

 

As for improvements on current timecode systems, IIRC, Peter Symes (Grass Valley, SMPTE, etc) was working on (or was part of?) the TLX Project (Time Label, eXtensible) to address issues they saw as problems with TC-12. I recall hearing about this a few years ago, and the main preso I see online is from early 2019, 4.5 years ago. I don't know where that project stands (I dig SMPTE, but I'm not an engineer). Here's a relevant page (feel free to link me to something more comprehensive and current). https://www.smpte.org/past-events/beyond-smpte-time-code-the-tlx-project

 

And more recently, some people at frame.io were pretty excited about the need for more modern timecode/syncing standards. I guess that grows out of their camera-to-cloud projects. Here's a long blog page from last year that I think restates some stuff they presented to a SMPTE conference. It starts with history, goes into what they see as problems with current approaches, and then touches a bit on what they'd like to see next. Long, but there are pictures ;-): https://blog.frame.io/2022/07/11/reinventing-timecode-for-the-cloud/

 

Again, I'm an engineering buff, not an engineer.

 

For me, I mainly work on small, simple stuff, but that can include a few cameras, dedicated sound, and an editor+AE who aren't the producer/director. The projects either have their TC act together, or they really don't. Those that do get by with current standards and tools. Those that don't would, I think, be good candidates for a better/simpler solution, but they also struggle with cloud stuff like frame.io (even though plenty of them are good storytellers), and they're often trailing-edge technologically, so it might be a while before they'd adopt anything different. I'd be happy with something better and simpler. But I'm making do with a good recorder, good TC boxes, a reasonable camera crew (when I'm lucky), and an editor who can fix little mistakes (or appropriately contact me/others if there are any).

 

Perhaps I'd feel differently if there were 20 cameras, several different live remotes, and a bunch of other stuff. But that's not my world.

   

lutzray, at this point perhaps it would be helpful to tell everyone your real name and your involvement with syncing video and audio devices. Who are you, why is next-gen TC so important to you (i.e. were you burned on a few projects?), and who are you working for or with?

 

Thanks for sparking a good, if occasionally contentious, discussion.


44 minutes ago, lutzray said:

What does locking sync mean?

I'm possibly being lax with my terminology here.  Genlocking the playback machine to a video signal (or more likely, both to a master clock)  is certainly a possible workflow, but it isn't what I had in mind.  It's probably a better workflow for a full concert situation though.

 

What I was actually thinking of was a music video workflow where an LTC track is laid down on the same timeline as the source music and used as the timecode source for every take (which makes it very easy to overlay multiple takes in sync). If the audio/LTC source is fed to the camera as a scratch track, you have a useful reference for variable drift relative to video frames. That doesn't necessarily make it easy to fix, but hopefully it makes it possible to use the scratch audio as production audio if something gets screwed up. It's also another scenario where UTC timestamps can't duplicate what timecode does (UTC timestamps can't be forced to repeat every time the music is reset).

You're probably right that modern cameras won't do anything useful with LTC except record a timestamp unless you lay it down as an audio track though.

 
