The Time-Delay Life

I love watching football. It is by far my favorite game on TV. The display of athleticism and skill, the sophistication of the plays, and the strategic battle between opposing coaches all captivate me. Thanks to modern technology, my wristwatch will even prompt me with scoring updates from my favorite teams when I am unable to see them live. This is downright amazing, except for one thing.

Earlier this football season, I was live-streaming a game whose outcome came down to one last play. My team was down by 1 point with 5 seconds remaining and a fourth and goal from the one-yard line. As the camera zoomed in on our field goal kicker lining up, my watch buzzed, flashing the final score (we lost). As I looked back at the TV screen, the kicker nodded, the ball was hiked and, sure enough, the kick was blocked. As sophisticated as my watch may be, it has no AI that can forecast the future. Clearly, the "live" stream was not.

Pundits trace the broadcast delay back to the halftime show of Super Bowl XXXVIII, where Justin Timberlake tore off a strategic portion of co-performer Janet Jackson's costume. Some might also point out that those large parabolic microphones carried by operators along the sidelines sometimes pick up colorful language that the league might not want to broadcast. NFL Films has actually put mics in the shoulder pads of some interior linemen - this is one of the reasons that Peyton Manning's legendary "Omaha" call went viral. Not only was all this audio and video information a bit too risky for the NFL, but the occasional latency hiccups of a live stream could prove very annoying to viewers. The solution - a time delay of 20-60 seconds.

This time-delay technology is a growing part of our modern lifestyle. Back in the day, we set the timer on a VCR to record our favorite shows. Now we can conveniently watch on our own schedule without pre-planning or extra hardware. If we are a few minutes late in turning on the evening news, a smart TV will offer the option of starting from the beginning. The same is true for sporting events, where missing the opening kickoff is no longer a major hurdle - provided you don't look at your watch. If you want to play armchair referee and review a play, you don't need a challenge flag - just a remote with a back button. While all this time-shifting technology may seem revolutionary, scientists tell us that we have been doing it ourselves all along without realizing it.

The human brain is basically incapable of multi-tasking. If you think you are a great multi-tasker, you are more or less wrong. True, we can drive while listening to a podcast, and I've been told there are even people who can chew gum while taking the stairs. As long as different parts of the brain are involved, no problem. But try reading while listening to something else and see how that goes. Since both activities use the same language-processing regions of the brain, confusion will ensue.

Recent research in the journal Science Advances shows that our brains, especially the regions involved in vision processing, are bombarded with constantly fluctuating input - changes in lighting, perspective, and motion - that would be overwhelming if experienced raw. Our brains create an illusion of stability by averaging what we see over roughly the previous 15 seconds, locking onto similar objects and overlooking subtle changes that may not matter. These and other scientific observations have inspired image-smoothing software for smartphones, as well as research funding to answer the inevitable "WTH?"
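The idea of trading responsiveness for stability is easy to see in code. Here is a minimal, purely illustrative sketch (my own toy example, not the model from the Science Advances study): a running average over the last 15 samples flattens out moment-to-moment flicker, much as the brain is thought to smooth over the last 15 seconds of visual input.

```python
from collections import deque

def smooth(samples, window=15):
    """Running average over the last `window` samples.

    A toy analogue of perceptual smoothing: each output value is the
    mean of the most recent `window` inputs, so brief flickers are
    damped while the steady scene shows through.
    """
    buf = deque(maxlen=window)  # only the most recent `window` samples survive
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# A steady "scene" with brightness 10, plus flickering noise on top
noise = (1, -1, 2, -2, 1, -1, 0, 2, -2, 1, -1, 0, 1, -1, 0)
noisy = [10 + n for n in noise]
steady = smooth(noisy)
```

After 15 samples the smoothed value settles on the underlying scene (10), while the raw signal is still jumping around - the same stability-for-latency trade that makes the "live" picture in our heads slightly stale.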

Time-delay living has been with us from the beginning as a built-in method of enabling our brains to cope with the world, and more recently as a man-made technology permitting other humans to manage what we experience. With the arrival of AI and advanced digital signal processing techniques, there is good reason to be wary of the latter.

Of course we can trust our own brains to accurately process that 15 seconds of time-delay data - or can we? The phrase "to see the world through rose-colored glasses" refers to seeing things as better than they actually are, which could even involve editing out the bad frames in that 15-second movie we are about to view. Some deleted frames may be stored in our subconscious - the brain's equivalent of the trash folder on your laptop. Marketing experts know a lot about the subconscious mind, having mastered the art of tapping into and exploiting this region of the brain.

We lead a time-delay life, and scientists continue to explore both the natural and man-made editing processes. Psychiatrists keep seeking new ways to empty our mental trash folder. Digital editing technology combined with modern computing power is nearing the point where completely changing the audio and/or video in a "live stream" is possible - the only limit is ethics. So far, no one has found a dependable way to change the 15-second movie in our heads.

What if the natural and man-made editing processes could someday be merged? What if that 15 seconds of cached brain information could be altered artificially? There are serious efforts (e.g., Neuralink) aimed at interacting directly with the brain through an implanted circuit.

The possibilities are intriguing, although I suppose there is no way of ever getting that missed field goal back.

Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetworkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.


