Paul W. Smith
For better or worse, we humans have been measuring things for a long time. In Genesis Chapter 6, God provides Noah with detailed plans for building a very large wooden boat – 300 x 50 x 30 cubits, to be exact. Noah presumably knew how to measure a cubit: the distance from the elbow to the tip of the middle finger. Some 4,500 years ago, the Egyptians built their pyramids using the cubit as their standard measure. Long before there was an accurate way to measure time, Galileo (1564-1642) used musicians to supply a steady beat while he determined the acceleration due to gravity. Technology has come a long way since then; we now have ridiculously accurate atomic clocks, along with lasers for precise length measurement.
By the Middle Ages, trade had expanded, and a need for recognized standards arose. In the late 18th Century, the French Academy of Sciences decided that the standard for length should be based on the distance from the North Pole to the Equator (measured, of course, along the meridian through Paris). One ten-millionth of this distance, to be surveyed by a pair of French mathematician-astronomers, was christened the “meter.” While less dependent on human anatomy than the cubit, it did pose some difficulty in accuracy and replication. Thanks to modern science, we now know that there were errors in the original survey, and the resulting meter fell about 0.2 mm short of its intended definition. In testimony to the somewhat arbitrary nature of “standards,” that error has never been corrected, even though the half meridian through Paris hasn’t changed.
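The arithmetic behind that 0.2 mm shortfall can be checked directly. A small sketch, assuming the modern geodetic value of the quarter meridian (roughly 10,001,966 m – my assumption here, not a figure from the original survey):

```python
# Sketch of the 1790s definition: one meter = one ten-millionth of the
# quarter meridian (pole to equator, through Paris). The quarter-meridian
# length below is a modern geodetic estimate, assumed for illustration.
quarter_meridian_m = 10_001_966                 # meters (modern estimate)
historical_meter = quarter_meridian_m / 10_000_000
shortfall_mm = (historical_meter - 1.0) * 1000  # how far the 1799 meter fell short

print(f"one ten-millionth of the arc: {historical_meter:.7f} m")
print(f"the surveyed meter fell short by about {shortfall_mm:.2f} mm")
```

In other words, the bar the surveyors produced was about 0.2 mm shorter than one ten-millionth of the actual arc – and we kept the bar.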
Today the maintenance of standards in the U.S. is the responsibility of the National Institute of Standards and Technology (NIST). You would expect them to have insanely accurate standards for length, weight and time, and you would not be disappointed. As an example, the NIST strontium atomic clock is so stable that it would not gain or lose even a second in 15 billion years – it could have been started at the dawn of the Universe and still be on time today.
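A quick back-of-envelope check – my own arithmetic, not an official NIST figure of merit – shows what “a second since the dawn of the Universe” means as a fractional error:

```python
# Back-of-envelope check: a clock that gains or loses at most one second
# over 15 billion years has a fractional timing error of roughly 2e-18.
seconds_per_year = 365.25 * 24 * 3600      # Julian year, in seconds
age_window_s = 15e9 * seconds_per_year     # 15 billion years, in seconds
fractional_error = 1.0 / age_window_s      # one second of drift over that span

print(f"fractional accuracy ~ {fractional_error:.1e}")  # about 2.1e-18
```

That is eighteen zeros after the decimal point – a level of precision the cubit-wielding pyramid builders could scarcely have imagined.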
In a somewhat more esoteric vein, NIST maintains the SRM library, where over 1,300 standard reference materials are stored, including such items as whale blubber (SRM-1945) at $803 per 30 g and domestic sludge (SRM-2781) at $726 per 40 g. There is even peanut butter (SRM-2387) at $1,069 for three 170 g jars. Precise scientific measurement can now confirm what some people have always suspected – domestic sludge and peanut butter are distinctly different.
For at least 3,000 years, beginning with the ancient Greek Olympic Games, we have also been measuring ourselves in one way or another. We periodically assemble athletes from all over the world to determine who runs the fastest, jumps the highest or throws things the farthest. Once the athletes started wearing clothes, corporate sponsors showed up and brought even more focus to the numbers. Today even armchair athletes can measure their key metrics with fitness-tracking devices and apps.
Our obsession with measuring ourselves grew during the 19th Century. Until then, consumer products were made by skilled artisans who hand-crafted everything from start to finish. Once machines were invented that could stamp, cut, and otherwise fabricate components, manufacturing became more and more of a rote process. The skill level of the workforce dropped, and companies sought ways to measure and standardize worker performance.
Frederick Taylor, the father of scientific management (known, fittingly, as Taylorism), believed that management knew too little about the capabilities and motivations of workers to direct them effectively. Fred deserves much of the blame for the quantification of individual performance, as well as for the much-despised time-card. His time-and-motion studies gave rise to the assembly line, where workers were treated as mere mechanisms in the larger machine.
Most of us first felt the harsh impact of all this quantification in school, where we were sorted according to age, grade and rank and soon learned that our future success was at stake. GPAs, SAT scores, performance reviews, salaries, and net worth all signal our growing obsession with self-measurement. My annual physical includes a number of blood tests, producing dozens of cryptic readings which I can plot out over the course of many years. In an attempt to add clarity, each graph includes horizontal lines for “High”, “Low”, and “Average”. As long as my doctor smiles and tells me to come back in a year, I brush it all off and move on. Yet still I wonder, what is all this measuring doing to us?
Having worked as a professional in STEM for forty-plus years, I have personally witnessed and benefited from continual advances in measurement instrumentation and standards, awaiting each new development with hopeful anticipation. While measurement is a powerful tool for understanding and controlling our environment, measurements themselves can be fluid and uncertain; it remains critical to “know your gauge.” As measurement technology improves, things once considered unmeasurable will be revealed. If current trends continue, our future might even include a direct, standardized reading of our own intelligence and emotional stability.
I hope not.
Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetworkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.