Triangulation reminded me that long ago, or maybe just some time back, science was plagued by a nasty 0.1-second problem that turned even you lot into proper Paavos:
The basic thesis is that between the late eighteenth and early twentieth centuries, a period of time — 0.1s — acquired a weirdly powerful role at the heart of science, and indirectly, in the shaping of all modernity.
---
You’ve probably heard something along the lines of the claim that human persistence of vision is around 1/10th of a second, and that this is what allows for the experience of movies. This is the gloriously messy backstory of that idea.
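The arithmetic behind that folk claim is simple enough to sketch, with the caveat that modern vision science treats "persistence of vision" as an oversimplified account:

```python
# Back-of-the-envelope: if afterimages persist ~0.1 s, frames arriving faster
# than 10 per second can fuse into apparent motion. Illustrative only.
persistence_s = 0.1
for fps in (12, 16, 24):
    frame_interval_s = 1 / fps
    verdict = "under" if frame_interval_s < persistence_s else "over"
    print(f"{fps} fps -> {frame_interval_s * 1000:.1f} ms per frame ({verdict} 0.1 s)")
```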
---
The rough story (which the book sort of conveys in passing, but in a very elliptical way) is as follows:
In the late 18th century, astronomers were investigating questions that required observations precise to 0.1s, running up against the limits of human reaction time (the origin story involves an astronomer firing a junior assistant because the junior’s transit timings didn’t match his own).
This created a simultaneous crisis across science (stressing and undermining trust in experimental methods), and philosophy of science (stressing notions of objectivity).
For both practical and philosophical reasons, the “human factor” had to be tackled, and this gave rise to interest in something called the personal equation — a model of a particular observer’s reaction times, on the 0.1s scale, in recording stimuli. It was the “cognitive biases” idea of its time. Observatories even used to measure and record the “personal equations” of astronomers alongside observations attributed to them, to allow suitable (and dubious) corrections to be applied to calculations, along the lines sketched below.
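To make the idea concrete, here is a minimal sketch of what applying a personal-equation correction amounts to. The observer names and delays are invented for illustration, not from the book; historically the corrections were often relative (observer A minus observer B) rather than absolute:

```python
# Hypothetical calibrated reaction delays ("personal equations"), in seconds.
personal_equations_s = {"observer_a": 0.12, "observer_b": 0.21}

def correct_transit_time(raw_time_s: float, observer: str) -> float:
    """Subtract the observer's calibrated reaction delay from a raw timestamp."""
    return raw_time_s - personal_equations_s[observer]

# The same star-transit event, as logged by two observers with different lags:
print(f"{correct_transit_time(5.32, 'observer_a'):.2f}")  # 5.20
print(f"{correct_transit_time(5.41, 'observer_b'):.2f}")  # 5.20
```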
Personal equation studies gave birth to the field of psychology in its modern form. To a first approximation, 19th century psychology in the Wilhelm Wundt era was reaction-time studies around 0.1s. The philosophical presuppositions and imperatives behind this could roughly be described as “rescuing science from the 0.1s objectivity crisis.” This meant modeling and empirically studying the human being as an instrument capable of characterization and calibration (leading to infinite regress questions that were never really resolved).
This gave rise to really thorny questions. Some were well-posed, like the question of the speed of nerve transmission, which eventually got answered in straightforward ways. Others were less well-posed, like breaking down a stimulus-response cycle into a sensation lag, a processing lag, and a motor activation lag, a decomposition we are not really much better at thinking about today, despite our vastly superior understanding of neuroscience.
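The classic attempt at this decomposition was Donders’ subtractive method (my gloss, not the book’s framing): assume the stages add up cleanly, then estimate each stage by differencing reaction times across tasks. A hedged sketch, with invented numbers:

```python
# Donders' subtractive method, illustrative numbers in seconds.
simple_rt = 0.19  # a-reaction: detect a known stimulus, make one fixed response
gonogo_rt = 0.24  # c-reaction: discriminate the stimulus, respond to one kind only
choice_rt = 0.29  # b-reaction: discriminate AND choose between responses

# Under the (questionable) assumption that stages are purely additive:
discrimination_lag = gonogo_rt - simple_rt
selection_lag = choice_rt - gonogo_rt
print(f"discrimination ~ {discrimination_lag:.2f} s, "
      f"response selection ~ {selection_lag:.2f} s")
```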
Psychology quickly found itself in an impossibly messy 0.1s yak-shave, and at the same time, the experimentalists who originally created the practical motivation increasingly began looking for ways around the human factor. This led to, among other things, the birth of high-speed photography and cinematography.
So through the 19th century, we find three main strands of research inspired by the 0.1s limit: experimental methods to mitigate “personal equations”, early “reaction time” psychology research, and the development of photography and other automated techniques to short-circuit human factors altogether, thereby hopefully restoring classical objectivity to science. The book jumps around among these three strands.
In parallel, at a meta-level, we see the evolution of statistics. Gauss and least squares make a surprise appearance in the context of the 0.1s problem. But in this context of real empirical messiness, the mathematical developments seem naive, despite their clear power. Unlike in the standard telling, Gauss didn’t simply solve the problem of varying observer reactions for a grateful experimentalist community with least squares methods. There were fierce debates about whether human reactions were in fact normally distributed, and Karl Pearson’s chi-square test was born of this debate.
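For flavor, here is roughly what a Pearson-style chi-square check of “are these reaction times normally distributed?” looks like in modern terms. The data is simulated and the binning choices are mine; this is a sketch of the test’s logic, not the historical procedure:

```python
import numpy as np
from scipy import stats

# Simulated reaction times (seconds), standing in for an observer's records.
rng = np.random.default_rng(0)
rt = rng.normal(loc=0.1, scale=0.02, size=500)

# Bin the observations and compare against the counts a fitted normal predicts.
edges = np.quantile(rt, np.linspace(0, 1, 11))  # 10 roughly equal-count bins
edges[0], edges[-1] = -np.inf, np.inf           # cover the whole real line
observed, _ = np.histogram(rt, bins=edges)
mu, sigma = rt.mean(), rt.std(ddof=1)
expected = len(rt) * np.diff(stats.norm.cdf(edges, mu, sigma))

chi2_stat = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 2  # bins minus 1, minus 2 fitted parameters
p = stats.chi2.sf(chi2_stat, dof)
print(f"chi2 = {chi2_stat:.2f}, dof = {dof}, p = {p:.3f}")
```

A large chi-square (small p) would say the normal model fits badly, which is exactly the kind of verdict the 19th century debaters were fighting over.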
There was also a steadily developing philosophical crisis, as it became clear through the 19th century that the attempt to rescue objectivity in the classic 17th and 18th century senses (think 1600-1780) would in fact fail.
By the late 19th century, methodological/psychological efforts to mitigate human factors, even with the best discipline, had basically failed. On the other hand, physics-based approaches (in particular interferometry) to cut humans out of the loop, at least to a first approximation, had basically succeeded. This led to physics taking over from astronomy as the most prestigious science.
In parallel, psychology had run aground in impossible problems trying to ground an understanding of humans in reaction-time-based empiricism. And philosophy was in crisis too, unable to formulate a coherent philosophy of science now that “objectivity” was in such trouble. It was a dual epistemic crisis: the reliability of “objective” human observation had been undermined, and the rise of technology had created an “alternative” epistemology of seemingly “observer-free” information with murky qualities.
On the technology front, photography (leading a pack of auto-sensing technologies) was in a race against the human eye, in a story eerily reminiscent of today’s conversations around AI. Though in the early decades skilled human observers easily did better at astronomical observation tasks, the writing was clearly on the wall. Slowly but inexorably, photography (and related automated methods) took over from humans.
Photography itself turned into a practical-philosophical quagmire, with a divide emerging between the experimental scientists who wanted to evolve it as an objective instrument, and commercial adopters, who were turning it into a medium of entertainment, with philosophers unsure what to make of knowledge created without observers, by inanimate devices. Was “sampled data” reality really the same as continuous reality as subjectively experienced by humans? If not, what was the difference? What essence leaked out between frames?
There were huge political stakes as well, particularly around the question of standards for length in post-revolutionary France. The century-long rise of physics at the expense of astronomy is evident here. The increasingly troubled efforts to define the meter in terms of the earth’s circumference gave way to elegant definitions in terms of the wavelength of light.
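The arithmetic of the wavelength definition is striking in its simplicity. Michelson’s 1890s comparison used the red cadmium line; the wavelength and the resulting count below are approximate, from memory rather than the book:

```python
# Illustrative arithmetic for the wavelength definition of the meter.
wavelength_m = 643.847e-9  # red cadmium line, ~643.847 nm in air (approximate)
waves_per_meter = 1 / wavelength_m
print(f"1 m ~ {waves_per_meter:,.0f} wavelengths of the Cd red line")
# -> roughly 1,553,164 wavelengths, close to Michelson's published figure
```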
Things came to a head with the high-stakes international competition to measure the solar parallax during the transits of Venus in 1874 and 1882, important for determining distances in the solar system. Roughly speaking, both human and photographic methods failed due to the 0.1s problem, and the physics approach of getting at the same questions starting with interferometry-based speed-of-light measurements won out. Human observers retreated from the center stage of empirical science along with astronomy, and the synthetic, rather than analytic, function of photography began shaping its future.
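For context on what was at stake: the solar parallax, the angle Earth’s radius subtends at the Sun, fixes the astronomical unit, and that is the number the transit campaigns were chasing. A sketch with modern, approximate values:

```python
import math

# Recovering the astronomical unit from the solar parallax; values are
# modern and approximate, for illustration only.
earth_radius_km = 6378.1          # Earth's equatorial radius
solar_parallax_arcsec = 8.794     # angle Earth's radius subtends at the Sun

parallax_rad = math.radians(solar_parallax_arcsec / 3600)
au_km = earth_radius_km / math.tan(parallax_rad)
print(f"1 AU ~ {au_km:.4e} km")   # ~ 1.496e8 km
```

At this sensitivity, small systematic timing errors in the observed transit contacts translate into meaningful errors in the inferred parallax, which is why the 0.1s problem bit so hard here.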
In this milieu, the philosophy of time became the most important topic in the philosophy of science itself, and the philosopher Henri Bergson rose to prominence on the strength of his study of the matter, establishing a subjectivist tradition that insisted on the inseparability of the human experience of time, and observations and measurements. Effectively, 0.1s became a barrier humans could not see past without the aid of instruments whose epistemic status was deeply suspect. As a result, it turned into something of a religious question. A sort of frequency-domain afterlife zone.
On the other side, a naive-empiricist anti-philosophical tradition arose in science, insisting on the meaningfulness of objective “freeze-frame” notions of time. Roughly, this tradition took reality to be synonymous with whatever was measured by instruments (a modern equivalent is the conflation of computation and subjective consciousness). This tradition increasingly short-circuited the human factor without actually addressing either the psychological or philosophical conundrums raised by running into its limits. A similar naive-empiricist tradition grew in psychology, effectively based on the false security of a shaky empiricism (think phrenology).
The philosophical conflict came to a head in the 1922 Einstein-Bergson debate, creating a temporary victory for the objectivists (in the science sense, not Ayn Rand sense) and a deep schism between two views of time — philosophical time on the one hand (which Einstein grandly declared didn’t exist) and physics-and-psychology time, both rooted in a rather shaky empiricism.
So the story ends around 1922 with: the 0.1s practical problem kinda went away, photography blew blithely past epistemic concerns to ever greater heights (and so here we are at deep fakes and false-colored astrophotography now), psychology abandoned the human-as-instrument reaction time approach and ended up disrupted by unabashedly subjectivist and introspective Freudian-Jungian approaches, and philosophy and science ended up in a weird schism, each claiming the other had no locus standi on certain questions.
Overall, the collision with the 0.1s barrier led to the displacement of human-centric empiricism by automation, the rise of photography as a powerful but epistemologically suspect modality (it is not an accident that film is primarily a medium of fiction rather than non-fiction), and the re-booting of psychology in a subjectivist mode. The philosophy of science evolved from classical through positivist, anti-positivist, and post-positivist phases to its modern indeterminate (imo) condition, marked by no clear consensus on the nature of scientific “knowing.”
https://studio.ribbonfarm.com/p/one-tenth-of-a-second