Seeing the Forest for the Trees

There’s a very interesting article at Wired on the apparently increasing problems with reductionism in science. “Reductionism” is the assumption that increasingly detailed knowledge of the parts will necessarily lead to a better understanding of the whole. Not only is there no proof of that; there is good reason to believe it is not true:

The truth is, our stories about causation are shadowed by all sorts of mental shortcuts. Most of the time, these shortcuts work well enough. They allow us to hit fastballs, discover the law of gravity, and design wondrous technologies. However, when it comes to reasoning about complex systems—say, the human body—these shortcuts go from being slickly efficient to outright misleading.

Consider a set of classic experiments designed by Belgian psychologist Albert Michotte, first conducted in the 1940s. The research featured a series of short films about a blue ball and a red ball. In the first film, the red ball races across the screen, touches the blue ball, and then stops. The blue ball, meanwhile, begins moving in the same basic direction as the red ball. When Michotte asked people to describe the film, they automatically lapsed into the language of causation. The red ball hit the blue ball, which caused it to move.

This is known as the launching effect, and it’s a universal property of visual perception. Although there was nothing about causation in the two-second film—it was just a montage of animated images—people couldn’t help but tell a story about what had happened. They translated their perceptions into causal beliefs.

Would a detailed knowledge of the structure of matter have been helpful or inhibiting to a Newton or an Einstein? I think rather the latter.

Objective observation, understanding, and insight are elusive faculties, possibly not subject to cultivation. You can prepare the soil and plant the seed, but whether there will be a crop is another question entirely.

More than a century ago Edison demonstrated the value of a systematic approach to invention. But that’s engineering, not science. Would he and the research laboratories he built have been more or less effective if they’d received enormous subsidies?

9 comments
  • Ben Wolf

    “This is known as the launching effect, and it’s a universal property of visual perception. Although there was nothing about causation in the two-second film—it was just a montage of animated images—people couldn’t help but tell a story about what had happened. They translated their perceptions into causal beliefs.”

    They reported what they saw. What were they supposed to say? Stories were told because they were SHOWN a story. Some refer to it as observation.

  • “They reported what they saw. What were they supposed to say? Stories were told because they were SHOWN a story. Some refer to it as observation.”

    I agree here. People are shown a sequence of images that strongly imply that the red ball rolls and hits the blue ball, setting the blue ball in motion. So they repeat that story verbally. How is this supposed to be enlightening? That something else caused the blue ball to move? Okay, sure, it is a possibility, but people often go with the most plausible explanation.

  • Heh, you actually missed, I think, the bigger message of the article: the use of frequentist correlations in science.

    But here’s the bad news: The reliance on correlations has entered an age of diminishing returns. At least two major factors contribute to this trend. First, all of the easy causes have been found, which means that scientists are now forced to search for ever-subtler correlations, mining that mountain of facts for the tiniest of associations. Is that a new cause? Or just a statistical mistake? The line is getting finer; science is getting harder. Second—and this is the biggy—searching for correlations is a terrible way of dealing with the primary subject of much modern research: those complex networks at the center of life. While correlations help us track the relationship between independent measurements, such as the link between smoking and cancer, they are much less effective at making sense of systems in which the variables cannot be isolated. Such situations require that we understand every interaction before we can reliably understand any of them. Given the byzantine nature of biology, this can often be a daunting hurdle, requiring that researchers map not only the complete cholesterol pathway but also the ways in which it is plugged into other pathways. (The neglect of these secondary and even tertiary interactions begins to explain the failure of torcetrapib, which had unintended effects on blood pressure. It also helps explain the success of Lipitor, which seems to have a secondary effect of reducing inflammation.) Unfortunately, we often shrug off this dizzying intricacy, searching instead for the simplest of correlations. It’s the cognitive equivalent of bringing a knife to a gunfight.

    I’m not all that surprised, really. Correlations often put the researcher into the position of X or not-X. Reading further down to the back-pain issue, you see that once they had MRIs they found a correlation: they looked at X and not-X, damaged vertebrae. Problem is, they didn’t consider a wider set of hypotheses:

    X(1), X(2), X(3), X(4),… X(n).

    The frequentist machinery can’t handle this approach. How do you judge between

    X(1) and not-X(1) and
    X(2) and not-X(2)?

    A better approach is to evaluate X(1) relative to X(2)…but then that takes most researchers right outside their comfort zone of relying on frequentist methods.
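
    To make that concrete, here is a minimal sketch of what evaluating X(1) relative to X(2) (and the rest) could look like: score every candidate hypothesis against the same data and compare posterior probabilities directly. The binomial setup, the data, and the candidate rates are all invented for illustration, not taken from the article:

        import numpy as np

        # Toy data: 18 "successes" out of 50 trials (numbers invented for
        # illustration).
        successes, trials = 18, 50

        # Candidate hypotheses X(1)..X(4): each one proposes a different
        # underlying rate.
        rates = np.array([0.2, 0.3, 0.4, 0.5])

        # Binomial log-likelihood of the data under each hypothesis (the
        # combinatorial constant cancels when we normalize below).
        log_lik = (successes * np.log(rates)
                   + (trials - successes) * np.log(1.0 - rates))

        # With equal prior weight on every hypothesis, Bayes' rule reduces
        # to normalizing the likelihoods.
        posterior = np.exp(log_lik - log_lik.max())
        posterior /= posterior.sum()

        for r, p in zip(rates, posterior):
            print(f"rate {r:.1f}: posterior probability {p:.3f}")

    The point is only that every candidate gets scored on a common scale, so X(1) can be weighed against X(2) directly instead of each being accepted or rejected in isolation.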

    Also, the author kind of downplayed it, but the issue of statistical significance in research, even medical research, can produce misleading results. If you have a hypothesis, test it, and the results come back with nothing statistically significant, the results will almost surely not be published (if it is an initial paper). Another researcher, unaware of this work, might try a slightly different statistical model, find a statistically significant result, and it may very well get published, and people might glom onto it and think, “There, there is the answer!”

    But let’s go back a bit. Suppose the first researcher is sure there is something there, so he tweaks his initial statistical model and there it is! The ever-elusive statistical significance. He sends it out and the paper is published… usually without any discussion of the first “failed” model. Now, is the researcher doing good, honest research, or does he have an answer and is looking for a question that fits his answer? The thing is, the two-researchers scenario and the one-researcher-with-tweaks scenario are, from a results perspective, very similar. There is no intellectual dishonesty, but the results could be misleading.
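
    A quick simulation of that story, under assumptions I’ve invented (not anything from the article): the data are pure noise, and the “researcher” tries five model specifications, keeping whichever gives the best p-value. Even with no real effect anywhere, nominal 5% significance shows up far more often:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_studies, n_obs, n_tweaks = 5_000, 40, 5

        false_positives = 0
        for _ in range(n_studies):
            # Pure noise: the null hypothesis is true by construction.
            y = rng.normal(size=n_obs)
            x = rng.normal(size=(n_obs, n_tweaks))
            # "Tweaking": test several specifications, keep the best p-value.
            p_vals = [stats.pearsonr(x[:, j], y)[1] for j in range(n_tweaks)]
            if min(p_vals) < 0.05:
                false_positives += 1

        # With five independent tries per study, expect roughly
        # 1 - 0.95**5, about 23%, rather than the nominal 5%.
        print(f"apparent 'discoveries': {false_positives / n_studies:.1%}")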

    Part of the problem is, again, relying on frequentist methods, which, by the way, are easy to turn into canned processes just about anybody can use… or abuse. The above story of the researchers allows no way to evaluate the two models side by side. Part of the problem is how journals work: a statistically significant result is interesting, an insignificant one is not, even though, if they are on the same question, the two results together suggest the underlying causal story may not be quite what we think it is. Another part of the problem is that frequentist methods hide the researcher’s beliefs behind a layer of apparent objectivity.
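
    For what it’s worth, one standard way, though certainly not the only one, to put two models literally side by side on the same data is an information criterion such as AIC. A toy sketch with invented data, comparing an intercept-only model against a linear one:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(size=n)   # invented data, truly linear in x

        def gaussian_aic(residuals, k):
            """AIC for a least-squares fit with k estimated parameters."""
            m = len(residuals)
            sigma2 = np.mean(residuals ** 2)   # ML estimate of the variance
            log_lik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)
            return 2 * k - 2 * log_lik

        # Model 1: intercept only (parameters: mean, variance).
        res1 = y - y.mean()
        # Model 2: linear in x (parameters: slope, intercept, variance).
        res2 = y - np.polyval(np.polyfit(x, y, 1), x)

        print("intercept-only AIC:", round(gaussian_aic(res1, k=2), 1))
        print("linear AIC:        ", round(gaussian_aic(res2, k=3), 1))
        # Lower AIC wins; both models are scored on the same data.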

    TL;DR: the methodological tools that have taken us this far are starting to hit the end of their life expectancy. We need new ones going forward.

  • “Also, the author kind of downplayed it, but the issue of statistical significance in research, even medical research, can produce misleading results. If you have a hypothesis, test it, and the results come back with nothing statistically significant, the results will almost surely not be published (if it is an initial paper). Another researcher, unaware of this work, might try a slightly different statistical model, find a statistically significant result, and it may very well get published, and people might glom onto it and think, ‘There, there is the answer!’”

    In the list of my different businesses in another post I wrote this morning, one that I didn’t mention was that I used to tutor non-mathematics grad students in statistics. This is a subject I find almost too painful to mention. The short version is that the standard for statistical significance is whatever the program they’re using says it is. They don’t understand it at all.

  • Icepick

    “Some refer to it as observation.”

    LOL, brilliant counter-stroke, Mr. Wolf!

  • Icepick

    All this makes me happy I studied mathematics. Put the teapot on the floor, bitchez!

  • Icepick

    [ substitute “tea kettle” for “teapot” if you have to look it up ]

  • steve

    “Also, the author kind of downplayed it, but the issue of statistical significance in research, even medical research, can produce misleading results.”

    Never believe just one paper. Good science is reproducible.

    Steve

  • Andy

    “They reported what they saw. What were they supposed to say? Stories were told because they were SHOWN a story. Some refer to it as observation.”

    Actually, that’s perception, not observation, since the people not only observed but made judgments, even unconscious ones, about what they were seeing. Humans use pattern recognition to provide mental shortcuts in order to make sense of the world, and that effect is what that experiment demonstrated. Here’s another cool illustration demonstrating a mental patterning shortcut.

    The point being, perceiving is believing, and in everyday perception we are often cognitively unable to distinguish between facts, the meaning of facts, and analysis based on facts. Patterns of thought, once set, are difficult to break. People who are able to “think outside the box” are those who can short-circuit this patterning behavior and consider alternatives, or they are people who developed completely different patterns growing up.
