1. Define a problem
2. Research what is known
3. Hypothesize a cause
4. Test your hypothesis
5. Analyze your results
6. Support or refute your hypothesis
In the next few class periods, a few examples are identified and the class works through them. And there is always the instruction to "design an experiment to prove your hypothesis."
The question of the day: is this the only method to conduct science?
I don’t have room to go into all the things that are wrong
with the process as it is enumerated above, but let’s hit a couple.
One big problem is the idea of supporting or refuting your hypothesis (hypo = under, and thesis =
proposition; a supporting basis for an argument). True, your experiment may do
one or the other, but it doesn’t have to. Many experiments end up with
ambiguous results, especially if the scientist is doing his or her job.
Let’s say you have designed an experiment that will have
a definitive result. What result are you going for? Most will say that you are
trying to see if your hypothesis is supported. But anyone can design an experiment to give a desired result. Your hypothesis says "A" should occur if you do "B," so you design an experiment to make sure "A" occurs. It doesn't even have to be a conscious effort; your design will just tend toward that result.
A truly scientific experiment is designed to REFUTE your
hypothesis. You design an experiment to prove your hypothesis is not
true. If, under those, conditions, your hypothesis is still supported, then you
really have something. If a scientist tries his/her best to refute their own
hypothesis and can’t, the chances that the hypothesis is correct go way up.
Then you report your results and let other scientists try to design an
experiment to show your hypothesis is wrong. If they can’t do it either, then
you are really onto something. Real scientists try to prove themselves wrong,
not right. The big idea: you can never PROVE a hypothesis; you only ever have data that supports it, and the next experiment may refute it. Since you can always design another experiment to test the hypothesis, no hypothesis is ever proven absolutely. True science proceeds when you refute a hypothesis; only then can you make a concrete change that moves you closer to the truth.
Negative results and refuted hypotheses are the basis of science; too bad they get a bad rap. Can you think of another profession where being wrong is your goal?
Editors don't get excited over studies saying "we tried this and it didn't work." But an experiment that doesn't work isn't necessarily bad – you can learn a lot from it, and so can other scientists. Unfortunately, journals don't like to publish these data, so those who might learn from them never get the chance.
This is one reason it is important for scientists to have meetings and talk to one another in person, not just write journal manuscripts and funding applications. Case in point: studies showing that new drugs aren't cure-alls – that they don't do what they claim, or don't do it as well as claimed – often don't get published. Ben Goldacre gave a recent TED talk on the subject, with numbers to back up his assertion that negative studies of drug efficacy hardly ever see the light of day.
In fact, negative data is the most common data and often the most
useful. Refuting your hypothesis is a type of negative data. When faced with this result, you modify or discard your hypothesis and try again. You can design
a thousand experiments that support your hypothesis and still not prove that
your idea is the true mechanism - you may just not have thought of the
experiment to disprove it yet. As we discussed above, just one experiment that DISPROVES your
hypothesis results in a step forward. As Thomas Edison said, "If
I find 1000 ways something won't work, I haven't failed. I am not discouraged,
because every wrong attempt discarded is another step forward."
Negative data truly moves us forward; in fact, failing is the only way we move forward. But this still leaves a problem with the way science is taught. Is there good data that is neither positive nor negative? We tend to think of data only as information that supports or refutes a hypothesis, but do you have to have a hypothesis?
Here is my data:

Year | # of walnuts
2003 | 3662
2004 | 604
2005 | 3508
2006 | 368
2007 | 4917
2008 | 0
2009 | 6265
2010 | 0
2011 | 6395
2012 | 6
2013 | 2140
2014 | 159
2015 | 1825
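The striking feature of these counts – big crops in odd years, tiny or zero crops in even years – jumps out if you simply group them by year parity. This is a minimal sketch using only the numbers from the table above; the odd/even framing is my own way of summarizing the pattern, not a claim about its mechanism.

```python
# Walnut counts from the table above, keyed by year.
counts = {
    2003: 3662, 2004: 604, 2005: 3508, 2006: 368,
    2007: 4917, 2008: 0, 2009: 6265, 2010: 0,
    2011: 6395, 2012: 6, 2013: 2140, 2014: 159, 2015: 1825,
}

# Group the counts by year parity to expose the alternating pattern.
odd = [n for year, n in counts.items() if year % 2 == 1]
even = [n for year, n in counts.items() if year % 2 == 0]

print(f"odd-year mean:  {sum(odd) / len(odd):.1f}")   # -> 4101.7
print(f"even-year mean: {sum(even) / len(even):.1f}")  # -> 189.5
```

The twenty-fold gap between the two means is exactly the kind of pattern that invites the questions that follow.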
Now we can ask a question and hypothesize a mechanism: what is responsible for the pattern in nut production, and why does the pattern change in later years? Does the fact that I counted first and asked questions later mean that my original observations aren't science? True – you could say that I was answering a question about how many nuts the tree produces, but I did not have a hypothesis that I was trying to refute. This is true science, just not the kind we teach in school. Is the change in the pattern during the middle years accounted for by weather? Do the fewer nuts in later years mean that the tree is dying?
But the real learning is in defining the limits and possible
confounding effects that could lead to errors. Did the tree lose a limb and
therefore produce fewer nuts the next year? Was there an explosion in the
squirrel population, and they stole all the nuts before you could count them?
What was the weather over the period you observed? Could a change in weather account for a change in numbers? Is there another walnut tree too close, so that the nuts are getting mixed up? Am I just getting better at finding and counting the nuts each year?
Each might be considered a hypothesis. The weather affects nut production? Then try to show that years with different weather had the same nut production – hypothesis refuted. The squirrel population exploded? Talk to the local nature experts; if the number has been fairly constant – hypothesis refuted. There is an almost infinite number of possible confounding effects, and your class can come up with and test as many as their brains can think up. Now that is a true scientific method!
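One classroom-friendly way to ask "is the odd/even alternation real, or could it be chance?" is an exact permutation test: compare the observed odd-minus-even difference against every possible way of splitting the thirteen counts into groups of seven and six. This analysis is my own illustration, not something from the post; it assumes only the counts in the table above.

```python
from itertools import combinations

# Walnut counts from the table above (illustrative analysis).
counts = {
    2003: 3662, 2004: 604, 2005: 3508, 2006: 368,
    2007: 4917, 2008: 0, 2009: 6265, 2010: 0,
    2011: 6395, 2012: 6, 2013: 2140, 2014: 159, 2015: 1825,
}

values = list(counts.values())
odd = [n for year, n in counts.items() if year % 2 == 1]
even = [n for year, n in counts.items() if year % 2 == 0]
observed = sum(odd) / len(odd) - sum(even) / len(even)

# Exact permutation test: over all C(13, 7) = 1716 ways to pick a
# "high" group of seven counts, how often is the mean difference at
# least as large as the one actually observed for odd vs. even years?
hits = 0
splits = 0
for idx in combinations(range(len(values)), 7):
    group = [values[i] for i in idx]
    rest = [values[i] for i in range(len(values)) if i not in idx]
    if sum(group) / 7 - sum(rest) / 6 >= observed:
        hits += 1
    splits += 1

print(f"p = {hits}/{splits} = {hits / splits:.4f}")  # -> p = 1/1716 = 0.0006
```

Only the actual odd/even split reaches the observed difference, so the pattern is very unlikely to be a fluke of random year-to-year noise – which still says nothing about *which* confounder or mechanism causes it.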
Ben Goldacre (2012). "What doctors don't know about the drugs they prescribe." TEDMED 2012.
Well done.
I'm coming from a different view (physics - which I don't do anymore, so YMMV) and I wouldn't have any problem with making observations without a hypothesis.
Or rather, the hypothesis will be self-contained - am I really observing what I observe? Take a sample different from the learning set and find out.
The parameter variation helps you be more generally certain, but now you are starting to model your observations beyond basic statistics, and you will have the complexities of statistics.
The difference between the physicist's KISS and the biologist's "going nuts"? Physicists and astronomers instead like to bump up the sigma needed to claim an observation, to eliminate look-elsewhere effects (i.e., adjust p values for the effects of data sifting), precisely because you don't have a hypothesis that connects and constrains. But I suspect you can't easily increase certainty, hence the different ways to do the art.
"Can you think of another profession where being wrong is your goal?"
Ideally, medicine?
This is a great post