J. Jose Bonner's Outreach Pages
DEPARTMENT OF BIOLOGY
- Essential Features of Classroom Inquiry from the National Science Education Standards
- Just What is an Hypothesis? The Common Misconception of Equating Hypothesis and Prediction
- The Two (or More) Traditions: Different Fields Write and Think about Science Differently
- The Parts of a Scientific Article as Represented by Three Different Journals
- The Role of Each Section in Ecology and in the Journal of Biological Chemistry
- The very different organization of the journal, Nature
- A summary page of these differences in html and as a downloadable pdf file
Perhaps the biggest stumbling block for students, and certainly for the public, is understanding what science really is. Broadly speaking, educators divide "science" into two realms:
- scientific knowledge
- scientific process
These two realms tend to be taught separately--a practice that we at ISTEME hope can be changed. It is our view that scientific knowledge is so fundamentally interwoven with the process of developing it that the two cannot be separated. Teaching them separately creates a dichotomy in students' minds that cannot easily be corrected.
What do scientists do?
Scientists seek to understand various aspects of the natural world. The process by which they do so is to ask questions, make observations, manipulate conditions and observe the effects of the manipulations, and then develop explanations from the data that come out of their studies. Often, the first explanations are incomplete, and must be revised as new observations are made. This is the fundamental Nature of Science--because we do not already know the answers to questions about the natural world, we can never say that an explanation is Absolute Truth. At best, a scientific explanation may earn the name of a Theory, which is an explanation that has been tested often enough to consider it to be very likely true. Even the most certain of scientific explanations, however, remains tentative; there is always the possibility that new information may come to light that forces us to re-evaluate our understanding.
What, then, is a scientific "fact"?
Things that exist, that can be described and measured, can be called "facts." The data from an experiment, or a set of observations, can be called "facts." But the understanding that we build, based on those observations, is always inference. The understanding is always tentative. And yet...it is our scientific understanding that makes up the realm described above as "scientific knowledge." This raises some interesting and important questions:
1. Should we teach "scientific knowledge" by itself, or should we teach scientific knowledge in the context of the observations that lead us to think the knowledge might be true?
2. After our students graduate, and new information becomes available that changes the interpretations of things we taught them, how do we want them to respond? Perhaps, rather than saying, "my teacher must not have known science, because what he taught was wrong," it might be better if our students said, "this is just what my teacher said would happen; I'm glad she prepared me to think about interpreting data rather than just memorizing the conclusions."
The essence of "inquiry teaching" is to provide students with learning experiences in which they can develop their scientific knowledge in the context of authentic scientific processes. Unfortunately, there is disagreement over what those processes are, or, more accurately, which of the processes should receive the greater emphasis in the classroom. In general, discussion with teachers suggests that the most common conceptions are that (1) the most important part of scientific inquiry (for students) is the formulation of questions and the design of experiments, and that (2) the ideal should be to move from "guided inquiry" in which the teacher provides some of the context, to "open inquiry" in which students develop the entire process from start to finish, without teacher guidance.
These conceptions lead to a very clear, and very strong conflict:
* How can we use inquiry methods for teaching, when we have a mandate to cover a certain amount of information? Open-ended inquiry takes too long, and besides, students will never be able to re-discover on their own what it has taken thousands of scientists hundreds of years to accomplish. How can we reconcile content coverage with inquiry?
But is this an appropriate view of inquiry? Perhaps it would help to consider what a scientist might actually do during a scientific investigation. She might:
1. Make some observations.
2. Consider possible interpretations of those observations. This usually raises interesting questions.
3. Develop a strategy for obtaining more information that can help address one of these interesting questions.
4. Carry out the investigation, making new observations (i.e. collecting new data).
5. Think about the new observations, and ask how best to interpret the data. Are there clear and unambiguous inferences that lead to real understanding?
6. Decide whether the new interpretation answers the original question(s), or whether there are more interesting things left to find out. If there are more interesting things to find out, repeat steps 3-6 until there is nothing left to find out.
This is intentionally phrased in somewhat vague, conversational English, in order to avoid specific terminology that has different meanings in different fields. The important point is this: there is more to inquiry than designing the experiment. There is more than doing the experiment, and more than collecting the data. There is the fundamental process of reasoning from the data. The entire purpose of doing an investigation is to obtain data that tell us something. We focus on experimental design in an effort to obtain unambiguous data. To learn something, we must wrestle with the data and develop the best interpretation of them. After all, it is the inferences from the data that make up Scientific Knowledge.
This is reflected in Inquiry and the National Science Education Standards, from the National Academy of Sciences, which describes five essential features of classroom inquiry: learners (1) engage with scientifically oriented questions; (2) give priority to evidence in responding to questions; (3) formulate explanations from the evidence; (4) evaluate their explanations in light of alternative explanations; and (5) communicate and justify their proposed explanations.
Fully three of these statements refer to reasoning from data, emphasizing that:
- scientific explanations rest upon evidence, rather than appeal to authority
- students must be actively involved in developing explanations, rather than simply being told.
- there are usually alternative explanations, which must be evaluated on the basis of how well each explanation fits all of the available data--and sometimes, we don't have enough information to distinguish among several alternatives.
The first of these statements reminds us that students learn more when they care about the information; they must be engaged. Inquiry helps motivate them, and helps engage them in the material. The last statement reminds us that science is not just about learning Scientific Knowledge. It's about reasoning from data and then presenting persuasive logic as you communicate your reasoning to others.
With respect to teaching, note that this does not say we must use open-ended labs to call teaching "inquiry." Nor does it say that students must design experiments. Rather, it suggests that authentic scientific reasoning involves developing knowledge directly from the data, and communicating the reasoning effectively.
In some fields, the best way to develop new understanding is to follow The Scientific Method. In others, the "rules" of this Method do not mesh well with the types of questions with which the field is concerned. In brief:
In a field such as ecology, where there are a great many variables, and where most of the variables are beyond our control, it is challenging to design experiments that lead to a single interpretation. Therefore, the tradition is to use the Scientific Method as a means of evaluating one's own understanding. Based on an initial set of observations, one develops a possible explanation, then states that explanation as a formal Hypothesis. One then states that if this explanation is correct, then certain results should follow specific manipulations. The strength of the hypothesis is then determined by comparison of the observed results and the results that were predicted on the basis of the hypothesis.
This is a terrific method--which, as noted above, functions to evaluate the scientist's understanding.
But science doesn't work this way in all fields. Sometimes it is more straightforward to design an experiment to ask a question--such as "what is the genetic code? What does UUU code for?" In such a case, it is more appropriate to acquire data from an experiment, and then use the data to build an explanation.
In some ways, these differences are profound. They color the way scientists in different fields think about inquiry, and they color the way they communicate their findings to one another. Scientists in one field may not be able to communicate easily, if at all, with scientists in another field.
At another level, however, the differences are minor. No matter what field, scientists use data--observations--to build understanding, to develop explanations. Based on those explanations, scientists develop methods by which to extend their understanding. The only real difference is whether one describes that understanding at the beginning of "one round" of investigation (i.e. stating the Hypothesis first), or at the end of "one round" of investigation (i.e. building an explanation of the data).
It can be argued that we do our students a disservice to present the Scientific Method as the one correct way to do science. This is especially true for young students. The Scientific Method is designed to evaluate one's own understanding, which for young students is still minimal. In addition, the nature of this approach makes it capable of proving that hypotheses are wrong, but incapable of proving that anything is right. Using this approach can easily have the unintended consequence of teaching students that "science" is just a way to prove that they are always wrong. That's not much fun.
In contrast, it can be argued that presenting science as a method of asking questions is more likely to succeed. How does XXX work? Why is YYY like it is? With a question, a student's results will, at worst, not provide an answer. At best, the results will lead students to think about what the results mean, and lead to authentic scientific thinking.
The Scientific Method calls for stating an hypothesis, stating the predictions, and then doing the investigation to find out if the predictions are met. It is very important to understand what an "hypothesis" actually is. It is not the predictions themselves. It is the scientist's current working model for how her topic of investigation works. It is derived from previous observations, and is a statement that offers an explanation of those observations.
Is an hypothesis an "educated guess"? Perhaps; some scientists who use this term consider it to be a valid description. Unfortunately, the term suggests that scientists guess at the answers to questions, rather than evaluate observations and propose the most plausible explanation for them. We suggest that it is better to call an hypothesis a working model that explains a set of observations. To the extent that a scientific investigation tries to learn new things about questions that have not yet been solved -- so the answers are not yet known -- any working model will involve a certain amount of guesswork. But that guesswork is strongly constrained by the observations (data) that have been accumulated since the investigation began. It is more like logical analysis than like guessing.
The Hypothesis and the Predictions
The fundamental method of testing hypotheses is to state the hypothesis clearly, and then articulate a number of things that should be true if the hypothesis is correct. Then, see if these predictions are borne out. The hypothesis is my understanding. The predictions are things that should happen if my understanding is right. However, it is common to use "hypothesis" to mean "prediction." We see this very often in science fairs, when students write "hypothesis" and then list guesses about what they think the results of their experiment will be. Let us strive to keep these terms straight.
In the classroom, it is a good idea to have students write predictions as if/then statements. "If I do this, then that should happen." We should add one more preliminary clause to this kind of statement: "If the hypothesis is correct, then doing this should cause that to happen."
This has significant implications for the way we teach science in the early years. It is not helpful, and not scientific, to ask students to guess what will happen if we do something (like drop a can of regular Coke and a can of Diet Coke into a bucket of water). Unless students already have some observations, they cannot formulate an hypothesis to explain those observations. Without an hypothesis, they cannot make scientifically valid predictions. They have no choice but to guess.
In 2002, Rebecca Reiff and coworkers interviewed research scientists to find out what constitutes "inquiry" in research. What activities do scientists actually engage in during the course of an investigation? The purpose of the study was not necessarily to assess the frequency with which scientists use the Scientific Method, but one clear result is that real inquiries are messy.
A modification of the findings of Reiff et al. is shown here:
from: Harwood, W. S. (2004b) A New Model for Inquiry. Journal of College Science Teaching 33: 29-33
after Reiff, R., W.S. Harwood, and T. Phillipson. 2002. A scientific method based upon research scientists' conceptions of scientific inquiry. Paper presented at the AETS, Charlotte, NC.
The diagram is intended to convey the finding that scientists often go back and rethink portions of a study, or carry out multiple experiments within a larger study, or communicate their findings at any time as a study proceeds. It is particularly instructive to ask students to identify each of these activities (or those of the "official" Scientific Method) while reading about an investigation (or while watching a movie documenting an investigation). A common, and to some students surprising, finding is that "they don't do it in order." At best, an investigation is likely to have a number of episodes involving the lower left quadrant of this diagram--articulating the expectation, carrying out an experiment, examining and interpreting the data.
It is particularly interesting and helpful to think of these activities when working through a "non-scientific" problem-solving scenario, such as fixing a car or a lamp that doesn't work. All of these activities are represented. Basic problem-solving uses authentic scientific reasoning, and we use it naturally and intuitively.
In short, to communicate to students how science is done, it is best not to describe only one or two scientific styles. It is best not to imply that science is done in any rigid and codified way, but instead emphasize that each type of investigation follows a logical flow of thought that is accessible to all of us. In most instances, our students, just like ourselves, naturally engage in authentic scientific thinking when we solve everyday problems. We just don't call it "science" because it's too straightforward.
As mentioned above, the nature of the scientific questions in a field influences the methodology and the logic of experimental design in that field. It also influences the traditions of communication--and how to write scientific papers.
When educators ask students to write Lab Reports, they typically recommend the basic organization of the scientific literature. This makes sense, since this basic organization is the accepted format for formal communication. However, the details of that organization differ widely among fields and among journals.
The Traditional Journal Format seems simple enough:
1. Abstract
2. Introduction
3. Materials and Methods
4. Results
5. Discussion
6. References
But, let's look at some variations:
| Journal | Ecology | Journal of Biological Chemistry | Nature |
|---|---|---|---|
| Order of Sections | Abstract | Abstract | Abstract replaced with introductory paragraph |
| | Introduction | Introduction | Introduction |
| | Materials and Methods | Results | Results |
| | Results | Discussion | Discussion |
| | Discussion | Materials and Methods, in small font; similar to a "technical appendix" | Materials and Methods condensed into the figure legends |
| | References | References | References |
Why is there this difference? Does it matter?
As mentioned above, in Ecology, the Scientific Method really matters. Issues are complex. The most sophisticated reasoning involves careful description of the working hypothesis and careful design of critical tests of that hypothesis (i.e. determining whether predictions are borne out). Therefore:
Abstract: brief summary
Introduction: detailed description of the hypothesis to be tested by the work, and a careful listing of the predictions made by the hypothesis
Materials and Methods: all of the information pertaining to how the experiments were set up
Results: straightforward guide to the findings, which may be as simple as "the result of Experiment 1 is shown in Figure 1; the result of Experiment 2 is shown in Figure 2." There is no description of experimental design or setup, because that is in the Methods section. There is no discussion of the interpretation of results, because that is in the Discussion.
Discussion: interpretation of the results, and comparison of the findings to the predictions stated in the Introduction, with a conclusion of whether the hypothesis is supported or ruled out.
References: listing of other papers cited in the report.
This "standard" journal format works very well for this type of presentation. Note, however, that the format places a restriction on the nature of the work: a single paper can describe only experiments for which all of the relevant information is available beforehand--for which one can present a testable hypothesis at the outset.
In biochemistry, the traditions are different. The official Scientific Method isn't used. More variables can be controlled (often all of the variables can be). Therefore:
Abstract: brief summary
Introduction: brief literature review, bringing readers up to date on the problem under investigation, followed by a statement of precisely what the researchers investigated.
Results: a brief introduction to the overall design and critical variables examined in the first experiment, followed by a quick guide to what the results are. This is followed by a brief consideration of alternative interpretations of the results, and identification of factors that may preclude an unambiguous conclusion. This leads into the setup of the next experiment, designed to distinguish among these alternatives. Essentially, the paper continues in this manner until the relevant issues have been addressed, and the complete set of data lead to only one reasonable conclusion--and a statement of what that conclusion is.
Discussion: a return to the literature review, in order to put the new findings into the larger context of the field.
Materials and Methods: details of how each experiment was done, specifying concentrations of each chemical in the solutions used, suppliers of reagents, times and temperatures of reactions, etc. Some of this may be repeated cursorily in the Results section for variables that are central to the design and interpretation of particular experiments. However, the end-of-chapter location and the smaller font size reflect the fact that no one needs to read this section to follow the paper. This section is merely the details of performance, to ensure that others can reproduce the experiments.
References: listing of other papers cited in the report.
The "standard" journal format works well for this type of presentation. In many journals in what we might call the "biochemical tradition," the order of the sections is the same as in the "organismal biology tradition." However, the Materials and Methods section always has the same role as described here for JBC. In the journal Cell, the original format was the traditional one; after some years, the Methods section was moved to the end of the paper; after some additional years, the font size was reduced to save space.
In this type of presentation, much of what "should be" in the Methods section has moved into the Results section--logic behind experimental design and some of the methodological details. Some of what "should be" in the Discussion is also in Results, such as interpretation of data. A simple reason is that in this field, one experiment is not enough to make significant findings. Additional experiments are needed to serve as controls, to test alternatives, etc. Because it is necessary to perform--and describe in the paper--these additional experiments, and because later experiments depend on the results of earlier experiments, one cannot describe the later experiments in the Introduction. One cannot outline hypotheses to test without the data upon which the hypotheses are built. Therefore, the official Scientific Method is not used in this field or in this format of scientific writing.
The journal, Nature, is in its own class. It is an all-purpose journal for science of all fields. By putting the Methods into the figure legends, readers can tell what the details are for each experiment, regardless of the authors' primary field or scientific "literary tradition."
For educators asking students to write lab reports in the format of a scientific journal, and for students writing such reports, we summarize the two approaches described above (Scientific Papers), and also provide a pdf file for download.
Dec. 22, 2014