Wednesday, April 22, 2009

Physics vs Chemistry: Fight!

I often contemplate the differences between these two areas of study. I also hear fellow undergrads argue for one or the other, usually divided along the lines of their respective majors. These days, I think they're so interrelated that it's hard to find a difference between the two, except for the phases of matter they most often deal with.

Back in the days when science was new, Physics dealt with understanding the fundamental laws of the universe, while Chemistry was attempting to understand the fundamental pieces the universe was composed of. Both fields also grew out of a long-standing philosophical tradition that can be traced back to the pre-Socratics and is exemplified by Aristotle. Buuuut... that's going further back than I think is necessary to understand what's different, these days, about these two sciences, if indeed they ever really were different.

Physics, as a science, really began with Newton. It could be traced back further to Galileo, Kepler, and Ptolemy, but Newton's the big man who laid out a comprehensive scientific theory. Chemistry, likewise, has a big man on campus -- Dalton. Dalton's atomic theory brought back the idea that the universe is composed, at its smallest level, of indivisible particles called "atoms", and was proposed about a century after Newton's theory. Likewise, work done by Boyle, Cavendish, and Lavoisier contributed to Chemistry, but Dalton's the guy who proposed the first scientific theory of the atom. (Though mad props must be given to Cavendish, who isolated hydrogen and, by combining it with oxygen, created the "element" water -- of four-elements fame -- from two different gases, thereby disproving the idea that four elements created everything and paving the way for the atomic theory.)

Newton's theory can be summed up as such:
1) An object at rest remains at rest, and an object in motion stays in motion, unless acted on by a net external force.

2) Acceleration is directly proportional to force and inversely proportional to mass (a = F/m, from the previous post). Newton actually stated his law in terms of momentum -- force is the rate of change of momentum, a concept accounting for both an object's mass and the motion it's currently undergoing.

3) If two objects interact, the force exerted by object A on object B is equal in magnitude and opposite in direction to the force exerted by object B on object A.
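Since the two statements of the second law above (a = F/m, and force as the rate of change of momentum) are equivalent, a quick numeric sketch (with invented values) shows them agreeing:

```python
# Newton's second law, stated two equivalent ways (illustrative values only).

def acceleration(force, mass):
    """a = F / m"""
    return force / mass

def force_from_momentum(delta_p, delta_t):
    """F = dp/dt: force as the rate of change of momentum."""
    return delta_p / delta_t

# A 10 N force on a 2 kg object: a = 5 m/s^2.
print(acceleration(10.0, 2.0))         # 5.0

# Over 1 s, that object's momentum changes by m * a * t = 10 kg*m/s,
# so the momentum form recovers the same 10 N force.
print(force_from_momentum(10.0, 1.0))  # 10.0
```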

And now, Dalton's Atomic Theory:

1) All matter is made of atoms. Atoms are indivisible and indestructible.

2) All atoms of a given element are identical in mass and properties

3) Compounds are formed by a combination of two or more different kinds of atoms.

4) A chemical reaction is a rearrangement of atoms

From these two rough outlines, while both are attempting to deal with fundamental pieces of the universe, it seems that initially Physics dealt with a macro-world: How whole objects interact, how cannonballs fly, how wheels spin. Conversely, Chemistry dealt with a micro-world: Fundamental pieces that make up all things, what is actually happening in the micro world, and understanding what effect that has in the macro world.

How things seem to change. Now it's the physicists attempting to delve deep into the universe, while the chemists sit content at the atomic level. Actually, this history makes sense when you think about the things that inspired these two progenitors of the physical sciences. Newton created calculus to better understand astronomy. Dalton collected weather data on a daily basis for 57 years. Newton watched objects moving far away that he had no hope of understanding without attempting to understand how all objects move. Dalton watched condensation, evaporation, and clouds -- a macro-world understood through a micro-world of millions of particles interacting with one another.

It's all the physical universe, but when researching, say, a cell, I haven't broken out the quantum equations to understand how it works. I've applied concepts traditionally assigned to the realm of chemistry (and, of course, biology). Those ideas are in turn heavily influenced by physics, which is itself heavily influenced by math. At present, we don't have a cohesive enough physics model to build toward understanding all of biological science, and if we did, it would match up with the findings of biology. It would just be another way to explain them, and the same would happen in building a physics theory of chemistry. Chemistry's knowledge, which may one day be obsolete, would serve as the bridge between physics and biology, if such a theory is possible.

So, the question still remains: what's the difference? Size? Well, in a sense, yes: I think size is it, in its own way. Not that physicists don't study classical physics anymore -- far from it. There are still people researching and applying classical physics for purposes other than engineering. The difference is in the number of particles each of us deals with. As chemists, we deal with "system" models most of the time. The "system" model is a method of understanding something, and it's something you define yourself -- generally, chemists define systems as "what's in the beaker". We talk about the energy of a system. We talk about how often a collision between atoms occurs -- not that we know exactly how often, but we do have a way of quantifying it. Physicists deal with points: the displacement of an object, the energy transferred from one object to another, the behavior of a single electron. Even with an object of oddly distributed mass, there is an acknowledgment that many particles are moving, but physicists use an imaginary representative particle (called the center of mass) in order to apply point-particle models to the object. This isn't always the case, but in my physics studies I have yet to come across systems of billions and billions (may Carl Sagan rest in peace) of particles explained by the characteristics of their fundamental particles.
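The center-of-mass idea is just a mass-weighted average of particle positions; a one-dimensional sketch (values invented) makes the "imaginary representative particle" concrete:

```python
def center_of_mass(masses, positions):
    """Mass-weighted average position of a collection of point particles."""
    total_mass = sum(masses)
    return sum(m * x for m, x in zip(masses, positions)) / total_mass

# Three particles on a line: 1 kg at x=0, 1 kg at x=2, 2 kg at x=5.
# Weighted average: (1*0 + 1*2 + 2*5) / 4 = 3.0
print(center_of_mass([1.0, 1.0, 2.0], [0.0, 2.0, 5.0]))  # 3.0
```

External forces act on this whole collection as if it were a single 4 kg particle sitting at x = 3.0, which is what lets point-particle models apply to extended objects.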

But then you have the physics of condensed matter, dealing with 10^23 particles, and physical chemistry, dealing with things the size of a single nucleus. So the interplay between the two is muddied even further. Which is better? Neither. What's the difference? No idea. It may be why Chemistry is given the definition "the study of change" -- it's hard to distinguish what's really different between the two, when both deal with the physical world at a basic level, sometimes modeled as single points, sometimes billions of points, sometimes a beaker of chemicals, sometimes a ball of mass. In essence, they're really the same: it's the approach that is different. The chemist's explanation can stop at the point where we relate a phenomenon to an element, or to a compound's composition of elements. The physicist's explanation stops at the point where they have a general rule that can be applied to anything in the universe. Beyond that -- well, I'm still figuring it out.

Friday, April 17, 2009

Dembski's Argument for Intelligent Design

This is a little off-topic from what I want to blog about, as it relates to biology, but I recently read Dembski's paper "Intelligent Design as a Theory of Information". It's an older paper (1998), but it attempts to justify Intelligent Design as a proper scientific theory of biology. Now, I am no biologist -- I have a general working knowledge of biology, but far from an in-depth one -- but I am a scientist (in training), and have a firmer, if not complete, grasp of science, the scientific method, and the philosophy behind science, and my critique of Dembski's paper relies on these concepts.

I don't expect everyone to read the entire paper, but the critique makes more sense if you're at least passingly familiar with it. As such, I present the abstract here:

For the scientific community intelligent design represents creationism's latest grasp at scientific legitimacy. Accordingly, intelligent design is viewed as yet another ill-conceived attempt by creationists to straightjacket science within a religious ideology. But in fact intelligent design can be formulated as a scientific theory having empirical consequences and devoid of religious commitments. Intelligent design can be unpacked as a theory of information. Within such a theory, information becomes a reliable indicator of design as well as a proper object for scientific investigation. In my paper I shall (1) show how information can be reliably detected and measured, and (2) formulate a conservation law that governs the origin and flow of information. My broad conclusion is that information is not reducible to natural causes, and that the origin of information is best sought in intelligent causes. Intelligent design thereby becomes a theory for detecting and measuring information, explaining its origin, and tracing its flow.
Dembski is essentially setting out to scientifically prove two points, all the while using those two points to "Science-ify" ID.

Next, Dembski defines information as "...the actualization of an event to the exclusion of other events". He compares this to the common-sense definition, namely, that information is "the transmission of signals across a communication channel". He references two philosophers whose work, as it relates to this paper, is in the philosophy of mind. And, yes: the mind, when presented with information, has to tune out the majority of the massive amount of information being presented to it by the senses in order to properly function and focus. At the end of this section, Dembski states:

"Information needs to [be] referenced not just to the actual world, but also cross-referenced with all possible worlds."

He builds to this subtly, all the while making, more or less, low-key insightful definitions of what information is, and of what we may need to consider when considering how information behaves. But this is the first statement that bespeaks the nature of Dembski's argument: it is philosophical, not scientific. Namely, the reference to all possible worlds, as conceived in Anselm's ontological argument for the existence of God, is in no way scientific, or even related to science. No matter what may have happened in our world, one core assumption in science is that the natural universe is deterministic: the entire natural world follows laws, and those laws are immutable. We may not know exactly what those laws are, but that doesn't change the laws' existence. In addition, just because we have a model of probability, that does not change the determinist assumption at the core of scientific inquiry. For example: the Heisenberg uncertainty principle states that we can never simultaneously know both the exact position and momentum of an electron, and the quantum model of the atom relies on the idea that an electron exists in more than one location at a time, using probability to describe how likely the electron is to be found at a given location at a given time. That does not change the idea of the universe being deterministic. These are models of the physical world -- statements of "is": a model stating, with certainty, how probable it is that the electron will be present at a certain location, and predicting how the atom will behave based upon that probability. It's still determinist -- it's just unfamiliar to how we usually think of determinism. Secondly, in the grand metaphysical sense there are other possible worlds, but there is no way of understanding those worlds, no matter how closely they relate to ours, in a scientific way. Science delves into the natural world, and the natural world only.
The natural world is the one we live in; what happens within the realm of our senses is all we can study. Even a possible world where, everything else being the same as ours, a quarter flipped a year ago lands heads up instead of tails up is a world that science does not and cannot understand, as we have no way to sense that world.

The next two sections of the paper delve into more definitions attempting to link information theory to the study of biology. First, Dembski derives a method of measuring information, as measurements are necessary to science. He uses the analogy of a deck of cards and poker hands. His example states two possibilities: a royal flush, and all other possible hands. He then goes through some probability mathematics and applies information-theory concepts to show that there is more information in knowing that we obtained a royal flush than in knowing we obtained one of the other possibilities. The argument follows intuitively: "any other hand" covers an enormous number of possible hands, while "royal flush" requires exact cards, so you actually have more information by knowing you have a royal flush than by knowing you have one of several possibilities. There is also a definition integral to his argument, namely, "complex information". Complex information is information similar to the royal flush -- it has a larger magnitude of information than "simple information", and that complexity indicates some sort of correlation between possible events. Dembski states at the end of the first section:

This notion of complexity is important to biology since not just the origin of information stands in question, but the origin of complex information.
He has yet to establish the connection between complex information and the study of biology. Earlier in the paper, he quotes the honorable biophysical chemist Manfred Eigen (who is a grade-A scientific badass):

In Steps Towards Life Manfred Eigen (1992, p. 12) identifies what he regards as the central problem facing origins-of-life research: "Our task is to find an algorithm, a natural law that leads to the origin of information." Eigen is only half right. To determine how life began, it is indeed necessary to understand the origin of information. Even so, neither algorithms nor natural laws are capable of producing information.
But that still doesn't establish the link between information theory and biology. Also note that Steps Towards Life is a popular-science book which, while probably insightful, can easily be taken out of context. In addition, it is the opinion of a man who, while blazingly brilliant, can still be wrong, and who has not established the link between information and biology, scientifically speaking. I don't mean to demean Manfred Eigen in any way -- but that's the process. Opinions are wonderful to debate in a philosophical sense, and can often inspire people in many ways, both scientific and otherwise, but opinion does not equate to science.
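For what it's worth, the measure behind Dembski's earlier card example is ordinary Shannon self-information, -log2(p): rarer events carry more bits. A quick sketch of the royal-flush numbers (the function name is mine):

```python
import math

def self_information_bits(p):
    """Shannon self-information, -log2(p): rarer events carry more bits."""
    return -math.log2(p)

total_hands = math.comb(52, 5)   # 2,598,960 possible five-card hands
royal_flushes = 4                # one per suit

p_royal = royal_flushes / total_hands
p_other = 1 - p_royal

print(round(self_information_bits(p_royal), 1))  # about 19.3 bits
print(self_information_bits(p_other))            # nearly 0 bits
```

So "royal flush" pins down the hand almost completely (~19.3 bits), while "some other hand" tells you almost nothing, which is the intuition in the paper's probability math.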

The next portion distinguishes between "specified complex information" and "unspecified complex information". Dembski uses the example of an archer shooting at a wall so large that he cannot miss, but gives two pertinent scenarios: one in which the archer paints the target before he shoots and hits a bullseye, and one in which the archer paints a target around the arrow after it hits the wall, making it look like a bullseye. He covers some other possibilities, but essentially, the scenario where the archer paints the target first and then hits the bullseye equates to "specified complex information", and it is the type of information that can lead us to scientifically conclude that the archer is a good archer.

Dembski then continues by generalizing the above scenario: basically, patterns established before they are tested, and then verified by tests, are the type of patterns one knows to be linked to causality. Patterns established after witnessing an event may be causally related, but they may also be fabrications, like the archer painting a target around his arrow. He then compares this generalization to the study of life, as life obviously can't formulate a hypothesis about itself before it exists. In this paragraph, he states:

But what about the origin of life? Is life specified? If so, to what patterns does life correspond, and how are these patterns given independently of life's origin?
This needs more clarification as to what exactly he's asking for. How is it conceivable to separate the patterns of life from their origin, and why is that necessary? Dembski seems to be critiquing all of scientific inquiry here because it is formulated a posteriori -- but that's what all scientific inquiry is based upon. It is only through experience that we gain ideas of how the world works, and then through experimenting with those ideas that we confirm that they are, indeed, good scientific ideas. Newton was inspired by the movement of planets. Dalton was inspired by the formation of storms. While a fair amount of theoretical reasoning has to go into science, theory is nothing without experimentation -- which Dembski acknowledges, but he rejects the thought of basing theory upon experience on the sole basis that the theory is then more likely to be favored. It's a great question to pose, for the philosopher of science, but it is exactly this subjectivity that the scientific method attempts to overcome. Bringing up a difficulty in performing scientific inquiry in order to critique a theory derived from that inquiry is, still, not scientific, but philosophical. Dembski is free to reject the confines of the scientific method, but if he does so, he cannot then claim to have a scientific theory, as he did not reach that theory through the process of science.

The next paragraph, which I will quote in full, is where Dembski argues for the link between information theory and biology, as well as science in general:

Information can be specified. Information can be complex. Information can be both complex and specified. Information that is both complex and specified I call "complex specified information," or CSI for short. CSI is what all the fuss over information has been about in recent years, not just in biology, but in science generally. It is CSI that for Manfred Eigen constitutes the great mystery of biology, and one he hopes eventually to unravel in terms of algorithms and natural laws. It is CSI that for cosmologists underlies the fine-tuning of the universe, and which the various anthropic principles attempt to understand (cf. Barrow and Tipler, 1986). It is CSI that David Bohm's quantum potentials are extracting when they scour the microworld for what Bohm calls "active information" (cf. Bohm, 1993, pp. 35-38). It is CSI that enables Maxwell's demon to outsmart a thermodynamic system tending towards thermal equilibrium (cf. Landauer, 1991, p. 26). It is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness (cf. Chalmers, 1996, ch. 8). It is CSI that within the Kolmogorov-Chaitin theory of algorithmic information takes the form of highly compressible, non-random strings of digits (cf. Kolmogorov, 1965; Chaitin, 1966).
So, essentially, Dembski is claiming that all science can be modeled by information theory. But he has no scientific basis for this -- only a philosophical argument, which, again, is not science. It's true that science uses information, mathematical models, and computer programs to better understand the world. But that still does not establish a direct scientific connection between information theory and all other areas of science. Furthermore, if a scientific connection were established between information theory and, say, just biology, then unless there were a reason to reject the theory of evolution and replace it with information theory, information theory's model of biology would conform to the model of evolution. By analogy, we don't currently have a sub-atomic-particle model of how an animal behaves, but unless evolution were somehow disproven, a sub-atomic model of population shifts would have to conform to the evolutionary model.
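As an aside, the Kolmogorov-Chaitin point in the quote -- that non-random strings are highly compressible -- is easy to see empirically. A rough sketch, using zlib compression as a crude stand-in for (uncomputable) Kolmogorov complexity:

```python
import os
import zlib

# A patterned string vs. random bytes of the same length (4000 bytes each).
patterned = b"ABAB" * 1000
random_bytes = os.urandom(4000)

# The repeating pattern compresses to almost nothing;
# the random bytes barely compress at all.
print(len(zlib.compress(patterned)))
print(len(zlib.compress(random_bytes)))
```

This illustrates the cited theory's notion of "information" in strings, but, as argued above, it does not by itself connect information theory to biology.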

What Dembski claims is that information theory is superior to all other sciences, thereby claiming that any law formulated in information theory will trump all other scientific laws. This, too, goes against a basic concept in the philosophy of science: theories are not proven, only disproven. Unless we have a reason to reject a scientific theory, we continue working with it. There is no superior science -- the natural world is deterministic, we study the natural world, so all conclusions, no matter what facet of that natural world we study, will, in the end, match each other. In science, one does not survey all the theories laid out before them and then start a new theory from scratch. The scientific world would forever be reformulating ideas and starting the work of Newton over again if that were the case. One builds upon the ideas that have so far shown themselves to be good scientific ideas. One is right to question the assumptions and ideas that have come before, but if there is no scientific evidence -- essentially, no reason -- to reject those ideas, then those ideas are assumed to be correct for the purposes of building a body of knowledge about the natural world.

The next section is titled "Intelligent Design". Here, Dembski states:

In this section I shall argue that intelligent causation, or equivalently design, accounts for the origin of complex specified information.
He continues by describing how a psychologist determines whether or not a rat has learned to navigate a maze. The maze must be complex, in order to eliminate the chance of the rat solving it by sheer luck, and the rat must then demonstrate that it has memorized the series of turns it takes to get to the other end. This is a method for determining whether the rat has learned, and thereby demonstrating that it has made an intelligent choice. There is also an analogy drawn between writing a sentence and spilling a bottle of ink on paper: in one case, someone directs the pen; in the other, the ink spills out randomly. For further clarification, Dembski also references a story about an American listening to someone speak Chinese: there is design, but it is incomprehensible to the American, simply because he lacks knowledge of the Chinese language. That does not stop it from being an intelligent choice. Then Dembski states:

The actualization of one among several competing possibilities, the exclusion of the rest, and the specification of the possibility that was actualized encapsulates how we recognize intelligent causes, or equivalently, how we detect design. Actualization-Exclusion-Specification, this triad constitutes a general criterion for detecting intelligence, be it animal, human, or extra-terrestrial. Actualization establishes that the possibility in question is the one that actually occurred. Exclusion establishes that there was genuine contingency (i.e., that there were other live possibilities, and that these were ruled out). Specification establishes that the actualized possibility conforms to a pattern given independently of its actualization.
Dembski then states that this pattern for recognizing intelligent causation exactly matches the criteria for recognizing CSI. Implicitly, because of Dembski's claim linking CSI to all of science, and because the confirmation of CSI follows exactly how psychologists confirm that an acting being intelligently makes a choice, it follows that what science studies -- CSI -- is generated from an intelligent cause.

The problem with this argument is that he has not properly established a link between CSI and all other science. Further, he has not established even a philosophical argument linking a psychologist's determination of whether a mouse has learned to the determination of CSI. Technically, if the confirmation of CSI were grounded in scientific inquiry, then it's plainly obvious it would follow the same pattern a psychologist uses to determine whether a mouse has learned a maze: they'd both be scientific. In addition, just because two processes can be formulated so that they seem the same does not mean they pertain to the same things -- in one case, a psychologist determines how a mouse learns, and attempts to generalize those findings to other mice and, ultimately, other animals. This has nothing to do with an intelligent causation explaining the existence of life, and everything to do with how animals learn. Dembski fails to establish a philosophical bridge, as well as a scientific one, from mouse to, essentially, God. So his conclusion does not follow from his premises -- a textbook non sequitur.

Further, the first statement demonstrates how this argument is not a scientific one: that intelligent causation accounts for CSI. Even if his argument had followed, it would not matter. One can rationalize a great many things with complete validity and still be wrong. An argument can be valid yet have no experimental support. Here Dembski presents an a priori rationalization for an intelligent causation of the origins of life. Even if it were valid, it would lack experimental support, which is essential to the process of science.

The final section outlines Dembski's postulated conservation law. In it, he critiques Eigen for attributing the origin of CSI to natural causes, because, in his opinion, it cannot be explained by natural causes. He then claims, because he has proposed a law of conservation, that information theory as applied to biology is a scientific theory. He continues by arguing that pure chance -- the type of randomness proposed by Epicurus, where the universe follows no law other than randomness -- cannot account for CSI. Neither, he argues, can a Darwinian approach account for the existence of CSI (and, hence, life), because Darwin's theory deals only with how life changes over time, not how it was initially generated. Finally, because of this, Dembski concludes that natural causes cannot account for the existence of CSI, and goes on to expound upon the implications this holds for scientific inquiry. Namely, in analyzing the origins of CSI, Dembski uses a systems-surroundings model and defines his system as the natural universe, which contains CSI that can be neither generated nor destroyed. Because CSI can be neither generated nor destroyed, it must have come from somewhere, which, in Dembski's view, is the surroundings: the intelligent causation.

This parallels both Paley's watchmaker argument and Aquinas' first-cause argument for the existence of God. I, personally, disagree with both arguments, but that is irrelevant. What is relevant is that Dembski continues to make a priori rationalizations for the existence of an intelligent cause, all the while claiming that his argument is a scientific one. This is patently false. There is no method, there is no experiment -- there are only suppositions. As beautiful as philosophy is to study, masking it as science because you disagree with the conclusions of science is not science. The fact that Dembski spends roughly half the paper talking about probability mathematics and another third referencing basic psychology and some concepts related to information theory does not change the fact that he is, essentially, making an ontological argument for the existence of an intelligent designer. Dembski demonstrates this when he claims that the origin of life cannot be explained through natural causes, as science deals only with natural causes.

Also, I want to briefly address a philosophical point: namely, the social implication that science and religion are somehow at odds. I claim that they are in no way related. Just as I critique Dembski for attempting to apply the scientific method to the existence of God, I similarly critique Dawkins. Not that this necessarily bolsters my argument -- I'm only claiming consistency. God is a metaphysical question. Science is an epistemic method for understanding the natural world, and only the natural world. God, by most definitions, is somehow outside the natural world. Therefore, science can say nothing about God or, as Dembski puts it, an intelligent designer. If God is, by definition, the natural world, then and only then can science interpret God, and those conclusions will be unaltered by this Spinoza-derived definition of God.

Because science has nothing to do with God, you can go on believing whatever you will with regards to God, no matter the conclusions of science. Just realize that making claims about the natural world for supernatural reasons, such as dating the world at 6,000 years old because of the biblical record, will not be taken seriously by anyone who accepts the scientific method. After all, just as the scientific method has nothing to say about God, God has nothing to say about the natural world, aside, possibly, from serving as an a priori rationalization for its existence.

Basically, I'm just stating that questions of science and questions of God mix like oil and water. It is your personal conviction that determines their relative densities.

Wednesday, April 15, 2009

Approximations of Truth

The question of how to know Truth is a fundamental question of philosophy. Truth with a capital T has been debated and sought after by pretty much every philosopher ever. In everyday life we do this too: the joke "I saw it on TV, it must be true!", or simply asking how somebody knows what they're talking about. In arguing politics, God, or various other uncouth dinner conversations, we'll reference a class, a life experience, a book we read. We'll talk about how we were raised, what's acceptable, and why it's acceptable. This is the everyday man's search for Truth, and it's similar to any philosopher's search for Truth -- it's just not published (though I don't want to denigrate the expertise of those who study philosophy; I just think we, on a day-to-day basis, lose contact with the fact that we're essentially answering the same questions, only in different ways, and possibly at different levels of understanding).

So, how do we know Truth? That is the spawn of a lengthy discussion and inner dialogue, one whose end I am still looking for. However, there is a misconception about science that I think is very important to understanding it -- that science is truth. Or, more importantly, the misconception that scientists think science is truth, while the rest of the world knows it's just science. So, without further ado, here's a quick rundown of what I think of the interplay between science and truth.

1) "Just" Science

To be clear, I am not speaking out against the scientific method in the least. It is, in my opinion, the most surefire epistemic method for understanding the natural world. There are a few assumptions made in scientific inquiry, but that's alright -- we need to make assumptions in order to come closer to truth. In the days of Euclid, mathematicians were trying to prove everything, and it was he who said, "Fuck proving that lines are straight -- they just are" (roughly). He said a number of other things related to geometry, but aside from his enormous contribution to mathematics, he also made an enormous contribution to logic: not everything can be proven. In fact, one has to accept certain propositions in order to move on and build. That's what science does -- it makes a few assumptions that really are not terribly controversial, and builds a knowledge of the natural world using them.

EDIT: I need to admit a mistake. Aristotle actually points out that one needs to start from some point in order to build a system, and he predates Euclid. I'm not sure whether any pre-Socratics pointed this out as well, but this at least pushes the date further back, and as Aristotle is the first person to give a strictly formal account of logic, it wouldn't be surprising if he were the first to point out this feature.

2) Science, and truth

So, we gain a knowledge of the natural world. But what exactly does this knowledge entail? How do we KNOW (in uber-skeptic parlance) that what we deduce in scientific inquiry is true? Well, strictly speaking, we don't. Science proves nothing. Technically, science only disproves things. The fact remains that, even if we were to discover everything in the universe, we would not know that we had discovered everything in the universe. There can always be something else -- it's how science grows. We notice something, and attempt to come up with a reasonable explanation and description of that something. We test that description, and if everything matches up, declare that our description is good. The problem is, this is not always the end of the story. Maybe it was only a pretty good description. Maybe there were implicit assumptions in our description we didn't realize. Maybe we hadn't encountered a certain element just yet, due to our inability to detect a more subtle feature of the world, or due to that element's scarcity. So, while scientific inquiry produces good explanations of the natural world, they are not capital-"T" Truths that we know, in an absolute sense, are true. They're just damn good approximations of Truth.

3) The limitation of truth in the natural world

So, what're we to do? Well, science is about all we've got when it comes to the natural world. Unless you're taking up the philosophical banner of absolute relativism, or have recently been convinced of Epicurus' physics, the natural world follows laws (again, roughly speaking). Our formulation of those laws may be off, but that doesn't change the fact that they exist. We, human beings, are not naturally in tune with these laws, cannot deduce them from pure thought, and need to test the universe to understand them.

4) So, how do we find Truth?

Good question! It's one I think about myself. Supposedly, you could reach truth if you have a valid argument and all of the propositions in it are true (what logicians call a sound argument). But there's no way of telling if your propositions are true. It's intuitive. It's subjective. And since Truth doesn't change (well, I guess that depends on who you ask) but we change all the time, we could know truth and not know that we knew it, and change again in pursuit of the elusive ends of our questioning. It's actually what Plato talks about in his "Symposium" -- it is the lot of the philosopher, the lover of wisdom (and therefore truth), to love and pursue wisdom, but he can never know it, only search.

So, in closing, while the scientific method shouldn't be applied to all areas of life -- that'd be mildly ridiculous -- I think settling for approximations of truth ain't too bad a deal.

Monday, April 13, 2009

Just What is Chemistry Anyway?

I realize maybe I got a little ahead of myself -- I should start simple, and actually, this is a question with an answer that bugs me.

My first day of freshman chemistry, chemistry was defined as "The study of change". It didn't make much sense to me at the time. I thought chemistry was about elements, beakers, drugs, and explosives. Not that I started studying chemistry for these reasons, that's just what came to mind. So, when defining chemistry, I expected "The study of the elements" or "The study of chemicals" -- but "The study of change"? Not at all.

Now I think I'm beginning to glimpse the reason for this definition. In the first semester, we dealt with processes like sodium hydroxide dissolving in water and heating it up -- a temperature change. We saw liquids combine to form solids -- a phase change. We watched pink indicator disappear in a beaker, telling us how acidic our solution was -- a color change. In essence, everything in chemistry can be related back to directly observing changes.

Still, isn't that what all science is about? A block changes position, but that's physics. A group of flies changes genes, but that's biology. We come up with explanations for change all the time -- so why does chemistry get labeled "The Study of Change"?

It bugs me. I don't think that the description really elucidates what students are about to study, and even after a fair amount of core material, I sit mildly perplexed. So why give a definition like this? I think I'd much prefer something along the lines of "Chemistry is the study of atoms". It's frank, direct, and while a bit boring, honest. While chemistry no longer claims to look at the fundamental pieces of the universe, that does not really matter. We study atoms. We study how they interact with one another, and while that is not the fundamental makings of our world, everything is still made of them, and understanding how they interact is understanding our universe. Just in a different way than, say, the general theory of relativity.

Still, boring definitions are not a great way to start off a class. You want to inspire wonder, awe, and excitement for the things to come, as well as engage minds to start thinking about the subject matter. Saying "Chemistry is the study of atoms" doesn't exactly teach very many people anything new. So, I offer an alternate definition:

Chemistry is the study of how a massive number of minuscule particles interact on the atomic level and what effect that has in our everyday world.

Maybe a bit clunky, but still better than "We're gonna talk about atoms, which originated from the mind of Democritus, but Aristotle blahblahblah..." Alright, I'm hamming it up, but still: there has to be something that actually describes what we're going to talk about in terms that people can understand, while simultaneously not bringing it down to a level where people don't grow as students. That's the great challenge of teaching: we must become experts in a field, understand it, and then be able to inspire a group of individuals with differing minds and perspectives to construct these ideas from the bottom up. So, while I won't say my definition is the best, I will say it's gotta be better than "Change" and "Atoms".

Friday, April 10, 2009

Legos, the building blocks of life

Organic chemistry is the study of carbon chemistry. The term "organic" is a holdover from back in the 1800s, when it was believed that the molecules of life could only come from life -- a sort of distinction between the matter of inanimate objects and the matter of living beings. This isn't too far off: carbon behaves in ways that no other element does. So different, in fact, that it and water are considered necessities when searching for life. The main reason carbon is thought necessary for life is that it is the only element that can form exceptionally long chains -- up to millions of atoms in a single molecule! The element closest to it in chemical nature is silicon, and the longest silicon chains are only around ten atoms long. However, the life-only distinction turned out to be untrue. It was disproven when a chemist by the name of Wohler (that "o" actually carries an umlaut) created a compound known as urea from ammonium cyanate. Urea was a well-known compound in Wohler's time that originated from mammalian urine (hence urea), but ammonium cyanate was classified as an inorganic compound, essentially showing that organic matter can originate from inorganic.

When studying organic chemistry, there is an analogy I like to employ: Legos. I played with Legos quite a bit as a kid. And it's funny, but Legos, themselves made of a carbon-based plastic, make a great analogy for carbon and all of the elements involved in the chemistry of carbon. When studying organic chemistry, we classify different combinations of elements into "functional groups". When certain elements are bonded to other elements in a certain pattern, they exhibit similar, observable chemical traits. They interact with each other in certain ways, break off, form new bonds, and become new compounds. Essentially, each group is like a Lego: you have small, stout blocks that build the basis for many Lego structures -- the carbon atoms of Legoland -- and you have long blocks, similar to long "R" carbon chains ("R" just denotes "carbon chain"). You have specialized blocks that can only fit in certain places, like the cannons, or the flags and flag poles, or the little switches. These are similar to the other functional groups in organic chemistry: they all behave in a certain way (due to their chemical make-up) and can only attach to certain other groups because of that behavior (think of the castle gates: feasibly, they can serve as gates, or grates on the ground, but they aren't very good rockets for your space Lego sets).

Carbon chemistry, in this sense, is just playing with Legos. Except, with chemistry, you can't use your hands to pick a block off and re-stick it somewhere. The building blocks are too small. You have to figure out ways to interact with the building blocks without picking them up and putting them where you want them: and this is where reaction mechanisms come in. Mechanisms, as a whole, are a step-by-step diagram of what occurs on the chemical level during the process of a chemical reaction. You show where electrons move from and to, what charges various elements have (which can attract or repel electrons), and which elements attach to other elements. The electrons in carbon chemistry can be compared to the nubs on top of the blocks in Legoland -- they keep the blocks together. Armed with the knowledge of a given element or functional group's tendencies, you can pick apart carbon groups, reattach other functional groups, and end up with something entirely different. How that different thing behaves and how you get there depend on a lot of things -- more than I want to go into with this particular blog post -- but the basic analogy holds. You pick apart the blocks of life, and reattach those blocks to build a shape that, in Lego talk, may have started as a house, but is now an attack helicopter.

Wednesday, April 8, 2009

Tropic Thunder and the Inevitability of Physics

In the movie “Tropic Thunder”, the character played by Tom Cruise makes a statement: “Speedman is a dying star. A white dwarf headed for a black hole. That's physics. It's inevitable.” He then proceeds to dance while his goofy yes-man also dances in the background while shoving money in the face of Speedman’s agent to get him to stop worrying about Speedman’s plight. It’s funny as hell. I laughed a considerable amount. However, in the back of my mind, I kept thinking “You're misrepresenting physics!!!” Maybe I’m taking too much from a blockbuster absurdist comedy, but I couldn’t help but think, “Maybe this is how people perceive physics. Maybe they think it’s inevitable, certain, and complete -- movies are a decent representation of mass cultural attitudes.” So... is physics inevitable?

While a goal of physics is to predict what will happen in the physical world based upon observations of what the physical world has done so far, it is far from inevitable and complete. Physics, like all sciences, is based upon a unifying idea (or theory) supported by observations (empirical results). The results presented in classical physics have been confirmed time and again, but that does not mean that they are not subject to change.

Now, did Newton start with observation and then move to creating a system to explain those observations? I have no idea. In fact, it doesn’t matter. What does matter is that the theory is backed by observations.

Observations of what? Well, glad I could write in a format to force that question from you! Classical physics, and physics in general, is an exercise in constructing a system of understanding of the entire physical universe. Usually physics takes a “subtractive” approach – it tries to find a root cause for events. We could construct an understanding of the physical world by testing ideas that deal with, say, the chemical make-up of a substance. While that is important to physics, it’s not the end. Instead, it tries to see what all objects do (Though the cross-over between physics and chemistry is great, they’re still very different. More to come in future blog posts!).

So, to give a starting point, from general observations we can say that all objects seem to be subject to movement. They move from point A to point B. There isn’t a single object that doesn’t seem to do this. While it is possible that such an object exists, we have no reason to believe that it does, and if something doesn’t move, from our everyday experience, it’s usually because something is in its way, not because the object itself simply does not move by virtue of a characteristic internal to itself (something usually referred to as an “intrinsic” or “intensive property”).

So, all objects move, that’s terrific. But how? What causes an object to move? Well, this can be answered in several ways, but I think the simplest answer is “a Force”. You may be familiar with the term from high school physics and Newton’s Second Law expressed mathematically: F = ma. But if not, the concept is fairly simple and associated with the normal definition: Force is something that pushes an object. You force a door open. You shove a box. You pull a rope. Gravity pulls you towards the ground. All of those are forces, and "Force" is just the general concept for anything like that. You’ll notice that Force (the F) is defined here by two terms, m and a, defined as mass and acceleration respectively.

Well, no shit! you say, of course objects move because of acceleration! Hold your horses, I ain't done. If we divide both sides of the equation by the “m” term, then we get

F/m = a

Now, to help with the explanation to come, I want to go over the classical physicist’s definition of mass:

Mass is the property internal to an object that resists changes in movement.

It's a different definition than what is normally used, namely because it's not as intuitive, nor does it really reference things we think about on a day-to-day basis. Normally, mass is explained simply as "stuff", or "matter". But there is a reason for this particular definition -- because we're dealing with movement of an object, we define mass in terms that include only that object, instead of the "stuff" that an object might be made of. You'll note, if we somehow had a way to quantify Force (and we do), that a larger force means a larger acceleration, because "Force" is on top of the fraction. Simultaneously, if we had a way to quantify mass (which we also do), a larger mass means a smaller acceleration, because "mass" is on the bottom. Intuitively speaking, that's because the object we're dealing with is heavier. Think of shoving a basketball. Now think of shoving a bowling ball. Which seems harder to move? Which moves faster if you were to shove them both with the exact same amount of Force? The basketball would accelerate more, of course.
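The basketball-versus-bowling-ball comparison can be sketched in a few lines of Python. The masses below are rough real-world figures and the force is invented purely for illustration:

```python
# Newton's second law, rearranged: a = F / m

def acceleration(force_newtons, mass_kg):
    """Acceleration produced by a force acting on a mass (a = F/m)."""
    return force_newtons / mass_kg

basketball_mass = 0.6    # kg, roughly
bowling_ball_mass = 7.0  # kg, roughly
push = 10.0              # newtons -- an arbitrary shove, same for both

print(acceleration(push, basketball_mass))    # ~16.7 m/s^2
print(acceleration(push, bowling_ball_mass))  # ~1.4 m/s^2
```

Same push, and the lighter ball accelerates more than ten times as much, just like the shoving thought experiment suggests.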

This brings us to another one of those stickler points that physics textbooks get hung up on: “acceleration”. In day-to-day speech, it’s actually a lot closer to the physics definition than it’s given credit for, I think. But then, we usually don’t say “I wish that fucker in front of me would accelerate!”, it’s normally more like “Damnit, speed up!”, and that’s where the hang-up occurs. Speed, in the jargon of the physicist, can be equated to velocity in some direction, and velocity is a measure of how far an object moves in a given amount of time -- how far it would go, say, in the next hour if it kept moving at the same rate in the same direction (which is exactly what your speedometer measures). So, again back to the jargon, acceleration is a change in velocity. Specifically, it’s a measure of how quickly your velocity changes from one value to another. Think of your speedometer, again. When you start to push the "Go" pedal, usually it only takes a few seconds to reach the "30 mph" mark, but it takes longer to go from "70 mph" to "80 mph" -- your acceleration is less than what it was, but your velocity is greater. This all relates back to the movement of objects -- in physics, we call this displacement -- and now we’re back to where we began: movement! Objects move. How? Force. A force gives an object an acceleration, a measure of how fast the object changes its velocity, which in turn is a measure of how fast the object gets from point A to point B. And now we have a simplistic sort of explanation that we can attempt to apply to all objects, because all objects move, and all objects have mass.
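The speedometer picture can be put into numbers. Here is a minimal Python sketch of average acceleration as change in velocity over elapsed time; the timings are invented for illustration:

```python
# Average acceleration = (change in velocity) / (elapsed time).
# Speedometer readings are in mph, so convert to metres per second first.

MPH_TO_MS = 0.44704  # 1 mph in m/s

def avg_acceleration(v_start_mph, v_end_mph, seconds):
    """Average acceleration (m/s^2) going from one speed to another."""
    return (v_end_mph - v_start_mph) * MPH_TO_MS / seconds

# 0 -> 30 mph in 4 seconds, versus 70 -> 80 mph in 6 seconds:
print(avg_acceleration(0, 30, 4.0))   # ~3.35 m/s^2
print(avg_acceleration(70, 80, 6.0))  # ~0.75 m/s^2
```

The second case has the higher velocity but the smaller acceleration, which is exactly the 70-to-80-mph slog described above.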

Now, back to Tropic Thunder and the inevitability of physics: this is a basic explanation of the concepts held in Newton’s Second Law, but it’s not comprehensive. While I’ve made an argument for an explanation meant to cover all objects, we have to go out and test the ideas. To do a test, we can’t just say “Yeah, looks pretty good”, we have to have measurements. This is an attempt at being objective. People are biased, especially when they develop ideas of their own, so while pure objectivity is impossible, a strong attempt can still be made. Usually, as physicists, we rely upon math to create rules. This allows us to find numbers that, if our original thought is true, should correspond to measurements that we’ll make during the experiment. It also removes a lot of interpretation from data. Now, numbers can be manipulated in a number of ways (pun intended), so as a physicist, you usually try to stick to the raw measurements as much as possible to give support to a mathematical expression of an observation. If those observations don’t match up to what you thought you'd get -- for example, if you push both a bowling ball and a basketball with "1 Force", having measured their masses beforehand, you can calculate what accelerations you should get; if the measured accelerations turn out totally different from what you predicted -- then the experiment wins out. The theory (or law, or hypothesis, or whatever) is bunk. The end. Game over. Back to the drawing board. Try something new.
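That "experiment wins out" step can be sketched too. This is a toy Python check, not real lab procedure: the forces, masses, measured accelerations, and the 5% tolerance are all invented for illustration.

```python
# Compare a measured acceleration against the prediction a = F/m,
# within an agreed relative tolerance.

def prediction_holds(force, mass, measured_a, tolerance=0.05):
    """True if measurement matches the predicted a = F/m within tolerance."""
    predicted = force / mass
    return abs(measured_a - predicted) / predicted <= tolerance

# Same made-up 10 N shove on a ~0.6 kg basketball and a ~7 kg bowling ball:
print(prediction_holds(10.0, 0.6, 16.5))  # measurement near 16.7 -> theory survives
print(prediction_holds(10.0, 7.0, 3.0))   # far from 1.43 -> back to the drawing board
```

Note which way the logic runs: a match doesn't prove the theory, it just means the theory survives this particular test.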

And that’s why the line in Tropic Thunder kind of bugs me – physics is one of the oldest sciences (at least as we currently understand the term "science"), and classical Newtonian Mechanics have a metric-fuck ton (This really should be an SI unit) of experimental support, but that still doesn’t mean “It’s Inevitable”. It means, up to this point, under the assumptions made by Newtonian physics (namely the three laws), our experiments have matched up to the theory. Simultaneously, this doesn’t mean you can just throw out all the experiments that have been collected and start from scratch – that would similarly be neglecting the experimental side of physics (as well as the idea that we usually build upon previous findings, rather than start over) – but it does mean that physics is confirmed by experiment, and is therefore subject to change if further experimentation reveals a flaw in the theory.

I think this point is missed mostly because in class it’s a lot easier to grade tests where there’s a numerical answer at the end. In addition, it’s easier to write tests that require you to apply theory, and easier to teach to a test written in this manner than to teach the thought that goes on behind science. Plus, theory is important to understand and implement, so it's not bad to have these skills. But science is not that simple. And I find it rather sad that “Physics” gets to be inevitable while “Biology”, a science applying the same epistemic principles as physics, is held in contention over its fundamental theories. But that is the subject of a blog for another day! This one’s already long-winded enough.

PS: I say this is a simplistic explanation, because, well... it's simplified – some questions you may have asked during this explanation might include “Where does Force come from?”, “What happens when objects hit each other?”, and “How does that equation explain objects moving in more than one direction?” – and those are great questions to ask. Keep up the inquiry! Personally, I recommend taking a class because nothing can replace a teacher, but you can usually find a cheap physics text book, or check one out from your local library, or find other information on physics on the internet -- or, if you're feeling really daring, you could set up an experiment.

Friday, April 3, 2009

Experimental Error

There is a phrase I come across in grading papers -- a phrase that is misunderstood and used incorrectly by undergraduates everywhere. I know this because I, too, misused this phrase to magically explain away all faults. That phrase is "Experimental Error". A faulty R-squared value here, an unexpected color there, a strange vapor developing, an abysmal percent yield: all encapsulated in a simple syllogism:

1. My experiment should do "X"
2. My experiment did not do "X"
3. Therefore, experimental error strikes again!
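To make the syllogism concrete, take the abysmal percent yield from the list above. A minimal Python sketch of the calculation, with masses invented for illustration:

```python
# Percent yield: how much product you actually recovered,
# compared to the theoretical maximum from stoichiometry.

def percent_yield(actual_g, theoretical_g):
    """Percent yield = 100 * actual mass / theoretical mass."""
    return 100.0 * actual_g / theoretical_g

print(percent_yield(1.2, 4.8))  # 25.0
```

A 25% yield is a number that demands a real explanation -- an incomplete reaction, product lost during filtration, a competing side reaction -- not the magic words "experimental error".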

I don't know why we, as undergraduates, grasp onto this phrase. Perhaps it's because it seems simple -- after all, we're introduced to science handed down to us by the hands of Newton, et al., as an algorithm for answering multiple choice questions.

This is an entirely understandable stance.

Before I studied science in college, scientists were people who worked with equations and said strange things. Their work was incomprehensible to me, but I accepted that they knew what they were talking about, and that I would have to be content with understanding the depth of life from the perspective of a bookish layman. After all, while they understood what they were talking about, how could they possibly know how to live? They clam up inside labs with white coats discussing BORING subjects, unlocking the secrets of super-technology and miracle drugs. Who wants to do that?

Well, now I dream of getting into research, but that's not what I'm trying to drive at. The point I'm trying to make is: This is not what science is. If Newton had applied the above syllogism to the problems of physics, he could have, essentially, concluded anything he wanted and explained that the dastardly villain "Experimental Error" had again interrupted the proper data from corresponding to his explanations!

So, briefly, I want to clarify: experimental error is not a human characteristic. Usually, when using experimental error to explain results, students will mention the weighing of reagents, or spilling liquids. This generates a great mental image for me: students, rushing about full of frenzy, jittering with the excitement of discovery, forgetting how to hold containers and how to read balances, spilling chemicals (usually acids, or volatile organics) on counter tops, their notebooks, themselves!!! Someone forgot to mention they were using ether, they light a Bunsen burner, and the entire class ignites into a conflagration of epistemic glory!

It's not this exciting in the lab, but the mental image picks me up in the middle of the dry task of grading.

So, if not that -- as witnessed by the sallow looks of entertainment-deprived freshmen -- then what? What is experimental error? Well, to try a simple definition: experimental error is error we cannot control due to the nature of experimentation. Now what the fuck does that mean? What can't we control? Well, it's not something I really grasped for a long time, and I think it's a difficult concept unto itself -- but suppose you're in the market for your first car. Not just a junker to get you by, but a car that you'll actually drive for a while. It's a big purchase, and there are a lot of options out there. So, what do you do? You read what you can on types of cars, different brands, different models, different years. You look up the Kelley Blue Book value. You check the newspaper daily. You look at dealership prices. In general, you get a feel for what is already there from people who know what they're talking about, and then you take some cars for a spin. You get a feeling for what you like, you listen to the engine, and eventually, based upon what you've read about and what you've experienced, you make a purchase. The experimental error, in this situation, would be everything you didn't know about, everything you couldn't cover, due to your position as a fresh consumer -- maybe the car you bought has bad wiring, but nothing went wrong when test driving it. Maybe there was a better deal across town on the exact same car, but you didn't see it. That isn't exactly experimental error, but it's getting at it: it's something you just can't help, because you are not all-knowing and can't take every measurement ever conceived of everything. You eventually just make a jump: it's an educated jump, based upon current knowledge and experience, but a jump nonetheless, and then you find out, later on, if you made a good jump or not.

This isn't, in the strictest sense, what the scientific method is all about. There's a lot more to it, some of which I'm still trying to comprehend. But this, a common experience in most people's lives (eventually, anyways), is closer to the scientific method than the undergraduate rationalization of experimental error -- and we're supposed to be studying this stuff!

So why, exactly, do we formulate science this way? I could raise awareness of the inadequacies of our educational system, but that's about as vague and useless as fortune cookie advice. I don't have the answer to the question -- it's a question for you, the reader, to think about! I know I will be.

Also, I want to introduce myself as a new blogger: I am a chemistry undergraduate minoring in physics at a small liberal arts college. I want to make particular emphasis about that -- the undergraduate status, not the liberal arts college -- because I am in no way an authority on the subjects I want to blog about. I am fresh, new, thinking, formulating, and quite possibly wrong in all instances. But I do put forth a good amount of effort into understanding what I'm talking about before I start talking about it, so everything I write will be in good faith, at least. I work as a TA at my college for the introductory chemistry labs, and I find that if I explain what I'm learning I feel I understand what I'm learning more. I will update weekly/bi-weekly, and the subjects will include: Science! Chemistry! Physics! Philosophy of Science! Science Education! Popular Science! So tune in next week!