Monday, December 7, 2009

The Two Pillars of Chemistry

It's that time of the semester when time outside of finishing projects and preparing for finals is scarce, but I wanted to get a quick post in. I've been wanting to do this post for a while, so it should come fairly easily:

I remember well my days in Chem II where I learned that chemical reactions are reducible to two major concepts: Kinetics and Thermodynamics. The first deals with how fast a reaction proceeds, and the second deals with what is energetically favorable. And the great thing is that this can all be displayed using only one graph:


This is a reaction energy diagram that I nabbed from here. The energy being referred to is the Potential Energy of the main molecules under study in a chemical reaction: what you start with and what you finish with. In this particular diagram, the starting molecule (labeled Reactants) only undergoes one quick change before becoming a new molecule (labeled Products). You'll notice that in this reaction the Products have a lower potential energy than the Reactants. Because of the Law of Conservation of Energy, this energy doesn't just disappear, but is released into the environment (which, as far as chemists are concerned, is "Not the Molecule"). An everyday example of this happens on a gas-burning stove, or inside your car engine. The energy is released and heats up your food, or drives the piston down. The comparison between the potential energy of your starting materials and your ending products is what chemists use to gauge whether or not a reaction is "Thermodynamically favorable", and it makes up Pillar One of chemistry: Thermodynamics. In this particular reaction diagram, the reaction is thermodynamically favored because the products have a lower potential energy than the reactants -- this works, in a lot of ways, like gravity. Objects "like to" get closer to the center of the earth, and molecules like to have a lower potential energy.

I actually often give chemicals personalities and say "Likes to do..." more often than may be healthy, but personification helps in simplifying the theory.

And now: THE SECOND PILLAR OF CHEMISTRY

(OK, I fess up: these aren't official pillars, I'm just raising a big hullabaloo)

The second pillar, Kinetics, deals with the section of the graph where the potential energy rises temporarily. The maximum of this curve is termed the "Transition State", because... well, to sound redundant, it's very transitory, and it's in transition from one molecule to another. The higher this maximum is in comparison to the starting potential energy of the Reactants, the slower the rate is going to be.

So, really, the two pillars of chemistry tie back to the most fundamental concept of dynamics: Energy. But one quick look at Diamond tells you that it's important to separate Energy into Kinetics and Thermodynamics -- it is thermodynamically favorable for Diamond to spontaneously decompose into Graphite. However, the transition state is so high that this is very... very... slooooowwwww.....
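To put a rough number on that slowness, here's a minimal sketch using the Arrhenius equation, k = A*exp(-Ea/RT), which ties the rate constant to the height of that barrier. The prefactor and the barrier heights below are made-up placeholder values, just to show how violently the rate falls as the transition state climbs:

    import math

    R = 8.314   # gas constant, J/(mol K)
    T = 298.0   # room temperature, K
    A = 1e13    # assumed prefactor, 1/s (a typical order of magnitude)

    # Arrhenius equation: k = A * exp(-Ea / (R*T))
    for Ea_kJ in (50, 100, 250, 500):   # hypothetical barrier heights, kJ/mol
        k = A * math.exp(-Ea_kJ * 1000 / (R * T))
        print(f"Ea = {Ea_kJ:3d} kJ/mol  ->  k = {k:.3e} per second")

The thermodynamics doesn't change a bit between those rows -- only the barrier does -- and the rate swings from "done before you blink" to "longer than the age of the universe."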

Tuesday, November 24, 2009

Feminist Man in the Midwest

That phrase describes me. I cannot say that this was always the case, but I can say that I've had the wisdom to call myself a feminist for a couple of years now. I don't particularly feel like describing my journey into feminism, but after a brief conversation at the skeptical conference I want to outline why I'm a feminist.

In the first place, feminism is not some monolithic man-hating organization. Anyone reading this probably already knows that, but I frequently hear it described that way. There is a considerable amount of diversity within feminism and what it means to be a feminist. I honestly can't even claim to have an expert understanding of all feminist positions. I can only claim that I think strict gender roles and expectations can be harmful to individuals, and that I choose to help establish gender equality in my day-to-day life (as that's where it really begins).

So what does all that entail? For me, it just involves speaking up and asking questions. There are a lot of mores and folkways that strike me as silly and outmoded, and I would like to replace those social constructions with better ones. So I question norms in the hopes of finding better answers. Feminism encompasses some moral positions that anyone ought to defend, like rape prevention, access to birth control, and pay equality. Nobody argues against these things (well, OK, there are a few who argue against birth control, but it's nonsensical). I've argued against feminism myself without realizing the contradiction. But feminism, as a philosophical position, is beneficial to both men and women: it puts our social expectations in human terms, general terms that can be fulfilled by anyone regardless of their sex. I think that this more general formulation helps us to respect each other as humans, which is really what I think feminism is all about. There may be general trends between the sexes within a population, but we ought to also find ways to encompass those who don't follow those trends. Feminism is one such answer.

Feminism can cover a host of issues and topics, none of which I am an expert on, but most of which I find interesting and enjoy discussing and reading about. I tend to approach feminism more from a male perspective, and think that the social expectations of men can be harmful and should therefore be questioned. (shocking, I know, seeing as I'm male).

Sunday, November 22, 2009

Skepticon II

For the longest time I found the notion of an atheist movement to be odd. While I have been an atheist for a long time now, I thought people found meaning in religion, and it didn't seem like the nicest thing in the world to go around removing people's meaning. Further, it seemed odd to form organizations around the idea that God Is Dead. I wasn't always as certain of this as I am now, but I figured that anyone who bothered to actually continue looking for truth would at least be able to rationalize one way or the other, and while I was sure that Atheism was the right conclusion, at least theism offered a structure for individuals to tackle moral problems.

I no longer feel this way. At least not entirely. I still don't feel terribly great about poking holes in people's beliefs, but there are good reasons to believe things and bad reasons to believe things. Further, while I am intrigued by continuing the philosophical debate on the existence and nature of God, as well as everything that might entail, I am certain now that a movement for atheists is a good thing. I was convinced of this by Skepticon II.

The main problem, as I hinted at above, that I had with the New Atheism was that I perceived it as a destructive movement as opposed to a generative movement. I knew that God did not equate to goodness, and took offense when someone thought I couldn't be good because I didn't believe in God, but it seemed supremely silly to me to gather together to destroy the beliefs of others. Quite simply, this isn't the case. If Skepticon II is a good sampling of what the New Atheism has to offer, then while I disagreed with individuals who spoke there, that sort of disagreement was a common theme amongst many people. And my impression was that this sort of disagreement and debate was encouraged. This means that, while we all agree on the non-existence of God, there are still questions and problems that we all still have and disagree on.

So, while it seems that Atheism would be destructive, it was the exact opposite: it was generative, to the point that everyone had a point of contention with something, which was a wholly positive experience for me -- especially because no one there ever once listed "The Bible" as a good reason to do something.

Further, while I have a group of atheist friends that I generally hang around, I'm a fairly quiet and complacent fellow who doesn't speak out to many people. While I enjoy and very greatly value this group of friends, it was also fun just to hang around people who are relatively similar to myself in their general metaphysical world view and to feel that I wasn't fundamentally alone. There was a community of people who wanted to bullshit about science, literature, music, politics, teaching, philosophy, alcoholic drinks, often all in the same conversation. This was something else that wasn't stated explicitly, but that I noticed: The New Atheism is an intellectual movement. The speakers all had an intellectual discipline, and they shared their specialty in their speeches -- something I highly enjoyed. I especially enjoyed seeing science being shared freely with anyone who chose to show up. Further, the science was embraced by those who attended (at least, those whom I talked to). It was not shunned as some hum-drum boring routine you have to go through in order to pass a class. (Sorry for the minor bias towards the science, but it is what I study. I also enjoyed the philosophers and historians, as well as the debate on the existence of God)

So, it is a generative movement, and it is a movement that actually values intellectual labor (something desperately lacking in my experience). Further, it's filled with enthusiastic individuals who enjoy finding like-minded people (which, really, who doesn't?). I find that hard to object to. Thank you to all who set it up and all the speakers who came.

Wednesday, November 18, 2009

Research for Killing

I just returned from a presentation given by a man who works for the US Army developing better ordnance. The primary reason for my visit was to ask him about his ethical justifications for doing research that furthers the cause of war. His primary reasons were:

1) For the people in uniform, so that they can come back home.
2) A person in a democracy follows the will of that democracy even if he disagrees with the democracy's stances; if he does disagree with those stances, he attempts to make political change, but he still supports the political decisions that are made.

I can't accept these as good ethical reasons, but I'm glad he answered without hesitation, and he acknowledged that it was actually a difficult quandary -- so he was aware that there was a gray area.

As I interpret it, his ethical justification boils down to "Patriotism!", which fits well with the zeitgeist of our times, but I fail to see that as a good ethical argument for just about anything. If all actions that are patriotic are justifiable so long as they're vindicated by some form of democratic unity, then the South was right to own slaves. I find the weapons industry hard to justify because you're dealing with something that's pretty fundamental, ethically -- you're furthering man's ability to kill people. And if the 20th century tells us anything, furthering that ability doesn't really deter use. It just makes us that much better at killing people, exactly as the research intended.


Plus there's this whole side to it that makes me think that they're taking the easy way out: It's friggen' easy to destroy things. It's much, much harder to actually produce something useful or interesting.

I think I'm going to be a curmudgeon when I grow up. *grumble, grumble, grumble...*

Thursday, November 12, 2009

Unpacking Equations

Equations are poetry. In the abstract they signify shapes; in science we add the significance with units and measurements. It is the cross-over between shape and meaning that creates the poetry of equations. Looking at a common example:


F = G*(m1*m2)/r^2

The poetic meter of equations comes from the standard method of algebra. It helps in unpacking the meaning. This reads: the gravitational force between two objects is the mass of object one multiplied by the mass of object two multiplied by a constant, divided by the square of the distance between those objects. This is really just a first step in understanding, as that is a lot of information to process. Actually, I think the reason we use equations is because they help us to process massive amounts of information with less effort. Plus they're all objective 'n shit, which Scientists happen to think is a good way to stay hip with the kids.

The first reading is akin to substitution. You have mathematical symbols that can be translated into words, and stating those relationships using words helps in understanding what an equation is saying in the grand scheme of things. In this case I understand that the mass of both objects can differ, so if the mass of either object changes, so will my force. Mass has a positive correlation with the force, whereas an increase in distance has a negative one -- or, in more accessible language, the heavier the objects involved are the greater the gravitational force between them, and the further apart they are the lesser the gravitational force is between them. Something else to note is the fact that the decrease happens at a squared rate, where mass is only linear (unless, of course, you increase the mass of both objects under consideration by the same factor). All that's left is big "G", which never changes. It's actually just something that's determined by measuring, and it's a factor that makes this equation work.
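If you'd rather poke at the relationship than squint at it, here's a minimal sketch (the masses and distances are arbitrary made-up numbers):

    G = 6.674e-11  # gravitational constant, N m^2 / kg^2

    def gravity(m1, m2, r):
        """Newton's law of gravitation: F = G * m1 * m2 / r^2."""
        return G * m1 * m2 / r**2

    base = gravity(10.0, 20.0, 5.0)
    print(gravity(20.0, 20.0, 5.0) / base)   # double one mass     -> 2.0 (linear)
    print(gravity(10.0, 20.0, 10.0) / base)  # double the distance -> 0.25 (inverse square)

Doubling a mass doubles the force; doubling the distance quarters it, exactly as the symbols say.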

So, the equation states a relationship between things we observe. But if it's an actual relationship, we can also determine other quantities from the force, such as the mass of an object in space, without actually measuring that mass on a scale. Or, for this same equation, we can determine the Potential Energy of an object.

The definition of energy (work, really) is a force applied across a distance, or for the above:

dE = G*(m*M)/r^2 dr

where "m" is the mass of any object on the earth, and M is the mass of the earth. I put it in y so that it will appear more familiar in the end. In this, we simply integrate from point zero (the ground) to whatever point above the ground we're interested in, and thus obtain:


PE = G*m*M*[(-1/r)] evaluated from infinity to y = -G*m*M/y

And so we have a statement about the universe from the above equation that required a little digging to see. Big M and G do not change, and the potential energy is negative: it grows in proportion to the mass of the object and falls off inversely with that object's distance from the center of the earth. Not only did this require a little digging, if you haven't had a background in Calculus then it probably didn't make as much sense. While it is preferable to be lucid, I'm trying to make a point: that math is a language. The meter of a poem and the conventions of language bring out the meaning in lines. The operators in math are this meter, creating the poem that describes what we see, and thereby letting us as humans understand at a deeper level than we once did. While what I use and look for in poetry might differ, the experience is largely the same. You read an equation over and over again, looking for the implicit relationship and meaning, and make connections over time that reveal a deeper truth -- in the case of poetry, about the emotion, and in the case of equations, about the universe.
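If the calculus felt like hand-waving, a crude numerical check backs it up. This sketch chops the integral of G*m*M/r^2 from a distance y out to a very large cutoff into slices and adds them up; the sum creeps toward G*m*M/y, the magnitude of the potential energy above (the 1 kg mass is a placeholder, and the earth's mass is only rough):

    import numpy as np

    G = 6.674e-11
    m, M = 1.0, 5.97e24   # a 1 kg object and, roughly, the mass of the earth in kg
    y = 6.4e6             # distance from the earth's center in meters (about the surface)

    # integrate G*m*M/r^2 from r = y out to a huge cutoff, on a log-spaced grid
    r = np.logspace(np.log10(y), 14, 200_000)     # from y out to 10^14 meters
    f = G * m * M / r**2
    numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoid rule

    print(numeric)          # creeps toward...
    print(G * m * M / y)    # ...the analytic result, G*m*M/y (about 6.2e7 joules)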

Tuesday, November 10, 2009

Infinity and Electron Probability

A thought today from Pchem:

Infinity is a relative term. One meter away from the nucleus of an atom is effectively infinity, and 10 billion billion kilometers away from the sun is infinity. Since infinity is a general concept, rather than a number, it can be set anywhere. So, if we consider the probability of finding an electron such-and-such a distance from the nucleus, we can find the probability that it will be anywhere from that point inwards, and we can find the probability of finding the electron between two points by doing the same thing for each point and subtracting the smaller value from the larger. We know that the probability of finding the electron converges to 0 at infinity, but infinity can be anywhere we set it to be. Supposing you want to find the probability of finding the electron on Mars (as was the example given today), you can find the probability between the nucleus and the near side of Mars (a number extremely close to one), and the probability between the nucleus and the far side of Mars (very slightly closer still to one). Subtract the first from the second, and you get a real -- if absurdly tiny -- probability of finding the electron on Mars. I think this all arises because we can set infinity anywhere we want (which is necessary for the concept of infinity to be of any use).
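Here's a rough numerical version of the same bookkeeping for the simplest case I know, the hydrogen 1s orbital, with distances in Bohr radii (so the shells below sit absurdly closer in than Mars, but the subtraction trick is identical): find the cumulative probability out to each radius, then take differences.

    import numpy as np

    # radial probability density of the hydrogen 1s orbital, in units of the
    # Bohr radius (a0 = 1): P(r) = 4 * r^2 * exp(-2r), which integrates to 1
    def P(r):
        return 4 * r**2 * np.exp(-2 * r)

    def prob_out_to(r_max, n=200_000):
        """Cumulative probability of finding the electron between 0 and r_max."""
        r = np.linspace(0.0, r_max, n)
        p = P(r)
        return np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(r))   # trapezoid rule

    print(prob_out_to(1.0))                      # within one Bohr radius: ~0.32
    print(prob_out_to(5.0) - prob_out_to(1.0))   # in the shell between 1 and 5: ~0.67
    print(prob_out_to(25.0))                     # essentially 1: past here counts as "infinity"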

Tuesday, November 3, 2009

It ain't that weird

For all the hullabaloo I've read in popular science books and the strong emphasis my physical chemistry textbook places on the differences between classical and quantum mechanics, half-way through the semester I'm sitting here saying to myself: It ain't that weird. I half-way wonder if the only reason it seemed weird initially was because everyone told me how friggen' weird quantum mechanics are. Sure, an electron doesn't behave like a baseball. Is there really any reason why we think that it should? Even in Physics 1, whenever dealing with real objects we would make it clear that we were inventing a point that made all the classical laws apply (the Center of Mass), but that this point wasn't a real point, so that if the object were destroyed mid-flight, the center of mass would still continue on due to inertia. And, actually, the originators of quantum mechanics knew that it would be absurd to propose a physical system that entirely violated what had already been observed, so they built their equations around the idea that, in the appropriate limit, you would recover the classical results. So what gives? Why does every voodoo mystic and half-baked spiritualist in the world think the deep secret of the universe lies in quantum mechanics? I certainly acknowledge that I'm going at this at the depth of Chemistry, and not at the depth of physics (half-way through and we've just started spectroscopy; I'm told that physicists tend to finish their first semester of quantum with solving the hydrogen atom), but all the quantum "Weirdness" is still there.
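One way to see that "limit back to the classical" point with numbers: the particle-in-a-box energy levels, E_n = n^2*h^2/(8*m*L^2), sit so ridiculously close together for everyday masses and box sizes that the spectrum might as well be continuous. A quick sketch (the box sizes and the 0.1 kg ball are arbitrary choices):

    h = 6.626e-34   # Planck's constant, J s

    def level_spacing(m, L, n=1):
        """Gap between levels n and n+1 for a particle in a 1-D box: (2n+1)*h^2 / (8*m*L^2)."""
        return (2 * n + 1) * h**2 / (8 * m * L**2)

    # an electron confined to a 1-nanometer box: the gap is around an electron-volt
    print(level_spacing(9.11e-31, 1e-9))   # ~1.8e-19 J -- very noticeably quantized

    # a 0.1 kg ball confined to a 1-meter box: the gap is absurdly tiny
    print(level_spacing(0.1, 1.0))         # ~1.6e-66 J -- looks perfectly continuous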

Really, the quantum concept can be introduced utilizing series and sequences. And seeing as we don't exactly live at the size of electrons and can only interpret spectroscopic data to make inferences about what's going on, it makes perfect sense that the wave equation is an abstract description of what's going on, and we need observable values that we in the macroscopic world actually can see. In fact, it almost makes MORE sense than trying to plot out the trajectory of electrons and protons, because we can't actually see these things, and testing what we can see is exactly what science is all about.

Maybe it's the shift from determinism to the "probabilism" (no, not a real word) that really gets people, but half-way through, and fully realizing that quantum mechanics aren't yet entirely complete... I seriously enjoy learning about and thinking about them, but I'm just not quite grasping what's so weird about them. Difficult? Certainly. Abstract? Yes. But the same held (and still holds) true in my classes on chemistry, physics, and mathematics.

Monday, November 2, 2009

"Why Evolution is True" by Jerry Coyne

I enjoy pop-sci books written by those qualified to write them. Jerry Coyne certainly meets that criterion with "Why Evolution is True", but he also fulfills the other part of why I enjoy reading pop-sci: I learn in an entertaining and easy sort of way. The majority of the time Coyne reviews a good chunk of the data collected thus far that supports the theory of evolution while demonstrating the basics of how the scientific method works. Even so, one does not need a background in science to understand the arguments for evolution -- everything is straightforward and fairly easy to comprehend. There is some occasional ribbing of theism involved, but the ribbing is directed towards the current creationist movement that biologists have to contend with more than the grand philosophical questions of theism. This approach shows that Coyne is more concerned with the scientific stance of evolution and the reasons for its truth than with any particular over-arching metaphysical stance. Some reviews term this ribbing "preaching to the choir", but Coyne never lets on what his particular religious stance is. Instead his overall concern isn't the existence or non-existence of God, but the lack of proper scientific argument from self-described creationists and the Intelligent Design community.

What I found particularly enjoyable was his treatment of the debates on evolution within the biological community. Not being a biologist, and having taken all of a single college course on biology, I found it refreshing to be able to review the variations on evolution currently being debated. Overall, Coyne presents the truth of evolution in an entertaining way with references to boot. I would recommend the book to those not in biology but wanting to have a clearer understanding of why the theory of evolution is on par with the atomic theory, as well as a deeper understanding of the social issues at hand (the last chapter covers these) from the standpoint of a biologist who is currently working in the field. We need more popular science books just like this.

Thursday, October 22, 2009

The Second Law of Thermodynamics

The Second Law clicked today. It took two hours of work at a chalk board along with conversations with a professor (who happens to be very generous with his time), but it clicked in my head, and the interpretation that helped it click was the statistical formulation of the Second Law. So, for me, the most confusing part of the second law is NOT how esoteric it is -- it's far from esoteric. It makes perfect sense and matches up with what we observe. To me, describing the Second Law as "In spontaneous processes the entropy of the universe tends to increase, where entropy is the measure of disorder" is the confusing part. This statement makes sense, but only if you're familiar with the jargon. And even then, I was still left wondering "So... why is this, again...?" While you can always ask why (and one ought to), the statistical interpretation trades the frustrated "Why?" for the "Hm, I wonder why?" kind of why -- bridging the gap from frustrating unfamiliarity to curiosity.


But stating the statistical formulation takes a lot more room. I'll still take a crack at it, however.

Suppose you have a chunk of energy. You split that energy into 10 equal parts to observe how it behaves, and you have two metal blocks that can absorb that energy. Placing all 10 equal parts into one of the metal blocks (we'll say the energy heats up the block, since I am referring to thermodynamics here) and setting it next to the other metal block, you sit and wait to see what happens. The heat from the first block should heat up the second block until they're about the same temperature. For our purposes, this is no different than when you let your soup cool off to room temperature, or your ice melts in a glass of water, or when you cuddle up with someone when you feel cold. Eventually heat will be transferred until you reach the same temperature. At this point, heat transfer seems to stop. Ice does not later boil, the soup does not freeze, and you and your partner remain at about the same temperature (though there are some extra complications involved with cuddling, since human bodies produce their own heat, but for rough analogy and everyday experience, it works). Something stops the transfer of heat from continuing in the same direction that is initially observed. Something also stops the transfer of heat from going back to where it used to be (the soup doesn't reheat itself, the ice doesn't re-freeze, you don't go back to being cold). This "Something" is the Second Law of Thermodynamics. From the 10 pieces of energy analogy above:

You have two blocks of metal. However, those blocks of metal have places to store this energy -- atoms. Everything has atoms that it can store energy in. The question really becomes which atoms hold what amount of energy. This is a question that can be addressed mathematically with a concept termed "Multiplicity". Multiplicity is the number of ways you can store those 10 units of energy in however many atoms are present in the metal block. You can place all 10 in the first atom you touch, or spread them out over 10 different atoms, or put 5 in one atom and 5 in another. These are all different ways to arrange this amount of energy. Even so, if all 10 of the energy units are still in the first block, this would mean that the block is at the same temperature (if you'll recall that our energy units tell us how hot our blocks are) no matter how they are arranged within the individual atoms that make up the block. This is something called a "Macrostate" -- a mathematical description of what we observe, namely, the temperature of the block. However, the "Microstate", or the mathematical description of how the energy units are distributed amongst the individual atoms in the block, still plays a crucial role. See, if we take into consideration the second block of metal we just touched to the first block (let's suppose that both of the blocks are the same size), we essentially double the number of atoms our 10 units of energy can spread between. We also increase the number of macrostates from the single one before (where our block stayed at the temperature of the 10 units of energy that we placed there) to 11 different possible macrostates -- 10 units of energy in the first block and 0 in the second, or 9 in the first and 1 in the second, or 8 in the first and 2 in the second, and so on and so forth.

So the question becomes: Which macrostate is the most likely one to observe? From common experience, we know that things tend to have the same temperature as one another if given enough time, such as the soup cooling off in a room example above. So we should expect that what we observe will be 5 units of energy in the first and 5 units of energy in the 2nd block, given enough time. But why? That is where the term for "Microstates" comes in. It turns out that when you have 5 in the first and 5 in the second, you have more possible ways of distributing the energy throughout the different atoms than you do with any other macrostate. So, it just becomes a statistical issue: There are more possible ways for the Macrostate 5/5 to be observed, therefore it is the one most often observed. There may be some oscillations about this point, but we still observe this more often than anything else.
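If you want to see the counting with actual numbers, here's a tiny sketch. It assumes the energy units are identical and that each atom can hold any number of them (the standard "Einstein solid" style of counting, via the stars-and-bars formula), and it pretends each block has only 50 atoms so the numbers stay printable:

    from math import comb

    def multiplicity(n_atoms, q):
        """Ways to spread q identical energy units over n_atoms distinguishable atoms."""
        return comb(q + n_atoms - 1, q)

    N = 50          # assumed number of atoms per block (made up; real blocks have ~10^23)
    total_q = 10    # the 10 units of energy from the example

    for q1 in range(total_q + 1):   # q1 units in block one, the rest in block two
        ways = multiplicity(N, q1) * multiplicity(N, total_q - q1)
        print(f"{q1:2d} / {total_q - q1:<2d} units: {ways:,} microstates")

Even at this toy scale, the 5/5 split has hundreds of times more microstates than the 10/0 split; scale the atom count up toward 10^23 and that lopsidedness becomes astronomical.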

Now the real kicker is that when dealing with the real world, one deals with more than 10 energy units. We deal with billions upon billions of energy units. And, as atoms are awfully small, we also deal with billions upon billions of atoms. So, with such large numbers the oscillations about the midpoint become immeasurable. While fluctuations are allowed by probability to occur (every individual way of arranging the energy in the atoms is just as likely as any other), we never notice a large one due to the sheer improbability of it happening -- odds involving numbers much bigger than 10^23. I'm not sure how to express how improbable it is to feel an object heat up without anything heating it up (as it is REALLY FRIGGEN IMPROBABLE), but as you've never experienced it in your life, and I am confident in saying that, you too can feel confident that the 2nd Law is pretty sound stuff! Cool factoid from another common experience, unrelated to heat: table salt dissolving in water is an entropy-driven process, which is to say that without the 2nd Law of Thermodynamics, table salt wouldn't dissolve.

Thursday, October 15, 2009

The Stories of Problems, and visuals

While visualizability is far from a necessary component of a physical system, I still find fictional visualizations beneficial to working problems. I imagine energy as a sinusoidal beam, heat as a cloud of these beams, and electron probabilities as a static mist. I think it helps me to create a narrative of the events, which makes it easier to line up the appropriate questions to ask myself from the mental array of problem-solving techniques. I have recently started developing a visualization for circuits by using water pipes. Except, not. I imagine they're large, already-filled pipes that require motors to both pull and push the water, because the fluid is just that dense; or, I try to think of it as a steam-like substance under pressure, but so high in mass that it's very stubborn to move, so it just needs two motors. I try to avoid thinking about liquid water, because water is blue, and I imagine that electrons are blue, so I'm trying to keep the visual for the flow of positive charge separate from the visual of electrons that I use, say, when comparing electronegativities, because their stories are different. Maybe something more "Yellow"-like.

I highly recommend it. Even if the visualizations are somewhat false, I've found them to be helpful in the problem-solving area.

Wednesday, October 14, 2009

Teaching Experience, 3

This is something I've noticed over the course of my tutoring, and it happens with most students in general.

If you ever ask someone, "Does that make sense?" they will always, always, always answer "Uh-huh" (or "Yes", or another general colloquial affirmation). I could say "the delta G favors dissociation" to someone memorizing the solubility rules, and they'll just start to nod their heads and look a little confused, but they will answer "Yes" with at least a .99 probability -- I haven't tested that, but I hypothesize that it would happen.

I wouldn't say that I suddenly get a pass on this one, either. If I'm struggling with a concept, I'll often just blurt out the first thing that comes to mind to see if it sticks and see if I'm anywhere near the right track. If someone asks if I understand, I'll say "Yes", wait a minute, and then ask a question directly related to what I was just told. Sometimes the answer will be the exact same thing that they just said.

So this got me to thinking about a general possible maxim for teaching: Never ask your students if they understand. Always assume that they do not understand. When they look bored, then that is the point at which they understand.

This isn't always necessary, as sometimes an individual's body language will let you know whether or not they understand the concept. But some people, including myself, are good at hiding it... in the hopes that they don't embarrass themselves (at least, that's my personal motivation), and in the hopes that something later will make it all click together.

I am going to start testing this tomorrow.

EDIT: The phrase is a habit. I totally fail.

Monday, October 12, 2009

"Uncertainty" by David Cassidy

Last night I finished Cassidy's biography of Heisenberg, and so I wanted to write a brief review.

The author is a scientist-turned-historian writing a biography of a great scientist. As such, the book really tells three stories that all occur simultaneously. The obvious one is the life that Heisenberg led. You also get a brief synopsis of his scientific achievements as they were developed and published. To put both of these stories in context, however, the third story being told is a pseudo-personal history of Germany. To give the reader a better understanding of this history, Cassidy gives brief anecdotes about the figures that appear in Heisenberg's life, including things Heisenberg himself would not have known, such as the activities of Oppenheimer during the Second World War, or the actions of influential Nazi party individuals that, entirely unknown to Heisenberg, essentially saved his life.

There is a historical controversy about Heisenberg dealing with his actions during World War II. The author takes great pains to tread around this with tact, and succeeds at doing so while still giving information about the controversial events. He lays out why certain pieces of evidence are suspect, historically speaking, but because those pieces of evidence are wrapped up in the controversy, he presents the evidence along with the arguments built on it.

While I do not mean to denigrate the efforts of historians, as a scientist-in-training I personally think that the interest in those controversial events lies not in the exact truth of them, but rather in the ethical implications attached either way. If this book can be said to have a theme outside of the main subject matter, the "ethics of science" is the most prominent. This is far from surprising, as World War II really encompasses that question as a whole. I honestly don't think the question was considered before the fall of Nazi Fascism and the bombing of Hiroshima and Nagasaki. However one falls on the question of ethics, the life of Heisenberg is an excellent first stepping stone for addressing the intersection between ethics and science, and as such, this is a book any scientist (or ethical philosopher) ought to be interested in reading.

Monday, October 5, 2009

Undergraduate Research

While it may be a pain in the ass for the professor involved, I have to say that I'm happy that this class is a required part of my undergraduate degree. Especially when put in contrast to the upper-level science courses I am currently taking, which half the time cease to have a lab component complementing the theory -- not that theoretical classes are bad unto themselves, as there's a lot of material out there one has to play catch-up with. But I've been forced to learn about a subject I've never had a class in by way of teaching myself from the current literature. I haven't done a single experiment; I've only given myself a beginning background in an area. And the ability to utilize things like SciFinder or PubMed or the ACS website, and teach yourself (with a little help from my advisor, I must admit) about a topic... I can't help but think these are invaluable skills for the work that I hope to be doing in the future. And they aren't skills I ever used in a classroom setting, because there you're more concerned with problem solving, memorization, and finding answers in your textbook's index.

Additionally, there's an emotional satisfaction to it all -- becoming familiar with an area in order to do original research. But I wouldn't argue that that is the prime reason for including things in a curriculum.

Sunday, October 4, 2009

Hydrogen Bonding

Last time I mentioned wanting to go over the reason why drinking alcohol, despite being the heavier molecule, has a lower boiling point than water. The explanation lies not just in chemical bonding, but in a specific type of interaction: the hydrogen bond. In order to understand hydrogen bonding, however, I think one needs to understand chemical bonding in general.

A chemical bond is what holds a molecule together. When you have something like H2O, a chemical bond holds the two hydrogen atoms to the oxygen atom. By this definition a hydrogen bond isn't strictly a chemical bond, as it does not hold a single molecule together, but rather describes an interaction among a large group of molecules. However, they are related, as a sort of "bonding" occurs between multiple molecules. Behold, the molecular shape of water!


You'll notice that, with respect to the atoms involved, it has what is called a "Bent" shape that resembles the letter "V". The four dots around the oxygen atom represent electrons that the oxygen carries around with it. The important thing to know about those electrons in this case is that they are negatively charged, which ends up giving the whole molecule two distinct ends, like a magnet with its positive and negative sides.


Or a North and a South side, as in this picture. Same idea. In fact, if you've played with magnets, water behaves in much the same way: the negative side of one water molecule is attracted to the positive side of another. The negative side of water is the side with the oxygen, because oxygen "likes" to carry around electrons (relative to hydrogen). The positive side of water is around the two hydrogen atoms for two reasons: a hydrogen atom's nucleus is a single proton, which has a positive charge, and, as stated before, the oxygen atom "likes" to carry around negative charge much more than hydrogen does. So the oxygen atom not only has the four electrons that it normally carries around, but it also pulls both hydrogens' electrons toward itself. This causes the entire water molecule to become "polar" in the same way that the bar magnets above are polar: with a North and a South side.

Hydrogen bonding is this sort of interaction: one side of a molecule has hydrogen atoms attached to an atom, like oxygen, that carries much more negative charge than hydrogen will. This causes a polarity on the molecule, and then large groups of those molecules interact with one another, with the negative side of one attracted to the positive side of another. This won't create true chemical bonds, as no new molecules are formed, but the interaction is enough to have an effect on macroscopic observations, such as boiling point.

To relate this back to the post on distillation: you'll notice from the diagrams in the previous post that drinking alcohol also happens to have an oxygen atom with a hydrogen atom attached to it, which makes it suspiciously similar to water. In fact, this is the case, and some hydrogen bonding occurs in drinking alcohol. However, you'll also notice that there are two hydrogen atoms attached to the oxygen in water, and only one in the case of drinking alcohol. This allows water to form more hydrogen bonds, which makes water more attracted to itself than alcohol is to itself. Because water is more attracted to itself than alcohol is to itself, it takes more energy (and hence a greater temperature) to cause it to boil. So in the distillation process, alcohol will evaporate before water because the effect of hydrogen bonding on boiling point is greater than the effect of molecular weight.

This sort of explanation is the essence of chemistry. There are a number of physical things one can measure. There are a number of attributes to a given compound. But the desired end goal is to find a molecular explanation for a macroscopic observation -- something that the hydrogen bond easily does in this case. Also, for further reading, check out the effects of the hydrogen bond on DNA configuration and the density of ice. It has a great deal of explanatory power across several differing areas of study, as well as theoretical justification in physics. These are all the makings of great scientific facts.

Wednesday, September 9, 2009

Distillation

This apparatus is what a chemist uses to distill things. There is a long cylindrical tube connected to a flask that sits on a heat source. The tube connects to another, similarly shaped tube that sticks out from its side and is pointed downward. This tube, called the "condenser", has water running through a cavity between the inside and outside of the glass -- sort of like having a glass tube within a slightly larger glass tube. This tube ends in a spout, where some sort of receptacle is placed for collection. In the picture above, the receptacle is a graduated cylinder with a red plastic bottom.

What occurs macroscopically in a distillation is pretty common to everyday experience: you add heat to some liquid, the liquid evaporates up the tube and eventually travels through the condenser, where the running water quickly cools the vapor back into a liquid, which drips out of the spout and into the receptacle. In particular, this is how liquor companies obtain higher concentrations of alcohol. When you make alcohol, the alcohol is fully dissolved in water -- like beer, or wine. The trick to higher alcohol content lies in... Chemistry!

So, suppose you have a beaker full of recently made alcohol -- it will be clear, and from appearances look to be a single liquid. This is because alcohol is miscible in water, which is the opposite of what happens when you mix oil and water. No matter how much water and alcohol you mix together, they will always freely intermingle. So, you're left with a beaker of water and alcohol molecules:

Behold the power of Paint! The blue atoms with two red atoms coming off of them are water molecules. The other kind is a molecule of the drinking variety of alcohol. It has a blue atom as well because both water and alcohol have Oxygen in them.


How would you separate these?

A quick look at ethanol's MSDS sheet tells us that the alcohol has a boiling point of 78 degrees Centigrade. Water's boiling point is 100 degrees Centigrade. Attempting to boil the mixed liquid seems like a good idea. And, in fact, this is how alcohol and water are separated -- the alcohol evaporates first and is collected, and then the water comes over. If you want to keep them separate, you stop the distillation once you have collected the majority of your alcohol. How does one tell when that happens?

You'll notice in the photograph a thermometer. If we plot a graph of the amount of liquid collected on the x-axis versus the temperature of the vapor (which corresponds to the liquid's temperature) on the y-axis, you'll see something like this:

I chose this image on purpose because it displays the two types of distillation on the same graph -- simple and fractional. They both have roughly the same shape, only the fractional distillation has a much sharper jump in its temperature. We'll come back to this shortly.

Note also that alcohol, which evaporates first, has a lower boiling point than the water. Also note that the temperature in the graph climbs as the distillation occurs. This is because the vapor evaporating has an increasing number of water molecules, which require a higher temperature to vaporize. So, you know that you have collected as much alcohol as you can when you reach a mid-point on the graph, which you can determine experimentally by running the whole distillation once through.

Also note in this graph that the fractional distillation has a much sharper jump in temperature. This is because, initially, you are evaporating mostly alcohol and leaving most of the water, but then suddenly you only have water. In the simple distillation, the rate of change of the ratio of alcohol to water is much more gradual (prepositional phrase glory, right there). That is because...

*fan-fare!*

The fractional distillation simulates doing a simple distillation hundreds of times over! Well, I'm uncertain about the actual factor, but it does simulate it going over and over again. The photograph above shows a set up for fractional distillation. If it were a simple distillation, the flask carrying the mixture wouldn't be connected to a long vertical tube, but would be next to the condenser. In the vertical tube are placed several glass beads. As the vapor rises, it condenses on the beads (since the beads are cooler than the vapor), and the heat from more vapor gradually warms up the bead until the condensate evaporates again. This occurs time and time again, with some of the liquid pouring back down into the initial flask. Each time this occurs, the mixture becomes a little more concentrated in the chemical with the lower boiling point -- in this case, the drinking alcohol. With a simple distillation, this occurs only once, but the beads essentially simulate many simple distillations in a row.
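A crude way to put numbers on the "many simple distillations in a row" idea: pretend each bead acts as one ideal stage that enriches the vapor according to a constant relative volatility. (A real ethanol-water column is messier than this -- there's an azeotrope near 95% alcohol -- so the values below are made-up illustrations, not the real ethanol-water numbers.)

    def enrich(x, alpha):
        """One ideal stage: vapor mole fraction leaving a liquid of mole fraction x,
        assuming a constant relative volatility alpha."""
        return alpha * x / (1 + (alpha - 1) * x)

    x = 0.05      # a made-up dilute feed: 5 mole-percent alcohol
    alpha = 2.3   # assumed relative volatility of alcohol over water

    for stage in range(6):
        print(f"after {stage} stage(s): {100 * x:5.1f} mol% alcohol")
        x = enrich(x, alpha)

Each pass through the loop is one "bead's worth" of evaporate-and-recondense, and the alcohol fraction ratchets upward every time.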

Now, an oddity here -- you'll notice from the Paint drawn beaker diagram above that the alcohol molecules are actually larger than the water molecules. The molecular weight of alcohol is, roughly, 46 grams per mole. Water's molecular weight is 18 grams per mole. Yet, despite having more mass (thereby giving the impression that it will need more heat, which can be roughly thought of as energy, to turn into a gas), the alcohol has a lower boiling point. Stay tuned for this explanation next time! Whenever next time is. This is a busy semester.

Saturday, September 5, 2009

Derivations

Two years over and done with, and I have a good feel for reading equations. This isn't always the case -- I'm still unpacking things as complex as, say, the Schrodinger equation, but give me something along the lines of chemical kinetics, a classical mechanics problem, or the ideal gas law: yeah, I feel pretty good about reading the relationship. But just as I've gotten comfortable with reading equations, this year throws a new angle at me: deriving equations from other equations.

Holy shit, derivations are difficult. So far, I have no real "feel" for where to begin in deriving. I just write down two or three related equations, isolate some variables, do some substitutions, and play with the rules of logarithms hoping that all my random math play will, in the end, give me the equation that I'm looking for. To say the least, this doesn't help. I've been walked through deriving the ideal gas law using classical mechanics, and the derivation itself makes complete sense. But now, left on my own, I feel entirely stuck.

The current problem: derive PV^gamma = constant from VT^(f/2) = constant, where gamma = (f+2)/f, and f is the number of degrees of freedom. So, I have both forms of the ideal gas law, the first law of thermodynamics, a definition for work, and the equipartition theorem of energy... I think I could google something up, but this wouldn't help me in knowing how to actually derive equations, rather than follow arguments.

If you have any kind of method for deriving equations, then this is my desperate cry for help. In the end, I'll get it. But it'd be nice to see what other people do if and when they derive equations.

EDIT: In solving, I found a new "method" for derivations. Working backwards. By playing with the "end" result in the same way that I played with the beginning result, I was able to see a familiar form that I knew I could convert the beginning result to. Other than that... no method, really. More intuition.
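For completeness, the algebra itself is short once the backwards play points the way (this sketch assumes the ideal gas law, PV = NkT, with the fixed factors of N and k folded into the constants):

    Start:            V * T^(f/2) = constant
    Ideal gas law:    T = P*V / (N*k)
    Substitute:       V * (P*V)^(f/2) = constant'
                      P^(f/2) * V^((f+2)/2) = constant'
    Raise to 2/f:     P * V^((f+2)/f) = constant''

    which is P*V^gamma = constant, with gamma = (f+2)/f.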

Thursday, September 3, 2009

Teaching Experience, 2

Alright, so teaching is much, much more complicated than tutoring, granted, but tutoring is where I get my practice in the craft at this point in time. And as it's easier, it's a good place to practice, because I get to see results directly. It probably also helps that the people I tutor come willingly and are paying for their classes. So, it's like baby-teaching. Nevertheless, it's a good field in which to practice and develop my teaching skills, so I'm still labeling it "Teaching Experience"

Today, we covered Unit Conversions and basic chemical nomenclature. Nomenclature is hard to teach because there aren't any real patterns to pick out, and there is quite a bit of data to memorize. As such, you just have to memorize by use, so the best way to teach it is to do it. In a tutoring session, that seems difficult, but upon reflection now, I think naming drills may have been the ideal solution. Must pocket this idea for the future.

Unit Conversions are fairly simple, but they still stump a lot of people. So, like most people, I use the picket-fence method, AKA Dimensional Analysis -- however, I've found in teaching that the use of big unfamiliar words gets in the way of the concept, so it's usually better to introduce the concept first, and then the big unfamiliar word attached to that concept. There isn't a real reason I can think of why, other than that the big unfamiliar word sounds scary, so those who are low on confidence (like those who like to go to tutoring sessions) will often shoot themselves down before the concept is introduced. Further, something else that I've found great for tutoring is to start doing the work on the board, but only write what is stated by the students. That way they have to do the thinking, and you're not stuck there giving another lecture that the students have already heard. It's a bit silly to do that in a tutoring session, especially when the lecture didn't get through to them. Sometimes I throw hints in there, or to make things easier I'll explain a single step and do it so they don't become frustrated, but overall I find letting students do the work teaches them better than doing it for them.
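For what it's worth, here's the picket-fence layout itself, on a made-up conversion just to show the shape:

    65 miles   1.609 km   1000 m    1 hour
    -------- x -------- x ------- x ------- = about 29 m/s
     1 hour     1 mile     1 km     3600 s

Every factor is just a fancy way of multiplying by one, and the units cancel diagonally until only the ones you want are left.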

Also, I love having more than one student at tutoring. I hadn't experienced this until recently, as I'm only recently in a position where we have open tutoring. But today, when one student understood the problem and the other didn't, the first student jumped right in to answer the other student's question: so the first student was reviewing the material while the second student was having it explained in a way that maybe is a little simpler than I'm explaining it -- I'm used to dimensional analysis, as well as Chemistry in general, so the terms I'm used to working in may in fact be above the heads of those taking Freshman Chemistry. In fact, not just may, but ARE. That is one of the great difficulties in teaching -- you become proficient in a subject, but it becomes difficult to explain the subject because in becoming proficient you generally forget some of the simple steps in between that you used to have to take consciously in order to solve problems. Or you just assimilate simple terms into more complex terms in order to store a greater amount of information. Then you have to unpack all that knowledge, and lead people along step by step from the beginning while not intimidating them, entertaining them, and being their friend while still maintaining a position of authority and respecting their values and way of thinking but modifying it in such a way that they become better thinkers and learn the actual subject matter.

Fascinating. Difficult. Rewarding. Undervalued.

Wednesday, September 2, 2009

Heisenberg

I'm currently reading "Uncertainty" by David Cassidy in conjunction with my P-chem class. While I can't currently write a review of the book, as I haven't finished it yet, I do have to say that reading about his early life is a serious motivator for me. He learned how to apply Calculus to Physics during his high school years through self-study. I'm in my mid-twenties, and while I've progressed in that direction to the point that I feel pretty confident with it now -- man! -- I did that with the help of professors lecturing me on that very topic. Looking at the educational ability of the greats around the turn of the century is humbling and inspiring.

Also, interesting fact: Max Planck, Albert Einstein, and Werner Heisenberg all graduated from the same "Gymnasium" -- the early 20th century German equivalent to our high schools. The implied reason for this: the rich ensured that the best teachers were teaching at their Gymnasium by way of spending money on them. This isn't pointed out to denigrate the ability of these great men, but it does make you wonder about those who think teachers are already paid enough. (Totally anecdotal evidence reinforcing my personal bias in action)

Monday, August 31, 2009

Math and Science: Dehumanizing?

An interesting CJR piece reviewing the Harper's magazine article "Dehumanizing: When math and science rule the school" (I tried to get to the original article, but no such luck without money, and I just so happen to be a student) -- link to CJR; I got this via Symmetry Mag.



I have no problem with liberal arts studies -- in fact, I encourage them and enjoy them myself. The problem I have with the above is: in what way are the sciences dehumanizing? If the point is more to speak up in favor of a liberal arts education, I would be in full support. But it strikes me as particularly silly to claim that math and science are dehumanizing, setting them up as some sort of Human anti-Human dichotomous interaction where one or the other wins out, and we have to set out to find the mean between them. Was I always as interested in math and science as I currently am? Far from it. But I was also a 19-year-old wanna-be artist. I would expect someone in the humanities, who's grown up a bit, to realize that the conflict between the two isn't intrinsic to the subjects, but a skewing of national cultural values towards the things that have "Practical" value or economic returns -- which is something that scientists also have to deal with.

Wednesday, August 26, 2009

Because they are useful...

I ran into an interesting paragraph today. It stated that the equation F = ma is used because... it's the fundamental equation in classical mechanics, and it helps to describe a lot of physical phenomena. Essentially, because it is useful. This was described in conjunction with a corresponding equation in quantum mechanics that I can't begin to explain, so I'm not typing it out. There was a similar statement made in the Heat and Thermodynamics class that I'm taking: it claimed that Energy was THE fundamental concept of all of physics, and as such, evaded definition. This all brought home to me how much the philosophy of science is seriously influenced by Descartes and all the early modern philosophers: I've personally read the idea that fundamental things escape definition being propagated by Descartes, Locke, and Hume. This shouldn't come as much of a surprise, seeing as Descartes laid down fundamental work for calculus, and Hume is credited with seriously developing the philosophy behind the scientific method (taking empiricism to its logical conclusions and inadvertently making a reductio ad absurdum argument for the existence of induction as a separate logical system, in my humble opinion). But this still surprises me.

The process of first principles in logical systems is arational, granted. But the idea that we use concepts in science simply because they are useful for describing the physical world seems, to me, to be a bit off from the idea that we are, indeed, understanding the physical world. I'm fine with stating that science only describes things in useful ways, and that is why we use them, but this description really gives little reason why we would choose one scientific explanation over another, or why even differing disciplines would, indeed, come to the same conclusions. I mean, by this, I could essentially adopt Aristotelian teleology in my description, claim that it's useful for understanding, and stand back satisfied with that use. However, just try and publish a scientific paper today where you ascribe purpose to your explanation, and I sincerely doubt it'll fly. To me, it seems that the "use" approach for validating the logical beginnings of scientific descriptions falls flat. I think the reason for this statement is to cut down the number of assumptions one has to make in making scientific pronouncements (which I would claim is a good thing) -- but unless there is some other validation method, I'm thinking that we are indeed still assuming that our minds are interpreting truth about the physical universe, but we're post hoc attempting to erase the fact that we're making this assumption.

So, sure, they're useful, and that's great. Maybe I'll change my mind when I realize there are other criteria that can be applied to first principles. However, I think it's a far more elegant solution to just admit that we're making something up that sounds like it might be right, then validating it empirically, and assuming all the while that our minds have some connection to the truth of the universe.

Thursday, August 20, 2009

The Photoelectric Effect

You cover this topic in your first semester of Freshman chemistry. But at the time I have to say that I failed to grasp the weirdness (and pure scientific genius) of Einstein's Nobel-prize-winning explanation of the photoelectric effect. I don't pretend to be able to condense a good 50 years of scientific inquiry into one blog post, so I'm just going to focus on the single part of the photoelectric effect that I seriously missed in Freshman chem, and am only now beginning to grasp in Physical Chemistry.

The energy of an electron ejected from a metallic surface depends not on how much light is hitting said metallic surface, but rather on how often the light waves hit it -- the frequency. To illustrate this odd phenomenon, suppose a ball is hit by another ball (in a perfectly elastic collision, for ease of explanation):

Now, in mechanics (and if you're familiar with pool) we would expect the ball to hit the other ball, stop, and for the second ball to continue in motion, like so:

This would happen no matter how hard we shoved the original ball. As long as there was still kinetic energy in the initial ball when it hit the second ball, the second ball would be sent away. This is somewhat still the case with regards to the photoelectric effect, but not exactly. The photoelectric effect deals mainly with light waves and electrons. The electrons are in a metal -- and if you've ever opened anything electronic, you'll notice that it's filled with metal wires. That's because electrons easily move through metals, which is why metals can carry a current. This also makes the electrons relatively easy to knock out of the metal with a source of energy like, say, light.


There's one more very important thing to consider -- at the time of this experiment, light was thought to be a wave (for some very good reasons, explained at Built on Facts). The energy transmitted by light was thought to be dependent upon a given light beam's "Intensity", which was determined by the wave's Amplitude. This is important because the electrons held in the metal need 1) a certain amount of energy to knock away whatever force is holding them in place, and 2) whatever energy is left over goes into getting the electron moving. However, when bright and dim light of the same color was flashed on a sheet of metal, there was no change in the energy of the ejected electrons -- only in how many came off. This means that the energy delivered to any one electron was not dependent upon the beam's intensity. Further, below a certain Frequency, electrons were no longer ejected at all, no matter how bright the light. So, the energy handed to each electron must depend on how frequently the wave crests hit the metal, as opposed to how big the waves hitting the metal are. To illustrate this:






In these two drawings, the light with one "wave" but a higher frequency (illustrated by the shorter wavelengths, since all light travels at a constant speed) delivers a GREATER amount of energy per hit than the drawing on the left, where four "waves" hit the metal plate all at the same time. Now, in everyday life, suppose you throw four balls with an equal amount of Force at a target and they all hit at the same time: you transfer all of the energy in each of those balls into the target at once, which gives you a greater overall impact. If you were to throw them separately, but more frequently, the net energy transferred to the target would be the same, but each impact would be 1/4 of the first example. In the case of light, though, not only is the energy per impact greater at higher frequency, but piling more "waves" on at once doesn't raise it at all! This is completely contrary to everyday intuition (which is fine, as physical systems aren't actually supposed to do anything -- we just observe what they do, then describe them). This also agreed with Planck's earlier work on blackbody radiation, which pointed to the same conclusion from a different angle -- further support that the energy light hands to a single electron depends not on intensity, but on frequency. The equation that ties this all together is:

E = hν

Where E is the energy, h is a physical constant (called Planck's constant), and nu (ν) is the frequency of the light wave.
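
(If you like seeing the arithmetic, here's a minimal sketch in Python of the bookkeeping that E = hν implies. The work function value is an assumed, illustrative number -- not a measurement of any particular metal -- and the frequencies are just round values for reddish, bluish, and near-UV light.)

# Rough sketch of the photoelectric bookkeeping: photon energy E = h*nu.
# The work function below is an assumed, illustrative value, not a measured one.
h = 6.626e-34           # Planck's constant, in joule-seconds
eV = 1.602e-19          # joules per electron-volt
work_function_eV = 2.3  # assumed energy needed just to pry an electron out of the metal

def max_kinetic_energy_eV(frequency_hz):
    """Whatever photon energy is left over after paying the work function."""
    photon_energy_eV = h * frequency_hz / eV
    return photon_energy_eV - work_function_eV

threshold_hz = work_function_eV * eV / h   # below this frequency, nothing comes out

for nu in (4.0e14, 6.0e14, 8.0e14):        # roughly red, blue, near-UV
    ke = max_kinetic_energy_eV(nu)
    if ke <= 0:
        print(f"{nu:.1e} Hz: below the threshold of {threshold_hz:.2e} Hz -- no electrons")
    else:
        print(f"{nu:.1e} Hz: electrons ejected with up to {ke:.2f} eV apiece")

Turning up the brightness just adds more photons; it never changes the energy each photon hands to a single electron, so the leftover kinetic energy only budges when the frequency does.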


Now... OK... admittedly, here's where things are still shady for me... but the photoelectric effect also demonstrates the dual nature of light: light exhibits both wave-like and particle-like features. The best formulation I can come up with here is this: suppose we have a wave, like above, and we know the frequency the wave needs in order to knock electrons loose (since frequency corresponds to energy). We set the frequency just above what's required to detect electrons being knocked loose. Now, we spread that light beam out with a lens over an entire metallic surface. What is observed? Whether the light is spread out or focused on a single point, the same number of electrons are ejected. If light were a continuous wave, spreading it out would spread its energy thinly over the whole surface, no single spot would receive enough energy, and we would expect no electrons to be knocked loose. However, since we still observe electrons not only being knocked loose, but the exact same number of electrons being knocked loose, we have to conclude that light energy comes in packets. And THAT is the beginning of quantum (meaning "piece") mechanics.

Holy. Fuckin'. Shit.


(apologies for any bungling to actual history of this discovery, or even misrepresentation of the theory. Really, I'm just beginning to grapple with these concepts. Someday, this stuff'll make even more sense)

Wednesday, August 19, 2009

Conceptual Question of the Day

Classes started today. Also, I think I'm going to try to treat this more like a traditional blog, which means more frequent updates with less thought-out content. Sweet. Though I'd also like to throw in some good content when I feel I have the time. For now, however, my semester is lookin' hella busy, and I don't think I have the time to write an end-of-the-week recap of my thoughts on science. Instead, I'm just blundering along and pushing out junk, hoping that something sticks to the inner walls of knowledge.


So, of all the questions I raised to myself today, this is the one I remember as the most interesting: Suppose a ball is coming towards you. You do not know the origin of the ball. Before the ball looks like it's going to hit, how do (or can?) you distinguish between a) a ball coming towards you, and b) a four-dimensional "sphere" entering our familiar three dimensions?
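
(One toy way to chew on it, assuming the 4-D "sphere" drifts through our 3-D space at a steady speed: the visible cross-section of a 4-D ball of radius R is an ordinary ball of radius sqrt(R^2 - w^2), where w is how far its center sits from our space. So a regular ball keeps a constant size as it approaches, while the 4-D sphere would appear to inflate from nothing and then deflate again. A quick sketch:)

import math

# Toy model: a 4-D ball of radius R passes through our 3-D "slice" of space.
# w is the distance of its center from our slice; the visible cross-section is a
# 3-D ball of radius sqrt(R**2 - w**2), or nothing at all when |w| > R.
R = 1.0  # assumed radius, arbitrary units

def apparent_radius(w):
    return math.sqrt(R**2 - w**2) if abs(w) <= R else 0.0

for w in (-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5):
    print(f"center offset {w:+.1f}: visible radius {apparent_radius(w):.2f}")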

Tuesday, August 4, 2009

Teaching Experience, 1

Let it be known that I want to be a teacher. It isn't what I want to do when I first graduate, but it is what I want to become in the end. So, I try to explain things to people as well as keep up on my philosophy of teaching. The other week I had a good teaching experience, and have recently read Whitehead's "Aims of Education", which has me thinking about teaching in general. The experience went like so:

My brother visited me. He recently graduated from high school and is working some low-income jobs before he goes to college. In conversation he mentioned that he felt uncertain about evolution. I asked what about it, and specifically he thought that random mutation was an odd concept. Particularly, he found it difficult to believe that random mutation could create viable species over time, because he found the idea of "random" to be arbitrary, and he thought that if a species mutates it would be more likely to die. I explained what "random mutation" actually means -- not that it just happens, but that there are explanations for the mutations, and the causes are out of anyone's control and therefore labeled "random" -- and that he was completely correct in his assumption that a mutation is more likely than not to harm an animal. It is only in the rare cases where a mutation actually helps an organism survive and pass on its genetic code better than its peers that the mutation gets passed on. I also noted that there is more to speciation than random mutation, such as sexual selection or dramatic geographic separation, etc. Later we visited my campus' museum of rocks, and the museum of stuffed birds. We saw fossil records of now-extinct species, and stuffed animals of species still alive. Later we visited our town's zoo. Once we reached the zoo, my brother would comment about certain features of an animal, how those features helped that animal survive, and essentially out-compete other animals in certain ways.

So, in an afternoon, he had the groundwork of a theory given to him, and then he was able to make deductions from that theory about actual animals that he experienced. I'm sure Rousseau would be proud right now, but I'm a little uncertain about Dewey (of whom I am a great admirer). My brother obviously learned something, and started applying that knowledge to what he saw in the everyday world. Which is awesome -- and for a passing incident where I hadn't really prepared anything of the sort and we were just hanging out, very awesome from my perspective, since he grasped the foundations accepted by the scientific community. These are all important. However, as teaching should be about process, what I taught him was not the process of science. He learned how to draw logical conclusions from a given framework of knowledge. Which is, in fact, a fantastic skill, and of great use in the scientific method. However, there was no induction that occurred -- we didn't examine a large sampling of animals and induce the hypothesis of natural selection from them; rather, we walked amongst the animals looking for positive confirmation of a generally accepted hypothesis. This is all well and good, but it isn't science proper -- it's not teaching the scientific method, but rather the analytic method.

But then there's the practical side of things: most places have access to zoos. But do they have access to wildlands that easily show speciation? We could substitute photographs, but that is certainly not what Whitehead would agree to, as he seems to be a mixture of a Romantic and a Utilitarian educator. I don't know if I agree with him entirely, but I certainly saw something awesome occur while we visited the zoo -- the application of accepted theory, and the acceptance of accepted theory. It wasn't the whole method of science, but analysis is certainly an important part of science. Perhaps the scientific method, as a whole, could be spread out amongst the various sciences? Leave the Null Hypothesis to the physical sciences, since physical objects are in easy supply to any school budget?

Thursday, July 23, 2009

Arational Process

I am currently fascinated by the process by which one selects a hypothesis to test over other hypotheses, and by the fact that one can't test a hypothesis all by itself. Funnily enough, even that is a hypothesis.

We have some concept of the universe we want to test -- a hypothesis -- and we select it to test out of several others. It all seems to match up after observations are made, but that matching may only be us looking for positive reinforcement of our own idea. So you also set up a second hypothesis at the same time, the Null Hypothesis: the default claim that the effect you're proposing isn't there. Stating it forces you to say, in advance, what evidence would count against your initial hypothesis rather than for it.
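
(A toy illustration of how the Null Hypothesis earns its keep -- the coin and the 60-heads count are made up, not from any real experiment: take the null hypothesis "this coin is fair," then ask how often blind chance alone would produce a result at least as lopsided as the one you saw.)

import random

# Null hypothesis: the coin is fair (p = 0.5).
# Suppose we observed 60 heads in 100 flips (made-up numbers for illustration).
observed_heads, flips, trials = 60, 100, 20_000

random.seed(1)
at_least_as_extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    # count runs at least as far from 50/50 as our observation, in either direction
    if abs(heads - flips / 2) >= abs(observed_heads - flips / 2):
        at_least_as_extreme += 1

print(f"Fraction of fair-coin runs this lopsided or worse: {at_least_as_extreme / trials:.3f}")
# A small fraction means the data sit awkwardly with the null hypothesis -- but notice
# that nothing here tells you why you chose "the coin is biased" to test in the first place.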

But still, in the midst of this, there isn't a step by step process by which we choose a hypothesis -- there is no mechanism, no real way of knowing how to choose the best hypothesis. There are guidelines, but ultimately, science doesn't care how one chooses an idea to test. All science really is is a method for testing the "soundness" of an idea.

And even when the idea is validated, we often later recount, reform, and rephrase our understanding of the universe. And... well, that fascinates me. It drives home the point that science, while a rational process, is also an arational process at its heart. And it makes me wonder: are all bodies of knowledge similarly arational? Euclid didn't have a method for choosing his postulates. Aristotle didn't have a method for distinguishing between his "Causes" -- it was essentially just really smart people pulling stuff out of their ass. If not math, science, or philosophy, what is fully rational? Logic?

Thursday, July 9, 2009

Knowing your Audience

One ought to "Know Your Audience", as they say in all general composition classes. But doing this is harder than it sounds. It is a difficulty I've run into in tutoring, TA'ing, attempting to understand popularization with this blog, and just general explanation of science in conversation. I do not think this is as hard with fiction as it is with non-fiction, in particular, with science. There's a certain amount of distinction that comes with scientific understanding, but simultaneously, so long as we're not talking about the popularly held HARD sciences, or a specialization of the harder sciences, I've come across the notion that it ought to be "Common Sense for Common People" -- at least in my attempted explanations of chemistry.

Generally, I figure people know what atoms are, and molecules, and that the entire universe is composed of them. However, beyond that and the existence of the periodic table, I grow uncertain about the layman's knowledge, as anything past that point is what I learned in my college courses. I recall specifically going into an explanation about water one time, and thinking it necessary to go over the shapes of molecules, but this insulted the person I was talking to, who thought I was just trying to show off my knowledge, and who also thought that I, being the "Science Major of Awesome", was trying to belittle him, a "Liberal Arts" major (though I really, honestly wasn't -- I value knowledge as a whole, and find the competitive distinction between the two classes of knowledge trivial and silly).

So, through this experience I realized that one has to assume some knowledge, otherwise people take insult, and then they'll shut you out. But on the other side of what I perceive to be a thin line is assuming too much knowledge. You get blank looks, but no one wants to admit that they're ignorant, which I think is especially reinforced by this notion of "Common Sense"-ness that comes with the basic physical sciences (which I would define as anything not commonly associated with hardness -- i.e., not quantum mechanics, string theory, anything space-related, or anything drug-related). Everything else -- and here I'm speaking from my anecdotal and probably somewhat off perspective, mostly about material from my general chemistry and Physics I classes -- is treated as if one should just know it, when in fact there's no way one could know all of this "Common Sense" stuff without taking the time to learn it.

So, where do you err? It would seem that talking "above" people would be better, because then at least, if they are so inclined, they can look up the things they don't understand. This is under the assumption that the alternative shuts everyone off to learning, which isn't the best of assumptions -- I'm sure some people are patient with reviewing things they already know. It just seems a difficult problem to surmount, deciding which way to lean (as, ideally, you'll just find that happy medium on the thin line) when you "Know Your Audience", especially when your audience can have varying amounts of technical knowledge.

Because I try to avoid the "Ivory Tower" feel of science, and encourage people to keep up with scientific progress, I think leaning towards the "Insult your Audience" side is better. However, as I am often still learning things myself, and explaining them in order to better understand them, I think I unintentionally lean towards the "Mystify your Audience" approach.

Monday, June 29, 2009

Assumptions in Science

It is a hobby of mine to collect assumptions in the scientific method, as I have a personal interest in philosophy in general and the philosophy of science in particular. I try to keep them to a bare minimum, and to disprove them where I can, usually analytically. So, in this blog post, I am going to list some assumptions general to the scientific method that I do not think the method would work without, and give some commentary. I would appreciate input on the assumptions listed, as well as suggestions for further assumptions.

If there are Laws in nature, then those laws do not change with respect to time or space.

I don't think it's necessary to assume that Laws do, in fact, exist, because science is an inductive process grounded in empiricism. So, if Laws exist, we will observe them -- they are not assumed to exist. However, because of the way science builds on the work of others, and because it sometimes takes time to fully understand the limits of a theory (look at Newton), we assume that the Laws do not change from one time or place to another. They are, in this sense, eternal. I think it is better to state the assumption like this than to say that "Time Exists" or "Space Exists" or "Laws Exist", because those are things that are either difficult to define outside of empirical definitions, or things that we do not know to exist. If Time does not exist, then of course the laws won't change with respect to time, because a non-existent entity can't affect an existing one. Stating it this way also doesn't presuppose that we will actually find order in the universe. We hope to find order, sure, but we can't say that we will find it without performing an experiment.


Our Physical World is Deterministic

This is an assumption that I've come to question as of late. I say "Physical World" because science only deals with the physical world. Further, the scientific method does not deal with any other possible physical world, only the one in which we live, because that is the only one we can empirically verify -- and empirical verification is the highest form of verification in scientific inquiry. However, the term "Deterministic" requires a bit of elucidation.

If by "Deterministic" all we mean is "Physical Laws can not be violated" then I am fine with the assumption of Determinism as an assumption (or, really, that's more of a definition). However, philosophically speaking, Determinism has a much wider meaning. Generally it means that every event from the beginning of time was determined before all events occurred. This can be demonstrated with a Thought Experiment: Supposing we know the physical laws of a photon, and we are present to observe the beginnings of the universe, then we can determine, through a long series of calculations, exactly where the photon is going to go.

However, I do not think we assume Determinism. I think by calling Determinism a major assumption of the scientific method, we're putting the cart before the horse. Rather, the evidence amassed through the scientific method suggests that our physical universe is a deterministic one. However, even within the confines of Monism (the view that the universe does not have any parallel realities that act in different ways -- generally contrasted with Dualism, which is usually attributed to Descartes), and taking that Monism to be our physical universe, things aren't necessarily deterministic in the grand sense that everything is predetermined before it happens. Rather, the universe is deterministic in the sense that physical laws cannot be violated, so action is limited -- but only within the confines of physical laws, not completely Deterministic as the term is usually defined.



There is a Truthful connection between our mind and the Universe

This is a recent one I came upon, so I haven't thought about it as much. It basically assumes that science, in general, is coming closer to the truth about things, rather than the truth about the way we think about things. There is no logical reason for assuming this, but it seems to be working so far. It's the sort of assumption one makes if one either believes in Dualism, or is not purely empirical in the manner demonstrated by David Hume. Science, which uses Induction to understand data and Deduction to formulate hypotheses and tie several inductions together, does not deal only in pure empiricism. Rather, it hops between "types" of reasoning. These types are somewhat separate unto themselves and can be regarded as "methods to knowledge".

EDIT: Going through these posts again, I've realized that I've changed the most on this post. I now fully disagree with assumptions 1 and 2, and I think "assumption" 3 can be well argued for, and therefore doesn't count as an assumption -- though it may have to be argued for in a "philosophic" sense, so it may still count as an assumption within the domain of science, if one accepts that the two are distinctly different at this point.

Wednesday, June 3, 2009

Colloids

With summer comes employment, and with employment comes less learning, and with less learning comes less blogging. In addition, my summer studies are centering around broader philosophical studies than what is topical for this blog, so expect a decrease in posting for the summer months.

However, during a pub-crawl with my friends a few weeks ago, the subject of colloids came up. I poured a beer out too fast, and it foamed over. I knew foam to be a colloid, I knew alloys to be colloids, but I had no recollection of what distinguished a colloid from a solution. Both are mixtures in which one substance is dispersed fairly evenly throughout a medium. Generally, solutions are liquids that have solids dissolved in them, though they can also be a combination of liquids. Colloids don't have a specified state: in fact, the type of colloid depends upon the states of the dispersion medium (analogous to the solvent) and the thing being dispersed through that medium (analogous to the solute). So, really, in a prima facie way, it seems that "colloid" is just a more general terminology for "solution".

So I broke out my gen-chem book, and found out I was mistaken -- the difference between colloids and solutions is the size of the particles, whether single molecules or groups of them. In both, a particle is solvated, or completely surrounded by the dispersing medium. But in a colloid, the dispersed particles are much larger, roughly 1 × 10^3 pm to 1 × 10^6 pm (picometers) across -- for comparison, a typical small molecule is only a few hundred picometers across. Another common example of a colloid is found in soap: when soap molecules interact with grease, they embed into the grease while keeping one end of the soap molecule on the outside of the grease. The part embedded in the grease is attracted to oily things, and the part on the outside is attracted to water -- so running water will then push the grease along. This is a colloid composed of a clump of molecules attracted to each other, dispersed in another medium, and much larger than a molecule in a solution. Another good day-to-day example of colloids can be seen if you go for a walk in the park. If you've seen light streaming through the branches, that's the light scattering off dust in the air. In fact, this is a common way to distinguish between colloids and solutions, and is known as the Tyndall effect (this picture demonstrates a colloid of a solid in a liquid). The particles in a solution are so small that they don't noticeably scatter visible light, but the particles in a colloid are large enough to do so.
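
(To keep the size scales straight, here's a quick sketch using the rough gen-chem cutoffs quoted above, converted to nanometers; the example diameters are ballpark guesses for illustration, not measured values.)

# Rough size-based classification, using the cutoffs above: dissolved particles below
# about 1 nm (10^3 pm), colloidal particles from about 1 nm to 1000 nm (10^6 pm), and
# anything larger settling out as a plain suspension. Visible light spans roughly
# 400-700 nm, which is why colloidal particles are big enough to scatter it.

def classify(diameter_nm):
    if diameter_nm < 1:
        return "solution (too small to scatter visible light noticeably)"
    elif diameter_nm <= 1000:
        return "colloid (big enough to scatter visible light -- Tyndall effect)"
    else:
        return "suspension (will eventually settle out)"

examples = {  # ballpark diameters in nanometers, guessed for illustration
    "dissolved salt ion": 0.5,
    "soap clump around a bit of grease": 50,
    "fine dust caught in a sunbeam": 600,
    "grain of sand in water": 500000,
}

for name, d in examples.items():
    print(f"{name} (~{d} nm): {classify(d)}")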

So this brought me to another question regarding chemical philosophy: while we can observe the Tyndall effect to distinguish between colloids and solutions, do the sizes of the particles matter very much aside from the fact that they interact with visible light? I've done kinetics experiments revolving around a solution's ability to absorb light. So, even though we can't observe the interaction with our eyes, the molecules in a solution still interact with light, don't they? Is the terminology of solutions a bit too simplistic? After all, there is a point in solutions where you have to ask: what is the solvent and what is the solute? What if you have more than two liquids and a solid? Proteins can grow to reach colloidal sizes, and yet each is only one molecule solvated by water. Does that make our DNA colloidal, and what point does this distinction elucidate? After all, we could also just say that solutions with really big particles in them interact with the visible light spectrum and be done with it. But then we'd be expanding the term "solution" to include things that are clearly not mixed in the same way that salt water is mixed, such as mayonnaise, or beer foam -- though those are also slightly different from a solid block of, say, iron. And just because there seems to be this odd in-between zone where we're uncertain about how to classify and understand given mixtures, it seems rather ad hoc to just make up a term and rationalize a distinction. So, what's the point of colloids, and what terms should we use in distinguishing between types of mixtures, if indeed we ought to revise them at all? I'm going with the ambiguous and open-ended ending.

Saturday, May 16, 2009

Just one little drop...

A chemist stands, swirling an Erlenmeyer flask with a small sample of liquid in the bottom, holding the flask up to a buret that drips liquid into it. It drips at a rate of about 1 drop every 2 seconds, and oddly, though both liquids are clear, every time the liquid from the buret comes into contact with the liquid in the Erlenmeyer flask, a bright pink color appears, then disappears in the swirling motion of the chemicals. Then the chemist notices that the pink dissipates more slowly than before, so he slows the drip rate in order to observe the reaction of every drop. One drop is added. The chemist swirls, and the pink disappears. Another drop is added. The solution in the Erlenmeyer begins to turn fully pink, but with a little swirling, it slowly goes clear. Almost there, the chemist lets half a bead of liquid form on the buret tip, and stops the flow entirely. He pulls out a stirring rod, grabs the half-bead with it, then swirls it into the Erlenmeyer solution. The solution turns a mild shade of pink, and this time the color does not dissipate. The experiment is over -- but what just happened, and why was the chemist interested in it?

This was a description of a titration experiment. Titration is an analytical tool for determining how much of a chemical is dissolved in a known amount of liquid, and the bane of students in General Chemistry II. It combines the ideas of atoms, ions, dissolution, chemical reactions, and moles -- that's a lot of theory, and while I realize everyone knows what an atom is, I don't know if everyone will remember what an ion is, or why salt dissolves in water. Nonetheless, I don't think it's absolutely necessary to fully understand these concepts to get the gist of what a titration is all about.

Take some baking soda (do it!), and put some of it into two different cups -- I put 1/2 teaspoon into each of two coffee cups. Then fill one of the cups halfway with water, and the other only a quarter of the way up. Break out the household vinegar and a tablespoon, drop one tablespoon of vinegar into each cup, and watch what happens. Keep doing this. (This can get messy, so it might be best to do it in the sink.) It's a little rough, but when I did it at home, it seemed to illustrate the point -- you'll notice that the bubbles stop forming after roughly equal amounts of vinegar, even though one cup has twice the amount of water in it. This suggests that the water has nothing to do with the reaction -- just the baking soda and the vinegar.

That's because it's true -- the water is the medium through which the chemical reaction takes place, or in chemical parlance the "solvent". It's where the chemicals you're interested in float around, find each other, and react.

Just so you know, there is an interaction going on between the water and the chemicals you're interested in, but it has nothing to do with the reaction that makes the bubbles; it's called "dissolution". Basically, it's what happens when you put sugar in your milk, or when you mix salt and water: the water molecules pry apart all the molecules (or ions) packed together in a grain of sugar or salt and surround them, which is what renders them "invisible", since individual molecules are far too small to see.
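
(If you want to put rough numbers on the kitchen version, here's a sketch. Every quantity in it is an assumed ballpark figure -- household vinegar is typically somewhere around 5% acetic acid, and the mass of a half-teaspoon of baking soda is a guess -- but the 1:1 reaction between baking soda and acetic acid is the real bookkeeping.)

# Ballpark stoichiometry for the kitchen experiment.
# Reaction: NaHCO3 + CH3COOH -> CO2 + H2O + NaCH3COO  (1:1 mole ratio)
# Every quantity below is an assumed, order-of-magnitude value, not a measurement.
molar_mass_baking_soda = 84.0   # g/mol, NaHCO3
molar_mass_acetic_acid = 60.0   # g/mol, CH3COOH

baking_soda_g = 2.5             # guessed mass of ~1/2 teaspoon of baking soda
vinegar_strength = 0.05         # assumed ~5% acetic acid by mass
vinegar_density = 1.0           # g/mL, close enough for vinegar
tablespoon_mL = 15.0

moles_baking_soda = baking_soda_g / molar_mass_baking_soda
moles_acid_per_tbsp = tablespoon_mL * vinegar_density * vinegar_strength / molar_mass_acetic_acid

print(f"moles of baking soda:         {moles_baking_soda:.3f}")
print(f"moles of acid per tablespoon: {moles_acid_per_tbsp:.3f}")
print(f"tablespoons to use it all up: {moles_baking_soda / moles_acid_per_tbsp:.1f}")
# Note that the amount of water in the cup never shows up in the arithmetic --
# which is exactly what the two-cup experiment demonstrates.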

Now, back to titration -- that's basically what you performed in the kitchen! But there are a few differences. For starters, chemists use instruments known to give better accuracy and precision, because titration is primarily a quantitative experiment. Which leads to the next point of difference: normally, the liquid in the Erlenmeyer flask contains an unknown amount of the chemical you're interested in. You usually know that it's dissolved in water, or can easily tell, and you can tell what type of chemical you need to react with it using litmus paper. Further, when preparing the chemical that will react with the unknown, you have complete control: you can measure exactly how much of it you dissolve into water. Then you react a known amount of chemical with the unknown amount, and when the reaction stops happening, you know that all of the unknown has reacted -- and, from the reaction's recipe, that the unknown amount matches the amount of chemical you delivered to react with it.

One other little snag: how do you know when a chemical reaction is complete? Most of them aren't as dramatic as the reaction between baking soda (sodium bicarbonate) and vinegar (acetic acid). This is what the pink color was all about in the story above -- it comes from another chemical present in the mixture, an indicator. It doesn't interfere with the reaction between the two chemicals you're interested in, but it changes color wherever your prepared chemical is in excess. This way, if the pink disappears on swirling, you know that all of your prepared chemical has reacted with the unknown, and you continue. If the pink stays, even just a little bit, then you know you've just passed what is generally termed the "Equivalence Point" -- the point where the prepared chemical has exactly used up the unknown, which is also where the solution's pH swings dramatically from acidic to basic with the addition of even a tiny amount more.

This is why the chemist was taking so much care near the end of his experiment. It doesn't take much to accidentally go past the equivalence point. Even one little drop can add too much of your prepared chemical, and then your calculation of how much was added will be off just enough that you'll have only a rough idea of how much of the unknown was there, rather than a good one.

Now, this can get much more complex, but the general idea holds: You have some unknown amount of molecules floating around in some water, and you want to know how many molecules are there. So you throw in a chemical that will react with those molecules, and when they're done reacting, you do a little math and figure out the unknown -- pssh, who says chemistry is hard to understand? It's just colors, numbers, and bubbles.
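
(And the "little math" really is little. A minimal sketch, assuming the prepared chemical and the unknown react one-to-one; the numbers are made up for illustration.)

# Minimal titration arithmetic, assuming a 1:1 reaction between the prepared
# chemical (the titrant) and the unknown. All numbers are made-up illustration values.
titrant_concentration = 0.100   # mol/L -- you prepared this solution, so you know it
titrant_volume_used = 0.0235    # L delivered from the buret when the pink finally stuck
unknown_volume = 0.0250         # L of unknown solution in the Erlenmeyer flask

moles_titrant = titrant_concentration * titrant_volume_used
moles_unknown = moles_titrant                  # 1:1 stoichiometry assumed
unknown_concentration = moles_unknown / unknown_volume

print(f"moles of unknown that reacted: {moles_unknown:.5f} mol")
print(f"unknown concentration:         {unknown_concentration:.4f} mol/L")

# And the cost of one careless extra drop (a drop is roughly 0.05 mL):
error_per_drop = titrant_concentration * 0.00005 / unknown_volume
print(f"shift in the answer per extra drop: {error_per_drop:.4f} mol/L")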

Friday, May 8, 2009

Crystals

So, the end of semester is here, and summer looms at me with its tasty treats of indolence and self-education. With that, I'm looking back over the past semester and trying to think of the most important things I learned. While actually ranking knowledge by importance is, IMO, somewhat meaningless, it's good to think back about what you learned, and answering this question is an excellent catalyst for that type of intellectual probing. It's a toss-up between my physics course and my organic course. In my physics course, chemical theory suddenly clicked. Working problems starting from the basic SI units helped me understand what I was talking about when I was talking about the energy in a system, or when deriving the gas laws. Still, while theory is important to understand, and this helped clarify chemical theory for me, I have an even more difficult time connecting theory to experience -- in lab I often feel like I'm just pouring two liquids together, and shit happens, while in a lecture exam, some Grignard reagent attacks a ketone which is then protonated in a second step by a dilute acid. It's a world of imaginary particles and rationalized diagrams, while the lab is a world of color-changing liquids. I have to actively think about theory after an experience to connect the two together, so because of that, I think the most important discovery I made was a personal fascination with crystals.

In organic chemistry, in most labs, we would combine liquids to form crystals. Place a beaker containing a liquid with some chemical dissolved in it into an ice-water bath, and observe. Initially, pure liquid. Then, slowly, small specks of a solid begin to form, barely noticeable. Without poking or prodding, the specks grow larger, clumping together into uniform shapes, even though they form independently of each other. The exact shapes depend on the compound being made, but they are uniform. Look at common table salt -- each grain looks, more or less, like the same little cube. The same thing would happen in lab, only in different shapes, and all by virtue of being surrounded by something cold.

One day, while recrystallizing a certain product, I made the connection to atoms. Small particles slowly clumping together into a uniform shape -- just like the crystals. Essentially, watching crystals form gave me an experiential understanding of atoms and compounds. It was the closest I could get, with the naked eye, to seeing atoms and compounds interacting. Now, that naked-eye kind of experience isn't necessary for understanding a given scientific concept, but I think it helped my lab skills and my understanding of an experiment. Suddenly, liquids weren't just turning brown: simple sugars were reacting with copper ions in Benedict's reagent, double bonds were attacking bromine, and esters were being cleaved by basic solutions to form soap. I felt more confident in using the framework of theoretical knowledge to understand an experimental situation. I saw the atoms reacting with one another, forming new compounds, and settling into energetically favorable positions. I watched crystals grow, and beheld the beauty and simplicity of the atomic theory of matter.

It's a sublime moment for me when theory is understood in experiential and experimental terms, and I can sit back watching nature and feel like I actually understand what's going on. All the work involved in understanding the material -- the withdrawal from much of a social life, the neglect of my hobbies, the interruption of a normal sleep schedule, as well as the actual intellectual labor -- suddenly becomes worthwhile.

And to think I came back to school just to get a better job than working in a warehouse; I never thought I'd love science this much. But, there you have it -- I love crystals. I think they're the coolest things ever, and the purification of compounds by recrystallization has become my favorite process in lab. It reminds me of Primo Levi's description of distillation -- there's a certain elegance to the application of theory, and observing that elegance in action will always fascinate me.

Monday, May 4, 2009

The Simplest Solution

Due to end-of-semester busyness, I was not able to update over the weekend. But, I want to try and stay on my self-assigned blog schedule, and now I'm just studying for finals, so a day late is better than a week, right?

I already had the idea that science tried to break things apart. But, generally, I always thought this was to find the most basic understanding of the universe -- to be able to explain causation from understanding the way that everything works. I think this is still a part of it, but there's another part to it too.

The human mind can only compute so much.

To demonstrate, see the following physics problem (and if you've had Physics I before, I'm sure you've solved this problem before):

A 1 kg rock is suspended by a massless string from one end of a 1 meter measuring stick. What is the mass of the measuring stick if it is balanced by a support force at the .25 meter mark?

I always find pictures to be useful when solving physics problems, so the first things first:




Really, that's about as complex as I normally draw, just enough to help me visualize a given scenario. In this problem, there is the word "massless", which is just a fancy way of saying that the string connecting the rock to the meter stick doesn't have to be accounted for. So we're really just dealing with the rock, the meter stick, the fact that the meter stick isn't moving even with the rock attached to it, and the fact that the balancing point is located .25 meters away from where the rock is connected.

The main concept that needs to be applied here is Torque. Torque can be written in a number of ways, mathematically, but conceptually it's fairly simple, and related to my previous post talking about Force. When talking about Force, the examples and problems usually use cannon balls, footballs, or cars. That's because they easily relate to something called Translational Motion -- which is just the movement of an object from Point A to Point B. You throw a ball, it goes from your hand, Point A, to some spot on the ground, Point B, and there are a host of equations one can use to predict where that point will be based upon how hard you throw the ball, what angle you throw it at, and what it interacts with on the way there. These equations all have analogous equations that relate to another type of motion: Rotational Motion. Rotational motion is still motion, and the Laws of the Universe aren't any different for it -- but describing a spinning object point-by-point in terms of translational motion would be a mess, since the direction of travel changes constantly. It's much easier to measure the motion of spinning things by the angles they sweep through. So, really, it's still Point A to Point B motion, but instead of measuring things in meters, you measure things in Θ (theta), a generic symbol meaning "angle".

Torque is the rotational analogue to Force. But instead of F = ma, you have τ = Iα. τ is the Greek letter Tau, and it stands for Torque, which is rotational Force. α is the Greek letter alpha, and it stands for rotational acceleration (with units of radians/second^2, instead of meters/second^2).

This leaves "I". "I" stands for "Moment of Inertia", which does not explain itself as well as "Mass" does, so it requires a bit of explanation itself. Similarly to mass, if the Moment of Inertia is greater, it takes more Torque to gain a greater angular acceleration. But with rotational motion, you have to take more into account than the mass of an object. You also have to take into account how far away a mass is from the center of rotation. And, as you're actually dealing with a large number of particles all revolving around a single point (we'll call this point the "axle"), all of which may have different masses than each other, and most likely are at different lengths from the axle, this can easily get pretty complex. To be technically correct, you would have to find the distance a single particle is from the axle, find its mass, and compute its individual Moment of Inertia -- which is easy enough when you have only one particle. The equation for the Moment of Inertia of a single particle is "I=mr^2", where m is the mass and r is the distance from the axle. So, you square the distance of the particle from the axle, and multiply it by its mass. But when you're dealing with, say, a wheel, there are a lot of particles.

The way to tackle a problem like the one above is to realize: "Hey, this thing basically has an axle at .25 meters, and it has Torque being applied about that axle due to the Force of gravity. Even better, the thing isn't moving, so we know the Torques on both sides balance. So the Torque of the left side is equal to the Torque of the right side, and I'll set their equations equal to one another. I know the mass of the rock; if I can figure out the Moment of Inertia for the left side and the Moment of Inertia for the right side, then I can find the mass of the meter stick."

Or, mathematically speaking, Iα(left) = Iα(right) from τ = Iα

This is where I made a mistake in tackling the above problem. There is a way to get around having to add up each individual particle, and this simplification at least makes the moment of inertia calculable by hand. Take pulleys, for example (another favorite of physics problems). First, you assume that the particles are, more or less, the same mass, as the object is made of the same material -- a good assumption. Then, because a wheel is a regular shape whose outside is equidistant from the axle, you can say "Hey! That pulley's a hellalot like a cylinder!", and make another assumption that is more or less correct: that the pulley will behave as if it were a perfect cylinder. The equation for the Moment of Inertia of a perfect solid cylinder is well known, so you can just plug it into the above equation and work away. It's 1/2mr^2, in case you're curious.
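
(You can even watch the "pretend it's a cylinder" shortcut earn its keep: chop a uniform disk into thin rings, add up m*r^2 ring by ring, and compare with 1/2mr^2. A quick numerical check, not a derivation; the mass and radius are made up.)

# Brute-force check of the solid-cylinder formula I = (1/2)*M*R^2:
# slice a uniform disk into thin rings and sum m*r^2 over the rings.
M = 2.0       # total mass in kg (made up)
R = 0.30      # radius in m (made up)
n_rings = 10_000

I_sum = 0.0
for i in range(n_rings):
    r_inner = R * i / n_rings
    r_outer = R * (i + 1) / n_rings
    ring_mass = M * (r_outer**2 - r_inner**2) / R**2   # mass goes as the ring's area
    r_mid = 0.5 * (r_inner + r_outer)
    I_sum += ring_mass * r_mid**2

print(f"summed over rings: {I_sum:.6f} kg*m^2")
print(f"1/2 M R^2 formula: {0.5 * M * R**2:.6f} kg*m^2")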

The problem is, the object in the above problem is NOT a perfect cylinder, nor anywhere close to one. So my first instinct was to go back to the basic definition of "I", which works for any solid object (and that's what I'm dealing with): I = ∫ r^2 dm, integrating r squared over the object's whole mass distribution -- which, quite honestly, is a pain in the ass, at least for me. And actually, this is what I learned: it's not that there's anything wrong with taking that approach, but you want to simplify the problem to make it easy, digestible, and understandable. And there is such a solution to the above problem; I just didn't see it initially.

It deals with a concept known as "Center of Mass". The Center of Mass lets you treat a whole object as if it were a point particle: it's a single point, found mathematically, that moves as if all of the object's mass were concentrated there, so you can use the equations you normally use for a point -- which are easier to deal with than whole objects. It also deals with how you define your system. Before, I was looking at the system as "Left Side" and "Right Side", but that lumps part of the meter stick's mass -- itself the unknown -- into the left side, and I would have to use more algebra to dig the unknown back out. If instead I look at the problem as "Rock" and "Meter Stick", then I have less algebra to do.

So, applying the idea of "Center of Mass" to the above problem: first, the rock. Its weight is concentrated at the left end of the meter stick, thanks to the massless string, so I can treat the rock as a point at the left end of the stick. Then there's the meter stick. Assuming its mass is spread out more or less evenly (a good assumption), the stick has its Center of Mass at its center, the .50 meter mark. Relative to the axle we're dealing with, that puts the stick's point particle .25 meters to the right of the axle -- exactly as far as the rock's center of mass sits to the left of it. So the above picture can now be drawn as:


I put the circle and arrows in to emphasize that we're really dealing with Torque here, even though this isn't a wheel. What the picture shows is that the two torques point in opposite angular directions and act at equal distances from the axle. The beauty of this solution lies in the fact that I is now easy to find (just mr^2, because we're dealing with points). Also note that because the two torques cancel each other, there is no angular acceleration to deal with (which means, technically, there is no net torque -- the Torque equation above is ACTUALLY written as "the sum of the Torques", with a Σ before the τ to denote adding the Torques up, which differs slightly from the mathematical definition of a single Torque. I just wanted to tie the idea of Torque back to Force from before).

So, with no angular acceleration, the sum of the Torques is zero, which just means the rock's torque and the stick's torque balance: (mass of rock)(g)(.25 m) = (mass of stick)(g)(.25 m). The g's cancel, the distances cancel, and looking at the picture you can see why -- the two Centers of Mass are equally distant from the axle, so the masses of the two "point particles" must be the same. The meter stick has a mass of 1 kg.

Had I started with Center of Mass, I would've realized right away that the two points were equally distant from the axle and that the meter stick wasn't going anywhere, so the masses had to be equal, and I could have solved this in less than a minute.
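
(Here's the whole solution written out as arithmetic -- a sketch of the torque balance, treating each side as a point mass at its center of mass. The g cancels, but I leave it in to show the bookkeeping.)

# Torque balance for the meter-stick problem. The net torque about the support is zero,
# so the rock's torque and the stick's torque must cancel.
g = 9.81                   # m/s^2 (cancels out, kept for clarity)
rock_mass = 1.0            # kg, hanging at the 0 m end of the stick
support_position = 0.25    # m, where the stick balances
stick_center = 0.50        # m, center of mass of a uniform meter stick

r_rock = support_position - 0.0             # rock's lever arm: 0.25 m
r_stick = stick_center - support_position   # stick's lever arm: 0.25 m

# rock_mass * g * r_rock = stick_mass * g * r_stick  -->  solve for stick_mass
stick_mass = rock_mass * g * r_rock / (g * r_stick)
print(f"mass of the meter stick: {stick_mass:.2f} kg")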

And that's when it dawned on me -- we can really make things as complex as we want. But it isn't complexity that we're after in science. It's simplicity. We're dealing with a highly complex universe that takes time to understand, and there's no way we'd understand it if all we did was take the clunkiest path to understanding. We want to break things apart and find the root causes of phenomena, sure, but in addition to that we just want to be able to understand the phenomena themselves without going through a huge and sometimes hard-to-follow line of thinking (as I did above).

So, that's why you look for the simplest solution -- because we can only compute so much in our heads at once, and there's a certain satisfaction that comes with a simple explanation when that little bit explains a whole lot.