A response to Napier and Howard and the Younger Dryas impact hypothesis

As I indicated in an earlier blog post, I wrote an article for Skeptic magazine about the conjectures Graham Hancock makes in his book Magicians of the Gods (Conjuring Up a Lost Civilization: An Analysis of the Claims Made by Graham Hancock in Magicians of the Gods). I also appeared on the Joe Rogan Experience (JRE) as a guest of Michael Shermer to debate Graham Hancock and Randal Carlson. I never realized just how popular Hancock was and is until I read some of the downright hostile and vitriolic comments directed at Michael Shermer and me after we were on the JRE. I posted some of them on my Facebook page to amuse my friends and family. I should point out that although I don’t like being the target of hate speech, I firmly believe in the commenters’ freedom of expression. Having said this, I did take seriously a comment made by Bill Napier, an astronomer who is an honorary professor of astrobiology in the Center for Astrobiology at Cardiff University. Napier has published several articles on the Younger Dryas Impact Hypothesis (YDIH). He submitted a letter to a website called the Cosmic Tusk, which is owned by George Howard. Howard is one of the co-authors of the Firestone et al.[1] paper – the first article to postulate a comet impact about 13,000 years ago. It would appear that I ruffled both Howard’s and Napier’s feathers by questioning the scientific validity of the comet strike on the JRE. The letter is entitled Open Letter: Napier on Defant and the Joe Rogan Podcast. Of course, Hancock, who has an article posted on the site, wasted no time in referencing Howard’s comments and Napier’s letter to support the validity of his position.

Let me repeat what I stated in my Skeptic article. Although I remain open-minded about the possibility of a comet strike 12,900 years ago, I am extremely skeptical that it had an impact on the Clovis culture or the megafauna of North and South America. I also applaud the scientists who are proponents of the comet strike for their handling of the YDIH by publishing results in refereed scientific journals. Because Hancock has latched onto the hypothesis to explain how Atlantis was supposedly destroyed, I had to get into the details of the scientific debate on the YDIH. My position is that there is still a great amount of controversy in the literature surrounding the hypothesis, and I don’t wish to become an advocate for either side until the debate has played itself out in the literature. This is how science works. But there is little doubt in my mind, and apparently in Napier’s mind based on his comments, that the comet strike (if there was one) had nothing to do with lost civilizations.

The most important point I want to address is Napier’s claim that I incorrectly described how comets break up when I was on the JRE. He states: “Marc Defant told us that ‘the comet guys are getting hit pretty hard,’ but alas, he backed this up with a blatantly wrong description of comet evolution.” I did not specifically talk about comet evolution, nor did I try to show “the comet guys” were wrong. I simply suggested that there were problems with some of the postulates, including how comets break up. I stated: “It [Comet Shoemaker-Levy 9] broke up because of the gravitation of Jupiter. You would not expect them [comets] to break up entering into the [earth] atmosphere… and then separate.” In other words, it is difficult to explain with physics how a comet entering the atmosphere would disperse after it breaks up through a proposed airburst so that megafauna and the Clovis culture would be affected over two continents. Napier claims that “Marc and Michael seem to have been misled by unrefereed nonsense from a few people with no expertise or track record in cometary dynamics, and ignorant of its extensive, long-running literature.” Perhaps Napier misunderstood what I was saying. I carefully read the YDIH literature prior to writing my Skeptic paper and appearing on the JRE. The comments above were taken directly from a paper published in a refereed scientific journal (J. of Quaternary Science) by Holliday et al.[2] – some of the scientific opponents of Napier and Howard. I quote from the Holliday paper: “No physical mechanism is known to produce an airburst [an exploding comet] that would affect the entire continent… They [referencing Wittke et al.[3], who propose a model for the breakup of a comet] state that the impactor probably broke apart in solar orbit before encountering Earth, as do most comets ‘including Comet Shoemaker-Levy 9’. However SL9 was orbiting Jupiter, not the sun, when it broke apart, and, moreover, most comets do not break up in solar orbit. The reason that all the fragments of SL9 collided with Jupiter is because they were in orbit around Jupiter. The processes that led to the multiple impacts on Jupiter do not apply to comets in solar orbit or for approaches to Earth… Moreover, a spontaneous breakup in solar orbit, such as Comet 73P/Schwassmann-Wachmann would have had to be exquisitely timed in order for an expanding cloud of debris to strike the Earth. Dispersed impacts of multiple fragments would be at least 1000 times less frequent (probable) than the impact of a single nucleus, which is already an extraordinarily rare impact”.

Dark areas on Jupiter where Comet Shoemaker-Levy 9 hit.

It is true that I am not an expert on comet dynamics, but I know how to read and understand the scientific literature.  I am going to presume that Napier misunderstood my position and that although he may disagree with Holliday et al., my statements were based on their refereed scientific article.  In fact, Holliday et al. challenge Napier et al.[4] in their paper for suggesting that “passage through a cluster of fragments from a broken comet would probably ‘yield several impactors with energies up to 5,000 megatons, fully adequate for surface melting’… However, cometary impactors of this energy would be about 1 km in diameter and there is no physical mechanism to prevent them from striking the ground and forming 10-km diameter craters.”  Of course, I also addressed the lack of craters at the YD boundary when I was on the JRE.  I would only add that Napier might be better served by addressing Holliday et al. and other scientists directly involved with the YDIH in the scientific literature as opposed to publishing letters in the Cosmic Tusk – a site that clearly appeals to a nonscientific audience interested in the speculative and supernatural.

It does not appear that Napier had a hidden agenda in criticizing me, but I wish I could say the same for Howard. His comments are characteristic of some of the vitriol I received from the general public for simply disagreeing with Hancock – not the kind of professionalism one might expect from a person who touts himself as a scientist (and a skeptic of all things). I honestly think it would be unbefitting of me to address ad hominem and sarcastic attacks like “Defant showed zero humility and spoke of comets with the familiarity of a volcanologist.” However, I do think it is instructive for my students to see that “scientists” are human and sometimes resort to personal attacks rather than addressing the details of the science. Basically, Howard is engaging in silly turf wars in which credentials and opinions about behavior become more important than the scientific rigor of what is being said. As far as I can tell, Howard has a BA degree in political science from the University of North Carolina, so I am not sure he should be throwing rocks from his proverbial glass house. One wonders how he can make the claim that “He may or may not know it, but Defant’s received wisdom… is 1960’s [sic] grade school comet science [sic]” (I refer the reader to my discussion above in refutation of the few actual scientific criticisms Howard makes – basically a repeat of what Napier says).

Walter Alvarez, a structural geologist, and his father Luis Alvarez, a physicist (now deceased), were responsible for discovering the impact of a 10-km meteorite that hit the earth 66 million years ago and led to the eventual extinction of the dinosaurs. I wonder if Howard dismisses their discovery because they were not astronomers? Two geochemists also participated in the discovery, and geochemistry is my specialty (I am a professor of geochemistry and my PhD is specifically in that field) – I apply geochemistry to volcanic rocks to understand processes within the earth. Had Howard taken the time to research my background before he made his attacks, he would have found my book Voyage of Discovery: From the Big Bang to the Ice Age (about the history of the universe, earth, and life), my general science course entitled Origins, my video lectures, particularly on astronomy (including comets), and my TEDx talk on Why We are Alone in the Galaxy as some evidence that I might know a bit more about the subject than “grade school comet science”. I note that Howard’s contention in the Firestone et al.[5] paper that the Carolina Bays (his claimed area of research) were formed by comet strikes 12,900 years ago has been thoroughly debunked by new dating[6]. It would be unprofessional on my part to attribute his incorrect interpretation of the Carolina Bays to a lack of understanding because he is a political scientist, so I won’t.

I find it ironic that Howard claims to be a skeptic and yet publishes (and links to) a paper by Hancock who, among other things, touts his lost civilization: Hancock on the Younger Dryas Impact Hypothesis since 2007. Let’s not forget that Hancock claims we are in mortal danger until 2040 from another comet strike based on interpretations he makes from so-called asterisms at Göbekli Tepe. Apparently Hancock, a self-described reporter with an undergraduate degree in sociology (see the JRE starting at about 2:12:47, where we are discussing the Mayan calendar – he claims to be just a reporter), has more bona fides than a “volcanologist” in Howard’s mind. Hancock’s credentials notwithstanding, I hardly think Howard can consider himself a skeptic when he allows someone to tout fantastic, unsupported stories about lost civilizations on his website. I presume the Cosmic Tusk is more interested in giving a voice to people who agree with Howard, regardless of their far-fetched notions about the supernatural, than to scientists interested in physical reality.


[1] Firestone, R. B., et al., 2007. Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling. Proc. Natl. Acad. Sci., v. 104, 16016-16021.

[2] Holliday, V. T., et al., 2014. The Younger Dryas impact hypothesis: a cosmic catastrophe. J. Quaternary Sci., v. 29, 515-530.

[3] Wittke, J. H., et al., 2013. Evidence for deposition of 10 million tonnes of impact spherules across four continents 12,800 years ago. Proc. Natl. Acad. Sci., v. 110, E2088-E2097.

[4] Napier, W. A., et al., 2013. Reply to Boslough et al.: Decades of comet research counter their claims. Proc. Natl. Acad. Sci., v. 110, E4171.

[5] Firestone et al., 2007, op. cit.

[6] For example, Meltzer, D. J., et al., 2014. Chronological evidence fails to support claims for an isochronous widespread layer of cosmic impact indicators dated to 12,800 years ago. Proc. Natl. Acad. Sci., v. 111, E2162-E2171.

Conjuring Up a Lost Civilization: updating the debate with Graham Hancock

My article entitled Conjuring Up a Lost Civilization: An Analysis of the Claims Made by Graham Hancock in Magicians of the Gods[1] was published in Skeptic magazine on Sept. 14, 2017. It has been an interesting road to the final publication. Michael Shermer (editor of Skeptic magazine) asked me to join him on the Joe Rogan Experience podcast #961 and YouTube video cast (JRE) in a debate with Graham Hancock and geologist Randal Carlson, which I have linked (Hancock invited planetary scientist Malcolm LeCompte). The three-and-a-half-hour podcast has had millions of views on various platforms, including 1.4 million on YouTube. According to Shermer in an accompanying article in Skeptic, during the week of the show it was downloaded 120 million times, “putting him [Rogan] on par with the biggest talk show hosts on television” and making our podcast one of the most popular in the world (I became involved in the debate at 2:04:45).

Rogan is normally a neutral host, based on what I have seen of some of his earlier shows, but that was certainly not the case in this podcast. The early sections of the show pitted Rogan, Hancock, and Carlson against Shermer as Shermer tried to defend our skepticism toward Hancock’s book. I will let Shermer’s words in his article stand without further comment.

Shermer writes a monthly article for Scientific American. By sheer coincidence, his article on Hancock’s book was posted the day of the podcast. Shermer muses in his Skeptic article about how Hancock wrote him after the podcast vociferously objecting to “the rubbishing of his life’s work” in the Scientific American article. Shermer explains that he is truly sorry that Hancock felt personally attacked and that this was not his intention. I was also the target of Hancock’s outrage over my skepticism about the contentions he makes in his book. I posted an early version of the article here on my blog – I use my blog primarily as a way to show students how science and logic can be used to address controversial claims in various fields (e.g., GMOs, fracking, global warming, etc.). The published article went through many editorial changes, but because Skeptic holds the copyright on the final version, I only posted the article as originally submitted. Prior to the JRE podcast, Hancock found the original article and became visibly angry during the podcast over what I had written (I do not wish to inflame him further by asserting that he was angry, but Hancock used the word angry on Facebook after the podcast: “Judging from the Youtube comments to my recent appearance on the JRE I have been transformed into a hate figure because I expressed anger at the article posted online in January 2017 by Marc Defant”). Like Shermer, I had no intention of personally attacking Hancock and was apologetic that he misunderstood my intentions when I first came on the podcast. I cannot speak for Shermer, but I think skeptics, myself included, are somewhat befuddled by the acceptance of non-scientific claims by many in the general population. Hancock’s book is an international bestseller and will continue to influence uncritical minds for generations.
People like Carl Sagan and James Randi started taking people to task in the 1970s for making outrageous and unfounded claims (e.g., Erich von Däniken and his alien visitation scenario), which eventually led to the formation of skeptic societies. The goal is not to personally attack anyone but to demonstrate how faulty some of the more fantastic claims are. I deeply support Hancock’s right to his opinion, but I believe that, as scientists, we have a duty to dispute fantasy that masquerades as scientific inquiry – for example, von Däniken’s books.

I need to address a misunderstanding that Hancock has taken full advantage of and that I have been unable to comment on while I waited for the publication of my paper by Skeptic magazine. After the podcast, Shermer asked me to remove my blog post on Hancock because, obviously, he wanted people to read the article in Skeptic. I readily agreed to remove the article. In a weird twist, Hancock and many of the viewers of the podcast insisted that this demonstrated that I had made claims in the blog that were incorrect. That is not true. As I stated in the podcast, I stand by the claims I made about his book both on the podcast and in the Skeptic paper. In fact, I elaborated on what I said on the JRE in my Skeptic article and have added additional comments below that were not in the magazine article. My revised paper after the JRE was so long that Skeptic had to cut some of my arguments to avoid a book-sized article. Consequently, I elaborate below on several of the topics not addressed in the Skeptic paper, particularly because I never got a chance to express them in detail on the JRE.


The gravity of the situation

In Hancock’s discussion of the Incan archaeological site Sacsayhuaman in Peru, he recounts how he was guided around the ruins by a mystic named Jesus Gamarra, who thinks that the finely crafted rocks at the site were not worked by the Incas but by ancient people in a time when “gravity was lower,” so the huge blocks could be moved more easily. I would not pay much attention to this unscientific remark other than to note Hancock’s astonishing comment about this “theory”: “The lowered gravity is linked in his mind [Gamarra] with the notion that the earth once made much closer orbits around the sun—an orbit of 225 days and an orbit of 260 days—before settling in to its present 365 day path. He could be right [my emphasis]; new science suggests that the orbits of the planets are not fixed and stable.” Hancock does not seem to realize that the gravitational pull on an object at the surface of the earth has practically nothing to do with the period of the earth’s orbit around the sun. How could Gamarra be right? As Newton discovered in the 1600s, the gravitational pull between two objects, say the earth and a large rock, is proportional to the product of their masses and inversely proportional to the square of the distance between them.

In fairness to Hancock, I quote his remarks from the JRE: “He [referencing me] just presents me as buying what Jesus Gamarra says. If that’s the standard that you are going to have in Skeptic magazine you have a serious problem… I do say he [Gamarra] may be right, but I don’t say he is right. I say this is not my interest.” I attempted to explain that my focus was not on whether he agreed with Gamarra but on his statement that rocks might not have been as heavy in the past because the “period of the orbit of the earth around the sun” changed. He completely deflected my contention that changing the orbit around the sun has virtually nothing to do with how heavy rocks are on earth. I believe this misunderstanding of the basic laws of gravity is important – it makes his other grandiose contentions concerning science suspect.

The superb construction at Sacsayhuaman in Peru.

I have attempted to see if the statement could be read any differently (he skated around the issue in the debate), but I fail to understand how. Hancock states that Gamarra may be correct in his contention that the rocks were lighter in the past because the orbit of the earth around the sun may have been 225 days (or 260 days). Instead of pointing out that a 225-day orbit around the sun in 900 AD, when Sacsayhuaman was built, seems silly (calendars have been around for more than 2000 years, continuously demonstrating 365-day years), Hancock states “he may be right”. He may be right that the earth in 900 AD was where the orbit of Venus is now? But even if the earth in some mystical way was on a 225-day orbit, a quick back-of-the-envelope calculation using Newton’s equation of gravity demonstrates that the weight of a rock on earth would not be significantly affected by the gravitational pull of the sun.
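That back-of-the-envelope calculation is easy to sketch. Using textbook values for the gravitational constant and the masses of the earth and sun (the numbers below are standard physical constants, not figures from Hancock’s book, and the 0.72 AU distance is simply the Venus-like orbit a 225-day year would imply), the sun’s pull on a rock at the earth’s surface is tiny compared with the earth’s own pull:

```python
# Newton's law of gravitation: a = G * M / r^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m
M_SUN = 1.989e30     # mass of the sun, kg
AU = 1.496e11        # present earth-sun distance, m

def grav_accel(mass_kg, distance_m):
    """Gravitational acceleration toward a body of the given mass."""
    return G * mass_kg / distance_m**2

g_earth = grav_accel(M_EARTH, R_EARTH)       # earth's surface gravity, ~9.8 m/s^2
a_sun_now = grav_accel(M_SUN, AU)            # sun's pull on a rock at 1 AU
a_sun_225 = grav_accel(M_SUN, 0.72 * AU)     # sun's pull at a 225-day (Venus-like) orbit

print(f"earth's surface gravity: {g_earth:.2f} m/s^2")
print(f"sun's pull at 1 AU:      {a_sun_now:.4f} m/s^2")
print(f"sun's pull at 0.72 AU:   {a_sun_225:.4f} m/s^2")
print(f"ratio at 0.72 AU:        {a_sun_225 / g_earth:.5f}")
```

Even at Venus’s distance, the sun’s pull works out to roughly a tenth of one percent of the earth’s surface gravity, and since the earth is in free fall around the sun, even that tiny pull does not register as weight on a rock. The weight of the blocks at Sacsayhuaman is set by the earth, not by where the earth happens to orbit.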


Moai and Easter Island

During the JRE, Shermer brought up the moai, the megaliths on Easter Island. More than 900 moai were sculpted between 1250 and 1500 AD, hauled to their present locations, and erected upright. One of them weighs over 82 tons. Many of the moai remain in the main quarry on the island – Rano Raraku. Hancock never addressed Shermer’s point that huge megaliths can be made by people with limited technological skills. I later reemphasized to Hancock that the hunter-gatherers on Easter Island had no problem constructing megaliths larger than those at Göbekli Tepe with stone tools. Why should we call upon the involvement of advanced civilizations? But Hancock again deflected, surprised that I would suggest that hunter-gatherers lived on Easter Island: “What’s there to hunt and gather on a tiny island – have you been to Easter Island? I have, six times, and you can walk across it in three hours. What’s there to hunt and gather on that?”

Moai of Easter Island.

Apparently what he did not realize is that when the Polynesians (the Rapa Nui people) first arrived on Easter Island, probably around 300 AD, the island was a lush tropical paradise. Jared Diamond, in his book Collapse[2], proposed that the Rapa Nui committed ecocide, which destroyed the island’s lush ecosystem and forced the founders’ descendants to depend on a meager agriculture. According to Diamond, the increased population of the Rapa Nui on Easter Island led to deforestation and the overkill of native species, decimating the ecosystem. The point that Shermer and I were trying to emphasize is that hunter-gatherers can make sophisticated megaliths without the help of “Magicians”. Hancock postulates in his book that Magicians of the Gods were needed at Easter Island as well. No explanation has ever been given as to how these Magicians got to Easter Island, or why, but Hancock believes they were required to help the native population construct the moai.

But alas, the argument is not specifically about the hunter-gatherers on Easter Island. It is about whether it takes an “advanced civilization” to create large hand-carved megaliths – hand-carved with stone tools, I might add! Not a single professional archaeologist working on Easter Island has ever suggested that an advanced civilization was required. They all agree the moai were quarried locally with stone tools by the indigenous population[3].


The End of the World

In his 1995 book Fingerprints of the Gods[4], Hancock suggests that the end of the world is approaching, all neatly summarized in a section at the end of the book entitled “The End of the World”. He particularly references December 23, 2012 (from the Mayan calendar) as a likely date for the end, but in Magicians he adroitly does not dwell on why these doomsday scenarios never came to pass. He does an about-face and tells us he never meant for these dates to be taken literally, sheepishly stating in Magicians: “it is important to be clear that in signaling the decades around 2012 as the end of a great cycle, the Maya were not speaking of the end of the world, as such, but rather of the end of an age.” I am reminded of Ghostbusters II, in which Peter Venkman (Bill Murray) interviews Milt Angland (Kevin Dunn) about his new book The End of the World on Venkman’s television show World of Psychics. When Angland announces that the end of the world will come on New Year’s Eve at midnight, Venkman astutely notes that choosing a date so close to the present is not wise for book sales. After Angland gives a painful “fugue state” response as to why the date must be correct, Venkman quips: “For your sake, I hope you’re right.” My advice to Hancock – pick an end-of-the-world date that does not occur in your lifetime!

In response to this early version of the article, Hancock posted the following comment in his defense on his Facebook page: “Are my views not allowed to evolve, then, or react to new information? Must I never stray from the line I took in a 1995 book? If I did so would that be ‘scientific’?”  Of course his views are allowed to change.  But predicting the end of the world from the Mayan calendar is not scientific.  One might expect that being wrong on such an important event as the end of the world would chasten him.  Not so!  After the end of the world did not occur in 2012, he continued down the same road making this comment in Magicians:

“It is possible, indeed highly probable, that we are not done with the comet that changed the face of the earth between 10,800 BC and 9600 BC.  To be quite clear,… some suspect that ‘the return of the Phoenix’ will take place in our own time — indeed by or before the year 2040 — and there is danger that one of the objects in its debris stream may be as much as 30 kilometers (18.6 miles) in diameter.  A collision with such a large cometary fragment would, at the very least, mean the end of civilization as we know it, and perhaps even the end of all human life on this planet.  Its consequences would be orders of magnitude more devastating than the Younger Dryas impacts 12,800 years ago that left us as a species with amnesia, obliged to begin again like children with no memory of what went before.”

First, Hancock assumes that a comet struck about 13,000 years ago – still a hotly debated research topic. Second, as I discuss in my Skeptic article, he contends that the precession of the earth tells him that we will be struck by another large comet some time before 2040. As I explained in Skeptic, precession has nothing to do with cyclical comet collisions. Predicting the end of the world seems to fascinate his audience, but where is the science? It is complete speculation, equivalent to soothsaying (I suspect this will irritate Hancock, but I want those who are willing to listen to know that predicting the end of the world through speculation has nothing to do with science – nothing personal here).


[1] Hancock, G., 2015. Magicians of the Gods: The Forgotten Wisdom of Earth’s Lost Civilization: St. Martin’s Press.

[2] Diamond, J., 2005. Collapse: How Societies Choose to Fail or Succeed: Viking Press, 592p.

[3] See for example Van Tilburg, J., 1994. Easter Island: Archaeology, Ecology and Culture, Smithsonian Institution Press, 191p.

[4] Hancock, G., 1995. Fingerprints of the Gods: The Evidence of Earth’s Lost Civilization: Three Rivers Press.


The Psychological Impact of Devastating Events and the Misuse of Terrorism

At some time between 5 and 7 pm on November 13, 1985, the mayor of Armero, Colombia, Ramon Antonio Rodriguez, was notified to evacuate the city because of the potential for an imminent mudflow – what geologists call a lahar – from recent activity on nearby Nevado del Ruiz volcano. Rodriguez was aware of the lahars that had destroyed the town in 1595 and 1845, of the hazard map created in October showing the threat to Armero, and of the fact that Armero sat on a delta at the base of the volcano built up from centuries of previous lahars. And yet reports document that the mayor told the citizens of Armero to remain in their homes, where they would be safe. At 11:30 that night, a thunderous wall of mud 40 meters high swept through the town at 40 mph, killing 23,000 residents and burying the city and surrounding land[1] [2].


The volcanic hazards map of Nevado del Ruiz, completed by Italian volcanologists and first published about a month before Armero was destroyed (from Voight[3]).


Ramon Rodriguez died that night, and we are left to wonder what he was thinking. We should not be too hard on him – he was faced with a difficult decision: evacuate a city of 28,700 and risk the political ramifications if he was wrong, or calm the people about an almost inconceivable event. As Nobel Prize laureate and renowned psychologist Daniel Kahneman notes in his book Thinking, Fast and Slow[4]: “when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” In other words, there was no way Rodriguez could recognize from his limited experience how devastating a massive lahar could be. In the parlance of psychology, Rodriguez had no way of updating his personal model of reality with the magnitude of destruction from lahars. I suspect he substituted what he knew – the bad political ramifications of evacuation if nothing happened – for what he did not. I was there four days after the event, and although as a volcanologist I have seen hundreds of ancient lahar deposits, nothing prepared me for what I saw at Armero – an existential experience of epic proportions.

It’s instructive to see how substitution can work.  Students were asked the following questions:

How happy are you these days?

How many dates did you have last month?

Surprisingly, no correlation was found between the answers, suggesting that dating was not the foremost factor in students’ happiness. Another group of students was asked the same questions but in the reverse order:

How many dates did you have last month?

How happy are you these days?

There was a huge and significant correlation between the answers. Happiness is a difficult question to answer, if it can be answered at all. But if students are primed with a question about dating and then asked about their happiness, they can equate the two. It is classic substitution. Kahneman states: “the students who had just been asked about their dating did not need to think hard because they already had in their mind an answer to a related question: how happy they were with their love life. They substituted the question to which they had a readymade answer for the question they were asked.”

The heuristic is important. People often make judgments by consulting their feelings or emotions. They may ask “How do I feel about it?” rather than “What do I think about it?”, substituting emotion for cognition. Combine this with the results of experiments showing that we tend to “underweight” rare events we have not experienced, and disaster becomes almost inevitable. How many Wall Street advisers foresaw the financial debacle of 2008? How many FBI agents warned of an imminent attack on the World Trade Center in 2001? And even when the experts had targeted Armero as a site of destruction, it was difficult for Rodriguez to sound the alarm.


Why terrorism?

Almost the opposite happens after humans are confronted with a horrifying incident. Kahneman notes that over roughly a three-year period beginning in 2001, 236 Israeli citizens were killed in suicide bombings of buses. Given that approximately 1.3 million people ride Israeli buses on any given day, the probability of dying in a bus attack was incredibly small. But that is not how Israelis saw the risk. They avoided buses, and when they were forced to ride, many spent apprehensive moments surveying other riders for packages that might conceal bombs.

We know at some intellectual level that terrorists commit atrocities like bus bombings to sow fear into the fabric of a nation. They win if they disrupt a nation’s sense of security. But that still does not prevent us from associating potential harm with buses after a spate of bus attacks, even when there are much greater threats (e.g., car accidents). Kahneman’s research allows him to note: “An extremely vivid image of death and damage, constantly reinforced by media attention and frequent conversations, becomes highly accessible [in our memories], especially if it is associated with a specific situation such as the sight of a bus. The emotional arousal is associative, automatic, and uncontrolled, and it produces an impulse for protective action.”

You can see how vivid images of terror might have been important for survival in the hunter-gatherer societies where genetic selection was honed. Watching a saber-toothed tiger attack and kill a fellow hunter would certainly seem to fall under the heading “life-altering event” – much more so than simply being told of an attack. And perhaps the resulting trauma and anxiety predisposed the spared hunter to avoid a similar attack – potentially selecting for, and passing on genetically, the neural circuitry that produced the emotional response. Post-traumatic stress disorder may be an avoidance strategy – nature’s way of emphasizing the significance of a terrible event.

It is easy to understand how terrorism foments fear and a demand for action. It also presents an opportunity for various constituencies of the government to initiate actions that supposedly protect us while taking away freedoms, and to justify immense spending under the guise of counterterrorism. Files released by Edward Snowden in 2013 identify a so-called “black budget” of $53.6 billion that targets terrorism. As of 2013, the United States had spent more than $500 billion since 9/11 fighting terrorism[5] – that’s over $2,000 per adult US citizen. Perhaps more importantly, The Heritage Foundation has documented 60 “Islamist-inspired terrorist plots against the United States” thwarted since 9/11[6]. That may seem like a significant number, but as we shall see, many of these plots were hatched by mentally incompetent men lured into sting operations by the FBI. And the cost turns out to be a staggering $8.3 billion in taxpayer dollars per plot quashed. Yet since the 9/11 terrorist attack that killed almost 3,000 people, only 97 victims have been murdered by Muslim extremists. Most of the deaths occurred in four well-publicized attacks: the 2009 Fort Hood shooting (13 killed), the 2013 Boston Marathon bombings and subsequent shootout (5 killed, including police), the 2015 San Bernardino attack (14 killed), and the 2016 Orlando nightclub shooting (49 killed)[7]. Your odds of being killed by a foreign-born terrorist over your lifetime are 1 in 45,808 – odds that fall between being killed by a tornado (1 in 60,000) and being killed by an animal (1 in 30,167)[8]. And shouldn’t we be asking why counterterrorism funds were hidden in “black budgets” we would never have known about were it not for the efforts of Snowden? I suspect that the government does not want us to know the astronomical amounts of money being spent to “protect” us from a handful of terrorist attacks.
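The arithmetic behind those per-plot and per-citizen figures is simple to check. The adult-population number below (roughly 245 million circa 2013) is my own assumption for the sake of the calculation, not a figure from the sources cited:

```python
# Rough arithmetic behind the counterterrorism figures quoted above.
TOTAL_SPENT = 500e9       # dollars spent fighting terrorism since 9/11 (as of 2013)
PLOTS_THWARTED = 60       # Heritage Foundation count of thwarted plots
US_ADULTS = 245e6         # assumed adult US population circa 2013 (my estimate)

cost_per_plot = TOTAL_SPENT / PLOTS_THWARTED
cost_per_adult = TOTAL_SPENT / US_ADULTS

print(f"cost per thwarted plot: ${cost_per_plot / 1e9:.1f} billion")  # ~$8.3 billion
print(f"cost per US adult:      ${cost_per_adult:,.0f}")              # just over $2,000
```

Whatever adult-population figure one prefers, the per-plot cost depends only on the spending total and the plot count, and it lands at $8.3 billion per quashed plot.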

I don’t wish to make light of those who have died in terrorist attacks, or of serious efforts by various segments of the constabulary to prevent attacks, but the number of deaths from terrorism pales in comparison to other types of violent death.  In the United States, your chances of dying in a traffic accident are astronomically higher than your chances of dying in a terrorist attack.  There were more than 30,000 traffic-related deaths in 2015 and more in 2016[9], yet there is no comparable government initiative to fight traffic accidents.  If logic ruled our government’s decision making, some of that “terrorist” funding would go toward highway safety.  We are confusing priorities because of the “emotional arousal” terrorist attacks create – which is precisely the reason our enemies use them.

In case you think the resources spent to prevent another 9/11 are protecting America from copious similar attacks, think again.  Once the major “easy targets” were secured, the FBI went to Herculean efforts to find and arrest terrorists who may not be who the FBI says they are.  The FBI now spends more on terrorism than on traditional forms of crime such as financial corruption and organized crime.  As of 2015, the FBI had made about 175 terrorism-related arrests through its network of more than 15,000 informants (up from 1,500 in 1975): far and away the largest network of domestic spies in history.  These informants can make as much as $250,000 for every terrorist case brought to the FBI[10].  According to investigative reporter Trevor Aaronson, many of the arrests of “mentally ill or economically desperate people” involve informants who are criminals and con men themselves (Aaronson has also documented cases where the FBI has used the threat of placement on the no-fly list and other unseemly leverage, including the threat of prosecution, to “encourage” Muslims to become informants).  In several cases, the FBI, through its informants, has provided weapons and cash to mentally challenged or disturbed individuals to encourage them to plot terrorist attacks – only to arrest them once they take the bait.  In other cases, these so-called radicals went forward with an attempted attack because they wanted the money offered by the FBI informants, not because they were committed to jihad.[11]  After the arrests, the FBI, in a fashion reminiscent of Eliot Ness, announces the thwarting of another imminent terrorist attack, which helps assure further funding from Congress.  Isn’t this precisely what Osama bin Laden would have hoped for — massive resources being thrown at ghost terrorists?

As Aaronson points out, directly after 9/11 the G-men expected a second wave of terrorism from so-called Muslim sleeper cells.  But the intense hunt for Al Qaeda leaders and the wars in Afghanistan and Iraq discombobulated any central planning within potential radical Muslim networks, and the second wave never materialized.  The FBI adapted by focusing on stopping lone-wolf attacks by Muslims within the United States radicalized by the internet – the so-called homegrown radical terrorists.  The FBI routinely uses its sting operations on a smorgasbord of hapless individuals (many of the cases border on entrapment) to justify large budget requests to Congress.  It is a vicious cycle, and it dupes an uncritical public.  On top of everything else, while the vast array of informants searches for primarily incompetent, often homeless or mentally challenged persons, the small number of real terrorists (those capable of mounting serious attacks) sometimes go undetected and strike targets such as San Bernardino or Boston.

Aaronson’s book is filled with examples of mentally handicapped, homeless, or mentally ill Muslims who were targeted by the FBI as potential lone-wolf terrorists.  The FBI gets informants to contact them, supplies them with all the necessary paraphernalia for an attack, and then arrests them once they “pull the trigger” on a planted fake bomb.  The recent arrest and sentencing of John T. Booker serves as an example of this continuing strategy.

Booker, a Topeka, Kansas, native, converted to Islam when he was a senior in high school.  In March 2014, when he was 20, Booker posted on Facebook that he wanted to wage jihad on the United States.  Not surprisingly, the post caught the attention of the FBI, which had its informants make contact (no public information exists as to how much the informants were paid).  As is typical in lone-wolf setups, the FBI informants provided Booker with an inert bomb and materials and even drove him to Fort Riley, where he wished to launch his attack.  Once he attempted to detonate the fake bomb, the FBI arrested him[12].

Booker’s public defender stated that he thought his client was mentally ill, but eventually Booker cut a plea bargain that spared him from the death penalty.  He pled guilty on February 3, 2016, to charges of “attempted use of a weapon of mass destruction” and “attempted destruction of government property by explosion.”  He was sentenced to 30 years.

The question constantly asked by Aaronson is “would people like Booker have actually committed a crime if they were not enabled by the FBI?”  There is little doubt that a seemingly mentally ill Booker felt anger and expressed that anger.  But was he capable of doing harm without enablers?  We will never know, but that has not kept various agencies of the government from promoting Booker as an extreme danger to society.  Acting Assistant Attorney General Tom Beall stated: “Violent extremism is a threat to America and all its people…  Our goal is to prevent violent extremists and their supporters from inspiring, financing or carrying out acts of violence.”  Violent extremism?  Supporters?  One thing seems certain: when the FBI goes to Congress with its hands out, Booker will be morphed into as ruthless and cunning an adversary as the FBI can portray him, to make sure it is clear they are on the job.  Not only will taxpayers foot the bill for the FBI’s continued, seemingly exorbitant funding, but they will also pay to incarcerate these lone-wolf targets whether they are truly dangerous or not.

Michael German, a 16-year veteran of the FBI, told Aaronson, “If you are the terrorism agent in a benign Midwestern city, and there is no terrorism problem, you don’t get to say, ‘There’s no terrorism problem here.’  You still have to have informants and produce some evidence you’re doing something.”  Terrorist funding requires terrorist arrests.


Alleged lone-wolf terrorist John T. Booker.


Did bin Laden win?

I am convinced that somewhere within the archives of Congress there is a law that demands a catchy acronym for any significant Act of Congress, thus explaining the stupefying name “Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act,” which we know as the USA PATRIOT Act.  bin Laden must have been the happiest terrorist alive when the act was rushed through Congress and signed by George W. Bush on October 26, 2001!  What could delight a terrorist more than seeing a free democracy overreact to 9/11 by diminishing the republic’s freedoms?  I am not just alluding to the inconvenience of waiting in massive lines at the airport to have your body scanned or sending shoes through x-ray machines so we can feel secure knowing there is no shoe bomber on our jets.  No, we can thank the Patriot Act for enabling the National Security Agency (NSA) to establish its massive phone data collection activities (which supposedly were stopped in 2015 after the uproar caused by Snowden exposing the secret operation – now the phone companies collect the data).

In late 2013, the New York Times reported that the CIA, under the guise of the Patriot Act, was collecting data on financial transactions both into and out of the United States[13].  The Times suggested that the secret spying operation offered “evidence that the extent of government data collection programs is not fully known and that the national debate over privacy and security may be incomplete.”

You need to know that prior to 9/11, deciding to investigate anyone required credible evidence that the suspect was involved in a crime.  Not so anymore.  After the Patriot Act, the FBI could essentially profile on the basis of religious affiliation to develop persons of interest.  But as I have emphasized, the targets are oftentimes petty miscreants.  Aaronson states: “While the cases involve plots that sound dangerous – about bombing skyscrapers and synagogues and crowded public squares – if you dig deeper, you see that many of the government’s alleged terrorists seem hopeless; they are almost always young and down on their luck, penniless, without much promise in their lives, easily susceptible to a strong-willed informant’s influence.  They’re often times blustery punks.”

It seems to me that one of the goals of America’s war on terrorism should be to protect the freedoms guaranteed by our democracy — that is, to take us back to a time prior to 9/11, when we were not saddled with such things as a Patriot Act that spies on citizens and when we did not spend massive amounts of money on secret programs.  The goal definitely should not be to use terrorism as an opportunity to build up secret administrative budgets in the search for “ghost” terrorists while deemphasizing other forms of crime.  Spending to protect American citizens should be commensurate with the threat – not the perceived threat, as Kahneman and Aaronson have so eloquently described.

[1] Armero tragedy https://en.wikipedia.org/wiki/Armero_tragedy

[2] Voight, B., 1990, The 1985 Nevado del Ruiz volcano catastrophe: anatomy and retrospection: J. Volcanol Geotherm. Res., v. 42, p. 151-188.

[3] Op cit. Voight 1990

[4] Kahneman, D., 2013, Thinking Fast and Slow: Farrar, Straus and Giroux, 499p.

[5] Gellman, B. and Miller, G., 2013, ‘Black Budget’ summary details U.S. spy network’s successes, failures and objectives: Washington Post, Aug. 29.

[6] 60-terrorist plots since 9/11: http://www.heritage.org/terrorism/report/60-terrorist-plots-911-continued-lessons-domestic-counterterrorism#_ftn1

[7] Terrorism in the United States: https://en.wikipedia.org/wiki/Terrorism_in_the_United_States#2000.E2.80.9309

[8] How likely are foreign terrorists to kill Americans: http://www.businessinsider.com/death-risk-statistics-terrorism-disease-accidents-2017-1

[9] https://www.archives.gov/research/military/vietnam-war/casualty-statistics.html

[10] Trevor Aaronson TED talk, 2015, https://www.ted.com/talks/trevor_aaronson_how_this_fbi_strategy_is_actually_creating_us_based_terrorists

[11] Aaronson, T., 2015, How the FBI strategy is actively creating US-based terrorists, TED, https://www.ted.com/talks/trevor_aaronson_how_this_fbi_strategy_is_actually_creating_us_based_terrorists#t-53317

[12] The Kansas City Star, 2017, http://www.kansascity.com/news/local/crime/article163344123.html

[13] New York Times, 2013, http://www.nytimes.com/2013/11/15/us/cia-collecting-data-on-international-money-transfers-officials-say.html

Science versus the “eye test” in selecting the college football playoff teams

Note: this paper will be published in Skeptic Magazine in March 2017

In case you are not familiar with how college football determines the four teams that are picked to contend for the national championship, I refer you to the Selection Committee Protocol, which is a guide on how the committee chooses the four playoff teams at the end of the regular season and after the league championship games.  The first words of the protocol are telling: “Ranking football teams is an art, not a science.”  The protocol specifically calls into question any rigorous mathematical approach: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected.”  For those who are not aficionados of the college football selection, the previous selection process used computer polls as one third of the formula to determine the final two teams (before the four-team process was initiated in 2014 – the other two thirds of the formula came from the Associated Press and Harris polls).  What I hope to show in this essay is that 1) humans are primed with natural biases (whether they realize it or not) and therefore are not effective at simultaneously considering the huge amounts of data available, and 2) computer algorithms are spectacularly successful at analyzing massive databases, sometimes called “deep data,” to ascertain the best choices for the playoff system.

So what are the guidelines that instruct the 13-member college playoff panel?  They are somewhat obvious and include “conference championship wins, strength of schedule, head-to-head competition, comparative outcomes of common opponents, and other relevant factors such as key injuries that may have affected a team’s performance during the season or likely will affect its postseason performance.”  I hasten to point out that strength of schedule can only be determined by “nuanced mathematical” rigor.  The guidelines fall into two categories: facts (e.g., conference championships) and opinions (e.g., whether a key injury will impact team performance).  My argument is to eliminate the opinions and choose the final four teams in the most rational and unbiased fashion — that is, use computer algorithms.  Exceptions to the computer rankings could be made by the committee when facts like conference championships play an important role.  For example, if Notre Dame and Florida State University each had one loss at the end of the season but the computer rankings had Notre Dame above FSU, the committee might override the computer rankings and choose FSU over ND if FSU won the Atlantic Coast Conference championship (ND is not in a conference and therefore cannot win conference championships).  Let me spend some time justifying my proposed selection process.

I have created a table below which shows the top 15 teams in the final polls of the 2015 season along with their won-loss records for reference purposes.  The first two polls are the AP and Coaches polls and the final two are the Sagarin and Colley computer polls.  Keeping in mind that the computer algorithms that determine the computer polls have no human intervention (the data on wins and score differentials are simply entered into the matrices), it is remarkable that the computer polls agree so closely with the human polls particularly within the top 5 teams (remember there are 128 teams in the NCAA Division I Football Bowl Subdivision – FBS in 2015).  The details of the computer algorithms are discussed at the end of the essay.

Rankings from the final week of the 2015 season.

I agree that informed opinions can be important, and a group of football experts might have insights into the game that mere mortals might not.  Many of the committee members are former coaches and athletic directors, but I am concerned that their opinions might be tainted by the teams and conferences they come from (they might not even know they are biased).  What is the difference between an informed and a prejudiced opinion?  I am not sure, but can these men and women be truly neutral?  There is a massive amount of scientific research showing that we have difficulty being unbiased.  Nobel laureate Daniel Kahneman has written an entire book on heuristic and cognitive biases1.  A good example comes from witnesses of crimes or traffic accidents.  Any good detective knows to take eyewitness testimony before the witnesses have had a chance to discuss the event, because studies show that witnesses who share information tend to foster similar errors about the event.  And the research also shows that eyewitnesses are notoriously inaccurate.  Elizabeth Loftus has written an immensely entertaining book about her research involving witness error and bias if you care to delve into the details2.  Loftus writes: “In my experiments, conducted with thousands of subjects over decades, I’ve molded people’s memories, prompting them to recall nonexistent broken glass and tape recorders; to think of a clean-shaven man as having a mustache, of straight hair as curly, of stop signs as yield signs, of hammers as screwdrivers; and to place something as large and conspicuous as a barn in a bucolic scene that contained no buildings at all.  I’ve been able to implant false memories in people’s minds, making them believe in characters who never existed and events that never happened.”  A recent study based on statistical analyses has shown that the writers’ and coaches’ college football polls are significantly affected by such things as point spreads and television coverage3.

If humans have all of these susceptibilities toward bias, why do we use humans to choose who plays in college football’s vaunted playoff system?  Well, because humans are biased — they think they can choose better than nuanced mathematical formulas.  But the nuanced mathematical formulas are completely unbiased — in other words, they use only raw game data, usually wins/losses and score differentials.  We can remove the most biased element in the system, humans, by relying on science, logic, and mathematics rather than art or whatever else the committee protocol calls human intervention.  It is absolutely archaic in the day of big data to ignore analytical models in favor of subjective comparisons.  Do the coaches, athletic directors, and politicians (e.g., Condoleezza Rice is on the committee but has virtually no “football experience” in her background) that make up the football committee understand the value of algorithms?  I am not sure, but there is a wealth of research that says they should.

From my experience on football boards and chat rooms, nothing gets a fan’s dander up more than claiming computer models are unbiased.  But they are.  They start with nothing but formulas melded together into computer algorithms.  And the algorithms are based on sound mathematical principles and proofs.  There are many college football algorithms used in computer ranking models, and many of them are considered proprietary.  More than 60 are compiled on www.mratings.com.  The math can get pretty intense, but if you are interested in the details of the known algorithms, I recommend the book Who’s #1?: The Science of Rating and Ranking by Amy N. Langville at the College of Charleston and Carl Meyer at North Carolina State University4.  The people who create these algorithms are steeped in mathematical prowess.  For example, the Colley matrix was developed by Wes Colley after he completed his PhD in astrophysics at Princeton University.  Although the math can get tricky, the principle is rather simple.  The algorithms typically involve matrices and vectors that simultaneously consider not only which teams beat which opponents but which teams those opponents beat, and which teams those opponents beat, out into n dimensions.  In addition, difficulty of schedule and score differentials can also be incorporated into the algorithms.  When we watch college football we get a sense of which team is best by comparing how each team plays against its opponents.  But such opinions are hopelessly mired in bias and are compounded by the fact that preseason polls skew our perceptions before the season begins.  The algorithms do precisely what human perception is trying to do, but without any biases and simultaneously with a huge array of data5.
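To make the matrix idea concrete, here is a minimal sketch of the Colley method in Python.  The three-team round-robin is hypothetical, chosen only for illustration; a real implementation would ingest a full season of FBS results.

```python
import numpy as np

def colley_ratings(teams, games):
    """Colley method: solve C r = b, where C encodes games played and
    b encodes win-loss margins.  `games` is a list of (winner, loser)."""
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    C = 2.0 * np.eye(n)      # Colley: C_ii = 2 + total games played by team i
    b = np.ones(n)           # b_i = 1 + (wins_i - losses_i) / 2
    for w, l in games:
        wi, li = idx[w], idx[l]
        C[wi, wi] += 1; C[li, li] += 1   # each game adds to both diagonals
        C[wi, li] -= 1; C[li, wi] -= 1   # off-diagonal: -(games between pair)
        b[wi] += 0.5; b[li] -= 0.5
    r = np.linalg.solve(C, b)
    return dict(zip(teams, r))

# Hypothetical round-robin: A beats B and C; B beats C
ratings = colley_ratings(["A", "B", "C"], [("A", "B"), ("A", "C"), ("B", "C")])
order = sorted(ratings, key=ratings.get, reverse=True)
print(order)   # ['A', 'B', 'C']
```

Solving the 3×3 system by hand gives ratings of 0.7, 0.5, and 0.3 for A, B, and C; the construction is identical for 128 teams, just with a bigger matrix, and no human judgment enters anywhere.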

I don’t understand the reluctance of the powers that be in college football to incorporate mathematical equations into the playoff system.  These types of algorithms permeate the business community.  Google’s PageRank ranks web pages using some of the same algorithms as the computer models that rank teams.  Although it is a carefully guarded secret, Langville and Meyer6 concluded, based on patent documents, that Google’s algorithm uses the Perron-Frobenius theorem, which is also used by James Keener in his football rankings7.  The BellKor team won a $1 million Netflix Prize for writing an algorithm that was 10% better than the one created by Netflix.  Every time Netflix suggests a movie, it is exploiting the same kinds of algorithms used in football rankings.  In fact, Langville and Meyer applied the algorithms behind the Colley and Massey methods to the Netflix movie ratings database and came up with a list of the top movies in their book (pages 25-27).  No one complains about page ranks or movie suggestions being too nuanced in rigor.  Can you imagine a committee trying to ascertain page ranks?  No one promotes the “eye test” to rank web pages, even though the eye test is commonly prescribed as the only legitimate way to determine which teams are the best in college football.  Isn’t it obvious that this is about control rather than human abilities versus computer algorithms?
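The machinery shared by PageRank and the Keener-style rankings is the dominant eigenvector of a nonnegative matrix, whose existence and uniqueness is exactly what the Perron-Frobenius theorem guarantees.  Here is a toy sketch using power iteration; the points matrix is invented, and Keener's actual method normalizes the entries differently, so treat this only as an illustration of the principle.

```python
def dominant_eigvec(A, iters=200):
    """Power iteration: repeatedly apply A and renormalize.  For a
    primitive nonnegative matrix this converges to the unique
    all-positive dominant eigenvector (Perron-Frobenius theorem)."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]   # normalize so the entries sum to 1
    return v

# Invented head-to-head data: A[i][j] = points team i scored against team j
A = [[0, 28, 35],
     [14, 0, 21],
     [10, 17, 0]]
ranks = dominant_eigvec(A)   # team 0, which scored the most, rates highest
```

Swap the points matrix for a link matrix and the same iteration ranks web pages; swap it for a movie-rating matrix and it recommends films.
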

Deep data has been integrated into almost all sports.  Witness the way professional baseball has dispensed with the traditional positions on the field in favor of moving players to where the hitter is most likely to hit.  It is not unusual to see a shortstop in shallow center.  The game changed primarily when computers had the power to handle the large amounts of data that could be collected.  Read Moneyball by Michael Lewis8 to see how general manager Billy Beane used statistics (now called sabermetrics) to take a struggling Oakland Athletics team to the playoffs.  The opinions of seasoned professional scouts who relied on the eye test to recruit talent have gone the way of the dodo bird.

In my opinion, nothing seems more egregious in the polls than the way teams are punished for losing when they play difficult schedules while other teams are rewarded for winning against cupcake schedules.  Let’s pretend we can determine the thirteen best teams in college football before the season, and the number 13 team has scheduled all of the top twelve teams during the regular season.  The number 13 team could finish the season 0-12.  The AP and Coaches polls would be extremely hard on the team and would never rank it in the top 25, even though we know by definition it is the 13th best team in the nation.  But the computer algorithms would recognize the logic behind the difficult schedule, and although they might not rank the team 13th, it would probably have a good showing.  The counter to this example is a team with a fluff schedule.  The polls are notorious for ranking teams with perfect records higher than is sometimes justified when strength of schedule is considered.  In theory, any team in the FBS could win all its games if it played only lesser-ranked opponents.  Fortunately, it appears the playoff selection committee has recognized that strength of schedule is an important factor, and they do consider it.  However, the committee’s willingness to consider head-to-head games seems logically misplaced.  Let’s go back to our top 13 ranked teams again.  If the number 4 team lost to the number 5 team, and the number 5 team lost to the number 13 team, the committee would indubitably place the number 5 team into the playoffs over the number 4 team based on the silly head-to-head rule, even though the computer algorithms would recognize the problem and consider the entire schedule of each team.

Although we don’t like to admit it, statistically improbable events can have a huge impact on single games – events that may never be noticed by the committee (or the computer algorithms, for that matter – see the section on betting below).  If you saw last year’s national championship, you could not be faulted for thinking that Clemson may have been the best team in the country even though they lost to Alabama (it hurts me to say this because I am an alumnus of Alabama and a huge Crimson Tide fan – I go way back to the Bear).  Alabama recovered an onside kick that appeared to change the momentum of the game, and yet the probability of that kick being so perfectly placed seems very low.  Bama also needed a kickoff return and a few turnovers on their way to a 45-40 national championship victory.  The point is that minutiae that would otherwise not have a big impact can and do play a role.  It is the butterfly effect, which teaches us that there is no right answer when it comes to rankings.  The best we can do is create an unbiased mathematical system rooted in statistics and deep data with as little input as possible from naturally biased humans.

Last year I set out to test the college football computer algorithms by setting up a spreadsheet that monitored theoretical bets of $100 on each of the college games from November 8 through the college bowl games.  I waited until late in the season because, in theory, the algorithms work better with more data.  I used Sagarin‘s predictor ranking, which includes score differentials and home-team advantage.  First, a few words about these items.  It is true that teams can run up the score, although it rarely happens on a consistent basis.  But most algorithms correct for large score differentials9 to avoid any advantage gained in the rankings from running up scores.  Home-team advantage is an interesting subject in itself and is usually attributed to the psychological effects of playing in a stadium full of home-team fans.  But these effects are difficult to test.  The subject has also been addressed in the scientific literature, and much to my surprise, some studies show that referees can be influenced by the home crowd.  For example, Harvard professor Ryan Boyko and his colleagues found that referees statistically favored the home team in 5,244 English Premier League soccer matches over the 1992 to 2006 seasons10.  Regardless of the reasons for the home-field effect, algorithms can correct for it.  Sagarin calculates a score to be added to the home team when betting.
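My bookkeeping amounted to a simple decision rule.  The sketch below is my reconstruction, not Sagarin's published formula, and the ratings, home edge, and line are invented numbers used only to show the mechanics.

```python
def pick_against_spread(home_rating, away_rating, home_edge, line_home):
    """Bet the home side when the predicted margin beats the Vegas line.
    `line_home` is the number of points the home team is favored by
    (negative if the home team is the underdog)."""
    predicted_margin = (home_rating - away_rating) + home_edge
    return "home" if predicted_margin > line_home else "away"

# Invented example: ratings 85.0 vs 80.0, home edge 2.5, home favored by 6
pick = pick_against_spread(85.0, 80.0, 2.5, 6.0)   # predicted margin is 7.5
```

Here the predicted margin (7.5) exceeds the 6-point line, so the rule takes the home side; against a 9-point line the same ratings would take the visitor.
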

The results of my theoretical betting are shown below for each week (the bowl season caused the number of games to vary in December).  Had I bet $100 on each of the 287 games monitored, I would have lost $700.  So what’s so terrific about a ranking system that loses money in Vegas?  It is simple – the point spread.  If Vegas lost money on college football games, there would be no betting.  It is common for the media to present point spreads as a reflection of who Vegas thinks will win the game.  But spreads are not about who is favored; spreads are about enticing bettors to bet.  With a point spread, Vegas does not have to predict winners; all it needs to do is make the spread seem to favor one side or the other.  Vegas knows how to make money on all those built-in biases we have.  The books collect a fee (called the vig or juice) for handling the bet, and as long as they have about the same amount of money on both sides, they take home a tidy profit.  To keep the two sides balanced, they shift the spread during the course of the week as the bets come in.  Even the computer algorithms can’t beat the crafty Vegas bookies.  Even though the computer algorithms are very good at predicting winners (near 60%), no algorithm (or, for that matter, any human) can beat spreads on a consistent basis11.  But people keep trying.
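The arithmetic behind the vig is worth spelling out.  At the standard -110 line, a bettor risks $110 to win $100, so breaking even requires winning more than 52.4% of bets against the spread, and a balanced book keeps about 4.5% of the handle no matter which side wins:

```python
# Standard -110 odds: risk $110 to win $100.
risk, payout = 110.0, 100.0

# A bettor breaks even when p*payout = (1-p)*risk, i.e. p = risk/(risk+payout).
break_even = risk / (risk + payout)
print(round(break_even, 4))        # 0.5238 -> must win ~52.4% just to break even

# Balanced book: $110 bet on each side ($220 handle); the winner is paid
# $210 ($110 stake returned plus $100 winnings); the book keeps $10 regardless.
hold = (220.0 - 210.0) / 220.0     # ~4.5% of the handle
```

That 2.4-point cushion above a coin flip is exactly the margin that swallows a predictor that picks straight-up winners well but spreads only so-so.
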

Amount of money theoretically lost using Sagarin rankings.

Langville and Meyer point to two reasons why computer algorithms don’t beat point spreads: 1) The algorithms are created to predict rankings, not score differentials.  In the computations, they ignore important scoring factors, such as the strength of a defensive backfield against a high-octane passing attack, which might create lopsided scores even though the rankings rate both teams average.  And there are always the statistical flukes, mentioned above, that occur in games and cannot be predicted.  2) Spreads are also difficult to predict, particularly in football, because points usually come in sets of 3, 6, or 7.  Game margins therefore tend to cluster at multiples of these numbers rather than being evenly distributed.

I must conclude from the data that the only way to select the four teams that play at the end of the year in college football is to use computer algorithms.  There should still be a committee that decides how to weigh such things as league championships.  It will also be extremely important to make sure that the algorithms used are completely understood by the committee (no black-box proprietary claims).  The algorithms need to be analyzed to determine which equations and factors give the most meaningful results and changed accordingly.  Score differentials should be included within the algorithm after they have been corrected for the potential of teams running up the scores.

Appendix – a brief overview of linear algebra and rankings

There is no way I can do justice to the subject in an essay.  But I did want to emphasize how these equations eliminate any bias or human influence.  I highly recommend the Khan Academy if you want a brief overview of linear algebra.

Rather than use my own example, I have decided to use the data presented by Langville and Meyer because it is easier to understand when every team in the example has played every other team in the division.  The data shown below comes from the 2005 Atlantic Coast Conference games.

The 2005 data from the Atlantic Coast Conference.

The Massey method of ranking teams was developed by Kenneth Massey for his honors thesis in mathematics while he was an undergraduate at Bluefield College in 1997.  He is currently an assistant professor at Carson-Newman University.  Using his equations, the table above can be converted into a linear algebra equation of the form Mr = p, where M is a matrix containing information about which teams played which other teams, r is the vector of ratings (which determines the rankings), and p is the vector of each team’s cumulative score differentials:


Note that the diagonal entries of M are the number of games each team played, and each -1 in the matrix shows that each team played every other team once.  The last row is a trick Massey used to force the ratings to sum to 0 (without it, M is singular).  The solution is calculated by inverting the matrix M and multiplying by p to obtain the following results12:


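Since the matrices are easier to read as code than as typeset arrays, here is a minimal sketch of the Massey construction in Python.  The three-team schedule and point differentials are invented for illustration; they are not the ACC data above.

```python
import numpy as np

def massey_ratings(teams, games):
    """Massey method: build M r = p from (winner, loser, point_diff) games,
    then replace the last row of M with ones (and the last entry of p
    with 0) so the ratings sum to zero; otherwise M is singular."""
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    M = np.zeros((n, n))
    p = np.zeros(n)
    for w, l, diff in games:
        wi, li = idx[w], idx[l]
        M[wi, wi] += 1; M[li, li] += 1   # diagonal: games played
        M[wi, li] -= 1; M[li, wi] -= 1   # off-diagonal: -(games between pair)
        p[wi] += diff; p[li] -= diff     # cumulative point differentials
    M[-1, :] = 1.0                        # Massey's trick: ratings sum to 0
    p[-1] = 0.0
    r = np.linalg.solve(M, p)
    return dict(zip(teams, r))

# Invented schedule: A beats B by 10, A beats C by 20, B beats C by 4
r = massey_ratings(["A", "B", "C"], [("A", "B", 10), ("A", "C", 20), ("B", "C", 4)])
```

With these invented games the solve returns A = 10, B = -2, C = -8, which sum to zero as the last-row trick requires; predicted margins are just differences of ratings.
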
  1. Kahneman, D. (2011) Thinking, Fast and Slow: Farrar, Straus and Giroux.
  2. Loftus, E. and Ketcham, K. (1994) The Myth of Repressed Memories: False Memories and Allegations of Sexual Abuse: St. Martin’s Press
  3. Paul, R. J., Weinbach, A. P., and Coate, P. (2007) Expectations and voting in the NCAA football polls: The wisdom of point spread markets: J. Sports Economics, 8, 412
  4. Langville, A.N. and Meyer, C. (2012) Who’s #1?: The Science of Rating and Ranking: Princeton University Press
  5. I would like to thank Amy Langville for suggested changes here
  6. see ref. 4
  7. Keener, J. (1993) The Perron-Frobenius theorem and the ranking of football teams: Society for Industrial and Applied Mathematics, 35, 80
  8. Lewis, M. (2003) Moneyball: W. W. Norton & Company
  9. see ref. 4
  10. Boyko et al. (2007) Referee bias contributes to home advantage in English Premiership football: Journal of Sports Sciences, 25, 1185
  11. see ref. 4
  12. see ref. 4 for details

Genomics – a brave new world

Embryonic stem cells (ES cells) are remarkable.  They come from animal (including human) embryos and can morph into any cell in the body, such as brain, bone marrow, intestine, muscle, or blood cells.  Biologists call them pluripotent; they can be isolated from an embryo and grown in laboratory petri dishes.  In the halcyon days of early stem-cell research, it did not escape the attention of scientists that genetic changes could be made to an ES cell, the cell could then be inserted back into an embryo, and the embryo placed into the womb, where it would differentiate into all the cells of the body carrying the new genetic modification.  The process became so widespread in the early 1990s that biologists referred to the genetically modified animals as transgenic.  An example that caught the attention of the world was a mouse carrying a jellyfish gene that made it glow in the dark (under blue lamps).  It was as if a grand gift had been given to geneticists that enabled them to understand how genes function.  Mice could be made to double in size, develop Alzheimer’s disease, grow cancer tumors, age prematurely, improve their memories, or erupt with epilepsy, all through gene manipulation.  It was a remarkable way for scientists to study genetic diseases.  There was just one problem — human ES cells did not respond favorably to genetic modification the way mouse ES cells did.  There would be no transgenic humans anytime soon, even if the ethical issues were overcome.

Embryonic mouse stem cells

Meanwhile, geneticists were probing a myriad of other ways to correct specific genetic disorders.  One group focused on a gene called ornithine transcarbamylase (OTC), which codes for an enzyme that breaks down proteins in the liver.  Without the enzyme, a byproduct of protein breakdown, ammonia, accumulates throughout the body.  As you might imagine, ammonia buildup can have devastating consequences, and most children with the genetic disorder do not survive into adulthood.  Enter Jesse Gelsinger, who had a mild case of OTC deficiency1.  Mark Batshaw and James Wilson, then at the University of Pennsylvania, postulated that they could deliver a working copy of the OTC gene to Gelsinger’s cells via an adenovirus (viruses reproduce by entering the body and injecting either DNA or RNA into a living cell, effectively taking over the cell to produce more copies of themselves).  The hope was that the virus would insert the corrected DNA into Gelsinger’s liver cells, which would then synthesize the enzyme Jesse needed.  The treatment worked in mice but had mixed results in monkey trials — some monkeys’ immune systems responded drastically, causing liver failure and other disorders.  Batshaw and Wilson responded by making the virus less potent and reducing the dose for the proposed trial with Gelsinger.  In 1997, they approached the Recombinant DNA Advisory Committee (RAC) of the government’s National Institutes of Health for approval.  RAC agreed, and Jesse and his father were excited volunteers, convinced that Jesse’s close encounters with death from food reactions, his regimented diet, and the plethora of pills he took were coming to an end.

On September 13, 1999, Jesse began his trial of viral injections.  Four days later he was dead from a massive immune reaction to the virus.  The press reports set off a chain reaction: Congress initiated hearings, district attorneys investigated, the university backpedaled, and the FDA and RAC launched official inquiries.  When a “pattern of neglect” was discovered in the research by Batshaw and Wilson, the FDA halted trials in other laboratories, and a strict moratorium fell over the entire research discipline.  We will never know how much of the response was an effort to deflect blame from governmental agencies, but Batshaw and Wilson became the “fall guys,” and genetic research would be impacted for a decade.  I recognize the need for caution and ethical considerations, but I also know that children are dying from diseases like OTC deficiency every day.  I am sure many of them and their parents would gladly accept the chance of survival beyond childhood via potentially risky experiments.  Did we throw the baby out with the bath water?  After all, Jesse died wanting to help others by finding a cure for OTC deficiency.  He did not die from the trial’s basic premise.  He died because his body had highly reactive antibodies to the virus, probably because he had been exposed to a similar adenovirus in his past.

Fortunately, Jesse’s disturbing death did not affect genetic diagnosis – attributing genes to diseases.  Examples include the BRCA1 gene associated with breast cancer, CNV mutations linked to schizophrenia, and ADCY5 and DOCK3 genes related to neuromuscular disease.  I highly recommend Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History2 for further reading.

But to set the stage for the technology available today, we need to look at in vitro fertilization (IVF).  In IVF, an embryo is formed from the fertilization of an egg by a sperm outside of the body.  The single-cell embryo is bathed in nutrient-rich fluids in an incubator and left to divide for three days until there are 8 to 16 cells.  The embryo is then implanted into a woman’s womb.  Remarkably, if a few cells are removed from the growing embryo in the incubator, the embryo is unaffected.  It simply replaces the lost cells.  Usually several eggs are harvested for IVF and fertilized.  Cells can then be removed from each embryo and genetically tested, or screened, for mutations, allowing only a fertilized egg with no known serious genetic disorders to be implanted in the womb.  Genetic testing in this way has been done since the late 1980s and is referred to as preimplantation genetic diagnosis (PGD).  It is eugenics without the terrible baggage that the word has carried from past diabolical experiments (think Mengele and the Nazis).  But that does not mean the method has not been misused.  PGD is being used surreptitiously to select for sex, particularly in India and China, even though sex selection is banned there.  It is estimated that as many as 10 million females have “disappeared” through PGD, abortion, infanticide, or neglect of female children3.

Diagram of in vitro fertilization – Wikipedia

According to Mukherjee, three principles have guided doctors in deciding which embryos will not be implanted during IVF.  First, the gene needs to lead to a serious, life-threatening disease with almost a 100 percent chance of the child or adult developing it.  Cystic fibrosis is a good example – a single gene causes the disease.  The disorder primarily affects the lungs, causing chronic coughing from frequent lung infections.  Life expectancy is about 46 years.  The misery is not limited to the lungs: sinus infections, poor growth, clubbing of the digits, fatty stools, and infertility (among males) are just some of the other effects.  Second, the expression of the gene must lead to “extraordinary suffering”.  And finally, there must be a consensus among the medical community that the intervention is morally and ethically sound, and the family involved must have complete freedom of choice.

Even so, the Roman Catholic Church (and other religious institutions) has strongly objected to IVF and related gene technologies.  John Haas, a Catholic theologian, states: “One reproductive technology which the Church has clearly and unequivocally judged to be immoral is in vitro fertilization or IVF. Unfortunately, most Catholics are not aware of the Church’s teaching, do not know that IVF is immoral, and some have used it in attempting to have children…  In IVF, children are engendered through a technical process, subjected to “quality control,” and eliminated if found “defective.”4.  Honestly, I don’t understand where this moral imperative comes from.  If there is a God, He/She must have understood that we would eventually discover how to cure genetic diseases.  Apparently Haas and the Church find no fault with technologies that would correct the problem after the embryo is in the womb but chafe at the idea of avoiding the disease before the embryo is placed in the womb.  I suspect that Haas might change his mind if he had to watch someone die slowly from a disease like cystic fibrosis5.  Clearly, our society will continue to grapple with the ethical and moral issues of gene technologies, particularly now that research is making social engineering theoretically “available”.  Mukherjee discusses the identification of a gene related to psychic stress to emphasize how blurred the ethical decisions are potentially becoming.  Where society draws the line is going to be as important as the genetic technology itself.  But these ethical dilemmas are just the tip of the iceberg.

Improved safety and more careful oversight have gradually led to better research.  New viruses have been developed that effectively deliver gene-altered DNA or RNA to cells while avoiding catastrophic immune responses like the one that killed Jesse Gelsinger.  In 2014, viral delivery systems successfully treated hemophilia – the genetic disorder that prevents blood from clotting.  And although the setback genetic engineering suffered in the aftermath of Jesse’s death had been largely overcome as the new millennium approached, germ-line therapy was set back again when George W. Bush drastically restricted the use of ES cells in federal research programs in 2001.  Germ-line therapy is the modification of the human genome in reproductive cells so that the modified gene is passed on to offspring.  Imagine ridding family genomes of the gene mutations that cause cystic fibrosis or breast cancer (BRCA1) forever.  Yet because ES cells are frequently obtained from embryos left over from IVF, Bush clamped down on the research (presumably under pressure from the religious right), which nearly extinguished United States progress in the field for a decade.  I understand the abortion debate, but collecting ES cells from embryos that will never be implanted in a woman’s womb seems to carry the abortion issue to drastic extremes.

Jennifer Doudna of the University of California, Berkeley and Emmanuelle Charpentier of the Helmholtz Centre for Infection Research knew from earlier research that bacteria had RNA that could find and recognize DNA in a virus and then deliver a protein that cut the viral DNA, thus disabling it – an effective way for bacteria to fight off viral attacks.  By 2012, they were not only able to program the process to seek and cut any specified section of DNA, but they had learned how to flood the region near the cut with desired DNA fragments that the cut DNA incorporated into its genome.  In effect, they had created a gene-splicing technique they designated CRISPR/Cas96 (clustered regularly interspaced short palindromic repeats).  In other words, Doudna and Charpentier had discovered a means to exchange a serious mutant gene, like the cystic fibrosis gene, for a harmless one.  The dawn of genetic editing had begun7.
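The seek-cut-insert logic described above can be illustrated with a toy model.  This is only a conceptual sketch treating DNA as a text string; real CRISPR/Cas9 editing involves guide RNA, PAM recognition sites, and the cell’s own repair machinery, none of which is modeled here, and the sequences are made up for the example.

```python
# Conceptual toy model of CRISPR-style editing: locate a target sequence
# in a DNA string, "cut" there, and splice in a replacement fragment.
# Purely illustrative -- real gene editing is vastly more complicated.

def edit_sequence(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target` with `replacement`."""
    cut_site = genome.find(target)      # the guide "recognizes" the target
    if cut_site == -1:
        return genome                   # no match: genome left unchanged
    # Cut out the target and incorporate the supplied fragment
    return genome[:cut_site] + replacement + genome[cut_site + len(target):]

genome = "ATGGCGTTTACCGGA"              # hypothetical sequence
edited = edit_sequence(genome, "TTTACC", "TTCACC")  # "repair" one base
print(edited)  # ATGGCGTTCACCGGA
```

The key idea the sketch captures is that the technique is programmable: changing the `target` string redirects the “cut” anywhere in the sequence, just as changing the guide RNA redirects Cas9.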

About the same time that Doudna and Charpentier were developing the CRISPR technology, scientists at Cambridge in England and at the Weizmann Institute in Israel were discovering how to turn ES cells into primordial germ cells – the cells that develop into sperm and eggs in the embryo.  The brave new world Huxley predicted in 1932 is upon us.  The technology now exists to form a germ-line cell, genetically modify it with CRISPR, and convert the modified cells into the sperm and eggs that form an embryo, producing, through IVF, a genetically modified human – a transgenic human.  However, as you might imagine, there are strict controls and bans on this research in the United States based on ethical and moral issues.  Scientists are forbidden to introduce genetically modified cells that will develop into embryos directly into humans, and ES cells cannot be genetically modified if they will form sperm and egg cells.  Most other countries have followed the US lead with similar bans.  Mukherjee tries to explain the concern: “The crux, then, is not genetic emancipation (freedom from the bounds of hereditary illness), but genetic enhancement (freedom from the current boundaries of form and fate encoded by the human genome).  The distinction between the two is the fragile pivot on which the future of genome editing whirls.”  It is clear that we are wrestling with our past history of the misplaced promotion of horrible eugenics programs.  I asked Doudna to clarify the reason for a moratorium: “the moratorium is not a call to outright ban engineering of the human germ line. Instead, it suggests a halt to such clinical use until a broader cross section of scientific, clinical, ethical, and regulatory experts, as well as the public at large have a chance to fully consider the ramifications.”

But we may not have the luxury of waiting until the ethics and morals of the science are thoroughly debated.  In 2015, Junjiu Huang and his team at Sun Yat-sen University in Guangzhou, China, used CRISPR to eliminate a gene that causes a blood disorder in human embryos.  There were problems with the results, and the procedure was stopped (although there was never any intention of allowing the embryos to mature in a womb).  The experiments set off international alarms, and the scientific journals Nature, Cell, and Science refused to publish the paper.  The paper was eventually published in Protein + Cell.  Huang has made it clear that he will continue to pursue experiments to correct the problems that surfaced during the previous work.  “They did the research ethically,” noted Tetsuya Ishii of Hokkaido University in Sapporo, Japan, in Science, but several genetic watchdog groups called for an end to the procedures.  Other scientists, including a Nobel laureate, were not disturbed by the research as long as the experiments stopped short of clinical application8.

Genetic editing in human embryos.

The incident with Junjiu Huang reminds me of the work that has been done on game theory.  As far back as the 1920s, one of the leading lights of mathematics, John von Neumann of the Institute for Advanced Study (closely associated with Princeton University, and home to Albert Einstein and Kurt Gödel), sought to define, through mathematical expressions, logical procedures in games that could be applied to real-life scenarios.  In his superb book Prisoner’s Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb, William Poundstone summarizes von Neumann’s work: “Von Neumann demonstrated mathematically that there is always a rational course of action for games of two players, provided their interests are completely opposed9.”  One of the early applications of game theory came when the United States was deciding whether to build a hydrogen bomb – a huge leap in destructive capability compared to the atomic bomb.  Many prominent scientists, such as Robert Oppenheimer, the director of the Manhattan Project, were outspoken against it.  The best strategy, they reasoned, would be to cooperate with the Soviet Union, with both countries agreeing not to develop the H-bomb.  The research was expensive, and it would generate thousands of bombs that would be stockpiled and probably never used.  Game theory logic did not concur.  There was only one rational move in the “game” of brinkmanship between the US and the Soviets – build the H-bomb whether or not the Soviets were willing to agree to a moratorium.  There was simply no way to be absolutely sure the Soviets would live up to any potential agreement.
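The brinkmanship logic above can be sketched as a two-player game.  The payoff numbers below are hypothetical ordinal values I chose only to mirror the reasoning in the text (higher is better for the US); they do not come from Poundstone or von Neumann.

```python
# Illustrative payoff matrix for the H-bomb decision described above.
# Keys are (US choice, Soviet choice); values are hypothetical payoffs
# to the US, chosen only to reflect the reasoning in the text.
payoffs_us = {
    ("build", "build"):     1,   # costly arms race, but parity is kept
    ("build", "refrain"):   3,   # US holds the advantage
    ("refrain", "build"):   0,   # worst case: Soviets gain the edge
    ("refrain", "refrain"): 2,   # mutual restraint (if it could be trusted)
}

def dominant_strategy(payoffs, my_options, their_options):
    """Return my option that does at least as well no matter what the
    opponent does (a weakly dominant strategy), or None if none exists."""
    for mine in my_options:
        if all(
            payoffs[(mine, theirs)] >= payoffs[(other, theirs)]
            for theirs in their_options
            for other in my_options
        ):
            return mine
    return None

best = dominant_strategy(payoffs_us, ["build", "refrain"], ["build", "refrain"])
print(best)  # build
```

With these payoffs, “build” is better for the US whether the Soviets build or refrain, which is exactly why the game-theoretic logic rejected relying on a moratorium.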

I think the same strategy applies to germ-line experiments.  The logic is clear – the Chinese appear poised to develop the technology regardless of what we do, and lacking the technology while other countries have it could be detrimental to the best interests of the United States.  Developing germ-line therapy seems even more crucial than, say, the H-bomb, because the therapy will potentially lead to cures for horrible genetic diseases.  I recognize the need to be discreet and careful, but we also must not dally on something so important.  In December of 2015, the International Summit on Human Gene Editing was sponsored by the US National Academy of Sciences, the US National Academy of Medicine, the Chinese Academy of Sciences, and the Royal Society of London.  The planning committee summarized recommendations for “the development and human applications of genome editing,” with agreements made to hold future summits.  The recommendations can be reviewed in an editorial by Theodore Friedmann in Molecular Therapy10.  All I can say is that the sides are talking, and that is important.  The research continues with some controls.

  1. In Jesse’s case, the gene was not inherited but was caused by a mutation in only one cell before birth.  The result was unusual in that not all of his cells were OTC deficient as might be expected if he had inherited the trait.
  2. Mukherjee, S. (2016) The Gene: An Intimate History, Scribner
  3. see ref. 2
  4. see, for example, Haas, J. M. (1998) Begotten Not Made: A Catholic View of Reproductive Technology
  5. I was raised a Roman Catholic, and I know that Catholics believe in divine inspiration.  That is, they believe the Pope, with or without the input of his advisers, makes a decision on the morality of an issue with the understanding that the decision is inspired directly by God.  I would hasten to point out that the terrorists who took down the World Trade Center believed they were divinely inspired also, so believing does not make it so.  I sometimes wonder if these men (and I emphasize men because there are no women in the upper echelons of the Holy See) ever wonder whether their opinions are really divinely inspired.  They place a great deal of confidence in a decision that will bring immense misery into the world – consider all those Catholics who refuse to use IVF and have children with serious genetic disorders.
  6. Cas9 is the protein that performs the cutting.
  7. see Exterminating invasive species with gene drives
  8. Kaiser, J. and Normile, D. (2015) Embryo engineering study splits scientific community: Science, 348, 486-487
  9. Poundstone, W. (1992) Prisoner’s Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb: Anchor Books
  10. Friedmann, T. (2016) An ASGCT Perspective on the National Academies Genome Editing Summit: Molecular Therapy, 24, 1-2

Taking the “pseudoscience” out of fingerprint identification

After the Madrid terrorist bombing on March 11, 2004, a latent fingerprint was found on a bag containing detonating devices.  The Spanish National Police agreed to share the print with various police agencies.  The FBI subsequently turned up 20 possible matches in their database.  One of the matches led them to their chief suspect, Brandon Mayfield, because of his ties to the Portland Seven (Mayfield, a lawyer, had represented one of the seven American Muslims found guilty of trying to go to Afghanistan to fight with the Taliban – in an unrelated child custody case) and his conversion to Islam (Mayfield was in the FBI database because of a 1984 burglary arrest and his military service).  FBI Senior Fingerprint Examiner Terry Green considered “the [fingerprint] match to be a 100% identification”1.  Supervisory Fingerprint Specialist Michael Wieners and John T. Massey, Unit Chief of the Latent Print Unit with more than 30 years’ experience, “verified” Green’s match according to the referenced court documents.  Massey had been reprimanded by the FBI in 1969 and 1974 for making “false attributions,” according to the Seattle Times2.  Mayfield was arrested and held for more than two weeks as a material witness but was never charged while the FBI argued with the Spanish National Police about the veracity of their identification.  Apparently the FBI ignored Mayfield’s protests that he did not have a passport and had not been out of the country in ten years.  They also initiated surveillance of his family by tapping his phone, bugging his home, and breaking into his home on at least two occasions3 – all legal under the relatively new Patriot Act.

Meanwhile in Spain, the Spanish National Police had done their own fingerprint analysis and eventually concluded that the print matched an Algerian living in Spain — Ouhnane Daoud.  But the FBI was undeterred.  The New York Times4 reported that the FBI sent fingerprint examiners to Madrid to convince the Spanish that Mayfield was their man.  The FBI outright refused to examine evidence the Spanish had and according to the Times “relentlessly pressed their case anyway, explaining away stark proof of a flawed link — including what the Spanish described as tell-tale forensic signs — and seemingly refusing to accept the notion that they were mistaken.”

The FBI finally released Mayfield and followed with a rare apology for the mistaken arrest.  Mayfield subsequently sued, and American taxpayers shelled out $2 million when the FBI settled the case.  More importantly, the FBI debacle occurred during a debate among academics, government agencies, and the courts about the “error rate” associated with fingerprint analyses5.  But before I address the specific problems with fingerprint identification, let’s talk about the Daubert v. Merrell Dow Pharmaceuticals (1993) court case.  The details are fairly banal and would be meaningless to this essay except for the fact that it reached the Supreme Court and established what is now referred to as the Daubert standard for admitting expert witness testimony into the federal courts6.  In summary, the judge is responsible (a gatekeeper in Daubert parlance) for making sure that expert witness testimony is based on scientific knowledge7.  Furthermore, the judge must make sure the information from the witness is scientifically reliable – that is, the scientific knowledge must be shown to be the product of a sound scientific method.  The judge must also ensure that the testimony is relevant to the proceedings, which loosely translated means the testimony should be the product of what scientists do: form hypotheses, test them empirically, publish results in peer-reviewed journals, and determine the error in the method involved when possible.  Finally, the judge should make a determination of the degree to which the research is accepted by the scientific community8.

“No two fingerprints are identical” – it has become almost a law of nature within forensic fingerprint laboratories.  But no one knows whether it is true.  That has not stopped the FBI from maintaining the facade.  In a handbook published in 19859, the FBI states: “Of all the methods of identification, fingerprinting alone has proved to be both infallible and feasible”.  I think fingerprints are an exceptionally good tool in the arsenal of weapons against crime, but it is essentially unscientific to perpetuate claims of infallibility.  The fact is that the statement “no two fingerprints are identical” is logically unfalsifiable10.  And the more scientists argued against the infallibility of fingerprinting, the more the FBI became entrenched in its position after the Mayfield mistake11.  Take, for example, what Massey said shortly after the Mayfield case: “I’ll preach fingerprints till I die. They’re infallible12.”  It may be true that no two fingerprints are perfectly alike (I suspect it is), but it is also true that no two impressions of the same finger are alike.  The National Academy of Sciences asserted that “The impression left by a given finger will differ every time, because of inevitable variations in pressure, which change the degree of contact between each part of the ridge structure and the impression medium13.”  The point therefore becomes not whether all fingerprints are unique but whether laboratories have the ability to distinguish between similar prints, and if they do, what the error is in making that determination.

U.S. District Judge Louis H. Pollak ruled in a January 2002 murder case that fingerprint analyses did not meet the Daubert standards.  He reversed his decision after a three-day hearing.  Donald Kennedy, Editor-in-Chief of Science, opined: “It’s not that fingerprint analysis is unreliable. The problem rather, is that its reliability is unverified either by statistical models of fingerprint variation or by consistent data on error rates14 15.”  As one might expect, the response by the FBI and federal prosecutors to Pollak’s original ruling and the subsequent criticism was a united frontal attack based not on statistical analyses verifying the reliability of fingerprint identification but on its supposed infallibility, resting on more than 100 years of fingerprint identification conducted by the FBI and other agencies around the world.  The FBI actually argued that the error rate was zero.  FBI agent Stephen Meagher stated during the Daubert hearing16, to Lesley Stahl during an interview on 60 Minutes17, and to Steve Berry of the Los Angeles Times during an interview18 that the latent print identification “error rate is zero”.  How can the error rate be zero when documented cases of error like Mayfield’s exist?  Even condom companies state the chance of pregnancy when using their product.

In 2009, the National Academy of Sciences, through its committee the National Research Council, produced a report on how forensic science (including fingerprinting) could be strengthened19.  Perhaps the most eye-opening conclusion of the report is that analyzing fingerprints is subjective.  It is worth quoting their entire statement: “thresholds based on counting the number of features [see diagram below] that correspond, lauded by some as being more “objective,” are still based on primarily subjective criteria — an examiner must have the visual expertise to discern the features (most important in low-clarity prints) and must determine that they are indeed in agreement.  A simple point count is insufficient for characterizing the detail present in a latent print; more nuanced criteria are needed, and, in fact, likely can be determined… the friction ridge community actively discourages its members from testifying in terms of probability of a match; when a latent print examiner testifies that two impressions “match,” they [sic] are communicating the notion that the prints could not possibly have come from two different individuals.”  The Research Council was particularly harsh on the ACE-V method (see the diagram below) used to identify fingerprint matches: “The method, and the performance of those who use it, are inextricably linked, and both involve multiple sources of error (e.g., errors in executing the process steps, as well as errors in human judgment).”  The statement is particularly disconcerting because, as the Research Council notes, the analyses are typically performed by both accredited and unaccredited crime laboratories or even “private practice consultants.”

The fingerprint community in the United States uses a technique known by the acronym ACE-V: analysis, comparison, evaluation, and verification.  I give an example here to emphasize the basic cornerstone of the process, which involves comparing friction-ridge patterns on a latent fingerprint to known fingerprints (called exemplar prints).  Fingerprints come in three basic patterns: arches, loops, and whorls, as shown at the top of the diagram.  The objective of the analysis is to find points (also called minutiae) defined by various patterns formed by the ridges.  The important varieties are shown above.  For example, a bifurcation point is defined by the split of a single ridge into two ridges.  I have shown various points on the example fingerprint.  Once these points are ascertained by the examiner, they are matched to similar points in the exemplars based on their relative spatial locations.  It should be obvious that the interpretation of points can be problematic and subjective.  For example, note the circled region where there are many “dots” which may be related to ridges or may be due to contaminants.  There is still no standard in the United States for the number of matching points required to declare a “match” (although individual laboratories do set standards).  Computer algorithms, if used, provide a number of potential matches, and examiners determine which of the potential matches, if any, is correct.  The method appears straightforward, but in practice examiners have trouble agreeing even on the number of points because of the size of the latent print (latent prints typically cover about one fifth of the surface of an exemplar print), smudges and smearing, the quality of the surface, the pressure of the finger on the surface, etc.20  There is another technique, developed in 2005, called the Ridges-in-Sequence system (RIS)21.
For a more detailed description of latent fingerprint matching see Challenges to Fingerprints by Lyn and Ralph Norman Haber22.
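The point-counting comparison described in the caption can be sketched in a few lines of code.  This is a toy illustration, not any laboratory’s actual algorithm: the minutiae coordinates, the distance tolerance, and the idea of a fixed match threshold are all assumptions, and the fact that these thresholds are arbitrary is precisely the subjectivity the National Research Council criticized.

```python
import math

# Toy version of minutiae point counting.  Each minutia is (x, y, type);
# a latent point "matches" an exemplar point of the same type lying
# within a distance tolerance.  Both the tolerance and any pass/fail
# threshold are arbitrary choices here -- which is the problem.

def count_matches(latent, exemplar, tolerance=5.0):
    matched = 0
    used = set()                        # each exemplar point used at most once
    for (x1, y1, t1) in latent:
        for j, (x2, y2, t2) in enumerate(exemplar):
            if j in used or t1 != t2:
                continue
            if math.hypot(x1 - x2, y1 - y2) <= tolerance:
                matched += 1
                used.add(j)
                break
    return matched

# Hypothetical minutiae for a latent print and an exemplar print
latent   = [(10, 12, "bifurcation"), (30, 31, "ridge_ending"), (52, 48, "dot")]
exemplar = [(11, 13, "bifurcation"), (29, 33, "ridge_ending"), (80, 80, "dot")]

n = count_matches(latent, exemplar)
print(n, "of", len(latent), "points matched")  # 2 of 3 points matched
```

Notice that shrinking `tolerance` to 2.0 or demanding all three points would flip the outcome from “match” to “no match” with no change in the evidence, which is why an unstated threshold makes the conclusion subjective.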

Now you might think the Mayfield case was unusual given that the FBI and other agencies promote infallibility, but Mayfield seems to be the tip of the iceberg!  Simon Cole of the University of California, Irvine23 has documented 27 cases of misidentification through 2004 (excluding matches involving outright fraud) and underscores the high probability of many more undetected cases, given the relatively large number of documented mistakes that have slipped through the cracks (Cole uses the term “fortuity” to describe how the misidentifications were discovered) – particularly since the FBI and other agencies are very tight-lipped about how they arrive at their conclusions when there is a match.  These are quite serious cases involving people who spent time in prison on wrongful charges related to homicides, rape, terrorist attacks, and a host of other crimes.

It is worth looking at the Commonwealth v. Cowans case because it represents the first fingerprint-related conviction overturned on DNA evidence via the Innocence Project.  On May 30, 1997, a police officer in Boston was shot twice by an assailant using the officer’s own revolver.  The surviving officer eventually identified Stephen Cowans from a group of eight photographs and then from a lineup.  An eyewitness who observed the shooting from a second-story window also fingered Cowans in a lineup.  The assailant, after leaving the scene of the crime, forcibly entered a home, where he drank a glass of water from a mug.  The family present in the home spent the most time with the assailant and, revealingly, did not identify him in a lineup.  The police obtained a latent print from the mug, and fingerprint examiners matched it to Cowans24.  The conflict among the eyewitness testimonies made the fingerprint match pivotal, and it led to a guilty verdict.  After five years in prison, Cowans was exonerated on DNA evidence from the mug showing he could not have committed the crime.

What do we know about the error rate in fingerprint analyses?  Recently, Ralph and Lyn Haber of Human Factors Consultants compiled a list of 13 studies (meeting their criteria, through mid-2013) that attempt to ascertain the error rate in fingerprint identification25.  In the ACE-V method (see diagram above), the examiner decides whether a latent print is of high enough quality to use for comparison (I emphasize the subjectivity of the examination – there are no rules for documentation).  The examiner can conclude that the latent print matches an exemplar, making an individualization (identification); she can exclude the exemplar print (exclusion – the latent does not match); or she can decide that there is not enough detail to warrant a conclusion26.  The first thing to point out is that no study has been done in which the examiners did not know they were being tested.  This poses a huge problem because examiners tend to judge more prints inconclusive when they know they are being tested27.  Keeping that bias in mind, let’s look in detail at the results of one of the larger studies reviewed by the Habers.

The most pertinent extensive study was done by Ulery et al.28.  They tested 169 “highly trained” examiners with 100 latent and exemplar prints (randomly mixed for each examiner, with latent-exemplar pairs that did not match and pairs that did).  Astoundingly, of the pairs in which the latents matched the exemplars, only 45% were correctly identified.  The rest were either misidentified (13% were excluded when they should have been matched) or unresolved (a whopping 42% were found inconclusive when they should have been matched).  I recognize that when examiners are being tested they tend to exclude prints they might otherwise attempt to identify, but even with this in mind, the rate is staggering.  How many prints that should be matched are going unmatched in the plethora of fingerprint laboratories around the country?  Put another way, how many guilty perpetrators are set free because examiners were unable to match their prints?  Regarding the pairs of latent and exemplar prints that did not match, six were individualized (matched) when they should not have been — a 0.1% error.  Even if that error is representative of examiners in general (and there is plenty of reason to believe the real error rate is higher, according to the Habers), it is too high.  Put yet another way, if 100,000 prints are matched with a 0.1 percent false-positive rate, 100 individuals are going to be wrongly “fingered” as perpetrators.  And given the way juries ascribe infallibility to fingerprint matches, 100 innocent people are going to jail.
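The arithmetic behind these claims is worth making explicit.  The percentages below are the figures quoted from the Ulery et al. study; the 100,000-comparison caseload is the hypothetical scale used in the text.

```python
# Back-of-the-envelope arithmetic for the Ulery et al. figures quoted above.
true_match_identified   = 0.45   # genuine matches correctly individualized
true_match_excluded     = 0.13   # genuine matches wrongly excluded
true_match_inconclusive = 0.42   # genuine matches judged inconclusive

false_positive_rate = 0.001      # 0.1% of non-matching pairs individualized

# Sanity check: the three outcomes for genuine matches account for all pairs
assert abs(true_match_identified + true_match_excluded
           + true_match_inconclusive - 1.0) < 1e-9

# Fraction of genuine matches that never produce an identification
missed = true_match_excluded + true_match_inconclusive
print(f"{missed:.0%} of genuine matches go unidentified")  # 55%

# Scale the false-positive rate to a hypothetical caseload
comparisons = 100_000
wrongly_matched = comparisons * false_positive_rate
print(f"about {wrongly_matched:.0f} wrong matches in {comparisons:,} comparisons")
```

The point of the exercise is that even a rate that sounds tiny (0.1%) translates into a large absolute number of wrongly implicated people once fingerprint comparisons are performed at national scale.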

There are a host of problems with the Ulery study, including many design flaws.  For one thing, the only way to properly ascertain error is to submit "standards" as blinds within the normal flow of fingerprint casework (making sure the examiners do not know they are attempting to match known latent prints).  But there are many complications, beginning with the absence of any agreed-upon standards, or even rules for establishing what a standard is29.  I have had some significant and illuminating discussions with Lyn Haber on these issues.  Haber zeroed in on the problems at the most elementary level: "At present, there is no single system for describing the characteristics of a latent.  Research data shows [sic] that examiners disagree about which characteristics are present in a print."  In other words, there is no cutoff "value" that determines when a latent print is "of such poor quality that it shouldn't be used".  Haber also notes that "specific variables that cause each impression of a finger to differ have not been studied".

The obvious next step would be a "come to Jesus" meeting of the top professionals in the field, along with scientists like the Habers, to standardize the process.  That's a great idea, but none of the laboratory "players" are interested in cooperating — they are intransigent.  The most salient point Haber makes, in my opinion, concerns the desire of various agencies to actively keep the error unknowable.  She states that "The FBI and other fingerprint examiners do not wish error rates to be discovered or discoverable.  Examiners genuinely believe their word is the "gold standard" of accuracy [but we most assuredly know they make mistakes].  Nearly all research is carried out by examiners, designed by them, the purpose being to show that they are accurate. There is no research culture among forensic examiners.  Very very few have any scientific training.  Getting the players to agree to the tests is a major challenge in forensic disciplines."  I must conclude that the only way the problem will be solved is for Congress to step in and demand that the FBI admit it can make mistakes, work with scientists to establish standards, and adequately and continuously test laboratories (including its own) throughout the country.  While we wait, the innocent are most likely being sent to jail and many of the guilty go free.

A former FBI agent still working as a consultant (he preferred to remain anonymous) candidly told me that the FBI knows the accuracy of the various computer algorithms that match latents to exemplars.  He stated, "When the trade studies were being run to determine the best algorithm to use for both normal fingerprint auto identification and latent identification (two separate studies) there were known sample sets against which all algorithms were run and then after the tests the statistical conclusions were analyzed and recommendations made as to which algorithm(s) should be used in the FBI's new Next Generation Identification (NGI) capability."  But when I asked him if the data were available, he said absolutely not, "because the information is proprietary" (the NGI is the first stage in the FBI's fingerprint identification process – the computer matches the latent and sends the closest candidates to the examiners).  The error rate of the computer should not be proprietary – the public does not need to know the algorithm to understand the error on the algorithm.

Of course, computer analyses bring an additional wrinkle to the already complex determination of error.  Haber states, "Current estimates are such that automated search systems are used in about 50% of fingerprint cases.  Almost nothing is known about their impact on accuracy/error rates.  Different systems use different, proprietary algorithms, so if you submit the same latent to different systems (knowing the true exemplar is in the data base), systems will or will not produce the correct target, and will rank it differently… I am intrigued by the problem that as databases increase in size, the probability of a similar but incorrect exemplar increases.  That is, in addition to latents being confusable, exemplars are."  I would only emphasize that the FBI seems to know error rates on the algorithms but has not, as far as I know, released that data.
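Haber's intuition about growing databases can be illustrated with a textbook probability sketch of my own (the per-exemplar confusion probability p is entirely made up; only the trend matters): if each unrelated exemplar has some tiny chance p of being confusably similar to a given latent, then a database of n exemplars contains at least one look-alike with probability 1 − (1 − p)^n.

```python
# Illustrative only: p is a hypothetical chance that a single unrelated
# exemplar looks confusably similar to a given latent print.
def chance_of_lookalike(p: float, n: int) -> float:
    """Probability that a database of n unrelated exemplars holds
    at least one confusable look-alike."""
    return 1 - (1 - p) ** n

p = 1e-7  # made-up per-comparison confusion probability
for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} exemplars: {chance_of_lookalike(p, n):.1%}")
```

Even with p held fixed, the look-alike probability climbs from a few percent toward near certainty as the database grows by two orders of magnitude — exactly the confusability worry Haber raises.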

To be fair, I would like to give the reader a view from the FBI's perspective.  Here is what the former FBI agent had to say when I showed him comments made by various researchers: "When a latent is run the system generally produces 20 potential candidates based on computer comparison of the latent to a known print from an arrest, civil permit application where retention of prints is permissible under the law etc.  It is then the responsibility of the examiner from the entity that submitted the latent to review the potential candidates to look for a match.  Even with the examiner making such a 'match' the normal procedure is to follow up with investigation to corroborate other evidence to support/confirm the 'match'.  I think only a foolish prosecutor would go to court based solely on a latent 'match'… it would not be good form to be in court based on a latent 'match' only to find out the person to whom the 'match' was attached was in prison during the time of the crime in question and thus could not have been the perpetrator."  Mind you, he is a personal friend whom I respect, so I don't criticize him lightly, but he is touting the standard line.  Haber notes that in the majority of cases she deals with as a consultant, "the only evidence is a latent".

I suspect that the FBI, along with lesser facilities, does not want anyone addressing error because the courts might stop viewing fingerprints as reliable, indeed infallible, the way they currently do, and the FBI might have to go back and review cases where mistaken matches are evident.  As a research geochemist, I have always attempted to carefully determine the error involved in my rock analyses so that my research would be respected and reliable, and so that any hypothesis drawn from it would be based on reality.  We are talking about extraordinary procedures to determine the error on rock analyses.  And no one goes to jail if I am wrong.  I will leave you with Lyn Haber's words of frustration: "No lab wants to expose that its examiners make mistakes.  The labs HAVE data: when verifiers disagree with a first examiner's conclusion, one of them is wrong.  These data are totally inaccessible… I think that highly skilled, careful examiners rarely make mistakes. Unfortunately, those are the outliers.  I expect erroneous identifications attested to in court run between 10 and 15%.  That is a wild guess, based on nothing but intuition!  As Ralph [Haber] points out, 95% of cases do not go to court.  The defendant pleads.  So the vast majority of fingerprint cases go unchallenged and untested. Who knows what the error rate is?…  Law enforcement wants to solve crimes.  Recidivism has such a high percent, that the police attitude is, If [sic] the guy didn't commit this crime, he committed some other one. Also, in many states, fingerprint labs get a bonus for every case they solve above a quota… The research data so far consistently show that false negatives occur far more frequently than false positives, that is, a guilty person goes free to commit another crime.  The research data also show — and this is probably an artifact — that more than half of identifications are missed, the examiner says Inconclusive.  If you step back and ask, Are fingerprints a useful technique for catching criminals, [sic] I think not!  (These comments do not apply to ten-print to ten-print matching.)"

  1. The quote is from a government affidavit – Application for Material Witness Order and Warrant Regarding Witness: Brandon Bieri Mayfield, In re Federal Grand Jury Proceedings 03-01, 337 F. Supp. 2d 1218 (D. Or. 2004) (No. 04-MC-9071)
  2. Heath, David (2004) FBI’s Handling of Fingerprint Case Criticized, Seattle Times, June 1
  3. Wikipedia
  4. Kershaw, Sarah (2004) Spain and U.S. at Odds on Mistaken Terror Arrest, NY Times, June 5
  5. see the following for more details: Cole, Simon (2005) More than zero: Accounting for error in latent fingerprint identification: The Journal of Criminal Law & Criminology, 95, 985
  6. Actually the Daubert standard comes not only from Daubert v. Merrell Dow Pharmaceuticals but also General Electric Co. v. Joiner and Kumho Tire Co. v. Carmichael
  7. I can’t help but wonder what it was based on prior to Daubert.
  8.  It remains a mystery to me as to how a judge would have the training and background to ascertain if an expert witness meets the Daubert standard, but perhaps that is best left for another essay
  9. Federal Bureau of Investigation (1985) The Science of Fingerprints: Classification and Uses
  10. What I mean by unfalsifiable is that even if we could analyze all the fingerprints of all living and dead people and found no match, we still could not be absolutely certain that someone might be born someday with a fingerprint that would match someone else.  Some might think that this is technical science speak but in order to qualify as science the rules of logic must be rigorously applied.
  11. Cole, Simon (2007) The fingerprint controversy: Skeptical Inquirer, July/August, 41
  12. Scarborough, Steve (2004) They Keep Putting Fingerprints in Print, Weekly Detail, Dec. 13
  13. National Research Council of the National Academies (2009) Strengthening Forensic Science in the United States: A Path Forward: The National Academy of Science Press
  14. Error rate as used in the Daubert standard is somewhat confusing in scientific terms.  Scientists usually determine the error in their analyses by comparing a true value to the measured value, inserting blanks that measure contamination, and usually doing up to three analyses of the same sample to provide a standard deviation about the mean of potential error for the other samples analyzed.  For example, when measuring the chemistry of rocks collected in the field, my students and I have used three controls on analyses: 1) standards, which are rock samples with known concentrations determined from many analyses in different laboratories by the National Institute of Standards and Technology, 2) what are commonly referred to as "blanks" (the geochemist does all the chemical procedures she would do without adding a rock sample in an attempt to measure contamination), and 3) analyzing a few samples up to three times to determine variations.  All samples are "blind" – unknown to the analyzers.  The ultimate goal is to get a handle on the accuracy and precision of the analyses.  These are tried and true methods and, as I argue in this essay, a similar approach should be taken for fingerprint analyses.
  15. Kennedy, Donald (2003) Forensic science: Oxymoron?, Science, 302, 1625.
  16. Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) 509 US 579, 589
  17. Stahl, Lesley (2003) Fingerprints 60 Minutes, Jan. 5.
  18. Berry, Steve (2002) Pointing a Finger: Los Angeles Time, Feb. 26.
  19. see ref. 13
  20. Haber, L. and Haber, R. N. (2004) Error rates for human latent fingerprint examiners: In Ratha, N. and Bolle, R., Automatic Fingerprint Recognition Systems, Springer
  21. Ashbaugh, D. R. 2005 Proposal for ridge-in-sequence: http://onin.com/fp/ridgeology.pdf
  22. Haber, L. and Haber, R. N. (2009) Challenges to Fingerprints: Lawyers & Judges Publishing Company
  23. see ref. 5
  24. One of the biggest criticisms of the fingerprint community comes from the lack of blind tests — fingerprint analyzers often know the details of the case.  Study after study has shown that positive results are obtained more frequently if a perpetrator is known to the forensic analyzers – called expectation bias: see, for example, Risinger, M. D. et al. (2002) The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90 California Law Review
  25. Haber, R. N. and Haber, N. (2014) Experimental results of fingerprint comparison validity and reliability: A review and critical analysis: Science and Justice, 54, 375
  26. see The Report of the Expert Working Group on Human Factors in Latent Print Analysis (2012) Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach: National Institute of Standards and Technology
  27. see ref. 24
  28. Ulery, B. T., Hicklin, R. A., Buscaglia, J., and Roberts, M. A. (2011) Accuracy and reliability of forensic latent fingerprint decisions: Proceedings of the National Academy of Sciences of the U.S.
  29. see ref. 14

The asbestos coverup

When the World Trade Center was being built in 1973, Dr. Irving Selikoff, an expert on asbestosis and the cancers caused by asbestos, was an outspoken critic of the wholesale spraying of the floors of the two structures with insulation containing copious quantities of asbestos for fireproofing.  He knew the dangers of asbestos, as did the asbestos industry.  Fortunately, not all of the floors were insulated, because New York City banned the spraying of asbestos that same year.  Fast forward almost 30 years to the plumes of dust that rolled over lower Manhattan after the collapse of the World Trade Center towers on 9/11.  The brave souls who rushed to help survivors and participate in the cleanup, along with the many people who lived and worked in the area, were exposed to one of the most serious carcinogens ever documented – asbestos in its many forms.  One of the most deadly results of inhaling the tiny asbestos fibers that permeated the World Trade Center clouds is the nearly always fatal cancer mesothelioma (known to be caused only by asbestos).  Unfortunately, the cancer often shows up decades after exposure.  What many people do not realize is that asbestos has still not been banned in the United States, even though the asbestos industry has known internally since at least the 1930s that it was not only harmful but deadly.  The asbestos executives and their hired doctors promulgated a disinformation campaign that asbestos was and is harmless, knowing full well that these claims were patently wrong1.

Selikoff first came to prominence in 1964 when he organized an international symposium on the "Biological Effects of Asbestos" through the New York Academy of Sciences.  Selikoff, through his position as director of the Environmental Sciences Laboratory at Mount Sinai Hospital in New York, was able to persuade the International Association of Heat and Frost Insulators & Asbestos Workers union to provide him with workers' medical profiles2.  He presented four papers at the conference on the results of his epidemiological studies of the union workers.  There was no mistaking his results — working with asbestos insulation increased deaths by 25 percent, from not only mesothelioma but asbestosis, lung cancer, and even cancers of the stomach, colon, and rectum.  His independent research could not be buried by the asbestos industry the way its own subsidized research had been, and Selikoff's results were reported widely in the press.  Selikoff's team even found that insulator workers who smoked were ninety times more likely to get some form of asbestos-related cancer than those workers who did not smoke.

I don't want to appear sanctimonious, but the dangers of asbestos that Selikoff and others reported in 1964 should have given the asbestos industry pause – maybe even forced it to attempt to improve working conditions.  But as in other industries facing similar threats, the asbestos executives circled the wagons and then went on the offensive.  Lawyers for the Asbestos Textile Institute (the asbestos industry's public relations arm for promoting asbestos products) sent letters to the New York Academy of Sciences and Selikoff warning them about the impact of their "damaging and misleading news stories".  Their smear campaigns began by attacking Selikoff's medical credentials and the quality of his work.  For years, the asbestos industry stalked Selikoff and others at conferences and meetings, attempting to undermine their work.  More details can be found in Jock McCulloch and Geoffrey Tweedale's outstanding book entitled Defending the Indefensible: The Global Asbestos Industry and its Fight for Survival.

It is astounding the lengths the asbestos industry went to in order to suppress information it deemed adverse and to circulate disinformation cranked out by its hired doctors and researchers.  Asbestos executives also turned to the largest public relations firm in the world – Hill & Knowlton – a sort of hit squad with a ubiquitous presence in undermining science damaging to its clients, which included Big Tobacco3.  But in what can only be described as turpitude, the companies led these disinformation campaigns while laborers in a whole slew of industries, from mining to textiles, worked in deplorable conditions that caused sickness and death.  In the Libby mine in Montana, for example, fibrous asbestos dust was so thick in some areas of the open-pit mine that it was hard for workers to see each other.  The dust blew into the nearby town, causing asbestos illness and death among residents (the Libby mine was eventually closed due to the huge number of tort claims by families struck by illness and death related to the operations).  It was common for the industry to fire workers who developed asbestosis or cancer to avoid the appearance of illnesses related to asbestos.  When it became clear to the industry that mesothelioma was a serious public relations nightmare, its public relations machine went into full overdrive, focusing on two strategies: 1) reassuring people that asbestos-related diseases were caused only by the inhalation of large amounts of fiber dust over long periods of time (internal memorandums clearly show that the companies involved knew this was not true), and 2) foisting on the public the argument that mesothelioma was the result of blue asbestos and that other types of asbestos, such as chrysotile, were safe (once again, internal memorandums show that the companies knew this to be patently untrue).

The diagram below shows world production numbers for asbestos from 1900 through 2015.  One might think that the asbestos industry would have been crippled by Selikoff's research reported in 1964.  But production actually increased through the 1960s and kept increasing into the late 1970s, before tort claims began to impact the industry.  Even today, worldwide production has not dropped below the early-1960s output, due mostly to production in developing nations.  The diagram is a testimonial to the asbestos industry's success at undermining solid scientific research with political clout and the financial resources to promote its agenda – that asbestos is safe.  We have seen the same thing in many other industries, from Big Tobacco with smoking to Exxon with global warming.  McCulloch and Tweedale make a salient point: "Put another way, nearly 80 per cent [sic] of world asbestos production in the twentieth century was produced after the world learned that asbestos could cause mesothelioma!"

Data from Virta4 for 1900 through 2003, Virta for 2004 through 2006 (consumption), and Statista for 2007 through 2015.

Imagine that you are the mayor of a small town dependent on tourism, and doctors in the village are reporting an outbreak of a bacterial disease that is killing 40 percent of those infected.  You decide that reporting the disease to the CDC or WHO would harm the financial health of your town, and you seek to suppress the seriousness of the outbreak.  You tell tourists they have nothing to worry about and chastise the local news affiliates, telling them they are acting hysterically and causing undue panic.  Would anyone deny that you were guilty of a serious criminal act?  This is essentially what the asbestos industry did over many decades, and yet no one in the asbestos industry has served a day of jail time for these actions.  In fact, the industry was so successful in its disinformation campaign that even today, as mentioned above, asbestos is not banned in the US, even though cheap substitutes exist and asbestos has been banned in other industrial nations such as France and Britain.  I asked Dr. Jock McCulloch why, and his response is telling: "There is no easy answer to your question nor to the adjacent one as to why 2 million tons of asbestos will be mined and used globally during 2016. One of the key factors has been the corporate corruption of the science (which began in the 1930s) and the other is the baleful behaviour of Canada at international forums – due in the main to federal/Quebec politics. And then there is Russia, its political climate and anti-western reflexes."  Both Canada and Russia have been, and are, huge producers of asbestos, and Canada, with the help of scientists at McGill University funded by the asbestos industry (one of the reasons why scientists should remain independent in their research), has been instrumental in persuading other governments to act gingerly against asbestos interests.

Distressing research now shows that even trivial exposure to asbestos can cause cancers.  The Harvard paleontologist Stephen Jay Gould died of cancer caused by asbestos fibers, perhaps from asbestos within ceiling tiles.  Actor Steve McQueen died at the age of 50 from mesothelioma, probably from asbestos exposure when he worked in a brake repair shop (brakes are lined with asbestos).  Many instances of cancer among family members of miners and other laborers in the asbestos industry have been attributed to exposure to asbestos fibers brought home on clothing.  I think about the lives destroyed by asbestos when I read the words of McCulloch and Tweedale: "Central to the strategy was a policy of concealment and, at times, misinformation that often amounted to a conspiracy to continue selling asbestos fibre irrespective of the health risks."  I might add that attempts to force the asbestos industry to warn its workers about the dangers of asbestos were averted.  And although most mining and manufacturing has moved out of industrialized nations, the developing world has picked up the slack — places like Swaziland, where laborers have few protections and little legal recourse for compensation from asbestos illnesses.  Records turned up through litigation show that industry officials thought black workers were far less sophisticated than those in the US or Europe about hazards to their health and sought to take advantage of them.

Stephen Jay Gould and Steve McQueen

Sadly, the large asbestos companies (18 in all) were able to avoid paying thousands of tort claims in the US by declaring bankruptcy under Chapter 11.  Bankruptcy normally implies that a company is insolvent, but due to the Manville Amendment, passed by Congress in 1994 to help the asbestos industry, companies need only show that future liabilities exceed their assets in order to declare bankruptcy.  The insurance companies pulled a similar "fast one" by shuttling liabilities into shell companies that also declared bankruptcy.  I am very much for free and open trade, but companies should be held responsible for travesties, and the bankruptcy claims are tantamount to highway robbery, in my humble opinion.  Many of those who lost out on benefits and claims were already on the edge of poverty from unemployment and the medical costs of their ailments.  I might also point out that the American taxpayer is the ultimate source of support for these workers and their families, because the asbestos companies were able to weasel their way out of their responsibilities to their employees and/or those harmed by their products.  It is worth reminding the reader that an estimated 15 to 35 million homes contain Libby asbestos as insulation.  Asbestos is a problem that is not going away quickly.

I understand that industries like asbestos employ a large number of people (at one time in the 1960s, more than 200,000 people worked in the asbestos industry) and many of these workers would have difficulty finding new jobs elsewhere if the industries were closed overnight.  But there are various steps, based on what we have learned from the asbestos travesty, that should be taken when future industries are found to be responsible for harm to their workers.  1) It should be a crime to purposely mislead the public and/or workers on the safety of products.  This must include the purposeful undermining of peer-reviewed science.  The penalties should be stiff and include jail time.  Laws need to be enacted accordingly.  2) Workers and their families need to be informed of the dangers in clear language so that they may decide whether they wish to take the risk of continued employment in the industry.  3) In cases like asbestos, where the product is clearly a dangerous hazard, it should be phased out through substitution of other products and eventually banned.  4) Workers and those harmed by the product should be entitled to compensatory damages through the establishment of funds negotiated with the government.  5) And finally, American companies should be prohibited from moving their operations to nations with lax laws that permit workers to be exposed to hazardous products.  If corporate America can't police itself (and I don't think it can, based on the tales of woe involving tobacco, pesticides, global warming, etc.), the government must step in.

  1. McCulloch, J. and Tweedale, G. (2008) Defending the Indefensible: The Global Asbestos Industry and its Fight for Survival: Oxford University Press
  2. Selikoff recruited Dr. E. Cuyler Hammond who had already published his landmark research on the link between smoking and lung cancer
  3. Oreskes, N. and Conway, E. M. (2010) Merchants of Doubt: Bloomsbury Press
  4. Virta, R. L. (2006) Worldwide Asbestos Supply and Consumption Trends from 1900 through 2003: USGS Circular 1298

The new black gold – fracked methane gas and oil

The term fracking conjures up so many knee-jerk-bad reactions that I am hesitant to broach the subject.  I suppose if I am going to wade into the topic I should give some bona fides to display my knowledge of the petroleum industry, but not too many bona fides, lest I be seen as a talking wonk for the gas industry.  I worked for one year as an engineer for a well service company called Schlumberger (the world's largest) and two years as a geologist with Shell Oil.  Shell gave its geologists full responsibility for a well, from the time it was proposed through production if it hit oil.  Of the 11 wells I proposed, 3 hit oil, which was above the industry standard for producing fields in the late 1970s and early 1980s.  Eventually I realized my calling was in teaching and research and left to go back to school for my PhD.  But not before I got a pretty good idea of how the industry works.

The process of drilling is not complicated, although the devil can be in the details.  A rig contains strings of thirty-foot drill pipe that attach to a tri-cone tungsten carbide bit (see the image below).  The bit is spun by drives or motors as drilling fluid, called mud (contents vary, but clay, water, and lubricants are typical), is pumped through the pipe string to keep the bit cool, increase pressure, and carry the rock cuttings from drilling back to the surface along the outside of the pipe.  One of the technological marvels developed in modern times is the ability to direct the drill bit to specific locations with pin-point accuracy by knowing where the bit is in three-dimensional space, usually thousands of feet below the surface.  Directional surveying is complex but is based on measurements taken by various instruments while drilling.  These advances have enabled the horizontal drilling that has become important in fracking.

Rama, Wikipedia

I would be remiss not to emphasize the importance given to protecting the water table when drilling.  State and federal regulations require the well to be sealed off to at least 50 feet below the depth at which potable groundwater can be produced, and those laws have been in place as far back as anyone can remember.  The drill pipe is tripped (pulled completely out of the hole) when regulators deem the surface casing should be set to protect the water table (usually something on the order of 500 feet).  The casing is cemented in place, and when this is done correctly, we know from the drilling of hundreds of thousands of wells over many decades that the water table is protected.  After the surface casing is set, drilling continues until the target zone is reached.  The pipe is tripped again and the entire well is generally lined with cemented production casing.  The hole is plugged at the bottom, usually up to 50 feet below the horizon of interest.  The casing is then perforated by tools that blow holes in it precisely where the rock containing oil and/or gas exists.  Lisa Margonelli has written an excellent book entitled Oil On the Brain about the details of drilling and its impact on the politics of countries like Nigeria and Venezuela1.

When I worked for Schlumberger, it was my job to determine, by running tools in the hole, whether production casing should be set.  The measurements produced records called well logs that told us not only about the rock below but whether it contained producible oil.  Drilling is a chancy business, not for the faint of heart.  Most wells never produce a drop of oil.  I have seen many an owner of a wildcat well near tears as he realized from the logs that the well was a "duster".  That has changed to a great extent in the new world order of gas and oil production through fracking.  The new targets — usually oil shales — were discovered decades ago by previous drilling.  They were ignored because shales do not naturally flow under the pressures at depth.  Shale is very porous but not permeable, and you need permeable rocks to produce oil and/or gas — or so it was thought.

That was before Mitchell Energy, a midsized exploration and production company, drilled the S. H. Griffin #4 well in North Texas into the oil- and gas-rich Barnett Shale in 1997.  They used fracking techniques to produce large quantities of methane gas from what was traditionally seen as non-producible rock.  If you are interested in more of the details, read Gary Sernovitz’s immensely entertaining and witty book The Green and the Black2.  Sernovitz, even with ties to the petroleum industry, takes a rather neutral approach to adjudicate the brouhaha over fracking.  One of the highlights of the book is his look at the impacts of the new United States gas and oil reserves on the political and economic scene.

The S. H. Griffin #4 not only produced gas, it produced it in steady quantities (1.5 million cubic feet per day).  So how does fracking make an otherwise impermeable rock produce as if it were a well at the height of the 1960s oil boom in the United States?  Fracking sounds ominous and sinister, conjuring up visions of rock being fractured all the way up to potable water zones.  But it is nothing of the sort — pure fiction.  The technique took decades of testing and experimentation in wells to develop.  The secret is hydraulic pressure from fluids injected into the well to fracture the shale.  The fracturing is usually limited to a radius of about 300 feet around the drill hole.  And don't forget, the drill holes typically go down thousands of feet below the surface and are protected with cemented casing that has been perforated only in small sections, usually at the bottom of the hole where the target rock exists.

It did not take companies long after fracking became successful to incorporate horizontal drilling, another United States technological advance, into the new smorgasbord of production proficiencies.  With the ability to target a bit within inches of a desired location, drillers learned how to gradually arc the pipe into the horizontal (see image below).  The technology turned out to be a bonanza when combined with fracking.  Companies drilled and set casing directly within, and parallel to, the oil shales, enabling them to frack large sections of the rock, which sent production through the roof.


The chemicals used in fracking were originally a trade secret, but people talk, and once word was out, companies like Halliburton published the composition of their fracking fluids.  It turns out that 90 percent of the frack fluid is water, 9.5 percent is a proppant (usually sand), and only 0.5 percent consists of the scary chemicals often used to undermine the industry.  The sand keeps the fractures (caused by the pressurized fluid) propped open so gas and/or oil will flow.  I am not going to pull punches here.  It takes a lot of water to frack a well.  Sernovitz estimates that a typical frack (an average of 22 stages) uses between 4 and 8 million gallons of water and about 6 million pounds of sand.  Unfortunately, not all of the fracking fluid stays in the hole.  Some resurfaces.  Today the water that comes back is reused or disposed of by pumping it into formerly producing fields in a concerted effort to keep the chemicals in the water (even if they are only 0.5 percent) out of harm’s way.
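The proportions quoted above are easy to sanity-check.  Here is a small sketch (assuming, since the article does not say, that the 90 / 9.5 / 0.5 percent split is by volume; the function name is my own):

```python
# Back out total fluid, proppant, and chemical volumes from the water volume,
# using the 90% water / 9.5% proppant / 0.5% chemicals split quoted above.
# Assumes the split is by volume; uses the midpoint of the 4-8 million
# gallon water estimate attributed to Sernovitz.

WATER_FRACTION = 0.90
PROPPANT_FRACTION = 0.095
CHEMICAL_FRACTION = 0.005

def frack_volumes(water_gallons: float) -> dict:
    """Estimate total fluid, proppant, and chemical volumes from water volume."""
    total = water_gallons / WATER_FRACTION
    return {
        "total": total,
        "proppant": total * PROPPANT_FRACTION,
        "chemicals": total * CHEMICAL_FRACTION,
    }

vols = frack_volumes(6_000_000)  # midpoint of 4-8 million gallons
print(f"Total fluid: {vols['total']:,.0f} gallons")
print(f"Chemicals:   {vols['chemicals']:,.0f} gallons")
```

Even at the midpoint water estimate, the chemical fraction works out to tens of thousands of gallons per frack, which is why careful disposal of the flowback matters despite the small percentage.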

It has been widely reported that fracking causes earthquakes.  Actually, it is the disposal of wastewater pumped into the ground (much of it from fracking) that causes the seismic activity.  Perhaps it seems like a trivial distinction, but the public seems to have the idea that the pressure from fracking is so great that it directly causes earthquakes.  The increased seismic activity in a state like Oklahoma is usually effectively mitigated by diverting injection away from the fields responsible for the activity or requiring the water to be disposed of by other methods.  There can be little doubt that the earthquakes are associated with wastewater injection, and regulatory commissions need to fully address the problem.

The HBO premiere of Gasland, a 2010 documentary about the natural-gas industry in general and fracking in particular, was probably responsible, at least in part, for New York State banning fracking and for a great deal of misunderstanding about natural gas and its impact on the environment.  I have two conflicting opinions about the documentary by Josh Fox.  1) It is clearly tarnished by misrepresented science, almost hysterical overreaction, and historical inaccuracies.  The documentary has been thoroughly taken to task by Energy in Depth.  2) Having said that, there is no question that it is emotionally moving.  It was difficult to watch people whose lives have been badly impacted by the failures of the gas industry.  My conclusion: Gasland was necessary to open a national debate about the issue, one that has led to more government oversight and fewer rogue shortcuts leading to serious problems.  Although there will always be problems associated with any industry, drilling for natural gas and/or oil on land in the United States poses relatively little danger to groundwater.  We simply have to make sure that casing practices are properly implemented.  Water taps catching fire in Dimock, Pennsylvania, happened because of sloppy cement work and poor casing in 27 holes during the early days of drilling in the state (gas leaked through the casing into the surrounding water table).  I find it reprehensible that companies would not protect the water table at all costs, and I fully agree that the companies cited deserve the penalties they received and the payouts they had to make to the people they injured.

Finally, I need to emphasize that in 2015 the Environmental Protection Agency (EPA) released a summary report entitled Assessment of the Potential Impacts of Hydraulic Fracturing for Oil and Gas on Drinking Water Resources and concluded that “Assessment shows hydraulic fracturing activities have not led to widespread, systemic impacts to drinking water resources”.  We can conclude that the gas industry has made mistakes, but we cannot contend that our drinking water is in danger because of fracking, despite claims to the contrary in sources like Gasland.

Let’s not forget why Fox started filming the documentary – to protect his vacation home in a pristine part of Pennsylvania near the border with New York.  I get it.  No one wants a drill rig in their back yard, even if it is only there for 40 days’ worth of drilling.  By the way, if you want to read a reasoned and enlightening book about how people are adversely affected by drilling, I recommend Seamus McGraw’s The End of Country: Dispatches from the Frack Zone3.  He weighs the potentially bad impacts of drilling with a healthy dose of understanding that gas and oil companies are filling a demand created by the United States and other world consumers.  Unfortunately, Fox never examines the financial impacts of shutting down the fracking industry.

I recently wrote an article on the serious implications of global warming, particularly the increase of anthropogenic gases in our atmosphere.  Of the three major fossil fuels, coal is by far the worst emitter of carbon dioxide, followed by petroleum.  Natural gas is the least (see figure below showing the effects of anthropogenic gases as radiative forcing).  In fact, Sernovitz has emphasized that “the United States has led the world in carbon dioxide emissions reduction because of shale gas [use of methane gas instead of coal]”.


IPCC Fifth Assessment Report 2013

It would be unfair not to point out that methane also leaks directly into the atmosphere during the production of natural gas, contributing to anthropogenic greenhouse gases, but according to the EPA in a report entitled Overview of Greenhouse Gases: “Methane (CH4) emissions in the United States decreased by 6% between 1990 and 2014.”  During the period from 2007 to 2014, natural gas production increased tenfold according to the US Energy Information Administration database.  The EPA goes on to comment that “During this time period [1990 to 2014], emissions increased from sources associated with agricultural activities, while emissions decreased from sources associated with the exploration and production of natural gas and petroleum products.”  Note the lack of effect from the natural gas boom between 2007 and 2014 in the graph below showing total United States methane emissions (converted to carbon dioxide equivalents).  In a paper funded by the green-friendly Environmental Defense Fund (EDF) and published in the Proceedings of the National Academy of Sciences, Allen et al.4 estimated, from measurements at 190 onshore gas sites, that about 0.42 percent of the methane produced leaks during drilling and completion of the wells.  The EPA is working with the gas companies to further reduce this figure, but, once again, the leakage is hardly having the impact sources such as Gasland have portrayed.
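To get a feel for the scale of that 0.42 percent leak rate, we can apply it to a single well.  A minimal sketch, using the S. H. Griffin #4 production figure quoted earlier (the function name is mine; this is purely illustrative, not how Allen et al. computed their estimate):

```python
# What a 0.42% leak rate (Allen et al.) implies for one well producing
# 1.5 million cubic feet of gas per day (the S. H. Griffin #4 figure).

LEAK_RATE = 0.0042  # fraction of produced methane lost

def daily_leak(production_cf_per_day: float) -> float:
    """Methane leaked per day, in cubic feet, at the measured leak rate."""
    return production_cf_per_day * LEAK_RATE

leak = daily_leak(1_500_000)
print(f"Leaked: {leak:,.0f} of {1_500_000:,} cubic feet produced per day")
```

A few thousand cubic feet per day is not nothing, which is why the EPA effort to push the figure down matters, but it is a long way from the wholesale venting some critics imply.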


The oil production in thousands of barrels per day since 1966 from the top ten oil-producing countries (as of 2015) is shown in the diagram below.  One of the most startling aspects of the graph is that the United States has become the world’s largest producer of oil.  It’s not Saudi Arabia or Russia; it’s the United States.  What is even more remarkable is that our world lead came through good old-fashioned American know-how — the technology that enabled United States producers to frack horizontally.  I am no flag waver, but there is no denying how the United States has transformed itself.  The halcyon days of the 1960s, when the United States led production worldwide, were thought to be gone forever (see figure).  By the early 1980s, even secondary recovery processes in declining oil fields could not boost American production.  Our decline in oil production continued until about 2005, when the effects of fracking began to be felt.  The dramatic impact of that technology can be seen in the subsequent rise in production over the last 10 years in the graph below.  Our increased production still does not meet our ever-increasing demand, but it helps our trade deficit and decreases our dependence on oil from the troubled Middle East and a hostile Russia.  Along with the increase in oil production, we have also become the world’s leader in the production of natural gas (don’t forget that both oil and natural gas have less impact on climate change than coal).

Data from BP

I asked Gary Sernovitz what he thought about America’s new role as a leading oil and natural gas producer: “One of the strange things about the gas boom is that even as prices have gone down, and activity has gone down (because of low prices), volumes have still gone up—a credit to how productive have been [sic] the wells in the Northeast US.  This year [2016] gas production is down slightly, but we’re still producing 34% more than the Russians so no risk of losing our crown. 2015 was the year that we exceeded Saudi Arabia in total oil production, and became the world’s largest oil producer. We’ve temporarily lost that crown in 2016, but I’d expect [our] prices to recover for that leadership to happen again soon.  And I do think we’re still by far the largest oil and gas producer, despite the dip in oil production because of prices, as we’re far ahead of Russia on oil now too.”

So let me summarize the article by stating categorically that we need to curb anthropogenic gases (carbon dioxide, methane, etc.).  But attempting to shut down the oil and gas industry in the United States because of fracking and/or to solve the climate change problem is like trying to take out a drug cartel to stop drug usage in the United States.  The only way we are going to reduce our dependency on oil and gas is to reduce our ever-increasing need for it.  Fracking is relatively safe for the consumer and looks to be giving America another chance to remain less dependent on other suppliers while we find alternative sources to replace, or at least curb, America’s craving for energy.

  1. Margonelli, L. (2007) Oil on the Brain: Adventures from the Pump to the Pipeline: Doubleday
  2. Sernovitz, G. (2016) The Green and the Black: The Complete Story of the Shale Revolution, the Fight over Fracking, and the Future of Energy: St. Martin’s Press
  3. McGraw, S. (2011) The End of Country: Dispatches from the Frack Zone: Random House
  4. Allen, D. T. et al. (2013) Measurements of methane emissions at natural gas production sites in the United States: Proceedings of the National Academy of Sciences, 110, 17768–17773

Diamond rush

A few lucky souls have stumbled on diamonds in glacial debris around the Great Lakes and further north into Canada for centuries.  Geologists have known that the sources of those diamonds represented a vast wealth of hidden treasure somewhere in the frozen tundra of northern Canada, but it was not until the late 1980s that a couple of cowboy geologists, Chuck Fipke and Stewart Blusson, painstakingly ferreted their way back to the source. But I am getting way ahead of the story.

Diamonds are brought to the surface from deep within the upper mantle by unusual igneous rocks called kimberlites (and sometimes lamproites).  I recognize I run the risk of losing my readers by delving into the nature of kimberlites, but to a geologist like me, kimberlites are crazy rocks.  Typical magmas (and lavas) like basalt form by partial melting of the mantle.  Kimberlites also form by partial melting of the mantle, but they are geologically unique because the degree of melting is great enough that they compositionally resemble (though not precisely) the mantle itself.  They are referred to as ultramafic rocks, as compared with basalts, which are mafic (mafic means rich in magnesium and iron – two of the most abundant elements in the mantle).

Diamonds actually don’t form in kimberlites.  Think of kimberlites as a conveyor belt bringing diamonds, which form under high temperatures and pressures (at depths of about 125 to 175 kilometers1), to the surface relatively fast, before they can reequilibrate (break down) into other compounds like graphite or carbon dioxide.  Diamonds are not forever.  Many an exploration program has had its hopes dashed by the discovery of kimberlite full of octahedral or other cubic forms of graphite — degraded diamonds2.

Exploration for diamonds can be excruciatingly frustrating.  There are 6,400 known kimberlite pipes worldwide, but only 30 or so have become viable mines — that’s about a 0.5% chance that a discovered kimberlite will turn into a producing mine.  It’s true, diamondiferous kimberlites are hard to find, but you don’t need many diamonds to make a mine.  Even high-grade diamond kimberlites contain only a few carats per ton of rock.  That’s enough to make any geologist rich beyond her dreams.  Kimberlites form at depths greater than 200 kilometers (200 to 600 km) and are enriched in volatiles (e.g., carbon dioxide and water) that make the magmas not only buoyant but explosive.  They literally “blow” through the upper mantle and crust in perhaps a matter of hours (postulated rates are about 14 km/hr), forming carrot-shaped pipes called diatremes (see the diagram below).  The faster the better for diamond preservation.  But they also have to pick diamonds up along the way or incorporate them as the magma forms.  Kimberlites can contain as much as 25 to 50 percent entrained rock within their magma, acting as an elevator to the surface for mantle material and helping geologists understand the mantle3.
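The numbers above are worth a quick check: the odds that a discovered pipe pays off, and what "a matter of hours" actually means at the postulated ascent rate.  A small sketch using only figures from the text:

```python
# Two quick checks on the figures quoted above: the odds a discovered
# kimberlite pipe becomes a mine, and the ascent time at ~14 km/hr.

KNOWN_PIPES = 6400
VIABLE_MINES = 30
ASCENT_RATE_KM_PER_HR = 14

mine_odds = VIABLE_MINES / KNOWN_PIPES
print(f"Chance a pipe becomes a mine: {mine_odds:.2%}")

# Ascent from the middle of the diamond formation zone (~150 km,
# per the 125-175 km range cited earlier).
depth_km = 150
hours = depth_km / ASCENT_RATE_KM_PER_HR
print(f"Ascent from {depth_km} km: about {hours:.0f} hours")
```

The ratio comes out just under half a percent, matching the article's "about 0.5%", and the ascent works out to roughly half a day, consistent with "a matter of hours" and fast enough, geologically speaking, to keep the diamonds from degrading.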

Volcanic pipe diagram (Wikipedia)

After half a century or more of serious diamond exploration, we have learned that diamond-bearing kimberlites form below the cratons, the ancient regions of continents containing rocks greater than 2 billion years old.  There is still great debate about how the cratons formed, but every continent is rooted in these ancient environs.  If you are looking for diamonds, go to the cratons.  Before the 1980s, diamond kimberlite mines had been developed on the cratons of every continent except Antarctica and North America.  Diamonds come from two major source rocks: mantle rock (e.g., peridotite) and eclogite (metamorphosed basalt).  Diamond formation in peridotites occurred primarily in the Archean, centered on a time about 3 to 3.3 billion years ago, though some dates are as young as 1.9 billion years ago.  Eclogite diamonds tend to be younger, from 1 to 2.9 billion years ago.

Where does the carbon come from to form diamonds?  No one knows for sure, but most researchers think that the carbon, along with sediments and volatiles, was subducted through plate tectonics (the eclogites brought up by kimberlites are likely ancient subducted ocean floor)4.  Through my own research, I am interested in how the cratons formed and when subduction began.  Many geologists pooh-pooh the idea that subduction could have begun so early in Earth’s history, so it is satisfying to see how diamond research supports the early existence of plate tectonics and subduction.  My colleagues and I have contended for years that the cratons are the result of ancient subduction.

Imagine Chuck Fipke in the 1980s looking out over the vast expanses of northern Canada contemplating all the diamonds he believed had to be out there in the craton hidden below tons of glacial deposits.  Those damnable glacial deposits were the reason no one had discovered pipes in Canada5.  The map below shows the furthest extent of the glaciers 17,000 years ago and the site of the diamond pipes eventually discovered.  Fipke also had to contend with De Beers, the giant cartel that controlled the world’s diamond markets. They were actively exploring with their practically unlimited resources.  I worked for De Beers as a consulting geologist for a time in the mid 1990s in Russia, and I can assure you, they are a force to be reckoned with.

Base map from Wikipedia

By the mid 1980s, geologists had discovered that the mantle material brought up by kimberlites could aid them in their exploration, thanks to a geochemist named John J. Gurney at the University of Cape Town.  Diamonds form in equilibrium, at specific temperatures and pressures, with other minerals far more abundant than diamonds.  Gurney, funded by Superior Oil, analyzed extensive mineral assemblages from kimberlites with and without diamonds and found chemical signatures in the minerals that show up when diamonds are present.  One of the more famous diagrams plots the chromium and calcium concentrations in garnets from these mineral assemblages.  Garnets fall into two groups on the diagram, called G10 and G9, and virtually all garnets that occur with diamonds fall within the G10 field shown below.  As mentioned before, diamonds can reequilibrate in kimberlites and become graphite or escape as carbon dioxide.  The diagram shows the line of stability under chromium saturation where diamonds will break down.  Some diamonds remain stable in the graphite field because the conditions do not last long enough to degrade them.  But if G10 garnets fall above the diamond-graphite equilibrium line, it is a pretty sure bet you are on the right track for diamondiferous kimberlites.  And that is precisely what Fipke kept finding in his samples of glacial debris as he flew along with Blusson (who not only has a PhD but is a pilot), periodically sampling them.  The long-gone glaciers were pointing the way.
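Gurney's screen lends itself to a simple programmatic filter: a garnet is a G10 candidate when its chromium content is high relative to its calcium content.  The sketch below is conceptual only; the default slope and intercept are placeholders of my own invention, not the published G9/G10 calibration, which should be taken from Nowicki et al. (2007) or Gurney's original papers.

```python
# Conceptual G10 garnet screen.  A garnet plots in the G10
# (diamond-associated) field when Cr2O3 is high relative to CaO.
# NOTE: the default slope and intercept are ILLUSTRATIVE PLACEHOLDERS,
# not the published calibration -- substitute real coefficients before use.

def is_g10(cr2o3_wt: float, cao_wt: float,
           slope: float = 5.0, intercept: float = -12.0) -> bool:
    """True if the garnet falls on the high-Cr side of a Cr2O3-vs-CaO boundary."""
    return cr2o3_wt > slope * cao_wt + intercept

# High-Cr, low-Ca garnet: a G10 candidate under these placeholder coefficients.
print(is_g10(10.0, 2.0))
# Low-Cr, high-Ca garnet: falls in the G9 field.
print(is_g10(1.0, 5.0))
```

In practice an exploration program would run thousands of microprobe analyses through a filter like this and map where the G10 hits thin out, exactly the "walking back to the source" strategy described below.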

After Nowicki et al., 2007

By the mid 1980s, then, geologists understood the relationships between these indicator minerals and diamonds, but how could the information be used to find the kimberlites in the Canadian craton?  What was unique about Fipke and his partner Blusson was the way they approached the problem.  They knew that the glaciers had been powerful enough to gouge out the relatively soft kimberlite and carry the indicator minerals long distances, destroying any signs of the kimberlites at the surface and subsequently burying them under debris dropped by the glaciers when they melted.  They reasoned that they might be able to sample glacial deposits and “walk” the indicator minerals back to their source.  Standard Oil liked the idea and funded their exploration at first.  No one knew then that it would take eight years, millions of exploration dollars, and several companies before they hit pay dirt.  De Beers’ geologists also knew the answer was in the glacial remains, but to them it was a nine-to-five job, and the season ended after 8 weeks of summer collecting.  For Fipke, it was a life’s dream, and nothing diminished his resolve; he collected well into the cold months of the far north.

Fipke and Blusson focused on eskers (see the example from Sweden below), which are sinuous ridges of stratified sand and gravel deposited by water flowing in tunnels of ice within or under glaciers.  As the glaciers receded, the ridges remained, like compasses indicating the direction the water and ice once flowed.  If the glaciers had rumbled over kimberlites, the proof would be in the streams that carried the glacial till away.  They kept going even after Standard Oil called it quits.  The G10 garnets kept telling them they were on the right road, and the mining giant BHP believed them when they began running out of money.  Dia Met, the company Fipke and Blusson formed, signed a sweet deal with BHP: BHP agreed to fund the exploration for a 51% stake.  Within six months of teaming with BHP, Fipke had come to a point near Lac de Gras where the G10 garnets disappeared.  Fipke knew he was close to the source.  As the story goes, he noticed from the air a lake that looked like it sat in a bowl-shaped depression near where the G10 garnets disappeared.  He had to have a sample of the rock in that depression.  They landed the plane on the lake, rowed to shore, and started to dig, but after many hours they were still in glacial till.  They decided to walk the shoreline for a better place to dig.  That is when Fipke’s son, Mark, found a piece of kimberlite.  They were all ecstatic — the lake must sit on the pipe.  Gurney eventually analyzed the mineral assemblage and verified that it was highly likely to be from a diamond-bearing kimberlite.  BHP quickly flew a geophysical survey, which showed a distinct structure below the lake.

Esker in Sweden (Hanna Lokrantz, Wikipedia)

BHP and Dia Met quietly started staking as much land around the lake as they could.  Kimberlite pipes frequently occur in clusters, so it was imperative that they obtain rights to as large a region as possible before word of the find got out.  While they were staking, BHP flew in a drill rig by helicopter and cored 455 feet under the lake, pulling out beautiful samples of kimberlite beginning 33 feet below the glacial debris and containing 80-plus small diamonds.  Canadian law requires that companies announce to their shareholders when a potentially profitable body is found.  On November 12, 1991, they announced the results from the core, including the fact that a few gem-quality diamonds had been recovered.  All hell broke loose, and the rush was on by large and small companies alike to stake as close to BHP’s claims as possible in the hopes that other pipes might be buried nearby.  BHP would go on to discover more than 150 kimberlite pipes, helping to make Canada the third largest producer of diamonds in the world.  De Beers even found a few mines.  Fipke and Blusson became billionaires overnight (if you don’t count the 8 years of exploration).

The image below shows the Ekati mine – one of the producing mines staked within Fipke’s original claims.  The large circular depressions in kimberlite are part of the open-pit mining operations BHP is running.


Ekati mines from the air (Google Maps)

  1. Shirey, S. B. and Richardson, S. H. (2011) Start of the Wilson Cycle at 3 Ga shown by diamonds from subcontinental mantle: Science, 333, 434-436
  2. Pearson, D. G., Davies, G. R., Nixon, P. H., and Milledge, H. (1989) Graphitized diamonds from a peridotite massif in Morocco and implications for anomalous diamond occurrences: Nature, 338, 60-62
  3. Russell, J. K., Porritt, L. A., and Hilchie, L. (2013) Kimberlite: rapid ascent of lithospherically modified carbonatitic melts: In Pearson, D. G. et al. (eds.), Proceedings of 10th International Kimberlite Conference, Vol. 1, p. 195-210
  4. Nowicki, T. E., et al. (2007) Diamonds and associated heavy minerals in kimberlite: A review of key concepts and applications: Developments in Sedimentology, 58, 1235-1267
  5. Cross, L. D. (2011) Treasure Under the Tundra: Canada’s Arctic Diamonds: Heritage House Publishing Co

Why the hysteria over genetically engineered crops?

Last summer I attended the annual Fourth of July parade in our local town with my family.  We enjoy watching the floats, the pageantry (I am embellishing a bit here), and the copious quantities of candy thrown at us.  Nearly every local business has a float — well, in many instances a truck with the company name on it serves as the float.  The local politicians, constabulary, high-school marching bands, queens of various vegetable festivals, local junior baseball teams, etc., join the queue.  The obligatory paper advertisements lauding merchandise are handed out by the business participants.

During the parade, I had a paper shoved in my face about the problems with genetically modified organisms (GMOs).  I had just heard a wonderful TED Talk by plant geneticist Pamela Ronald about how safe GMOs are, so the proclamation caught my attention.  I realized that the polemic was being passed out by a local health-food store.  There was an obvious conflict of interest – by creating suspicions that GMOs were unhealthy or even harmful, the store benefited by encouraging people to buy the non-GMO products it sold.  Disinformation to make a buck?  The World Health Organization, the United Nations Development Programme, the National Academy of Sciences (US), the American Medical Association, the American Association for the Advancement of Science, the Food and Drug Administration, the American Cancer Society, and more than 270 other prestigious groups, including many national academies of science in other countries, have gone on record through numerous reports that GMOs are safe.

I spent a bit of time in my last essay on global warming bemoaning how the subject has become a political hot potato because of disinformation by Exxon (and I mentioned other examples such as Big Tobacco, the National Football League with concussions, and creationists).  Was the radical left on a disinformation campaign also?  It certainly appears so.  As a scientist, I know how difficult it is to achieve a consensus on a hypothesis.  Scientists have no time for unsupported opinions – they demand empirically supported results.  I don’t deny that politics plays a role, but I like to think, at the end of the day, that the accepted theories that make it through the labyrinth of scientific scrutiny are extremely sound.  Let’s not forget that scientists have egos, and you get intellectual brownie points for debunking someone’s work.  It’s a jungle out there, as I have discovered firsthand as a research professor.  When I see the community of scientists fundamentally agreeing on a topic, I find it fairly convincing (scientists agreeing is an amazing thing in itself).  I don’t mean to imply that science cannot make mistakes – there are some notorious examples.  But I cannot think of a better way to make educated decisions than to base them on the research of experts in the scientific community.  Unsupported opinions just don’t cut it, even when the people offering them are well meaning.

The case of Golden Rice demonstrates the horrendous impact anti-GMO groups can have in their rush to prevent GMOs from reaching the marketplace1.  According to Scientific American, Golden Rice had passed its health and safety testing for commercial use by 2002.  Syngenta had genetically engineered rice to produce beta-carotene, a precursor of vitamin A, using a gene from corn.  Syngenta altruistically turned over all the monetary interests in the rice to a non-profit organization to avoid any interference from anti-GMO groups that fight biotech companies for profiting on GMOs.  The only hurdle left was regulatory approval.  In 2015, Golden Rice was among seven products that won the Patents for Humanity award, but the rice is still not in use anywhere (the Golden Rice Now advocacy group tells me that the Philippines and Bangladesh are expected to have Golden Rice available in 12 months – some time in the middle of 2017).  Amazingly, the life-saving rice is strenuously opposed by environmental and anti-globalization activists who object to GMOs.

Golden Rice (International Rice Research Institute, IRRI)

In 2014, Justus Wesseler of the Technische Universität München and David Zilberman of the University of California quantified the economic impact caused by the resistance2.  They estimate that at least $199 million was lost per year over the previous decade in India alone.  They also expressed the loss in a metric called life years, calculating that 1.4 million life years were lost in India alone, a figure reflecting the deaths, blindness, and related health disabilities caused by lack of access to vitamin A.  Unfortunately, children are the hardest hit.
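The annual figure understates the cumulative cost.  A back-of-the-envelope sum over the decade they studied (a simple undiscounted total; Wesseler and Zilberman's own analysis is more sophisticated):

```python
# Cumulative scale of the Wesseler & Zilberman estimate: a simple,
# undiscounted sum of the annual India-only losses over a decade.

ANNUAL_LOSS_USD = 199_000_000  # "at least", per year, India alone
YEARS = 10

total_loss = ANNUAL_LOSS_USD * YEARS
print(f"India alone, undiscounted: at least ${total_loss / 1e9:.2f} billion over {YEARS} years")
```

Roughly two billion dollars lost in one country, on top of the 1.4 million life years, which is why the regulatory delay is more than an academic dispute.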

I want to emphasize that the Golden Rice case is more than a battle over perceived danger by the anti-GMO movement in the face of contrary scientific evidence.  People are dying while Greenpeace, the Sierra Club, and other misguided organizations wage war over unclear principles and leftist ideals.  And of course there is always the Non-GMO Project, which was created by health-food retailers to sow seeds of doubt (I don’t know if the pun was intended or not) and “who oppose a technology that just happens to threaten their profits,” according to Scientific American.  I should make it clear that my criticism of Greenpeace and the Sierra Club is not made lightly.  They serve a real purpose in helping to preserve our environment.  But when the science argues against them and lives are at stake, we need to take them to task.  Let me dive into the science that argues against radical and mindless battles over GMOs.

The National Academy of Sciences has just released a consensus 407-page report entitled Genetically Engineered Crops: Experiences and Prospects, which reviewed decades of research on genetically engineered (GE) crops.  It concludes that GE crops are economically beneficial, safe for humans and livestock, and adequately regulated.  The data are overwhelmingly impressive, and I will take the time to summarize some of the major points.

Humans have been modifying crops for 10,000 years.  A good example is the domestication of maize in Mesoamerica.  Teosinte, shown on the left of the diagram below, is a grass that went through a series of human selections of rare mutations to become the modern-day maize grown throughout the world (shown on the right of the diagram).  The point is that humans have been modifying crops through selection of beneficial traits for millennia.


In 1985, the United States became the first country to approve a GE crop, and by 1994 a GE tomato engineered for delayed ripening was produced for sale.  As of 2015, about 12 percent of the world’s land available for crop production was planted with GE crops (the figure rises to 50% in the US).  The figure below shows which GE crops are currently being produced and where.  Europe, Russia, and most of Africa have been particularly resistant to GE crops, as you can see from the map.


There are three major types of GE crops: 1) herbicide-resistant traits, which allow the crop to survive herbicide applied to kill weeds; 2) insect-resistant traits, which typically incorporate a gene from Bacillus thuringiensis (Bt) into the crop, killing insects when they feed on the plant; and 3) virus-resistant traits, which keep the plants from being susceptible to specific plant viruses.  It is important to note that most of the crops are modified to resist one insect, virus, or herbicide.  Drought tolerance, nonbrowning (e.g., in potatoes and apples), various colors in flowers, stabilized oils that suppress trans-fats, and enhancement of omega-3 fatty acids are other examples of GE traits in commercial production.

The NAS report reviews, in mind-numbing detail, studies comparing the production of GE crops to non-GE crops.  Some of the clearest and most important conclusions are summarized below (I quote to avoid any misrepresentation of the information).  Please note that I have not included all the findings, because many are quite esoteric; I refer the reader to the NAS report for more details.

  1. “Although results are variable, Bt traits available in commercial crops from introduction in 1996 to 2015 have in many locations contributed to a statistically significant reduction in the gap between actual yield and potential yield when targeted insect pests caused substantial damage to non-GE varieties and synthetic chemicals did not provide practical control.”  Potential yield is the theoretical yield a crop could achieve if water and other nutrients are in adequate supply and there are no losses to pests and disease.
  2. “In areas of the United States where adoption of Bt maize or Bt cotton is high, there is statistical evidence that insect-pest populations are reduced regionally, and the reductions benefit both adopters and nonadopters of Bt crops.”
  3. “In all cases examined, use of Bt crop varieties reduced application of synthetic insecticides in those fields. In some cases, the use of Bt crop varieties has also been associated with reduced use of insecticides in fields with non-Bt varieties of the crop and other crops.”
  4. “The widespread deployment of crops with Bt toxins has decreased some insect-pest populations to the point where it is economically realistic to increase plantings of crop varieties without a Bt toxin that targets these pests. Planting varieties without Bt under those circumstances would delay evolution of resistance further.”
  5. “Planting of Bt varieties of crops tends to result in higher insect biodiversity than planting of similar varieties without the Bt trait that are treated with synthetic insecticides.”
  6. “Although gene flow has occurred, no examples have demonstrated an adverse environmental effect of gene flow from a GE crop to a wild, related plant species.”
  7. “Crop plants *naturally* produce an array of chemicals that protect against herbivores and pathogens. Some of these chemicals can be toxic to humans when consumed in large amounts.” I emphasized *naturally* here because the statement pertains to the production of chemicals by non-GE crops.
  8. “Conventional breeding and genetic engineering can cause unintended changes in the presence and concentrations of secondary metabolites.”  This is not only important but also emphasizes the need for oversight in the approval of GE crops.  However, NAS also concluded: “U.S. regulatory assessment of GE herbicide-resistant crops is conducted by USDA, and by FDA when the crop can be consumed, while the herbicides are assessed by EPA when there are new potential exposures.”
  9. Regarding safety, NAS concluded: “In addition to experimental data, long-term data on the health and feed-conversion efficiency of livestock that span a period before and after introduction of GE crops show no adverse effects on these measures associated with introduction of GE feed. Such data test for correlations that are relevant to assessment of human health effects, but they do not examine cause and effect.”  In other words, GE crops appear to be safe for the animals that consume them and for humans who consume either these animals or the GE crops directly.
  10. “The incidence of a variety of cancer types in the United States has changed over time, but the changes do not appear to be associated with the switch to consumption of GE foods. Furthermore, patterns of change in cancer incidence in the United States are generally similar to those in the United Kingdom and Europe, where diets contain much lower amounts of food derived from GE crops. The data do not support the assertion that cancer rates have increased because of consumption of products of GE crops.”
  11. “The committee found no published evidence to support the hypothesis that the consumption of GE foods has caused higher U.S. rates of obesity or type II diabetes.”
  12. “The committee could find no published evidence supporting the hypothesis that GE foods generate unique gene or protein fragments that would affect the body.”
  13. “The committee did not find a relationship between consumption of GE foods and the increase in prevalence of food allergies.”
  14. “The similarity in patterns of increase in autism spectrum disorder in children in the United States, where GE foods are commonly eaten, and the United Kingdom, where GE foods are rarely eaten, does not support the hypothesis of a link between eating GE foods and prevalence of autism spectrum disorder.”
  15. “On the basis of its understanding of the process required for horizontal gene transfer from plants to animals and data on GE organisms, the committee concludes that horizontal gene transfer from GE crops or conventional crops to humans does not pose a substantial health risk.”
  16. “The available evidence indicates that GE soybean, cotton, and maize have generally had favorable outcomes in economic returns to producers who have adopted these crops, but there is high heterogeneity in outcomes.”
  17. “Exploitation of inherent biological processes—DNA binding-zinc finger proteins (ZFNs), pathogen-directed transcription of host genes (TALEs), and targeted degradation of DNA sequences (CRISPR/Cas)—now permit precise and versatile manipulation of DNA in plants.”
  18. “New molecular tools are further blurring the distinction between genetic modifications made with conventional breeding and those made with genetic engineering.”
  19. “Treating genetic engineering and conventional breeding as competing approaches is a false dichotomy; more progress in crop improvement could be brought about by using both conventional breeding and genetic engineering than by using either alone.”
  20. “In some cases, genetic engineering is the only avenue for creating a particular trait. That should not undervalue the importance of conventional breeding in cases in which sufficient genetic variation is present in existing germplasm collections, especially when a trait is controlled by many genes.”
  21. “Although genome editing is a new technique and its regulatory status was unclear at the time the committee was writing this report, the committee expects that its potential use in crop improvement in the coming decades will be substantial.”  I think this is an extremely important conclusion.  If we want to continue to feed the world, we are probably going to become more dependent on GE crops, particularly if the population continues to increase at present rates.
  22. “Genetic engineering can be used to develop crop resistance to plant pathogens with potential to reduce losses for farmers in both developed and developing countries.”
  23. “Genetic engineering can enhance the ability to increase the nutritional quality and decrease antinutrients of crop plants.”
[1] There are similar accounts of environmental groups shutting down genetically modified eggplant projects in India, Bangladesh, and the Philippines.  Another involved a genetically modified potato that was resistant to specific herbicides.  A large food chain, under pressure from environmental groups, refused to purchase genetically modified potatoes, and the project was shut down.  Farmers then introduced a new herbicide for the non-genetically modified potatoes grown instead.
[2] Wesseler, J. and Zilberman, D. (2014) The economic power of the Golden Rice opposition. Environment and Development Economics 19, 724-742.