Science versus the “eye test” in selecting the college football playoff teams

Note: this paper will be published in Skeptic Magazine in March, 2017

In case you are not familiar with how college football determines the four teams that are picked to contend for the national championship, I refer you to the Selection Committee Protocol, which is a guide on how the committee chooses the four playoff teams at the end of the regular season and after the league championship games.  The first words of the protocol are telling: “Ranking football teams is an art, not a science.”  The protocol specifically calls into question any rigorous mathematical approach: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected.”  For those who are not aficionados of the college football selection process, the previous system used computer rankings as one third of the formula to determine the final two teams (before the four-team process was initiated in 2014 – the other two thirds of the formula came from the Harris Interactive and USA Today Coaches polls).  What I hope to show in this essay is that 1) humans are primed with natural biases (whether they realize it or not) and therefore are not effective at simultaneously considering the huge amounts of data available, and 2) computer algorithms are spectacularly successful at analyzing massive databases, sometimes called “deep data”, to ascertain the best choices for the playoff system.

So what are the guidelines that instruct the 13-member college playoff panel?  They are somewhat obvious and include “conference championship wins, strength of schedule, head-to-head competition, comparative outcomes of common opponents, and other relevant factors such as key injuries that may have affected a team’s performance during the season or likely will affect its postseason performance.”  I hasten to point out that strength of schedule can only be determined with “nuanced mathematical” rigor.  The guidelines fall into two categories: facts (e.g., conference championships) and opinions (e.g., whether a key injury will impact team performance).  My argument is to eliminate the opinions and choose the final four teams in the most rational and unbiased fashion — that is, use computer algorithms.  Exceptions to the computer rankings could be made by the committee when facts like conference championships play an important role.  For example, if Notre Dame and Florida State University each had one loss at the end of the season but the computer rankings had Notre Dame above FSU, the committee might override the computer rankings and choose FSU over ND if FSU won the Atlantic Coast Conference championship (ND is not in a conference and therefore cannot win a conference championship).  Let me spend some time justifying my proposed selection process.

I have created a table below showing the top 15 teams in the final polls of the 2015 season along with their won-loss records for reference.  The first two polls are the AP and Coaches polls and the final two are the Sagarin and Colley computer polls.  Keeping in mind that the computer algorithms behind the computer polls have no human intervention (the data on wins and score differentials are simply entered into the matrices), it is remarkable that the computer polls agree so closely with the human polls, particularly within the top five teams (remember, there were 128 teams in the NCAA Division I Football Bowl Subdivision – FBS – in 2015).  The details of the computer algorithms are discussed at the end of the essay.

Rankings from the final week of the 2015 season.

I agree that informed opinions can be important, and a group of football experts might have insights into the game that mere mortals do not.  Many of the committee members are former coaches and athletic directors, but I am concerned that their opinions might be tainted by the teams and conferences they come from (they might not even know they are biased).  What is the difference between an informed and a prejudiced opinion?  I am not sure, but can these men and women be truly neutral?  There is a massive amount of scientific research showing that we have difficulty being unbiased.  Nobel laureate Daniel Kahneman has written an entire book on heuristics and cognitive biases1.  A good example comes from witnesses of crimes or traffic accidents.  Any good detective knows to take eyewitness testimony before the witnesses have had a chance to discuss the event, because studies show that witnesses who share information tend to foster similar errors about the event.  And the research also shows that eyewitnesses are notoriously inaccurate.  Elizabeth Loftus has written an immensely entertaining book about her research on witness error and bias if you care to delve into the details2.  Loftus writes: “In my experiments, conducted with thousands of subjects over decades, I’ve molded people’s memories, prompting them to recall nonexistent broken glass and tape recorders; to think of a clean-shaven man as having a mustache, of straight hair as curly, of stop signs as yield signs, of hammers as screwdrivers; and to place something as large and conspicuous as a barn in a bucolic scene that contained no buildings at all.  I’ve been able to implant false memories in people’s minds, making them believe in characters who never existed and events that never happened.”  A recent study based on statistical analyses has shown that the writers’ and coaches’ college football polls are significantly affected by such things as point spreads and television coverage3.

If humans have all of these susceptibilities to bias, why do we use humans to choose who plays in college football’s vaunted playoff system?  Well, because humans are biased — they think they can choose better than nuanced mathematical formulas.  But the nuanced mathematical formulas are completely unbiased — in other words, they use only raw game data, usually wins, losses, and score differentials.  We can remove the most biased element in the system, humans, by relying on science, logic, and mathematics rather than art or whatever else the committee protocol calls human intervention.  It is absolutely archaic in the age of big data to ignore analytical models in favor of subjective comparisons.  Do the coaches, athletic directors, and politicians (e.g., Condoleezza Rice is on the committee but has virtually no “football experience” in her background) that make up the football committee understand the value of algorithms?  I am not sure, but there is a wealth of research that says they should.

From my experience on football boards and chat rooms, nothing gets a fan’s dander up more than claiming computer models are unbiased.  But they are.  They start with nothing but formulas melded together into computer algorithms.  And the algorithms are based on sound mathematical principles and proofs.  There are many college football algorithms used in computer ranking models, and many of them are considered proprietary.  More than 60 are compiled on www.mratings.com.  The math can get pretty intense, but if you are interested in the details of the known algorithms, I recommend the book Who’s #1?: The Science of Rating and Ranking by Amy N. Langville at the College of Charleston and Carl Meyer at North Carolina State University4.  The people who create these algorithms are steeped in mathematical prowess.  For example, the Colley matrix was developed by Wes Colley after he completed his PhD in astrophysics at Princeton University.  Although the math can get tricky, the principle is rather simple.  The algorithms typically involve matrices and vectors that simultaneously consider not only which opponents a team beat but which teams those opponents beat, and which teams those opponents beat, out into n dimensions.  In addition, difficulty of schedule and score differentials can also be incorporated into the algorithms.  When we watch college football we get a sense of which team is best by comparing how each team plays against its opponents.  But such opinions are hopelessly mired in bias and are compounded by the fact that preseason polls skew our perceptions before the season even begins.  The algorithms do precisely what human perception is trying to do, but without the biases and simultaneously across a huge array of data5.
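
To make that concrete, here is a minimal sketch of the kind of linear system the Colley method solves.  The four-team schedule below is made up, and the real rankings of course run over the full FBS schedule, but the structure — a matrix built only from who played and who won — is the point.

```python
import numpy as np

# Toy schedule: (winner, loser) pairs for four hypothetical teams 0..3.
games = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4

# Colley's method solves C r = b, where
#   C[i][i] = 2 + games played by team i
#   C[i][j] = -(number of games between i and j)
#   b[i]    = 1 + (wins_i - losses_i) / 2
C = 2 * np.eye(n)
b = np.ones(n)
for w, l in games:
    C[w, w] += 1
    C[l, l] += 1
    C[w, l] -= 1
    C[l, w] -= 1
    b[w] += 0.5
    b[l] -= 0.5

ratings = np.linalg.solve(C, b)
print(np.argsort(-ratings))  # teams ordered from strongest to weakest
```

No opinions enter anywhere: the only inputs are the schedule and the outcomes.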

I don’t understand the reluctance of the powers that be in college football to incorporate mathematical equations into the playoff system.  These types of algorithms permeate the business community.  Google’s PageRank ranks web pages using some of the same mathematics that computer models use to rank teams.  Although the algorithm is a carefully guarded secret, Langville and Meyer6 concluded, based on patent documents, that Google’s algorithm uses the Perron-Frobenius theorem, which is also used by James Keener in his football rankings7.  The BellKor team won the $1 million Netflix Prize for writing an algorithm that was 10% better than the one created by Netflix.  Every time Netflix suggests a movie, it is exploiting the same kinds of algorithms used in football rankings.  In fact, Langville and Meyer applied the algorithms behind the Colley and Massey methods to the Netflix movie ratings database and came up with a list of the top movies in their book (pages 25-27).  No one complains about page ranks or movie suggestions being too nuanced in their rigor.  Can you imagine a committee trying to ascertain page ranks?  No one promotes the “eye test” to rank web pages, even though the eye test is commonly prescribed as the only legitimate way to determine which teams are the best in college football.  Isn’t it obvious that this is about control rather than human abilities versus computer algorithms?
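
The Perron-Frobenius idea Keener uses is easy to sketch: build a nonnegative matrix of head-to-head results and take its dominant eigenvector as the rating.  The entries below are invented purely for illustration, and this is a bare-bones power iteration rather than Keener’s actual formulation.

```python
import numpy as np

# Toy "strength" matrix: A[i, j] reflects how convincingly team i beat team j
# (zero if i lost or the teams did not play).  All values are invented.
A = np.array([
    [0.0, 0.7, 0.6, 0.8],
    [0.3, 0.0, 0.9, 0.5],
    [0.4, 0.1, 0.0, 0.6],
    [0.2, 0.5, 0.4, 0.0],
])

# Power iteration: repeatedly apply A and renormalize.  For an irreducible
# nonnegative matrix, the Perron-Frobenius theorem guarantees a positive
# dominant eigenvector, which serves as the rating vector (PageRank rests
# on the same theorem).
r = np.ones(A.shape[0]) / A.shape[0]
for _ in range(1000):
    r_next = A @ r
    r_next /= r_next.sum()
    if np.allclose(r_next, r, atol=1e-12):
        break
    r = r_next

print(r)               # ratings
print(np.argsort(-r))  # ranking, strongest first
```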

Deep data has been integrated into almost all sports.  Witness the way professional baseball has dispensed with the traditional positions on the field in favor of moving players to where the hitter is most likely to hit.  It is not unusual to see a shortstop in shallow center.  The game changed primarily when computers gained the power to handle the large amounts of data that could be collected.  Read Moneyball by Michael Lewis8 to see how general manager Billy Beane used statistics (now called sabermetrics) to take a struggling Oakland Athletics team to the playoffs.  Opinions of seasoned professional scouts who relied on the eye test to recruit talent have gone the way of the dodo bird.

In my opinion, nothing seems more egregious in the polls than the way teams are punished for losing when they play difficult schedules while other teams are rewarded for winning against cupcake schedules.  Let’s pretend we can determine the thirteen best teams in college football before the season, and the number 13 team has scheduled all of the top twelve teams during the regular season.  The number 13 team could finish the season 0-12.  The AP and Coaches polls would be extremely hard on the team and would never rank it in the top 25, even though we know by definition it is the 13th best team in the nation.  But the computer algorithms would recognize the logic behind the difficult schedule, and although they might not rank the team 13th, it would probably have a good showing.  The counter to this example is a team with a fluff schedule.  The polls are notorious for ranking teams with perfect records higher than is sometimes justified when strength of schedule is considered.  In theory, any team in the FBS could win all its games if it played lesser-ranked opponents.  Fortunately, it appears that the playoff selection committee has recognized that strength of schedule is an important factor and does consider it.  However, the committee’s willingness to consider head-to-head games seems logically misplaced.  Let’s go back to our top 13 ranked teams again.  If the number 4 team lost to the number 5 team and the number 5 team lost to the number 13 team, the committee would indubitably place the number 5 team into the playoffs over the number 4 team based on the silly head-to-head rule, even though the computer algorithms would recognize the problem and consider the entire schedule of each team.

Although we don’t like to admit it, statistically improbable events can have a huge impact on single games, and they may never be noticed by the committee (or the computer algorithms for that matter – see the section on betting below).  If you saw the national championship last year, you could not be faulted for thinking that Clemson may have been the best team in the country even though they lost to Alabama (it hurts me to say this because I am an alumnus of Alabama and a huge Crimson Tide fan – I go way back to the Bear).  Alabama recovered an onside kick that appeared to change the momentum of the game, and yet the probability of that kick being so perfectly placed seems remote.  Bama also needed a kickoff return and a few turnovers on their way to a 45-40 national championship victory.  The point being that minutiae that would otherwise not have a big impact can and do play a role.  It is the butterfly effect, which teaches us that there is no right answer when it comes to rankings.  The best we can do is create an unbiased mathematical system rooted in statistics and deep data with as little input as possible from naturally biased humans.

Last year I set out to test the college football computer algorithms by setting up a spreadsheet that monitored theoretical bets of $100 on each of the college games from November 8 through the college bowl games.  I waited until late in the season because, in theory, the algorithms work better with more data.  I used Sagarin’s predictor ranking, which includes score differentials and home-team advantage.  First, a few words about these items.  It is true that teams can run up the score, although it rarely happens on a consistent basis.  But most algorithms correct for large score differentials9 to avoid any advantage gained in the rankings from running up scores.  Home-team advantage is an interesting subject in itself and is usually attributed to the psychological effects of playing in a stadium full of home-team fans.  But these effects are difficult to test.  The subject has also been addressed in the scientific literature, and much to my surprise, some studies show that referees can be influenced by the home crowd.  For example, Harvard researcher Ryan Boyko and his colleagues found that referees statistically favored the home team in 5,244 English Premier League soccer matches over the 1992 to 2006 seasons10.  Regardless of the reasons for the home-field effect, algorithms can correct for it.  Sagarin calculates a score to be added to the home team when betting.
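
For readers unfamiliar with how a predictor-style rating turns into a predicted margin, here is the basic arithmetic.  The ratings and the home-edge value are invented for illustration; they are not Sagarin’s published numbers.

```python
# Illustrative only: turning predictor-style ratings into a predicted margin.
# The ratings and home edge below are invented, not Sagarin's actual values.
home_rating = 88.3
away_rating = 84.1
home_edge = 2.5  # points credited to the home team

predicted_margin = (home_rating + home_edge) - away_rating
print(f"Home team predicted to win by {predicted_margin:.1f} points")
```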

The results of my theoretical betting are shown below for each week (the bowl season caused the number of games to vary in December).  Had I bet $100 on each of the 287 games monitored, I would have lost $700.  So what’s so terrific about a ranking system that loses money in Vegas?  It is simple: the point spread.  If Vegas lost money on college football games, there would be no betting.  It is common for the media to present point spreads as a reflection of who Vegas thinks will win the game.  But spreads are not about who is favored; they are about enticing bettors to bet.  With a point spread, Vegas does not have to predict winners; all it needs to do is entice bettors by making the spread seem to favor one side or the other.  Vegas knows how to make money on all those built-in biases we have.  The books collect a fee (called the vig or juice) for handling the bet, and as long as they have about the same number of losers and winners, they take home a tidy profit.  To make sure the winners and losers are roughly equal, they shift the spread during the course of the week as the bets are placed, keeping about the same number of bettors on both sides of the line.  Even the computer algorithms can’t beat the crafty Vegas bookies.  Even though the computer algorithms are very good at predicting winners (near 60%), no algorithm (or, for that matter, any human) can beat spreads on a consistent basis11.  But people keep trying.
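
The arithmetic behind the vig is worth seeing once.  The standard -110 line below (risk $110 to win $100) and the perfectly balanced action are assumptions made for the sake of the example.

```python
# Why the book profits whichever team covers, assuming perfectly balanced
# action at a standard -110 line (risk $110 to win $100).  Numbers illustrative.
stake = 110   # each bettor risks $110
to_win = 100  # to win $100

handle = 2 * stake            # one bettor on each side of the spread
winner_paid = stake + to_win  # the winner gets the stake back plus winnings

book_profit = handle - winner_paid
print(book_profit)  # $10 per matched pair of bets, no prediction required
```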

Amount of money theoretically lost using Sagarin rankings.

Langville and Meyer point to two reasons why computer algorithms don’t beat point spreads: 1) The computer algorithms are designed to predict rankings, not score differentials.  In the computations, they ignore important scoring factors, such as the strength of a defensive backfield against a high-octane passing attack, which might create lopsided scores even when the rankings rate both teams as average.  Then there are the statistical flukes that occur in games, as mentioned above, which cannot be predicted.  2) Spreads are also difficult to predict, particularly in football, because points usually come in chunks of 3, 6, or 7.  Therefore, final margins tend to cluster around combinations of these numbers rather than being evenly distributed.

I must conclude from the data that the only way to select the four teams that play at the end of the year in college football is to use computer algorithms.  There should still be a committee that decides how to weigh such things as league championships.  It will also be extremely important to make sure that the algorithms used are completely understood by the committee (no black-box proprietary claims).  The algorithms need to be analyzed to determine which equations and factors give the most meaningful results and changed accordingly.  Score differentials should be included within the algorithm after they have been corrected for the potential of teams running up the scores.

Appendix – a brief overview of linear algebra and rankings

There is no way I can do justice to the subject in an essay.  But I did want to emphasize how these equations eliminate any bias or human influence.  I highly recommend the Khan Academy if you want a brief overview of linear algebra.

Rather than use my own example, I have decided to use the data presented by Langville and Meyer because it is easier to understand when every team in the example has played every other team in the division.  The data shown below comes from the 2005 Atlantic Coast Conference games.

The 2005 data from the Atlantic Coast Conference.

The Massey method of ranking teams was developed by Kenneth Massey for his honors thesis in mathematics while he was an undergraduate at Bluefield College in 1997.  He is currently an assistant professor at Carson-Newman University.  Using his equations, the table above can be converted into a linear algebra equation of the form Mr = p, where M is the matrix containing information about which teams played which other teams, r is the vector of ratings (which determines the ranking), and p is the vector of each team’s cumulative score differentials:

The Massey system Mr = p written out for the five ACC teams.

Note that the diagonal entries of M are the number of games each team played, and each -1 in the matrix shows that each team played every other team.  The last row is a trick Massey used to force the ratings to sum to 0 (without it, M is singular).  The solution is calculated by inverting the matrix M and multiplying the inverse by p to obtain the following results12:

The resulting Massey ratings and rankings for the five teams.
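
For readers who want to see the mechanics, here is a minimal sketch of the Massey solve in Python.  Because the ACC score table is not reproduced here, the point-differential vector p below is illustrative, not the actual 2005 numbers from Langville and Meyer.

```python
import numpy as np

# Massey's method for a five-team round robin (every team plays every other
# team exactly once), so M has 4 on the diagonal and -1 everywhere else.
n = 5
M = 4 * np.eye(n) - (np.ones((n, n)) - np.eye(n))

# Cumulative point differentials for the five teams -- made-up values that
# sum to zero, standing in for the real 2005 ACC numbers.
p = np.array([45.0, -20.0, 5.0, -35.0, 5.0])

# Massey's trick: the raw M is singular, so replace the last row with ones
# and the last entry of p with zero, forcing the ratings to sum to zero.
M[-1, :] = 1.0
p[-1] = 0.0

r = np.linalg.solve(M, p)
print(r)               # ratings; larger is better
print(np.argsort(-r))  # ranking, strongest first
```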

  1. Kahneman, D. (2011) Thinking, Fast and Slow: Farrar, Straus and Giroux.
  2. Loftus, E. and Ketcham, K. (1994) The Myth of Repressed Memories: False Memories and Allegations of Sexual Abuse: St. Martin’s Press
  3. Paul, R. J., Weinbach, A. P., and Coate, P. (2007) Expectations and voting in the NCAA football polls: The wisdom of point spread markets: J. Sports Economics, 8, 412
  4. Langville, A.N. and Meyer, C. (2012) Who’s #1?: The Science of Rating and Ranking: Princeton University Press
  5. I would like to thank Amy Langville for suggested changes here
  6. see ref. 4
  7. Keener, J. (1993) The Perron-Frobenius theorem and the ranking of football teams: SIAM Review, 35, 80
  8. Lewis, M. (2003) Moneyball: The Art of Winning an Unfair Game: W. W. Norton & Company
  9. see ref. 4
  10. Boyko et al. (2007) Referee bias contributes to home advantage in English Premiership football: Journal of Sports Sciences, 25, 1185
  11. see ref. 4
  12. see ref. 4 for details

Genomics – a brave new world

Embryonic stem cells (ES cells) are remarkable.  They come from animal (including human) embryos and can morph into any cell in the body, such as brain, bone marrow, intestine, muscle, or blood cells.  Biologists call them pluripotent and can isolate them from an embryo and grow them in laboratory petri dishes.  In the halcyon days of early stem-cell research, it did not escape the attention of scientists that genetic changes could be made to an ES cell, the cell could then be inserted back into an embryo, and the embryo placed into the womb, where it would differentiate into all the cells of the body carrying the new genetic modification.  The process became so widespread in the early 1990s that biologists referred to the genetically modified animals as transgenic.  An example that caught the attention of the world was a mouse carrying a gene from a jellyfish that made it glow in the dark (under blue lamps).  It was as if a grand gift had been given to geneticists that enabled them to understand how genes function.  Mice could be made to double in size, develop Alzheimer’s disease, grow cancerous tumors, age prematurely, improve their memory, or erupt with epilepsy, all through gene manipulation.  It was a remarkable way for scientists to study genetic diseases.  There was just one problem — human ES cells did not respond favorably to genetic modifications the way mouse ES cells did.  There would be no transgenic humans anytime soon, even if the ethical issues were overcome.

Embryonic mouse stem cells.

Meanwhile, geneticists were probing a myriad of other ways to correct specific genetic disorders.  One group focused on a gene called ornithine transcarbamylase (OTC), which codes for an enzyme that helps break down proteins in the liver.  Without the enzyme, ammonia, a byproduct of protein breakdown, accumulates throughout the body.  As you might imagine, ammonia buildup in the body can have devastating consequences, and most children with the genetic disorder do not survive into adulthood.  Enter Jesse Gelsinger, who had a mild case of OTC deficiency1.  Mark Batshaw and James Wilson, then at the University of Pennsylvania, postulated that they could deliver a working copy of the OTC gene into Gelsinger’s body via an adenovirus (viruses reproduce by entering the body and injecting either DNA or RNA into a living cell, effectively taking over the cell to make more copies of themselves).  The hope was that the virus would insert the corrected DNA into Gelsinger’s liver cells, which would then synthesize the enzyme Jesse needed.  The treatment worked in mice but had mixed results in monkey trials — some monkeys’ immune systems responded in drastic ways, causing liver failure and other disorders.  Batshaw and Wilson responded by making the virus less potent and reducing the dose for the proposed trial with Gelsinger.  In 1997, they approached the Recombinant DNA Advisory Committee (RAC) of the government’s National Institutes of Health for approval.  RAC agreed, and Jesse and his father were excited volunteers, convinced that Jesse’s close encounters with death from food reactions, his regimented diet, and the plethora of pills he took were coming to an end.

In September 1999, Jesse received his viral injection.  Four days later he was dead from a massive immune reaction to the virus.  The press reports set off a chain reaction: Congress initiated hearings, district attorneys investigated, the university backpedaled, and the FDA and RAC launched official inquiries.  When a “pattern of neglect” was discovered in the research by Batshaw and Wilson, the FDA halted trials in other laboratories and a strict moratorium fell over the entire research discipline.  We will never know how much of the response was an effort to deflect blame away from governmental agencies, but Batshaw and Wilson became the “fall guys” and genetic research would be impacted for a decade.  I recognize the need for caution and ethical considerations, but I also know that there are children dying from diseases like OTC deficiency every day.  I am sure many of them and their parents would gladly accept the chance of survival beyond childhood via potentially risky experiments.  Did we throw the baby out with the bath water?  After all, Jesse died wanting to help others by finding a cure for OTC deficiency.  He did not die from the trial’s basic premise.  He died because his body had highly reactive antibodies to the virus, probably because he had been exposed to a similar adenovirus in the past.

Fortunately, Jesse’s disturbing death did not affect genetic diagnosis – attributing genes to diseases.  Examples include the BRCA1 gene associated with breast cancer, CNV mutations linked to schizophrenia, and ADCY5 and DOCK3 genes related to neuromuscular disease.  I highly recommend Siddhartha Mukherjee’s new book entitled The Gene: An Intimate History2 for further reading.

But to set the stage for the technology available today, we need to look at in vitro fertilization (IVF).  In IVF, an embryo is formed by the fertilization of an egg by a sperm outside of the body.  The single-cell embryo is bathed in nutrient-rich fluids in an incubator and left to divide for three days until there are 8 to 16 cells.  The embryo is then implanted into a woman’s womb.  Remarkably, if a few cells are removed from the growing embryo in the incubator, the embryo is unaffected.  It simply replaces the lost cells.  Usually several eggs are harvested for IVF and fertilized.  Cells can then be removed from each embryo and genetically tested, or screened, for mutations, allowing only a fertilized egg with no known serious genetic disorders to be implanted in the womb.  Genetic testing in this way has been done since the late 1980s and is referred to as preimplantation genetic diagnosis (PGD).  It is eugenics without the terrible baggage that the word has carried from past diabolical experiments (think Mengele and the Nazis).  But that does not mean that the method has not been misused.  PGD is being used surreptitiously to select for sex, particularly in India and China, even though sex selection is banned there.  It is estimated that as many as 10 million females have “disappeared” through PGD, abortion, infanticide, or neglect of female children3.

Diagram of in vitro fertilization – Wikipedia

According to Mukherjee, three principles have guided doctors in deciding which embryos will not be implanted during IVF.  First, the gene must lead to a serious, life-threatening disease with an almost 100 percent chance of the child or adult developing the disease.  Cystic fibrosis is a good example – a single gene causes the disease.  The disorder primarily affects the lungs, causing chronic coughing from frequent lung infections.  Life expectancy is about 46 years.  The misery is not limited to the lungs.  Sinus infections, poor growth, clubbing of digits, fatty stools, and infertility (among males) are just some of the other effects.  Second, the disease caused by the gene must lead to “extraordinary suffering”.  And finally, there must be a consensus among the medical community that the intervention is morally and ethically sound and that the family involved has complete freedom of choice.

Even so, the Roman Catholic Church (and other religious institutions) has strongly objected to IVF and related gene technologies.  John Haas, a Catholic theologian, states: “One reproductive technology which the Church has clearly and unequivocally judged to be immoral is in vitro fertilization or IVF. Unfortunately, most Catholics are not aware of the Church’s teaching, do not know that IVF is immoral, and some have used it in attempting to have children…  In IVF, children are engendered through a technical process, subjected to “quality control,” and eliminated if found “defective.”4.  Honestly, I don’t understand where this moral imperative comes from.  If there is a God, He/She must have understood that we would eventually discover how to cure genetic diseases.  Apparently Haas and the Church find no fault with technologies that would correct the problem after the embryo is in the womb but chafe at the idea of choosing to avoid the disease before the embryo is placed in the womb.  I suspect that Haas might change his mind if he had to watch someone die slowly from a disease like cystic fibrosis5.  Clearly, our society will continue to grapple with the ethical and moral issues of gene technologies, particularly now that research is making social engineering theoretically “available”.  Mukherjee discusses the identification of a gene related to psychic stress to emphasize how blurred the ethical decisions are potentially becoming.  Where society draws the line is going to be as important as the genetic technology itself.  But these ethical dilemmas are just the tip of the iceberg.

Improved safety and more careful oversight have gradually led to better research.  New viruses have been developed that effectively deliver gene-altered DNA or RNA to cells while avoiding the kind of catastrophic immune response that killed Jesse Gelsinger.  In 2014, viral delivery systems successfully treated hemophilia – the genetic disorder that prevents blood from clotting properly.  And although the setback genetic engineering suffered in the aftermath of Jesse’s death had largely been overcome, germ-line therapy was set back again when George W. Bush drastically restricted the use of ES cells in federal research programs in 2001.  Germ-line therapy is the modification of the human genome in reproductive cells so that the modified gene is passed on to offspring.  Imagine ridding family genomes of the gene mutations that cause cystic fibrosis or breast cancer (BRCA1) forever.  Yet because ES cells are frequently obtained from embryos left over from IVF, Bush clamped down on the research (presumably under pressure from the religious right), which nearly extinguished United States progress in the field for a decade.  I understand the abortion debate, but collecting ES cells from embryos that will never be implanted in a woman’s womb seems to carry the abortion issue to drastic extremes.

Jennifer Doudna of the University of California, Berkeley, and Emmanuelle Charpentier of the Helmholtz Centre for Infection Research knew from earlier research that bacteria have RNA that can find and recognize the DNA of a virus and then deliver a protein that cuts the viral DNA, thus disabling it – an effective way for bacteria to fight off viral attacks.  By 2012, they were not only able to program the process to seek and cut any specified section of DNA, but they had also learned how to flood the region near the cut with desired DNA fragments that the cut DNA incorporated into the genome.  In effect, they had created a gene splicing technique designated CRISPR/Cas96 (clustered regularly interspaced short palindromic repeats).  In other words, Doudna and Charpentier had discovered a means to replace a serious mutant gene, like the cystic fibrosis gene, with a harmless one.  The dawn of gene editing had arrived7.

About the same time that Doudna and Charpentier were developing the CRISPR technology, scientists at Cambridge in England and at the Weizmann Institute in Israel were discovering how to turn ES cells into primordial germ cells – the cells that develop into the sperm and egg in the embryo.  The brave new world predicted by Huxley more than 80 years ago, in 1932, is upon us.  The technology is now available to form a germ-line cell that can be genetically modified with CRISPR.  The modified cells can then be converted into sperm and eggs to form an embryo that would produce a genetically modified human through IVF – a transgenic human.  However, as you might imagine, there are strict controls and bans on this research in the United States based on ethical and moral issues.  Scientists are forbidden from introducing genetically modified cells that will develop into embryos into humans, and ES cells cannot be genetically modified if they will be turned into sperm and egg cells.  Most other countries have followed the US lead with similar bans.  Mukherjee tries to explain the concern: “The crux, then, is not genetic emancipation (freedom from the bounds of hereditary illness), but genetic enhancement (freedom from the current boundaries of form and fate encoded by the human genome).  The distinction between the two is the fragile pivot on which the future of genome editing whirls.”  It is clear that we are wrestling with our past history of promoting horrible eugenics programs.  I asked Doudna to clarify the reason for a moratorium: “the moratorium is not a call to outright ban engineering of the human germ line. Instead, it suggests a halt to such clinical use until a broader cross section of scientific, clinical, ethical, and regulatory experts, as well as the public at large have a chance to fully consider the ramifications.”

But we may not have the luxury of waiting until the ethics and morals of the science are thoroughly debated.  In 2015, Junjiu Huang and his team at Sun Yat-sen University in Guangzhou, China, used CRISPR to eliminate a gene that causes a blood disorder in human embryos.  There were problems with the edited embryos and the procedure was stopped (although there was never any intention of allowing the embryos to mature in a womb).  The experiments set off international alarms, and the journals Nature, Cell, and Science refused to publish the paper.  The paper was eventually published in Protein & Cell.  Huang has made it clear that he will continue to pursue experiments to correct the problems that surfaced during the previous work.  “They did the research ethically,” noted Tetsuya Ishii of Hokkaido University in Sapporo, Japan, in Science, but several genetic watchdog groups called for an end to the procedures.  Other scientists, including a Nobel laureate, were not disturbed by the research as long as the experiments were limited to clinical applications8.

Genetic editing in human embryos.

The incident with Junjiu Huang reminds me of the work that has been done on game theory.  As far back as the 1920s, John von Neumann, one of the leading lights of mathematics and later a member of the Institute for Advanced Study (closely associated with Princeton University, where Albert Einstein and Kurt Gödel also worked), sought to define, through mathematical expressions, logical procedures in games that could be applied to real-life scenarios.  In his superb book Prisoner’s Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb, William Poundstone summarizes von Neumann’s work: “Von Neumann demonstrated mathematically that there is always a rational course of action for games of two players, provided their interests are completely opposed9.”  One of the early applications of game theory came when the United States was deciding whether to build a hydrogen bomb – a huge leap in destructive capability compared to the atomic bomb.  Many prominent scientists, such as Robert Oppenheimer, the director of the Manhattan Project, were outspoken against it.  The best strategy, they seemingly reasoned, would be to cooperate with the Soviet Union, whereby both countries would agree not to develop the H-bomb.  The research was expensive, and it would generate thousands of bombs that would be stockpiled and probably never used.  Game theory logic did not concur.  There was only one possible step according to the logic of “game” brinkmanship between the US and the Soviets – build the H-bomb whether or not the Soviets were willing to agree to a moratorium.  There was simply no way to be absolutely sure the Soviets would live up to any potential agreement.
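
The logic is easiest to see as a payoff matrix.  The numbers below are illustrative, chosen only to reproduce the structure of the dilemma; they are not from Poundstone.

```python
# A prisoner's-dilemma-style payoff table for the H-bomb decision.  The
# payoff values are invented for illustration; the point is the structure:
# "build" is the dominant strategy for each side whatever the other does.
REFRAIN, BUILD = "refrain", "build"

# (US payoff, Soviet payoff) for each combination of strategies.
payoffs = {
    (REFRAIN, REFRAIN): (3, 3),  # mutual restraint: best joint outcome
    (REFRAIN, BUILD):   (0, 4),  # unilateral restraint: worst case for the restrained side
    (BUILD,   REFRAIN): (4, 0),
    (BUILD,   BUILD):   (1, 1),  # arms race: costly, but never the worst case
}

def us_best_response(soviet_choice):
    # Compare the US payoff for refraining vs. building, given the Soviet choice.
    refrain_payoff = payoffs[(REFRAIN, soviet_choice)][0]
    build_payoff = payoffs[(BUILD, soviet_choice)][0]
    return BUILD if build_payoff > refrain_payoff else REFRAIN

for soviet_choice in (REFRAIN, BUILD):
    print(f"If the Soviets {soviet_choice}, the US best response is to {us_best_response(soviet_choice)}")
```

Whatever the Soviets do, building pays more, which is exactly the brinkmanship logic described above.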

I think the same logic applies to germ-line experiments.  The logic is clear: it seems the Chinese are going to develop the technology regardless of what we do, and not having the technology while other countries do could be detrimental to the best interests of the United States.  The case for developing germ-line therapy seems even more compelling than the case for the H-bomb, because the therapy will potentially lead to cures for horrible genetic diseases.  I recognize the need to be discreet and careful, but we also need not dally on something so important.  In December of 2015, the International Summit on Human Gene Editing was sponsored by the US National Academy of Sciences, the US National Academy of Medicine, the Chinese Academy of Sciences, and the Royal Society of London.  The planning committee summarized recommendations for “the development and human applications of genome editing,” with agreements made to hold future summits.  The recommendations can be reviewed in an editorial by Theodore Friedmann in Molecular Therapy10.  All I can say is that the sides are talking, and that is important.  The research continues with some controls.

  1. In Jesse’s case, the gene was not inherited but was caused by a mutation in only one cell before birth.  The result was unusual in that not all of his cells were OTC deficient as might be expected if he had inherited the trait.
  2. Mukherjee, S. (2016) The Gene: An Intimate History, Scribner
  3. see ref. 2
  4. see for example, Haas, J. M. (1998) Begotten Not Made: A Catholic View of Reproductive Technology
  5. I was raised a Roman Catholic, and I know that Catholics believe in divine inspiration.  That is, they believe the Pope, with or without the input of his advisers, makes a decision on the morality of an issue with the understanding that the decision is inspired directly by God.  I would hasten to point out that the terrorists who took down the World Trade Center believed they were divinely inspired too, so believing does not make it so.  I sometimes wonder if these men (and I emphasize men because there are no women in the upper echelons of the Holy See) ever wonder whether their opinions are really divinely inspired.  They place a great deal of confidence in a decision that will bring immense misery into the world – consider all those Catholics who refuse to use IVF and have children with serious genetic disorders.
  6. The Cas9 was the protein that performed the cutting.
  7. see Exterminating invasive species with gene drives
  8. Kaiser, J. and Normile, D. (2015) Embryo engineering study splits scientific community: Science, 348, 486-487
  9. Poundstone, W. (1992) Prisoner’s Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb: Anchor Books
  10. Friedmann, T. (2016) An ASGCT Perspective on the National Academies Genome Editing Summit: Molecular Therapy, 24, 1-2

Taking the “pseudoscience” out of fingerprint identification

After the Madrid terrorist bombing on March 11, 2004, a latent fingerprint was found on a bag containing detonating devices.  The Spanish National Police agreed to share the print with various police agencies.  The FBI subsequently turned up 20 possible matches from their database.  One of the matches led them to their chief suspect, Brandon Mayfield, because of his ties with the Portland Seven (Mayfield, a lawyer, had represented one of the seven American Muslims found guilty of trying to go to Afghanistan to fight with the Taliban, albeit in an unrelated child custody case) and his conversion to Islam (Mayfield was in the FBI database because of a 1984 burglary arrest and his military service).  FBI Senior Fingerprint Examiner Terry Green considered “the [fingerprint] match to be a 100% identification”1.  Supervisory Fingerprint Specialist Michael Wieners, Unit Chief of the Latent Print Unit, and John T. Massey, a fingerprint examiner with more than 30 years of experience, “verified” Green’s match, according to the referenced court documents.  Massey had been reprimanded by the FBI in 1969 and 1974 for making “false attributions”, according to the Seattle Times2.  Mayfield was arrested and held for more than two weeks as a material witness but was never charged while the FBI argued with the Spanish National Police about the veracity of their identification.  Apparently the FBI ignored Mayfield’s protests that he did not have a passport and had not been out of the country in ten years.  They also initiated surveillance of his family by tapping his phone, bugging his home, and breaking into his home on at least two occasions3.  All legal under the relatively new Patriot Act.

Meanwhile in Spain, the Spanish National Police had done their own fingerprint analysis and eventually concluded that the print matched an Algerian living in Spain — Ouhnane Daoud.  But the FBI was undeterred.  The New York Times4 reported that the FBI sent fingerprint examiners to Madrid to convince the Spanish that Mayfield was their man.  The FBI outright refused to examine the evidence the Spanish had and, according to the Times, “relentlessly pressed their case anyway, explaining away stark proof of a flawed link — including what the Spanish described as tell-tale forensic signs — and seemingly refusing to accept the notion that they were mistaken.”

The FBI finally released Mayfield and followed with a rare apology for the mistaken arrest.  Mayfield subsequently sued, and American taxpayers shelled out $2 million when the FBI settled the case.  More importantly, the FBI debacle occurred during a debate among academics, government agencies, and the courts about the “error rate” associated with fingerprint analyses5.  But before I address the specific problems with fingerprint identification, let’s talk about the Daubert v. Merrell Dow Pharmaceuticals (1993) court case.  The details are fairly banal and would be meaningless to this essay except for the fact that the case reached the Supreme Court and established what is now referred to as the Daubert standard for admitting expert witness testimony into the federal courts6.  In summary, the judge is responsible (a gatekeeper in Daubert parlance) for making sure that expert witness testimony is based on scientific knowledge7.  Furthermore, the judge must make sure the information from the witness is scientifically reliable.  That is, the scientific knowledge must be shown to be the product of a sound scientific method.  The judge must also ensure that the testimony is relevant to the proceedings, which, loosely translated, means the testimony should be the product of what scientists do – form hypotheses, test hypotheses empirically, publish results in peer-reviewed journals, and determine the error in the method involved when possible.  Finally, the judge should make a determination of the degree to which the research is accepted by the scientific community8.

“No two fingerprints are identical” – it has become almost a law of nature within forensic fingerprint laboratories.  But no one knows whether it is true or not.  That has not stopped the FBI from maintaining the facade.  In a handbook published in 19859 the FBI states: “Of all the methods of identification, fingerprinting alone has proved to be both infallible and feasible”.  I think that fingerprints are an exceptionally good tool in the arsenal of weapons against crime, but it is essentially unscientific to perpetuate claims of infallibility.  The fact is that the statement “no two fingerprints are identical” is logically unfalsifiable10.  And the more scientists argued against the infallibility of fingerprinting, the more the FBI became entrenched in its position, even after the Mayfield mistake11.  Take, for example, what Massey said shortly after the Mayfield case: “I’ll preach fingerprints till I die. They’re infallible12.”  It may be true that no two fingerprints are perfectly alike (I suspect it is true), but it is also true that no two impressions of the same finger are alike.  The National Academy of Sciences asserted that “The impression left by a given finger will differ every time, because of inevitable variations in pressure, which change the degree of contact between each part of the ridge structure and the impressions medium13.”  The point, therefore, becomes not whether all fingerprints are unique but whether laboratories have the ability to distinguish between similar prints, and if they do, what the error is in making that determination.

U.S. District Judge Louis H. Pollak ruled in a January 2002 murder case that fingerprint analyses did not meet the Daubert standards.  He reversed his decision after a three-day hearing.  Donald Kennedy, Editor-in-Chief of Science, opined “It’s not that fingerprint analysis is unreliable. The problem rather, is that its reliability is unverified either by statistical models of fingerprint variation or by consistent data on error rates14 15.”  As one might expect, the response by the FBI and federal prosecutors to Pollak’s original ruling and the subsequent criticism was a united frontal attack based not on statistical analyses verifying the reliability of fingerprint identification but on the supposed infallibility of a process backed by more than 100 years of fingerprint identification conducted by the FBI and other agencies around the world.  The FBI actually argued that the error rate was zero.  FBI agent Stephen Meagher stated during the Daubert hearing16, to Lesley Stahl during an interview on 60 Minutes17, and to Steve Berry of the Los Angeles Times during an interview18 that the latent print identification “error rate is zero”.  How can the error rate be zero when documented cases of error like Mayfield exist?  Even condom companies give the chance of pregnancy when using their product.

In 2009, the National Research Council of the National Academy of Sciences produced a report on how forensic science (including fingerprinting) could be strengthened19.  Perhaps the most eye-opening conclusion of the report is that analyzing fingerprints is subjective.  It is worth quoting their entire statement: “thresholds based on counting the number of features [see diagram below] that correspond, lauded by some as being more “objective,” are still based on primarily subjective criteria — an examiner must have the visual expertise to discern the features (most important in low-clarity prints) and must determine that they are indeed in agreement.  A simple point count is insufficient for characterizing the detail present in a latent print; more nuanced criteria are needed, and, in fact, likely can be determined… the friction ridge community actively discourages its members from testifying in terms of probability of a match; when a latent print examiner testifies that two impressions “match,” they [sic] are communicating the notion that the prints could not possibly have come from two different individuals.”  The Research Council was particularly harsh on the ACE-V method (see the diagram below) used to identify fingerprint matches: “The method, and the performance of those who use it, are inextricably linked, and both involve multiple sources of error (e.g., errors in executing the process steps, as well as errors in human judgment).”  The statement is particularly disconcerting because, as the Research Council notes, the analyses are typically performed by both accredited and unaccredited crime laboratories or even “private practice consultants.”

The fingerprint community in the United States uses a technique known by the acronym ACE-V – analysis, comparison, evaluation, and verification.  I give an example here to emphasize the basic cornerstone of the process, which involves comparison of the friction-ridge patterns on a latent fingerprint to known fingerprints (called exemplar prints).  Fingerprints come in three basic patterns: arches, loops, and whorls, as shown at the top of the diagram.  The objective of the analysis is to find points (also called minutiae) defined by various patterns formed by the ridges.  The important varieties are shown above.  For example, a bifurcation point is defined by the split of a single ridge into two ridges.  I have shown various points on the example fingerprint.  Once these points are ascertained by the examiner, they are matched to similar points in the exemplars according to their relative spatial locations.  It should be obvious that the interpretation of points can be problematic and subjective.  For example, note the circled region where there are many “dots” which may be related to ridges or may be due to contaminants.  There is still no standard in the United States for the number of matching points required to declare a “match” (although individual laboratories do set standards).  Computer algorithms, if used, provide a number of potential matches, and examiners determine which of the potential matches, if any, is correct.  The method appears straightforward, but in practice examiners have trouble agreeing even on the number of points because of the size of the latent print (latent prints typically cover about one fifth of the surface of an exemplar print), smudges and smearing, the quality of the surface, the pressure of the finger on the surface, etc.20  There is another technique, developed in 2005, called the Ridges-in-Sequence system (RIS)21.  For a more detailed description of latent fingerprint matching see Challenges to Fingerprints by Lyn and Ralph Norman Haber22.
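
To illustrate just how bare the point-counting logic is, here is a deliberately naive sketch.  The minutiae coordinates, the distance tolerance, and the notion of “enough” matching points are all invented for the example; real systems also align the prints and compare ridge orientation.

```python
import math

# Deliberately naive point-based comparison: count how many latent minutiae
# have a same-type exemplar minutia within a spatial tolerance.  Coordinates,
# types, and the tolerance are all made up for illustration.
latent = [("bifurcation", 10, 14), ("ridge_ending", 22, 31), ("dot", 40, 12)]
exemplar = [("bifurcation", 11, 15), ("ridge_ending", 24, 30), ("bifurcation", 41, 13)]

TOLERANCE = 3.0  # maximum allowed distance between corresponding points (arbitrary units)

def matched_points(latent, exemplar, tol=TOLERANCE):
    count = 0
    for l_type, lx, ly in latent:
        for e_type, ex, ey in exemplar:
            if l_type == e_type and math.hypot(lx - ex, ly - ey) <= tol:
                count += 1
                break  # each latent point is matched at most once
    return count

print(matched_points(latent, exemplar))  # 2 of the 3 latent minutiae find a neighbor
# Whether two points, or twelve, are "enough" to call a match is precisely the
# unstandardized, subjective threshold discussed above.
```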

Now you might be thinking that the Mayfield case was unusual, given that the FBI and other agencies promote infallibility, but Mayfield seems to be just the tip of the iceberg!  Simon Cole of the University of California, Irvine23 has documented 27 cases of misidentification through 2004 (excluding matches related to outright fraud), and he underscores the high probability of many more undetected cases, because the documented mistakes came to light largely by chance (Cole uses the term “fortuity” to describe the discoveries of misidentification) — particularly since the FBI and other agencies are very tight-lipped about how they arrive at their conclusions when there is a match.  These are quite serious cases involving people who spent time in prison on wrongful charges related to homicide, rape, terrorist attacks, and a host of other crimes.

It is worth looking at the Commonwealth v. Cowans case because it represents the first fingerprint-related case overturned on DNA evidence via the Innocence Project.  On May 30, 1997, a police officer in Boston was shot twice by an assailant using the officer’s own revolver.  The surviving officer eventually identified Stephen Cowans from a group of eight photographs and then from a lineup.  An eyewitness who observed the shooting from a second-story window also fingered Cowans in a lineup.  The assailant, after leaving the scene of the crime, forcibly entered a home where he drank a glass of water from a mug.  The family present in the home spent the most time with the assailant and, revealingly, did not identify Cowans in a lineup.  The police obtained a latent print from the mug, and fingerprint examiners matched it to Cowans24.  The conflicting eyewitness testimony made the fingerprint match pivotal, and it led to a guilty verdict.  After five years in prison, Cowans was exonerated by DNA evidence from the mug showing that he could not have committed the crime.

What do we know about the error (or error rate) in fingerprint analyses?  Recently, Ralph and Lyn Haber of Human Factors Consultants compiled a list of 13 studies (those that met their criteria through mid-2013) that attempt to ascertain the error rate in fingerprint identification25.  In the ACE-V method (see diagram above) the examiner decides whether a latent print is of high enough quality to use for comparison (I emphasize the subjectivity of the examination – there are no rules for documentation).  The examiner can conclude that the latent print matches an exemplar, making an individualization (identification); she can exclude the exemplar print (exclusion – the latent does not match); or she can decide that there is not enough detail to warrant a conclusion26.  The first thing to point out is that no study has been done in which the examiners did not know they were being tested.  This poses a huge problem because examiners tend to call more prints inconclusive when they know they are being tested27.  Keeping that bias in mind, let’s look in detail at the results of one of the larger studies reviewed by the Habers.

The most pertinent extensive study was done by Ulery et al.28.  They tested 169 “highly trained” examiners with 100 latent-exemplar print pairs (randomly mixed for each examiner between pairs that matched and pairs that did not).  Astoundingly, of the pairs that truly matched, only 45% were correctly identified.  The rest were either wrongly excluded (13%) or, in a whopping 42% of cases, judged inconclusive when they should have been matched.  I recognize that when examiners are being tested they have a tendency to exclude prints that they might otherwise attempt to identify, but even with this in mind, the rate is staggering.  How many prints that should be matched are going unmatched in the plethora of fingerprint laboratories around the country?  Put another way, how many guilty perpetrators are set free because of the inability of examiners to match prints?  Regarding the pairs of latent and exemplar prints that did not match, six were individualized (matched) that should not have been — a 0.1% error.  Even if that error is representative of examiners in general (and there is plenty of reason to believe the error rate is higher, according to the Habers), it is too high.  Put another way, if 100,000 prints are matched with a 0.1 percent error rate, 100 individuals are going to be wrongly “fingered” as perpetrators.  And given the way juries ascribe infallibility to fingerprint matches, 100 innocent people are going to jail.
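
The arithmetic behind those last sentences is simple enough to write down.  The 0.1% and 45% figures come from the study as quoted above; the 100,000-comparison volume is just an assumed scale for illustration.

```python
# Back-of-the-envelope scaling of the Ulery et al. figures quoted above.
# The rates are taken from the text; the comparison volume is an assumption.
false_positive_rate = 0.001    # 0.1% of non-matching pairs wrongly individualized
correct_identification = 0.45  # 45% of truly matching pairs correctly identified
missed_matches = 1 - correct_identification

comparisons = 100_000
print(comparisons * false_positive_rate)  # ~100 wrongful "matches"
print(f"{missed_matches:.0%} of genuine matches excluded or called inconclusive")
```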

There are a host of problems with the Ulery study, including many design flaws.  For one thing, the only way to properly ascertain error is to submit “standards” as blinds within the normal process of fingerprint identification (making sure the examiners do not know they are attempting to match known latent prints).  But there are many complications involved in that procedure, beginning with the absence of any agreed-upon standards or even rules for establishing what a standard is29.  I have had some significant and prescient discussions with Lyn Haber on these issues.  Haber zeroed in on the problems at the elementary level: “At present, there is no single system for describing the characteristics of a latent.  Research data shows [sic] that examiners disagree about which characteristics are present in a print.”  In other words, there is no cutoff “value” that determines when a latent print is “of such poor quality that it shouldn’t be used”.  Haber also notes that “specific variables that cause each impression of a finger to differ have not been studied”.

The obvious next step would be to have a “come to Jesus” meeting of the top professionals in the field along with scientists like the Habers to standardize the process.  That’s a great idea, but none of the laboratory “players” are interested in cooperating — they are intransigent.  The most salient point Haber makes, in my opinion, is that various agencies actively want to keep the error rate unknowable.  She states that “The FBI and other fingerprint examiners do not wish error rates to be discovered or discoverable.  Examiners genuinely believe their word is the “gold standard” of accuracy [but we most assuredly know they make mistakes].  Nearly all research is carried out by examiners, designed by them, the purpose being to show that they are accurate. There is no research culture among forensic examiners.  Very very few have any scientific training.  Getting the players to agree to the tests is a major challenge in forensic disciplines.”  I must conclude that the only way the problem will be solved is for Congress to step in and demand that the FBI admit it can make mistakes, work with scientists to establish standards, and adequately and continuously test laboratories (including its own) throughout the country.  While we wait, the innocent are most likely being sent to jail and many of the guilty go free.

A former FBI agent still working as a consultant (he preferred to remain anonymous) candidly told me that the FBI knows the accuracy of the various computer algorithms that match latents to exemplars.  He stated, “When the trade studies were being run to determine the best algorithm to use for both normal fingerprint auto identification and latent identification (two separate studies) there were known sample sets against which all algorithms were run and then after the tests the statistical conclusions were analyzed and recommendations made as to which algorithm(s) should be used in the FBI’s new Next Generation Identification (NGI) capability.”  But when I asked him if the data were available, he said absolutely not, “because the information is proprietary” (the NGI is the first stage in the FBI’s fingerprint identification process – the computer produces a set of closest matches, which are sent along with the latent to the examiners).  The error rate of the computer algorithms should not be proprietary – the public does not need to know the algorithm to understand the error associated with it.

Of course, computer analyses bring an additional wrinkle to the already complex determination of error.  Haber states “Current estimates are such that automated search systems are used in about 50% of fingerprint cases.  Almost nothing is known about their impact on accuracy/error rates.  Different systems use different, proprietary algorithms, so if you submit the same latent to different systems (knowing the true exemplar is in the data base), systems will or will not produce the correct target, and will rank it differently… I am intrigued by the problem that as databases increase in size, the probability of a similar but incorrect exemplar increases.   That is, in addition to latents being confusable, exemplars are.”   I would only emphasize that the FBI seems to know error rates on the algorithms but has not, as far as I know, released that data.

To be fair, I would like to give the reader a view from the FBI perspective.  Here is what the former FBI agent had to say when I showed him comments made by various researchers: “When a latent is run the system generally produces 20 potential candidates based on computer comparison of the latent to a known print from an arrest, civil permit application where retention of prints is permissible under the law etc.  It is then the responsibility of the examiner from the entity that submitted the latent to review the potential candidates to look for a match.  Even with the examiner making such a ‘match’ the normal procedure is to follow up with investigation to corroborate other evidence to support/confirm the ‘match’.  I think only a foolish prosecutor would go to court based solely on a latent ‘match’… it would not be good form to be in court based on a latent ‘match’ only to find out the person to whom the ‘match’ was attached was in prison during the time of the crime in question and thus could not have been the perpetrator.”  Mind you, he is a personal friend whom I respect so I don’t criticize him lightly, but he is touting the standard line.  Haber notes that in the majority of cases she deals with as a consultant “the only evidence is a latent”.

I suspect that the FBI along with lesser facilities does not want anyone addressing error because the courts may not view fingerprints as reliable, no, infallible, as they currently do, and the FBI might have to go back and review cases where mistaken matches are evident.  As a research geochemist I have always attempted to carefully determine the error involved in my rock analyses so that my research would be respected, reliable, and a hypothesis drawn from the research would be based on reality.  We are talking about extraordinary procedures to determine error on rock analyses.  No one is going to jail if I am wrong.  I will leave you with Lyn Haber’s words of frustration: “No lab wants to expose that its examiners make mistakes.  The labs HAVE data: when verifiers disagree with a first examiner’s conclusion, one of them is wrong.  These data are totally inaccessible… I think that highly skilled, careful examiners rarely make mistakes. Unfortunately, those are the outliers.  I expect erroneous identifications attested to in court run between 10 and 15%.  That is a wild guess, based on nothing but intuition!  As Ralph [Haber] points out, 95% of  cases do not go to court.  The defendant pleads.  So the vast majority of fingerprint cases go unchallenged and untested. Who knows what the error rate is?…  Law enforcement wants to solve crimes.  Recidivism has such a high percent, that the police attitude is, If [sic] the guy didn’t commit this crime,  he committed some other one. Also, in many states, fingerprint labs get a bonus for every case they solve above a quota… The research data so far consistently show that false negatives occur far more frequently than false positives, that is, a guilty person goes free to commit another crime.  The research data also show — and this is probably an artifact — that more than half of identifications are missed, the examiner says Inconclusive.  If you step back and ask, Are fingerprints a useful technique for catching criminals, [sic] I think not!  (These comments do not apply to ten-print to ten-print matching.)”

  1. The quote is from a government affidavit – Application for Material Witness Order and Warrant Regarding Witness: Brandon Bieri Mayfield, In re Federal Grand Jury Proceedings 03-01, 337 F. Supp. 2d 1218 (D. Or. 2004) (No. 04-MC-9071)
  2. Heath, David (2004) FBI’s Handling of Fingerprint Case Criticized, Seattle Times, June 1
  3. Wikipedia
  4. Kershaw, Sarah (2004) Spain and U.S. at Odds on Mistaken Terror Arrest, NY Times, June 5
  5. see the following for more details: Cole, Simon (2005) More than zero: Accounting for error in latent fingerprint identification: The Journal of Criminal Law & Criminology, 95, 985
  6. Actually the Daubert standard comes not only from Daubert v. Merrell Dow Pharmaceuticals but also General Electric Co. v. Joiner and Kumho Tire Co. v. Carmichael
  7. I can’t help but wonder what it was based on prior to Daubert.
  8.  It remains a mystery to me as to how a judge would have the training and background to ascertain if an expert witness meets the Daubert standard, but perhaps that is best left for another essay
  9. Federal Bureau of Investigation (1985) The Science of Fingerprints: Classification and Uses
  10. What I mean by unfalsifiable is that even if we could analyze all the fingerprints of all living and dead people and found no match, we still could not be absolutely certain that someone might be born someday with a fingerprint that would match someone else.  Some might think that this is technical science speak but in order to qualify as science the rules of logic must be rigorously applied.
  11. Cole, Simon (2007) The fingerprint controversy: Skeptical Inquirer, July/August, 41
  12. Scarborough, Steve (2004) They Keep Putting Fingerprints in Print, Weekly Detail, Dec. 13
  13. National Research Council of the National Academies (2009) Strengthening Forensic Science in the United States: A Path Forward: The National Academy of Science Press
  14. Error rate as used in the Daubert standard is somewhat confusing in scientific terms.  Scientists usually determine the error in their analyses by comparing a true value to the measured value, inserting blanks that measure contamination, and usually doing up to three analyses of the same sample to provide a standard deviation about the mean of potential error for the other samples analyzed.  For example, when measuring the chemistry of rocks collected in the field, my students and I have used three controls on analyses: 1) standards, which are rock samples with known concentrations determined from many analyses in different laboratories by the National Institute of Standards and Technology, 2) what are commonly referred to as "blanks" (the geochemist does all the chemical procedures she would do without adding a rock sample in an attempt to measure contamination), and 3) analyzing a few samples up to three times to determine variations.  All samples are "blind" – unknown to the analyzers.  The ultimate goal is to get a handle on the accuracy and precision of the analyses.  These are tried and true methods and, as I argue in this essay, a similar approach should be taken for fingerprint analyses.
  15. Kennedy, Donald (2003) Forensic science: Oxymoron?, Science, 302, 1625.
  16. Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) 509 US 579, 589
  17. Stahl, Lesley (2003) Fingerprints, 60 Minutes, Jan. 5.
  18. Berry, Steve (2002) Pointing a Finger: Los Angeles Times, Feb. 26.
  19. see ref. 13
  20. Haber, L. and Haber, R. N. (2004) Error rates for human latent fingerprint examiners: In Ratha, N. and Bolle, R., Automatic Fingerprint Recognition Systems, Springer
  21. Ashbaugh, D. R. (2005) Proposal for ridge-in-sequence: http://onin.com/fp/ridgeology.pdf
  22. Haber, L. and Haber, R. N. (2009) Challenges to Fingerprints: Lawyers & Judges Publishing Company
  23. see ref. 5
  24. One of the biggest criticisms of the fingerprint community comes from the lack of blind tests — fingerprint analyzers often know the details of the case.  Study after study has shown that positive results are obtained more frequently if a perpetrator is known to the forensic analyzers – called expectation bias: see, for example, Risinger, M. D. et al. (2002) The Daubert/Kumho Implications of observer effects in forensic science: Hidden Problems of Expectation and Suggestions, 90 California Law Review
  25. Haber, R. N. and Haber, N. (2014) Experimental results of fingerprint comparison validity and reliability: A review and critical analysis: Science and Justice, 54, 375
  26. see The Report of the Expert Working Group on Human Factors in Latent Print Analysis (2012) Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach: National Institute of Standards and Technology
  27. see ref. 24
  28. Ulery, B. T., Hicklin, R. A., Buscaglia, J., and Roberts, M. A. (2011) Accuracy and reliability of forensic latent fingerprint decisions: Proc. National Academy of Science of the U.S.
  29. see ref. 14

The asbestos coverup

When the World Trade Center was being built in 1973, Dr. Irving Selikoff, an expert on asbestosis and the cancers caused by asbestos, was an outspoken critic of the wholesale spraying of the floors of the two structures with insulation containing copious quantities of asbestos for fire-proofing.  He knew the deadly hazards of asbestos, as did the asbestos industry.  Fortunately, not all floors were insulated, because New York City instituted a ban on the spraying of asbestos that same year.  Fast forward almost 30 years to the plumes of dust that rolled over lower Manhattan after the collapse of the World Trade Center towers on 9/11.  The brave souls who rushed to help survivors and participate in the cleanup, along with the many people who lived and worked in the area, were exposed to one of the most serious carcinogens ever documented – asbestos in its many forms.  One of the most deadly results of inhaling the tiny asbestos fibers that permeated the World Trade Center clouds is the nearly always fatal cancer mesothelioma (known to be caused only by asbestos).  Unfortunately, the cancer often shows up decades after exposure.  What many people do not realize is that asbestos still has not been banned in the United States even though the asbestos industry has known internally since at least the 1930s that it was not only harmful but deadly.  The asbestos executives and their hired doctors promulgated a disinformation campaign that asbestos was and is harmless, knowing full well that these claims were patently wrong1.

Selikoff first came to prominence in 1964 when he organized an international symposium on the "Biological Effects of Asbestos" through the New York Academy of Sciences.  Selikoff, through his position as the director of the Environmental Sciences Laboratory at the Mount Sinai Hospital in New York, was able to persuade the International Association of Heat and Frost Insulators & Asbestos Workers union to provide him with workers' medical profiles2.  He presented four papers at the conference on the results of his epidemiological studies of the union workers.  There was no mistaking his results — working with asbestos insulation increased deaths by 25 percent, not only from mesothelioma but from asbestosis, lung cancer, and even cancers of the stomach, colon, and rectum.  His independent research could not be buried by the asbestos industry as they had buried their own subsidized research, and Selikoff's results were reported widely in the press.  Selikoff's team even found that insulation workers who smoked were ninety times more likely to get some form of asbestos-related cancer than workers who did not smoke.

I don't want to appear sanctimonious, but the dangers of asbestos that Selikoff and others reported in 1964 should have given the asbestos industry pause – maybe even forced them to attempt to improve working conditions.  But as in other industries facing similar threats, the asbestos executives circled the wagons and then went on the offensive.  The Asbestos Textile Institute's lawyers (the asbestos industry's public relations arm for promoting asbestos products) sent letters to the New York Academy of Sciences and Selikoff warning them about the impact of their "damaging and misleading news stories".  Their smear campaigns began by attacking Selikoff's medical credentials and the quality of his work.  For years, the asbestos industry stalked Selikoff and others at conferences and meetings attempting to undermine their work.  More details can be found in Jock McCulloch and Geoffrey Tweedale's outstanding book entitled Defending the Indefensible: The Global Asbestos Industry and its Fight for Survival.

It is astounding the lengths the asbestos industry went to in order to suppress information it deemed adverse and to circulate disinformation cranked out by its hired doctors and researchers.  Asbestos executives also turned to the largest public relations firm in the world – Hill & Knowlton – a sort of hit squad with a ubiquitous presence in undermining science damaging to its clients, which included Big Tobacco3.  But in what can only be described as turpitude, the companies led these disinformation campaigns while laborers in a whole slew of industries from mining to textiles worked in deplorable conditions that caused sickness and death.  At the Libby mine in Montana, for example, fibrous asbestos dust was so thick in some areas of the open-pit mine that workers could hardly see each other.  The dust blew into the nearby town, causing asbestos illness and death among residents (the Libby mine was eventually closed due to the huge number of tort claims by families struck by illness and death related to the operations).  It was common for the industry to fire workers who developed asbestosis or cancer to avoid the appearance of illnesses related to asbestos.  When it became clear to the industry that mesothelioma was a serious public relations nightmare, its public relations machine went into full overdrive focusing on two strategies: 1) reassuring people that asbestos-related diseases were caused only by the inhalation of large amounts of fiber dust over long periods of time (internal memorandums clearly show that the companies involved knew this was not true), and 2) foisting the argument on the public that mesothelioma was the result of blue asbestos and that other types of asbestos, such as chrysotile, were safe (once again, internal memorandums show that the companies knew this to be patently untrue).

The diagram below shows the world production numbers for asbestos from 1900 through 2015.  One might think that the asbestos industry would have been crippled by Selikoff's research reported in 1964.  But production actually increased through the 1960s and went on increasing into the late 1970s before tort claims began to impact the industry.  Even today, worldwide production has not dropped below the early 1960s output, due mostly to production in developing nations.  The diagram is a testimonial to the asbestos industry's success in undermining solid scientific research with political clout and the financial resources to promote its agenda – asbestos is safe.  We have seen the same thing in many other industries, like Big Tobacco with smoking and Exxon with global warming.  McCulloch and Tweedale make a salient point: "Put another way, nearly 80 per cent [sic] of world asbestos production in the twentieth century was produced after the world learned that asbestos could cause mesothelioma!"

World asbestos production, 1900 through 2015.  Data from Virta4 for 1900 through 2003, Virta for 2004 through 2006 (consumption), and Statista for 2007 through 2015.

Imagine that you are the mayor of a small town dependent on tourism, and doctors in the village are reporting an outbreak of a bacterial disease that is killing 40 percent of those infected.  You decide that reporting the disease to the CDC or WHO would harm the financial health of your town, and you seek to suppress the seriousness of the outbreak.  You tell tourists they have nothing to worry about and chastise the local news affiliates, telling them they are acting hysterically and causing undue panic.  Would anyone deny that you are guilty of a serious criminal act?  This is essentially what the asbestos industry did over many decades, and yet no one in the asbestos industry has served a day of jail time for their actions.  In fact, they were so successful in their disinformation campaign that even today, as mentioned above, asbestos is not banned in the US even though cheap substitutes exist and asbestos has been banned in other industrial nations such as France and Britain.  I asked Dr. Jock McCulloch why, and his response is telling: "There is no easy answer to your question nor to the adjacent one as to why 2 million tons of asbestos will be mined and used globally during 2016. One of the key factors has been the corporate corruption of the science (which began in the 1930s) and the other is the baleful behaviour of Canada at international forums- due in the main to federal/Quebec politics. And then there is Russia, its political climate and anti-western reflexes."  Both Canada and Russia have been and are huge producers of asbestos, and Canada, with the help of scientists at McGill University funded by the asbestos industry (one of the reasons why scientists should remain independent in their research), has been instrumental in persuading other governments to act gingerly against asbestos interests.

Distressing research now shows that even trivial exposure to asbestos can cause cancers.  The Harvard paleontologist Stephen Jay Gould died of cancer caused by asbestos fibers, perhaps from asbestos within ceiling tiles.  Actor Steve McQueen died at age 50 from mesothelioma, probably from asbestos exposure when he worked in a brake repair shop (brake linings contain asbestos).  Many instances of cancer among family members of miners and other laborers in the asbestos industry have been attributed to exposure to asbestos fibers brought home on clothing.  I think about the lives destroyed by asbestos when I read the words of McCulloch and Tweedale:  "Central to the strategy was a policy of concealment and, at times, misinformation that often amounted to a conspiracy to continue selling asbestos fibre irrespective of the health risks."  I might add that attempts to force the asbestos industry to warn its workers about the dangers of asbestos were averted.  And although most mining and manufacturing has moved out of industrialized nations, the developing world has picked up the slack — places like Swaziland where laborers have few protections and little legal recourse for compensation from asbestos illnesses.  Records turned up through litigation show that industry officials thought black workers were far less sophisticated than those in the US or Europe about hazards to their health and sought to take advantage of them.

Stephen Jay Gould and Steve McQueen

Sadly, the large asbestos companies (18 in all) were able to avoid paying thousands of tort claims in the US by declaring bankruptcy through Chapter 11.  Bankruptcy normally implies that a company is insolvent, but due to the Manville Amendment passed by Congress in 1994 to help the asbestos industry, companies only need to show that future liabilities exceed the assets of the company in order to declare bankruptcy.  The insurance companies pulled a similar "fast one" by shuttling liabilities into shell companies that also declared bankruptcy.  I am very much for free and open trade, but companies should be held responsible for travesties, and the bankruptcy claims are tantamount to highway robbery in my humble opinion.  Many of those who lost out on benefits and claims were already on the edge of poverty from unemployment and the medical costs of their ailments.  I might also point out that the American taxpayer is the ultimate source of support for these workers and their families because the asbestos companies were able to weasel their way out of their responsibilities to their employees and/or those harmed by their products.  It may be important to remind the reader that an estimated 15 to 35 million homes contain Libby asbestos as insulation.  Asbestos is a problem that is not going away quickly.

I understand that industries like asbestos employ a large number of people (at one time in the 1960s, more than 200,000 people worked in the asbestos industry) and that many of these workers would have difficulty finding new jobs elsewhere if the industries were closed overnight.  But there are various steps that should be taken, based on what we have learned from the asbestos travesty, when future industries are found to be responsible for harm to their workers.  1) It should be a crime to purposely mislead the public and/or workers on safety issues of products.  This must include the purposeful undermining of peer-reviewed science.  The penalties should be stiff and include jail time.  Laws need to be enacted accordingly.  2) Workers and their families need to be informed of the dangers in clear language so that they may decide whether they wish to take the risk of continued employment in the industry.  3) In cases like asbestos where the product is clearly a dangerous hazard, it should be phased out by substitution of other products and eventually banned.  4) Workers and those impacted by the product should be entitled to compensatory damages through the establishment of funds in negotiations with the government.  5) And finally, American companies should be prohibited from moving their operations to nations with lax laws that permit workers to be exposed to the hazardous products.  If corporate America can't police itself (and I don't think it can, based on the tales of woe involving tobacco, pesticides, global warming, etc.), the government must step in.

  1. McCulloch, J. and Tweedale, G. (2008) Defending the Indefensible: The Global Asbestos Industry and its Fight for Survival: Oxford University Press
  2. Selikoff recruited Dr. E. Cuyler Hammond who had already published his landmark research on the link between smoking and lung cancer
  3. Oreskes, N. and Conway, E. M. (2010) Merchants of Doubt: Bloomsbury Press
  4. Virta, R. L. (2006) Worldwide Asbestos Supply and Consumption Trends from 1900 through 2003: USGS Circular 1298

The new black gold – fracked methane gas and oil

The term fracking conjures up so many knee-jerk-bad reactions that I am hesitant to broach the subject.  I suppose if I am going to wade into the topic I should give some bona fides to display my knowledge of the petroleum industry, but not too many bona fides, lest I be seen as a talking wonk for the gas industry.  I worked for one year as an engineer for a well service company called Schlumberger (the world's largest) and two years as a geologist with Shell Oil.  Shell gave its geologists full responsibility for a well from the time it was proposed to production if it hit oil.  Of the 11 wells I proposed, 3 hit oil, which was above the industry average for producing fields in the late 1970s and early 1980s.  Eventually I realized my calling was in teaching and research and left to go back to school for my PhD.  But not before I got a pretty good idea of how the industry works.

The process of drilling is not complicated, although the devil can be in the details.  A rig contains strings of thirty-foot drill pipe which attach to a tri-cone tungsten carbide bit (see the image below).  The bit spins from drives or motors as drilling fluid, called mud (contents vary, but clay, water, and lubricants are typical), is pumped through the pipe string to keep the bit cool, increase pressure, and bring the rock debris from drilling back to the surface along the outside of the pipe.  One of the technological marvels developed in modern times is the ability to direct the drill bit to specific locations with pin-point accuracy by knowing where the bit is in three-dimensional space, usually thousands of feet below the surface.  Directional surveying is complex, but it boils down to converting measurements taken by instruments while drilling (depth along the hole, inclination, and azimuth) into a three-dimensional position.  These advances have enabled horizontal drilling, which has become important in fracking.
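
To give a flavor of how those measurements become a position, here is a minimal sketch in Python of the minimum-curvature method commonly used in directional surveying; the two survey stations at the end are made-up numbers for illustration, not data from any real well.

```python
import math

def minimum_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """Change in (north, east, true vertical depth) between two survey
    stations using the minimum-curvature method.
    Angles are in degrees, depths in feet."""
    i1, i2 = math.radians(inc1), math.radians(inc2)
    a1, a2 = math.radians(azi1), math.radians(azi2)
    dmd = md2 - md1
    # Dogleg angle between the two station directions
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)  # ratio factor
    dn   = 0.5 * dmd * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    de   = 0.5 * dmd * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    dtvd = 0.5 * dmd * (math.cos(i1) + math.cos(i2)) * rf
    return dn, de, dtvd

# Hypothetical pair of stations while building angle toward the horizontal
print(minimum_curvature_step(8000, 60, 45, 8090, 75, 47))
```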

Tri-cone drill bit (photo: Rama, Wikipedia)

I would be remiss not to emphasize the importance given to protecting the water table when drilling.  State and federal regulations require the well to be sealed off to at least 50 feet below the level where potable groundwater can be produced, and those laws have been in place as far back as anyone can remember.  The drill pipe is tripped (pulled completely out of the hole) when regulators deem the surface casing should be set to protect the water table (usually something on the order of 500 feet).  The casing is cemented in place, and if this is done correctly, we know from the drilling of hundreds of thousands of wells over many decades that the water table is protected.  After the surface casing is set, drilling continues until the target zone is reached.  The pipe is tripped again and the entire well is generally set with cemented production casing.  The hole is plugged at the bottom, usually up to 50 feet below the horizon of interest.  The casing is then perforated by tools that blow holes in it precisely where the rock containing oil and/or gas exists.  Lisa Margonelli has written an excellent book entitled Oil on the Brain about the details of drilling and its impact on the politics of many countries like Nigeria and Venezuela1.

When I worked for Schlumberger, it was my job to determine whether production casing should be set by running tools in the hole.  The measurements produced records called well logs that gave us information not only about the rock below but about whether it contained producible oil.  Drilling is a chancy business, not for the faint of heart.  Most wells never produce a drop of oil.  I have seen many an owner of a wildcat well near tears as he realized from the logs that the well was a "duster".  That has changed to a great extent in the new world order of gas and oil production through fracking.  The new targets — usually oil shales — were discovered decades ago by previous drilling.  They were ignored because shales do not naturally flow under the pressures at depth.  Shale is very porous but not permeable.  You need permeable rocks to produce oil and/or gas, or so it was thought.

That was before Mitchell Energy, a midsized exploration and production company, drilled the S. H. Griffin #4 well into the oil- and gas-rich Barnett Shale of North Texas in 1997.  They used fracking techniques to produce large quantities of methane gas from what was traditionally seen as non-producible rock.  If you are interested in more of the details, read Gary Sernovitz's immensely entertaining and witty book The Green and the Black2.  Sernovitz, despite his ties to the petroleum industry, takes a rather neutral approach to adjudicating the brouhaha over fracking.  One of the highlights of the book is his look at the impacts of the new United States gas and oil reserves on the political and economic scene.

The S. H. Griffin #4 not only produced gas, it produced it in steady quantities (1.5 million cubic feet per day).  So how does fracking make an otherwise impermeable rock produce as if it were a well at the height of the 1960s oil boom in the United States?  Fracking sounds ominous and sinister and conjures up visions of rock being fractured all the way up to potable water zones.  But it is nothing of the sort — pure fiction.  The technique took decades of testing and experimentation in wells to develop.  The secret is hydraulic pressure from fluids injected into the well to cause the shale to fracture.  The fracturing is usually limited to an outward radius of about 300 feet around the drill hole.  And don't forget, the drill holes typically go down thousands of feet below the surface and are protected with cemented casing that has been perforated only in small sections, usually at the bottom of the hole where the target rock exists.
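
As a quick sanity check on that geometry, the sketch below uses assumed round numbers (a 7,000-foot shale target and potable water no deeper than 1,000 feet) rather than figures from any particular well, simply to show the scale of separation implied by a 300-foot fracture radius.

```python
# Illustrative geometry of a fracked well versus the water table.
# Both depths are assumptions chosen as round numbers, not data for a real well.
target_depth_ft  = 7_000  # assumed depth of the shale being fracked
frack_radius_ft  = 300    # upper bound on fracture growth quoted above
deepest_water_ft = 1_000  # assumed deepest potable groundwater

closest_fracture_ft = target_depth_ft - frack_radius_ft
separation_ft = closest_fracture_ft - deepest_water_ft
print(f"Fractures reach no shallower than ~{closest_fracture_ft:,} ft, "
      f"leaving ~{separation_ft:,} ft of rock between them and potable water.")
```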

It did not take companies long after fracking became successful to incorporate horizontal drilling, another United States technological advance, into the new smorgasbord of production proficiencies.  With the ability to steer a bit to within inches of a desired location, drillers learned how to gradually arc a pipe into the horizontal (see image below).  The technology turned out to be a bonanza when combined with fracking.  Companies drilled and set casing directly within and parallel to the oil shales, enabling them to frack large sections of the rock, which sent production through the roof.

Hydraulic fracturing-related activities (EPA)

The chemicals used in fracking were originally a trade secret, but people talk, and once the word was out, companies like Halliburton published the composition of their fracking fluids.  It turns out that 90 percent of the frack fluid is water, 9.5 percent is a proppant (usually sand), and only 0.5 percent consists of the scary chemicals so often used to undermine the industry.  The sand serves as a support to keep the fractures (caused by the pressurized fluid) propped open so gas and/or oil will flow.  I am not going to pull punches here: it takes a lot of water to frack a well.  Sernovitz estimates that a typical frack (an average of 22 stages) uses between 4 and 8 million gallons of water and about 6 million pounds of sand.  Unfortunately, not all of the fracking fluid stays in the hole.  Some resurfaces.  Today the water that comes back is reused or disposed of by pumping it into former producing fields in a concerted effort to make sure the chemicals within the water (even if they are only 0.5%) are placed out of harm's way.
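
To put those percentages in perspective, here is a small back-of-the-envelope calculation in Python; it treats the quoted percentages as volume fractions and takes the midpoint of Sernovitz's water range, both simplifying assumptions.

```python
# Rough scale of one frack job, treating the 90 / 9.5 / 0.5 percent figures
# quoted above as volume fractions (an approximation for illustration).
water_gal = 6_000_000                    # midpoint of the 4-8 million gallon range
total_fluid_gal = water_gal / 0.90       # implied total fluid volume
proppant_gal  = total_fluid_gal * 0.095  # sand share (by the quoted percentage)
chemicals_gal = total_fluid_gal * 0.005  # the 0.5% chemical additives
print(f"Total fluid: ~{total_fluid_gal:,.0f} gal; "
      f"proppant share: ~{proppant_gal:,.0f} gal; "
      f"additives: ~{chemicals_gal:,.0f} gal")
```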

It has been widely reported that fracking causes earthquakes.  Actually, it is the disposal of wastewater pumped into the ground (usually wastewater from fracking) that causes the seismic activity.  Perhaps it seems like a trivial difference, but the public seems to have the idea that the pressure from fracking is so great that it directly causes earthquakes.  The increase in seismic activity in a state like Oklahoma is usually effectively mitigated by diverting the injection of water away from the fields responsible for the activity or by requiring the water to be disposed of via other methods.  There can be little doubt that the earthquakes are associated with well injection, and regulatory commissions need to fully address the problem.

The HBO premiere of Gasland, a 2010 documentary about the natural-gas industry in general and fracking in particular, was probably responsible, at least in part, for New York State banning fracking and for a great deal of misunderstanding about natural gas and its impact on the environment.  I have two conflicting opinions about the documentary by Josh Fox.  1) It is clearly tarnished with misrepresented science, almost hysterical overreaction, and historical inaccuracies.  The documentary has been thoroughly taken to task by Energy in Depth.  2) Having said that, there is no question that it is emotionally moving.  It was difficult to watch people whose lives have been badly impacted by the failures of the gas industry.  My conclusion — Gasland was necessary to open a national debate about the issue, which has led to more government oversight and fewer rogue shortcuts leading to serious problems.  Although there will always be problems associated with any industry, drilling for natural gas and/or oil on land in the United States poses relatively little risk to groundwater.  We simply have to make sure that casing practices are properly implemented.  Water taps catching fire in Dimock, Pennsylvania, happened because of sloppy cement work and poor casing in 27 holes during the early days of drilling in the state (gas leaked through the casing into the surrounding water table).  I find it reprehensible that companies would not protect the water table at all costs and fully agree that the companies cited deserve the penalties they received and the payouts they had to make to the people they injured.

Finally, I need to emphasize that in 2015 the Environmental Protection Agency (EPA) issued a summary assessment entitled Assessment of the Potential Impacts of Hydraulic Fracturing for Oil and Gas on Drinking Water Resources and concluded that "Assessment shows hydraulic fracturing activities have not led to widespread, systemic impacts to drinking water resources".  We can conclude that the gas industry has made mistakes, but we cannot contend that our drinking water is in danger because of fracking, despite claims to the contrary in sources like Gasland.

Let's not forget why Fox started filming the documentary – to protect his vacation home in a pristine part of Pennsylvania near the border with New York.  I get it.  No one wants a drill rig in their back yard, even if it is only there for 40 days' worth of drilling.  By the way, if you want to read a reasoned and enlightening book about how people are adversely affected by drilling, I recommend Seamus McGraw's The End of Country: Dispatches from the Frack Zone3.  He weighs the potentially bad impacts of drilling with a healthy dose of understanding that gas and oil companies are filling a demand created by the United States and other world consumers.  Unfortunately, Fox never examines the financial impacts of shutting down the fracking industry.

I recently wrote an article on the serious implications of global warming, particularly the increase of anthropogenic gases in our atmosphere.  Of the three major fossil fuels, coal is by far the worst emitter of carbon dioxide, followed by petroleum; natural gas is the least (see the figure below showing the effects of anthropogenic gases as radiative forcing).  In fact, Sernovitz has emphasized that "the United States has led the world in carbon dioxide emissions reduction because of shale gas [use of methane gas instead of coal]".

Radiative forcing of anthropogenic gases (IPCC Fifth Assessment Report, 2013)

It would be unfair not to point out that methane also leaks directly into the atmosphere during the production of natural gas, contributing to anthropogenic greenhouse gases (as methane), but according to the EPA in a report entitled Overview of Greenhouse Gases: "Methane (CH4) emissions in the United States decreased by 6% between 1990 and 2014."  During the period from 2007 to 2014, natural gas production increased tenfold according to the US Energy Information Administration database.  The EPA goes on to comment that "During this time period [1990 to 2014], emissions increased from sources associated with agricultural activities, while emissions decreased from sources associated with the exploration and production of natural gas and petroleum products."  Note the lack of effect from the natural gas boom between 2007 and 2014 in the graph below showing total United States methane emissions (converted to carbon dioxide equivalents).  In a paper funded by the green-friendly Environmental Defense Fund (EDF) and published in the Proceedings of the National Academy of Sciences, Allen et al.4 estimated from measurements at 190 onshore gas sites that about 0.42 percent of the methane produced leaks during the drilling and completion of the wells.  The EPA is working with the gas companies to further reduce this figure, but, once again, the leakage is hardly having the impact sources such as Gasland have portrayed.
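
For a sense of what a 0.42 percent leak rate means in volume terms, here is a small illustrative calculation; the production figure is an assumed round number, not an EIA or EPA statistic.

```python
# Scale of the 0.42% leakage estimate from Allen et al.  The production figure
# below is an assumed round number chosen only for illustration.
leak_fraction = 0.0042
assumed_production_bcf = 1_000   # billion cubic feet of gas produced
leaked_bcf = assumed_production_bcf * leak_fraction
print(f"At 0.42%, about {leaked_bcf:.1f} bcf of methane would leak "
      f"for every {assumed_production_bcf:,} bcf produced")
```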

Total U.S. methane emissions over time (EPA)

The oil production in thousands of barrels per day since 1966 from the top ten oil producing countries (as of 2015) is shown in the diagram below.  One of the most startling aspects of the graph is that the United States has become the world's largest producer of oil.  It's not Saudi Arabia or Russia, it's the United States.  What is even more remarkable is that our world lead came through good old-fashioned American know-how — the technology that enabled United States producers to frack horizontally.  I am no flag waver, but there is no denying how the United States has transformed itself.  The halcyon days of the 1960s when the United States led production worldwide were thought to be gone forever (see figure).  By the early 1980s, even secondary recovery processes in declining oil fields could not boost American production.  Our decline in oil production continued until about 2005, when fracking began to be felt.  The dramatic impact of that technology can be seen in the subsequent rise in production over the last 10 years in the graph below.  Our increased production still does not meet our ever-increasing demand, but it helps our trade deficit and decreases our dependence on oil from the troubled Middle East and a hostile Russia.  Along with the increase in oil production, we have also become the world's leader in the production of natural gas (don't forget that both oil and natural gas have less impact on climate change than coal).

Oil production in thousands of barrels per day from the top ten producing countries (data from BP)

I asked Gary Sernovitz what he thought about America’s new role as a leading oil and natural gas producer: “One of the strange things about the gas boom is that even as prices have gone down, and activity has gone down (because of low prices), volumes have still gone up—a credit to how productive have been [sic] the wells in the Northeast US.  This year [2016] gas production is down slightly, but we’re still producing 34% more than the Russians so no risk of losing our crown. 2015 was the year that we exceeded Saudi Arabia in total oil production, and became the world’s largest oil producer. We’ve temporarily lost that crown in 2016, but I’d expect [our] prices to recover for that leadership to happen again soon.  And I do think we’re still by far the largest oil and gas producer, despite the dip in oil production because of prices, as we’re far ahead of Russia on oil now too.”

So I would like to summarize the article by stating categorically that we need to curb anthropogenic gases (carbon dioxide, methane, etc.).  But attempting to shut down the oil and gas industry in the United States because of fracking and/or to solve the climate change problem is like trying to take out a drug cartel to stop drug usage in the United States.  The only way we are going to reduce our dependency on oil and gas is to reduce the ever-increasing demand for it.  Fracking is relatively safe for the consumer and looks to be giving America another chance to remain less dependent on other suppliers while we find alternative sources to replace, or at least curb, America's craving for energy.

  1. Margonelli, L. (2007) Oil on the Brain: Adventures from the Pump to the Pipeline: Doubleday
  2. Sernovitz, G. (2016) The Green and the Black: The Complete Story of the Shale Revolution, the Fight over Fracking, and the Future of Energy: St. Martin's Press
  3. McGraw, S. (2011) The End of Country: Dispatches from the Frack Zone: Random House
  4. Allen, D. T. et al. (2013) Measurements of methane emissions at natural gas production sites in the United States: Proceedings of the Natl. Acad. Science: 110, 17768–17773

Diamond rush

For centuries, a few lucky souls have stumbled on diamonds in glacial debris around the Great Lakes and farther north into Canada.  Geologists have known that the sources of those diamonds represented a vast wealth of hidden treasure somewhere in the frozen tundra of northern Canada, but it was not until the late 1980s that a couple of cowboy geologists, Chuck Fipke and Stewart Blusson, painstakingly ferreted their way back to the source. But I am getting way ahead of the story.

Diamonds are brought to the surface from deep within the upper mantle by unusual igneous rocks called kimberlites (and sometimes lamproites).  I recognize I run the risk of losing my readers by delving into the nature of kimberlites, but to a geologist like myself kimberlites are crazy rocks.  Typical magmas (and lavas) like basalt form by partial melting of the mantle.  Kimberlites, on the other hand, are geologically unique: although they too form from partial melting of the mantle, the melting is significant enough that these rocks compositionally resemble (though not precisely) the mantle itself.  They are referred to as ultramafic rocks, as compared with basalts, which are mafic (mafic means rich in magnesium and iron – two of the most abundant elements in the mantle).

Diamonds actually don't form in kimberlites.  Think of kimberlites as a conveyor belt bringing diamonds, which form under high temperatures and pressures at depths of about 125 to 175 kilometers1, to the surface relatively fast, before they can reequilibrate (break down) into other compounds like graphite or carbon dioxide.  Diamonds are not forever.  Many an exploration program has had its hopes dashed with the discovery of kimberlite full of octahedral or other cubic forms of graphite — degraded diamonds2.

Exploration for diamonds can be excruciatingly frustrating.  There are 6,400 known kimberlite pipes worldwide but only 30 or so have become viable mines — about a 0.5% chance that a discovered kimberlite will turn into a producing mine.  It's true, diamondiferous kimberlites are hard to find, but you don't need many diamonds to make a mine.  High-grade diamond kimberlites contain only a few carats per ton of rock.  That's enough to make any geologist rich beyond her dreams.  Kimberlites form at depths greater than 200 kilometers (200 to 600 km) and are enriched in volatiles (e.g., carbon dioxide and water) that make the magmas not only buoyant but explosive.  They literally "blow" through the upper mantle and crust in perhaps a matter of hours (postulated ascent rates are about 14 km/hr), forming carrot-shaped pipes called diatremes (see the diagram below).  The faster the better for diamond preservation.  But they also have to pick diamonds up along the way or incorporate them as the magma forms.  Kimberlites can contain as much as 25 to 50 percent entrained rock within their magma, acting as an elevator that brings mantle material to the surface and helps geologists understand the mantle3.
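
As a quick check on "a matter of hours," here is the simple arithmetic implied by the numbers above, with the two depths taken from the ends of the quoted 200 to 600 km range.

```python
# How long is "a matter of hours"?  A quick check using the figures quoted above.
ascent_rate_km_per_hr = 14       # postulated ascent rate
for depth_km in (200, 600):      # the depth range where kimberlites form
    hours = depth_km / ascent_rate_km_per_hr
    print(f"From {depth_km} km: roughly {hours:.0f} hours to the surface")
```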

Cross-section of a volcanic (kimberlite) pipe, or diatreme (Wikipedia)

After half a century or more of serious diamond exploration, we have learned that diamond-bearing kimberlites form below the cratons.  The cratons are the ancient regions of continents containing rocks greater than 2 billion years old.  There is still great debate about how the cratons formed, but every continent is rooted in these ancient environs.  If you are looking for diamonds, go to the cratons.  Before the 1980s, diamond kimberlite mines had been developed on the cratons of every continent except Antarctica and North America.  Diamonds come from two major source rocks: mantle rock (e.g., peridotite) and eclogite (metamorphosed basalt).  Diamond formation in peridotites occurred primarily in the Archean, centered on a time about 3 to 3.3 billion years ago, though some dates are as young as 1.9 billion years ago.  Eclogite diamonds tend to be younger, from 1 to 2.9 billion years ago.

Where does the carbon come from to form diamonds?  No one knows for sure, but most researchers think that the carbon, along with sediments and volatiles, was subducted through plate tectonics (the eclogites brought up by kimberlites are likely ancient subducted ocean floor)4.  I am interested, through my own research, in how the cratons formed and when subduction began.  Many geologists pooh-pooh the idea that subduction could have begun so early in earth history, so it is satisfying to see how diamond research supports the early existence of plate tectonics and subduction.  My colleagues and I have contended for years that the cratons are the result of ancient subduction.

Imagine Chuck Fipke in the 1980s looking out over the vast expanses of northern Canada contemplating all the diamonds he believed had to be out there in the craton hidden below tons of glacial deposits.  Those damnable glacial deposits were the reason no one had discovered pipes in Canada5.  The map below shows the furthest extent of the glaciers 17,000 years ago and the site of the diamond pipes eventually discovered.  Fipke also had to contend with De Beers, the giant cartel that controlled the world’s diamond markets. They were actively exploring with their practically unlimited resources.  I worked for De Beers as a consulting geologist for a time in the mid 1990s in Russia, and I can assure you, they are a force to be reckoned with.

Furthest extent of the glaciers 17,000 years ago and the site of the diamond pipes eventually discovered (base map from Wikipedia)

By the mid 1980s, geologists had discovered that the mantle material brought up by kimberlites could aid them in their exploration, thanks to a geochemist named John J. Gurney at the University of Cape Town.  Diamonds form in equilibrium, at specific temperatures and pressures, with other minerals far more abundant than diamonds.  Gurney, funded by Superior Oil, analyzed extensive mineral assemblages from kimberlites with and without diamonds and found that there are chemical signatures in the minerals that show up when diamonds are present.  One of the more famous diagrams is that of the chromium and calcium concentrations in garnets from the mineral assemblages.  Garnets fall into two groups on the diagram, called G10 and G9, and virtually all garnets that occur with diamonds fall within the G10 field shown below.  As mentioned before, diamonds can reequilibrate in kimberlites and become graphite or evaporate away as carbon dioxide.  The diagram shows the line of stability under chromium saturation where diamonds will break down.  Some diamonds remain stable in the graphite field because the conditions do not last long enough to degrade them.  But if G10 garnets fall above the diamond-graphite equilibrium line, it is a pretty sure bet you are on the right track for diamondiferous kimberlites.  And that is precisely what Fipke kept finding in his samples of glacial debris as he flew along with Blusson (who not only has a PhD but is a pilot), periodically sampling them.  The long-gone glaciers were pointing the way.
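
To illustrate how indicator-mineral chemistry gets used as a screen in practice, here is a toy garnet classifier in Python.  The slope and intercept of the G10/G9 boundary line are placeholders chosen for illustration, not Gurney's published discriminant, and the two sample garnets are hypothetical.

```python
# Toy screen for diamond-indicator garnets on a Cr2O3-versus-CaO (wt%) plot.
# The boundary coefficients below are PLACEHOLDERS for illustration only;
# they are NOT Gurney's published G10/G9 discriminant.
G10_SLOPE = 1.0       # assumed slope of the G10/G9 boundary line
G10_INTERCEPT = 2.0   # assumed intercept (wt% Cr2O3)

def is_g10(cr2o3_wt: float, cao_wt: float) -> bool:
    """True if a garnet plots on the high-Cr, low-Ca (G10) side of the line."""
    return cr2o3_wt > G10_SLOPE * cao_wt + G10_INTERCEPT

samples = [
    ("garnet A", 9.0, 4.5),   # hypothetical subcalcic, chromium-rich garnet
    ("garnet B", 4.0, 5.5),   # hypothetical calcic garnet
]
for name, cr, ca in samples:
    field = "G10 (diamond-favorable)" if is_g10(cr, ca) else "G9"
    print(f"{name}: {field}")
```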

Chromium versus calcium in garnets, showing the G10 and G9 fields (after Nowicki et al., 2007)

At the time, in the mid 1980s, geologists understood the relationships between these indicator minerals and diamonds, but how could the information be used to find the kimberlites in the Canadian craton?  What was unique about Fipke and his partner Blusson was the way they approached the problem.  They knew that the glaciers were powerful enough to gouge out the relatively soft kimberlite and carry the indicator minerals long distances, destroying any signs of the kimberlites at the surface and subsequently burying them under debris dropped by the glaciers when they melted.  They reasoned that they might be able to sample glacial deposits and "walk" the indicator minerals back to their source.  Superior Oil liked the idea and funded their exploration at first.  No one knew then that it would take eight years, millions of exploration dollars, and several companies before they hit pay dirt.  De Beers's geologists also knew the answer was in the glacial remains, but to them it was a nine-to-five job and the season ended after 8 weeks of summer collecting.  For Fipke, it was a life's dream, and nothing dampened his resolve; he collected well into the cold months of the far north.

Fipke and Blusson focused on eskers (see the esker shown below), which are sinuous ridges of stratified sand and gravel deposited by water flowing in tunnels of ice within or under the glaciers.  As the glaciers receded, the ridges remained like compasses indicating the direction the water and ice once flowed.  If the glaciers rumbled over kimberlites, the proof would be in the streams that carried the glacial till away.  They kept going even after Superior Oil called it quits.  The G10 garnets kept telling them they were on the right road, and the mining giant BHP believed them when they began running out of money.  Dia Met, the company Fipke and Blusson formed, signed a sweet deal with BHP: BHP agreed to fund the exploration for a 51% stake.  Within six months of teaming with BHP, Fipke had come to a point near Lac de Gras where the G10 garnets disappeared.  Fipke knew he was close to the source.  As the story goes, he noticed from the air a lake that looked like it sat in a bowl-shaped depression near where the G10 garnets disappeared.  He had to have a sample of the rock in that depression.  They landed the plane on the lake, rowed to shore, and started to dig, but after many hours they were still in glacial till.  They decided to walk the shoreline for a better place to dig.  That is when Fipke's son, Mark, found a piece of kimberlite.  They were all ecstatic — the lake must sit on the pipe.  Gurney eventually analyzed the mineral assemblage and verified that it was highly likely to be a diamond-bearing kimberlite.  BHP quickly flew a geophysical survey, which showed a distinct structure below the lake.

Esker in Sweden (Hanna Lokrantz, Wikipedia)

BHP and Dia Met started quietly staking as much land around the lake as they could.  Kimberlite pipes frequently occur in clusters, so it was imperative that they obtain rights to as large a region as possible before word of the find got out.  While they were staking, BHP flew a drill rig in by helicopter and cored 455 feet under the lake, pulling out beautiful samples of kimberlite beginning 33 feet below the glacial debris and containing 80-plus small diamonds. Canadian law requires that companies announce to their shareholders when a potentially profitable body is found.  On November 12, 1991, they announced the results from the core, including the fact that a few gem-quality diamonds had been recovered.  All hell broke loose, and the rush was on by large and small companies alike to stake as close to BHP's claims as possible in the hope that other pipes might be buried nearby.  BHP would go on to discover more than 150 kimberlite pipes, helping to make Canada the third largest producer of diamonds in the world.  De Beers even found a few mines.  Fipke and Blusson became billionaires overnight (if you don't count the 8 years of exploration).

The image below shows the Ekati mine – one of the producing mines staked within Fipke's original claims.  The large circular depressions in kimberlite are part of the open-pit mining operations BHP is running.

 

Ekati mines from the air (Google Maps)

  1. Shirey, S. B. and Richardson, S. H. (2011) Start of the Wilson Cycle at 3 Ga shown by diamonds from subcontinental mantle: Science 333, 434-436
  2. Pearson, D. G., Davies, G. R., Nixon, P. H., and Milledge, H. (1989) Graphitized diamonds from a peridotite massif in Morocco and implications for anomalous diamond occurrences: Nature, 338, 60-62
  3. Russell, J. K., Porritt, L. A., and Hilchie, L. (2013) Kimberlite: rapid ascent of lithospherically modified carbonatitic melts: In Pearson, D. G. et al, Proceedings of 10th International Kimberlite Conference Vol. 1 p. 195-210
  4. Nowicki, T. E., et al. (2007) Diamonds and associated heavy minerals in kimberlite: A review of key concepts and applications: Developments in Sedimentology, 58, 1235-1267
  5. Cross, L. D. (2011) Treasure Under the Tundra: Canada’s Arctic Diamonds: Heritage House Publishing Co

Why the hysteria over genetically engineered crops?

Last summer I attended the annual Fourth of July parade in our local town with my family.  We enjoy watching the floats, the pageantry (I am embellishing a bit here), and the copious quantities of candy thrown at us.  Nearly every local business has a float — well, in many instances a truck with the company name on it serves as the float.  The local politicians, constabulary, high-school marching bands, queens of various vegetable festivals, local junior baseball teams, etc. join the queue.  The obligatory paper advertisements are handed out by the business participants lauding their merchandise.

During the parade, I had a paper shoved in my face about the problems with genetically modified organisms (GMOs).  I had just heard a wonderful TED talk by the plant geneticist Pamela Ronald about how safe GMOs are, so the proclamation caught my attention.  I realized that the polemic was being passed out by a local health-food store.  There was an obvious conflict of interest – by creating suspicions that GMOs were unhealthy or even harmful, the store benefited by encouraging people to buy the non-GMO products it sold.  Disinformation to make a buck?  The World Health Organization, the United Nations Development Programme, the National Academy of Sciences (US), the American Medical Association, the American Association for the Advancement of Science, the Food and Drug Administration, the American Cancer Society, and more than 270 other prestigious groups, including many national academies of science in other countries, have gone on record through numerous reports that GMOs are safe.

I spent a bit of time in my last essay on global warming bemoaning how the subject has become a political hot potato because of disinformation by Exxon (and I mentioned other examples such as Big Tobacco, the National Football League with concussions, and creationists).  Was the radical left on a disinformation campaign also?  It certainly appears so.  As a scientist I know how difficult it is to achieve a consensus on a hypothesis.  Scientists have no time for unsupported opinions – they demand empirically supported results.  I don’t deny that politics plays a role, but I like to think, at the end of the day, that the accepted theories that make it through the labyrinth of scientific scrutiny are extremely sound.  Let’s not forget that scientists have egos and you get intellectual brownie points for debunking someone’s work.  It’s a jungle out there as I have discovered first hand as a research professor.  When I see the community of scientists fundamentally agreeing on a topic, I find it fairly convincing (scientists agreeing is an amazing thing in itself).  I don’t mean to imply that science cannot make mistakes – there are some notorious examples.  But I cannot think of a better way to make educated decisions – based on the research from the experts in the scientific community.  Unsupported opinions just don’t cut it even if the people are well meaning.

The case of Golden Rice demonstrates the horrendous impact anti-GMO groups can have in their rush to prevent GMOs from reaching the marketplace1.  According to Scientific American, Golden Rice had passed the health and safety reviews for commercial use by 2002.  Syngenta had genetically engineered rice to produce beta-carotene (a precursor of vitamin A) using a gene from corn.  Syngenta altruistically turned over all the monetary interests in the use of the rice to a non-profit organization to avoid any interference from anti-GMO groups that fight biotech companies for profiting on GMOs.  The only hurdle left was regulatory approval.  In 2015, Golden Rice was among seven products that won the Patents for Humanity award, but the rice is still not in use anywhere (the Golden Rice Now advocacy group tells me that the Philippines and Bangladesh are expected to have Golden Rice available in 12 months – some time in the middle of 2017).  Amazingly, the life-saving rice is strenuously opposed by environmental and anti-globalization activists who object to GMOs.

Golden Rice (International Rice Research Institute, IRRI)

In 2014, Justus Wesseler of the Technische Universität München and David Zilberman of the University of California quantified the economic impact caused by the resistance2.  They estimate that at least $199 million was lost per year over the previous decade just in India.  They also expressed the loss in a metric called life years, which they calculated at 1.4 million for India alone, reflecting the deaths, blindness, and related health disabilities caused by lack of access to vitamin A.  Unfortunately, children are the hardest hit.

I want to emphasize that the Golden Rice case is more than a battle over perceived danger by the anti-GMO movement in the face of contrary scientific evidence.  There are people dying while Greenpeace, the Sierra Club, and other misguided organizations wage war over unclear principles and leftist ideals.  And of course there is always the Non-GMO Project, which was created by health-food retailers "who oppose a technology that just happens to threaten their profits," according to Scientific American, to sow seeds of doubt (I don't know if the pun was intended or not).  I should make it clear that my criticism of Greenpeace and the Sierra Club is not made lightly.  They serve a real purpose in helping to preserve our environment.  But when the science argues against them and lives are at stake, we need to take them to task.  Let me dive into the science that argues against radical and mindless battles over GMOs.

The National Academy of Sciences has just released a 407-page consensus report entitled Genetically Engineered Crops: Experiences and Prospects, which reviewed decades of research on genetically engineered (GE) crops.  The report concludes that GE crops are economically beneficial, safe for humans and livestock, and adequately regulated.  The data are overwhelmingly impressive, and I will take the time to summarize some of the major points.

Humans have been modifying crops for 10,000 years.  A good example is the domestication of maize in Mesoamerica.  Teosinte, shown at the left of the diagram below, is a grass that went through a series of human selections of rare mutations to become the modern maize grown throughout the world (shown at the right of the diagram).  The point is that humans have been modifying crops by selecting for beneficial traits for millennia.

Teosinte (left) and modern maize (right) (image credit: NAS)

In 1985, the United States became the first country to approve a GE crop, and by 1994 a GE tomato with delayed ripening was on sale.  As of 2015, about 12 percent of the world’s land available for crop production was planted with GE crops (the figure rises to about 50 percent in the US).  The figure below shows which GE crops are currently being produced and where.  As you can see from the map, Europe, Russia, and most of Africa have been particularly resistant to GE crops.

GE crops in commercial production worldwide (image credit: NAS)

There are three major types of GE traits: 1) herbicide-resistance traits, which allow the crop to survive herbicides applied to kill weeds; 2) insect-resistance traits, which typically incorporate a gene from Bacillus thuringiensis (Bt) into the crop so that insects feeding on the plant are killed; and 3) virus-resistance traits, which keep plants from being susceptible to specific plant viruses.  It is important to note that most crops are modified to resist a single insect, virus, or herbicide.  Drought tolerance, nonbrowning (e.g., in potatoes and apples), varied flower colors, oil stability to suppress trans fats, and enhancement of omega-3 fatty acids are other examples of GE traits in commercial production.

The NAS report reviews studies comparing the production of GE and non-GE crops in mind-numbing detail.  The most important conclusions are summarized below (I quote to avoid any misrepresentation of the information).  Please note that I have not included all of the findings, because many are quite esoteric; I refer the reader to the NAS report for more details.

  1. “Although results are variable, Bt traits available in commercial crops from introduction in 1996 to 2015 have in many locations contributed to a statistically significant reduction in the gap between actual yield and potential yield when targeted insect pests caused substantial damage to non-GE varieties and synthetic chemicals did not provide practical control.”  Potential yield is the theoretical yield a crop could achieve if water and other nutrients are in adequate supply and there are no losses to pests and disease.
  2. “In areas of the United States where adoption of Bt maize or Bt cotton is high, there is statistical evidence that insect-pest populations are reduced regionally, and the reductions benefit both adopters and nonadopters of Bt crops.”
  3. “In all cases examined, use of Bt crop varieties reduced application of synthetic insecticides in those fields. In some cases, the use of Bt crop varieties has also been associated with reduced use of insecticides in fields with non-Bt varieties of the crop and other crops.”
  4. “The widespread deployment of crops with Bt toxins has decreased some insect-pest populations to the point where it is economically realistic to increase plantings of crop varieties without a Bt toxin that targets these pests. Planting varieties without Bt under those circumstances would delay evolution of resistance further.”
  5. “Planting of Bt varieties of crops tends to result in higher insect biodiversity than planting of similar varieties without the Bt trait that are treated with synthetic insecticides.”
  6. “Although gene flow has occurred, no examples have demonstrated an adverse environmental effect of gene flow from a GE crop to a wild, related plant species.”
  7. “Crop plants naturally produce an array of chemicals that protect against herbivores and pathogens. Some of these chemicals can be toxic to humans when consumed in large amounts.” I emphasized naturally here because the statement pertains to the production of chemicals by non-GE crops.
  8. “Conventional breeding and genetic engineering can cause unintended changes in the presence and concentrations of secondary metabolites.”  This is not only important but emphasizes the need for oversight in the approval of GE crops.  However, NAS also concluded: “U.S. regulatory assessment of GE herbicide-resistant crops is conducted by USDA, and by FDA when the crop can be consumed, while the herbicides are assessed by EPA when there are new potential exposures.”
  9. Regarding safety, NAS concluded: “In addition to experimental data, long-term data on the health and feed-conversion efficiency of livestock that span a period before and after introduction of GE crops show no adverse effects on these measures associated with introduction of GE feed. Such data test for correlations that are relevant to assessment of human health effects, but they do not examine cause and effect.”  In other words, GE crops appear to be safe for the animals that consume them and for humans that consume either these animals or the GE crops directly.
  10. “The incidence of a variety of cancer types in the United States has changed over time, but the changes do not appear to be associated with the switch to consumption of GE foods. Furthermore, patterns of change in cancer incidence in the United States are generally similar to those in the United Kingdom and Europe, where diets contain much lower amounts of food derived from GE crops. The data do not support the assertion that cancer rates have increased because of consumption of products of GE crops.”
  11. “The committee found no published evidence to support the hypothesis that the consumption of GE foods has caused higher U.S. rates of obesity or type II diabetes.”
  12. “The committee could find no published evidence supporting the hypothesis that GE foods generate unique gene or protein fragments that would affect the body.”
  13. “The committee did not find a relationship between consumption of GE foods and the increase in prevalence of food allergies.”
  14. “The similarity in patterns of increase in autism spectrum disorder in children in the United States, where GE foods are commonly eaten, and the United Kingdom, where GE foods are rarely eaten, does not support the hypothesis of a link between eating GE foods and prevalence of autism spectrum disorder.”
  15. “On the basis of its understanding of the process required for horizontal gene transfer from plants to animals and data on GE organisms, the committee concludes that horizontal gene transfer from GE crops or conventional crops to humans does not pose a substantial health risk.”
  16. “The available evidence indicates that GE soybean, cotton, and maize have generally had favorable outcomes in economic returns to producers who have adopted these crops, but there is high heterogeneity in outcomes.”
  17. “Exploitation of inherent biological processes—DNA binding-zinc finger proteins (ZFNs), pathogen-directed transcription of host genes (TALEs), and targeted degradation of DNA sequences (CRISPR/Cas)—now permit precise and versatile manipulation of DNA in plants.”
  18. “New molecular tools are further blurring the distinction between genetic modifications made with conventional breeding and those made with genetic engineering.”
  19. “Treating genetic engineering and conventional breeding as competing approaches is a false dichotomy; more progress in crop improvement could be brought about by using both conventional breeding and genetic engineering than by using either alone.”
  20. “In some cases, genetic engineering is the only avenue for creating a particular trait. That should not undervalue the importance of conventional breeding in cases in which sufficient genetic variation is present in existing germplasm collections, especially when a trait is controlled by many genes.”
  21. “Although genome editing is a new technique and its regulatory status was unclear at the time the committee was writing this report, the committee expects that its potential use in crop improvement in the coming decades will be substantial.”  I think this is an extremely important conclusion.  If we want to continue to feed the world we are probably going to become more dependent on GE crops particularly if population continues to increase at present rates.
  22. “Genetic engineering can be used to develop crop resistance to plant pathogens with potential to reduce losses for farmers in both developed and developing countries.”
  23. “Genetic engineering can enhance the ability to increase the nutritional quality and decrease antinutrients of crop plants.”
  1. There are similar accounts of environmental groups shutting down a genetically modified eggplant in India, Bangladesh, and the Philippines.  Another case involved a genetically modified potato that was resistant to specific herbicides.  A large food chain, under pressure from environmental groups, refused to purchase the genetically modified potatoes, and the project was shut down.  Farmers then introduced a new herbicide for the non-genetically modified potatoes grown instead.
  2. Wesseler, J. and Zilberman, D. (2014) The economic power of the Golden Rice opposition: Environment and Development Economics, 19, 724-742

Plowing through the political morass to understand global warming

Disinformation has become a hallmark of companies and religious sects interested in undermining science.  Please note that I have used disinformation instead of misinformation because these organizations purposefully spread wrong or misleading information to confuse the public and cloud issues uncovered by science.  The most famous example comes from the tobacco industry.  Richard Kluger’s book Ashes to Ashes1 won the Pulitzer Prize and helped expose the industry’s ruse that nicotine is neither harmful nor addictive.  The tobacco industry funded hundreds of scientific research projects in an attempt to muddy the waters concerning the health risks of smoking.  Who will ever forget the seven CEOs of America’s largest tobacco companies swearing in front of Congress in 1994 that nicotine is not addictive (still on YouTube if you care to reminisce)?  One of the leading examples of the travesty was the funding of the Harvard Center for Tobacco Research: Big Tobacco, through its Council for Tobacco Research, gave millions of dollars to the Center and to Dr. Gary Huber2.

In a more recent exposé, Mark Fainaru-Wada and Steve Fainaru showed in their book League of Denial how the National Football League actively downplayed the seriousness of trauma to the head3.  For example, to blunt concern over head injuries, the NFL and commissioner Paul Tagliabue set up the Mild Traumatic Brain Injury Committee (MTBI) in 1994.  The committee became notorious for denying that concussions lead to serious effects.  In 2003, the committee began publishing what would amount to 16 research papers in the co-opted science journal Neurosurgery, supporting its contention that the concussion problem was minor.  It certainly helped that the Neurosurgery editor, Michael L. J. Apuzzo, was a major NFL fan4.  The MTBI fuzzy logic included statements such as: “A total of 92% of concussed players returned to practice in less than seven days … More than one-half of the players returned to play within one day, and symptoms resolved in a short time in the vast majority of cases.”  During the same period, independent research was discovering chronic traumatic encephalopathy (CTE) in the brains of deceased football players who had suffered numerous concussions throughout their careers.

Then there is the whole creationist movement, which has attempted to undermine science through such organizations as the Institute for Creation Research.  There is neither space nor time to delve into the movement’s attempts to promote creationism and, more recently, intelligent design.  If you want to read a riveting account of one of the recent battle fronts, pick up a copy of Monkey Girl by Edward Humes5.  It documents the attempt by the Dover School Board to mandate the teaching of intelligent design in science classes, which resulted in the Kitzmiller v. Dover Area School District legal brouhaha.

Does this sound familiar?  Exxon (now ExxonMobil) helped form the Global Climate Coalition in the 1980s to lobby Congress and actively dispute the claim that global warming is caused by anthropogenic greenhouse gases6.  The organization shut down in 2001 under pressure from numerous groups, but by then the term global warming had morphed into a political issue entrenched in right-wing politics.  But the time may be coming for ExxonMobil to pay for its disinformation campaign.  More than a dozen state attorneys general are investigating ExxonMobil for attempting to obfuscate the facts about global warming.  The New York Times reported in March that new documents published by the activist group Center for International Environmental Law show that Exxon knew about the dangers of global warming from carbon dioxide through its own research as far back as 1957 and mounted a campaign that doggedly fought air pollution controls.

Admittedly, politics always baffles me.  The science of global warming via anthropogenic greenhouse gases seems so obvious that I have trouble seeing how the issue could become such a heated, cantankerous argument.  Let me briefly outline the science and you decide.  I have no ax to grind in the debate; I tend to favor middle-of-the-road solutions.  What I am more interested in is how the issue could ever become hijacked by politics.  If you are the kind of person whose eyes glaze over when you see graphs, hang in there, because it is all pretty straightforward.  Before we dig in, let me remind you of a phrase from Thomas Henry Huxley, known as Darwin’s bulldog: “My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations… Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing.”

In the figure below, wavelength is plotted on the x-axis (broken down into UV, visible, and infrared radiation) and spectral intensity on the y-axis.  The red curve (top panel) represents the span of wavelengths emitted by the sun and received at the top of the earth’s atmosphere.  Scientists refer to the electromagnetic radiation that comes from the sun as Planck black body radiation because it can be reproduced by heating an opaque, non-reflective body (a black body) to a temperature of 5,525 kelvin.  The red shaded region represents the radiation that actually reaches the earth’s surface (i.e., is not absorbed).  Note that most of the UV radiation is absorbed by the atmosphere (fortunately, mostly by the ozone layer) and most of the visible spectrum gets through.  Animals have evolved to “see” in the visible spectrum almost certainly because most of the sunlight reaching the surface falls in this range.

Atmospheric transmission spectrum (image credit: Global Warming Art, Wikipedia)
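If you want to check the black-body physics yourself, Wien’s displacement law gives the wavelength at which a body at a given temperature emits most strongly.  A minimal Python sketch (using the 5,525 K solar temperature from the figure and the earth’s roughly 255 K effective temperature discussed just below) shows why the sun peaks in the visible and the earth peaks deep in the infrared:

```python
# A minimal check using Wien's displacement law, which gives the wavelength at
# which a black body at temperature T (in kelvin) emits most strongly.
WIEN_B = 2.898e-3  # Wien displacement constant, meter-kelvin

def peak_wavelength_m(temperature_k: float) -> float:
    """Peak emission wavelength (meters) of a black body at temperature_k."""
    return WIEN_B / temperature_k

sun_peak = peak_wavelength_m(5525.0)   # solar temperature used in the figure above
earth_peak = peak_wavelength_m(255.0)  # earth's effective temperature, discussed below

print(f"Sun   (~5,525 K): peak ~ {sun_peak * 1e9:.0f} nm (visible light)")
print(f"Earth (~255 K):   peak ~ {earth_peak * 1e6:.1f} micrometers (infrared)")
```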

The average Planck black body radiation of the earth is represented by the blue curve (255 kelvin)7.  The blue area under the curve is the radiation from the earth that escapes into space (emitted from the top of the atmosphere), equal to about 15 to 30% transmission.  Virtually all of the near-to-far infrared is absorbed by the greenhouse gases shown in the lower panels of the diagram above.  Water vapor plays the biggest role, but carbon dioxide has an impact in the far infrared.  You can see this better in the diagram below.

Absorption bands of water vapor, carbon dioxide, and other atmospheric gases (image credit: NASA)
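Where does the 255 K figure come from?  It is the earth’s effective emission temperature from a simple energy balance that ignores the greenhouse effect entirely.  A back-of-the-envelope sketch, assuming a solar constant of about 1361 W/m² and a planetary albedo of about 0.3 (both standard round numbers, not values from the text):

```python
# Back-of-the-envelope energy balance: absorbed sunlight equals the earth's
# black-body emission, solved for the effective (no-greenhouse) temperature.
SOLAR_CONSTANT = 1361.0  # W/m^2 arriving at the top of the atmosphere (approximate)
ALBEDO = 0.3             # fraction of sunlight reflected straight back to space (approximate)
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4          # averaged over the whole sphere
effective_temperature_k = (absorbed / SIGMA) ** 0.25  # Stefan-Boltzmann law inverted

print(f"Effective temperature with no greenhouse effect: {effective_temperature_k:.0f} K")
# Prints about 255 K; the gap between this and the observed ~288 K surface average
# is the warming supplied by the greenhouse gases whose absorption bands are shown above.
```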

Relatively small increases in carbon dioxide have a profound effect on the absorption of radiation (energy).  In fact, the system is so sensitive that carbon dioxide closely tracks the temperature changes through past glacial cycles – the last 420,000 years are shown below.  As you can see, we are in an interglacial warm period and, under normal non-anthropogenic conditions, would expect to enter another ice age in about 10,000 years8.  I should mention that the carbon dioxide concentrations were measured from Vostok ice cores taken from Antarctica (the ice traps air bubbles, locking in atmospheric concentrations).  Temperatures in the same cores were determined from hydrogen and oxygen isotopes9 10.

Carbon dioxide and temperature over the past 420,000 years from the Vostok ice cores (image credit: NASA/NOAA)

Now here is one of the most astounding graphs in environmental science – carbon dioxide and temperature over roughly the last 20,000 years11.  It shows global temperature (blue curve) relative to the early Holocene mean of 6.5 to 11.5 thousand years ago (LGM is the Last Glacial Maximum).  The red curve is an Antarctic composite ice-core temperature profile.  Carbon dioxide concentrations (yellow dots) start to rise along with temperature as we come out of the last glacial cycle about 18,000 years ago and continue to rise until about 8,000 years ago, when they stabilize (note that global temperature – the blue curve – lags the carbon dioxide rise slightly, while the Antarctic temperature – the red curve – leads it).  Note that carbon dioxide concentrations stabilized at about 260 ppm (parts per million) over the last 8,000 or more years (the current carbon dioxide changes are too recent to be picked up in the cores).

Global temperature and carbon dioxide over the last 20,000 years (image credit: Shakun et al., 2012, Nature)

Compare the graphs above to the carbon dioxide data measured directly in Hawaii since 1957, shown below.  Concentrations went above 400 ppm in 2015 – 140 ppm over the average of the last 8,000 years shown in the diagram above (the annual wiggles reflect the seasonal uptake of carbon dioxide by vegetation during the Northern Hemisphere summer; there is more land, and therefore more vegetation, in the north).  It is important to point out that carbon dioxide concentrations have not been this high in more than 3 million years12!

Atmospheric carbon dioxide measured at Mauna Loa, Hawaii (image credit: NOAA)

And here is why scientists think humans are responsible for most of the carbon dioxide increase.  The graph below shows global temperature from 1880 (the annual mean compared with the 1951 to 1980 average).  The red line is a five-year running average.  Temperature begins to increase significantly when carbon dioxide concentrations reach about 340 ppm in the mid-1970s.  The Intergovernmental Panel on Climate Change, which won the Nobel Peace Prize in 2007, has thousands of the world’s finest scientific minds contributing to its reports.  The IPCC Fifth Assessment Report, completed in 2014, stated: “It is extremely likely [95 to 100% probability in their words] that human influence has been the dominant cause of observed warming since 1950, with the level of confidence having increased since the fourth report”.

Global temperature anomaly since 1880 (image credit: NASA)
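For readers who want to reproduce the red curve, a five-year running average is just a moving mean over the annual anomalies.  A minimal sketch with made-up numbers (the real series is the NASA GISS record behind the figure):

```python
import numpy as np

# Hypothetical annual temperature anomalies in degrees C; these values are
# invented for illustration, not taken from the NASA GISS record.
years = np.arange(2000, 2010)
anomalies = np.array([0.40, 0.52, 0.61, 0.60, 0.54, 0.66, 0.62, 0.64, 0.53, 0.64])

# The red curve is a five-year running average: each point is the mean of the
# surrounding five annual values.
running = np.convolve(anomalies, np.ones(5) / 5, mode="valid")
for year, value in zip(years[2:-2], running):
    print(f"{year}: {value:.2f} C")
```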

So why all the fuss?  Even if we admit that carbon dioxide and global temperatures are rising (they are also rising in the ocean, because both carbon dioxide and heat are absorbed from the atmosphere), is this devastating for the future of the world?  To be honest, no one knows for sure.  I could list a plethora of impacts from these changes – sea level rise, species habitats being pushed toward higher latitudes or higher elevations worldwide, ocean acidification, continued melting of the world’s glaciers, and so on.  But the data that keep me up at night (being overly dramatic here) concern tipping points.  Climate models show global temperatures rising between 1 and 2.5 degrees centigrade (1.8 to 4.5 degrees F) between 2050 and 2100.  We don’t know what might throw us past a tipping point – what Wagner and Weitzman call “tail effects” or “black swans” to designate statistically low-probability, extreme events13.  They are referred to as tail events because they occur at the ends of a bell distribution curve (roughly 2% in the figure below – more than 2 standard deviations from the mean).  These events could be so horrendous that we may not know how to combat them.

Wagner and Weitzman claim that tail events are “profound earth-as-we-know-it-altering changes”.  For example, they may lead to as much as a 30 percent decline in global economic output.  No one knows.  The only thing that is certain is that if we continue to ignore global warming and its effects, we are taking huge potential risks.

Normal distribution showing the percentage of outcomes within each standard-deviation band (image credit: Wikipedia)
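To put the tails of the figure above in numbers: for a normal distribution, the band between two and three standard deviations above the mean holds about 2.1% of outcomes, and everything beyond two standard deviations about 2.3%.  A quick check with scipy:

```python
from scipy.stats import norm

# Probability mass in the upper tail of a standard normal (bell) distribution.
beyond_two_sigma = 1 - norm.cdf(2)                  # everything above +2 standard deviations
between_two_and_three = norm.cdf(3) - norm.cdf(2)   # the 2-to-3 sigma band labeled 2.1% in the figure

print(f"Beyond +2 sigma:         {beyond_two_sigma:.1%}")       # about 2.3%
print(f"Between +2 and +3 sigma: {between_two_and_three:.1%}")  # about 2.1%
```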

  1. Kluger, R. (1996) Ashes to Ashes: America’s Hundred-Year Cigarette War, the Public Health, and the Unabashed Triumph of Philip Morris: Vintage
  2. McGarity, T. O. and Wagner, W. E. (2012) Bending Science: How Special Interests Corrupt Public Health Research: Harvard University Press
  3. Fainaru-Wada, M. and Fainaru, S. (2013) League of Denial: Three Rivers Press
  4. Fainaru-Wada and Fainaru suggest that Apuzzo was drawn to the limelight of the NFL as the major reason for allowing dubious papers to be published by the MTBI
  5. Humes, E. (2007) Monkey Girl: HarperCollins Publishers
  6. The GCC was a major opponent of the Kyoto Protocol and was instrumental in persuading the US not to ratify it
  7. The black curve is the upper atmosphere at 210 kelvin and the purple curve is the lower atmosphere near the earth’s surface at 310 kelvin.
  8. Novacek, M. (2007) Terra: Our 100-Million-Year-Old Ecosystem — and the Threats That Now Put It at Risk: Farrar, Straus and Giroux
  9. The ratio of hydrogen isotopes has a linear relationship with temperature.  Ordinary hydrogen (with no neutron in its nucleus) is lighter than deuterium (which has one neutron) and preferentially evaporates from seawater (it takes less energy to evaporate a water molecule containing ordinary hydrogen than one containing deuterium).  In periods of high temperature, more water evaporates, concentrating deuterium in seawater relative to the atmosphere.  The ratio of hydrogen to deuterium in the atmosphere is recorded in snow as it falls and becomes part of the glacier being sampled, and temperatures are extrapolated from that isotope ratio.  Oxygen isotopes are used in a similar way to augment the hydrogen isotopes.
  10. Petit, J. R., et al. (1999) Climate and atmospheric history of the past 420,000 years from the Vostok ice core Antarctica: Nature, 399, 429-436
  11. Shakun, J. D., et al. (2012) Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation: Nature, 484, 49-54
  12. Nordhaus, W. D. (2015) A new solution: the climate club: New York Review of Books, June 4
  13. Wagner, G. and Weitzman, M. L. (2015) Climate Shock: The Economic Consequences of a Hotter Planet: Princeton University Press

Exterminating invasive species with gene drives

I can’t rave enough about Jennifer Kahn’s recent TED talk entitled Gene Editing Can Now Change an Entire Species.  The talk summarizes the new power of gene drives to eradicate malaria, dengue, yellow fever, and other dangerous diseases spread by insects.  It is truly a brave new world.  CRISPR (clustered regularly interspaced short palindromic repeats) is a gene-editing tool that allows scientists to edit genes precisely – in Kahn’s phrasing, a “word processor for genes.”  Gene drives not only use CRISPR to edit a gene but also copy the CRISPR machinery itself into the organism’s DNA, so the edit is inherited by nearly all offspring rather than the usual half.

Among the intriguing uses of the new technology is potentially ridding the planet of invasive species that threaten indigenous ones.  A good example is the Asian carp that has invaded our Great Lakes.  Incorporate a gene drive into Asian carp that makes them produce only male offspring and, within a matter of generations, the Asian carp will be gone (a toy simulation below shows why).  Of course, Kahn gives the necessary genuflections toward the need to come to agreement on the ethical issues involved, but the future applications are breathtaking.
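Why does a male-only drive crash a population so quickly?  Because carrier fathers pass the drive to essentially all of their offspring, and every one of those offspring is male, the supply of females collapses within a handful of generations.  The toy simulation below is my own illustration, not a model of real carp biology or of any actual proposal; the starting numbers, brood size, and perfect drive transmission are all assumptions:

```python
import random

# Toy model of a male-only gene drive. Assumptions (mine, for illustration only):
# discrete generations, random mating, two offspring per female, and perfect drive
# transmission: every offspring of a drive-carrying father is a male carrier.
def simulate(generations=15, males=5000, females=5000, drive_males=100, offspring_per_female=2):
    for gen in range(1, generations + 1):
        total_males = males + drive_males
        if total_males == 0 or females == 0:
            print(f"Generation {gen}: population has collapsed")
            return
        p_drive_father = drive_males / total_males  # chance a random mate carries the drive
        next_males = next_females = next_drive = 0
        for _ in range(females):
            father_has_drive = random.random() < p_drive_father
            for _ in range(offspring_per_female):
                if father_has_drive:
                    next_drive += 1       # all offspring of a carrier are male drive carriers
                elif random.random() < 0.5:
                    next_males += 1       # ordinary male
                else:
                    next_females += 1     # ordinary female
        males, females, drive_males = next_males, next_females, next_drive
        print(f"Generation {gen}: males={males}, females={females}, drive carriers={drive_males}")

simulate()
```

Running it shows the drive fraction roughly doubling each generation while the number of females dwindles toward zero.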

I remember reading the superb book Rat Island by William Stolzenburg1 about the monumental attempts, primarily since the 1970s, to eliminate invasive species, particularly on islands.  Indigenous species are no match for invasive predators such as rats, feral cats, and a host of other animals that have hitched a ride with humans to these islands.  You can imagine why species like the flightless kakapo have been decimated or pushed to extinction at the hands of predators they are evolutionarily unprepared for.

Kakapo

Kahn makes a great point: using gene drives to eradicate species may backfire.  What if modified Asian carp return to Asia (let’s not forget, they are invasive only here)?  The gene drive could wipe out the entire Asian carp population worldwide.  The good news is that a reversal gene drive could change the modified carp back to normal should they reach their home territories.  I know the new technology will have to go through a myriad of ethical panels and boards, but ultimately it may have come just in time to save many species from the finality of extinction by invasive species.

The primary method of killing invasive rats has been bait laced with brodifacoum.  The poison is an anticoagulant that eventually causes the rat’s blood vessels to “leak”; after about a week, the animal bleeds out.  Studies suggest it is probably quite a painful way to die.  As one might imagine, this has caused angst among conservationists.  On the one hand, they want to return habitats to their pristine beauty; on the other hand, the death of creatures, even rats, presents something of a “Sophie’s choice.”  The majority of conservationists have come down on the side of eradication in the face of massive island species extinctions.  David Steadman of the University of Florida estimated2 that 8,000 populations of birds have disappeared from the islands of Oceania (about 800 islands).  It turns out there was an unintended outcome of using these poisons (isn’t there always?): some birds of prey, including bald eagles, were eating the poisoned rats and dying.

Gene drives may be the silver bullet for eradicating invasive species.  Conservationists using the new technology could avoid the need to poison targeted animals.  And no bykill!  I know, I know, we need to test the process to make sure we avoid all the potential unintended consequences, but even if it is only partially successful, it could be huge for the environment.

I vividly remember reading E. O. Wilson’s The Diversity of Life3, in which he details the enchanting story of the repopulation of what was left of Krakatoa after the volcano blew itself out of the water in 1883.  It is clear that there is a capriciousness to repopulation.  Many species find themselves carried or blown to remote islands by currents, storms, and even tsunamis.  And in time they may evolve into new species, adapting to their often hostile new environments.  But why are these early inhabitants given precedence over late arrivals?  The answer appears to rest on the assumption that humans are unnatural – not part of the natural environment.  And therefore, if other animals tag along with us as a food source for long voyages, or as our pets, or even surreptitiously, they are somehow unnatural in the grander scheme of things, demanding eradication.  You cannot read Rat Island and not be repulsed by the way rats prey on other creatures.  I am certainly not against the eradication of these invasive species, but deciding what lives and what dies is not an easy thing.

  1. Stolzenburg, W. (2011) Rat Island: Predators in Paradise and the World’s Greatest Wildlife Rescue: New York, Bloomsbury
  2. Steadman, D. W. (1995) Prehistoric extinctions of Pacific island birds: Biodiversity meets zooarchaeology: Science, 267, 1123-1131
  3. Wilson, E. O., (1999) The Diversity of Life: W. W. Norton & Company

Loss aversion

Daniel Kahneman

I was struck by the vagaries of the lottery while watching a report on ABC News.  A wily reporter asked several people whether they would sell their Powerball ticket for $4, doubling the $2 they paid for it.  Each person refused the offer and seemed quite incredulous that the reporter would even ask.  I could not help thinking that this looked like what Daniel Kahneman and Amos Tversky called loss aversion1.  More about that in a moment.

The behavior does seem capricious.  The odds were 1 in 292 million to win $415 million2, yet each person refused to sell, claiming they might lose out on the millions.  Apparently it never occurred to them that they could sell the ticket and buy two more, effectively doubling their chances of winning.  Nor did it make sense to them to sell the ticket and buy two additional tickets, one with the same number they had just sold.  That might mean having to split the winnings, but would that be onerous if they won, especially considering the almost unimaginable odds against winning?

The magnitude of the odds is difficult to put into perspective, but it helps me to think in terms of toothpicks.  There are 250 toothpicks in a typical box, so you would need about 1.17 million boxes to hold the 292 million toothpicks required to simulate the lottery odds.  Imagine that exactly one toothpick among them is colored red.  If each box is about an inch thick and you could stack them all on top of one another, the pile would reach roughly 97,000 feet – more than 18 miles – into the air, over three times the height of Mount Everest.  Now picture all those boxes being emptied, you being blindfolded and asked to scramble through the pile, and coming out with just one toothpick.  The odds of picking the red toothpick are equivalent to the odds of buying the winning lottery ticket.
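Here is the arithmetic behind that image, for anyone who wants to check it (the one-inch box thickness is my assumption):

```python
# Back-of-the-envelope arithmetic behind the toothpick analogy.
ODDS = 292_000_000          # roughly 1-in-292-million odds of winning the jackpot
TOOTHPICKS_PER_BOX = 250    # a typical box
BOX_THICKNESS_IN = 1.0      # assumed thickness of one box, in inches

boxes_needed = ODDS / TOOTHPICKS_PER_BOX
stack_height_ft = boxes_needed * BOX_THICKNESS_IN / 12
stack_height_miles = stack_height_ft / 5280

print(f"Boxes needed:  {boxes_needed:,.0f}")                                         # ~1,168,000
print(f"Stack height:  {stack_height_ft:,.0f} ft (~{stack_height_miles:.1f} miles)")  # ~97,333 ft
```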

But let’s get back to the original theme – loss aversion.  Daniel Kahneman is a psychologist, and psychologists are not normally in line to win Nobel Prizes in economics.  But he did, in 2002, for his development (with Amos Tversky) of prospect theory, which encompasses loss aversion and attempts to explain the psychological impact of gains and losses.  He summarized his work in his best-selling book Thinking, Fast and Slow3.  Prospect theory recognizes that humans are not completely rational: they are risk averse and weigh losses much more heavily than gains (that is, loss aversion).

Take, for example, a wager on the toss of a coin: tails you lose $100, heads you win $150.  Common sense tells us that a bet that wins more than it loses is a good bet – after all, Las Vegas casinos make their living on much slimmer edges.  But most people will not take it.  They have a loss aversion.  Most require a potential win of at least $200 before they will accept the gamble.
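A bare-bones way to see the asymmetry in numbers: if losses loom roughly twice as large as gains (a loss-aversion coefficient of about 2, in the spirit of Kahneman and Tversky’s estimates, and ignoring the probability weighting and diminishing sensitivity of full prospect theory), the $150-versus-$100 coin flip feels like a losing proposition, and the potential gain has to reach about $200 before the bet feels acceptable:

```python
# Simplified loss-aversion arithmetic: gains count at face value, losses are
# multiplied by a loss-aversion coefficient (about 2 here). This deliberately
# ignores the probability weighting and curvature of full prospect theory.
LOSS_AVERSION = 2.0

def felt_value(gain: float, loss: float, p_win: float = 0.5) -> float:
    """Subjective value of a gamble: win `gain` with probability p_win, else lose `loss`."""
    return p_win * gain - (1 - p_win) * LOSS_AVERSION * loss

print(f"Win $150 / lose $100: expected value = {0.5*150 - 0.5*100:+.0f}, "
      f"felt value = {felt_value(150, 100):+.0f}")   # positive expected value, but feels negative
print(f"Win $200 / lose $100: felt value = {felt_value(200, 100):+.0f}")  # the break-even point
```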

No need to get into much more detail on loss aversion, but I highly recommend Kahneman’s book if you crave more.  We can probably explain the unwillingness of some to sell their lottery ticket for double its worth as loss aversion.  There must be a heuristic that imparts a need to keep that potentially winning ticket even in the face of mind-boggling odds.  Perhaps loss aversion is hard-wired into our brains (that is, genetic).  Many of the genetic characteristics that make us human evolved within small hunter-gatherer tribes thousands of years ago4.

If our hunter ancestors were cutting up the carcass of an animal too large to carry and a lion saw them and charged, it does not take much imagination to believe they would take what they could and run rather than fight for the remainder.  They certainly would not brainstorm over the rational decision.  You don’t get your genes passed on by taking time to formulate a rational course of action under do-or-die circumstances.

  1. Kahneman, D. and Tversky, A. (1992) Advances in prospect theory: Cumulative representation of uncertainty: J. Risk and Uncertainty, 5, 297-323
  2. The lump sum payout on $415 million is about $202 million after taxes, which means paying $2 for a ticket is statistically a bad bet – not that any of us would turn the money down.  And this does not consider the possibility of sharing the winnings with other winners.
  3. Kahneman, D. (2011) Thinking, Fast and Slow: Farrar, Straus and Giroux.
  4. Pinker, S. (2002) The Blank Slate: New York: Penguin.