We need more ignorance

January 14, 2020

Books about ignorance and science

I recently read another book with Ignorance in the title: Ignorance: How it Drives Science by Stuart Firestein (Oxford University Press, 2012). It says things we all need to hear and think about more often. The book requires minimal effort — it was written to be read in one or two sittings. In the author’s own words: “a couple of hours spent profitably focusing your mind on perhaps a novel way of thinking about science, and by extension other kinds of knowledge as well.” It’s a hard one not to find time for — and perfect for a seminar or discussion-based course. I wish it had been on my radar before the last class I taught. Follow the above link to Firestein’s webpage to see his other writings on science — including Ignorance‘s sequel, Failure (on my to-read-very-soon list).

Some of you will already know of my appreciation for Herbert Weisberg‘s 2014 book Willful Ignorance: The Mismeasure of Uncertainty. The two books differ in their goals, with Weisberg’s specifically focused on probability and statistics, but they certainly hit a common theme of accepting ignorance as a necessary, and even positive, force in science.

Firestein says this about the purpose of his book: “…to describe how science progresses by the growth of ignorance, to disabuse you of the popular idea that science is entirely an accumulation of facts, to show how you can be part of the greatest adventure in the history of human civilization without slogging through dense texts and long lectures.” The ideas he is communicating are not complicated, and I suspect very few disagree with them. So, why do they feel so refreshing and unique? Firestein came to science through an unusual path (theater), and his writings are another reminder of the value added by unconventional paths and experiences.

Letting go of negative connotations

We tend to default to the negative connotations associated with the word ignorance. While there are certainly types of ignorance we hope to avoid, defaulting to the negative in science is part of the problem we face today. Acknowledging and recognizing ignorance is not a sign of weakness and does not imply lack of knowledge or lack of expertise. On the contrary — it takes knowledge to understand, or even just glimpse, what we don’t know. Awareness of ignorance is evidence of knowing enough to know what we don’t know — and admitting that there is a lot we don’t know.

Ignorance, all of what we don’t know, and even what we don’t know we don’t know, is the driving force of science. 

Stuart Firestein

Tension between facts and ignorance

As a statistician, I feel the raw tension between the desire for facts (and rules) and honest acknowledgement of ignorance. Looking at the ways statistical methods are often used in practice provides examples of current scientific culture valuing fact finding over understanding sources of ignorance. Enormous value has been placed on using statistical methods as if they are fact detectors — with little acknowledgement of what we willfully and un-willfully ignore to use them like instruments.

Firestein’s take is inspirational for science in general. Because ignorance can take many forms, it’s easy to wander a bit when thinking about the topic. My mind automatically wanders to ways ignorance (or a lack of awareness of ignorance) plays into the current use of Statistics in science and views of Statistics from those outside the discipline. The following thoughts are less inspirational than Firestein’s, but hopefully constructive to put out there.

Ignorance and the field of Statistics

Statistics is viewed (by those who don’t have the knowledge to know better) as a subject that can be taught in one semester (or, in some cases, like some medical school programs, just a two-week module). When exposure to the field of Statistics is extremely limited, it is very easy to leave the experience with little awareness of how small the nibble was relative to what’s out there and still being created. And, if most (or all) of a person’s education in Statistics comes from within-discipline teachers who never had enough training to see beyond that nibble themselves, then a vicious cycle is fed with a focus on the nibble. Many teachers of Statistics have not themselves been exposed enough to recognize how much they don’t know. Learning “on the job” within another discipline often suffers from this same problem.

I get it. We were all there once, for varying amounts of time, and I still vividly remember the naive feeling of “how could someone get a graduate degree in Statistics — it just doesn’t seem like there’s much to know.” After all, I had spent one semester as a graduate student (not in Statistics) using a calculator to calculate averages, standard deviations, standard errors, t-statistics, and p-values. I could reject and fail to reject with no problem, spout off the stated assumptions, and even distinguish a paired situation from an unpaired one. I could randomize individuals to groups — even counterbalancing to try to protect against order effects in a within-subject design. It seemed like the right amount of info and I felt quite clear about how it should be used — it was presented as the correct, and only, way to do data analysis. Almost 25 years later, things haven’t changed that much in a lot of disciplines, though calculators have been replaced by statistical software on computers.
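As an aside for readers who haven’t seen these technician-style tasks in code, here is a minimal sketch (my own illustration, with made-up numbers, not from the post or the book) of the same calculations in Python: summary statistics, an unpaired two-sample t-test, and a paired t-test.

```python
# A minimal sketch (illustrative, made-up data) of the "technician" tasks
# described above, using Python in place of a calculator.
import numpy as np
from scipy import stats

# Hypothetical measurements for two independent groups
group_a = np.array([4.1, 3.8, 5.0, 4.6, 4.3])
group_b = np.array([3.5, 3.9, 3.2, 4.0, 3.6])

# Averages, sample standard deviations, and standard errors
for name, x in [("A", group_a), ("B", group_b)]:
    print(name, round(x.mean(), 3), round(x.std(ddof=1), 3), round(stats.sem(x), 3))

# Unpaired (two-sample) t-test: the groups contain different individuals
t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)
print("unpaired t, p:", round(t_unpaired, 3), round(p_unpaired, 3))

# Paired t-test: the same individuals measured twice (e.g., before and after)
before = np.array([4.1, 3.8, 5.0, 4.6, 4.3])
after = np.array([4.4, 4.0, 5.1, 4.9, 4.2])
t_paired, p_paired = stats.ttest_rel(before, after)
print("paired t, p:", round(t_paired, 3), round(p_paired, 3))
```

Producing numbers like these is the easy, technician-like part; the harder questions about what the analysis assumes and what the p-values can and cannot tell us are exactly where the constructive ignorance lives.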

Questions and gradual realizations

Luckily, my extreme lack of awareness of my own ignorance was accompanied by a love of thinking about research methods and philosophy of science. As I started on non-Statistics PhD work, discussions about “statistical significance” and the use of statistics in general started to nag at me. I didn’t have the knowledge to put satisfying words to it — beyond feeling like how we were doing science was too methods-and-stats-results-focused, at the expense of more question-focused investigation. The nagging got strong enough that I left the PhD program and applied to master’s programs in Statistics. At that point, I was aware there was a lot I didn’t know, but I didn’t have enough knowledge to even know how to describe it.

Even when I started the master’s program in Statistics, I was far from constructively ignorant. But, even by the end of the first week, I started to feel the weight of newfound ignorance and to be able to put words to it. I would like to say I was enlightened enough to recognize it as a positive sign, but that is not true. I was still fact-focused, and every new concept introduced just let in more discomfort about all I did not know. Eventually, I was forced to accept the feeling of ignorance and its positive side, as well as its connection to knowledge. I am now thankful that the memories of that transition are still accessible to me.

Enough to be dangerous?

The phrase “knowing just enough to be dangerous” is often used in the context of Statistics and with good reason. The phrase describes a person thinking they know enough to know what they don’t know, when in fact they do not. They are not aware of their ignorances, and therefore can be dangerous in their over-confident use of Statistics. This isn’t to say it doesn’t happen in other disciplines, but Statistics is in a unique situation because it is relied on across so many scientific disciplines.

It is not productive to place fault on the individuals themselves — as they are just growing up in a system that nurtures it. I experienced it myself. Traditions and seemingly working systems are hard to break. And, ignorance is bliss. Maybe placing more value on recognizing the importance of constructive ignorance in science will help. Maybe it will seep into the view of Statistics held by scientists in other disciplines — or at least get them to question how much they really know and what exists beyond that.

Judging knowledge based on admissions of ignorance

To be honest, I make quick assessments (yes, judgements) of a scientist’s general knowledge of statistical inference based largely on the level of confidence they convey — with extent of knowledge being inversely related to level of confidence conveyed. High levels of confidence are often accompanied by lack of awareness of ignorance. Stated more simply, when meeting with researchers for the first time, I gauge their level of general Statistics knowledge by their attitude around what they state they do and don’t know. Often, those who come in touting their ability to “run their own stats” and who say they don’t really need a statistician end up having the least depth to their knowledge. It is not uncommon for researchers to say or imply “I’m a statistician too,” as if my years focused on studying Statistics didn’t really add anything beyond what I would have gained getting a PhD in another discipline. On my more gracious days I don’t take it as a sign of disrespect to the discipline, but instead as a sign they haven’t had the opportunity to gain enough knowledge to have awareness of what I could have possibly studied in those years. I understand by appealing to the memories of when I was there myself. But, by focusing on how much they know, they are communicating loud and clear about how much they don’t.

Technical skills don’t imply deep knowledge

Being adept at “running” a particular analysis or fitting a particular type of model using a computer is the technician part of being a statistician, not the scientist part. I often say I wish I were labeled as an -ologist instead of an -ician. It’s impossible to say how much our “statistician” label affects opinions of what the discipline is all about, but the -ician certainly doesn’t help. Even the statistic- part of the name is an issue. But, that’s another post sitting in the list of drafts.

The point I think deserves attention is this: Carrying out the technician-like tasks does not imply you have yet come up against the hard questions and challenges of inference or that you have thought deeply about the underlying theory and foundations of the tasks you are carrying out.

It can be helpful to make an analogy with other disciplines. One can be a field technician for ecological research, or a lab technician for psychological research, without a deep understanding of the history, theory, and sources of ignorance plaguing, and driving, the field. I may be naive about how those in other disciplines feel, but I don’t think people with PhDs in other disciplines come face-to-face with this issue in the way that statisticians do (or at least not as often). I have done a lot of work with ecologists and psychologists, but I would never call myself an ecologist or a psychologist — and particularly not to a colleague with a PhD in the discipline!

It’s a curious phenomenon that I think stems mainly from a view that statisticians are just technicians with skills in calculation and computation — rather than scientists within a discipline of their own. At times I have taken this as simply an annoyance or frustration, but given the huge reliance on statistical inference in and for science, I am convinced this is a huge part of the problems we’re facing in science. And, I don’t think the “data science” craze is helping the situation.

Productive crisis in graduate school

If we talked more about positive, constructive ignorance, it might help ease the pain of a fundamental crisis graduate students often go through when they really bump up against it for the first time. This is the “Oh no! I’m going to have my master’s or PhD soon and now I feel like I know nothing compared to all there is to know. The more I learn, the less comfortable I am with my degree of knowledge!” moment. Understandably, this leads to feelings of serious frustration and even failure. If we carry around the vision of collecting facts from a bucket with a bottom, it can be rough to realize the bucket is actually bottomless — and we will never arrive at the calm place of “knowing enough” we had hoped to achieve in graduate school.

To me, this crisis (in its many forms) is a sign students are ready to move on and a source of pride. One of my favorite parts of teaching and advising was trying to help students turn the fear and frustration into understanding — which means accepting the associated discomfort. It can be quite discombobulating and downright depressing to realize how little all the people we have trusted actually know. We’re all in this together. I wish I had done more. A seminar like the one Firestein developed that motivated Ignorance could go a long way — especially if it included students from different disciplines.

Social dilemmas of ignorance

The social dilemmas created around ignorance are far from simple — and I want to touch on that just a little more here. As described above, conveying a lack of ignorance to a person who has more knowledge than you on a topic can instead be taken as a sign of lack of knowledge. If you really want to demonstrate to someone that you know a lot about a subject, it may be best to start by acknowledging that you recognize there is so much you don’t know. The problem is … this will only gain you respect from those who have enough knowledge themselves to recognize and appreciate it. Otherwise, your admission of ignorance may backfire, particularly if you are supposed to be the one with more expertise about a topic and you are being trusted for the knowledge you are supposed to have. Imagine your doctor walking in and professing ignorance about your condition before giving you a diagnosis and recommendations. The social dance between knowledge and ignorance is complex. And this has made it hard to embrace and acknowledge the importance of ignorance.

Discomfort and curiosity

The realization that with knowledge comes uncomfortable (though exciting!) feelings of ignorance should be a prerequisite for obtaining a graduate degree. That should be the point. Instead, at least in Statistics, we continue to try to cram more and more “facts” and skills into each student in a very short amount of time — as if we have achieved some final state of knowledge already (a bottom on the bucket). Associating the discomfort of ignorance with knowledge and encouraging curiosity about how ignorance and knowledge are related could go a long way toward improving science, and the use of Statistics in science.

Appendix

Here are eight more quotes from Ignorance that I wanted to type up for future reference — and figured I might as well share to hopefully further pique your interest.

We may look at these quaint ideas smugly now, but is there any reason, really, to think that our modern science may not suffer from similar blunders? In fact, the more successful the fact, the more worrisome it may be. Really successful facts have a tendency to become impregnable to revision.

Pages 23-24, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

So it’s not so much that there are limits to our knowledge, more critically there may be limits to our ignorance. Can we investigate these limits? Can ignorance itself become a subject for investigation? Can we construct an epistemology of ignorance like we have one for knowledge? Robert Proctor, a historian of science at Stanford University, and perhaps best known as an implacable foe of the tobacco industry’s misinformation campaigns, has coined the word agnotology as the study of ignorance. We can investigate ignorance with the same rigor as philosophers and historians have been investigating knowledge.

Page 30, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

So how should our scientific goals be set? By thinking about ignorance and how to make it grow, not shrink — in other words, by moving the horizon.

Page 50, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

There is also a certain conclusive, but wrong, notion that comes from an explicit number. In a peculiar way it is an ending, not a beginning. A recipe to finish, not to continue.

Page 54, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

Big discoveries are covered in the press, show up on the University’s home page, garner awards, help get grants, and make the case for promotions and tenure. But it’s wrong. Great scientists, the pioneers that we admire, are not concerned with results but with next questions.

Page 57, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

We often use the word ignorance to denote a primitive or foolish set of beliefs. In fact, I would say that “explanation” is often primitive or foolish, and the recognition of ignorance is the beginning of scientific discourse.

Page 167, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

Getting comfortable with ignorance is how a student becomes a scientist. How unfortunate that this transition is not available to the public at large, who are then left with the textbook view of science. While scientists use ignorance, consciously or unconsciously, in their daily activity, thinking about science from the perspective of ignorance can have an impact beyond the laboratory as well.

Page 167, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

Today, however, we find ourselves in a situation where science is as inaccessible to the public as if it were written in classical Latin. The citizenry is largely cut off from the primary activity of science and at best gets secondhand translations from an interposed media. Remarkable new findings are trumpeted in the press, but how they came about, what they may mean beyond a cure or new recreational technology, is rarely part of the story. The result is that the public rightly sees science as a huge fact book, an insurmountable mountain of information recorded in a virtually secret language.

Page 171, Stuart Firestein (2012). Ignorance: How it Drives Science. Oxford University Press.

About Author

MD Higgs

Megan Dailey Higgs is a statistician who loves to think and write about the use of statistical inference, reasoning, and methods in scientific research - among other things. She believes we should spend more time critically thinking about the human practice of "doing science" -- and specifically the past, present, and future roles of Statistics. She has a PhD in Statistics and has worked as a tenured professor, an environmental statistician, and director of an academic statistical consulting program; since founding Critical Inference LLC, she works independently on a variety of projects.
