From Tim Harford’s “How to Make the World Add Up”: a study by Dan Kahan showed that scientific literacy actually reinforces biases and tribalism, BUT ‘scientific curiosity’ reduces them! The more curious we are, the less tribalism influences our views, and thankfully there is no correlation between curiosity and political affiliation, so that trait is present across the political spectrum.
Richard Feynman on Science
Richard Feynman, “What Is Science?”
A fitting follow-up to Nate Silver’s estimation of the high proportion of errors in scientific publications. But also a fantastic read on how to look at the world with a sense of wonder, perpetual learning and questioning.
Nate Silver’s book “The Signal and the Noise”
Nate Silver’s “The Signal and the Noise …” is an amazing read. Very well written, entertaining as well as deep, it holds lessons that are applicable in our daily personal and professional lives. Its stated purpose is to look at how predictions are made, and how accurate they are, in several fields: weather, the stock market, earthquakes, terrorism, global warming … But beyond that simple premise, it is a real eye-opener when it comes to describing some of the deeply flawed ways in which we humans analyze the data we have at hand and make decisions.
Nate Silver is very skeptical towards the promises of Big Data, and believes that the exponential growth in available data in recent years only makes it tougher to separate the wheat from the chaff, the signal from the noise. One of the ways we can strive to make better forecasts, he believes, is to constantly recalibrate them based on new evidence, and to actively test our models so as to improve our predictions and therefore our decisions. The key to doing that is Bayesian statistics … This is a very compelling, if complex, use of Bayes’ theorem, and it is detailed through a few examples in the book.
As he explains, in the field of economics the US government publishes some 45,000 statistics. There are billions of possible hypotheses and theories to investigate, but at the same time “there isn’t any more truth in the world than there was before the internet or the printing press”, so “most of the data is just noise, just as the universe is filled with empty space”.
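To get a sense of where those billions of hypotheses come from (a back-of-the-envelope calculation of my own, not a figure from the book), consider that just the pairwise relationships among 45,000 statistical series already give over a billion candidate hypotheses of the form “does X predict Y?”:

```python
from math import comb

# Number of distinct pairs among ~45,000 published statistics; each pair is
# a potential relationship someone could test.
print(comb(45_000, 2))  # 1,012,477,500 (already over a billion)
```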
Bayes’ theorem goes as follows:
P(T|E) = P(E|T)×P(T) / ( P(E|T)×P(T) + P(E|~T)×P(~T) )
where T is the theory being tested and E the evidence available. P(E|T) means “the probability of E being true if we assume that T is true”; the notation ~T stands for “NOT T”, so P(E|~T) means “the probability of E being true if we assume that T is NOT true”.
A classical application of the theorem is the following problem: for a woman in her forties, what is the chance that she has breast cancer if she has had a mammogram indicating a tumor? The basic statistics are the following, with their mathematical representation if T is the theory “has cancer” and E the evidence “has had a mammogram that indicates a tumor”:
– if a woman in her forties has cancer, the mammogram will detect it in 75% of cases – P(E|T) = 75%
– if a woman in her forties does NOT have cancer, the mammogram will still erroneously detect a cancer in 10% of cases – P(E|~T) = 10%
– the probability for a woman in her forties to have cancer is 1.4% – P(T) = 1.4%
With that data, if a woman in her forties has a mammogram that detects a cancer, the chance of her actually having cancer is … less than 10%! That seems totally unrealistic – isn’t the error rate only 25% or 10%, depending on how you read the above data? The twist is that there are many more women without cancer (98.6%) than women with cancer at that age (1.4%), so the number of erroneous cancer detections, even though they occur in only 10% of the cases where women are healthy, will be very high.
That’s what Bayes’ theorem computes – the probability of a woman having cancer if her mammogram has detected a tumor is:
P(T|E) = 75%×1.4% / ( 75%×1.4% + 10%×98.6% ) = 9.6%
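For readers who like to see the numbers run, here is a minimal Python sketch of that calculation (my own illustration, not code from the book; the idea of chaining a second test is also mine, to show Silver’s point about recalibrating on new evidence):

```python
def bayes_posterior(p_e_given_t, p_e_given_not_t, p_t):
    """Posterior P(T|E) via Bayes' theorem, from the three quantities above."""
    numerator = p_e_given_t * p_t
    return numerator / (numerator + p_e_given_not_t * (1 - p_t))

# Mammogram example: P(E|T) = 75%, P(E|~T) = 10%, P(T) = 1.4%
posterior = bayes_posterior(0.75, 0.10, 0.014)
print(f"{posterior:.1%}")  # 9.6%

# Recalibrating on new evidence: feed the posterior back in as the new prior.
# A second, independent positive test changes the picture dramatically.
print(f"{bayes_posterior(0.75, 0.10, posterior):.1%}")  # 44.4%
```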
Nate Silver uses that same theorem in another field – we have many more scientific theories being published and tested every day around the world than ever before. How many of these are actually statistically valid?
Let’s use Bayes’ theorem: if E is the experimental demonstration of a theory, and T the fact that the theory is actually valid, with the following statistics:
– a correct theory will be demonstrated in 80% of cases – P(E|T) = 80%
– an incorrect theory will be disproved in 80% of cases – P(E|~T) = 20%
– the proportion of tested theories that are actually correct is 10% – P(T) = 10%
In that case, the probability that a positive experiment means the theory is correct is only about 31% – again a result that goes against our intuition, since from the above statistics the “accuracy” of proving or disproving theories seems to be 80%! Bayes’ theorem does the calculation right, and takes into account the low probability of a new theory being valid in the first place:
P(T|E) = 80%×10% / ( 80%×10% + 20%×90% ) ≈ 31%
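As a quick check (plain arithmetic, same formula as in the sketch above):

```python
# P(E|T) = 80%, P(E|~T) = 20%, P(T) = 10%
p = (0.80 * 0.10) / (0.80 * 0.10 + 0.20 * 0.90)
print(f"{p:.1%}")  # 30.8%, i.e. roughly 31%
```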
Here again, rare events (valid theories) tend to generate lots of false positives. And this shows up in real life as a counter-intuitive fact: at the very time when published scientific research is proliferating, it has been found that two-thirds of “demonstrated” results cannot be reproduced!
So … this book should, IMO, be taught in school … It gives very powerful and non-intuitive mental tools to make us better citizens, professionals and individuals. I don’t have much hope of it making its way into the school curriculum any time soon, so don’t hesitate: read this book, and recommend it to your friends and family 🙂
Population and topography
Superimposing on the iPad a night-time picture of the US with a topographical map shows that population settlements are hugely influenced by the terrain. I also never realised before that the US is such a tale of two halves – plains in the East, mountains and deserts in the West.
“The Stuff of Thought” by Steven Pinker
This is a book that I have started reading twice already … Not a good sign? More a sign of laziness on my part, as it is one of the densest books I know. Each page holds more complex concepts and ideas than my brain and attention span can handle.
So before I give up reading it again, let me share the content of the part I did manage to read through!
Steven Pinker analyses language as a window into the way we think. He starts by asking the profound question “How do children learn language?”, focusing especially on how they know what NOT to say.
For example, one can say:
“I sprayed water onto the roses”
As well as
“I sprayed the roses with water”
We can imagine that hearing multiple such examples allows children to determine that there is a pattern, a rule. It could go like this: ‘if I can say “someone actioned object A onto object B”, then I can equally say “someone actioned object B with object A”’.
But that rule does not always work. For example:
While one can say
“I coiled a rope around the pole”
One cannot say
“I coiled the pole with a rope”
This issue is more important than it first appears. We could try to explain it away as just an exception, but that poses another problem: how would children learn these exceptions? They can determine patterns by listening to adults, but exceptions like the one above would require them to know what is NOT being said. The only way this could happen is for the child to make the mistakes and get corrected, and that would require massive additional amounts of interaction, which doesn’t seem realistic.
Fortunately there is another explanation offered by Steven Pinker.
We were looking for rules that would involve simple grammatical patterns. Instead it appears that the rule is about a mental picture of the action being performed. This is where the window into our underlying thought processes starts opening.
Our brain seems to classify verbs into categories that correspond to types of actions performed, but with distinctions far from obvious at first glance. For example, brush, plaster, rub, drip, dump, pour and spill all seem to pertain to getting some liquid or goo onto a receptacle. But the fact that you can “smear a wall with paint” while you cannot “pour a glass with water” tells you that they have to be classified into two different types of actions:
– the first type is when you apply force to both the substance and the surface simultaneously: brush, daub, plaster, rub, smear, smudge, spread, swab. You are directly causing the action, with a sense of immediacy.
– the second type is when you let gravity do the action: drip, funnel, pour, siphon, spill. You are acting indirectly, letting an intermediate enabling force do the work. There is also a sense of the action happening after you have triggered it, instead of immediately.
The first type allows the “smear a wall with paint” construction; the second type does not.
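Just as a toy illustration of how sharp the rule is (a sketch of my own, not an example from the book), you can encode the two verb classes and check which constructions each one licenses:

```python
# Pinker's two classes of "putting substance on a surface" verbs, as listed
# above. Only the first class licenses "<verb> the surface with the substance".
SIMULTANEOUS_FORCE = {"brush", "daub", "plaster", "rub", "smear", "smudge", "spread", "swab"}
GRAVITY_MEDIATED = {"drip", "funnel", "pour", "siphon", "spill"}

def licenses_with_construction(verb: str) -> bool:
    """True if '<verb> the surface with the substance' sounds acceptable."""
    assert verb in SIMULTANEOUS_FORCE | GRAVITY_MEDIATED, "unknown verb"
    return verb in SIMULTANEOUS_FORCE

for verb, phrase in [("smear", "a wall with paint"), ("pour", "a glass with water")]:
    verdict = "can" if licenses_with_construction(verb) else "cannot"
    print(f'You {verdict} say "{verb} {phrase}".')
```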
Going further, the book explains how researchers have used non-existent verbs such as “goop”, telling children that it means wiping a cloth with a sponge, to test how they instinctively use such verbs and whether the above rules apply … And they do!
So there IS a logical set of rules that explains how we talk, and these rules provide a glimpse into the mental models we use to conceive of reality, even before we verbalise our thoughts.
I hope I gave you a small taste of that fascinating piece of writing, and even more that you will become as curious as I am about this field of study. Another great book by the same author, “How the Mind Works”, attempts to explain the basic mechanisms we use to think. It is maybe an easier read, but it is equally thought-provoking and fascinating.
Now for a sobering fact. When I tried to explain to Aouda what I had just taken hours to understand, she immediately latched on to the answer as something very natural and almost obvious. So some of us are more conscious of what’s happening inside our heads than others 🙂
What will it be for you? Read one of these books and let me know!
Hans Rosling
Understanding some limits of modern science
I am reading a great book, “13 Things That Don’t Make Sense” by Michael Brooks. It throws great perspective on the news you hear about dark matter and other current scientific pursuits. Bottom line: we know much less than we pretend to, and there is experimental data that we have trouble fitting into our models. This reminds me of scientists declaring in the 19th century that everything had been discovered (and of the parallel in the human sciences, “the end of History”). It is great and refreshing to see that we are but in the infancy of understanding the laws governing our world, and there is always going to be room for new Einsteins!