Making the world a better place.

I have been thinking about what “making the world a better place” means. We aspire to it, but the lack of a proper definition does not help – I obviously don’t hope to find a definition that everyone will agree with, but having one for myself would be a good start.
I subscribe to the vision of a world with the following characteristics : freedom from dogma, equal access for everyone to opportunities to improve their lives, a focus on scientific progress …
Some of these ideas are as fundamental as freedom of religion (or of having no religion), or the belief that humans are fundamentally all equal in rights (and that the differences we draw between various groups are mostly a reflection of our tendency for tribal behaviors) – and by fundamental beliefs I mean that I will have no respect for, and will fight if required, people who uphold the opposite beliefs (fundamentalism, racism …). But some other beliefs are very personal and I don’t expect others to be of the same mind (by which I mean that I will respect other points of view) : the belief that a secular society is better for everyone, or that technological progress can and will generally be used for the overall good of people…

I just read the Universal Declaration of Human Rights. It is a good start, but it is extremely vague when it comes to defining socially acceptable behaviors. It also sets lofty goals for social support to individuals that are extremely difficult to pin down in practical terms – societies at different stages of development, or with different social beliefs, will need different rules there.

Does this require a new kind of social contract ?

Social Sciences becoming true sciences – a key opportunity for our future.

I have long been thinking that Social Sciences need to become true sciences, subject to experimental verification. The counter-argument so far has been that the underlying subjects are too complex and evade modelling. But before we figured out quantum mechanics and its complexities, we had to fumble our way starting with the planetary model of the atom. So social sciences should go back to basics and first try to model simple phenomena.

Now the emergence of smartphones and the explosive growth of sensors are finally providing new means of quantifying social sciences. I have been reading the book “Social Physics: How Good Ideas Spread—The Lessons from a New Science” by Alex Pentland. It’s a difficult read as the style is really poor, but the content is fascinating.

Now think of the implications – if we knew how to really spread memes that drive peaceful cooperation towards economic prosperity in a society, we could make the rebuilding of broken states like Iraq a matter of years instead of generations.

The obvious counterpoint is that the same techniques could be used for propaganda that would make Nazi propaganda seem crude and inefficient. Recent examples, such as Italy under Berlusconi and Hungary under Viktor Orbán, show that modern democracies do allow for the concentration of media and educational power in the wrong hands. Imagine what they could do with the power of truly impactful social sciences.

So this could be the most dramatic change in human history. We learnt to master the material world around us; we now have the opportunity to master our own individual and group psyche. We still behave very much like the primates we evolved as, so getting a scientific grip on what drives our motives, behaviors and actions would truly be revolutionary and evolutionary !

Interesting Concepts I learnt of in 2014.

Confabulation : (Wikipedia) In psychology, confabulation is a memory disturbance, defined as the production of fabricated, distorted or misinterpreted memories about oneself or the world, without the conscious intention to deceive. Confabulation is distinguished from lying in that there is no intent to deceive and the person is unaware the information is false.
(“You are now less dumb”, David McRaney) Neuroscience now knows that confabulations are common and continuous in both the healthy and the afflicted.

Overfitting a model : example Fukushima, where the available data showed fewer of the highest-intensity earthquakes than the Gutenberg-Richter fit predicted. Japanese experts concluded that there was an inflection in the curve, and rationalized that the geological characteristics of the region explained it. As a result, the estimate for the probability of a magnitude 9 event became 1 in 13,000 years vs the Gutenberg-Richter prediction of 1 in 300 years …
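
To make the overfitting point concrete, here is a small Python sketch with entirely made-up numbers (not the actual Japanese data) : a straight-line Gutenberg-Richter fit extrapolated to magnitude 9, versus a higher-order polynomial that chases the sparse high-magnitude tail and predicts a far rarer event.

```python
import numpy as np

# Hypothetical annual frequencies of earthquakes of at least a given magnitude
# (illustrative numbers only, with a tail that drops below the straight line).
mags = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
annual_rate = np.array([2.0, 0.7, 0.25, 0.09, 0.03, 0.007, 0.001])
log_rate = np.log10(annual_rate)

# Straight-line Gutenberg-Richter fit: log10(N) = a - b*M
gr_fit = np.polyfit(mags, log_rate, 1)
gr_rate_m9 = 10 ** np.polyval(gr_fit, 9.0)

# "Overfit": a cubic that chases the wiggles of the sparse high-magnitude tail
over_fit = np.polyfit(mags, log_rate, 3)
over_rate_m9 = 10 ** np.polyval(over_fit, 9.0)

print(f"Gutenberg-Richter fit : one magnitude-9 event every {1/gr_rate_m9:,.0f} years")
print(f"Overfitted curve      : one magnitude-9 event every {1/over_rate_m9:,.0f} years")
```

The overfitted curve fits the observed points slightly better, yet its extrapolation for the rare event is wildly different – exactly the trap described above.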

Complex System : example heap of sand. Each grain you add from the top will either stay in place or roll down the heap. Once in a while one will trigger a sand avalanche. Key property : long periods of apparent stasis, then sudden and catastrophic failures. Not random, but so irreducibly complex that it cannot be predicted beyond a certain level. Differs from Chaos theory. Theorized by physicist Per Bak.
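
Per Bak’s sandpile is simple enough to simulate in a few lines. Below is a minimal toy version of my own (a sketch, not Bak’s actual model code) : grains are dropped one at a time on a small grid, any cell holding four grains topples onto its neighbours, and grains falling off the edge are lost.

```python
import random

SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random cell and return the avalanche size (number of topplings)."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    topplings = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4          # the cell topples...
        topplings += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                grid[ni][nj] += 1    # ...sending one grain to each neighbour
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topplings

sizes = [drop_grain() for _ in range(20000)]
print("grains dropped:", len(sizes))
print("drops causing no avalanche:", sizes.count(0))
print("largest avalanche (topplings):", max(sizes))
```

Run it and most drops cause no toppling at all, while a few trigger avalanches involving hundreds of topplings – long stasis, sudden catastrophic events.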

Long term thinking.

A major turning point for mankind will come when we change our time horizon for societal actions from the very short term to the long term, i.e. one generation or more. Our ability to make real changes is now very high, but all our systems are geared towards short-term thinking. For example, in the USA we believe that sending lots of people to jail for low-level transgressions is good, but in doing so we place massive numbers of people outside of society and break them for good – the long-term impact is just awful. Similarly, if we let a generation grow up without proper education, the short-term impact is nothing, but it breeds disaster in the long run, with no quick fixes.

Random technological innovation ideas

A “brick of light” – a segment of outside wall pierced with minuscule holes through which optical fibers bring light from the outside. The holes are spread out on the outside and converge on the inside of the house into a tight pattern that diffuses external light indoors.

An electronic bookmark allowing synchronization between a paper book and its ebook equivalent, as well as definitions lookup and search on the paper book.

An “assisted Unicycle” – a compact mode of urban transportation on a single wheel. Uses a reaction wheel for stability assistance ?

An intelligent mattress that would modify its shape in real time to best support the sleeper’s body 🙂

Nate Silver’s book “The Signal and the Noise”

 

Nate Silver’s book “The Signal and the Noise” is an amazing read. Very well written, entertaining as well as deep, it holds lessons that are applicable in our daily personal and professional lives. Its stated purpose is to look at how predictions are made, and how accurate they are, in several fields : weather, the stock market, earthquakes, terrorism, global warming … But beyond that simple premise, it is a real eye-opener when it comes to describing some of the deeply flawed ways in which we humans analyze the data at hand and make decisions.

Nate Silver is very skeptical of the promises of Big Data, and believes that the exponential growth in available data in recent years only makes it tougher to separate the grain from the chaff, the signal from the noise. One of the ways he believes we can make better forecasts is to constantly recalibrate them based on new evidence, and to actively test our models so as to improve our predictions and therefore our decisions. The key to doing that is Bayesian statistics … This is a very compelling, if complex, use of the Bayes Theorem, and it’s detailed through a few examples in the book.

As he explains, in the field of economics the US government publishes some 45,000 statistics. There are billions of possible hypotheses and theories to investigate, but at the same time “there isn’t any more truth in the world than there was before the internet or the printing press”, so “most of the data is just noise, just as the universe is filled with empty space”.

The Bayes Theorem goes as follows :

P(T|E) = P(E|T)xP(T) / ( P(E|T)xP(T) + P(E|~T)xP(~T) )

Where T is the theory being tested, E the evidence available. P(E|T) means “probability of E being true if we assume that T is true”, and notation ~T stands for “NOT T”, so P(E|~T) means “probability of E being true if we assume that T is NOT true”.

A classical application of the theorem is the following problem : for a woman in her forties, what is the chance that she has breast cancer if she has had a mammogram indicating a tumor ? The basic statistics are the following, with their mathematical representation if T is the theory “has a cancer” and E the evidence “has had a mammogram that indicates a tumor” :

– if a woman in her forties has a cancer, the mammogram will detect it in 75% of cases – P(E|T) = 75%

– if a woman in her forties does NOT have a cancer, the mammogram will still erroneously detect a cancer in 10% of cases – P(E|~T) = 10%

– the probability for a woman in her forties to have a cancer is 1.4% – P(T) = 1.4%

With that data, if a woman in her forties has a mammogram that detects a cancer, the chance of her actually having a cancer is … less than 10% !!! That seems totally unrealistic – isn’t there an error rate of only 25% or 10%, depending on how you read the above data ? The twist is that there are many more women without a cancer (98.6%) than women having a cancer at that age (1.4%), so the erroneous cancer detections, even though they occur in only 10% of the cases where women are healthy, will be very numerous.

That’s what the Bayes theorem computes – the probability of a woman having a cancer if her mammogram has detected a tumor is :

P(T|E) = 75%x1.4% / ( 75%x1.4% + 10%x98.6% ) = 9.6%
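
For the curious, here is the same calculation in a few lines of Python – a minimal sketch, and the helper name bayes_posterior is mine, purely for illustration :

```python
def bayes_posterior(p_e_given_t, p_e_given_not_t, p_t):
    """Return P(T|E) from P(E|T), P(E|~T) and the prior P(T)."""
    numerator = p_e_given_t * p_t
    return numerator / (numerator + p_e_given_not_t * (1 - p_t))

# Mammogram example: P(E|T) = 75%, P(E|~T) = 10%, P(T) = 1.4%
print(f"{bayes_posterior(0.75, 0.10, 0.014):.1%}")  # -> 9.6%
```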

Nate Silver uses that same theorem in another field – we have many more scientific theories being published and tested every day around the world than ever before. How many of these are actually statistically valid ?

Let’s use the Bayes theorem : E is the experimental demonstration of a theory, T the fact that the theory is actually valid, and the statistics are the following :

– a correct theory is demonstrated in 80% of cases – P(E|T) = 80%

– an incorrect theory will be disproved in 80% of cases, i.e. erroneously “demonstrated” in 20% of cases – P(E|~T) = 20%

– the proportion of tested theories that are actually correct – P(T) = 10%

In that case, the probability that a positive experiment means the theory is correct is only about 30% – again a result that goes against our intuition, since from the above statistics it seems that the “accuracy” of proving or disproving theories is 80% !!! The Bayes Theorem does the calculation right, and takes into account the low probability of a new theory being valid in the first place :

P(T|E) = 80%x10% / ( 80%x10% + 20%x90% ) = 30.8%
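
And the same quick check in Python for this second example, stand-alone and using only the figures quoted just above :

```python
# Posterior probability that a theory is correct given a positive experiment,
# applying the same Bayes formula to the figures above.
p_e_given_t, p_e_given_not_t, p_t = 0.80, 0.20, 0.10
posterior = (p_e_given_t * p_t) / (p_e_given_t * p_t + p_e_given_not_t * (1 - p_t))
print(f"{posterior:.1%}")  # -> 30.8%
```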

Here again, rare events (valid theories) tend to generate lots of false positives. In real life this results in a counter-intuitive fact : at the very time when published scientific research is proliferating, it has been found that two-thirds of “demonstrated” results cannot be reproduced !!!

So … this book should IMO be taught in school … It gives very powerful and non-intuitive mental tools to make us better citizens, professionals and individuals. I don’t have much hope of it making its way into school curricula any time soon, so don’t hesitate : read this book, and recommend it to your friends and family 🙂