The Love Lab: Using Data Science to Predict the Likelihood of Divorce

The Love Lab

The Love Lab at the University of Washington, founded by Professor John Gottman, uses predictive analytics to assess the likelihood of newlyweds having a stable and happy marriage based on observations of the couple during a 15-minute conversation about a persistent marital conflict. Gottman’s team then extracts information from each second of video by evaluating emotions and expressions along with physiological data (e.g., changes in heart rate). Emotions are characterized using the SPAFF (Specific Affect Coding System) code, developed from Ekman & Friesen’s Facial Affect Coding System, which applies weightings to positive and negative emotions. The weighting scheme for the SPAFF codes is provided below.


Weighting Scheme for the SPAFF Codes (Source)

Researchers compile the second-by-second SPAFF data into a time series and use complex algorithms to project the likelihood of marital success. Dr. Gottman and his team have found that, after analyzing only a 15-minute conversation, they can predict with 90% accuracy whether a couple’s marriage will last longer than 15 years. Even more astounding, Love Lab researchers have found they can be nearly as accurate in their projections of marital success after analyzing only 3 minutes of the conversation. To demonstrate the power of the Love Lab’s methodology, 3-minute clips were shown to roughly 200 professional marital counselors, whose projections of whether a couple’s marriage would end in divorce were no better than chance.
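A toy sketch of how second-by-second codes might be rolled into a time series. The weights below are hypothetical stand-ins, not Gottman’s actual SPAFF weighting scheme:

```python
# Hypothetical weights for a few SPAFF-style codes; Gottman's actual
# weighting scheme differs.
WEIGHTS = {"humor": 4, "affection": 4, "interest": 2, "neutral": 0,
           "anger": -1, "defensiveness": -2, "criticism": -3, "contempt": -4}

def affect_series(codes):
    """Roll second-by-second affect codes into a cumulative score:
    a rising line is a good sign, a falling one is trouble."""
    total, series = 0, []
    for code in codes:
        total += WEIGHTS[code]
        series.append(total)
    return series
```

For example, `affect_series(["humor", "criticism", "contempt"])` returns `[4, 1, -3]`: one contemptuous remark erases the credit from an earlier joke.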

Key Lessons from the Love Lab

  1. Positive vs. Negative Emotions: Gottman’s team has found that for a marriage to survive, the ratio of positive to negative interactions needs to be around 5 to 1.
  2. The Four Horsemen: Dr. Gottman’s research has identified four key drivers of marital demise, which he refers to as the Four Horsemen: defensiveness, stonewalling, criticism, and contempt. Of the four, Gottman has found the most damaging is contempt, which he defines as any statement made from a higher level. Contempt is such a strong emotion that the level of contempt a spouse experiences can even be used to predict the number of colds that person will catch, as the stress impacts their immune system.


  1. Blink: The Power of Thinking without Thinking, Malcolm Gladwell, 2005
  2. The Gottman Institute
  3. Predicting Divorce among Newlyweds from the First 3 Minutes of a Marital Conflict Discussion, Sybil Carrere & John Gottman, 1999

The Monte Carlo Method vs. the Normal Distribution: Approximating Uncertainty in the Absence of “Headwinds”

Disclaimer: This post is rather wonkish with statistical and metallurgical discussions.

In a previous post I outlined applying Bill James’s similarity index (James is the founder of Sabermetrics and a Moneyball influencer) to a metallurgical engineering project (Link). In that example, a statistical model was developed to project the strength of a particular alloy with respect to changes in a processing parameter. A Monte Carlo simulation was used to evaluate the distribution of projected strengths resulting from thousands of changes in the model inputs. The result of the simulation gives you the approximate probability of the various outcomes.

The Monte Carlo Method

The Monte Carlo method, as you may have guessed, derives its name from the Monte Carlo casino in Monaco. The approach was invented by the Polish mathematician Stanislaw Ulam as part of the Manhattan Project (Source). Ulam’s inspiration came from playing solitaire and wondering whether there was an easy way to calculate the probability of winning the game, a line of thinking he eventually applied to neutron diffusion (Source). The method involves taking a model, feeding in distributions of the various inputs, and recording the outputs over hundreds or thousands of iterations. An example distribution generated from a Monte Carlo simulation, from work on developing an improved age practice for 7068 aluminum, is provided below. The simulation was performed by varying the time and temperature of the two-step age practice performed following solution heat treatment.
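The procedure just described (feed distributions of the inputs through a model, record the outputs) can be sketched in a few lines. The model, coefficients, and input distributions below are invented for illustration and are not the actual 7068 aluminum model:

```python
import random
import statistics

random.seed(42)

def strength_model(time_h, temp_c):
    """Toy stand-in for a fitted strength model (ksi); the coefficients
    are invented for illustration."""
    return 400 + 1.5 * time_h + 0.8 * (temp_c - 120)

# Feed distributions of the inputs through the model, record the outputs.
outputs = [strength_model(random.gauss(8.0, 0.5),     # aging time, hours
                          random.gauss(120.0, 3.0))   # aging temperature, C
           for _ in range(10_000)]

mean = statistics.mean(outputs)
stdev = statistics.stdev(outputs)
```

The collected `outputs` form the distribution of probable results, from which percentiles or out-of-spec probabilities can be read directly.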


The Monte Carlo method, as a tool for generating a distribution of probable outcomes, differs from the classical example taught in Stats 101 courses. Classical empirical modeling is typically introduced using linear regression (think Excel and the linear trend line). Linear regression models are developed such that a line is drawn through the average or expected outcome for an input variable or set of input variables. The residuals, or differences between the actual and projected (expected) values at a given point, are assumed to be normally distributed, and any residual outside of 2 standard deviations from the projected value is considered an “outlier” (Source). This “lazy” approach to modeling the distribution of outcomes can be effective; however, it comes with the risk of greatly underestimating the probability of “unlikely” outcomes.

When the Normal Distribution Fails

The defining characteristic of the normal distribution is its central tendency, or in layman’s terms, the fact that the majority of the data is clustered around the mean. The image below outlines this concept by highlighting the percentage of the data in each area with respect to the number of standard deviations (σ) from the mean (μ). From the image you can see that only about 0.1% of the data lies beyond the 3σ point. This feature makes the normal distribution easy to illustrate and drives its use as the basis for tools such as control charts.


Image source

Nassim Nicholas Taleb (NNT), in his book The Black Swan, hammers home the point that using the Normal (Gaussian) distribution is dangerous for approximating the likelihood of seemingly low-probability outcomes. NNT states that things that are normally distributed face “headwinds” which make probabilities drop faster and faster as you move away from the mean (e.g., height, IQ, etc.). If the “headwinds” are removed, the resulting outcomes become significantly asymmetrical (think 80/20 Pareto principle). NNT illustrates this point by presenting the actual wealth distribution in Europe and contrasting it with what the distribution would look like if wealth were normally distributed.

Wealth Distribution in Europe:

  • People with wealth greater than €1 million: 1 in 63
  • Higher than  €2 million: 1 in 125
  • Higher than  €4 million: 1 in 250
  • Higher than  €8 million: 1 in 500
  • Higher than  €16 million: 1 in 1,000
  • Higher than  €32 million: 1 in 2,000
  • Higher than €320 million: 1 in 20,000
  • Higher than €640 million: 1 in 40,000

Normal Wealth Distribution:

  • People with wealth greater than €1 million: 1 in 63
  • Higher than  €2 million: 1 in 127,000
  • Higher than  €4 million: 1 in 886,000,000,000,000,000
  • Higher than  €8 million: 1 in 16,000,000,000,000,000,000,000,000,000,000,000


The above example demonstrates that if wealth were normally distributed, the likelihood of a Bill Gates or Warren Buffett would be incomputable. It provides a simple lesson in the fragility of the normal distribution when it comes to approximating the probability of unlikely outcomes.
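The collapse of the normal tail can be reproduced in a few lines of Python. Assuming, for simplicity, a mean of zero and calibrating so that €1 million sits about 2.15 standard deviations out (odds of roughly 1 in 63), each doubling of wealth doubles the z-score and the odds explode:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# With the mean at zero, 1M sits ~2.15 sigma out, 2M at ~4.30 sigma,
# 4M at ~8.60 sigma, and so on.
for multiple in (1, 2, 4, 8):
    odds = 1 / normal_tail(2.15 * multiple)
    print(f"> {multiple}M: 1 in {odds:,.0f}")
```

The printed odds reproduce the order of magnitude of NNT’s table: about 1 in 63 at €1 million, over 1 in 100,000 at €2 million, and astronomically beyond that.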

Wrapping it Up

“All models are wrong, some models are useful”

-George Box, Industrial Statistician

In process engineering, “headwinds”, to borrow the term from NNT, are the controls imposed on the process inputs. These controls form the basis for the Y = f(x) philosophy touted by Six Sigma books to demonstrate that if the inputs to a process are “in control,” the resulting outputs will be as well. The problem with this logic is that it implies the organization attempting to control the process has identified all the necessary input variables and deployed adequate controls (i.e., “headwinds”).

Recently, I fell victim to this oversimplification after resurrecting a model that used the “lazy” approach to modeling uncertainty discussed above and applying it to a process where the “headwinds” (i.e., controls on raw material) had been removed. The result was a drastic underestimation of the probability of an undesirable outcome (production of material outside the specification limits). Compared with the “lazy” (+/- 3σ) projection of probable outcomes, the actual likelihood of nonconformity ended up being an order of magnitude higher than originally projected. D’oh!

Lesson Learned: Avoid the “lazy” approach and embrace the Monte Carlo!



Remembering Hans Rosling, Co-founder of the GapMinder Foundation


I had the privilege of seeing Dr. Hans Rosling speak at the 2012 ARPA-E Energy Innovation Summit, an event where Elon Musk announced Tesla would be repaying its government loans early and Secretary Steven Chu kicked off his talk with a dirty joke. The punchline: “Yes, but my previous husband was an entrepreneur. Every night he would stand at the end of the bed and tell me how great it was going to be.” With all the great technology being exhibited and guest speakers like Elon Musk, Dr. Chu, T. Boone Pickens, and several members of Congress, Dr. Rosling blew them all out of the water! Dr. Rosling put on an amazing show using his revolutionary data visualization tools as he painted a picture of global health and economics, which he used to promote his battle against preconceived notions. Cheers to you, Dr. Rosling!

If you enjoyed his talk, check out Dr. Rosling’s organization or his book Factfulness.

“I keep saying the sexy job in the next ten years will be statisticians. People think I’m joking, but who would’ve guessed that computer engineers would’ve been the sexy job of the 1990s?”

-Hal Varian, Chief Economist at Google

Attributes to Look for When you Must Select an “Expert”

If you enjoyed our last post, where we explored the effectiveness of financial titans, an example of how medical professionals can be fallible, and how incentives lead realtors to work against your best interest (Link), but find yourself still in need of an “Expert,” seek out these traits, adapted from The Signal and the Noise by Nate Silver.


Attributes to Look For:

  • Multidisciplinary: Demonstrates the ability to incorporate ideas from other fields or disciplines
  • Adaptable: Finds new approaches or is willing to pursue multiple approaches at the same time
  • Self-Critical: Willing to take ownership of mistakes or failed predictions
  • Tolerant of Complexity: Understands that the universe is complex and recognizes that some things are unpredictable (e.g., Black Swans)
  • Cautious: Expresses projected outcomes in probabilistic terms
  • Empirical: Relies more on observations and data than theory

Attributes to be Cautious Of:

  • Specialized: Career dedicated to working on one or two great problems. Skeptical of the opinion of “outsiders”
  • Stalwart: Singular approach to problem solving. New data is used to refine the original model
  • Stubborn: Blames others or “bad luck” for mistakes and errors
  • Order-seeking: Expects things to abide by relatively simple governing relationships
  • Confident: Speaks in terms of certainty
  • Ideological: Expects solutions to be a manifestation of some grander theory


The two lists outlined above suggest that the “best” experts will be reluctant to make bold statements regarding projected outcomes. This reluctance may be perceived as a “weakness,” as it violates the laws of persuasion. Always be on your guard for Gurus preaching certainty, as they are of course trying to sell you something!

Persuasion Tip #9: Display confidence [either real or faked] to improve your persuasiveness. You have to believe yourself, or at least appear as if you do, in order to get anyone else to believe

Win Bigly by Scott Adams

Forget Gurus! Why you should do your own homework.

“How could I have been so mistaken as to have trusted the experts?”

-John F. Kennedy after Bay of Pigs

Turning to experts for solutions to problems can be appealing for a number of reasons, the most common of which is the comfort of outsourcing the decision-making process. In my own experience, I’ve found organizations love to delegate decisions, especially technical decisions, to an ordained group of “Experts.” These shamans will fly in on Monday afternoons and confer in conference rooms until Thursday, at which point they will collectively provide their OPINION on the matter at hand. This OPINION will be taken as fact via a combination of confirmation bias and office politics. Decisions will be made based on the newfound “facts,” and opportunities will be lost.

This post, however, is not about the use of Gurus in the corporate world, but about how we turn to these anointed professionals for help making decisions in our everyday lives. In this post we will explore the effectiveness of investment professionals, political pundits, doctors, and realtors, with the goal of providing you with a little extra motivation to do your own research.

Investment Professionals:

Turn on Fox Business or MSNBC, or pop open the Wall Street Journal, and you can witness the worship of investment “greats” providing a myriad of explanations of where the market is going or what the hot stock is. Though Wall Street analysts make average salaries measured in the millions, are these analysts any more effective than dart-throwing chimps? The answer is no!

Need further evidence?! A 1995 study of the highest-paid Wall Street analysts, invited by Barron’s to its annual roundtable to make recommendations, revealed that investments made based on the projections of these oracles merely matched the average market return (Source: Mlodinow). Furthermore, studies from 1987 and 1997 found that the recommendations from the television show Wall $treet Week significantly lagged the overall market, while a Harvard study of 153 investment newsletters indicated “no evidence of stock picking ability” (Source: Mlodinow).

You may now be asking yourself how these stock market gurus, who apparently have at best the insight of a dart-throwing chimp, can be touted as “beating the market” and be paid outrageous amounts of money. The answer is the illusion of patterns. Humans have evolved to be excellent at pattern recognition, allowing us to perform feats such as circumnavigating the globe by the stars. This ability is a double-edged sword, however, as we can also identify patterns where none exist. The example below compares the distribution of the top 300 fund managers by the number of consecutive years they beat the market (S&P 500) against the success of students guessing a series of coin flips. The two distributions are almost identical, suggesting that the ability of fund managers to beat the market is equivalent to the probability of correctly calling “heads” or “tails.” As the population of coin-flipping students or fund managers increases, the probability of a small number of “extraordinary performers” increases to the point of certainty.
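A quick simulation makes the point. Assuming each “manager” has a 50/50 shot at beating the market in any given year, a population of 300 all but guarantees a few impressive-looking streaks (the seed and population size below are arbitrary):

```python
import random

random.seed(7)
N_MANAGERS, N_YEARS = 300, 15

def longest_streak(wins):
    """Length of the longest run of consecutive True values."""
    best = run = 0
    for w in wins:
        run = run + 1 if w else 0
        best = max(best, run)
    return best

# Each "manager" beats the market in a given year with probability 0.5.
careers = [[random.random() < 0.5 for _ in range(N_YEARS)]
           for _ in range(N_MANAGERS)]
best_streak = max(longest_streak(c) for c in careers)
avg_wins = sum(sum(c) for c in careers) / N_MANAGERS
```

The average manager wins about half the time, yet the best performer racks up a multi-year winning streak by luck alone, exactly the resume of a market “oracle.”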


Top 300 Fund Managers vs. 300 Coin Flipping Students (Source: Mlodinow)

Political Pundits:

After recognizing the failure of political scientists to predict the fall of the Soviet Union, Phil Tetlock (then a professor at UC Berkeley) undertook a 15-year study to evaluate the accuracy of political predictions. Tetlock’s findings, published in his book Expert Political Judgment, concluded that experts were barely better than random chance (think dart-throwing chimps) at predicting events. In fact, events Tetlock’s “experts” said had a zero percent chance of occurring actually occurred about 15% of the time, while events they deemed absolutely certain did not occur about 25% of the time. Tetlock also noted that the more interviews an expert participated in, the worse his or her prediction accuracy (Source: N. Silver)!

In his own review of political pundits, Nate Silver evaluated predictions made by panelists on the television program The McLaughlin Group. Silver analyzed almost 1,000 predictions and found that the pundits were about as accurate as a coin flip. Furthermore, Silver noted that the predictions of the “experts” on The McLaughlin Group were influenced by their political affiliations (Source: N. Silver).

Persuasion Tip #1: When you identify as part of a group, your opinions tend to be biased toward the group consensus.

Win Bigly by Scott Adams

Medical Doctors:

While it may be easy to “write off” investment analysts and political pundits as modern-day snake oil salesmen, let’s take a brief look at medical doctors. Studies have shown that radiologists fail to identify lung disease in about 30 percent of the X-ray results they read, despite the clear presence of the disease on the film (Source: Malkiel). Other experiments have shown that professional psychiatrists were unable to distinguish between the sane and the insane (Source: Malkiel).

A classic example is a study by the American Child Health Association performed in the 1920s, in which 1,000 children from the New York City public schools were examined by physicians to determine the need for a tonsillectomy (Source: Malkiel). Of the original 1,000 students, 611 (61.1%) had already had their tonsils removed. The remaining 389 students were evaluated by a group of physicians, who selected 174 (44.7%) as requiring a tonsillectomy. The remaining group of 215 students was evaluated by a third set of physicians, who concluded that 99 (46%) of those students were in need of the operation. The final 116 children were examined one last time, and the next group of physicians recommended that 51 (43.9%) needed a tonsillectomy. The results of this study indicate that parents taking their children to a New York City physician in the 1920s for tonsil issues were effectively paying doctors to flip a coin!
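The study’s pattern is easy to reproduce. Assuming each physician panel flags roughly 45% of whichever children it sees, at random and regardless of any underlying condition, the flagged fraction stays stubbornly constant across rounds:

```python
import random

random.seed(1)

def screen(n_children, flag_rate=0.45):
    """One physician panel: flags each remaining child with probability
    flag_rate, regardless of any underlying condition."""
    flagged = sum(random.random() < flag_rate for _ in range(n_children))
    return flagged, n_children - flagged

remaining = 389          # children left after the first round of the study
fractions = []
for _ in range(3):       # the three follow-up panels
    flagged, kept = screen(remaining)
    fractions.append(flagged / remaining)
    remaining = kept
```

Each entry of `fractions` lands near 45%, mirroring the 44.7%, 46%, and 43.9% selection rates reported in the study.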


Realtors:

Realtors, or as I refer to them, the most morally bankrupt reptiles on the planet (I’ve recently had some bad experiences with realtors), are another example of anointed professionals who often fail to deliver the insight and results their clients expect. As the video from the Freakonomics movie demonstrates, real estate agents are incentivized to work against their clients to quickly secure a sale.

If you have found yourself complaining about your home’s time on the market or the lack of open houses, and your realtor’s only advice is “lower the price,” the video below is essential. The previous examples of investment professionals and medical doctors hinted at the influence of random luck; the example of real estate professionals provides insight into how incentive systems lead anointed professionals to provide you with misinformation. Leverage public listing websites and data from your local realtor association to generate your own insight.

Wrapping it Up:

Gurus and professionals are not infallible, as the examples provided indicate. In some cases, as we explored with investment professionals and tonsillectomy diagnoses, professionals may be about as effective as dart-throwing chimps or coin flippers. Professionals will be influenced by the consensus of the groups they are in, as was the case with political pundits. Lastly, they may be incentivized to act against your best interests (think real estate agents). Thus, it is essential to do your own homework! Filter the signal from the noise in the data you collect, and be mindful of the bias of Gurus that results from incentives and affiliations.

“All professions are conspiracies against the laity.”

-George Bernard Shaw, Major Barbara

How Nate Silver made me a better Metallurgist

Nate Silver is the founder of FiveThirtyEight, creator of the PECOTA baseball forecasting system used by Baseball Prospectus, and a renowned political forecaster. In his book, The Signal and the Noise, Nate outlines the creation of the PECOTA system and lessons learned from Bill James (founder of Sabermetrics), along with a look at other forecasting problems and opportunities. Silver’s PECOTA system relies on a metric resembling the similarity index proposed by Bill James in his 1986 Baseball Abstract. James developed the similarity index as a tool for comparing any two major league players. In James’s system the index starts with 1,000 points and deducts points based on a set of guidelines. Highly similar players will have indexes as high as 950 or 975. Similarly, the PECOTA system uses an index to evaluate a player against a multitude of former major and minor leaguers to project the player’s performance.
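A toy version of such an index, in the spirit of James’s scheme: start at 1,000 points and deduct for differences. The stats and deduction rates below are invented for illustration and are not James’s actual guidelines:

```python
# Toy similarity index: start at 1,000 points and deduct for statistical
# differences. Stats and deduction rates are invented for illustration.
def similarity(player_a, player_b):
    deductions = {"home_runs": 0.5, "batting_avg_pts": 1.0, "rbi": 0.3}
    score = 1000.0
    for stat, rate in deductions.items():
        score -= rate * abs(player_a[stat] - player_b[stat])
    return max(score, 0.0)

a = {"home_runs": 30, "batting_avg_pts": 280, "rbi": 95}
b = {"home_runs": 28, "batting_avg_pts": 275, "rbi": 90}
```

Here `similarity(a, b)` returns 992.5, comfortably in the “highly similar” range, while an identical player scores the full 1,000.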

For a young metallurgist whose livelihood depends on projecting the results of varying the parameters of an assortment of metallurgical processes to achieve a desired result, how could the lessons of a sabermetrician help? The opportunity presented itself with the need to develop a high-strength product in Alloy 825, an austenitic iron-nickel-chromium alloy commonly used in environments where enhanced corrosion performance is required. The product was to be cold worked (i.e., deformed at room temperature) to a desired size and strength level. The challenge: none of the necessary data was readily available!

After performing a simple Google search, data for other austenitic alloys such as Alloy 625 (a Ni-based alloy) and 316 stainless steel (Fe-based) could readily be obtained from sources like ATI and Special Metals, so a simple curve could be fitted to the results for these two alloys. Following Silver’s first principle, Think Probabilistically, a Monte Carlo simulation was developed, with several distributions fed into the model to generate a distribution of results at each cold working level. The simulation was formulated by feeding in a similarity index varying uniformly (0.5-0.9), a normal distribution of fully annealed Alloy 825 yield strengths, and a normal distribution of residuals from the fitted cold working curves for Alloy 625 and 316. An outline of the model is provided in the figure below.

Alloy 825 Model
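The model outlined above can be sketched in code. The hardening curve coefficients and distribution parameters below are illustrative placeholders, not the actual fitted values:

```python
import random
import statistics

random.seed(3)

def analog_hardening(cold_work_pct):
    """Strength gain (ksi) vs. percent cold work, standing in for the curve
    fitted to published Alloy 625 / 316 data. Coefficients are illustrative."""
    return 2.1 * cold_work_pct - 0.012 * cold_work_pct ** 2

def simulate(cold_work_pct, n=10_000):
    results = []
    for _ in range(n):
        sim_index = random.uniform(0.5, 0.9)  # similarity to the analog alloys
        annealed = random.gauss(45.0, 2.0)    # annealed 825 yield strength, ksi
        residual = random.gauss(0.0, 3.0)     # residual of the fitted curve, ksi
        results.append(annealed + sim_index * analog_hardening(cold_work_pct)
                       + residual)
    return results

strengths = simulate(30.0)
mean_strength = statistics.mean(strengths)
qs = statistics.quantiles(strengths, n=100)   # 99 percentile cut points
p1, p99 = qs[0], qs[-1]
```

Running `simulate` across a range of cold work percentages and plotting the mean alongside the 1% and 99% quantiles yields exactly the kind of band shown in the results graph below.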

The Monte Carlo simulation results are provided in the graph below, with the blue line representing the mean result with respect to degree of deformation (i.e., percent cold work / area reduction), the red line representing the 99% probability, and the bottom line representing the 1% probability. The customer upper and lower specification limits (USL & LSL) are also plotted for reference. The work hardening curve shows that at a cold working percentage of about 30, the product is nearly assured to meet the tensile strength requirements. These results were subsequently validated with actual experiments, with a percent error of less than 3%. Eureka!

825 Model Results


3 Books for Leading the Fight against the Illusion of Management

#1 The Drunkard’s Walk

The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow is essential reading for crusaders against the Illusion of Management. Mlodinow provides readers with an entertaining look at the probability of Roger Maris breaking Ruth’s home run record in 1961 (3.1%), an introduction to Bayes’ theorem, the bias of statistics in the courtroom, and much more.

#2 The Black Swan

The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb challenges portfolio theory and the normal distribution, and introduces readers to the concept of asymmetric risk (a high probability of small loss paired with a low probability of tremendous reward).

#3 The Signal and the Noise

The Signal and the Noise: Why So Many Predictions Fail but Some Don’t by Nate Silver walks readers through the art of forecasting via looks at Moneyball, global warming, and the accuracy (or inaccuracy) of television pundits. Silver provides a fantastic introduction to Bayes’ theorem, power-law distributions, and overfitting.

Are You Living in a Red Bead Experiment?

The red bead experiment was created as a gift for W. E. Deming as a demonstration of the Illusion of Management. The experiment consists of a paddle with 50 slots and a group of “willing workers.” The workers are instructed to dip the paddle into a pan of beads containing 20% red beads and 80% white beads. The goal is to produce white beads, as the customer will not accept red beads. The workers are required to read the detailed work instructions and extract the beads from the pan using the paddle. A trained quality inspector counts the beads and records the results. “Management” provides encouragement and administers discipline to “poor performers.” An example of the kit is provided below (source).


To illustrate the experiment, I’ve selected 6 “willing workers” who have graciously applied. Each week, each worker carefully scoops 50 beads, and the total number of red beads is diligently recorded by the inspector (in reality, each result is generated with a random number generator in an Excel spreadsheet). After 4 weeks, the average number of defects (red beads) is calculated for each “willing worker.” The quality manager presents the results to the staff at the monthly management review with the following chart.
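In place of the spreadsheet, the same “production run” can be generated in a few lines; the worker names beyond those mentioned in the story, and the seed, are arbitrary:

```python
import random

random.seed(11)
WORKERS = ["Brad", "Tom", "Tessa", "Joy", "Sam", "Lee"]

def weekly_draw(n_slots=50, red_fraction=0.2):
    """One paddle dip: each of the 50 slots holds a red bead with p = 0.2."""
    return sum(random.random() < red_fraction for _ in range(n_slots))

# Four weeks of "production": every difference between workers is pure noise.
averages = {w: sum(weekly_draw() for _ in range(4)) / 4 for w in WORKERS}
```

Every worker’s long-run expectation is 10 red beads per dip, yet the 4-week averages scatter by several beads, which is all the raw material management needs to rank, praise, and punish.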


The conversation likely goes as follows:

Quality Manager: “Here is the average weekly scrap performance for each operator. As you can see, they all missed the target of 4 red beads per week; however, Brad was quite close.”

Human Resources Manager: “Tom was the worst performer. We need to consider putting him on a performance improvement plan.”

Production Manager: “We need to create a quality mindset and start doing things right the first time.”

Engineering Manager: “I’ll have the process engineers watch Brad and try to glean any Best Practices.”

Next Management Review:


Quality Manager: “The results from the previous month have not improved. We need to make quality a priority.”

Human Resources: “Tom’s performance has clearly not improved. I recommend we move to terminate him. Brad’s performance has also been impacted by everyone else’s. And Tessa is only getting worse!”

Production Manager: “We must drive accountability down to the operators.”

Engineering Manager: “Clearly they [operators] are not following the standard work”

Final Management Review:


Quality Manager: “The results are not improving…. We need to hold the supervisors accountable”

Human Resources: “The New Girl is an improvement over Tessa, and Brad is heading in the right direction now. I don’t know what else you want me to do!”

Production Manager: “I keep emphasizing quality in the kickoff meetings… these people are just not getting it!”

If this fictitious scenario, created using a random number generator in Excel, hits close to home, you have my condolences. This example and Deming’s red bead experiment are intended to help managers think about the system and processes which generate the results. They also illustrate how “data driven” thinking with a touch of confirmation bias can quickly get out of hand as managers perpetuate the Illusion of Management.


For more info on the red bead experiment, check out this YouTube video here. Also, for an additional resource on Dr. Deming, please check out the book linked below.

Donald Trump, Control Charts, & Lessons in Psychology from my Favorite Cartoonist

Note: The Illusion of Management is not a political blog; however, those struggling to create change within their organizations and build a culture of responsible data analysis can learn a tremendous amount from Scott Adams’s new book. Thus, I explore the concepts of Adams’s book and how they pertain to the Illusion of Management.

Win Bigly is the newest book from Dilbert cartoonist (and trained hypnotist) Scott Adams, in which he reviews his prediction that Donald Trump would win the 2016 election from the viewpoint of Trump using the techniques of what Adams describes as a Master Persuader. Other examples of Master Persuaders are Steve Jobs and Tony Robbins. Adams walks readers through his explanation of why facts don’t matter and the tactics of Master Persuaders, and explores the psychological concepts of confirmation bias and cognitive dissonance through the lens of the 2016 election. Adams explains that a Master Persuader is an individual who recognizes people are irrational 90% of the time (recall that the behavioral finance advocates from the last post argued innate irrationality invalidated the efficient market hypothesis) and uses this observation, along with confirmation bias, to “pace and lead” his subjects. Adams draws upon examples such as Trump’s extreme immigration stance early in the Republican primaries as a way to match his supporters on an emotional level and then lead them later in the race as he transitioned to a less extreme position. Adams also notes that Trump’s ridiculous behavior and visualizations, such as the infamous “Wall,” are tools to prompt discussion, which elevates the issue in importance as a result of the “energy” consumed by the ensuing discussions and ridicule. Enlightened individuals such as Illusion of Management readers have likely seen this tactic deployed in the office, where a skilled individual can successfully “spin up” a seemingly benign issue and suck the life out of an organization.

Persuasion Tip #4: The things that you think about the most will irrationally rise in importance in your mind.

Another popular tactic outlined by Adams is the High Ground Maneuver, where a persuader elevates the discussion to a level where everyone agrees. Adams’s example is Steve Jobs’s handling of “Antennagate,” where Jobs famously stated that all smartphones have problems in response to issues with the iPhone 4. In the corporate world you’ve likely heard ambiguous statements such as “We need to focus on quality” or “create a quality mindset” (Note: strategic ambiguity is also a tool of the Master Persuader). Inevitably, if the High Ground Maneuver goes unchecked, the organization will likely kick off projects such as standard work deployment, where the resulting work instructions are pulled out for external audits and collect dust the other 360 days a year.

Persuasion Tip #13: Use the High Ground Maneuver to frame yourself as the wise adult in the room. It forces others to join you or be framed as the small thinkers.

Though Win Bigly aids readers in identifying the tactics of the Master Persuader, its continued discussion of confirmation bias is vital for ensuring meaningful performance improvement. From the perspective of the Illusion of Management, confirmation bias is crucial in perpetuating an environment where “noise” in performance data can be translated into “evidence” of progress or poor performance based on the bias of the observer. Adams uses evolution to explain this: understanding reality isn’t essential for people to live long enough to procreate. Evolution and confirmation bias also explain why people are skilled at pattern recognition, enabling humans to circumnavigate the globe by the stars, yet prone to seeing patterns in information that is purely random (Source). The most fundamental tool in the war against confirmation bias and motivated persuaders is the control chart. Control charts use upper and lower control limits to bound individual results within a range considered “noise,” or common cause variation. Other rules, such as those developed by Western Electric (link), further enable filtering of signal from noise. Armed with tools such as control charts and the lessons of Master Persuaders outlined by Adams, enlightened and motivated individuals have a fighting chance against the Illusion of Management.
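A minimal individuals-chart sketch implementing Western Electric rule 1 (a single point beyond the 3σ limits); the baseline data below are invented for illustration:

```python
import statistics

def control_limits(baseline):
    """Individuals-chart limits from a baseline period: mean +/- 3 sigma."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

def signals(series, lcl, ucl):
    """Western Electric rule 1: flag any point outside the 3-sigma limits."""
    return [i for i, x in enumerate(series) if x < lcl or x > ucl]

baseline = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
lcl, ucl = control_limits(baseline)
new_data = [10.0, 10.2, 9.9, 13.5, 10.1]   # index 3 is a genuine signal
```

Points inside the limits are common cause variation and warrant no reaction; only the flagged point at index 3 is a signal worth investigating. The other Western Electric rules (runs of points near a limit, on one side of the centerline, etc.) extend the same idea.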

Persuasion Tip #7: It’s easy to fit completely different explanations to the observed facts. Don’t trust any interpretation of reality that isn’t able to predict.
