Anchoring bias has been well described:
- Somebody sets an anchor (an initial estimate, or a desired outcome)
- You evaluate the anchor value (you didn't set the anchor; you judge whether it's reasonable)
- You make an adjustment if you decide the anchor value is a bit off
- You might have to reevaluate the new adjustment in light of circumstances
- Perhaps another adjustment before declaring victory!
Here's the way Thomas Bayes described a very similar process:
- Somebody makes an educated guess (call it a hypothesis) about what the outcome might be, and assigns it a probability: the prior
- You run an experiment, model, prototype, or some other analytic process to get real data, and you check whether the data conforms to the educated guess
- If it does, you accept the initial guess as the anchor; if not, you adjust the hypothesis so the adjusted guess conforms to the data. Either way, the initial guess is now more than a guess, since there is data to back it up.
- You might have to run the experiment a few times in light of circumstances
- Perhaps you refine the hypothesis a few more times before declaring victory, as in the sketch below!
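To make the loop concrete, here is a minimal Python sketch of one update step. The vendor scenario and every number in it are made-up illustrations, not data from any real project; only the update formula itself is Bayes' theorem.

```python
# One Bayesian update step: turn a prior guess plus new data into an
# adjusted (posterior) guess. All numbers below are hypothetical.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(hypothesis | data) via Bayes' theorem.

    prior               -- P(hypothesis), the educated guess
    likelihood          -- P(data | hypothesis is true)
    likelihood_if_false -- P(data | hypothesis is false)
    """
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Educated guess: 70% chance our vendor delivers on time.
belief = 0.70

# An on-time delivery is more likely if the vendor really is reliable
# (90%) than if they are not (30%). Observe two good deliveries in a row:
for _ in range(2):
    belief = bayes_update(belief, likelihood=0.90, likelihood_if_false=0.30)
    print(f"adjusted belief: {belief:.2f}")  # 0.88, then 0.95
```

The anchoring parallel is direct: the prior is the anchor, the new data drives the evaluation, and the posterior is the adjusted estimate, which becomes the anchor for the next round.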
So working with an anchor and working with the theorem of Thomas Bayes are very nearly the same thing. How convenient, since there is a lot of background and backup for Bayes and for the work of others who came after him (he was an 18th-century guy).
And this is handy, since for many projects a guess is all we have to get started with. It's good to see that folks have already worked out how to turn a genuine guess into something calibrated.
To see how this works in the real world, with rich descriptions of many projects, check out and read:
"The Theory That Would Not Die" by Sharon Bertsch McGrayne. It's a great read for anyone managing under uncertainty.