Note: over the last few posts, I’ve been exhausting the storehouse of prewritten pieces from other websites that I hadn’t yet transferred to this website. I believe all of them have now been posted here, so let’s return to our regularly scheduled programming.
I’ve had an article saved on Pocket for a few months now with a section highlighted. I don’t often highlight sections of articles because I don’t often keep articles on Pocket — once I’ve read one, I delete it (which makes highlighting superfluous). However, a few months ago I came across an article with a passage that stopped me in my tracks. It was in a rather weeds-y article about the Twitter war (strongly worded discussion?) between Nate Silver and Nassim Nicholas Taleb. I won’t get into the details, because they aren’t really necessary for the passage that popped, though I did want to set the context in case anyone clicked through to the link and was confused.
About halfway through the article, the author begins a discussion of uncertainty — in particular, two kinds of uncertainty: aleatory and epistemic. [NOTE: You’re not alone if you had to look up aleatory — I don’t recall ever coming across that word.] Anyway, here’s the key bit:
Aleatory uncertainty is concerned with the fundamental system (probability of rolling a six on a standard die). Epistemic uncertainty is concerned with the uncertainty of the system (how many sides does a die have? And what is the probability of rolling a six?).
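To make that distinction concrete (this is my own sketch, not from the article): under aleatory uncertainty we know exactly what the die is and only the rolls are random; under epistemic uncertainty we aren’t even sure which die we’re holding, so any probability has to average over our beliefs about the system itself. Here’s a quick Python illustration — the candidate dice and the uniform prior are arbitrary choices of mine, just to show the mechanics:

```python
import random

random.seed(42)

# Aleatory uncertainty: the system is fully known (a fair 6-sided die);
# the only uncertainty is the inherent randomness of each roll.
rolls = [random.randint(1, 6) for _ in range(100_000)]
p_six_aleatory = sum(r == 6 for r in rolls) / len(rolls)

# Epistemic uncertainty: we don't even know how many sides the die has.
# Put a (hypothetical) uniform prior over a few candidate dice and
# marginalize: P(six) = sum over dice of P(die) * P(six | die).
candidate_sides = [4, 6, 8, 20]  # assumed candidates, chosen for illustration
prior = {s: 1 / len(candidate_sides) for s in candidate_sides}
p_six_epistemic = sum(
    prior[s] * (1 / s if s >= 6 else 0)  # a 4-sided die can't roll a six
    for s in candidate_sides
)

print(f"aleatory  P(six) = {p_six_aleatory:.3f}")   # close to 1/6
print(f"epistemic P(six) = {p_six_epistemic:.3f}")  # 0.25*(1/6 + 1/8 + 1/20)
```

Notice that the two answers differ not because the dice behave differently, but because in the second case our ignorance about the system itself is part of the calculation.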
How many times have you come across a model that purports to predict the outcome of something — without any way to “look under the hood” of the model and see how it came to its conclusions? OK, maybe looking under the hood doesn’t suit your fancy, but I bet you partake in the cultural phenomenon that is following who’s “up” or “down” in the 2020 election forecasts. Will POTUS be re-elected? Will the other party win? Or what about our friends on the other side of the pond — Brexit!? Will there be a hard Brexit, a soft Brexit, or are they going to hold another election?!
These are all events where the author of the piece or the creator of the model might not be adequately representing (or disclosing) the amount of epistemic risk inherent in answering the underlying question.
So let’s bring this closer to home with something that might be more applicable. You make decisions — every day. Some of you might make decisions that affect a larger number of people, but regardless of how many people are impacted, the decisions you make have effects. When you take in information to make a decision and run it through your internal circuitry — the internal model you have for how your decision will play out — are you accounting for the right kind of uncertainty? Do you think you know all possible outcomes, so the probabilities are “elementary, my dear Watson” (aleatory), or is it possible that the answer to whether you should have cereal for breakfast is actually “elephants in the sky” (epistemic)? OK, maybe a bit dramatic and off-beat as examples go, but you never know when you’re going to see elephants in the sky while pondering what kind of breakfast cereal to pull down off the top of the fridge.