The Bloomberg columnist Barry Ritholtz wrote a nice post on his finance blog, The Big Picture, recently acknowledging an error (not at all fundamental) in a column and discussing how to deal with mistakes when you make them - which everyone does! I'd like to add that there are mistakes, and then there are mistakes. It's important to know which you've made. I've written about this before, but I may have a somewhat new way to make the point.
Suppose that you're making a prediction - and every assertion about how the world works has to involve at least an implicit prediction of something, because otherwise it's empty. This prediction comes from some kind of model - if you don't think you have a model, you're kidding yourself, and your model is all the worse because you imagine that you aren't using it.
For the sake of argument, let's say that your model takes the form
y = a + b * x + u
where y is what you're predicting, x is some kind of explanatory variable, a and b are parameters, and u represents random stuff (not necessarily really random, but stuff that isn't part of your model). That last term is important: nobody, and nobody's model, gets things totally right.
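The distinction between "the shock term did something" and "the model is wrong" can be sketched in code. Everything below is purely illustrative — the parameter values, the function names, and the rule of thumb about sigmas are my own assumptions, not anything from the column:

```python
# A toy version of the model y = a + b * x + u.
# All numbers here are made up for illustration.
a, b = 2.0, 0.5      # assumed intercept and slope
sigma_u = 1.0        # assumed typical size of the "random stuff" u

def predict(x):
    """The model's point prediction: a + b * x, with u averaging to zero."""
    return a + b * x

def surprise_in_sigmas(x, y_observed):
    """Size of the miss, measured in units of sigma_u.

    A miss of one or two sigmas is routine "stuff happens"; a persistent
    run of much larger misses is the kind of evidence that the underlying
    model, not just the shock term, needs a rethink.
    """
    return abs(y_observed - predict(x)) / sigma_u

# With x = 4 the model predicts 4.0; an outcome of 5.5 is a
# 1.5-sigma miss - well within the normal range of variation.
print(predict(4))                  # 4.0
print(surprise_in_sigmas(4, 5.5))  # 1.5
```

The point of the sketch is only that a single number — the prediction error scaled by the typical size of u — separates the two kinds of mistake in principle, even though in practice judging "typical size" is where the hard work lives.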
So, suppose your prediction about y ends up having been pretty far off. What does that tell you?
It could simply mean that, as the bumper stickers don't quite say, "stuff happens." There could have been a random shock; or for that matter your explanatory variables may not have done what you expected them to. But it could also mean that your underlying model is just all wrong and requires a rethink.
And here's the thing: over the course of your life, you're going to make both kinds of mistakes. The question is whether to hold 'em or fold 'em — to stick with your basic story, or realize that the story is wrong.
Let me give four examples, from my own considerable record of mistakes.
First, back in the mid-1990s I was extremely skeptical about claims of an IT-driven surge in productivity. And I was just wrong: productivity really was surging, although it eventually tapered off. What kind of error was that?
The answer was not fundamental. My model of how the world works didn't at all preclude productivity surges - I was just misjudging the one in sight. One thing I did learn, however, was to take buzz that isn't in the official numbers more seriously than I used to.
Second, in 2003 I warned about a financial crisis in the United States, driven by fiscal irresponsibility, somehow comparable to the crises in Asia a few years earlier. This was, I now believe, due to a fundamental error: countries that borrow in their own currency don't face the same kind of risk as countries that don't. What really annoyed me about this error was that my own analysis was trying to tell me that. I had done quite a lot of work on the Asian crisis, with models that relied crucially on foreign currency debt and balance sheet effects. But I put that analysis aside and went with my gut, which is almost always a bad idea.
So this was a fundamental modeling error, calling for a major revision of views - which I did.
Third, I worried a lot in 2010-2012 about a breakup of the euro zone. And here too I had a fundamentally flawed model. But the flaw wasn't in my economic model, which has worked pretty well, but in my implicit political model: I simply failed to appreciate the incentives facing European elites and how willing they would be to do whatever it takes, both in debtor countries and at the European Central Bank, to avoid an outright rift.
Finally, Britain is growing much faster right now than I expected. Fundamental model flaw? I don't think so. As the economist Simon Wren-Lewis has pointed out repeatedly, the government of Prime Minister David Cameron essentially stopped tightening fiscal policy before the upturn, which means in effect that the "x" in my equation didn't do what I thought it would. On top of that, there was a drop in private savings, which is one of those things that happens now and then. The point is that the deviation of British growth from what a standard Keynesian model would have predicted, while real, wasn't out of line with the normal range of variation-due-to-stuff-happening; nothing there warranted a major revision of the framework.
So you will be wrong sometimes, and need to do your best to figure out why. What you should never ever do, of course, is make excuses, or pretend that you didn't say what you said.