Wednesday, April 17, 2013

The Case for Austerity Weakened

The economics blogosphere has been agitated for several days by a just-released paper exposing errors of calculation and judgment in a 2010 NBER working paper by Ken Rogoff and Carmen Reinhart purporting to show that countries carrying a debt/GDP ratio above 90% suffer sharply lower growth. That paper has often been cited by European officials as justification for the austerity policy imposed on the countries of the Eurozone. Matt Yglesias has a good summary of the controversy here. Paul Krugman weighs in here, here, and here.

It is no doubt too much to hope that this debate will alter the settled views of policymakers, but with anti-austerity sentiment on the rise everywhere in Europe, it can't hurt.


FrédéricLN said...

A very important counter-analysis indeed.

Speaking for myself, I can say that I always warn my "students" (in lifelong learning) about the difficulty of fitting models to data as parsimonious as macroeconomic data can be: not many countries with meaningful data, high cross-country correlations, high cross-year correlations, country weights that are almost arbitrary (1 country = 1, or 1 $ of GDP = 1? both make sense), many effects that are delayed, with time lags that are difficult to accommodate… Even the national accounts are heterogeneous by nature, if not falsified…
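The point about arbitrary country weights can be made concrete with a toy calculation. The countries and figures below are invented purely for illustration; the only claim is that the two equally defensible weighting conventions (one country = one observation, or one dollar of GDP = one observation) give different "average" growth rates:

```python
# Purely hypothetical figures: the weighting convention alone
# changes the cross-country "average" growth rate.

countries = [
    # (name, GDP in $bn, growth in %) -- invented for illustration
    ("Big",   15000, 1.0),
    ("Mid",    2000, 2.5),
    ("Small",   200, 4.0),
]

# Convention 1: one country = one observation.
unweighted = sum(g for _, _, g in countries) / len(countries)

# Convention 2: one dollar of GDP = one observation.
total_gdp = sum(gdp for _, gdp, _ in countries)
gdp_weighted = sum(gdp * g for _, gdp, g in countries) / total_gdp

print(round(unweighted, 2), round(gdp_weighted, 2))  # 2.5 1.21
```

With only a handful of countries and no principled reason to prefer one convention, the headline number is partly an artifact of the analyst's choice.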

If I remember correctly, there was already a famous article (IMF? WB? around 2000?) on the conditions for good public management, whose conclusions depended heavily on the subpanel considered (including or not the largest countries, the smallest ones…).

But I can't remember a case like this one, involving wrong Excel formulae!

My usual way of thinking is: don't rely on multi-country, multi-year models; common sense is much more powerful than they can be.

(This is not an attack on models. I'm a big fan of individual-level micro-economic and micro-social models.)

Yet… I endorsed Reinhart/Rogoff's conclusions at least once, in a comment on an article by Jean Peyrelevade.

And there is a (more) interesting reply by Peyrelevade, when another commenter referred to a "pro-debt report":

'Societe Generale have released a new report highlighting the impact of the expiration of the Bush tax cuts and the end of fiscal stimulus on U.S. GDP in 2011. Quite clearly, it is not going to be good.
Their projection is for a 0.9% fall in GDP in 2011.'

Peyrelevade answered: "I am wary of macroeconomic models built by banks. They generally do not have the means to build complete models that integrate the financial dimensions of the problems and ensure the exhaustive closure of the interdependencies. All the results lie, of course, as usual, in the assumptions. (…) In a model where the financial dimension is not fully integrated (which, once again, is the case most of the time), all spending increases the level of activity, and a deficit is therefore preferable to balance…
It is not only its article that one should ask Société Générale for, but its equations themselves."

The bottom line would be: at least, the advantage of an academic publication (over a home-made report by a bank) is the possibility of later review and contradiction.

More science would mean: Publish the source code! Publish the data online!

bernard said...

FredericLN's comment is interesting in many ways.

As a professional macro model builder, I would introduce some caveats to his analysis though.

First, the Excel point is essential. Excel was never meant for the kinds of uses that some economists put it to, i.e. complicated modelling.

It is unsurprising that Rogoff's article relied on faulty Excel calculations, for two reasons:
- first, it appears that no one else was able to replicate their result, which is arguably a necessary property of sound science;
- second, their calculations were too complicated to be safely performed in Excel (in Excel, the formulas you input become hidden, with only their result showing, unless you explicitly look at them). This is the reason why quantitative economists use specialised software, which may be a bit more difficult to handle than Excel but is also much safer, as it is meant to perform the sort of analysis that economists do (you can dig a hole in the dirt using a hammer, but it is better to use a shovel!).

Interestingly, a great many developing-country governments make projections using financial programming tools developed by the IMF and built in Excel. These are extremely complex, both in terms of underlying relationships and in terms of ease of use, and the coefficients used in their underlying equations have rarely been validated on actual data. It is hardly any wonder that forecasts and outcomes are so rarely related, given the number of mistakes made and the difficulty of using these models in Excel.

Contrary to FredericLN, I would argue that macro models are extremely useful when they are soundly built, and that relying on intuitions rather than spelt-out models is dangerously close to relying on preconceived notions. When someone does so, I tend to answer as my British friends do: where is the scientific evidence?

So Rogoff and Reinhart chose to use the wrong software and made serious mistakes, possibly because they knew what result they wanted to obtain. Had their calculations been easily available online, someone would indeed have spotted the mistakes early on, given the amount of exposure their article got. One example among many of a world-class economist - I don't subscribe to his political views, but he is still excellent - who publishes his data is Shleifer (is he at Harvard or MIT? I can't remember now).

Last, a point Krugman already made. Politicians chose to take the R-R paper as gospel because it said what they wanted to hear. Now that it has been thoroughly debunked, it would be a surprise if politicians turned around on their earlier pronouncements. Asking a politician to do so is surely harder than having an economist admit he was wrong, and R-R are not very likely to do so.

Unknown said...

I agree with you that Excel is a problematic tool, but complexity was not the problem with the R&R analysis. Rather, they simplified too much, using nothing more complicated than averages of buckets of countries. The buckets were arbitrarily set at debt/GDP ratios of 30, 60, and 90 percent, with no theoretical justification. Tiny New Zealand was treated on a par with much larger Great Britain, etc. Excel had nothing to do with this. The Excel error omitted 5 countries from their averages, which would have shifted their results, but it was the concept of averaging highly disparate countries over many incomparable years that was the root problem, as Frédéric points out. Excel is perfectly adequate for computing averages. I myself use R for economic work, which I much prefer.
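A rough sketch of the bucket-averaging approach described above, with invented country names and growth figures, shows both how simple the method is and how dropping a few rows from a bucket (as the spreadsheet error did) can shift its average:

```python
# Hedged sketch with hypothetical numbers: unweighted mean growth per
# arbitrary debt/GDP bucket, and the effect of omitting some countries.

def bucket(debt_ratio):
    """Assign a country to one of the arbitrary 30/60/90 buckets."""
    if debt_ratio < 30:
        return "<30%"
    elif debt_ratio < 60:
        return "30-60%"
    elif debt_ratio < 90:
        return "60-90%"
    return ">90%"

def bucket_means(rows):
    """Unweighted mean growth per bucket (one country = one observation)."""
    sums, counts = {}, {}
    for country, debt, growth in rows:
        b = bucket(debt)
        sums[b] = sums.get(b, 0.0) + growth
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

# Five hypothetical high-debt countries (figures are illustrative only).
data = [
    ("A", 95, -0.5), ("B", 100, 0.2), ("C", 110, 2.6),
    ("D", 92, 2.8), ("E", 105, 2.4),
]

full = bucket_means(data)[">90%"]       # all five countries
trimmed = bucket_means(data[:3])[">90%"]  # two countries dropped
print(round(full, 2), round(trimmed, 2))  # 1.5 0.77
```

With so few observations per bucket, a couple of omitted rows moves the bucket average by a large fraction of a point, which is why the spreadsheet error mattered even though the method itself is trivial.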

FrédéricLN said...

I agree with bernard on all points ;-)

And/but I would add that business modeling in Excel is even more widespread than macroeconomic modeling, with exactly the same technical and scientific drawbacks, and the same potentially disastrous consequences.

Moreover, spreadsheets *come from* macro-modeling (input-output tables), as far as I remember my computing history.

Just to say that finding a better tool (as flexible, and more readable for reviewers) would be a great accomplishment.

FrédéricLN said...

(Hello, Art!)

Bernard and you are the true professionals. I haven't even been able to get used to R's design yet. But I understand I will have to!

So far, I do a lot of data analysis with xlStat (a commercial package). It gives good readability on the model, as SPSS or SAS would, but cuts the link with the data, as SPSS or SAS also do.

I love Excel for the dynamic link between data and results (so I use VBA only when there is absolutely no alternative), but the readability of the model is poor. Having both together would be great.

bernard said...

Personally, I love eviews, but I suppose love is in the eyes of the beholder...

Mitch Guthman said...

I don't think these "errors" resulted from some kind of Excel mistake or from examples that just happened to be omitted by accident. That's too great a coincidence.

Also, don't forget that the more significant errors and omissions were brought to the attention of Rogoff and Reinhart almost immediately, yet they seem not to have checked their work in light of those criticisms. Maybe they didn't feel the need to reexamine anything because they had actually examined the results with those countries included and excluded, and made deliberate choices about what would be in and what would be out? That would be my guess, and all the circumstantial evidence supports it.

I think an institution that just suspended a bunch of undergraduates for cheating on exams needs to do some soul-searching about what should happen when two of its professors are caught cooking their results.

FrédéricLN said...

@ bernard: I heard about eViews only recently; I think I should try it. You can see I'm just an amateur!

@Mitch Guthman: from what little I know of such teams and organizations (and having experienced perhaps similar situations in marketing/HR statistics), I would agree with some "conditional randomness" hypothesis: they don't cheat, but when a set of hypotheses/equations provides the nice-and-useful conclusions, much less energy is invested in checking or "stressing" the data and models…

So, in a context where *the data basis is by its very nature insufficient to draw clear-cut conclusions* (my first comment), this mechanism allows each economist / marketing consultant / HR consultant… to draw the "facts-based" conclusions that best suit the theories he/she teaches.

Oh, I even read a recent, nicely printed, full-color report signed by two business-school teachers whose names I won't give: the conclusions were obviously contrary to the very facts and figures the report presented. Let's say, that's France and its business schools :-) and at least the reader, if a statistician, could work out the consistent conclusions by him/herself!

bernard said...

The Excel error was essential. Simplistic methodology gets criticised by specialists. Specialists don't matter, as the past 5 years have amply demonstrated again. See today's Krugman: Excel matters. Though I suspect that, maybe just this once, Krugman is being over-optimistic ...

bernard said...


As a matter of fact, I have donated - well, the EU has donated through me - over 250 copies of Eviews to various African governments these past few years, and they are very happy with it. I chose it because it easily does almost anything economists need to do, except perhaps the most arcane econometrics - where I suspect R is useful, though I don't really know it - which only gets done in really high-level pure research. So, yes, you might want to try it (SAS feels unmanageable compared to it, and SPSS is really good for pure statistics, not their use and presentation). And the Eviews guys never even offered me a coffee for the business I gave them. Stingy bastards!

Unknown said...

Bernard, Point taken. I agree with Krugman about the importance of the embarrassment associated with the coding error.

FrédéricLN said...

I also take the point about the high honesty level of the people at eViews! And about the software itself.
