Mr. Phillips and the Capital-Labor Tradeoff — Evidence From Germany

Jens Weidmann, the head of Germany’s Bundesbank, thinks the latest ECB stimulus package is “reckless.” At its current pace of QE, the ECB is on track to own close to 16% of the Eurozone government bond market by early 2017 when the program ends. (This number peaked at 17% in the US and is close to 30% in Japan.) Weidmann presumably thinks monetary easing of this size is going to end in destabilizing inflation. In the classical model, destabilizing inflation comes about via the Phillips curve, when unemployment is pushed so far down that it generates a wage-price spiral causing inflation expectations to become unanchored to the upside.
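
For reference, the textbook expectations-augmented Phillips curve behind this story (my rendering, not Weidmann’s) is

$$\pi_t = \pi_t^e - \alpha\,(u_t - u^*) + \varepsilon_t$$

where $\pi_t$ is inflation, $\pi_t^e$ is expected inflation, $u_t$ is unemployment, $u^*$ is the natural rate and $\varepsilon_t$ is a supply shock term. The wage-price spiral is the case where $u_t$ sits below $u^*$ long enough that $\pi_t^e$ itself starts ratcheting upward.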

For the Eurozone as a whole, this sort of wage-price spiral is nowhere near imminent, as unemployment remains cyclically very high and nominal wage growth very low:


However, from Weidmann’s perspective in Germany, it does indeed seem the economy is running very hot, with unemployment at its lowest in more than 20 years and nominal wage growth accelerating:


Mr. Phillips says this situation in Germany should be causing consumer prices to spiral upwards, as firms move to protect their profit margins in the face of escalating labor costs. Except that Mr. Phillips has not shown his face to the German people yet; core inflation in Germany continues to move sideways at a low level:


What is happening, on the other hand, is that labor’s share of income in Germany is rising. Contrast that with a declining labor share in a country like Spain, which is still very cyclically depressed:


Is the Phillips curve “alive and well,” as Robert Gordon likes to say? Or has the Phillips curve always been a façade hiding an inherent tradeoff between capital and labor late in the business cycle? Mr. Phillips worshippers like to point to the 1970s as a period when the curve was alive and well, but were wage-price dynamics during that period independent from the two major oil shocks as well as the collapse of the Bretton Woods fixed exchange rate system? Is there an example anywhere in the history of macro where supply shocks don’t add considerable noise to the relationship Mr. Phillips lives by? Here in the US, Janet Yellen and her colleagues on the FOMC have begun an interest rate hiking campaign that rests almost entirely on Mr. Phillips’s existence. Are they right in doing so, especially in an election year when inequality is a major topic of debate?

So many questions, yet a great deal of silence from proponents of mainstream macro…

Fun With John Mauldin

John Mauldin has a fun article in Business Insider. His basic points are:

  1. The Fed has been distorting financial markets by keeping interest rates at 0% for too long
  2. The distortion has led to excessive risk taking in many asset classes, notably stocks
  3. It’s all going to end very badly sometime in the near future when the Fed-induced asset bubble implodes
    • Mauldin sees the fed funds rate back at 0% before it gets as high as 2%

Doom-and-gloom sensationalism always sells on Wall Street, which is why Mauldin has so many readers. I don’t normally respond to these types of articles, but I will in this case because clearing up Mauldin’s confusion may shed light on various topics of philosophical interest.

For example, Mauldin begins by noting:

A fixed-income market in which the only fixed element is an interest rate fixed at zero is not something that would arise naturally. It exists only because someone twisted nature into a new shape … And as we all know, it’s not nice to fool with Mother Nature. She always takes her revenge.

The problem here is that markets are not natural. They are an imaginary construct that we humans create in order to allow for wide-scale cooperation amongst our species, which in turn gives us our dominance over the natural world. Throughout history, such constructs were predominantly religious ones, whereas market-based ideologies dominate the world today. But the two are no different in form. They both rely on some assumed higher order – you must till this land until you nearly die of thirst, because God says so; you must pay the price of a collapsed asset bubble for interfering with the market’s pricing of interest rates, because the Laws of Supply and Demand say so – and it is this higher order that makes people scared and gets them to cooperate.

Now, the even deeper point here is that if enough people believe in something and act because of it, the thing can become sort of real. If Mauldin convinces enough people that risk assets are overpriced because of the Fed setting interest rates too low for too long, this may get people to think there really is an asset bubble out there. In turn, they may all sell when a shift in sentiment occurs.

However, it seems to me that the asset bubbles that truly do damage are the ones that fuel imbalances in the real economy in some very noticeable way. I don’t see this as being the case today. When stocks turned down in 2008 after Lehman collapsed, the bear market kept extending because the economic data, particularly in the housing sector, kept on deteriorating. Today if there is some shock that induces a shift in market sentiment, stocks may sell off, but as long as the economic data holds up, they should bounce back to where they were shortly after the sell-off. In fact, back in August, this exact turn of events occurred. Stocks sold off when the Chinese authorities surprised the market by devaluing the yuan, but then when a few solid jobs reports came in for October and November, stocks bounced back to where they were prior to the devaluation. For an extended bear market, you really need a recession, which Mauldin sees in the near future:

We have already had the third longest “recovery” (weak as it is) following a recession since World War II … To think we can go without a recession for another full two years, through 2017, strains credibility.

The problem here is that recessions don’t come about deterministically when an expansion has passed a certain length. Recessions come from investment imbalances that need to be corrected. In the 2000s, we had a massive investment imbalance in housing. Where is the investment imbalance today? Note that hiring counts as a form of investment. Do we think the employment-population ratio is reaching imbalance territory?


Moving on, I think Mauldin’s history on interest rates is a bit confused:

There was a world back in the day where 5% was the minimum return you’d expect from any manager worth his salt. And any manager who only delivered 3% would have to polish his résumé … I hear you asking, what is this 5% return you speak of? Believe it or not, Treasury bills really yielded 5% as recently as 2006—right before the Fed began the easing cycle that ended this week.

A very long-term view of interest rates suggests the 5% T-bill yield Mauldin speaks of is the exception rather than the norm. Here is a chart of 10-year yields back to the early 1800s, from a recent NYT article:


It’s very unfortunate that we wrote economic textbooks around what occurred in the 1970s. That is a period when inflation indeed became unhinged, but it is the only period of such unhinging when we look at the data over the last 200 years. Inflation was low and stable prior to the 1970s because of the gold standard and similar international setups like Bretton Woods. Today inflation is low and stable because of the inflation-targeting regimes adopted by the world’s major central banks. Provided the world’s central banks don’t suddenly drop their inflation targets, I don’t see why we should expect a 5% T-bill yield anytime soon, or for it to be something “natural” that markets produce.

Now, let’s get to the crux of Mauldin’s excessive-risk-taking story:

A risk-free return is pure fantasy now. We just finished seven years in which achieving returns with a + sign required taking on risk … We’ve been able to choose our poison: we could take credit risk, inflation risk, equity risk, hedge fund risk, hurricane risk (seriously, you can)—any risk we liked … The one thing we weren’t allowed to have was daily liquidity with returns above zero as a certainty. It was the Age of the Guessing Game.

The first point to make is that central bankers should be incredibly happy with what Mauldin is saying here. After all, the point of QE was to get market participants to take on more risk. The private sector accumulated too much debt and went through a deleveraging campaign beginning in 2008. This is what slowed the economy. Deleveraging means not taking on enough risk. The Fed drove risk-free rates to 0% to get the private sector to take on more risk, in the view that more risk taking means stronger economic growth. By Mauldin’s own account, the Fed has been successful, at least in getting market participants to take on more risk.

Now, as I mentioned in my previous post, there is a real problem with QE creating wealth inequality. This may lead to social instability and political change, which could weaken economic growth. On the other hand, QE undeniably boosted economy-wide investment to some extent. So there is a battle between the absolute living standard gain people are receiving from QE and the relative living standard decline it produces for the masses. Which of these forces wins out in the end? I don’t know, the script is still being written. My first take is that people are more responsive to absolute gains. However, the growing popularity of Bernie may be telling us something important. The point is that this is the most important conversation we should be having about QE.

Finally, Mauldin makes a good point when he says that:

The world has never seen a full-blown recession with interest rates this close to 0% in both the US and Europe.

I don’t think a world recession is likely any time soon, but the point is nonetheless well taken. However, Mauldin is not going to like the answer which is already being devised by some economists: namely, to remove the zero lower bound by getting the masses to move to an electronic taxable currency. Economists like Willem Buiter and Miles Kimball are behind this campaign.

What Mauldin fails to understand is that if indeed we end up back at the zero lower bound in the near future, and if indeed central bankers remove the lower bound by taxing currency, it will be because saving tendencies in the private sector demand it. Interest rates are set by the balance between the supply and demand for credit. If there is too much supply of savings relative to the demand for investment, interest rates fall. Practically speaking, if there is too much supply of savings relative to the demand for investment, the economy weakens and the Fed cuts interest rates, but the outcome is the same. Seen in this way, the Fed doesn’t really set interest rates however it wants; the actions of the private sector force the Fed to move interest rates in its attempt to equilibrate the supply and demand for credit at a level consistent with full employment in the labor market.
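
To make the mechanism concrete, here is a minimal sketch of that loanable-funds logic, with invented linear schedules for saving and investment (all parameters are mine, purely for illustration):

```python
# Loanable-funds sketch: saving supplied rises with the interest rate,
# investment demanded falls with it. The market-clearing rate is where
# the two schedules cross; a rise in desired saving pushes it down.
# All parameters below are invented for illustration.

def equilibrium_rate(s0: float, s1: float, i0: float, i1: float) -> float:
    """Solve s0 + s1*r = i0 - i1*r for the rate r (in percent)."""
    return (i0 - s0) / (s1 + i1)

baseline = equilibrium_rate(s0=2.0, s1=0.5, i0=4.0, i1=0.5)  # 2.0%
glut = equilibrium_rate(s0=3.5, s1=0.5, i0=4.0, i1=0.5)      # 0.5%
print(f"rate falls from {baseline:.1f}% to {glut:.1f}% after the saving shift")
```

On this view, when the Fed cuts rates during a deleveraging, it is chasing the crossing point downward, not choosing it.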

What’s new this time around is that the Fed is employing a data-dependent model, where it will set interest rates based on how the economy is evolving in real time – as opposed to a modeled forecast of the future. I agree with Mauldin that the economy probably won’t be strong enough to warrant 100bps of rate hikes in 2016. However, I don’t see a shock occurring in the near future which gets the Fed to cut rates back to 0%. If anything there may be a few stop-and-go periods, but without the collapse of an investment imbalance, it’s hard to see how the Fed doesn’t grind rates intermittently higher over the next 2-3 years.

QE to Keep the Masses Quiet – At Least in the Short Run

I would argue very strongly that quantitative easing (QE) increased wealth inequality. The easiest way to think of the mechanism is to see QE as driving down the market’s collective estimate of the discount rate, causing discount-rate-sensitive assets (equities and housing) to rise in price. Rising wealth inequality is the result of this QE-induced asset inflation.
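
A stylized way to see that mechanism: price an asset as a perpetuity and watch what a lower discount rate does to it (the cash flow and rates below are made up):

```python
# Discount-rate channel, stylized: an asset paying a level cash flow
# forever is worth CF / r, so a QE-induced fall in the discount rate
# mechanically inflates its price. Numbers are illustrative only.

def perpetuity_price(cash_flow: float, rate: float) -> float:
    """Present value of a level perpetual cash flow discounted at `rate`."""
    return cash_flow / rate

before = perpetuity_price(cash_flow=5.0, rate=0.05)  # 100.0
after = perpetuity_price(cash_flow=5.0, rate=0.04)   # 125.0
print(f"a 1pp drop in the discount rate lifts the price {100 * (after / before - 1):.0f}%")
```

Since equities and housing are held disproportionately by the wealthy, that repricing is where the inequality effect comes from.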

At the same time, QE clearly boosted growth and created jobs. Of course the capex recovery was not as strong as we would have hoped coming out of a severe recession like the one the 2008 financial crisis spawned, but, to speculate on the counterfactual, I find it hard to believe that capital formation didn’t at least benefit somewhat from QE. Sure, we can talk about how much of the money printing went toward share buybacks and dividend increases rather than capex, which is probably a fair point. But if you look at countries where QE was not as aggressive (e.g., in Europe), capex trends have been much weaker. And as Brad DeLong notes, once you take housing out of the picture, capex trends in the US don’t look especially weak.

So on the one hand, we have QE increasing relative inequality in the economy. On the other, QE is lifting the absolute living standards of the masses. Which of these forces wins out? Will the QE-induced rise in inequality lead to mass protests against the economic system even though the masses’ living standards are now rising in an absolute sense?

Well, there are descriptive and normative responses to these questions. On the descriptive side, while it is still too early to tell, the evidence we have so far suggests to me that raising living standards in an absolute sense matters more to the psyche of the masses than changes in relative inequality. This goes all the way back to Keynes, who said during the Great Depression that if you want to stop the protests and the general rising preference for communism – i.e., if you want to save capitalism – the best way is to get people back to work. Work will occupy their time, and as long as they see their inflation-adjusted incomes rising relative to their own past, or their parents’ past, that will be enough to keep them silent. I don’t believe Keynes was asked if he would support policies that raise absolute living standards while at the same time increasing relative inequality, but if he had been, I suspect he would have said yes (at least as second-best policies; more on this shortly).

You can see the evidence in two ways. The protests associated with the Occupy movement clearly faded as the US economy picked up steam. And, again while it’s still too early to tell, it seems like the populist storm is fading in Europe now that the ECB has stepped up in a big way to close output gaps and raise inflation expectations via the new QE program begun in early 2015. Of course the refugee/immigration crisis in Europe is creating political tensions all across the continent, but these tensions are, I would argue, independent of the income-generating capacity of the economic system. Of course we can’t run controlled experiments in macro, so that’s about as much as I can say here.

Thus it seems to me that the masses are willing to tolerate rising relative inequality so long as they are seeing their living standards rising in an absolute sense. So are we done here? Can we just do QE forever and call it a day?

Not quite, because for one thing there might be short-run vs. long-run differences. In the short run it seems like just getting people back to work is the best way to keep them quiet. However, you could imagine a situation where rising relative inequality swamps the absolute in the long run. Or at least I can – this is where I’m going to get a bit normative.

Over the long run, increases in relative inequality will lead to a two-tiered society. The rich will have their gated communities and their own forms of transportation; the poor will be pushed to the outskirts of our cities, relying on a public transport system that is falling apart (because there is not enough money in the public coffers to maintain the system (because rising relative inequality leads to slashed tax rates (because of the peculiar interplay between wealth and political power in a capitalistic democracy))). This can’t be good over the long run. Democracy doesn’t have a chance of working without a common national identity, which can only be formed if economic classes aren’t too far apart.[1]

Of course wealth inequality was rising long before QE began. Which is kind of a crucial point. QE was always a second-best policy to fiscal stimulus, at least in the eyes of us (post) Keynesians. On the back of a private-sector debt-fueled investment bubble, we argued that the public sector should step up and spend more as the private sector increased its propensity to save. We argued that debt-GDP ratio limits were misleading, which proved to be correct – the world didn’t end in nuclear Armageddon when the US debt-GDP ratio crossed 90%. We argued that we could kill two birds (jumpstarting nominal income growth and repairing a dilapidated public infrastructure) with one stone (fiscal stimulus). But our arguments were not politically palatable. Perhaps because of the multi-decade trend of rising wealth inequality? Perhaps. Importantly, we reluctantly embraced QE without more fiscal stimulus as being superior to a policy of no QE and no more fiscal stimulus. But that doesn’t mean we are celebrating the wealth inequality that QE has created.

Yet, as describers of the economy, even though we (post) Keynesians take Keynes very seriously, perhaps what we’ve learned over the past five years is that the short run is even more important than we thought. After all, the masses are willing to tolerate rising wealth inequality in the short run so long as they are finding jobs and seeing their incomes rise in an absolute sense. We can speculate like I did above on the long run, but as Keynes famously said about the long run…

[1] Martha Nussbaum calls this common national identity the “glue” that keeps a political democracy functioning.

Large-Scale Emigration: Good or Bad?

In a recent blognote, Paul Krugman described the similarities and differences between the debt crises facing Greece and Puerto Rico. At one point Krugman notes that the decline in output per person in Puerto Rico has not been as severe as that in Greece, and he attributes this to large-scale emigration, which:

is actually supposed to happen when changing economic winds cause a U.S. region to lose competitive advantage.

It is important to point out that what Krugman is saying here is pretty crazy, yet it is the standard way economists like him think about the process of economic adjustment. They think it is entirely fine to view humans as widgets that can and should be moved around geographically to balance supply and demand. If an economic shock occurs in Puerto Rico and the demand for labor there declines precipitously, then large-scale emigration is to be celebrated, in the same way we celebrate bringing electric generators to a devastated region in the aftermath of a hurricane.

What is lost is the fact that Puerto Ricans have families and a local culture in Puerto Rico. When Puerto Ricans leave Puerto Rico in search of better economic opportunities on the mainland, they are effectively placing their economic well being ahead of their social and cultural well being. This is what Paul Krugman is celebrating: an economic system that forces people to place their economic well being ahead of their social and cultural well being.

Which is fine. The world is not perfect. Maybe it’s asking too much for a system that maximizes both economic well being and social and cultural well being. But we should at least be honest about these tradeoffs. And not strictly for normative reasons: sometimes these tradeoffs end up influencing the out-of-sample prediction process – which is, after all, the most important thing we should be worried about if we are to call ourselves “scientists.”

For an example of these tradeoffs influencing the prediction process, take a look at Greece. As Anil Kashyap notes, before Syriza was voted into power in January, the Greek economy was actually looking okay. Sure, the outcome of following the IMF’s structural reform program for nearly 5 years wrecked the economy and inflicted an enormous amount of suffering on the people of Greece. But the economy in 2014 finally began to turn around, and would have likely grown in 2015 had Syriza never been elected. The economy will now likely contract in 2015, but it is reasonable to think that had Syriza never been voted into power, the Greek economy would have continued to grow, the debts may have been worked off very slowly, and Greece may have remained in the euro. Now the most likely case is Grexit.

You can respond to this in two ways. You can get angry at Syriza, seeing its leaders as childish negotiators and blaming them for pushing a growing economy back into financial crisis. Or you can be a more rigorous social scientist. Being a more rigorous social scientist means understanding the systemic conditions that led to Syriza’s uprising.

Because the individual countries in the euro area do not have their own exchange rates, they are forced to adjust in draconian ways whenever an asymmetric shock hits. These draconian adjustments involve either cuts in nominal wages – provided the central bank is not willing to raise the continent-wide inflation rate – or, you guessed it, large-scale emigration. The irony is that the latter option was available to the Greek people. Many could have left over the past 5 years in search of better economic opportunities in Germany or the UK. In fact the European Community was set up to encourage large-scale emigration of the sort Paul Krugman celebrates – e.g., eliminating country-specific visa requirements, encouraging common spoken languages, etc.

In the case of Greece, however, the people opted for placing their social and cultural well being ahead of their economic well being. And can you blame them? It’s not exactly easy to pack your bags and move on a dime to a new place with a different culture that you are not used to.

This is essentially the sentiment that voted Syriza into power: it was a protest against an irrational economic system which treats people like they are widgets, to be tossed around in a mechanical production process. I’ve mentioned this sentiment before in the context of Karl Polanyi’s famous book, The Great Transformation. The key point is that mainstream economics celebrates the commoditization of labor, all the while standing baffled when such commoditization leads to political uprisings which threaten to undo the very commoditization that provoked them.[1] As Polanyi famously argued, it’s actually impossible to get to a free market utopia where labor is fully commoditized: every time we take one step forward, politics spontaneously takes us two steps backward.

Under these conditions, it’s not surprising that the creditors of Greece have acted so as to influence local politics in Greece. Fortunately the Greek people voted today in support of democracy. But the outcome of today’s referendum does not remove the inherent contradiction between a fully functioning market economy and democracy. This is a contradiction that has always existed, and will always exist, whether you like it or not.

[1] Here I am not speaking to Krugman; he has been more aware than most of the changing political tides in Europe.

Hipster Macro

Now nearly seven years after the global financial crisis of 2008, the macroeconomics community continues to fret about sluggish economic growth. This “secular stagnation,” we’re told, is costing us trillions and trillions of dollars’ worth of lost economic output, as evidenced by still-large output gaps across much of the advanced world:

[Chart: advanced-economy output gaps, % of potential GDP; source: IMF]
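
(For reference, the output gap plotted above is conventionally defined as

$$\text{gap}_t = 100 \times \frac{Y_t - Y_t^*}{Y_t^*}$$

where $Y_t$ is actual real GDP and $Y_t^*$ is someone’s estimate of potential output. Keep an eye on that estimate; more on it below.)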

We’re then told that the welfare of the masses would be higher if we could find a way to close these output gaps by stimulating demand. The proposed solutions run the gamut: getting the fiscal authorities to spend more money; having our central banks buy more bonds (quantitative easing); taxing currency; extinguishing the high levels of debt that still pervade the household and corporate sectors; and clarifying regulations/simplifying taxes in order to reduce the political uncertainty that may be holding back business investment.

The sleight of hand of equating closed output gaps from more effective demand with higher welfare for the masses should worry the attentive readers of Econolosophy. For such a leap, which may very well be justified, would need to be defended on theoretical, empirical and contextual grounds. I will not do so here, but I want to entertain a fun hypothetical, as a counterexample and some food for thought.

Imagine a young person who was planning in 2008 to have a much higher material level of consumption today. Say this person hoped to have a new wardrobe with flashy suits. Then the crisis hits and this person’s income trajectory is drastically changed. He/she will no longer be able to afford those flashy suits. But with the hit to earnings potential comes a whole new “culture of frugality.” And here I mean frugality as a hip new thing, something bestowing social status. Luckily for our young person, he/she resides in the center of this new culture: Brooklyn, New York. And as such, he/she benefits greatly from the 2008 financial crisis: the thrift-shop rags this person wears become a form of social capital, the factory rave parties in Bushwick are outta-this-world fun, everybody is hooking up, life is great.

Can the culturally oblivious economist really claim that the crisis-turned hipster would have been better off if his/her income potential hadn’t shifted – if he/she were wearing those flashy suits today, working in FiDi, or TriBeCa, or, god forbid, Midtown? I don’t think the economist can, because what we are dealing with here is a shift in preferences that is endogenous to the state of the economy. When the economy goes in one direction, people adjust to it, make do with it, find happiness within it. To claim that the happiness found when the economy goes in one direction is inferior to the happiness that would have been found had the economy gone in another requires a great many heroic assumptions, like the belief that people don’t fundamentally change who they are over time, or the more outrageous view that the only reality that exists is a purely materialistic one – suggesting of course that vice (via the procurement of material things) rather than virtue (being content with one’s inner self, on the basis of adhering to one’s moral code) is the only path to happiness. The Greeks certainly knew better than to make such dubious/defeatist assumptions about human nature. Keynes himself knew better.

The astute reader will now say: Your one-off hipster example does not do justice to the swaths of people who are living in dire material conditions because of the financial crisis, people who can barely afford to put food on the table for themselves and their families. Would not closing output gaps with demand stimulus help these people find happiness? Is not some minimum level of subsistence needed before people can even begin to wrap their heads around this new culture of frugality? How does one even get to Bushwick if he/she can’t afford to pay for public transit, however terrible and smelly and jail-like the L line is?

Great questions, astute reader! To answer them, however, we needn’t do rocket science. In fact, they have been answered time and again by the great thinkers of our past, again most notably by Keynes but also by famed leisurists such as Marx/Tolstoy/Seneca the Younger.

The first point to make is that with a better distribution of economic outcomes, we could solve almost all of the problems of subsistence. In other words, we have the technological means to give people enough food and nourishment so that they can achieve “flow” and contemplate bigger questions about meaning and quantum entanglement. That we don’t is a function of who owns the means of production and how they allow the fruits of technological progress to be distributed throughout society. This has nothing to do with whether the macroeconomy is running below its potential productive capacity – the latter, btw, estimated from rather dubious/normative assumptions about, e.g., how many hours a week each available worker should be working – provided that the growth rate is still positive, which it is. As I’ve said time and again, the problem here is in part one of culture, wherein, in the advanced, mostly English-speaking liberal democracies of today, individuals are assumed to be ultimately in control of their fates, so much so that whatever comes their way as a result of the spontaneous order of economic interactions across time and space is amazingly seen as entirely their doing, a function of their unparalleled work ethic and smartness. Until we solve this cultural problem, which to be sure goes all the way back to 16th Century physics and its manifestly “atomized” view of the world, we as a species will not be able to achieve a stable distribution of wealth and material access that is fair to the least well-off members of our society.[1]

If we are among the bourgeois thinkers of the industrial revolution, such as Mill (drawing from Locke), who think that there is more to happiness than merely subsistence leading to “flow” – namely, that people need a job in order to feel a sense of social worth and personal fulfillment – then we needn’t blindly push for closing output gaps from the bottom up with more demand stimulus. Indeed, we could take the alternative route of reducing labor supply by encouraging those currently working to work fewer hours so that the available work to be done can be spread around more equitably. The Germans are onto something in this regard with their heavy emphasis on work sharing as a means to combat the inherent volatility of the economic cycle. Crucially, we must always remember that there are normative assumptions made whenever the CBO or the Fed or the IMF tells us that actual output is below potential output. How are you guys, at these institutions of enormous undemocratic power, calculating the latter? What assumptions are you making about how many hours the typical workweek comprises? Are you telling us that people want to work these crazy hours, or do your impressive calculations performatively feed back on the culture of the people, telling them that they should work these crazy hours?

And finally, we should want to start emphasizing supply reduction because the future existence of our planet depends heavily on it. One can tell all sorts of sensationalist stories and anecdotes about how, e.g., plummeting costs for solar energy are leading to increased efficiencies, suggesting we are well on our way to a world where economic growth is decoupled from greenhouse-gas (GHG) emissions. But as every good scientist knows, anecdotes tell us little about the state of the world. Those who look at the data on a global level continue to tell us that such decoupling is not happening fast enough, and that we urgently need to accelerate the transition to an economic system with a drastically smaller carbon footprint.

To date, the economics community has seen the problem of growth being correlated with GHG emissions as a supply problem, relating to the efficiencies of, for example, our infrastructures, our electricity grid and our waste-management systems. To be sure, these supply factors matter greatly, and the future health of our planet relies heavily on getting them right. But as every good (Post-Keynesian) economist knows, demand has a tendency to create its own supply. When we as a society demand, because of our culture, endless material things, we create endless amounts of waste which our planet cannot handle. To translate all of this into “action on the ground,” we need more demand-side movements such as the reducetarian movement, which is trying to make eating less meat a cool and hip thing to do, to complement the various successful supply-side movements, such as the push across various universities in the US to get investors (college endowment funds) to divest out of fossil fuel companies.

The general takeaway from all of this is that the state of macro doesn’t have to be like this – shouldn’t be like this – where op-eds ad nauseam continue to show up in the NYT or the WAPO or Project Syndicate telling us that the single biggest threat to the world is our inability to spend money, and where macro’s “experts” continue to ignore the culture of the masses, which is profoundly wrapped up in and influenced by the very theoretical construct these self-declared positive scientists interpret the world through. To be sure, we at Econolosophy are not against spending money – as mentioned, fixing the supply side to reduce GHG emissions will undoubtedly involve spending more money – but what we are against is spending money in a way that normatively influences people’s lives, causing them to value work and material things over leisure and inner spiritual love – you know, the Kumbaya stuff.

[1] Or stated differently, the great and detrimental misstep in Rawls’ thinking was to assume that we as a society, composed of hundreds of millions of Kantian agents all separate from one another (and special above the animals and plants), could ever reach an emergent social consciousness that gives a damn about the welfare of the least well off.

Abrupt Changes Call for Alarm Bells – Heckscher–Ohlin Edition

Ok, let’s get something straight. The macroeconomy doesn’t tend slowly toward some steady state; it jumps around wildly because of technological change and the political economy.

That’s not to say that many of the key insights about long-run growth from mainstream macro aren’t true. For example, Heckscher–Ohlin tells us that a rich and well-developed country like the United States should have a very capital-intensive manufacturing sector. And the US does. But here’s the thing: the evolution of the US manufacturing sector toward capital-intensive production wasn’t a slow-moving process; it more or less happened overnight due to, I would argue, three key changes.

The first is the dramatic appreciation of the dollar from 1995 to 2002. During this period, the real trade-weighted (broad) dollar index appreciated by roughly 30%. This appreciation occurred during the Asian financial crisis, after which numerous economies in developing Asia found it to be in their best interest to peg their currencies to the dollar at low values.

The second is information technology. Once telecommunications advanced to the point where it became possible for US businessmen to remotely manage manufacturing facilities overseas, labor-intensive production could be shifted out of the US to take advantage of lower labor costs abroad.

The third relates to the two key trade agreements that set international standards for trade in manufacturing: namely, NAFTA in 1994 and China’s admission into the WTO in 2001. These two agreements liberalized the financial sectors in Mexico and China, allowing a flood of foreign investment into these countries for the purposes of building low-cost production facilities and establishing global supply chains.

In the end, we got a situation where US manufacturing companies could shift labor-intensive production very easily out of the United States into countries where workers’ wages were a fraction of those in the US, coupled with a currency that stimulated consumption demand for cheap imported goods.[1] The key point is that we got to this end very quickly – in a span of about two decades. As such, the US manufacturing sector went from employing 18 million workers in the late-1980s to employing fewer than 12 million by 2010. Over the same period, annual gross output in the US manufacturing sector increased by 110%.

[Chart: US manufacturing employment vs. gross output]
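
A quick back-of-the-envelope on those figures (treating “fewer than 12 million” as roughly 12 million, so the numbers are approximate):

```python
# Rough arithmetic on the employment/output figures above. Treating the
# 2010 headcount as roughly 12 million is my approximation.
workers_then, workers_now = 18e6, 12e6  # late-1980s vs. 2010
output_ratio = 2.10                     # gross output rose ~110%

per_worker_ratio = output_ratio / (workers_now / workers_then)
print(f"output per worker is roughly {per_worker_ratio:.2f}x its late-1980s level")
# ~3.15x in about two decades -- the abruptness the stylized model misses
```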

In the stylized model, the shift toward capital-intensive manufacturing production in a wealthy economy would not happen so quickly. Thus it is not surprising that the stylized model downplays the problems of distribution and reallocation. After all, if the shift toward capital-intensive manufacturing production is slow moving, there is ample time for workers formerly employed in the manufacturing sector to gain the skills they need to transition into the high-skilled service sector – or to remain in the manufacturing sector operating the increasingly sophisticated capital that is used to produce. If, on the other hand, the shift toward capital-intensive production happens abruptly, a great many previously secure workers may find themselves unemployed or working in low-skill/low-wage service occupations, such as manning a cash register at Walmart.

Importantly, the burden of abrupt change in manufacturing production is not just on the workers who are displaced by offshoring. The burden is also on the education sector, which needs to properly train workers for high-skilled service occupations — think Big Data analysis roles at Google(nomics). It is much easier to restructure the education sector if the economy is slowly tending to some end state that is visible and understandable.

So it would be wonderful if the stylized model actually described how the world works; we would have far fewer problems today if it did. But it doesn’t. The United States (like many other advanced economies) has a major distribution/reallocation problem in part because its manufacturing sector was altered dramatically in a relatively short time span.[2] This has led to a situation where we now have a glut of workers competing for a limited supply of service jobs at the lower end of the skills distribution, which has put downward pressure on the wages of millions of workers, who, no longer able to borrow as they could in the lax lending environment of the 2000s, are unable to spend to support aggregate consumption demand, in turn leading to the problem of “secular stagnation” that people like Larry Summers lose sleep over.

So what do we do? High-wage labor-intensive manufacturing jobs are not coming back to the US. (When they leave China due to rising labor costs, they will go to Cambodia/Vietnam and then to Africa.) So we need to figure out what to do with the oversupply of workers who are now competing for low-wage jobs in the domestic service sector. I see two policies that would probably be most effective.

The first is to keep the pedal to the metal on monetary policy. If monetary policy is kept looser for longer, enough demand will be generated to employ the excess supply of workers seeking employment in the service sector, which will eventually start to raise their wages. In fact, we are already starting to see this, as evidenced by Walmart’s decision last month to raise the wages of 500k of its workers. Unfortunately, the Fed is gearing up to start tightening monetary policy sometime this year. This will slow the recovery and thereby deny wage gains to millions of workers.

The other thing that can be done to address the problem of secular stagnation caused by the collapse of labor-intensive manufacturing in the US is to simply redistribute money from the winners of globalization/technological change to the losers – such redistribution would be justified under the philosophy of luck-egalitarianism (i.e., the view that people are entitled to what comes their way as a result of effort but not (brute) luck). The winners have a low marginal propensity to spend, whereas that of the losers is higher, meaning that increased redistribution would stimulate aggregate consumption demand. Unfortunately, we live in a culture where the majority of the winners think they won because they’re awesome, hard working and super smart, which makes increased redistribution very difficult to carry out.[3]
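
To see the demand arithmetic, here is a toy first-round calculation with invented propensities to consume:

```python
# Toy redistribution arithmetic (all numbers invented). Transfer $100
# from a winner with a low marginal propensity to consume (MPC) to a
# loser with a high one, and look at the first-round demand effect.
mpc_winner, mpc_loser = 0.10, 0.90
transfer = 100.0

# Winner cuts spending by 100*0.10; loser raises spending by 100*0.90.
net_demand_boost = transfer * (mpc_loser - mpc_winner)
print(f"first-round boost to consumption demand: ${net_demand_boost:.0f}")  # $80
```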

The last thing I will say is that the stylized model is not just wrong; its wrongness may actually be preventing some of the policies we need to address the problems abruptly brought on by the collapse of labor-intensive manufacturing in the US. As mentioned, the stylized model sees the world as a slowly evolving system. In a slowly evolving system, things tend to work themselves out naturally – the education sector will adjust endogenously, people will get the new skills they need, redistribution efforts will automatically step up, etc. In other words, the stylized model says, “Don’t worry, it’s all under control, we would expect the US manufacturing sector to be increasingly capital intensive.” Such a narrative obscures the fact that we have a very real crisis that calls for dramatic policies to address the very abrupt evaporation of high-wage manufacturing jobs in the US. In short, alarm bells should be ringing but they aren’t because mainstream macro tells us that everything is under control.

[1] In addition, consumption demand in the US was also driven by an unsustainable accumulation of household debt up until 2006. Perhaps the US household debt bubble can be seen as a fourth key change allowing for the shift in global manufacturing production.

[2] Of course the US also has a distribution problem because many sectors allow relatively few workers to extract rents from the broader population. The sectors in which such rent seeking is probably most pervasive are financial services, healthcare and law.

[3] Our philosophically untenable “just deserts” culture is reinforced by numerous economic ideologues who misuse theories like marginal productivity theory.

Robotics and Redistribution – Theory-Land Is Nothing to Brag About


Technology is progressing quickly. The worry is that robotics and other artificial intelligence will displace large numbers of workers in the coming decades.

As Dean Baker notes, however, this much talked about technological boom has yet to show up in the productivity data. But that doesn’t mean it is not occurring. For one, our productivity statistics may be poorly suited to pick up advancements in information technology, much of which involves new non-rival services (e.g., a new Instagram account) that can be produced at low or zero marginal cost – Joel Mokyr makes this point in the secular stagnation ebook. In addition, new technologies historically take a while to boost economy-wide productivity, since existing industries must first be restructured to make the best use of them – Barry Eichengreen makes this point in his 2015 AEA essay.

But let’s assume the robotics revolution is for real, that it will displace large numbers of workers whose skills are easily automatable. What is the policy response to this development? Do we say “sorry” to the displaced workers, telling them that they should have acquired skills more relevant for a 21st Century economy?

Well, no. Post-Keynesians often say that “radical uncertainty” characterizes the trajectory of technological change, and I think there’s something to this characterization. If nobody can predict the way technology will progress, then those affected negatively by technological change may be said to have experienced “bad luck.” Correspondingly, those affected positively by technological change may be seen as “lucky.”

One of the truisms of the modern welfare state is that it attempts to redistribute resources away from the “lucky” toward those experiencing “bad luck.” Welfare economics established this truism. It used the idea that the economy is fundamentally a positive-sum game to reason that policy can and should be used to promote changes in the economy that leave some better off without making anyone else worse off. The way policy does this is through redistribution.

For example, policy may be used to remove a trade barrier, such as a tariff on imported steel. If foreign steel is produced and priced more competitively, the removal of such a tariff will result in winners and losers. The winners will be the domestic consumers of steel (companies using steel to build stuff). The losers will be those tied to the domestic steel production industry (entities like the no-longer-existent Bethlehem Steel Corporation). We may hear horror stories about all of the domestic steel workers who lost their jobs because of the removal of the tariff, but those stories need not upset the economist. For the economist knows that because free trade is a positive-sum game, the amount gained by the winners will be greater than the amount lost by the losers. Hence, some of the winnings of the winners can be used to fully compensate the losers for their losses, and presto, the removal of the tariff makes some better off without making anyone else worse off.[1]
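
A toy version of that compensation logic, with invented surplus numbers:

```python
# Kaldor-Hicks compensation test, toy numbers (invented). The tariff's
# removal is "worth it" if the winners' gains exceed the losers' losses,
# so that full compensation still leaves a surplus.
consumer_gain = 120.0  # cheaper steel for domestic steel users
producer_loss = 80.0   # income lost by domestic steel producers/workers

compensation = producer_loss            # make the losers whole
net_gain = consumer_gain - compensation
assert net_gain > 0                     # a post-transfer Pareto improvement
print(f"surplus remaining after full compensation: {net_gain:.0f}")
```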

Such redistribution is typically defended on the grounds of compensating people for falling upon bad luck. After all, the domestic steel worker who lost his job has no say in crafting trade policy; his fate is in the hands of the technocrat (or more recently, the CEO, to promote an agenda of “barge economics,” a term coined by Thomas Palley). And so if we can compensate the steel worker for his bad luck while at the same time ensuring that the economy continues to grow, that’s a way of killing two birds with one stone.

The same logic applies to the robotics revolution. The winners are likely to be the owners of capital, the entrepreneurs starting new robotics companies and the workers whose skills are complementary to robotics. The losers are likely to be those workers whose skills are easily automatable – those for whom robots may substitute in the production process. One difference is that the winners are likely to be a tiny handful of people, whereas the pool of losers may be vast. But that doesn’t undercut the notion that redistribution from the winners to the losers can compensate the losers fully for their losses while continuing to grow the overall economic pie.[2] One nuance, though, is that the necessary redistribution will likely require rather hefty tax increases on the winners.

I, for one, don’t think the necessary redistribution will be done, for it will be contested strongly by the winners. The implication is that many people are likely to be made worse off by robotics. The reason is that there is an 800-pound gorilla in the room: namely, marginal productivity theory.

Marginal productivity theory says that in competitive markets the wage a worker earns will be equal to how productive he or she is. The theory is supposed to be descriptive; it says what workers will earn in the market, not what they should earn. But as with both religion and science, economic theory is not bad per se; it’s how humans use it that creates badness.
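
In symbols, the competitive-market claim is that firms hire labor until the real wage equals labor’s marginal product:

$$w = \frac{\partial F(K, L)}{\partial L}$$

With a Cobb-Douglas production function $F(K, L) = A K^{1-\alpha} L^{\alpha}$, this works out to $w = \alpha\,Y/L$, i.e., a constant labor share of exactly $\alpha$. Again, that is the descriptive textbook claim, not an endorsement of it.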

The reality is that a great deal of economists use marginal productivity theory in a normatively loaded way, so as to suggest that people deserve the income they receive from the market. Usually a discussion of educational attainment enters into the conversation at this point. People deserve the income they receive in the market because it is determined by their education, which each person is presumed to have control over. This type of “just deserts” moral philosophy is most prominently endorsed by walked-out Harvard Econ Chair Greg Mankiw:

The just-deserts perspective focuses on questions [like]: Do the high incomes of the top 1 percent reflect extraordinary productivity, or some type of market failure? … My own reading of the evidence is that most of the very wealthy get that way by making substantial economic contributions … [I]t seems that changes in technology have allowed a small number of highly educated and exceptionally talented individuals to command superstar incomes in ways that were not possible a generation ago … [T]he educational and career opportunities available to children of the top 1 percent are, I believe, not very different from those available to the middle class.

However bad this type of armchair reasoning is, the reality is that Mankiw’s philosophy is now so ingrained in American culture that it’s impossible to ignore. Never mind the fact that education is not under the control of each individual (it is largely determined by family resources); or the fact that education alone is a terrible explanation for why a hedge fund manager, who usually holds a master’s degree, earns on average 26,000x what a school teacher, who also usually holds a master’s degree, earns. The bottom line is that very many successful and powerful people think they deserve the income they earn in the market, and that has huge implications for policy.

For it’s going to make the necessary redistribution on the back of the robotics revolution nearly impossible to carry out. The winners are going to oppose any and all such redistribution, on the basis that it amounts to stealing what is earned through hard work. James Clerk Maxwell may be more responsible for Peter Thiel’s income than Peter Thiel is, but that’s irrelevant for the world we live in, a world made up of men and their egos. So long as people like Peter Thiel know they’re smart and hardworking, we’re sadly going to have a hard time compensating people for bad luck.

That’s not to say that vast numbers of people are going to end up starving in the street because of the robotics revolution. Dietz Vollrath makes the key point that the goal of the wealthy in a capitalist economy is to keep the wages of everybody else at a level that is just above the subsistence level. Doing so ensures that there will still be demand for the goods and services produced by the wealthy, while at the same time keeping profit rates high. But wages just above subsistence is a terrible outcome from the perspective of policy, which is supposed to be used to maximize social welfare. Remember, mainstream economic theory says we can and should do better.

The key point is that the economics profession has evolved to a state where it cannot even implement the policies it recommends, because the culture of the masses won’t allow them to be implemented. I agree with Noah Smith when he says that mainstream economics is actually fairly interventionist – that it repeatedly acknowledges instances where the market breaks down, where government can play a role in improving people’s welfare. But the way economics is practiced, or at least talked about in popular media, makes the discipline analogous to a religion.

The principle of “love thy neighbor” is pretty hard to disagree with in theory. But when the principle is used to coerce people and inflict violence upon them, we may say that the principle is being used in the wrong way. The same goes for marginal productivity theory. The theory may be useful to a first approximation to explain differences in wage income. But when the theory is used as a moral justification for why redistribution from the winners to the losers following a technological change is unjust, we may say that the theory is being used in the wrong way. In both cases, humans are the problem, not religion or economic theory.

The issue at stake is the relation between economic theory and the culture and mindsets of those modeled by the theory. The former routinely feeds back on the latter, yet this crucial feedback is never talked about – especially to young minds who are taught in the classroom that mushy things like culture are outside the domain of positive science and thus not worthy of consideration.

The more reasonable economists who aren’t Greg Mankiw, I would argue, play a crucial enabling role merely by staying silent. The burden is on them to speak up when people like Mankiw describe theories like marginal productivity theory in the wrong way. These more reasonable economists might even do themselves a lot of good by reading someone like John Rawls, who argued, quite convincingly in my opinion, that desert is a terrible foundation on which to place moral weight as regards the income distribution, for the simple reason that desert is impossible to pin down (each person is always going to make up his or her own unique story for why more income is deserved). Here’s Rawls (section 48):

There is a tendency for common sense to suppose that income and wealth, and the good things in life generally, should be distributed according to moral desert … There seems to be no way of defining the requisite criterion [for this] … In determining wages, a competitive economy gives weight to the precept of contribution. But as we have seen, the extent of one’s contribution (estimated by one’s marginal productivity) depends upon supply and demand. Surely a person’s moral worth does not vary according to how many offer similar skills, or happen to want what he can produce. No one supposes that when someone’s abilities are less in demand or have deteriorated (as in the case of singers) his moral deservingness undergoes a similar shift … The distributive shares that result [in a competitive economy] do not correlate with moral worth, since the initial endowment of natural assets and the contingencies of their growth and nurture in early life are arbitrary from a moral point of view … The idea of rewarding desert is impracticable.

So, anyway, my challenge to Noah Smith is to point out the specific cases where all those impressive leftist/interventionist theories within mainstream economics are actually implemented, you know, out there in the real world – where they do what they are intended to do, which is maximize social welfare. If they only exist in theory-land, that’s not something to brag about.

Addendum: I forgot to mention that a definition of “left” is really needed when we have these debates about biases in economic theory. As Elizabeth Popp Berman points out, the left in econ today (Krugman/Stiglitz) isn’t really all that left – or at least it’s a technocratic, market-friendly left. I always thought the whole discussion of “market failure” is framed very peculiarly, as if any deviation from market success is worth studying but not success itself. Michael Sandel certainly wouldn’t agree with this frame. To be sure, taking this logic further leads to the paradoxical conclusion that maybe Keynes himself wasn’t really all that left. There may be some truth to that supposition.

[1] This strategy of pursuing positive-sum changes in the economy that lead to post-redistribution Pareto improvements is known as the Kaldor-Hicks compensation principle. I argued in a prior post that this principle is morally problematic in a rights-based approach to development, but I will in this post stay mainly within the (narrow) domain of economic efficiency.

[2] Please note that I’m using the word “redistribution” loosely. I’m not talking about only money transfers from the winners to the losers. In the case of robotics, the winners could be taxed to fund job-retraining programs for the losers so that the latter can make their skills more complementary to new technology.

Paul Krugman vs. Dean Baker – On Getting Close to Marx


That’s a weird title for a blognote. After all, Paul and Dean agree on almost every policy issue. Both are old-school Keynesians who believe that the economy can occasionally get stuck in a short-run demand slump, which isn’t always quickly corrected without government intervention. They both probably agree that in the long run the economy will self correct absent policy intervention, but not before extreme and preventable human suffering is inflicted. And insofar as periods of extended human suffering from demand slumps are associated with spontaneous political movements calling for a new economic system, it’s probably best to avoid that outcome, especially if history is to provide any clues for what will improve the welfare of people over the long run.

But what’s not widely known is that Paul and Dean usually get to the same macro-policy end by using different methodological means. That methodological difference is very interesting from a sociological perspective.

The difference is that Dean is more likely to tell his readers the truth as he sees it, while Paul has to constantly walk the line between hinting at the truth Dean makes explicit and not upsetting certain people who will write him off completely if he is as explicit as Dean is.

To be more concrete, Dean is very explicit about the inherent conflict between workers at the lower end of the skills distribution, who lack bargaining power in a slack economy, and those who derive their income from profits. Paul rarely mentions this class struggle and in fact works hard to avoid mentioning it in his widely disseminated commentary.

To put this into economic jargon, Paul typically reasons about what policy the Fed should be pursuing in a very standard output gap-NAIRU model, whereas Dean is more willing to throw that model into the garbage.

For example, in this recently published blognote, Paul is very careful to frame what the Fed should be doing as a trade-off between two risks: tightening monetary policy too soon, thereby risking a slip back into the liquidity trap/zero lower bound in the not-too-distant future – much like what happened when Trichet raised rates too soon in the Eurozone in 2011 – and keeping policy too loose for too long, resulting in accelerating inflation. Presumably, Paul thinks that the latter risk is real and will rise as the actual unemployment rate continues to fall toward the CBO’s estimate of the NAIRU. However, he doesn’t think that the latter risk outweighs the former (slipping back into stagnation, more hysteresis, etc.), so the policy conclusion is to err on the side of lower for longer.

Dean gets to the same policy conclusion, but in a different way. For example, in this article, he draws our attention to the fact that the last time the actual unemployment rate fell toward and eventually below the CBO’s estimate of the NAIRU, inflation didn’t accelerate as the model would predict. The period was the late-1990s and Greenspan, being the eccentric guy that he was, allowed the unemployment rate to fall all the way toward 4 percent before tightening monetary policy (the estimated NAIRU at the time was around 6 percent). As Dean notes, the inflation rate didn’t budge. What did happen is that millions of people got jobs who otherwise would not have, and tens of millions of workers at the middle and bottom of the wage distribution saw substantial real wage gains for the first time in a quarter century. The profit rate also fell noticeably. By recounting this historical experience, Dean is revealing to us that he doesn’t put a lot of faith in the NAIRU as a reliable indicator of the risks of keeping the stance of monetary policy loose for longer.

The model Dean does put faith in is basically an institutional model. It includes the concept of “bargaining power.” As Dean explains in his book on full employment with Jared Bernstein, the unfortunate (or fortunate, depending on who you are) characteristic of our modern economy, which lacks unions, is that it denies wage gains to lower-skilled workers until late in the business cycle, when labor markets tighten enough to improve the ability of such workers to bargain for higher wages. Those with the right skills and who are relatively mobile with their supply of labor don’t have this problem, even in a slack economy – I’m speaking mostly from personal experience here, but I’m pretty confident that my experience is generalizable (who can point me to a paper on this?). But for those at the middle and bottom of the wage distribution, the classic Marxian conflict between labor and capital is very much alive and well. And in this sense, it doesn’t make much sense to fret about wage-price spirals as labor markets tighten toward our current estimates of the NAIRU; wage gains, when they occur, will first come out of the profit rate before they start to drive up consumer prices in a threatening way. We are presumably a long way from that outcome, as there has been essentially no recovery in labor’s share of total income earned in the economy.


And therein lies the difference between Paul Krugman and Dean Baker: Dean is willing to get close to the basic insights of Marx, whereas Paul Krugman avoids them like the plague.[1]

As noted, this difference is interesting from a sociological perspective. I have no idea what goes on inside Paul Krugman’s head, but I’m now going to speculate – just to forewarn you. Paul either doesn’t buy into the class-struggle reading of the unemployment/wage data, or he does but isn’t willing to let it enter into his published commentary. The latter seems more plausible to me.

After all, Paul is writing for the New York Times, which is a very mainstream media outlet. Mainstream media outlets are supposed to be serious. They aren’t supposed to say radical things like “depending on the context, a zero-sum game between labor and capital exists.” If Paul started saying things like that, he would likely lose his job. If he had been saying things like that prior to 2008, he might not have won the Nobel Prize.

So he probably restricts what he says in order to advance his career in a society where anyone who mentions that dreaded M word is immediately castigated and seen as a crazy person. Which is fine – everybody does that. Dean even does it to some extent.[2]

But there are two risks to the approach Paul Krugman (I think) takes. The first is that by using a different means to get to the same end that Dean gets to, Paul risks taking us to a different end. Paul and Dean both want the Fed to stay lower for longer to improve the wellbeing of lower-skilled workers at the middle and bottom who have seen no real wage gains in this recovery. But by staying within the standard NAIRU-output gap model, Paul may not convince policymakers that the risks of tightening too soon truly outweigh the inflation risks of staying lower for longer; they may see the cost-benefit calculus differently. Or even if Paul does convince Yellen & Co. that the risks in this cycle are asymmetric and point toward staying lower for longer, he may not convince them to take the side of labor in the next cycle. The class struggle reading of the data is, according to its proponents, something that’s generalizable and should be apparent in all cycles.

The second risk is that Paul Krugman may eventually lose his mind. If you spend your entire life reasoning from a model that you know abstracts from something you believe to be true and important, you will probably eventually reach a breaking point where you feel the need to tell your readers the truth. Otherwise you have a hard time looking at yourself in the mirror at night, for you increasingly come to see yourself as, well, a fraud. Again, this is all conditional on the belief that Paul accepts the Marxian reading of the labor market data but won’t let that reading enter into his published commentary for fear of being seen in the wrong way by the elite pundits of the world. Perhaps Paul has been trained so thoroughly in neoclassical methods that he truly does not believe that the bargaining power of workers is an important causal variable that should be isolated and considered in our economic models of the business cycle.

But if not, the only question is: When is Paul Krugman going to snap and tell his readers what he truly believes? What additional prize is Paul waiting for before he lets the beard, stache and hair grow to be much longer?

[1] Just to be entirely clear about what I’m saying, Dean is not a Marxist who wants the state to seize the private means of production. What I am saying is that Dean seems to accept the basic insight that, in certain contexts and for certain workers, a zero-sum game exists between labor and capital. How you deal with that zero-sum game is a different question. Getting the Fed to stay lower for longer is one approach. Pushing for more unionization is another. Moving toward socialized production is yet another.

[2] I said earlier that Dean occasionally gets close to Marx but he never actually cites Marx and is generally not seen as a Marxian economist in the way that someone like, say, Richard Wolff is. And just to be even more nitpicky, Richard Wolff is not a Soviet-style Marxian; he wants to move us to a world where the worker cooperative, rather than the corporation, is the dominant form of business enterprise – which is not what happened in the Soviet Union.

The Tyranny of Asymmetry

asymmetry

Technocrats like to complain about the “tyranny of the majority.” The problem with direct democracy, they argue, is that it allows spur-of-the-moment populism to pass laws that we might otherwise not want to pass if we had the time to fully consider their ramifications. The Founding Fathers noticed this problem, which is why they set up a system of checks and balances to slow down the legislative process.

The closest example to a direct democracy in the advanced world today is Switzerland. For any change in the Swiss constitution, a public referendum is mandatory. For any change in a law, a referendum can be requested with enough support from the Swiss citizenry.

And, not surprisingly, the Swiss system has some issues.

The latest populist movement there is one led by a consortium of gold bugs who are trying to force the Swiss National Bank (SNB) to repatriate its overseas gold holdings. The so-called “Save Our Swiss Gold” initiative would also mandate that the SNB boost its gold holdings to above 20% of the central bank’s total assets – gold currently accounts for around 7.5% of the SNB’s total assets – and that the SNB never sell its gold holdings in the future.
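To get a sense of the scale involved, here is a back-of-envelope sketch, under the simplifying assumption that the SNB pays for the gold with newly created reserves, so each franc of gold purchased also adds a franc to total assets.

```python
# Back-of-envelope sketch of the "Save Our Swiss Gold" requirement.
# Simplifying assumption: purchases are financed with newly created
# reserves, so every franc of gold bought also expands total assets
# by one franc.

current_gold_share = 0.075  # gold is ~7.5% of SNB assets today
target_share = 0.20         # initiative demands at least 20%

# Solve (G + dG) / (A + dG) = target for dG, with assets normalized to 1.
A, G = 1.0, current_gold_share
dG = (target_share * A - G) / (1.0 - target_share)
print(f"Required gold purchases: {dG:.1%} of current total assets")
# -> roughly 15.6% of the SNB's current balance sheet
```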

Which seem like silly requirements to force upon a central bank. As Willem Buiter of Citi notes, if the SNB is never allowed to sell its gold holdings, that effectively amounts to reducing the value of those holdings to zero. Not to mention, the extra gold that would likely need to be mined should the populist initiative pass would do unnecessary environmental damage. Fiat currencies that carry a marginal social cost of production close to zero, such as US dollars or Japanese yen, are always preferable as a store of value and as a means of exchange to currencies that are costly to produce, like gold.

There are also growing nationalistic tendencies in Switzerland (as there are in numerous countries in Europe), tendencies which may too quickly manifest themselves as new immigration laws under the direct-democracy system. On hot-button issues like immigration, it’s probably better to slow down the law-making process. After all, nationalism tends to ebb and flow with the business cycle – even if its current form in Switzerland is supposedly about “environmental considerations” – and it’s probably not the case that Europe will remain mired in a depression forever. (Though the elites in Brussels and at the European Central Bank are sure trying hard to keep the economy depressed with their anti-stimulus bias.)

So I think reasonable people can agree: the tyranny of the majority is something to be wary of. But why don’t we ever talk about the “tyranny of the profit incentive,” which seems to be replete with similar issues?

Let me illustrate with an example.

The shale gas boom in the US is proving to be revolutionary. The boom is driven by a technique known as horizontal drilling, in which a well is drilled down into the earth near a shale formation and then horizontally into it, blasting water and other mysterious chemicals into the formation so as to crack it. The cracked formation then releases its stored natural gas, which the drilling company extracts for sale and profit.

Once horizontal drilling was developed, numerous oil and gas companies quickly adopted it and started drilling for gas – in the same way that a group of populists can be captured by a new political idea and quickly vote it into law. Which is a serious problem: without the same checks and balances that are needed to slow down the political process, the market will move forward with activities that we might otherwise not want if we had the time to fully consider their ramifications.

In the case of horizontal drilling, the market moved forward with the activity without knowing its full environmental implications. The burning of natural gas releases fewer greenhouse gases into the atmosphere compared to the burning of crude oil, a fact that the natural gas industry has jumped on to appeal to environmentally conscious consumers. But the industry (along with the EPA!) has repeatedly underestimated the amount of methane – a highly potent greenhouse gas – released into the atmosphere when drilling occurs.[1] Under more accurate estimates of methane leakage, it is very plausible that natural gas, when taking into account its extraction, is dirtier than crude oil.

Where were the checks and balances preventing the extraction of natural gas until we fully learned about the problems of methane leakage? Who were the Founding Fathers who designed our economic system?

We do have a check and balance of sorts in economics. It’s called a Pigouvian tax. We use it to charge market participants for the bad things they produce that aren’t fully accounted for in the prices of the transactions into which they enter. In the case of natural gas extraction, we may want to use a Pigouvian tax to tax drilling companies for the environmental damage done by leaking large amounts of methane into our air.
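As a toy illustration of the Pigouvian logic, consider a market with linear demand and supply in which each unit of output leaks methane carrying a fixed external damage; all the numbers below are made up for illustration.

```python
# Toy illustration of a Pigouvian tax with linear demand and supply.
# Each unit produced causes a fixed external damage (think methane
# leakage). The efficient tax equals the marginal external damage.
# All numbers are made up for illustration.

demand_intercept = 100.0  # willingness to pay at quantity zero
supply_intercept = 20.0   # private marginal cost at quantity zero
demand_slope = 1.0
supply_slope = 1.0
external_damage = 10.0    # damage per unit produced

def equilibrium_quantity(tax):
    # Demand price equals private marginal cost plus the tax.
    return (demand_intercept - supply_intercept - tax) / (demand_slope + supply_slope)

print(f"Untaxed output:   {equilibrium_quantity(0.0):.1f}")              # 40.0
print(f"Efficient output: {equilibrium_quantity(external_damage):.1f}")  # 35.0
```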

But the Pigouvian check and balance normally occurs after we learn about the bad things that are associated with a given market activity. If we do end up smartly taxing natural gas companies for excessive methane leakage, we will only do so after a large amount has already been leaked. Which is problematic because any greenhouse gas we emit into the atmosphere stays there for a very long time until it is naturally absorbed by the earth. (We have not yet figured out how to reliably remove greenhouse gases from the atmosphere.)

The Pigouvian check and balance is analogous to letting the impulsive majority irrationally vote into law a racist immigration bill, and then trying to tweak the negative implications of the bill after the fact – by, say, fining firms who refuse to hire people of the race targeted by the bill. That’s not how we pass legislation in sophisticated liberal republics. (No offense, Switzerland.) So why do we let the market operate that way?

Of course whenever somebody uses words like “the tyranny of the profit incentive,” he or she is immediately labeled as a radical Bolshevik who wants the state to violently seize the means of production. Yet our Founding Fathers are seen as wise oracles. That asymmetry in discourse is what’s tyrannical.

[1] See the episode “Winds of Change” in the documentary Years of Living Dangerously.

Science v Religion — for the Kids


I have been thinking a lot lately about the debate between evolutionary science and religion, and how it influences what we teach children in our schools.

It’s no secret that science is winning. As David Barash puts it:

As evolutionary science has progressed, the available space for religious faith has narrowed: It has demolished two previously potent pillars of religious faith and undermined belief in an omnipotent and omni-benevolent God … The more we know of evolution, the more unavoidable is the conclusion that living things, including human beings, are produced by a natural, totally amoral process, with no indication of a benevolent, controlling creator.

And so we teach children a naturalistic conception of man, based on established facts within the scientific community. These facts – such as evolving skeletal structures of humans, based on our findings of fossils – have yet to be contradicted by the religious community. It appears as if man truly came from the animal kingdom.

Which is an amazing insight that should be taught to children! But it is an insight of “natural science.” We also want to teach children about “social science.”

One way to do the latter would be to teach about religion, but in a completely explanatory and comparative way. A course, in other words, on all the different religions that humans have believed in over time, based on descriptive and normatively neutral documentation.

But of course we don’t do that in our public schools. Insofar as social science is taught, it revolves around teaching children historical facts, such as what the major wars were that led society in one direction or another or what important documents were signed that established modern political structures.

What’s the problem with teaching religion to children descriptively and comparatively? And why would we want to do that?

The problem, presumably, is that we don’t want to instill certain values over others in our children’s minds. We may claim that courses on religion can be taught descriptively, but there’s always the risk that teachers won’t teach in that way – that they will become brainwashers.

But what we don’t talk much about is that the same risk applies to teaching children about evolutionary biology. Sure, the facts about evolutionary biology can be taught descriptively. But teaching students those facts without also implying to them that there is no meaning to life beyond the “natural, totally amoral process” of evolutionary biology is no easy task.

Which is a big problem, because leaving children with such an impression is equally harmful: the findings of evolutionary biology in no way rule out the existence of meaning, nor do they undermine the importance of moral structures in social life. The fact is that, at its core, science relies on the same faithful assumptions on which religious scriptures rely: the belief that nature is elegantly ordered in a rational and intelligible way.

As Paul Davies has it:

[T]he very notion of physical law is a theological one in the first place, a fact that makes many scientists squirm. Isaac Newton first got the idea of absolute, universal, perfect, immutable laws from the Christian doctrine that God created the world and ordered it in a rational way … [J]ust as Christians claim that the world depends utterly on God for its existence … so physicists declare a similar asymmetry: the universe is governed by eternal laws, but the laws are completely impervious to what happens in the universe.

Relatedly, as Ronald Dworkin argued in his final book, Religion Without God, the whole methodology of science is widely misunderstood. We think empirical validity is what drives progress in science – that the theories that always win out are the ones that best explain our world. Empirical validity of course does play a role, but so too does simplicity and elegance. A formula that is simple and elegant enough to be written on a t-shirt is ultimately what scientists are after.

This can be seen vividly in the domain of quantum physics. The quantum world, as best we can tell, is extremely messy and not at all simple. It is filled with all sorts of strange particle behaviors such as superposition and entanglement. Quantum physicists could try to deal with the mess straight on, coming up with vastly complicated formulas to describe it. But they generally don’t; they prefer looking deeper at substructures, like Higgs fields, because they are truly convinced that if they go deep enough, they will eventually find simplicity and elegance. But that’s an extremely faithful conviction, no different than a religious one claiming that a benevolent God ordered our universe elegantly with a purpose in mind.

We could teach children these caveats when we teach them science. I mean, obviously we can’t teach them about superposition and entanglement. But we could make it clearer that the findings of natural science in no way rule out the possibility that our universe is the way it is for a reason.

One response may be that discussing the ontology of scientific laws is philosophy, and that it therefore doesn’t belong in a course on observational science. Fair enough. But then teach ontology in a nearby philosophy course, and put equal weight in the curriculum on the philosophy course as on the course on evolutionary biology – as on the previously mentioned descriptive course on comparative religion.

Look, the best we can do is teach children a variety of views about meaning and morality, in a very non-judgmental way, and then let them choose which view to believe. Because, after all, everybody has to choose a view in the end. Or to put it in the wise words of David Foster Wallace:

There is no such thing as not worshipping. Everybody worships. The only choice we get is what to worship.

If you don’t reflect on what you worship consciously, you end up unconsciously worshipping things that appeal to your “default settings,” such as power, status and self-gratification – the animalistic things. Wallace believed, as I do, that if we all were to reflect a little more deeply on the matter, we would choose to worship different things, like possibly “love, fellowship and the mystical oneness of all things deep down” – the humanistic things.

As it stands, it seems to me that we place an unwarranted emphasis on teaching a meaningless and naturalistic conception of man; and that even if this is done in a very descriptive way, it nonetheless tends to crowd out other conceptions of man and purpose, ultimately influencing who our children become later in life. That doesn’t seem very reflective of an educational program that is honest and tolerant of the diversity of views and ideas we as humans have advanced over time.

System Speak: Is It Good?

system speak

It has become very popular among policy types to “blame the system” and to claim that it needs to be tweaked, in a very technocratic way, to produce better outcomes.

If the system is spewing too many greenhouse gases into the atmosphere, then the solution is an externality-internalizing carbon-tax tweak. If the system is encouraging the underwriting of fraudulent mortgages based on manipulated home appraisals, then the solution is to tweak compensation incentives so that underwriters are more likely to favor quality over quantity. If the system is allowing executives to write their own paychecks while their director pals look the other way, then the solution is to tweak the corporate governance structure.

I’m not just making this up. On the latter, here’s how the authors of the incredibly useful book on executive compensation, Pay Without Performance, put it:

Our problem is with the system of arrangements and incentives within which directors and executives operate, not with the moral virtue or caliber of directors and executives. As currently structured, the system unavoidably creates incentives and psychological and social forces that distort pay choices. They can be expected to lead most people (if they are not saints) to go some way, at least as long as they remain within established practices and conventions, toward arrangements that favor themselves, their colleagues, or people who can in turn favor them. If we were to maintain the basic structure of the system and merely replace directors and executives with an entirely different group of people, their replacements would be exposed to the very same incentives and forces and, by and large, we would not expect them to act differently. To address the problems, we need to change the basic arrangements that produce these distortions.

What’s going on here? Why do we always tend to blame the system, rather than cast judgment upon those making decisions within it?

There are a few plausible explanations worth discussing.

The first is that system speak may be emblematic of our thirst for specificity rather than generality. We have tons of very smart policy wonks who have very specific knowledge about certain subjects. Their solutions to social problems are often technical, impressive to those swayed by equations, statistics and dense jargon. The trouble is that too much specificity often leads to a lack of generality. Indeed, we in America seem to have evolved to a point where many of our top technocrats – those on whom we rely most for policy advice – are seemingly unable to connect the dots across disciplines and across schools of thought.

Nowhere is this perhaps more true than in the climate change debate. Our economists continually reduce the problem down to a technical one involving mispriced externalities. They say nothing about our cultural commitment to living excessively – to overeating, to driving big cars, to flying in luxury. As I mentioned in my previous post, you don’t get one without the other. A carbon tax won’t bite unless people are culturally committed to living more sustainably.

More generally, it doesn’t even make sense to talk about tax policy without also talking about the norms of the society contemplating new taxes. You want the rich to pay more? Try changing the cultural way in which financial success is perceived in America. So long as those on the Forbes 400 list are perceived as heroic businessmen, masters of free-market capitalism, you’re not going to get them to pay higher taxes. Even if higher tax rates are legislated, they won’t be paid if there is no cultural will to enforce them – they will be withered away through loopholes while the public repeatedly looks the other way. But if you change the public mindset by castigating many on the rich list as cheaters who earned their money in very sketchy ways, much like Dean Baker is trying to do, then you may have a shot at getting higher tax rates on the top to fly.

That’s not to say that acultural technical matters are unimportant and should be tossed aside. It’s just to say that technical matters always need to be understood in a broader context. Social reality is not a solvable math problem.

A second explanation for the rise of system speak relates to the victory that science has claimed over religion. Historically, religious debates were often characterized by conflicting conceptions of the good, and many of those debates of course ended in bloodshed. Best to ignore all speak of morality to keep the planet peaceful, we understandably reasoned.

Which would be fine if being neutral about the good were actually possible, but it’s not. On nearly every issue, we need to take a moral stance, whether we realize it or not.

For example, in the debate on gay marriage, those in support of gay marriage often claim to be pro-choice liberals. If people want to marry someone of the same sex, then that’s their choice, which shouldn’t be interfered with, such liberals say. However, why does the line stop with marriages between two people? If people want to marry three or four people of the same sex, why do we not honor that choice as well? Even though most liberals want to extend the traditional institution of marriage to include same-sex marriages, most are unwilling to extend it to include polygamous marriages. Why? Because they’ve taken a moral stance in favor of the belief that marriage should be defined as a common bond between just two people.

The same goes for abortion. The liberal position is again pro-choice: let the mother decide whether or not to abort. But at what point in a pregnancy should the abortion option be outlawed? Three months in? Six months? Nine? What about after the baby is born? To answer this, one needs to take a stance on when life begins. We can turn to science for an answer, but even then we have to make a gut-feeling judgment in the end – even our best neuroscientists don’t understand how, when or why consciousness comes about.

In a world where the belief in liberal neutrality nonetheless dominates, the system is repeatedly blamed when things go awry. Insofar as the pay of our corporate executives is way out of line with that of other executives around the world, the problem must be with the incentives they face, not with their insatiable greed or thirst for power.

To be sure, a world where we cast more moral judgment is not a pretty one. It’s a world where radicals will repeatedly try to hijack the national conversation, and it may even lead to more physical violence. But it’s the only way forward, in my opinion.

Deliberation about the good, after all, is what is at the heart of democracy, the most promising form of social governance we as a species have been able to come up with. When we claim neutrality with regard to the good, we become indifferent about voting, about participating in the political process and about upholding civic duty. It’s no mystery that the rise of liberal thinking has been associated with the decline of voter participation all across America.

Weaker political engagement means our democracy is more likely to be hijacked by lobbyists, special interests and big business in general. So take your pick: we either reopen the door for the religious fanatics by having more public debates about the good, or we stand silent while those with power continue to steal our rights and our money, all the while “blaming the system” when we end up poor and violated. I for one think we have come a long way as a species, probably to a point where we can have mature debates about the good without the fanatics hijacking popular opinion, but that’s just me. I may be wrong on this.

Anyway, there’s no doubt that system speak is growing within the academy and within our policy circles. The point of this post is to say that there may be good reasons to question its growth and to even push against it in certain contexts.

Macroecon, Carbon Taxes, and Culture

Sustainability graphic 1

A few weeks ago, Paul Krugman participated in a conference in NYC on Rethinking Economics. The macroeconomics panel on which Paul took part at the conference covered a lot of ground, addressing topics such as financial regulation, monetary policy and economic methodology. (A video of the panel, on which also sat James Galbraith and Willem Buiter, can be seen here.) But one topic spawned a heated debate that’s worth commenting on. The topic is climate change and its relationship to macroeconomics.

Paul does not think climate change poses problems for many of the basic assumptions made in modern macroeconomics. One of those assumptions is that economic growth, measured in terms of real GDP, is endlessly achievable. When the moderator pointed out that, at the global level, economic growth remains highly correlated with higher greenhouse gas emissions, Paul responded that it doesn’t have to be this way; that if policymakers would just adopt the solution to climate change that economists have put forth – namely, carbon taxes – we can, through innovation and resource substitution, shift the composition of growth away from the stuff that pollutes our planet. Saving the planet and maintaining a commitment to economic growth, in other words, are compatible with the right tax policies in place.

The real kicker came when Paul speculated about who would be to blame if the right policies aren’t adopted. Paul said very clearly that it wouldn’t be the economists, but rather the politicians who failed to listen to the economists. This, in my view, inappropriately separates economics from politics. The two are, and have always been, inherently intertwined.

Our economists routinely frame our political priorities. They do so by creating a culture where the masses of people view economic growth as the principal solution to whatever problems they are dealing with. Can’t find a job? Increase GDP. Need a higher salary? Increase GDP. Having marital problems? Increase GDP.

This can be seen starkly in the current debate in macroeconomics on “secular stagnation.” Secular stagnation refers to a situation where aggregate demand in the economy is persistently below aggregate supply, creating underutilization which manifests itself as high unemployment and weak wage growth. One of the causes of secular stagnation is thought to be chronically weak investment demand. Paul Krugman himself has noted that the weakness in investment demand is in part due to the retirement of the baby boomer generation and to the slowdown in population growth generally. According to Paul, the demographic drag has put downward pressure on interest rates; and since nominal interest rates can’t fall below zero percent, if we want to maintain equilibrium investment levels in a world with slowing population growth, we are going to need persistently negative real interest rates. The solution to secular stagnation, Paul therefore argues, is to get central banks to target higher levels of inflation in order to spur investment demand.
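The arithmetic behind that argument is just the Fisher relation, real rate equals nominal rate minus expected inflation; a minimal sketch with illustrative numbers:

```python
# Minimal sketch of the Fisher-relation arithmetic behind the call for
# higher inflation targets: real rate = nominal rate - expected
# inflation. With the nominal rate stuck at its zero floor, the only
# way to push the real rate further below zero is higher expected
# inflation. Numbers are illustrative assumptions.

nominal_rate_floor = 0.0   # zero lower bound, percent
required_real_rate = -2.0  # assumed rate needed to balance saving and investment

required_inflation = nominal_rate_floor - required_real_rate
print(f"Expected inflation needed: at least {required_inflation:.1f}%")
# With a 2% inflation target, the real rate bottoms out at -2%;
# a higher target buys the central bank more room.
```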

The debate is almost always framed as if the best way to deal with secular stagnation is to boost aggregate spending. Rarely ever does Paul (or any of his fellow neo-Keynesian counterparts) talk about option two, which involves taking conscious steps to reduce aggregate supply to meet a lower level of demand. Hint: reducing supply is akin to “deprioritizing economic growth.”

Can’t find a job? Then reduce working hours for current employees and implement more work-sharing programs to spread employment around more equitably. Need a higher salary? Then end the upward redistribution of income to those at the top, which has been going on for decades and which is in large part policy-induced, as explained by Dean Baker in his book The End of Loser Liberalism (2011). Having marital problems? Ending our culture of overwork for endless material gain would likely help.

Each of these solutions would not only help the macroeconomy “equilibrate” – a verb, by the way, that doesn’t even make sense if social systems are ontologically evolutionary and complex, as many sociologists and social science philosophers believe – but would do so at lower levels of actual GDP, meaning fewer greenhouse gas emissions relative to a scenario where we achieved equilibrium exclusively on the demand side.

To be clear, I am not against a carbon tax, imposed either directly or through a cap-and-trade system. We should be pushing on these market-based mechanisms as vigorously as possible. But the reality is that we’re not, so why not deal with that reality and what it implies for economic growth in the future? Why does Paul favor an all-eggs-in-one-basket approach, when there are many other channels we could be pushing on to soften our carbon footprint?

And by the way, I’m skeptical that a global carbon tax of the sort Paul is calling for would even work in the theoretical way he thinks it would. Economists love to rave about the wonders of price incentives. But the reality is that cultures and belief systems are what underlie price incentives and markets. Simply put, I don’t think our modern capitalist system can promote sustainable development even with a hefty carbon tax in place so long as the system encourages greed and excess.

History has shown that if people don’t believe in a tax, it won’t be paid – it will be withered away through loopholes and avoidance schemes. What ultimately makes a tax bite is whether people are culturally committed to the philosophy underlying the tax.

What people in the West seem to be afraid to say is the following: we are all eating too much and buying too much useless shit, blindly wrapped up in a culture that promotes selfish action for material gain, the accumulation of which doesn’t even make us generally happier as a society. To use the wise words of David Foster Wallace, we are all living unconsciously. We are operating in “default mode,” driven by our natural instincts for pleasure, persistently placing ourselves at the center of all of our experiences. That’s what’s destroying our planet, not some technical problem related to externality pricing.

The key to living consciously, according to Wallace, is to actively push against our default settings. It means being aware enough “to choose what you pay attention to and to choose how you construct meaning from experience.” With more self-reflection, I believe, we would each recognize that the current economic system within which we live is not compatible with sustainable human development, and that our cultural commitment to individual achievement through material gain is fueling the system. Without altering that cultural commitment, I doubt a carbon tax is going to do very much.

Sociologizing Krugman and His Evolving Worldview

krugman evo

Unlike many very popular economists, Paul Krugman at least makes efforts to acknowledge previously made mistakes. In a recent post, he walks us through four he made over the years:

  1. He missed the IT-related productivity surge beginning in the mid-1990s.
  2. He admits that his preference for fiscal consolidation following the early 2000s recession was probably ill advised, given the budgetary flexibility that politically stable money printing countries like the US have (and also given that the recovery from the early 2000s recession was the worst on the jobs front since the Great Depression).
  3. He was calling for a euro breakup in 2010-12 that never came to fruition.
  4. He seems a bit surprised that Britain’s economy is growing so quickly at present, following the fiscal austerity implemented by the Cameron government beginning in 2010.

But what readers probably don’t know is that there are several other more fundamental mistakes – or, more appropriately, evolutions of thinking – that Krugman has made which he has not yet explicitly acknowledged. To see this, we need to sift through an old debate Krugman had with James Galbraith in the 1990s.

In that debate, Krugman asserted as “unambiguously wrong” and as a “silly doctrine” the following statement: “Workers are hurting because labor has failed to share in national productivity gains.” Back then, Krugman didn’t believe this statement to be true because, as he noted, stagnant or falling real wages during a period of rising productivity would imply a persistently falling labor share, a pattern he couldn’t find in the data.

But Krugman presumably now feels that the failure of wages to keep up with productivity is in fact hurting workers, as hinted at in posts like this one, which documents how the failure of marginal productivity theory to hold in the real world has resulted in a shift of income away from labor toward capital. FWIW, here’s how the data on labor’s share of income looks in the US:

labors share

That series seems to me to be falling secularly beginning in the 1960s.

The other important mistake that Krugman has not explicitly admitted to (but has hinted at) relates to the belief that high rates of unemployment can systematically weaken the bargaining power of workers, thereby resulting in wage cuts for the lower skilled. Galbraith made this point in his debate with Krugman, stressing his belief that persistently high unemployment rates were a key force behind the growing levels of wage inequality. This is also the central issue raised in Dean Baker and Jared Bernstein’s recent book.

The issue at stake is whether the monetary policy shift beginning in the 1980s with Volcker’s decision to favor low inflation over low rates of unemployment was, de facto, the beginning of a multi-decade effort to redistribute income away from labor toward capital. This class-oriented view is not embraced by orthodox economics, which sees monetary policy as purely a positive-sum game – as in the way central bankers are thought to scientifically optimize policy functions over relevant goals (e.g., low and stable inflation and full employment) so as to maximize overall utility, with no reference whatsoever to distributional considerations.

Krugman has recently been hinting at the idea that inflation targeting may have a class component to it, stressing the obvious fact that those who derive a relatively large share of their income from interest would be hurt by higher rates of inflation. Heck, even Janet Yellen has apparently come to a similar conclusion, as evidenced by her efforts to modify the Taylor rule into a so-called “balanced approach” rule, which consciously places more weight on reducing economic slack than on maintaining low and stable inflation. We cannot look into Janet Yellen’s mind; but she presumably favors the balanced-approach rule because, recognizing the fundamental class trade-off that monetary policy imparts, she feels as though labor has suffered enough over the years and deserves some catch-up relative to capital.

To be sure, Krugman has in recent years noted that the world in his eyes does have some Marxian elements to it. But he has always done so while implicitly endorsing the fundamental assumptions of classical political economy – e.g., that markets are competitive, thereby tending toward equilibrium – which Marx himself would never have endorsed and which neo-Marxians like Richard Wolff scoff at. Krugman has never – or perhaps never explicitly – entertained the view that the system is being politically managed by the rentier class so as to maintain a high rate of return for the capitalists. Indeed, in his initial sketch of Piketty’s r-g law, Krugman refers us to the Solow model, which has a hard time incorporating an elasticity of substitution between capital and labor that is higher than 1 – which is what Piketty needs to justify his r-g worldview. Might the Solow model, which is built upon classical equilibrium-oriented assumptions, be the wrong model to use to understand the dynamics between labor and capital (especially financial capital!) earnings if those dynamics are fundamentally rooted in class-based political struggles?
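To see why the elasticity of substitution matters so much here, consider a minimal sketch using a CES production function; when the elasticity is above 1, capital’s income share rises as capital deepens, and when it is below 1, the share falls. The parameter values are illustrative assumptions.

```python
# Minimal sketch of why the elasticity of substitution (sigma) matters
# for Piketty's story. With CES production,
#   Y = (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho),
# where rho = (sigma - 1) / sigma, capital's competitive income share
# works out to alpha * (K / Y)**rho. Capital deepening raises the
# share when sigma > 1 and lowers it when sigma < 1. Values are
# illustrative assumptions.

def capital_share(K, L=1.0, alpha=0.3, sigma=1.5):
    rho = (sigma - 1.0) / sigma
    Y = (alpha * K**rho + (1.0 - alpha) * L**rho) ** (1.0 / rho)
    return alpha * (K / Y) ** rho

for sigma in (0.7, 1.5):
    shares = ", ".join(f"{capital_share(K, sigma=sigma):.3f}" for K in (1.0, 2.0, 4.0))
    print(f"sigma = {sigma}: capital share as K goes 1 -> 2 -> 4: {shares}")
```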

The bottom line is that we are slowly starting to see an evolution in Krugman’s worldview. He is inching toward heterodox economists like James Galbraith and Richard Wolff, but not explicitly. It will be interesting to see if Krugman elaborates on how his mind is evolving at the upcoming Rethinking Economics conference in NYC in September.

Static Model + No Ontological Investigation = Bad Economics


Paul Krugman has an insightful review of Timothy Geithner’s new book, Stress Test. In it, Paul walks us through the similarities between the 2008 financial crisis and a classic bank run.

The model of a classic bank run is as follows. At any given moment, banks hold only a fraction of the deposits they receive in the form of cash; relatively illiquid assets are purchased with the rest of the deposits. The spread between the interest rate a bank pays to its depositors and the yield on the bank’s illiquid assets is what determines the bank’s profits. Banks are able to keep only a small amount of cash on hand because only a small fraction of a bank’s depositors will try to pull their money out on any given day.

But the risk of a run is always there. As Paul writes:

Suppose that for some reason many depositors do decide to demand cash at the same time. The bank won’t have that much cash on hand, and if it tries to raise more cash by selling assets, it will have to sell those assets at fire-sale prices. The result is that mass withdrawals can break a bank, even if it’s fundamentally solvent. And this in turn means that when investors fear that a bank may fail, their actions can produce the very failure they fear: depositors will rush to pull their money out if they believe that other depositors will do the same, and the bank collapses.
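To make the fire-sale mechanics concrete, here is a minimal sketch of a fundamentally solvent bank that fails anyway once withdrawals overwhelm its cash cushion; the balance-sheet numbers and the fire-sale discount are illustrative assumptions.

```python
# Minimal sketch of the classic bank-run mechanics: a fundamentally
# solvent bank (assets exceed deposits at hold-to-maturity values)
# fails anyway when withdrawals exceed its cash and it must dump
# illiquid assets at fire-sale prices. All numbers are illustrative.

deposits = 100.0
cash = 10.0                # fractional reserve: 10% held as cash
illiquid_assets = 95.0     # book value if held to maturity
fire_sale_discount = 0.40  # assets fetch 60 cents on the dollar today

def survives(withdrawals):
    shortfall = max(withdrawals - cash, 0.0)
    # Raising the shortfall in cash requires selling more book value.
    book_value_sold = shortfall / (1.0 - fire_sale_discount)
    return book_value_sold <= illiquid_assets

for w in (10, 30, 60, 80):
    print(f"withdrawals = {w}: {'survives' if survives(w) else 'fails'}")
# Equity is positive (10 + 95 > 100), yet a large enough run breaks the bank.
```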

Paul then goes on to talk about how federal deposit insurance has been the solution to the problem of banking panics. When deposits are insured, depositors won’t all decide to demand cash at the same time, because even if the bank at which their money is stored has problems, the insurance will assure the depositors that they will still get their money back if the bank fails.

Then Paul draws an important analogy to the 2008 financial crisis, which, he notes, was essentially a classic bank run: the so-called “shadow banks” were raising money through various, and often obscure, forms of short-term borrowing, such as repo, which weren’t federally insured. As soon as a crisis of confidence emerged, the short-term lenders all simultaneously demanded their money back, forcing the shadow banks to sell their assets at fire-sale prices. The forced selling had ripple effects across the broader financial system, pushing many otherwise solvent institutions into insolvency. In turn, lending collapsed, and the economy contracted sharply.

Spelling out the analogy between a classic bank run and the 2008 financial crisis presents a teachable moment for economists. On the one hand, we have what seems like a timeless model of how bank runs emerge. (Well, timeless at least since the advent of fractional reserve banking.) This model offers a clear normative policy prescription: institute federal deposit insurance. Yet the model did not help us avoid the 2008 crisis; nobody really understood the shadow-banking sector well enough to see that a classic bank run could emerge within it. As such, we did not have an insurance mechanism in place to prevent the crisis from spreading. So what went wrong? Why was the model not helpful in real time?[1]

To answer these questions, we need to talk about the ontology of the social world. Philosophers like John Searle and Tony Lawson have noted a peculiar thing about the social world: that it’s dynamic. And by dynamic, I don’t mean in the way most economists understand the word – as, for example, having to do with intertemporal consumption smoothing or rational expectations. No, by dynamic, I basically mean evolutionary.

The banking sector, for example, has evolved from consisting of traditional banks that finance themselves by taking in cash deposits to including shadow banks that finance themselves by borrowing money in the form of obscure short-term lending agreements. The key point is that our perception of how to identify a bank has changed. We used to think of banks as those places with high ceilings and polished marble floors where people go to deposit their savings. But over the years, other institutions have been formed and have evolved to do what traditional banks do, except outside the regulatory radar, as it were. Are these new institutions banks? What does the word “bank” even refer to in the model of a classic bank run?

Drawing a contrast between the biological world and the social world might be helpful. We know the biological world evolves, but only very slowly – provided that we’re talking only about large and complex organisms rather than unicellular ones. The species we see in existence today won’t evolve into fundamentally new species tomorrow, next year or even 100 years from now. Conceptually, we can therefore think of the biological world as static for modeling purposes. If climate change continues to warm the planet over the next 50 years, we can hypothesize about how that might affect the various forms of species that exist on the earth today.

The same is not true in the social world, where things evolve rapidly. Federal deposit insurance was instituted in the US in 1933. In less than a century, the whole landscape of the banking sector changed in a way that left many short-term lending arrangements uninsured, the very ones for which insurance would have prevented a collapse in confidence.

One of the biggest problems in economics is that we have a great many static models that were created at various points in time, when the economy may have been very different than it is today. For example, and to pick on Paul Krugman, we have something called the IS-LM model, which Paul frequently references to argue for using fiscal policy to boost demand when short-term interest rates are at their zero lower bound. As the story goes, when there is a persistent excess of desired private saving over desired investment, as is ostensibly the case at the zero lower bound, stimulus puts the excess saving to work in the form of public investment, helping to equilibrate desired saving and investment at a higher level of output, preferably one that is consistent with full employment.

The IS-LM model was formulated in the 1930s, when the US was very much a closed economy. Does the model still apply today even though the US is much more open? Given the US’s role as a huge net importer, especially for infrastructure materials such as steel and concrete (see chart below), it seems plausible that a portion of any stimulus money spent on domestic infrastructure will generate jobs overseas rather than at home. If the government builds a road, the project will create many more local jobs if the concrete for the road is produced domestically rather than in China or Mexico.

materials trade def

Source: BEA

That’s not to say that spending more on fiscal stimulus in 2009-10 wouldn’t have been a good idea; it’s just to say that the landscape at which the IS-LM model was originally aimed may have evolved, thereby somewhat changing the conclusions of the model. After all, we have some evidence to suggest that fiscal multipliers are much lower for open economies relative to closed ones.
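A minimal sketch of the textbook Keynesian spending multiplier with an import leakage makes the point; the propensities below are assumptions chosen for illustration, not estimates.

```python
# Minimal sketch of the Keynesian spending multiplier with an import
# leakage: in an open economy, part of each round of induced spending
# buys foreign goods (steel, concrete), shrinking the domestic
# multiplier. The propensities are illustrative assumptions.

def multiplier(mpc, import_share=0.0):
    # Each round, mpc of new income is spent, and import_share of
    # that spending leaks abroad instead of becoming domestic income.
    return 1.0 / (1.0 - mpc * (1.0 - import_share))

print(f"Closed economy (mpc = 0.6):         {multiplier(0.6):.2f}")
print(f"Open economy (mpc = 0.6, m = 0.25): {multiplier(0.6, 0.25):.2f}")
```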

To be sure, Paul Krugman has mentioned this caveat, arguing that stimulus today would probably yield more bang for the buck if it were glazed with a flavor of protectionism. But as far as I can tell, he has downplayed the caveat in other writings, even going as far as to simply assert that the US should be modeled as a closed economy.

My critique is more methodological. When we present an economic model, we should always also present a historical narrative explaining when the model was originally formulated, what world it attempted to describe and why the world today hasn’t evolved to undermine the conclusions of the model. In the context of IS-LM, globalization, NAFTA, the rise of China and the shift of the US from running trade surpluses to running large trade deficits were each evolved processes. The global supply chain for infrastructure materials is no longer located in the United States. This will not change overnight just because the US government decides to spend more money. IS-LM says nothing about these evolved processes, and if you leave them out of your analysis, you’re leaving out many key ingredients.

In the end, I probably agree with Paul on his policy prescriptions. Notwithstanding the lower fiscal multiplier due to the trade deficit, I think more stimulus following the financial crisis would have been preferable. But it’s his lack of interest in ontological investigation that annoys me the most.

Once you think deeply about the ontology of the social world, you realize that to model phenomena in it, your tools and how you apply them need to be constantly changing – nothing is timeless. As such, if the goal is to improve people’s welfare through policy, then the policies we recommend need to be constantly updated.

And, on a deeper level, even the ideals that we as a society have established and have aspired to live up to are constantly changing. This is why it’s a bit ludicrous of some to assert that if we were to just abide by the ideals outlined in the Constitution, then everything would be all well and good. Our perception of what constitutes justice has changed enormously since the 18th Century. Sure, there are ideals like freedom which seem to be universal and timeless. But the terrain to which we apply such ideals is evolving in ways that constantly complicate the application process.

For example, we in America place a high value on freedom of speech. This includes the right to support any political party of our liking. In the olden days, this meant that people were allowed to stand on street corners and hand out pamphlets praising their preferred political parties. But the economy has evolved in ways that make mass marketing more intrusive, allowing those with money to express their political views more widely – in the form of TV ads, billboards and mass emails from super PACs. Does rising inequality pose a challenge to the way we apply the ideal of freedom of speech? When those with money express their freedom of speech today, does it drown out the freedom of speech of those without money?

I’m sorry, I wish the social sciences were easier. I wish our conceptualized models and ideals were timeless in application. It is truly a beautiful thing when a radically simplified static model can yield tremendous insights about the natural world, as models in the physical sciences routinely do. But the social sciences don’t work like that, because they can’t work like that.

It’s high time we get over our collective envy for physics-type modeling in economics. We should be having more discussions about ontology. The implication is that we can no longer afford to talk about models without a reference to their historical foundations and to the evolution of the social world. If we can convince Nobel laureates like Paul Krugman to employ a more reflective methodology, then we just might be able to make economics a more progressive science. If people like Paul don’t listen, well then we may need to wait until the older generation of economists dies off before real change can occur.

[1] It’s of course easy to use the model to explain the 2008 financial crisis after the fact. But we perhaps set the bar too low if we aspire to only explain developments after the fact in the social sciences.

Monetary Policy is Not a Class-Neutral Tool

volcker greenspan bernanke

One of the biggest cover-ups in economics is the representation of political tools as technocratic, class-neutral mechanisms. Perhaps nowhere is this more true than in the realm of monetary policy. The debates on what the Fed should do to influence the trajectory of interest rates are usually riddled with sophisticated words like “macroprudential” and “optimal control.” We are led to believe that models and scientific thinking drive the actions of central bankers.

The reality is more nuanced. Modern central banks heavily influence the relative shares of income going to labor and capital. When the Fed is willing to let aggregate demand accelerate to push up wages for the working class, labor benefits; when the Fed clamps down on wage pressure for fear of igniting inflation, capital benefits.

The modern business cycle, for better or worse, systematically benefits capital before it benefits labor. When a recession hits, firms go into cost-cutting mode, laying off the least productive workers, reorganizing production processes and slashing inventories. As such, recessions make firms leaner and more profitable.[1]

Thus, when sales start rising in the early stage of recovery, productivity growth in the business sector normally accelerates, translating into higher profit margins. Meanwhile, the relative abundance of available labor keeps the wages of workers low by diminishing their bargaining power. The result is that aggregate income in the economy rises, but most of the new income generated goes to capital. It is not until the later stages of recovery that further accelerations in aggregate demand reduce labor-market slack, enhancing the bargaining power of workers and allowing labor’s share of the total pie to rise. Importantly, inflation does not emerge when the wages of workers begin to rise; rising wages first cut into profit margins. Once margins are narrowed sufficiently, then further wage hikes can translate into the sort of wage-price spirals we saw in the 1970s.
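The accounting behind that sequencing fits in one line: with prices set as a markup over unit labor costs, inflation is approximately wage growth minus productivity growth plus markup growth, so wage gains show up first as a shrinking markup and only later as inflation. A minimal sketch with made-up numbers:

```python
# Minimal sketch of the accounting behind "wages cut into margins
# before they cut into prices." With P = markup * (W / productivity),
# growth rates satisfy, approximately:
#   inflation ~= wage growth - productivity growth + markup growth.
# All numbers are made up for illustration.

wage_growth = 4.0          # percent per year
productivity_growth = 1.5  # percent per year
markup_growth = -2.5       # firms absorb wage gains in their margins

unit_labor_cost_growth = wage_growth - productivity_growth
inflation = unit_labor_cost_growth + markup_growth
print(f"Unit labor cost growth: {unit_labor_cost_growth:.1f}%")
print(f"Implied inflation:      {inflation:.1f}%")
# Only when margins stop compressing (markup_growth -> 0) does unit
# labor cost growth pass through fully to prices.
```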

Let’s look at an example. Coming out of the 1990-91 recession, which was brought on by the savings and loan crisis of 1989, by oil price shocks in the subsequent year and by Fed tightening, slack in the labor market was high (the unemployment rate peaked at 7.8% in 1992), translating into accelerating productivity growth in the business sector and a declining share of income going to labor. Continued improvement in demand eventually sopped up the excess labor and allowed workers to bargain for higher wages, which cut into profit margins, allowing labor to receive an increasing share of the total income generated. Importantly, inflation over this period was a nonissue: the increasing bargaining power of workers in the late 1990s, as a result of the unemployment rate falling toward 4%, cut directly into firm profits, allowing labor’s share of income to get back to roughly where it was prior to the 1990-91 recession. You can certainly criticize Greenspan for ruining the economy in the 2000s, but he and his staff made the right choice in the late 1990s to keep the pedal to the metal in the face of an abundance of criticism from the inflation hawks. In short, the Fed did its job in the 1990s, allowing labor to participate in the later stages of the recovery.

labor share profits 2

The reason I am emphasizing all of this is because history is rhyming, as it usually does. Coming out of the most recent recession, capital benefited enormously from the cost cutting that was done during the downturn. Business productivity soared, and an unprecedented amount of slack in the labor market kept wage growth muted. The result is that the share of income going to capital spiked, reaching record levels.

capital labor 3

corp profits

However, the moment of truth is coming for the Fed. Slack is diminishing, and the wages of workers are beginning to rise. The capitalists are screaming nonsense about wage-price spirals and financial instability in order to get the Fed to tighten, in hopes of locking in the record share of income going to capital. If the Fed ignores the capitalists, it can put upward pressure on labor’s share of income by keeping monetary policy extraordinarily loose. Inflation is a nonissue, and it won’t become a threat until after rising unit labor costs cut substantially into profit margins.

Monetary policy needs to be understood in this context. The Fed is not just responsible for maintaining price stability and full employment; it also needs to make sure that demand imbalances don’t push factor shares too far out of whack in the short run. Technological change can of course alter the relative shares of income going to capital and labor over the long run, but this is not, I would argue, the most pressing development we face today. The share of income going to labor has fallen by roughly 6% since 2007. The vast majority of this decline can be reversed by the Fed in the coming years if demand is permitted to accelerate and wage growth is not choked off.

We have avoided these class debates in the domain of monetary policy for far too long. If the central bank as an institution were more democratic and transparent, the battle between capital and labor would be publicly acknowledged and dealt with. Under the current setup, we rely on technocrats to maintain an equitable distribution of income; but I’m not particularly optimistic that they will rise to the occasion. The Fed people I have interacted with seem utterly ignorant of the great works on distributive justice produced by the likes of, say, John Rawls, Michael Sandel, John Stuart Mill or Robert Nozick. How are our modern central bankers to make a judgment on distribution if they have no understanding of what an ideal distribution would even look like?

This is not a failure of the individual but rather of the system within which central bankers are educated and groomed. Technical skills are valued over sound moral judgment, business acumen over civic duty.

The path forward, it seems to me, involves either better educating our central bankers or democratizing their institution to make it more accountable to the masses of people. At the very least, we need to recognize that monetary policy is not, will not and can never be a class-neutral tool. Debates about justice and distribution are not easy, but that does not mean they should be avoided, deceptively or otherwise.

[1] That is, at least those that survive or are bailed out by the state.

New Research is Looking Very Polanyi-Like


I attended the annual INET conference in Toronto a few weeks ago. Many interesting ideas were discussed, and it was great to hear what is at the cutting edge of econ these days. In particular, two ideas were stressed that really cut into the core of neoclassical thought, and I want to take the time to describe them and what they imply for our understanding of the modern (political) economy.

The first is George Soros’s idea about reflexivity in financial markets. This idea is not new, as Soros has been talking about reflexivity for at least two decades. But what is new is that the philosophical foundations of reflexivity were recently spelled out in detail in a special edition of the Journal of Economic Methodology.

The punch line is that there is seemingly an inherent feedback loop between the actions fallible individuals take based on their assessments of fundamental value and the fundamental value itself. These feedback loops, moreover, can often lead to boom-and-bust cycles.

The classic example can be found in the stock market. When we find ourselves in a situation where we measure the price of a stock to be below that implied by its fundamental value (e.g., its earnings per share), we usually take action; namely, we purchase the stock. But everyone tends to do this at the same time because everyone can understand and easily measure the stock’s fundamental value. The problem is that if enough people simultaneously act on their measurements of the stock’s fundamental value, their actions may end up changing the stock’s fundamental value in a big way.

Indeed, when people pile into a stock and the price of the stock starts rising fast, the media usually notices and starts talking highly about the stock. This affects people’s perceptions of the company beneath the stock, causing greater demand for the company’s products. As people then purchase more of the company’s products, the company’s earnings start to rise, increasing the stock’s fundamental value. A game of predicting the expectations of others can then sometimes drive the company’s stock price even higher: people start to purchase the stock not because they estimate the stock’s fundamental value to be above the stock’s market price, but rather because they expect purchases by others to drive up earnings for the company, eventually justifying the higher stock price with stronger fundamentals.[1] When people’s expectations get very far ahead of the fundamentals – because people are inherently fallible – buyers eventually have a collective moment of clarity, realizing that the fundamentals will never rise to justify the stock’s exorbitant price. When this happens, everyone sells en masse and the bubble collapses.

The point of reflexivity is to say that certain markets are never in a stable equilibrium; they are always transitioning from boom to bust or bust to boom. The cutting-edge work is being done by people like Cars Hommes, who is trying to figure out, empirically, under what conditions reflexive processes will lead to negative feedback loops (which do eventually settle into stable equilibria) versus positive feedback loops (which are characterized by boom-bust patterns).
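For intuition, here is a toy numerical version of that distinction, in Python. Every parameter below is an illustrative assumption of mine, not an estimate from Hommes’s work:

```python
# Toy reflexivity loop: traders push the price toward perceived fundamental
# value and also chase the recent trend, while the fundamental itself drifts
# toward the price (hype lifts the company's actual earnings).
def simulate(extrapolation, feedback=0.3, correction=0.5, steps=200):
    price, last_price, fundamental = 95.0, 95.0, 100.0
    path = []
    for _ in range(steps):
        momentum = extrapolation * (price - last_price)   # trend chasing
        mispricing = fundamental - price                  # value buying
        last_price = price
        price += correction * mispricing + momentum
        fundamental += feedback * (price - fundamental)   # reflexive channel
        path.append(price)
    return path

calm = simulate(extrapolation=0.0)  # negative feedback: price settles down
wild = simulate(extrapolation=1.2)  # positive feedback: ever-larger swings
```

With no trend chasing, the mispricing term pulls price and fundamental together and the system settles; crank the extrapolation coefficient up and the momentum term dominates, producing overshoots of growing amplitude – a crude caricature of the boom-bust case.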

The other big idea that was discussed at the INET conference relates to what the complexity economists have been doing lately. The seminal paper in this movement is by Brian Arthur, who highlights the key findings that complexity economists have discovered in recent decades. In short, complexity economics has advanced hand in hand with computer science over the past few decades, particularly with respect to machine learning.

In one of the classic simulations, Kristian Lindgren constructed a computerized tournament where bots competed in randomly chosen pairs to play a repeated prisoner’s dilemma game. The computerized agents have learning capability, which means they can adjust their strategies to perform better in future iterations of the game. After thousands and thousands of iterations, some patterns emerge: the system as a whole goes through periods alternating between relative stability and chaotic instability. Importantly, the system never “settles down” and is constantly at risk of going through a phase transition from either stability to instability, or vice versa. Also importantly, the phase transitions can be abrupt and violent.
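To give a flavor of the setup, here is a stripped-down sketch in Python. The encoding is my own simplification – strategies react only to the opponent’s last move, and “learning” is reduced to mutation plus selection – whereas Lindgren’s actual model allows strategies of growing memory length:

```python
import random

# Payoffs for (my move, opponent's move): C = cooperate, D = defect.
PAYOFF = {('C','C'): (3,3), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def play(s1, s2, rounds=50):
    h1 = h2 = 'C'                       # both open by cooperating
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1[h2], s2[h1]         # respond to opponent's last move
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1, h2 = m1, m2
    return score1, score2

def mutate(s):
    # Flip the response to one possible history -- crude "learning".
    return {**s, random.choice(['C','D']): random.choice(['C','D'])}

population = [{'C': 'C', 'D': 'D'} for _ in range(50)]   # start as Tit-for-Tat
for generation in range(1000):
    random.shuffle(population)
    scores = []
    for a, b in zip(population[::2], population[1::2]):   # random pairings
        sa, sb = play(a, b)
        scores += [(sa, a), (sb, b)]
    scores.sort(key=lambda pair: -pair[0])
    survivors = [s for _, s in scores[:len(scores)//2]]   # top half reproduces
    population = survivors + [mutate(s) for s in survivors]
    if generation % 100 == 0:
        avg = sum(s for s, _ in scores) / len(scores)
        print(f"generation {generation}: average score {avg:.1f}")
```

Even in a toy like this you can, with some luck, watch cooperative regimes rise, get invaded by defectors and collapse – the punctuated, never-settling dynamics are the point.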

Taken together, these two ideas – Soros’s elaborations on reflexivity and the findings by the complexity folks on interactive systemic instability – suggest that markets may be inherently volatile, in a vicious and destabilizing sort of way. If true, this has vast implications for the political economy. It would essentially mean that Karl Polanyi’s central hypothesis has merit: that the dream of the market as a self-regulating and stable system is just that, a utopian dream, which, if followed, will inevitably lead to war, conflict and strife.

You see, Polanyi argued that there are two types of commodities: real commodities like tangible goods and “fictitious” commodities like labor, land and money. The more we marketize, the more we force commodities to go through price or location adjustments. For example, when the market price of a widget falls (say, for some exogenous reason), buyers adjust to purchase more widgets and producers adjust to supply fewer widgets (both of which send the price of a widget higher again). Price adjustments may also force widget suppliers to move their inventories from one location to another (e.g., from a low-demand location to a high-demand location). What the Soros/complexity research suggests is that this adjustment process may be violent and might never settle down into a steady state; i.e., the price of a widget might whipsaw around chaotically and might even go through persistent booms and busts. But in the context of widgets, this is not a big deal, you know, because widgets don’t revolt or feel bad when their prices are volatile or low, or when their production is moved suddenly from one location to another.

Labor, on the other hand, has a notoriously difficult time dealing with market-induced adjustments. Economic shocks may force workers to either abruptly accept much lower wages than what they feel they are worth – which could inflict severe psychological harm – or move to a new place where economic prospects are better, perhaps one where the local culture is very different from what they are used to. All of this means that workers may get very angry when they are forced to adjust their wages or ways of living to the market. And when workers get angry, they vote for change, primarily against the very liberalization that has inflicted suffering upon them. The populist resentment may even turn racial or nationalistic: when workers don’t have anyone to blame, they typically blame those who simply look and speak differently than they do.

Polanyi saw the embeddedness of the market in social and psychological relations within the context of the gold-standard era stretching from the late 1800s to the 1940s. To him, it was obvious that the gold standard was a form of liberalization inflicting massive, abrupt and unnatural adjustments onto people, particularly those in Europe. Eventually, Polanyi argued, the adjustments became too unbearable, leading to the rise of protectionism and nationalism in the 1930s, which of course set the stage for World War II.

If we flash forward to today, the situation in Europe looks eerily similar. The adjustments that the European Monetary Union has inflicted on people are so vast that the citizens of Europe are unwilling to go through with them; and indeed they are pushing back strongly, voting for populist, anti-euro parties all across the continent.[2] And do you blame the citizenry for revolting? Workers are being forced to either accept rampant wage cuts (in Greece and Spain) or leave their families and friends to move to entirely different cultures where they are not accepted. The ideal of full labor mobility across Europe sounds great in theory, but if people are fundamentally wedded to their home cultures – in a Sandelian/encumbered sort of way – it’s not exactly easy to pack one’s bags and leave at a moment’s notice.

If the new research suggesting that markets are inherently violently volatile is correct, then it means that projects like the European Monetary Union won’t work, because they can’t work: they force fictitious commodities like labor to go through devastating adjustment processes to the point where those being forced to adjust revolt politically until the marketization is undone.[3]

What’s going on is that we may fundamentally have the wrong model of markets in economics. Projects like the European Monetary Union are sold on the basis that markets are stable, mostly in equilibrium and characterized by relatively minor adjustments to get to equilibrium. In that model, you can even accept Polanyi’s distinction between real and fictitious commodities and still believe in liberalization, provided that you view the adjustments required in an ideal market setting as not violent enough to force labor into a state of rebellion. But if market movements are violent even under ideal conditions, then we have a serious problem: even if we liberalize with good intentions, the whole project may backfire politically when wrenching adjustments in lifestyle and in personal self-worth are forced onto people. This suggests that the path forward may be to either take a step back and fundamentally rethink our liberalization projects and what we hope to accomplish with them, or make certain that, prior to embracing the market solution, we have the appropriate safety nets in place to shield people from the extreme adjustments that the market will inevitably force upon them.

In other words, the market often moves too fast, and we should slow it down lest it destroy our cultures and our people, as Polanyi stressed nearly a century ago:

It should need no elaboration that a process of undirected change, the pace of which is deemed too fast, should be slowed down, if possible, to safeguard the welfare of the community. (The Great Transformation, Chapter 3)

[1] This predicting-the-expectations-of-others game is not dissimilar to how Keynes thought prices in financial markets are determined.

[2] Specifically, the adjustment that the EMU imposed on the economies of Europe has been as follows. Monetary union led to a quick convergence of borrowing rates across the continent, which in turn caused capital to migrate out of the core northern euro countries (like Germany and Austria) and into the peripheral southern countries (like Greece, Italy, Spain and Ireland). The capital movements resulted in huge asset bubbles and debt buildups in the southern countries, particularly in housing. Once the bubbles collapsed, the necessary adjustment has been either massive deflation in the southern countries or massive inflation in the northern countries (to get relative prices to realign), or huge amounts of labor outflows from the south to the north (to move the supply of labor to where it’s demanded most).

[3] To be sure, it might be possible for liberalization projects like the EMU to work if the appropriate social safety nets are in place to help smooth the adjustments forced onto workers. But of course Europe has gone in the opposite direction in this regard, choosing to slash social spending in the very countries that need it most.

Models and Morals


The students are revolting against the economics curriculum. For full disclosure, I should say that I am an organizing member of this growing movement, particularly active in the Rethinking Economics initiative in New York City.

Our complaint is an obvious one: that mainstream economics, the way it is taught at most universities, is too narrow. We therefore want to open the discipline up to new or underappreciated ways of thinking. And we’re not just talking about emphasizing useful ideas from the various schools of economic thought that have long been marginalized. We also want to open the discipline up to ideas from outside of the economics texts, particularly from such diverse disciplines as moral and political philosophy, sociology, history, anthropology, psychology and physics.

Because our alternative to mainstream econ is essentially a melting pot of new ways of thinking, we’re left to argue through one-off examples. This is precisely what I’ve tried to do in this blog: to give instances of how ideas from the outside may be useful for economists in their pursuit to explain, predict and prescribe economic phenomena. My analysis of how performativity in economics has vast implications for whether the discipline can really be a value-neutral science is just one example. As was my description of what economists are really saying when they tell us that economic growth can potentially make everyone better off.

The point is, we don’t have an alternative, one-size-fits-all model to counter the neoclassical paradigm, which emphasizes rational choice, maximizing agents, and general equilibrium as the base case. You may therefore see our opposition as a weak one. But you know, the social world is incredibly complicated. That complexity, I would argue, demands flexibility. Our goal is to understand many diverse ways of thinking, so that when we’re thrown into a difficult situation, we know which model would be the best to apply depending on the context. And we also want to know what the limits of our chosen model are, like when the analysis must shift away from economics toward other disciplines.

Finally, I should say that we are not against neoclassical economics as such.[1] Depending on the context, we think many neoclassical models are incredibly useful. What worries us most of all, however, is when thinking in a neoclassical way becomes the only way economics students are able to think. That’s the serious problem we want to address.

So let me turn back to the goal of this blog, which as I said is to hammer home examples of where conventional economics fails. And I have a pretty good one for this blogpost.

A recently published paper by researchers at Penn and Amherst argues that the extension of unemployment benefits is the primary cause of the jobless recoveries we have recently seen in the US. This is not a new argument. Casey Mulligan over at the New York Times has been yelling the same thing for years. As the argument goes, benefit-receiving unemployed workers are getting such a good deal remaining unemployed that they have little incentive to go out and search for work. Thus if you cut unemployment benefits, the incentives will shift and many of these workers will choose to work again, and hence the jobless recovery ends.

But the glaring fact that Mulligan has had to grapple with is that job vacancies collapsed during the recession and remained low throughout the recovery. The decision by a firm to post a vacancy is on the labor demand side; the decision by an unemployed worker to search for a job is on the supply side. If you cut unemployment benefits, then, yes, you do make workers more desperate to find a job in order to survive – i.e., you increase labor supply. But if there are no job vacancies to be filled (if labor demand is weak), then where are these workers going to go after you cut their benefits?

I have never heard Mulligan give a serious argument as to why the collapse in job openings doesn’t negate his hypothesis. But now in come these researchers from Penn and Amherst. They argue that labor demand and supply are connected: that providing overly generous unemployment benefits not only disincentivizes workers from searching for work, but also disincentivizes firms from posting new job openings!

I don’t want to completely mock this theory, as there might be some truth to it. The argument seems to be that because posting job vacancies is costly to firms, if firms come to believe that the supply of available labor has become disincentivized to work, then they will try to save costs by posting fewer vacancies. Thus if you cut unemployment benefits, you won’t be kicking people out into the street with nowhere to go; firms will like the fact that the unemployed have become more eager to work, and so posting additional job vacancies may now make sense in the cost-benefit calculus.[2]
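The channel is easiest to see in the free-entry condition of a textbook search-and-matching model. This is my notation and a generic specification, not necessarily the paper’s exact setup:

```latex
% Firms post vacancies until the expected cost of filling one equals the
% value of a filled job:
\frac{c}{q(\theta)} \;=\; J, \qquad \theta \equiv \frac{V}{U}
% c = per-period cost of posting a vacancy, q(theta) = rate at which a
% vacancy is filled, J = expected present value of a filled job, V and U =
% vacancies and unemployed. More generous benefits raise workers' outside
% option and hence the bargained wage, which lowers J; firms respond by
% posting fewer vacancies until the condition holds again.
```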

Nevertheless, the authors of this paper completely fail to understand the moral implications of their work. Suppose there is a connection between labor demand and supply in the way the authors posit. The next question is, how strong is the connection? The authors calibrate their model based on some micro evidence, and then find that their model can account for the movements we have seen in the relevant macro variables during the recent jobless recoveries. Importantly, when they adjust their model so as to not extend unemployment benefits, the model predicts much faster job growth, particularly in the current cycle.

While the authors do not make any normative claims about policy, it’s clear that they think the labor market would have been better off if we hadn’t extended unemployment benefits to a record 99 weeks during the Great Recession. And they also presumably think their very fancy model supports this normative view.

But here’s the thing: even if the theory that labor supply and demand are connected is true, if we wrongly estimate the degree to which they are connected and implement policy based on the incorrect estimate, we may do a great deal of moral harm. You see, we in this society have come to collectively endorse the luck-egalitarian view that bad “brute” luck events should not ruin people’s lives in fundamental ways. The overwhelming majority of layoffs these days are due to macro circumstances that workers cannot control. It just seems downright cruel not to help people who fall upon large spells of bad luck, especially when those spells can lead to dislocation, depression or even death.

Opposite to the luck-egalitarian ideal is a ruthless form of libertarianism, one without the Lockean proviso. I’m not saying that the libertarian view is wrong. I’m just saying that it would need to be defended on moral grounds by anyone proposing to cut unemployment benefits in the face of presumably weak aggregate labor demand. A theoretical economic model is not enough.

In short, for these authors to say anything convincing about why we should cut unemployment benefits in order to speed up the jobs recovery, they would need to engage with the risks of wrongly estimating the connection between labor supply and demand – the risks of adhering to libertarianism over luck-egalitarianism.

I have no idea whether the Penn and Amherst authors are indeed radical libertarians who despise the luck-egalitarian view. But regardless, their paper effectively amounts to pushing the libertarian view through the backdoor (via a fancy mathematical model) when in fact the moral argument for kicking people into the street with nowhere to go would probably not be taken seriously in most policy debates. This is an ongoing theme in economics: a great many policy papers totally ignore the moral dilemmas that are wrapped up in the research questions asked and that are arguably far more important to resolve than efficiency considerations alone. Economists often respond by saying that moral considerations fall outside the scope of economics, that economics should only focus on answering if-then efficiency questions – e.g., if unemployment benefits are cut, then what happens to aggregate job growth? But this is not the way it works in practice, I’m sorry. Economists routinely present their results with a frankly arrogant level of certainty, and most of them are utterly ignorant of the moral boundaries they cross when incorrect statistical estimates are used for policy implementation. I mean seriously, read the Penn/Amherst paper: the authors literally assert with full confidence that “countercyclical unemployment benefit extensions lead to jobless recoveries,” excluding any discussion of the asymmetric risks involved with wrongly estimating the link between labor demand and supply.

This is what economics has come to: a profession in which the practitioners are either deliberately misleading us by sidestepping difficult moral questions, or failing to understand the full implications of what they’re doing. This is precisely why we, the students, are revolting. Economics has to be done differently – the stakes are too high.

[1] And no, we’re also not afraid of the math. I myself have an undergraduate degree in math, and I can say with absolute certainty that the math I faced in graduate econ was child’s play compared to my undergraduate travails.

[2] Sounds plausible, right? But one small quibble: the authors never once mention the very real fact that the cost to firms of posting vacancies has been plummeting due to technological change!

Mankiw the Political Philosopher


Greg Mankiw, the chair of the economics department at Harvard, has an instructive article in the New York Times. He tells us that behind all of the models that economists routinely use to sway public policy in one direction or another lies a particular political philosophy, held and encouraged by the modeler herself.

I couldn’t agree more. Economists study social phenomena, which are created and understood by humans. As such, these phenomena are always being manipulated and swayed by new ideas and beliefs. There is thus an inherent feedback between the ideas that social scientists come up with and how those ideas end up affecting and propagating through the very social world these scientists claim to be studying objectively.

Most economists do not understand this important feedback. They literally think they are doing positive science, when in fact the ontology of the social world mandates that their endeavor is a normative one.

So kudos to Greg Mankiw for understanding the inherent normativity of economic theory. When it comes to the rest of his article, however, issues abound.

Mankiw tells us that most policies backed by economic theory are grounded in a crude utilitarian conception of the good life. The goal of utilitarianism is to maximize the overall amount of happiness in society. Economists change this goal slightly and say that it is income that should be maximized, because happiness is non-quantifiable. As such, they persistently promote policies rooted in Kaldor-Hicks theory, which is the basic idea that with a larger overall economic pie, everyone’s income can potentially be increased (or left unchanged) with a little redistribution. If we can pursue a policy that lifts the income of some but doesn’t lower the after-redistribution income of others, why wouldn’t we do that? 
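In compressed form – my own rendering of the criterion, not Mankiw’s – the Kaldor-Hicks test reads:

```latex
% A policy passes the Kaldor-Hicks test if the winners' money-metric gains
% exceed the losers' losses:
\sum_{i \in \text{winners}} \Delta y_i \;>\; \sum_{j \in \text{losers}} \left|\Delta y_j\right|
% so that a transfer could, in principle, leave everyone at least as well
% off as before. Whether the compensating transfer ever actually happens is
% left open -- which is exactly where the trouble starts.
```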

Mankiw says that while the theory is elegant, it is difficult to implement in practice. He notes that the economy is a highly complex and interconnected system, in which it’s impossible to know who the exact winners and losers of a given policy are. Thus you can’t redistribute from one segment of the population to another to achieve Kaldor-Hicks efficiency if you don’t know whose money to take and to whom the money should go. Because of this imprecision, Mankiw argues, it’s better not to attempt the policy and redistribution at all, for the imprecision may cause the effort to do more harm than good. According to Mankiw, the guiding principle of the economist should always be, “first do no harm.”

The problem is that Mankiw’s non-interventionist way of looking at the world is in itself a particular normative view. It is essentially grounded in the libertarian conception of the good life: that people should be left alone by the state, whose only purpose should be to uphold law and perhaps provide a military for national defense. In this world, the state has no right to provide basic public services to individuals, including, for example, health care or educational services.

Does Mankiw realize that he’s implicitly endorsing a certain political philosophy? Maybe he truly doesn’t. But that just points to the sad state of education for economists these days. If economics is inherently a normative science in which practitioners promote particular political philosophies with their models, then the implication is that these practitioners need a strong understanding of political philosophy!

I am of course not advocating for utilitarianism, or saying that Kaldor-Hicks theory is the basis on which all policies should be implemented. And I largely agree with Mankiw that the economy is vastly complex and hard to interpret. But, in my view, the complexity demands deep philosophical thinking as opposed to hand waving about non-interventionism.

To give you an example, Mankiw, with his “do no harm” principle, would seemingly be okay with some technological change in the economy that displaces a large number of workers. Economic theory tells us that technological change should not be pushed against, as it can lead to higher levels of productivity and higher prosperity in the long run. But in the short run, you have the very real problem that certain individuals may lose their jobs because of a technological change – which, by the way, the displaced workers may have done nothing to promote in the first place.

What do you do? Do you attempt to redistribute away from those benefitting from the technological change and toward those displaced by the change, or do you do nothing for fear of doing more harm than good with the redistribution effort? If we take Mankiw seriously, we should do nothing. However, doing nothing in this case is morally problematic on many different levels. If it were truly bad “brute” luck on the part of the displaced workers that led to their joblessness, luck egalitarians would have a huge problem with doing nothing. Rawlsians would take issue with doing nothing if the displacement from the technological change led to an unequal distribution of opportunity for various members of society. And the communitarians would certainly not adopt the do-nothing approach if it degraded certain shared social values that we have come to appreciate, especially if the technological change were to promote, say, a vicious form of materialism.

The point is that political philosophy is tough. It requires reasoning on many different metaphysical levels, and oftentimes there is no clear answer to a given dilemma. But the process of considering the merits of many different philosophical views, rather than passively endorsing the libertarian view, is hugely important.

Greg Mankiw has rightly pointed out that behind every economic model lies a philosophical view. But his subsequent comments are indicative of the fact that most economists today are woefully ignorant of the diversity of views that political philosophers have acknowledged and debated over the years. A fruitful way to rethink economics would place this diversity at the center of every economics curriculum.

Think Systematically


It is well understood in physics and mathematics that if you want to understand why a system is the way it is, you need to look outside the system. For example, a pot of boiling water on a stove is a system. If you imagine yourself as a molecule within the water, it’s hard to understand why the system is tossing you around relentlessly. Why aren’t the molecules settling down? What’s driving their chaotic movements? Only when you look outside the system do you understand that it is the heat coming from below the pot that is driving the system.

Similarly, there are many problems in mathematics that can’t be solved within the system in which they are presented. For example, while all quadratic, cubic and quartic equations can be solved with formulas using the usual operations of addition, subtraction, multiplication, division, exponentiation and roots, the Abel-Ruffini theorem tells us that no such general formula exists for quintic equations. Indeed, if you want to solve an arbitrary quintic, you have to venture outside that operational system and use more powerful tools, such as elliptic functions or numerical methods.
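A quick illustration, sketched in Python: the quintic x^5 - x - 1 = 0 is a standard example with no solution in radicals, so we step outside that system and solve numerically:

```python
import numpy as np

# Coefficients of x^5 + 0x^4 + 0x^3 + 0x^2 - x - 1, highest degree first.
coefficients = [1, 0, 0, 0, -1, -1]
roots = np.roots(coefficients)                     # numerical, not radicals
real_roots = roots[np.isclose(roots.imag, 0)].real
print(real_roots)                                  # ~1.1673, the one real root
```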

I reckon economics is no different. Complexity economists like Brian Arthur are on to something when they tell us that economic phenomena come about through the interaction of agents in highly complex nonequilibrium systems. Economic agents are just like the molecules moving around in the boiling pot of water. The only difference – and this is a big difference – is that agents in the economy have the capacity to learn in a self-referential way, which means they can evolve and uniquely change their strategies so as to help them accomplish their goals. The water molecules, while moving around chaotically, are still bound deterministically. Since we really don’t know exactly how agents in the economy think and learn, for all intents and purposes their interactions are nondeterministic.

Still, I think the general principle from physics and mathematics outlined above can be useful for economics. That is, if you want to understand why an economic system is the way it is, you need to look outside the system.

I can perhaps make this point with an example. One of the reasons central banks have historically been able to have large effects on the macroeconomy is that they usually come in from outside the system with their monetary policy tools.

For example, consider the last two bubbles afflicting the US economy: the tech bubble burst in 2000, the housing bubble in 2006. Each collapse occurred shortly after the Fed raised interest rates. I find the following chart highly instructive:

[Chart: the fed funds rate against the 2000 and 2006 bubble peaks]

The Fed raising interest rates in each case is much like turning on or off the stove on which the boiling pot of water sits. When you turn the stove on or off, you change the system. When the Fed raises or lowers interest rates, it changes the system.

We can debate how, precisely, higher interest rates cause bubbles to collapse. I’m very much in agreement with George Soros, who believes that bubbles collapse when a moment of collective reassessment occurs, after which everyone realizes that current prices in the market cannot possibly be supported by the fundamentals. Perhaps rate hikes by the Fed cause market participants to collectively reassess. Whatever the precise mechanism, the point is that central banks usually have big effects on the economy by coming in from outside the system with their actions.

Which brings me to today, where I’m afraid to say that central banks may have lost some of their thunder. Normally, the limitation of central bank action in the current environment is presented as having something to do with the zero lower bound on short-term interest rates. This is the story that Paul Krugman has been telling for years. But once you start thinking in terms of systems, you start to see things a bit differently.

Why has the Fed seemingly lost its ability to steer the real economy? Well, duh … because the Fed has become trapped inside the system!

This all became clear to me when the Fed started to provide forward guidance about the likely path of future interest rates. Initially the Fed told us that rates were likely to stay low for a long time even after the economy improved. Then it gave us a specific time frame in which rates were likely to start normalizing. And then the big backtrack came: the Fed started to emphasize that rates would only normalize along the Fed’s explicit calendar schedule provided that economic conditions improved in the way that the Fed predicted.

This was monumental in that it showed that the Fed is now just like an ordinary investor, reacting to what happens within the system. It can no longer come into the system from the outside like it did in 2000 and 2006, causing a Sorosian moment of collective reassessment. It has lost its element of surprise, and thereby its reputation among market watchers as a holder of truth. Never mind the reality that the Fed has always been more or less clueless about what’s really going on in the economy; its ability to come into the system from the outside at least had the effect of convincing very many people that the Fed knew something they didn’t. Now it’s entirely clear: the Fed is reacting within the system, just as everyone else is.

Okay, I’m exaggerating my case a little. The taper tantrum beginning last year shows us that the Fed still has the element of surprise and the ability to change the system with its actions. But I think the general principle is still relevant: that the economy needs to be understood as a system, which is only going to go through meaningful phase transitions when something outside the system acts upon it. Note also that central banks probably aren’t the only institutions with the ability to affect the system from the outside; I reckon certain large investment houses hold this superpower as well, as do entrepreneurs when they bring to the market a revolutionary technology.

Oh, and this way of thinking about systems and about how they change when acted upon from the outside isn’t only relevant to the macroeconomy and markets. Microeconomic systems exist too. For example, a group of same-seniority workers interacting within a business can be thought of as a system, which may only change in a substantive way when a higher-seniority manager comes in from outside the system with a new rule or process.

The point is that if you want to understand when and how the economy may change, you need to think about the numerous ways in which it could be affected from the outside. Admittedly, there is probably a natural within-system evolutionary process that shapes the way the economy is and where it’s headed at any given moment. But the really big changes occur only when someone hits the on or off switch on the metaphorical stove.

Complexity Economics, Bots, and All That


I’m excited about machine learning. Computers are advancing quickly. They now have the ability to learn and conquer simple video games, including many of the old Atari games, and we are seriously not too far from a world where automated cars populate our roads. The latter will save many lives by reducing traffic accidents, not to mention decreasing highway congestion and thereby also reducing greenhouse gas emissions.

It is indeed a good time to be a computer scientist. Progress is being made, and the future looks encouraging. The same, sadly, cannot be said about economics.

Progress in computer science, however, is producing all sorts of speculation that needs to be brought back down to earth. The speculation is most emblematic in Spike Jonze’s recent film, “Her”, which explores the romantic relationship between a computer program and a human being. The computer program in the movie is so lifelike that its spoken interactions are indistinguishable from those of humans. Indeed, the computer program around which the film is centered has reflexive consciousness – and is human in every respect except for the fact that it doesn’t have a body to house its thoughts and actions.

While “Her” seems a bit far-fetched, the New York Times apparently didn’t think so. It ran an article on the movie making the philosophical point that human minds, at the foundational level, are not very different than computer minds:

[O]ur best empirical theory of the brain holds that it is an information-processing system and that all mental functions are computations. If this is right, then creatures like Samantha [the computer program from the movie] can be conscious, for they have the same kind of minds as ours: computational ones.

The article then goes on to speculate about whether it would be possible to create scans of human brains and then upload them into the digital universe. If so, then humans could ostensibly live forever.

While these sorts of questions are fun to think about, I wouldn’t get your hopes up. The reason is that human minds, at least in my view, are fundamentally very different from computer minds.

First, it is generally understood that one of the deep limitations of computation is self-reference – this is a big theme, by the way, in Noson Yanofsky’s recent book, which everyone should read. The famous example is the Halting problem: can a program inspect another program, together with its input, and determine in advance whether that program will eventually stop or run forever? Turing proved that no program can do this in general, and the proof turns precisely on self-reference – feed the would-be decider a program constructed to contradict the decider’s own verdict about itself, and you get a contradiction. Computers must execute code step by step, algorithmically, without the ability to stand back, in a self-reflexive way, and assess the structure of the code holistically.
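The proof can be sketched in a few lines of Python. The decider below is hypothetical – the whole point of the theorem is that it cannot actually be written:

```python
def halts(program, argument):
    """Pretend oracle: True if program(argument) eventually stops."""
    raise NotImplementedError("no such decider can exist -- that's the theorem")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:
            pass        # loop forever
    return              # otherwise, halt immediately

# troublemaker(troublemaker) halts if and only if halts() says it does not,
# so any halts() that always answers correctly is a contradiction.
```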

Humans can step back and view their surroundings holistically. They can refer to themselves in the world as conscious agents, without going into an endless feedback loop when attempting to solve self-reflexive problems. Indeed, Descartes’ mind did not break down when he famously referred to himself in his Discours de la Méthode (1637).

It’s important to point out that this is not a limitation of processing power. No amount of increased processing power will ever allow computers to solve problems that require this kind of self-reference. Such problems are simply outside the realm of solvable computer programs, based on how computers think and operate. The ability to self-refer, in my view, is what makes biological consciousness unique. Indeed, I think the degree of self-referring ability is what differentiates a human mind from that of, say, a cat. Cats can probably deal with some self-reflexive phenomena, but not as many as chimps can; and humans can probably self-reflect considerably more than chimps can.

Why am I talking about this, and how does it relate to economics? Well, there is a growing program in economics called complexity economics, which attempts to model the economy as an algorithmic, non-linear system, with all sorts of persistent feedback effects between learning agents at the micro level and the system’s macro properties. The complexity folks think that the combination of immeasurable risk plus never-ending propagation from technological change means that the economy is never in an equilibrium, and thus it should not be modeled in a general-equilibrium framework. I could not agree more with these views.

The issue I have with the complexity folks, however, relates to how they go about understanding the complexity. They think we should run simulations of markets on computers to see if there are distinguishable patterns in how the virtual markets evolve. In one of the classic simulations, Kristian Lindgren (1991) constructed a computerized tournament in which software agents competed in randomly chosen pairs to play a repeated prisoner’s dilemma game. The agents have learning capability, which means they can adjust their strategies to perform better in future iterations of the game. After thousands and thousands of iterations, some patterns emerge: in short, the system as a whole goes through periods alternating between stability and instability. Importantly, the system never settles down and is constantly at risk of going through a phase transition from either stability to instability, or vice versa.

This is supposed to give us some insights about how economies of interacting human agents evolve. Perhaps it does. But before we jump on this bandwagon, let’s recall the whole discussion earlier about self-reference. A computerized market economy is never going to be nearly as complex as the actual economy, because the actual economy is populated by self-reflexively conscious individuals, and the simulation is not. Even though the agents in Lindgren’s system “learn” how to play the prisoner’s dilemma game better after each iteration, they still operate algorithmically. They don’t have the ability to refer to their own existence, reflect on the lives they’ve lived and their positions in the world, and make post-self-reflection decisions. It’s hard for me to even put words together to describe self-reflexive living, but that’s precisely the point: our conscious experience as humans is unique and isn’t reducible to code. We have free will and can choose in ways that don’t follow a script, so to speak.

In my opinion, the better approach to dealing with the complexity would be to follow in the footsteps of the behavioral guys, trying to understand all of the biases that systematically affect actual individual decision-making. The behavioral program should probably spend more time thinking about feedback effects between the micro and the macro – the complexity folks are on to something here, for sure – but that doesn’t mean humans should be reduced to bots. It’s of course challenging and costly to analyze large systems of humans experimentally. The lure of computer systems is that they’re quick and cheap to compute. But that’s not an argument for making a wholesale shift away from analyzing actual decision-making and toward looking at computerized economies.

As with most things, though, the balance is probably somewhere in between. Let’s embrace complexity economics because it smartly acknowledges the inherently complex, disequilibrium forces that characterize markets and economies – as opposed to the basically absurd neoclassical world, where everything is stable, understandable and under control. But let’s not take these computerized simulations too far. They seem to ignore a fundamental characteristic of what it means to be a human: namely, the ability to self-reflect.

Fundamental Value in the Social Sciences


To understand why some phenomenon is the way it is, you need to assess its fundamental value. This is reasonably straightforward in the natural sciences. For example, if you want to understand why that bridge over there isn’t collapsing despite the fact that those big trucks are driving over it, then you need to assess the bridge’s ability to handle the compression forces that the trucks are exerting on it.[1] If the compression forces are too great, then the bridge will buckle and collapse. Figuring out whether the bridge will buckle and collapse is reasonably straightforward because:

  1. We all agree that the bridge’s ability to handle compression forces is fundamental to its state.
  2. We all agree on how to best measure the compression forces (use newtons).
  3. Our actions toward the bridge once we measure its ability to handle compression forces don’t end up meaningfully altering its ability to handle compression forces.
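To see how mechanical points 1 through 3 are here, note that the engineer’s check boils down to a couple of inequalities. This is simplified to a single column member; real bridge analysis is of course far more involved:

```latex
% Compression check and Euler buckling load for one structural member:
\sigma = \frac{F}{A} \le \sigma_{\text{allow}},
\qquad
F \;<\; F_{\text{cr}} = \frac{\pi^2 E I}{(K L)^2}
% F = compressive force from the trucks, A = cross-sectional area,
% E = elastic modulus of the material, I = second moment of area of the
% section, KL = the member's effective length. Stay inside both limits and
% the bridge stands -- regardless of what anyone believes about it.
```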

Assessing fundamental value in the social sciences is not at all straightforward, because:

  1. We sometimes don’t agree on what the fundamental value of a given social variable is.
  2. Even when we do come to an agreement, we each sometimes measure the fundamental value differently, in ways that aren’t easily convertible.
  3. When we then act on our measurements, our actions sometimes end up changing the fundamental value on which we acted.

For some social variables, we can’t even get past 1. For others, we run into measurement problems (2). And certain social variables have a fundamental value that is highly responsive to our actions (3). Importantly, only when we get through 1, 2 and 3 with ease and agreement will a stable equilibrium for a given social variable emerge. It is of course very rare that we get through 1, 2 and 3 with ease and agreement; the more likely case in the social sciences is that 1, 2 and/or 3 will introduce difficulties in our assessment of fundamental value, meaning that disequilibrium is probably the general state of most social variables.

So let’s look at some examples. An example of a social variable where we can’t even get past 1 might be a complex financial derivative, such as a collateralized debt obligation (CDO) whose payment tranches are comprised of bundles of subprime mortgages. As Ricardo Caballero has mentioned, one of the challenges with complex CDOs is that we don’t have a long history of pricing them, so we don’t really know what their fundamental value is (or should be). Subprime mortgages are relatively new loans, which only grew in popularity in the late-1990s. Thus, we don’t know much about, for example, average default rates on subprime mortgages over long periods of time. Given this limited knowledge, those pricing complex CDOs during the boom years simply assumed subprime mortgages to have default rates similar to those on prime mortgages. That assumption didn’t turn out too well.
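A toy calculation shows how much rides on that one assumption. The numbers below are invented purely for illustration, not market data:

```python
# Expected loss on a mortgage pool under an assumed "prime-like" default
# rate versus a realized subprime-bust default rate.
pool = 100_000_000          # pool principal, $
recovery = 0.60             # assumed recovery on each defaulted mortgage

for label, default_rate in [("assumed (prime-like)", 0.02),
                            ("realized (subprime bust)", 0.25)]:
    expected_loss = pool * default_rate * (1 - recovery)
    print(f"{label}: ${expected_loss:,.0f} expected loss")
# A structure priced near par under the first assumption is deeply impaired
# under the second -- and with no pricing history, point 1 never gets settled.
```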

An example of a social variable where we can get past 1 but have trouble with 2 relates to the solvency of a government. We all tend to agree that a government’s debt-to-GDP ratio is a key factor determining solvency.[2] But how do we measure the debt-to-GDP ratio?

Take a look at the balance sheet for the Puerto Rican government (pages 34-35). Can you tell me what the total stock of the Puerto Rican government’s debt is? The Puerto Rican government has 56 different component units (i.e., subsidiaries). Many of them issue their own debt. Some of them are backed financially by Puerto Rico’s primary government; others might not be, since it depends on the legal interpretation of the government’s financial mandates. Also, should the denominator of the ratio really be GDP? Or should it be GNP? Due to certain tax laws, a lot of phantom corporate income has traditionally been booked in Puerto Rico even though it is generated elsewhere.[3] As such, not all of the income booked on the island benefits the local economy. We might therefore want to look at GNP rather than GDP. Depending on whether you look at GDP or GNP and on which component units you include in your debt calculation, you could find a debt ratio as low as 60% or one as high as 110%. What’s the fundamental value of Puerto Rican debt? Is the Puerto Rican government solvent or not?
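Here is the measurement problem in miniature. The figures are hypothetical stand-ins – the actual accounts are precisely what is in dispute – but they show how the ratio swings with two bookkeeping choices:

```python
# How the "debt ratio" moves with what you count and what you divide by.
debt_narrow = 43e9     # primary-government debt only, $ (hypothetical)
debt_broad = 70e9      # plus component-unit debt one might count (hypothetical)
gdp, gnp = 72e9, 64e9  # GNP < GDP where phantom corporate income leaves

for debt in (debt_narrow, debt_broad):
    for name, denom in (("GDP", gdp), ("GNP", gnp)):
        print(f"debt ${debt / 1e9:.0f}B over {name}: {debt / denom:.0%}")
# Output runs from roughly 60% to roughly 110% -- one government, four
# different "fundamental values".
```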

Next, consider a share of stock in a company. We all agree that a company’s earnings per share is a fundamental determinant of the company’s stock price. Furthermore, a company’s earnings per share is easily measurable.[4] So if we can all understand and measure the fundamental value of a company’s stock, then the stock price should always be in equilibrium, right?

Not quite. We still have point 3 to deal with. When we find ourselves in a situation where we measure the price of a stock to be below that implied by its fundamental value, we usually take action: namely, we purchase the stock. But everyone tends to do this at the same time, because again everyone can understand and measure the stock’s fundamental value. The problem is that if enough people act on their correct measurement of the stock’s fundamental value, their collective actions may end up changing the stock’s fundamental value. When people pile into a stock and the price of the stock starts rising fast, the media usually notices and starts talking highly about the stock. This affects people’s perceptions of the company beneath the stock, causing greater demand for the company’s products. As people then purchase more of the company’s products, the company’s earnings start to rise, increasing the fundamental value of the company’s stock. A game of predicting the expectations of others can then sometimes drive the company’s stock price even higher: people start to purchase the stock not because they estimate the stock’s fundamental value to be above the stock’s price, but rather because they expect purchases by others to drive up earnings for the company, eventually justifying the higher stock price with stronger fundamentals. The expectations game can sometimes lead to huge distortions, however. When people’s expectations get so far ahead of the fundamentals that buyers soon realize that the fundamentals will never rise to justify the stock’s exorbitant price, everyone sells en masse – and the bubble collapses.

This reflexive process between people acting on the fundamentals of a company’s stock and the effect their actions have on the very fundamentals on which they’re acting is quite different from the bridge example described earlier. If you believe the bridge has the capacity to handle the compression forces exerted by your truck when you drive over it, you will act on that belief and drive over the bridge. When you do so, your actions don’t really change the bridge’s fundamental ability to handle compression forces.[5] Thus, the bridge’s ability to handle compression forces is in a stable equilibrium, not interrupted by the beliefs and actions of people. Is a stock’s price ever in a similar type of equilibrium?

Moreover, is such an equilibrium ever possible for any social variable? I’m really not sure, to be honest. If it is, it is extremely rare. Again, if a social variable is going to settle into a stable equilibrium, then the variable’s fundamental value must be: (1) obvious to everyone, (2) easily measurable and (3) unchanged when we collectively act on our beliefs about the variable.

Maybe there are certain stocks that pass all three tests. The media probably doesn’t get into a frenzy when well-known, blue-chip stocks like GE experience rapid price changes. So maybe there isn’t much of a feedback between the actions people take on GE’s stock and the stock’s fundamental value. As such, maybe stocks like GE are in stable equilibria most of the time. I’d be interested to hear a trader’s thoughts on this.

A couple of concluding points: I’ve kept this discussion limited to financial variables because they are the easiest to describe. However, there’s no reason to think that everything I’ve said doesn’t generalize to other, perhaps more important, social variables. For example, it is very common for economists to say that “the economy is in an equilibrium.” In fact, the starting assumption behind nearly every mainstream macroeconomic model is that economies tend toward equilibrium, which is disrupted only by an exogenous shock.

I honestly have no idea what this could possibly mean – despite the fact that I’ve, ironically, solved many mainstream macroeconomic models in the past. If an economy is in an equilibrium, then it must be operating at its fundamental value, which also has to be stable. Okay, then what is the fundamental value of an economy? Perhaps it is the rate of potential GDP growth. Okay, how do we then measure potential GDP? Do we use a production function? Do we look at inflation and unemployment differentials? Do we apply some statistical filter to actual GDP?

Suppose we agree that using a production function is the best way to estimate potential GDP, which by the way is unobserved. This means that we have to measure the entire capital stock, the total number of hours worked in the economy and how productive our inputs are. This seems like a formidable task, which will likely yield different results by different researchers – and indeed it usually does.
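For concreteness, the textbook version of that production-function approach looks something like the following. This is one common specification – a Cobb-Douglas of the sort the CBO’s potential-output estimates are built on – and the exact form varies by researcher:

```latex
% Potential output from a Cobb-Douglas production function:
Y^{*} \;=\; A^{*} \, (K^{*})^{\alpha} \, (L^{*} h^{*})^{1-\alpha}
% A* = trend total factor productivity, K* = capital stock, L*h* = trend
% hours worked, alpha = capital's share of income (commonly around 0.3).
% Every starred object on the right-hand side is itself unobserved and must
% be estimated -- which is why different researchers hand back different Y*.
```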

(Fun question: What’s easier, estimating potential GDP or calculating Puerto Rico’s debt-to-GDP ratio?)

Now suppose everyone somehow got similar calculations of potential GDP from their production functions. Presumably people would then act on their calculations – e.g., capitalists may choose to build additional factories if the calculations are optimistic. Collective action might then change the economy’s potential GDP, by for example expanding the capital stock at a rapid pace. Could the same sort of expectations game described earlier with regard to stocks come into play? Might economies never reach a stable equilibrium because of the reflexivity between agents’ assessments of, and actions toward, economic fundamental value and the fundamental value itself? Do we really need exogenous shocks to explain volatility in the business cycle, or is volatility embedded in the system due to the complex ontology of the social world?

Very many questions, for which our most famous economists have provided very few answers.

[Final note: My views in the post are heavily influenced by this paper by George Soros – though the 1-2-3 taxonomy is entirely my own creation.]

[1] There are also torque and tension forces to consider, but let’s keep the example simple and look only at compression.

[2] Though I should mention that the importance of the debt-to-GDP ratio as an indicator of fiscal solvency probably depends heavily on the monetary regime of the government’s currency, as Paul Krugman has recently argued. Furthermore, Dean Baker has argued, quite persuasively in my opinion, that a government’s debt-to-GDP ratio is conceptually a very poor indicator of solvency.

[3] This has been particularly true for the pharmaceutical industry, though more recently many of these favorable tax laws have expired.

[4] Yes, sometimes you may want to look at diluted earnings per share, and other times we need to check whether a company’s earnings per share is artificially inflated due to earnings manipulation. But these are relatively small caveats compared to those in the Puerto Rican debt example.

[5] I mean, the action of driving over the bridge might change the bridge’s fundamental value slightly due to wear and tear, but the change from any single truck is minuscule.

History v Stats


Is there a difference between history and statistics? Schumpeter seemed to think there is. In fact, he claimed that if he could use only one of history, statistics or theory to understand the economy, he would choose history:

Of these fundamental fields [history, statistics and theory], economic history is by far the most important. I wish to state right now that if, starting my work in economics afresh, I were told that I could study only one of the three but could have my choice, it would be economic history that I should choose.

History and statistics serve a common purpose: to understand the causal force of some phenomenon. It seems to me, moreover, that statistics is a simplifying tool to understand causality, whereas history is a more elaborate tool. And by “more elaborate” I mean that history usually attempts to take into account both more variables as well as fundamentally different variables in our quest to understand causality.

To make this point clear, think about what a statistical model is: it is a representation of some dependent variable as a function of one or more independent variables, which we think, perhaps because of some theory, have a causal influence on the dependent variable in question. A historical analysis is a similar type of model. For example, a historian typically starts by acknowledging some development, say a war, and then attempts to describe, in words, the events that led to that development. Now, it is true that historians typically delve deeply into the details of the events predating the development – e.g., by examining written correspondence between officials, by reviewing historical news clippings to understand the public mood, etc. – but this simply means that the historian is examining more variables than the simplifying statistician. If the statistician added more variables to his regression, he would be on his way to producing a historical analysis.
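In code, the statistician’s model is little more than this – a minimal sketch of ordinary least squares on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: a dependent variable driven by two regressors plus noise.
x1 = rng.normal(size=n)  # e.g., a policy variable
x2 = rng.normal(size=n)  # e.g., a demographic variable
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# The "model": y as a linear function of the variables we chose to include.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # recovers roughly [0, 2, -1]
```

Adding more columns to X is the statistician’s version of the historian digging through more sources.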

There is, however, one fundamental way in which the historian’s model is different from the statistician’s: namely, the statistician is limited by the fact that he can only consider precisely quantified variables in his model. The historian, in contrast, can add whatever variables he wants to his model. Indeed, the historian’s model is non-numeric.[1]

Now, I said that the statistician considers “precisely quantified” variables in his model for a reason. The truth is that the historian may also try to quantify his chosen variables. For example, the historian may say that cultural influence “played a strong role” in the events leading up to the development; or that the development “made sense” for political or strategic reasons. In each of these examples, the historian is trying to quantify something that is perhaps inherently unquantifiable.

The statistician does not bother with such imprecisely quantified variables.[2] The statistician is determined to deal only with precisely quantified variables, such as pounds of force, or kilowatts of energy, or dollars per capita.

And oftentimes the statistician’s model works great! Which is wonderful, because when it works well we don’t have to take time to grind through all of the nitty-gritty details of history. Doing statistics is much less time consuming than looking at history. A successful statistical model is thus a vindication of Occam’s razor.

But statistical models are not always successful. Sometimes, historical models do a better job of understanding causal forces. Why?

It is my view that what differentiates whether history or statistics will be successful relates to the subject area to which each tool is applied. In subjects where precisely quantified variables are all we need to confidently determine the causal force of some phenomenon, statistics will be preferable; in subjects where imprecisely quantified variables play an important causal role, we need to rely on history.

It seems to me, moreover, that the line dividing the subjects to which we apply our historical or statistical tools cuts along the same seam as does the line dividing the social sciences from the natural sciences. In the latter, we can ignore imprecisely quantified variables, such as human beliefs, as these variables don’t play an important causal role in the movement of natural phenomena.[3] In the former, such imprecisely quantified variables play a central role in the construction and the stability of the laws that govern society at any given moment.[4]

If we want to, for example, understand what caused a dyke protecting some town to collapse, we can do so confidently by looking at only the relevant precisely quantified variables – e.g., the water pressure of the sea, the height of the sea, the force of the wind, etc. We don’t need to understand the culture of those in the town or the history of the town to find out why the dyke collapsed. Now, I should note that these quantified measures such as water pressure, height and force don’t, in my opinion, exist in a Platonic sense; they are simply measuring representations that we humans created to help us understand and manipulate the natural world to our liking. The key point is that these measuring representations are one-way directional and stable; when we try to manipulate the natural world, it doesn’t try to manipulate us back, as the social world inevitably always does.

If we want to understand why, in contrast, microcredit led to more development in this town versus that town, it’s not enough to just look at the precisely quantified variables available to us, such as the amount of microcredit disbursed or the relative differences in poverty between the two towns. The reason is that there may be significant cultural or social differences between the two towns that distort our efforts to understand the causal force of microcredit on development. If we ignore these cultural or social differences in our statistical model, then our model will give us a biased conclusion regarding causality. When we apply these biased results to other situations, we should not be surprised if the disbursement of microcredit does not produce the results we hoped for.
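Here is a toy simulation of that bias – the “culture” variable and all of the coefficients are hypothetical, chosen only to show the mechanics of omitting a relevant variable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: an unmeasured "culture" factor raises both
# microcredit take-up and development directly.
culture = rng.normal(size=n)
microcredit = 0.8 * culture + rng.normal(size=n)
development = 0.2 * microcredit + 1.0 * culture + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple bivariate OLS regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# The true causal effect of microcredit is 0.2, but the naive regression
# that leaves culture out reports something closer to 0.7.
print(ols_slope(microcredit, development))
```

Controlling for culture would recover the true effect – but “culture” is exactly the kind of variable we cannot precisely quantify, which is the whole problem.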

When we run into these problems of obtaining biased conclusions due to leaving relevant imprecisely quantified variables out of our statistical model, we can take one of two paths: we can try to precisely quantify the imprecisely quantified variables and then consider them in our model, or we can try to complement the statistical model with a historical analysis.

On the first path, I’m not very hopeful. The imprecisely quantified variables are socially constructed. They are created by humans and evolve in complex ways that we don’t fully understand. They surely relate in some ways to human psychology, but we don’t really know how psychological factors affect human behavior, not only at the individual level – sorry, my fellow economists, but the situation is much more complicated than what rational choice theory assumes – but also crucially at the group level.[5]

Our best attempt to understand how these social variables interact with our chosen dependent and independent variables is to look at the social variables in a historical light.

To give you an example, Piketty’s recently published book, Capital, does a very good job of mixing historical analysis with statistical trends. He documents the trends in demographic growth over the past three or so centuries, and basically finds that global population growth has followed a bell curve, peaking in the mid-20th century and declining thereafter. The central tendency based on these data is that global population growth will continue to slow and will eventually stagnate sometime in the second half of this century.

Piketty then says that there are three main causal factors driving population growth: life expectancy rates, infant mortality rates, and people’s preferences regarding fertility. The first two are highly dependent on advances in medicine. Forecasting such advances is no small matter, but Piketty acknowledges that it’s way easier than forecasting people’s preferences regarding fertility. How many children people choose to have in the future is highly dependent on numerous social factors, such as cultural influences, religious preferences and politics and policy. Any forecast excluding an analysis of such factors is likely to have a very high standard error, which Piketty smartly acknowledges.

The point is that when we are trying to understand causality in the social world, statistics is not going to be enough – we will need to complement our analysis with historical narratives about how societies have evolved. Perhaps this is why Schumpeter preferred history to statistics: he recognized that economists study the social world, which is much more complex than the natural world and thereby calls for the use of different tools in our quest to understand causality. He was far ahead of his time.

[1] But not necessarily non-parametric, I would think. Indeed, whether they realize it or not, historians probably make certain implicit probabilistic assumptions about how developments usually unfold before any research is conducted.

[2] Well, okay, this is not entirely true. Many economists have tried to quantify things like cultural and social influences by creating rank indexes measuring, for example, people’s attitudes toward government or markets/free trade; but such indices are highly imprecise, and nobody really takes them with more than a grain of salt, at least in my experience.

[3] While humans can of course alter the natural world with their beliefs regarding what they would like to achieve, they can’t change the natural world’s fundamental laws.

[4] Indeed, whether the social world will continue to evolve in this way or that way depends entirely on whether humans believe it will evolve in this way or that way. I wouldn’t even, therefore, call such tendencies “laws,” as they are highly conditional on people’s beliefs, which are always changing in complex ways that we don’t fully understand. And perhaps we can’t even fully understand them! We researchers, after all, have our own beliefs about the evolution of the social world; and, as such, we are part of the believing system, a system out of which we cannot step to better understand its fundamental properties.

[5] Furthermore, even if we could accurately quantify a variable such as cultural influence, its effect on development probably evolves in a very non-linear way, and we don’t (yet) have very accurate statistical tools to deal with such complex non-linearities. Average estimation techniques are therefore going to give highly misleading results.

What is the Justification for Economic Growth?


We in America have gone all in on the economy. It has been our number one priority for years, a priority that President Obama enthusiastically put front and center in his recent State of the Union address.

But what is the philosophical justification for a faster-growing economy? Are we truly better off when the size of our economy, as measured by gross domestic product or total income, is bigger?

Let’s talk about these important questions in this post. A starting point is to note that the goal of economists, in their pursuit of improving the welfare of society, is always to aim for a Pareto improvement: a change that improves the welfare of at least one person while lowering no one else’s. This is such a vague goal – kind of like the vagueness of rational choice theory – that it’s almost impossible to disagree with. Of course we would all like to improve the welfare of some without lowering the welfare of others. The interesting question, however, is: how exactly do we do that?

The next step by economists is to essentially associate welfare with income. If there is a change in the economy that increases the income of some and doesn’t lower the income of anyone else, then that’s a good thing, economists argue. This jump in reasoning is problematic because it assumes that relative differences in income don’t lower people’s welfare, when all of the micro evidence suggests the opposite. But let’s stick with the economists for now and agree that a change in the economy that lifts the income of some and doesn’t lower the income of anyone else would indeed be desirable.

The question then is, how do we make such changes? Alas, nobody has figured this out yet, and it doesn’t appear that such changes are even possible. In fact, the consensus view among economists is that there are always winners and losers whenever the economy goes through a change. It is simply not possible to find a policy or a mechanism that lifts the income of some and doesn’t lower the income of anyone else.

Nevertheless, that has not stopped economists from marching forward. Facing this reality, they’ve concluded that Pareto improvements are still “potentially” possible if the amount gained by the winners of some change in the economy exceeds the amount lost by the losers. In these situations, it is still theoretically possible to arrive at a Pareto improvement if the winners from the change adequately compensate the losers in a way that makes the losers no worse off from the change. In income terms, this means that if we can find changes in the economy that increase the income of some by more than the income lost by others, then these changes would be desirable; for, with some redistribution from the winners to the losers, we could lift the income of some without lowering the income of anyone else.
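To make the two criteria concrete, here’s a minimal sketch using income vectors as the welfare proxy – all the numbers are hypothetical:

```python
def is_pareto_improvement(before, after):
    """At least one person better off, and nobody worse off (income proxy)."""
    return all(a >= b for a, b in zip(after, before)) and any(
        a > b for a, b in zip(after, before))

def is_potential_pareto_improvement(before, after):
    """The "potential" version: winners' gains exceed losers' losses, so
    compensation *could* make everyone whole -- whether it actually
    happens is a question of politics, not arithmetic."""
    return sum(after) > sum(before)

before = [50, 50, 50]
after = [80, 55, 40]  # one big winner, one modest winner, one loser
print(is_pareto_improvement(before, after))            # False
print(is_potential_pareto_improvement(before, after))  # True: 175 > 150
```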

That, ladies and gentlemen, is the basis for pursuing policies that increase the size of our economy.[1] Higher net economic growth can make some better off without making anyone else worse off. Pretty clever, huh? Unfortunately, issues abound.

The first problem relates to measurement. How, exactly, do we tell whether a change in the economy will increase the income of some by more than the income lost by others? Well, one thing we can do is ask people directly. For example, we can ask, “How much would you be willing to pay to see this park built over there?” If the actual costs of the park are lower than what people are collectively willing to pay to see it built, then we know that the park would create net economic value, so we should build it. In this example, the people who pay taxes to see the park built but who don’t value the park are the losers, to whom we will redistribute money in other ways in the future to offset their losses.

One problem with this logic is that our political system doesn’t always redistribute money in the future to offset the losses of those who lose. If the winners have exorbitant political power, as they almost always do, they will prevent such redistribution. The concept of “exorbitant political power,” however, is not something that falls under the purview of neoclassical economics, so most economists ignore this important qualification.

But there is a more fundamental problem with using willingness to pay measures to understand who gains or loses what from changes in the economy. In short, some policies don’t just raise the income of some and lower the income of others in a vacuum; they also involve other complicated changes, many of which end up violating people’s rights in some way or another. When rights are involved, we shouldn’t ask how much people would be willing to pay to see some change instituted; we should be asking the losers how much they would be willing to accept to see the change instituted. There is a fundamental difference between willingness to pay and willingness to accept.

For example, suppose the change we are considering is whether we should open up some industry to international trade. A domestic worker might oppose this change, as it may cause her to lose her job. In turn, if this worker’s healthcare coverage is tied to her job, she might lose coverage from the change, and we may see this as a violation of her right to normal functioning and equality of opportunity. Moreover, this violation of rights can be seen by looking at willingness to pay/accept differentials. If we ask this worker how much she would be willing to pay to prevent her industry from being exposed to international trade, she may reply by saying, “All of my wealth.” After all, she doesn’t want to lose her healthcare coverage, as that may cause her to feel as though her ability to function normally in society has been violated. This violation may be reflected if we asked her not how much she would be willing to pay to see trade barriers upheld, but rather how much she would be willing to accept to see trade barriers torn down. To the latter question, she may respond by giving a much higher figure – one that would allow her to buy her own individual healthcare coverage in the private market, I suppose. Importantly, her differing responses to the willingness to pay or accept questions may lead to different conclusions about whether we should expose her industry to international trade. In other words, whether or not tearing down the trade barrier constitutes a potential Pareto improvement depends on how we measure the losses inflicted upon the losers of increased trade.
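Here is a minimal sketch of how the verdict can flip depending on which measure we use – every dollar figure below is hypothetical:

```python
def net_benefit(gains_to_winners, losses_to_losers):
    """Kaldor-Hicks test: do winners' gains exceed losers' losses?"""
    return sum(gains_to_winners) - sum(losses_to_losers)

gains = [120_000]        # winners' gains from tearing down the trade barrier

# The loser's stake, measured two different ways (hypothetical figures):
wtp_to_block = [60_000]  # willingness to pay -- capped by her wealth
wta_to_allow = [250_000] # willingness to accept -- enough to replace her
                         # job, healthcare coverage, and sense of security

print(net_benefit(gains, wtp_to_block))  # +60,000: looks like an improvement
print(net_benefit(gains, wta_to_allow))  # -130,000: the same change now fails
```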

To avoid these sensitive rights issues, economists generally ignore willingness to accept measures in their cost-benefit analyses, using willingness to pay as the appropriate measuring stick. It’s obvious why: using willingness to accept measures would lead to far fewer changes that we can truly call potential Pareto improvements. After all, there is no upper bound on the willingness to accept measure; the losers of some change may not be willing to accept any dollar amount to see the change occur if the change destroys their life in a fundamental way.

When economists come to the conclusion that such-and-such policy will increase GDP and therefore we should pursue the policy – because willingness to pay measures suggest the policy has the potential to lead to a Pareto improvement – they are fundamentally sidestepping the issue of whether the policy will violate certain people’s rights. In the process, they are fundamentally favoring a utilitarian conception of justice over a rights-based conception of justice, whether they realize it or not. This view needs to be argued on normative grounds. Economists can’t just spit out some calculation of some model showing that some policy will grow the economy, and then argue that we should pursue the policy.[2] They need to think about whether the policy harms certain individuals in ways that can’t be offset through redistribution. Once we factor the violation of rights into our search for potential Pareto improvements, the scope of such improvements becomes much narrower. Thus, it’s just not always the case that everyone is truly better off when the size of our economy is larger.

Let’s summarize what we’ve learned. The goal of welfare economists is to bring us to Pareto improvements. Such improvements would be desirable, but they are described in such a vague way that it’s impossible to understand what they even imply. One way to approximate Pareto improvements is to use income: if a change in the economy increases the income of some and doesn’t lower the income of others, then we’ll consider that a desirable approximation to a Pareto improvement. This jump, however, ignores the fact that people derive welfare from relative differences in income, not absolute levels.

Because it’s not actually possible to find economic changes that increase the income of some and don’t lower the income of others, economists have turned to finding changes that increase net income, with the idea that if the winners always compensate the losers we can achieve actual Pareto improvements. However, measuring whether some change in the economy actually increases net income is very difficult. To do this properly, we should use willingness to accept measures. Such measures, however, drastically limit the number of potential Pareto improvements that come about from changes in the economy, as there are almost always complicated rights issues involved whenever the economy goes through a change. Because economists don’t like dealing with complicated rights issues, they instead choose to use willingness to pay measures in their cost-benefit analyses. As such, they’ve fundamentally placed utilitarian considerations above rights considerations. Thus, before they tell us that we should do policy X because it will lead to a larger economy, they need to first tell us why utilitarianism is better than, for example, liberal egalitarianism. They may have a hard time doing this, as most moral philosophers have concluded that the latter is better than the former.[3] But let’s give economists the benefit of the doubt and see what they can come up with. Any takers?

[1] For those in need of jargon, this way of looking at the world is known as the Kaldor-Hicks criterion, motivated by one of Nicholas Kaldor’s famous writings on interpersonal comparisons of utility.

[2] For example, Greg Mankiw, an economics professor at Harvard, recently wrote that “a case can be made that for human welfare, growth swamps fluctuations” when arguing on behalf of the social value of the financial sector. In other words, he’s saying that if financial innovations lead to more volatile business cycles but a higher long-run growth rate, then we should still favor such innovations. But if the innovations violate the rights of people in the short term by crashing the economy, should we still favor the higher long-run growth rate from the innovations? Would people support this reasoning in a willingness to accept sense?

[3] For example, see Rawls’s separability of persons argument, or even Nozick’s utility monster thought experiment.

Competing for What?


As previously mentioned, when economists bring their wisdom into the policymaking realm, their preferred tool to use to make recommendations is rational choice theory. Rational choice theory posits the existence of a rational thinker who can step back from the chaotic influences of the social world, choose his or her own ends, and pursue them in a maximizing way. Rational choice theory also does not make a normative claim about what preferences people should hold; it simply says that people have preferences, broadly defined, and that policy should be designed so as to satisfy them.

Except that it must make a normative claim about what preferences people should hold, otherwise the theory becomes untenable. For example, a non-normative policy recommendation using rational choice theory would compare the preferences of rapists to commit rape against the preferences of the raped to not be raped. It would treat the lower utility of the rapists as a cost in a policy seeking to reduce the number of rape incidents in a society; and if somehow the costs to the rapists from such a policy were to outweigh the benefits to those being protected from rape, the policy recommendation would necessarily point toward non-intervention.

This way of looking at the world is of course absurd, and economists have acknowledged it as absurd. Which is why they’ve sought to narrow the scope of preferences that should be satisfied by policy. By claiming that only “purified preferences” should be satisfied, economists have made their theory as applied to policy less untenable.[1] If it is deemed that the preference of a rapist to commit rape is a distorted preference, then the economist can ignore this preference when making a cost-benefit analysis.

It is important to note that when the economist jumps from considering all preferences in his cost-benefit analyses to considering just purified preferences, he has entered the realm of moral philosophy. He can no longer claim to be a liberally neutral policy advocate.[2] He now needs to take a stance on what he thinks constitutes the good life and, as such, argue for policies that lead people in the direction of that life.

Which is wonderful! The beauty of moral philosophy is that everyone can and should be a moral philosopher. Indeed, society is at its best when people with different moral beliefs debate and envision, collectively, what the good life is and how society should be structured so that people can achieve it. Deliberative democracy is at the heart of what makes a modern society thriving and just.

Unfortunately, most economists do not want to debate with the broader public about moral philosophy. They would rather sneakily hide their moral views behind mathematical models, which they falsely claim are non-normative. I’m not sure how they’ve been able to pull off this swindle for so long; but it probably has something to do with the implicit bias we all have to be swayed by fancy physics-looking math, which we incorrectly assume must be grounded in positive empiricism and not moral intuition.

So in this post, let’s look at two policy recommendations that economists have given in recent decades, each of which appears non-normative upon first glance but in fact rests on a certain moral ideal.

The first relates to incentive structures in our workplaces. It has become more common for employers to pay employees based on their performance rather than on strict, predefined pay thresholds. These pay-for-performance structures usually take the form of end-of-the-year bonus packages, where employees are given a lump-sum payment based on their performance during the prior year. Such payment schemes have been traditionally common in the financial industry, but have spread to other industries, including education and healthcare.

The economist’s rationale for supporting pay-for-performance structures is that they promote more individual choice for workers and that they often lead to a more productive workforce. The former is where economists claim to be non-normative. If certain people want to work harder in order to earn a higher salary, then why not give them the choice to? In other words, economists try to justify pay-for-performance structures on the basis of liberal neutrality: such structures give workers more freedom in the workplace to decide how hard they would like to work and how much they would like to earn, not relying on any conception of how work should be performed.

If only it were that simple. The problem is that these pay structures define the worker as an individual entity whose purpose is to compete in the workplace against other workers. They, in other words, completely undermine the communal aspect of production that we might value if we took the time to reflect on the matter. Moreover, even if these pay-for-performance structures do lead to a more productive workforce, they do so at the risk of changing the workplace from a social environment where workers go to work together cooperatively to a competitive environment where it pays (literally) to be selfish.[3]

And here’s the kicker: if people’s preferences and personalities are malleable, then these pay-for-performance structures do not just create selfish competitors inside the workplace but outside it as well. If we’re in competition mode for 10 hours each day as workers, then chances are we will also be in competition mode when we get home from work, and during the weekends, too. This means that we will be more competitive and selfish at the dinner table or when we’re playing with our children. If we’re young and into the dating scene, then our work environments will make us more self-centered in our efforts to impress those whom we’re attracted to. In short, a simple change in incentives in the workplace can cascade into a wholesale change of how we define ourselves socially and how we act in settings outside of the workplace.

It makes sense that the business community, which doesn’t care how people behave outside of the workplace, would support pay-for-performance schemes if they lead to more productive workers. But what’s the economists’ excuse? Why do they routinely support these types of salary structures while ignoring their broader social effects?

Another policy that economists like to argue for, one that also rests on a certain conception of the good life, is the idea that retirement saving should be a personal endeavor rather than a collective one. We’ve been moving slowly over the past several decades from collective defined-benefit retirement schemes to private and individual defined-contribution schemes. The argument from economists for the shift is that the latter gives workers more personal responsibility to take control of their retirement finances, potentially leading to higher returns if the money is invested prudently. Admittedly, many economists acknowledge the biases that people have to value the short-term over the long-term, and so they argue that people should be nudged to save more into these types of personal saving plans; but the point is that many economists think there are consequential benefits of having people take individual control of their retirement savings.

The problem is that this way of thinking ignores the fact that people have a fixed amount of time each day. If you force people to take control of their retirement finances, then you’re forcing them to become avid market watchers – to follow all the ups and downs in financial markets and to time their investments accordingly. When we make teachers or engineers market watchers, they will necessarily have less time not only to do their work, but also to participate in the things that hold our democracy together, like voting for candidates we think will make positive change.

Yes, it’s great that we now have all sorts of exchange-traded funds that allow retail investors to gain exposure to natural gas or economic growth in Africa, but the cost is that by making everyone do their own research about investing, everyone is going to have less time to participate in politics. And when people aren’t paying attention to politics, it means that our democracy will be more susceptible to being hijacked by narrow interest groups.

The point is that even if the shift from collective retirement-saving schemes to private and individual ones leads to more frequent saving habits or better investment performance for our seniors, these benefits need to be weighed against the cost of making everyone in every profession sacrifice their time to become a market watcher. The costs are potentially big and they undermine many valuable functions in our society.

My next post will be important. I want to talk generally about GDP growth, and discuss philosophically what justifications economists have put forth for pursuing policies that grow the overall economic pie. I will argue that such policies also rely on a certain moral ideal – one that, were it more widely understood, the general public might push back against rather than constantly regarding the economy as the number one problem facing our nation.

[1] Purified preferences are those which are said to be self-interested, well informed (e.g., not under the influence of drugs, alcohol, etc.) and undistorted (e.g., not sadistic). For more information, see this paper by Daniel Hausman.

[2] A policy is liberally neutral if it doesn’t push people in a certain direction towards a certain end. As Michael Sandel has argued, however, it’s almost impossible to be liberally neutral. On nearly every policy issue, we need to take a stand on what we think is right and what we think is wrong. For example, in the debate on gay marriage, those in support of gay marriage often claim to be pro-choice liberals. If people want to marry someone of the same sex, then that’s their choice, which shouldn’t be interfered with, such liberals say. However, why does the line stop with marriages between just two people? If people want to marry three or four people of the same sex, why do we not honor that choice, too? Even though most liberals want to extend the traditional institution of marriage to include same-sex marriages, most are unwilling to extend it to include polygamous marriages. Why? As Sandel notes, the reason is because they’ve taken a moral stance in favor of the belief that marriage should be defined as a common bond between just two people. According to Sandel, most of the policies that we think are liberally neutral in fact rest on certain moral ideals, which become evident once we get rid of all the white noise and rhetoric that often clouds our policy debates.

(For the record, I support gay marriage, but not polygamous marriage.)

[3] To be sure, sometimes these pay-for-performance structures are set up to reward those who are best able to work cooperatively in teams. I would say, however, that more often than not these pay schemes reduce everything to a final number, such as the amount of sales/revenue each worker generated over the prior year or how well a worker’s students performed on some standardized test.

Are Economists Creating the Signal?


A few commenters have accused me of overplaying the performativity story. They’re presumably sympathetic to the idea that economists are changing the social world with their theories, but skeptical as to whether it’s possible to understand the change in the clear and predictive way I assume. Indeed, one commenter even accused me of playing God when I suggested that I can see how the diffusion of rational choice theory is influencing people’s behavior.

I am actually very sympathetic to these criticisms. My presentation of the Black-Scholes theory was a bit misleading, I have to admit. In that example, it is very clear how the forces of performativity worked: economists first thought of a clever theoretical pricing model, which was empirically inaccurate in terms of describing the prices of option securities in financial markets, but which over time caused prices to converge to the model’s predictions after traders adopted the model for their market-making practices. In other words, the direction of performativity was straightforward: it was from theory to practice. However, not all performative forces are that one-directional. Sometimes practice influences economists to come up with new theoretical ideas, which then influence practice again once the new ideas are diffused. These multidirectional performative forces are incredibly difficult to understand. But not impossible, in my opinion.
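For reference, here is the standard Black-Scholes formula for a European call option, sketched with nothing but the Python standard library (the example numbers at the bottom are arbitrary):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Once traders quote options off this formula, observed prices converge
# toward it -- the theory performs the market it claims to describe.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
# ~10.45
```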

Let’s look at a recent example that I’ve been thinking about lately: the plight of the long-term unemployed.

As you may know, one of the defining characteristics of the 2008 financial meltdown was that it produced a glut of long-term unemployed workers.[1] Not since the Great Depression had we seen levels of long-term unemployment like those witnessed in recent years.

This reality got economists thinking. They said, gee, maybe there’s something wrong with the long-term unemployed, making them unable to find work. And so they turned to their handy-dandy signaling theory in labor economics, which says that a long bout of unemployment might signal skill atrophy to employers, suggesting that those who’ve been out of work for a while are less productive.

The next logical step for economists was to try to measure the effect of that signal empirically. And what they found is that, indeed, it is usually the case that the longer workers are unemployed, the less likely they are to receive an interview for a new job.

The economists even quantified the disadvantage. They found that once workers have been out of work for more than roughly 9 months, the chance of receiving an interview for a new job falls significantly. So there we have it: a threshold of about 9 months separating employability from non-employability in the post-recession labor market.

What happened next? Well, the media ate this research up. They diffused it through blogs, through Twitter, through newspapers, through just about every channel available. And now here’s the million-dollar question: did this research, after the media diffused it, exacerbate the plight of the long-term unemployed?

I suspect it may have. Suppose you’re a hiring manager at a business. You definitely want to hire the most productive workers, and it makes sense that you would not want to hire workers who have been out of employment for a while, because you would probably need to train them a lot to get their skills up to speed. But maybe you think the relevant threshold is something like 1.5 years. If you see someone who has been out of work for 1.5 years or longer, you won’t consider him or her for hire.

And now the media tells you that you’re wrong; that the relevant threshold, based on rigorous empirical work done by economists, is 9 months, not 1.5 years. Do you then change your hiring habits? Do you decide to throw all resumes showing unemployment spells of 9 months or longer in the trash?

Now, a really interesting study would be one that first pinpoints the moment when all this empirical research about the signaling effects of long-term unemployment was diffused, and then tests whether the research itself changed the hiring behavior of firms – i.e., an attempt to measure the performative effect of economic research on the very labor market that economists claim they are studying only at a distance.
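I’m not in a position to run that study, but here’s a minimal sketch of what the estimation might look like – a difference-in-differences setup on simulated data, where the “performative effect” is built in by construction purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Hypothetical applicant-level data: unemployment duration in months,
# whether the application was sent after the research was widely diffused,
# and whether it received a callback.
duration = rng.uniform(0, 24, n)
post_diffusion = rng.integers(0, 2, n).astype(float)
long_term = (duration > 9).astype(float)

# Built-in performativity hypothesis: the long-term penalty widens
# after the research is diffused (the -0.10 interaction term).
p = 0.30 - 0.05 * long_term - 0.10 * long_term * post_diffusion
callback = rng.binomial(1, p).astype(float)

# Difference-in-differences (linear probability model): did the penalty
# for crossing the 9-month threshold grow once the research went public?
X = np.column_stack([np.ones(n), long_term, post_diffusion,
                     long_term * post_diffusion])
beta, *_ = np.linalg.lstsq(X, callback, rcond=None)
print(beta[3])  # the performative effect, about -0.10 by construction
```

In a real version, the hard part would be dating the diffusion moment credibly and ruling out everything else that changed in the labor market at the same time.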

My point, all along, is that we need to better understand these performative feedback effects. They are not impossible to comprehend, so long as we think clearly. If it is true that economists are changing the social world in a way that we don’t like – e.g., exacerbating the plight of those who have been out of work for a long time, making us more selfish creatures, etc. – then we need to start thinking deeply about what the domain of economists should be, what types of research they should be doing, how their conclusions should be precisely worded, and whether there is a communication problem with the way in which economic research is commonly diffused by the media and other institutions, including universities, governments, and think tanks.

[1] According to the BLS, workers are classified as being “long-term unemployed” if they’ve been out of work (but are still actively looking) for 27 weeks or more.