Saturday, February 18, 2017

Why liberals should own guns

I wrote a Twitter thread about this a while back, but it got deleted in a periodic wipe, so I thought I'd reprise it here for posterity, and expand a little on the earlier point.

For decades now, liberals - a term I'm using loosely to mean anyone on the American left - have mostly shunned gun ownership and gun culture. Around half of Republicans own guns, and 41% of those who call themselves "conservatives," compared to only 22% of Democrats and 23% of those who call themselves "liberals".

Why? One reason is that liberals are more likely to live in big cities, where there is an assumption that violence will be stopped by the police (and by numerous witnesses), rather than by one's own defensive actions. But I think part of it is cultural - liberals, by and large, want to live in a society without widespread gun ownership, and many have decided to "be the change they want to see in the world."

But leading by example hasn't worked. Gun control has been mostly a political non-starter except for a very brief period at the beginning of the Clinton administration. Conservatives continue to use the issue of gun rights as a rallying cry and cultural wedge issue, constantly invoking the fear that liberal politicians will come storming into Americans' houses and take away their means of self-defense.

Now things may be changing. A recent BBC report found that since the election of Trump, liberal interest in gun ownership has spiked. The Liberal Gun Club, run by Lara Smith (no relation), has reported a 10% increase in membership and a "huge" increase in interest.

I think this is a good trend. More liberals need to own guns. Why? Here are two reasons:

1. It would make any calls for gun control more credible.

Right now, many conservatives see gun control as a plot to disarm them. But if liberals are also armed, calls for things like assault weapons bans, or background checks, or stricter licensing requirements sound more like an arms limitation treaty than a call for unilateral disarmament. In other words, instead of liberals saying "Hey, give up your guns," they'll be saying "Hey, let's all give up some of our guns." That's a more credible, more powerful message.

There's a precedent for this. Half a century ago, the Black Panthers, ardent gun nuts, staged an armed protest at the California state capitol. The state responded with the Mulford Act, which forbade open carry - a very big, rapid success for prudent gun control policy. Now, I'm not suggesting liberals stage armed takeovers of government buildings - the 60s were a very different time, and people like Cliven Bundy aren't going about things in the right way. But the larger point is that when liberals have guns, even conservative politicians are willing to embrace sensible gun control measures.

A more metaphorical example is the successful history of U.S.-Soviet and U.S.-Russian arms limitation treaties. The Russians love nukes like Ted Nugent loves guns, but because we had nukes of our own, we managed to make them see how sensible it would be to limit the total amount.

2. It's insurance against the breakdown of public order.

The total breakdown of public order is highly, highly unlikely. It would take a nuclear war, a civil war or coup, or a major natural disaster like a Yellowstone super-eruption to produce a situation in which America reverted to anarchy.

But just because it's unlikely doesn't mean it's pointless to insure against it. This is a tail risk, but the consequences would be huge and disastrous. So it might make a lot of people sleep better in their beds at night knowing that if it came time to grab their guns and get to safety, they'd have guns to grab.

And the erratic nature of Trump's leadership probably increases the tail risk just a little bit - an accidental tweet or a falling-out with Putin might set off a nuclear war.

Also, it's important to remember that small, localized breakdowns of public order do also happen in cities from time to time, and that it can help to have a gun in those extreme cases - especially if you're a minority, and less likely to get immediate help from the cops.

There's actually sort of a historical precedent for this as well. In the episode known as "Bleeding Kansas" in 1854-1861, the U.S. government decreed that slavery's legality in Kansas would be decided by popular vote. Naturally, this made pro- and anti-slavery people both rush to settle in the state, and it also sparked a guerrilla war. The government, paralyzed by the fear of a larger civil war (which soon happened anyway), did little to quell the violence, so the state became a zone of anarchy. The anti-slavery forces, known as Jayhawkers, were well-armed, and eventually won the conflict.

So these are the two main reasons that liberals should own guns. The main argument against owning guns is the risk of accident - over a hundred American kids die from gun accidents every year. Why take the risk? Well, it's certainly possible to minimize the risk of gun accidents - keep the gun in a safe. If you take proper precautions, the risk is far lower than the aggregate statistics might suggest.

But if that risk is just too high, consider simply learning how to use guns. Knowing how to shoot, maintain guns, etc. is probably more important than physically having the guns in your home. You might find out it's fun!

Anyway, remember, always safety first. And if you do have a mental illness, I'd say don't buy a gun, even if the law allows you to.

Thursday, February 16, 2017

Why go after Milton Friedman?

The top question on my Reddit AMA was "When and why did you start hating on our lord and saviour Milton Friedman?". Two or three other questions were basically the same.

And it's true, I have been on a Milt-bashing kick of late. I did a post evaluating how Friedman's macro theories had held up, and gave them a C+ overall. I wrote another post complaining about his "pool player analogy", which people use to justify not checking their model assumptions against micro data. I wrote two Bloomberg posts declaring the Permanent Income Hypothesis dead (post 1, post 2). And I wrote a tweetstorm (since deleted in a periodic tweet-wipe) about how Friedman's libertarian policy program might have prevented racial integration in the United States.

My revisionist campaign against the late Friedman has ruffled a lot of feathers. Uncle Milt is something of a secular saint among both economists and libertarians. If you say "people don't smooth consumption," economists will talk about the issue calmly and reasonably, but if you say "the Permanent Income Hypothesis is wrong" - which means exactly the same thing - lots of hackles are instantly raised, and people jump to defend the hallowed PIH. Similarly, in policy discussions, if you diss vouchers, people will argue, but if you say "Milton Friedman was wrong about vouchers," they get mad.

So why do it? Why not leave Friedman alone and just talk about his ideas, or the modern-day versions thereof? Here are my reasons:

1. Clickbait!

Saying "X is wrong" gets less attention than saying "X, which Milton Friedman supported, is wrong." So why not do the latter? If it gets more laypeople paying attention to economic research and serious economic ideas, I say that's a good thing.

It's hard to get people interested in the latest research on consumption smoothing. That is a nontrivial thing to do. And putting Friedman's name up there is one way to do it. As long as I'm not misattributing anything to the man, what's wrong with that?

2. Fighting against "Great Sage" culture

"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation," Feynman said. And you should take it from him, right?. ;-)

Anyway, that principle makes sense. If understanding is going to progress, people can't have too much reverence for the opinions and theories of respected humans. In the humanities, there tends to be a lot of reverence for the ideas and thoughts of the Great Old Masters. "Kant said X" and "Foucault said Y" are things you'll actually hear when you talk to humanities types. Who cares? Why should I believe Kant or Foucault? I never understood this. I guess in the humanities, lots of things are just matters of opinion, or untestable conjecture, so it's not that important to go out and try to prove Kant wrong. But in a scientific field, it's the knowledge that matters, not the people who found it (or tried to find it and failed). Too much reverence for the teachings of a Great Sage can hold people back from finding better ideas.

I feel like economics has a bit of Great Sage disease. People are way too reverent about old masters like Friedman or Lucas. This is in contrast with physics, where people delight in saying "Einstein was wrong" or "Feynman was wrong" about something. I like the irreverent way better.

3. Annoyance at the mixing of econ and politics

In his scholarly writings, Friedman was careful to draw a distinction between normative and positive economics. But it's not clear his fans got the message. Friedman very publicly engaged in policy advocacy. His most famous book was an ideological tract. They made that book into a TV show!

Do you think that Friedman's status as a top academic economist had nothing to do with the respect and credence that were afforded to his ideological and political ideas? If so, I've got a bridge to sell you. Friedman taught a generation of fans that laissez-faire policies were great, and his academic status lent an imprimatur to those teachings that a Wall Street Journal writer or libertarian pundit never could have enjoyed.

So by informing the public that Friedman got some big things wrong in his academic research (which of course is true of any economist), I hope to be able to dispel a little of that mystique. Fans of Friedman's libertarian ideology need to know that their sage was just as fallible a scientist as any other.

So there you go. Three reasons to publicly criticize the ideas of Milton Friedman. As for reasons not to -- well, the man has already passed away, and he amassed so much fame and respect that the tiny stings of an insignificant insect such as myself pose no real threat to his legacy. So I don't feel guilty at all.

Thursday Bloomberg Roundup, 2/16/2017

This week's Bloomberg View posts:

1. "Still Seeking Growth From Tax Cuts and Union Busting"

Can states win with a low-tax, anti-union strategy? A few are trying. How are they doing? The best way to answer this is to compare adjacent pairs of otherwise similar states. In this post, I take a look at Wisconsin vs. Minnesota and Kansas vs. Nebraska. Short version: laissez-faire policies don't seem to have much effect at all.

2. "Things Might Be OK if Trump Borrows From Abe"

I don't expect this to happen, but if Trump decided to follow Shinzo Abe's playbook, he could end up being the kind of responsible, forward-looking nationalist leader that Abe has proven to be.

This post is also a rebuke to all those gaijin writers who were yelling "Abe is a fascist!" a couple years back.

3. "Economics Gets a Presidential Demotion"

Economists might think that all the economist-bashing in the media is limited to a rabble of irrelevant angry British lefties. But in fact, the loss of status is real, and Trump's decision not to include the CEA Chair in his cabinet is just one more sign of that. If economists want to retain the extraordinary prestige they've amassed over the last few decades, they're going to need to make a few changes in the way they present themselves to the public. In this post I give three ideas for how they can do this.

4. "Monopolies are Worse Than We Thought"

More and more economists are pointing to rising industrial concentration and market power as the source of many of the problems in our economy. But what's behind that trend? In this post I suggest a few possible explanations - weakened antitrust, overregulation, and the influence of technology.

5. "Market Failure Looks Like the Culprit in Rising Costs"

This post is a riff on a great Scott Alexander post about excess costs in America. The question is why America has anomalously high costs for health care, infrastructure, college education, and asset management. Government intervention doesn't seem to be the explanation, since other rich countries generally have more of this, alongside lower costs. Baumol Cost Disease also doesn't seem like the whole explanation, for the same reason. Market failures, of the kind that most econ students learn about, might be part of the reason. But I suspect that a lot of it comes from what Akerlof and Shiller call "Phishing for Phools" - a combination of limited information and outright trickery that reaches a bad equilibrium.

Wednesday, February 15, 2017

My AMA on r/badeconomics

I did an AMA on r/badeconomics (whose name is tongue-in-cheek, as it's actually much more econ-savvy than r/economics). It ended up being really long, since it was posted a day early. But that just made it more fun! Thanks much to excellent moderator Jericho Hill for setting it up, and to everyone who posted questions.

Questions included:

1. Why do I diss Milton Friedman a lot these days?

2. Which is a bigger problem: 101ism, or the people who say econ is a bunch of neoliberal garbage?

3. Is heterodox econ the antidote to "economism"?

4. Do banks "lend excess reserves"?

5. Which economists in the public sphere do I respect the most?

6. How could the Euler Equation possibly be wrong?

7. Which pop econ books do I recommend?

8. What have economists changed their minds about the most in recent years?

9. Is the Permanent Income Hypothesis really "wrong"?

10. Does money need to be "backed" by some valuable commodity?

11. No, really, why do I hate Milton Friedman so much?

12. How did I develop my writing style?

13. Does Bloomberg pay me enough? Am I not afraid of life without tenure?

14. How does one get started being a blogger?

15. Neoliberalism is out of favor these days, so why keep on bashing it?

16. Who will be the next great public explainer of economics?

And more! A fun time was had by all. Check out the whole thing here.

Tuesday, February 07, 2017

No, we don't need an immigration "pause".

I've been getting in arguments with immigration restrictionists for years now. The more reasonable restrictionists suggest that we need an immigration "pause" in order to assimilate the recent big wave of immigrants. They point to the Immigration Act of 1924 as an example, and suggest doing something similar today.

The argument is not implausible. Integration is important. When the citizens of a country view themselves through a tribal lens, it can be very hard to get important things done, and the country can become dysfunctional and - eventually - poor. In the past, America has done well at combining many disparate ethnicities - Irish, Germans, Italians, Jews, Greeks, Poles, etc. There's plenty of reason to believe that this is happening again, with the mostly Hispanic and Asian immigrants of the recent wave.

But there's an argument that we need to speed this process up, by pausing immigration. Without a pause, restrictionists say, the phenomenon of "replenished ethnicity" might keep Hispanic and Asian people feeling like "permanent foreigners" for decades, leading to tribalized politics and social strife. Only because we paused immigration in the past, they say, did we manage to integrate the previous waves.

That's not implausible, but I think a closer look at the history of U.S. immigration shows that past restrictions were not as important as many believe. Here, via Natalia Bronshtein, is a graph showing the history of U.S. immigration by source country. I've annotated the graph with some important events:

The things that stand out most are 1) the big pause in the early-to-mid 20th century, and 2) the big waves in the early 1900s and late 1900s/early 2000s. The y-axis is in absolute numbers; in terms of percentages of the U.S. population, those two waves were about equally big.

One thing you'll notice is that there was no pause in the 19th century. Despite big waves of anti-immigrant and anti-Catholic sentiment, immigration was not banned and didn't halt. An exception was the Chinese Exclusion Act of 1882, but this didn't affect the biggest waves of immigrants coming in at the time. 

But despite the fact that there was no pause and no ban, and despite the fact that Irish and German immigrants kept coming throughout the 1800s, immigrants from these countries integrated quite effectively into American society.

Another thing to notice is that when the big immigration restriction was enacted in 1924, immigration had already fallen substantially from its peak about 15 years earlier. The law was probably important, but maybe not as important as its fans think. I bet the Great Depression, which came just 5 years later, and WW2 would have been almost as effective in keeping immigration low.

Also note that immigration had started increasing substantially well before the 1965 law that loosened official controls. The 1950s were a time of rapidly increasing immigration, despite the legal ban. Nor was the 1965 law change immediately followed by a trend break; immigration increased steadily, but didn't really explode until the 1990s.

This doesn't mean that laws don't have an effect - the Simpson-Mazzoli act, commonly known as "Reagan's amnesty," was followed by a surge in Mexican immigration (and even more that was undocumented, and not on this graph). But overall, most of the ups and downs seem to correspond to economic booms, busts, and wars rather than to U.S. government policy. 

So fans of the 1924 immigration restriction should rethink their understanding of history. Economic factors were probably just as important as laws in determining immigration levels.

Another important observation is that country-specific immigration booms all seem to end on their own. Irish and German immigration trickled off around the turn of the 20th century. Italian immigration experienced a short mini-boom after WW2, but never came close to regaining its previous levels. The Austro-Hungarian and Russian booms were short-lived, one-shot affairs.

Should we expect the Mexican boom to end similarly, on its own, without government controls? Yes. In fact, it already did end, at least a decade ago. It's done, finished, over, kaput:

More Mexicans are going back to Mexico than are coming in. Mexican immigration basically halted sometime in the 2000s and went into reverse. And yes, that includes illegal immigration, which has been negative since the Great Recession.

The Mexican Boom is done. The Hispanic Boom as a whole is not quite finished - Central Americans and Caribbeans continue to come in, though at a slower rate than before. But these are trickling off as well. 

As of now, the main source of immigration to the U.S. is Asian. Asian immigrants are expected to surpass Hispanics as the largest foreign-born population in the country by mid century, unless Trump or other leaders block Asians from entering.

So the fears of "replenished ethnicity" keeping the American population from integrating are, in my opinion, overdone. Immigration booms end on their own. The new immigrants don't come from the same places that the old ones did. There is, therefore, little danger that allowing continued immigration will put us in danger of tribal balkanization.


For a more in-depth post on this topic, see this by Lyman Stone. Most of the conclusions and points are fairly similar, but there's much more theory and data. 

Monday, February 06, 2017

Much of econ has become more scientific

My employers, the editors of Bloomberg View, have an editorial out called "Why Not Make Economics a Science?". I like this post; it gets at a very important point. But I think it leaves some very important things out, too.

The article's main criticism of econ - or really, of macro, which most people casually call "economics" - is that it seems reluctant to discard theories:
[T]oo many theorists...have drifted far from the real world...Before the 2008 financial crisis, for example, the standard models more or less ignored finance...Given such spectacular failures, you’d think the profession would have gone back to the drawing board. It hasn’t. True, some tweaks have been attempted...But the error at the core of modern macroeconomics -- that mathematical consistency matters more than empirical relevance -- prevails...Reviving economics as a science will require economists to act more like scientists. If models are refuted by the observable world, toss them out. Rely on experiments, data and replication to test theories and understand how people and companies really behave.
As you can tell from the links (the post itself has many more), the editors have done their homework.

The editors are right that mainstream macro theory hasn't changed much since the crisis - the addition of finance, though important, really is a tweak to the basic structure. Central elements like the consumption Euler equation, TFP shocks, Calvo pricing, infinite forward-looking-ness, exponential discounting, profit-maximizing firms, etc. are still really common in DSGE models, despite the steady drumbeat of evidence against many of these assumptions. And DSGE models, though a tiny bit less popular than at their peak, are still really common in the literature:

The editors are right to be annoyed that the basic DSGE framework has only been tweaked, instead of rethought from the bottom up. Imre Lakatos would probably agree. Science should be about tossing out theories, not generating infinite numbers of theories to sit on the bookshelves gathering mold.

But I also think this brief editorial leaves out a few important things, which I'd like to remind people about:

1. Most economics is not macro.

In the press, we have a tendency to use "economics" as a synonym for "macroeconomics". There are a couple of reasons for this. First, it's tedious to keep writing "macroeconomics". Second, the only branches of econ that the public traditionally cares a lot about are macro and trade, and maybe a little about finance. Now people are starting to care a bit about labor too, which is good. But relatively few readers care about game theory, decision theory, industrial organization, development economics, public finance, economic history, environmental econ, ag econ, urban econ, etc.

The Bloomberg View editors know this distinction, which is why they specify in the article that they're talking about macro. But many economists in other fields will tend to read this editorial and get annoyed, since it leaves them out. In fact, as many of us Bloomberg View writers have noted, the non-macro parts of econ are now mostly empirical, and empirical econ is looking more and more like a standard science (i.e., very careful attention to controls). So it's important to remember that.

2. The definition of "macro" is part of the problem.

If you look at what academic macroeconomists - that is, the professors in the macro areas of econ departments - are doing, a lot of it now is pretty empirical. For example, macroeconomists might try to determine whether sticky prices or sticky wages matter more in business cycles, or why companies don't hire more young workers during recessions. Each paper of this sort will typically focus on understanding one piece of the macro puzzle. They'll have theory sections, but the theory will be limited to the phenomenon in question - it won't be a big general model of the whole economy.

Unfortunately, we often use the term "macro theory" only to refer to the big, general models of the whole economy. And since DSGE models are still the main type of big, general model of the whole economy (OLG would be a distant second, and most people don't call VARs "theory" at all), this means that "macro" now means "DSGE" almost by definition. Until someone comes up with a type of theory that isn't called "DSGE" and it gains credence as an alternative (for example, "agent-based" models), macro theory will by definition consist only of tweaks.

So while the Bloomberg View editors are right to be annoyed at the fact that mainstream macro theory hasn't changed much in the past decade, we should all recognize the big changes that have taken place not only in econ as a whole, but even within the macro area itself. No, there hasn't yet been a replacement for the basic Ed Prescott-inspired business-cycle modeling framework. But lots of other important work is being done, much of it very scientific.

Sunday, January 15, 2017

Cracks in the anti-behavioral dam?

This is purely my impression, buttressed with some anecdotes; I don't have any systematic data to back this up. But in both papers and casual discussion, I'm seeing macro people taking behavioral ideas more seriously. 

"Behavioral" is a very squishy idea, but basically I think of it as meaning "imperfect use of information". The difficulty with labeling a model "behavioral" is that we don't really know what information is available. This is why I believe there's a fundamental equivalence between behavioral and informational models - for any "informational" model where agents don't know all of the facts, there's an observationally equivalent "behavioral" model where they do observe the facts and just don't make use of them. 

But anyway, in macro, most models use Rational Expectations, so let's think of "behavioral" as just meaning "non-RE". Actually, non-RE models have been kicking around for a long time - for example, Sargent's learning models, or Mankiw and Reis' sticky information models. What seems to be changing (slightly) is that A) younger people seem to be making non-RE models, B) people are recommending non-RE models for policy analysis, and C) the departures from RE are getting more stark. 

Some recent examples I've seen are:

1. The learning approach to New Keynesian models, promulgated by Evans et al., which seems to be solidly mainstream

2. Mike Woodford's response to Neo-Fisherism, which relies crucially on a slight departure from RE

3. Xavier Gabaix's behavioral New-Keynesian model, where consumers are short-term thinkers instead of infinitely far-ahead-looking

These are all well-established people making these models - they cut their teeth on RE models for years before daring to venture out into behavioral waters. But now I'm starting to see young people doing behavioral stuff as well. A good example, sent to me by Kurt Mitman, is this paper by Kozlowski, Veldkamp, and Venkateswaran, entitled "The Tail that Wags the Economy: Belief-Driven Business Cycles and Persistent Stagnation". 

The basic idea of the paper is that instead of knowing the true PDF of macroeconomic shocks, people re-estimate the distribution every time they see a shock. Not too crazy, right? But that seemingly small departure from RE has big business-cycle implications. 

The reason is tail events. When big shocks are rare, just one of them can change people's whole understanding of how the economy works. How many events like the Great Depression have there been in American history? Really, there are only two since we started keeping national accounts. Two! In 2008 we abruptly went from "There was that one really bad depression one time" to "Whoa, this is a thing that can happen multiple times!". To think that this would have zero impact on agents' beliefs about the economy - which is exactly what RE demands we think - seems implausible. The authors write:
No one knows the true distribution of shocks to the economy. Economists typically assume that agents in their models do know this distribution as a way to discipline beliefs. But assuming that agents do the same kind of real-time estimation that an econometrician would do is equally disciplined and more plausible. For many applications, assuming full knowledge has little effect on outcomes and offers tractability. But for outcomes that are sensitive to tail probabilities, the difference between knowing these probabilities and estimating them with real-time data can be large.
Anyway, to make a long story short, this can produce long economic stagnations, like the one we just had. Taking a gander at the literature review section, I see that these authors aren't the first to use this mechanism in a theory - it looks like it can be traced back to a 2007 AER paper by Lars Hansen. The other similar papers the authors cite, however, all come from 2013 or later, showing that this sort of idea has been gaining currency recently and rapidly.
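To see the mechanism in its simplest form, here's a minimal sketch (my own stylized illustration, not the authors' actual model): the agent estimates the probability of a disaster-sized shock as its empirical frequency in the data seen so far, so a single tail event sharply rewrites beliefs when the sample is short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized shock history: ~80 years of "normal" business-cycle shocks,
# none of them (with near certainty) disaster-sized.
normal_years = rng.normal(loc=0.02, scale=0.02, size=80)

def estimated_disaster_prob(observed, threshold=-0.05):
    """The agent's real-time estimate of P(shock < threshold):
    just the empirical frequency in the data seen so far."""
    observed = np.asarray(observed)
    return float(np.mean(observed < threshold))

# Before the crisis, the agent has (almost surely) never seen a disaster.
p_before = estimated_disaster_prob(normal_years)

# After observing a single Great-Recession-sized shock, the estimated
# disaster probability jumps: with only ~80 observations, one tail event
# substantially rewrites the perceived distribution, and the agent never
# "un-learns" it.
p_after = estimated_disaster_prob(np.append(normal_years, -0.08))

print(p_before, p_after)
```

With roughly 80 annual observations, one crisis moves the estimated disaster probability from (almost surely) zero to over 1% permanently - a belief shift of exactly the kind the paper uses to generate persistent stagnation.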

Now, Kozlowski et al. do dodge one important issue: what data set do agents use to estimate the distribution of economic shocks? The data set they use goes back to World War 2 - they don't even include the Great Depression. But even if we go back further than that, we'll miss earlier episodes like the Panic of 1873, when good national accounts just weren't kept at all. Data availability is so recent that there's almost an observational equivalence between assuming that people use all the available data vs. assuming that people overweight data from their own lifetimes.

If the authors - or some other authors - were to assume that people overweight data from their own lifetimes, as evidence from Malmendier and Nagel suggests, it would have important implications down the line. Instead of people's expectations slowly converging to RE over the decades (centuries?), people would forget the lessons of history and continue being surprised by depressions every 50 or 100 years or so. 

For now, macroeconomists don't have to worry about this question. Authors like Kozlowski et al. can frame their papers as quasi-behavioral papers, where RE is limited by data availability, instead of fully behavioral papers where RE is limited by collective forgetting. So these are still only cracks in the anti-behavioral dam, not a full torrential flood.

But my question is this: What happens when people start applying this mechanism to more complicated shock processes? What if the economy has regime switches that last decades? What if there is more than one kind of rare shock (e.g. the Great Inflation of the 70s/80s)? I've seen some people try to model stuff like this, and the end result can come out looking like practically any type of non-rational expectations you can think of. Meanwhile, empirical macro people are starting to pay more attention to survey measures of expectations. And people from behavioral finance are starting to put things like extrapolative expectations into macro models, to explain macro facts. And evidence like that collected by Malmendier and Nagel continues to pile up.

And I should mention casual conversation as well. More and more young macro people that I interact with, including (even especially?) those who run in "freshwater" circles, are saying that behavioral explanations will have to be part of our understanding of how consumption works. Here's an example from a recent blog comment. 

So I wouldn't be surprised to see some more cracks in the anti-behavioral dam in the years to come. Chris House, my old macro prof, proclaimed three years ago that behaviorism was a dead end and would never have a transformative impact on macro. But seeing papers like Kozlowski et al.'s, I'm thinking that his prediction now looks to have been quite ill-timed.


Just for fun, I'll post some more random behavioral macro papers I see.

"Explaining Consumption Excess Sensitivity with Near-Rationality: Evidence from Large Predetermined Payments", by Kueng

"YOLO: Mortality Beliefs and Household Finance Puzzles", by Heimer, Myrseth, and Schoenle

"Learning about Consumption Dynamics", by Johannes, Lochstoer, and Mou

"Understanding Uncertainty Shocks and the Role of Black Swans", by Orlik and Veldkamp

"The Liquid Hand-to-Mouth: Evidence from Personal Finance Management Software", by Olafsson and Pagel

Friday, January 13, 2017

The $30k Hypothesis

I wrote a Bloomberg View post about the Permanent Income Hypothesis. Basically, more and more research is piling up showing that it doesn't fit real consumption patterns. Some consumption smoothing takes place, but there's also a substantial amount of hand-to-mouth consuming going on. Most economists I know of have already accepted this fact, and usually chalk the hand-to-mouth behavior up to liquidity constraints (or, less commonly, to precautionary saving).

But a new paper on the expiration of unemployment insurance benefits casts major doubt on these standard fixes, especially on liquidity constraints as the culprit. A long-anticipated transitory shock - UI expiration - shouldn't produce a big drop in consumption even if people are liquidity constrained. Nor is home production the answer, since unemployed people are already at home long before UI expires. Something else is going on here - either people interpret UI expiration as a (false) signal of the expected duration of unemployment, or they expected Congress to extend UI at the last minute, or they're just short-term thinkers in general. Or something else. I predict that as more and more good consumption data become available, more and more of this short-termist behavior will be observed, putting ever more pressure on people who use standard models of consumption behavior.

Anyway, as expected, some people came out to defend the good ol' PIH, including my friend David Andolfatto, one of the web's best econ bloggers and a ruthless enforcer of Fed dress codes. David's claim was that the PIH is still useful in some cases, and not in others.

That's fine...IF we know ex ante what the cases are. If it's just an ex post thing - "consumers look like they're completely smoothing in this case, but not in this other case" - then the theory has no predictive power ex ante. How do you know if consumers are going to perfectly smooth in advance, if sometimes they do and sometimes they don't, and you don't know why? Liquidity constraints, which we can probably observe, are one reason to expect PIH not to hold, but they're only one reason - as the Ganong and Noel paper shows, there are other important reasons out there, and we don't know what they are yet.

Here's an example of why I think we can't just be satisfied with the notion that theories work sometimes and not others. Consider a very simple theory of consumption: the $30k Hypothesis. Stated simply, it's the hypothesis that households consume $30,000 a year, every year.

Rigorous statistical tests will reject this hypothesis. But so what? Rigorous statistical tests will reject any leading economic theory, especially as data gets better and better. All theories are wrong (right?). Some households do consume $30k, or close to it. So the $30k hypothesis is obviously right in some cases, wrong in others.
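To make this concrete, here's a toy simulation (all numbers invented) of testing the $30k Hypothesis against household data whose true average consumption is $30,500 - so the hypothesis is off by less than 2%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: household consumption averages $30,500 with a
# standard deviation of $12,000, so the "$30k Hypothesis" is nearly true.
def p_value_30k(n):
    consumption = rng.normal(loc=30_500, scale=12_000, size=n)
    return stats.ttest_1samp(consumption, popmean=30_000).pvalue

# A small sample usually can't reject the hypothesis; a big one rejects
# it decisively, even though it remains approximately true.
print(p_value_30k(100))
print(p_value_30k(1_000_000))
```

As the sample grows, the p-value collapses toward zero no matter how small the $500 discrepancy is relative to the noise - which is the sense in which better and better data will eventually reject every point hypothesis, good or bad.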

So should we use the $30k Hypothesis to inform our policy decisions? How should we know when to use it and when not to? Judgment? Plausibility? Political expedience?

This is a reductio, of course - if you don't impose any systematic restrictions on when to use a theory, you become completely anti-empirical, and priors rule everything. This is also why I'm a little uneasy about Dani Rodrik's idea that economists should rely heavily on judgment to pick which model to use in which situation. With an infinite array of models on the shelf, economists can always find one that supports their desired conclusions. I worry that judgment contains a lot more bias than real information.

Tuesday, January 03, 2017

Scenarios for the future of racial politics in America

If you don't live in a sensory deprivation tank, you probably noticed that the 2016 presidential election was rather racially charged. Many on the Democratic side charged Trump and his voters with racism, white supremacism, etc. Political scientists found that Trump's most ardent supporters were especially likely to score high on what they call "racial resentment" - their term for the belief that black Americans are getting more than they deserve. Meanwhile, the election results were very polarized by race:

Trump's victory was almost entirely furnished by the white vote, while Clinton overwhelmingly won all minorities. This repeated the pattern of 2012.  

Race has always been important in American politics - except for a brief period in the mid 20th century, blacks and Southern whites have always been on opposite sides of the partisan divide. The 1964 Civil Rights Act is widely acknowledged to have spurred the shift of Southern whites from the Democrats to the GOP. Meanwhile, "ethnics" - East and South European immigrants of the early 20th century - voted reliably Democratic until they merged with whites into the modern version of the white racial group.

Many (myself included) also believe that race is important to U.S. political economy. I buy the story that racial divisions are one of the big reasons that America doesn't have as big a welfare state as Europe. One big example of this is the way the GOP has profited from the "line-cutting" narrative - the idea that black Americans (and possibly other groups as well) are getting more than their fair share, "cutting in line" in front of more deserving whites. That narrative has probably damped white support for social safety nets.

So race is really important. But there are three big reasons why racial politics aren't set in stone. First, racial coalitions can change, as when Southern whites and blacks briefly united to support FDR. Second, racial definitions can change, as when "ethnics" joined the white race in the latter half of the 20th century. And third, the salience of race in politics can increase and decrease. So predicting the future of racial politics in the U.S. is no easy task.

Here are the possible scenarios, as I see them. These are extreme scenarios, of course; reality will probably be a lot messier, just as saying "Hispanics vote Democrat" ignores the 29% who voted for Trump. But anyway, here are five futures I can imagine:

1. Scenario 1: Race Loses Salience

This is not a future in which racial divisions vanish or America becomes "colorblind". It simply means that racial divisions would no longer be the main dividing line in American public life. People would largely stop defining their political interests by race. The GOP starts appealing to more nonwhites, and the Democrats start appealing to more whites - maybe because the parties shift their ideologies, or maybe because the racial groups themselves change what they want. Intermarriage helps by blurring the boundaries between races. In this scenario, Americans go back to fighting over economics, or perhaps national security or religion, instead of about race.

(There's a very extreme form of this scenario where race does vanish, and "American" becomes a catch-all racial group that absorbs all the groups. But I consider this extreme version to be pretty unlikely.)

2. Scenario 2: White Expands

This would be a repeat of what happened in the 20th century. Just as Italians, Jews, and Slavs became "white", Asians and Hispanics could come to be regarded as part of the same group as whites. In this scenario, high rates of intermarriage between whites, Asians, and Hispanics, combined with the fact that many Hispanics already identify as white, blur the distinction between the three groups. The new group might be called "white", or it might be called something else.

In this scenario, blacks would be the odd group out, as they ended up being in the 20th century. Since black people are expected to stay at only around 13% of the American population, even with continued African immigration, this means that there could be no winning coalition that did not include a very large piece of the new racial majority. Race would lose some (but not all) salience, as it did in the late 20th century, when economic issues joined racial issues as the dividing lines between Republican and Democrat.

3. Scenario 3: All Against Whites

In this scenario, tensions between whites and the other racial groups continue to rise. The GOP gains an increasing share of the white vote, while Asians and Hispanics become even more overwhelmingly Democratic. Asians, Hispanics, and blacks might or might not start to consider themselves a single race, but they would be united politically by their opposition to whites. Since other races are approaching demographic parity with whites, this scenario might see an increasingly racialized but still even split - whites could desert the Dems at about the same rate that they lost demographic heft, leaving the two parties still roughly equal for decades.

4. Scenario 4: White Splits

This is similar to Scenario 3, except that the white racial group would split in two. The dividing line might be education, or perhaps just politics itself. Those who left the white race would simply stop self-identifying as white. They might go back to identifying with their national ancestries ("German-American", "Irish-American", etc.), they might combine with Asians, Hispanics, and/or blacks into a new racial group, or they might create some new category for themselves. Meanwhile, the rump "white" race would simply be the GOP-voting part of the current white race, and would continue to identify as white. The dividing line would still mainly be race, but now the Democrats would have a structural advantage as the percentages of Asians and Hispanics increased.

5. Scenario 5: Politics Becomes Race

This is the weirdest scenario. It's a bit similar to Scenario 4, except that some Asians and Hispanics also leave their races and join the GOP-voting whites both electorally and racially. The nation would still have two big racial blocs, and the electoral dividing line would still be race - so this is different from Scenario 1 - but the American races of the future would in no way resemble the ones we see today. Politics and race would fuse into a single concept. Democrats and Republicans would become like Hutus and Tutsis, Bosniaks and Serbs - not necessarily able to tell each other apart visually, yet deeply believing themselves to be two totally different peoples. As you can see from the aforementioned analogies, I consider this to be a pretty pessimistic scenario.

These five scenarios don't exhaust the possibilities (everyone could start to identify as black!), but they're the only ones that seem to me to have any chance of happening. Actually, I'm not sure about Scenario 1 - it's kind of wishful thinking on my part.

Scenario 2 has the weight of history on its side - it's happened twice before. The white race in America has proven very capable of expanding to take in new entrants, as it did with Germans and Swedes in the 19th century and East and South Europeans in the 20th.

Scenario 3 is most similar to the recent electoral outcomes, so it's sort of a straight-line trend projection of increasing racial polarization. I also consider this to be a pretty pessimistic scenario.

Scenario 4 is a projection of a somewhat less prominent trend - the increasing polarization of the white electorate by education. College has emerged as one of the key institutions of American society, if not the key institution, and there's a chance that skill-biased technological change will make that situation irreversible. A combination of progressive education and the venom of GOP-voting whites could cause liberal whites to simply decide that the white race isn't something they want to be a part of anymore.

Scenario 5 is the projection of yet another trend - the Big Sort. Like-minded Americans are already moving near each other and marrying each other. Social media, and the splintering of mass media in general, could accelerate the trend. Partisanship is virulent in America at the best of times, and it does seem conceivable that it might eventually be even more powerful than race.

So what do you think? Did I miss any plausible scenarios? Which scenario do you think will come to pass? Which will be best for the Democrats, and which will be best for the GOP? How can the parties nudge American society toward their desired scenario? And what would be the consequences of each scenario, for policy, for people's lives, and for the integrity of the nation-state? How should we intellectuals try to steer the populace, if indeed we have any ability to do so? These are the big questions, and they're all beyond my ability to answer just yet.

Sunday, January 01, 2017

Some thoughts on UBI, jobs, and dignity

One of the more interesting arguments these days is between proponents of a universal basic income (UBI) and promoters of policies to help people get jobs, such as a job guarantee (JG). To some extent, these policies aren't really in conflict - it's perfectly possible for the government to mail people monthly checks and try to help them get jobs. But there are some tradeoffs here. First, there's money - both UBI and JG cost money, and more importantly cost real resources, which are always in limited supply. Also, there's political attention/capital/focus - talking up UBI takes time and attention away from talking up JG and other pro-employment policies.

One of the key arguments used by supporters of pro-employment policies - myself included - is that work is essential to many people's sense of self-worth and dignity. There's a more extreme variant of this argument, which says that large-scale government handouts actually destroy dignity throughout society. Josh Barro promotes this more extreme argument in a recent post.

This seems possible, but it's very hard to get evidence about whether welfare payments are actually dignity-destroying. Anyone who goes on welfare probably has other bad stuff happening to them in life, so there's a big endogeneity problem. Meanwhile, time-series analyses of nationwide aggregate happiness before and after welfare policy implementation are unlikely to tell us much. The best way to study this would be to find some natural experiment that made one group of people eligible for a big UBI-style welfare benefit, without allowing switching between groups - for example, payouts to some Native American group might fit the bill. My prior is that handouts are not destructive to dignity and self-worth, as Barro assumes - I predict that they basically have no effect one way or the other. But this is an empirical question worth looking into.
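As a sketch of what the analysis of such a natural experiment might look like - a difference-in-differences on entirely made-up data, with the payout's true effect set to zero to match my prior:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000  # households per group

# Simulated "dignity" index for a control group and a group that becomes
# eligible for a large unconditional payout. True effect of the payout: zero.
true_effect = 0.0
pre = rng.normal(50.0, 10.0, size=(2, n))              # rows: [control, treated]
post = pre + 1.0 + rng.normal(0.0, 10.0, size=(2, n))  # common time trend of +1
post[1] += true_effect                                 # payout hits treated only

# Difference-in-differences: change for the treated group
# minus change for the control group.
did = (post[1].mean() - pre[1].mean()) - (post[0].mean() - pre[0].mean())
print(round(did, 2))  # should come out near zero
```

On real data, a reliably negative estimate would be evidence for the dignity-destroying view; an estimate near zero would support my prior that handouts have no effect either way.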

In his own post, Matt Bruenig argues against Barro. His argument, basically, is that many rich people earn passive income, and seem to be doing just fine in the dignity department:
If passive income is so destructive, then you would think that centuries of dedicating one-third of national income to it would have burned society to the ground by now...In 2015, according to PSZ, the richest 1% of people in America received 20.2% of all the income in the nation. Ten points of that 20.2% came from equity income, net interest, housing rents, and the capital component of mixed income...1 in 10 dollars of income produced in this country is paid out to the richest 1% without them having to work for it.
I don't think this constitutes an effective rebuttal of Barro, for the following reasons:

1. "Work" is subjective. Many rich people believe that investing constitutes work (I'd probably beg to differ, but no one listens to me). And founding a successful business, which creates capital gains, certainly requires a lot of work. 

2. Passive income very well might be destructive to the self-worth of the rich, on the margin. In fact, I have known a number of rich kids who inherited their wealth, and devoted their youths to self-destructive pursuits like drug sales and petty crime. It could be that for many rich people, the dignity-destroying effects of unearned income are merely outweighed by the dignity-enhancing effects of high social status and relative position.

3. Many rich people became wealthy through work - either a highly paid profession like CEO, or by starting their own companies. This past work may provide dignity for old rich people, just as retired people of all classes may derive dignity from their years of prior effort.

And Matt's argument certainly doesn't counter the less extreme version of the "jobs and dignity" argument (i.e., the version made by Yours Truly). Even if passive income isn't actively harmful to dignity, it might not be helpful either, in which case pro-employment policies would be more effective than UBI in promoting dignity.

But that said, I do think Matt's policy proposal is a good one:
A national UBI would work very similarly. The US federal government would employ various strategies (mandatory share issuances, wealth taxes, counter-cyclical asset purchases, etc.) to build up a big wealth fund that owns capital assets. Those capital assets would deliver returns. And then the returns would be parceled out as a social dividend.
This is something I've suggested as well. I see it as an insurance policy against the possibility that robots might really render large subsets of human workers obsolete. 

UBI isn't a bad policy. If robots take most of our jobs, it will be an absolutely essential policy. I just don't think it solves the dignity problem. And with Trump winning elections in part by promising to restore dignity, I think Democrats need an issue to counter him, and jobs policy is far more likely than UBI to fit this bill. 

Saturday, December 31, 2016

Who is responsible when an article gets misread?

How much of the responsibility for understanding lies with the writer of an article, and how much with the reader? This is not an easy question to answer. Obviously both sides bear some responsibility. There are articles so baroque and circuitous that to get the point would require an unreasonable amount of time and effort to parse, even for the smartest reader. And there are readers who skim articles so lazily that even the simplest and most clearly written points are lost. Most cases fall somewhere in between. And the fact that writers don't usually get to write their headlines complicates the issue.

See what you think about this one. The other day, Susan Dynarski wrote an op-ed in the New York Times criticizing school vouchers (a subject I've written about myself). Dynarski opens with the observation that economists are generally less supportive of vouchers than they are of most free-market policies:
You might think that most economists agree with this overall approach, because economists generally like free markets. For example, over 90 percent of the members of the University of Chicago’s panel of leading economists thought that ride-hailing services like Uber and Lyft made consumers better off by providing competition for the highly regulated taxi industry. 
But economists are far less optimistic about what an unfettered market can achieve in education. Only a third of economists on the Chicago panel agreed that students would be better off if they all had access to vouchers to use at any private (or public) school of their choice.
Here's the actual poll: 

As you can see, the modal economist opinion is uncertain about whether vouchers would improve educational quality, while the median is between "uncertain" and "agree". This clearly supports Dynarski's statement that economists are "far less optimistic" about vouchers than about Uber and Lyft.

The headline of the article (which Dynarski of course did not write) might overstate the case a little bit: "Free Market for Education? Economists Generally Don’t Buy It". Whether the IGM survey shows that economists "generally don't buy" vouchers depends on what you think "don't buy" and "generally" mean. It's a little click-bait-y, like most headlines, but in my opinion not too bad. 

Scott Alexander, however, was pretty up in arms about this article. He writes:
By leaving it at “only a third of economists support vouchers”, the article implies that there is an economic consensus against the policy. Heck, it more than implies it – its title is “Free Market For Education: Economists Generally Don’t Buy It”. But its own source suggests that, of economists who have an opinion, a large majority are pro-voucher... 
I think this is really poor journalistic practice and implies the opinion of the nation’s economists to be the opposite of what it really is. I hope the Times prints a correction.
A correction!! Of course no correction will be printed, because no incorrect statements were made. Dynarski said that economists are "far less optimistic" about vouchers than about Uber/Lyft, and this is true. She also reported close to the correct percentage of economists who said they supported the policy in the IGM poll ("a third" for 36%). 

Scott is upset because Dynarski left out other information he considered pertinent - i.e., the breakdown between economists who were "uncertain" and those who "disagree". Scott thinks that information is pertinent because he thinks the article is trying to argue that most economists think vouchers are bad. 

If Dynarski were in fact trying to make that case, then yes, it would have been misleading to omit the breakdown between "uncertain" and "disagree". But she wasn't. In fact, her article was arguing that economists tend to have reservations about vouchers. And she supports her case well with data.

This is a special kind of straw man fallacy. Straw manning is where you present a caricature of your opponent's argument. But there's a particularly insidious kind of straw man where you characterize someone's arguments correctly, but get their thesis wrong. You misread someone's argument, and then criticize them for failing to support your misreading. Other examples of this fallacy might be:

1. You write an article citing Autor et al. to show that the costs of trade can be very high. Someone else says "This doesn't prove autarky is better than free trade!" But of course, you weren't trying to prove that.

2. You write an article arguing that solar is cost-competitive with fossil fuels by pointing out that solar power is expanding rapidly. Someone else says "Solar is still a TINY fraction of global generating capacity!" But of course, you weren't trying to refute that.

3. You write an article saying we shouldn't listen to libertarian calls to dismantle our institutions. Someone else says "Libertarians aren't powerful enough to dismantle our institutions!" But of course, you weren't trying to say they are.

I think Scott is doing this with respect to Dynarski's article. To be fair, his misreading was somewhat assisted by the headline the NYT put on the piece. But once he was reminded of the fact that the headline wasn't Dynarski's, and once he re-read the article itself and realized what its actual thesis was, I think he should have muted his criticism. 

Instead, he doubled down. He argued that most reasonable people, reading the article, would think it was arguing that economists are mostly against vouchers. But his justification for this continues to rely very heavily on the wording of the headline:
First, I feel like you could write exactly the opposite headline. “Public School: Economists Generally Don’t Buy It”... 
Second, the article uses economists “not buying it” as a segue into a description of why economic theory says school choice could be a bad idea... 
In the face of all of this, the New York Times reports the field’s opinion as “Free Market In Education: Economists Generally Don’t Buy It”.
On Twitter, he said: "the actual article is more misleading than the headline." But he appears to say this because he takes the headline - or, more accurately, his reading of it - as defining the thesis that Dynarski is then obligated to defend (when in fact she wrote the piece long before a headline was assigned to it). When he finds that Dynarski doesn't support his reading of a headline she didn't write, it is her article, not the headline, that he calls "misleading".

Of course, the fault here is partly that of the NYT, who used a headline that focused only on one part of Dynarski's article and overstated that part. It's a little harsh for me to say "Come on, man, you should know an article isn't about what its headline says it's about!" Misleading headlines are a problem, it's absolutely true. But after learning that Dynarski didn't write the headline, I think Scott should have been able to read the article on its own, and go back and evaluate the arguments Dynarski actually makes. It's the refusal to do this that seems to me to constitute a straw-man fallacy.

Anyway, one last point: I think Dynarski is actually wrong that economists are more wary of vouchers than other free-market policies. Yes, economists in general are probably wary of voucher schemes. But they're also a lot more favorable to government intervention in a variety of cases than Dynarski claims. Klein and Stern (2006) have some very broad survey data (much broader than IGM). They find that 67.1% of economists support "government production of schooling" at the k-12 level, with 14.4% uncertain and 17.4% opposed. But they also record strong support for a variety of other interventionist policies, such as income redistribution, various types of regulation, and stabilization policy. On many of these issues, economists are more interventionist than the general public! So I think if Dynarski makes a mistake, it's to characterize economists as being generally pro-free-market. Their ambivalence about vouchers doesn't look very exceptional.

Saturday, December 24, 2016

The Fundamental Fallacy of Pop Economics

The Fundamental Fallacy of Pop Economics (which I get to name, because this is my blog and I can do whatever I want, mwahahaha) is the idea that the President controls economic outcomes.

The Fundamental Fallacy is in operation every time you hear a phrase like "the Bush boom" or "the Obama recovery". It's in effect every time someone asks "how many jobs Obama has created". It's present every time you see charts of economic activity divided up by presidential administration. For example, here's a chart from Salon writer Sean McElwee, using data from a paper by Alan Blinder and Mark Watson:

Blinder and Watson attribute the difference to "shocks to oil prices, total factor productivity, European growth, and consumer expectations of future economic conditions", but McElwee attributes it to progressive economic policy.

Larry Bartels has made headlines with similar analyses about inequality:

But the worst perpetrators of this fallacy tend to be conservative econ/finance commentators. And of these, the worst I've seen is Larry Kudlow. Kudlow is being mooted for chairman of the Council of Economic Advisers -- basically, the president's chief economist. Here's an excerpt from a Kudlow post in December 2007 (!) denying that the economy was in danger: 
The recession debate is over. It’s not gonna happen. Time to move on. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum). The Bush boom is alive and well. It’s finishing up its sixth splendid year with many more years to come.
Notice how this turned out to be spectacularly wrong - the NBER later dated the start of the Great Recession to December 2007, the very month Kudlow wrote this - and how Kudlow explicitly associates economic good fortune with the President's term in office. Here's another, from around the same time:
The GOP...has a positive supply-side message of limited government, lower spending, and lower tax rates....I believe the economic pendulum will soon swing in favor of the GOP. There’s no recession coming. The pessimistas were wrong. It’s not going to happen. At a bare minimum, we are looking at Goldilocks 2.0. (And that’s a minimum)...The Bush boom is alive and well.
Or here's Kudlow on Obama:
You've had so much war on business in the last eight or 10 years…I think that has really damaged the economy and has held businesses back from investing and creating jobs. It will take a while to turn that ship around," Kudlow said of Obama's economic policies.
You see the same kind of President-based magical thinking here. In fact, go back and read Kudlow's commentary over the years, and his whole body of work is shot through with this simple thesis - Republican presidents are great for the economy, Democratic presidents are terrible, etc. Kudlow has ridden the Fundamental Fallacy about as far as it's possible to ride it. 

In a recent post, James Kwak declares that Kudlow is a victim of what he calls "economism" (and which I call "101ism"). He thinks Kudlow is wedded to a vision of an economy where free markets always work best. But I respectfully disagree with James. Kudlow doesn't seem to think about supply and demand, or deadweight loss, or any of that - nothing that would be taught in an econ class. Kudlow's thinking is more instinctive and tribal - it's "Republican President = good economy". It's the idea that if the man in charge comes from Our Team, things must go well, and if it's someone from the Other Team, things are bound to be a disaster. The Fundamental Fallacy doesn't come from Econ 101 - it's far more primal than that, an upwelling of our deepest pack instincts.

So, you may ask, why is the Fundamental Fallacy a fallacy? Three basic reasons:

1. Reason 1: Policy isn't all-powerful. 

Macroeconomic models are not reliable, so it's very hard to get believable numbers for the effects of policies like the Bush tax cuts or Obama's stimulus bill. But most estimates show that the effect of both was very modest - the Bush tax cuts might have increased overall GDP by 0.5-1.5% in the short term, and probably had close to no effect in the long term. Meanwhile, the ARRA's effect on unemployment and growth was probably quite modest. Optimistic estimates have Obama's policy package reducing unemployment by about 0.5-1.5 percentage points from 2009 through 2013 - not nothing, but not nearly enough to make the Great Recession go away. And those are the most optimistic, favorable estimates.

Only in (some) econ models does policy have complete control over things like GDP and unemployment. But those models are almost certainly highly misspecified. In reality, policy has institutional constraints - nominal interest rates can't go much below zero, there's a federal debt ceiling, etc. And even more importantly, if policy becomes extreme enough, the models themselves start to lose validity - if you have the government go deeply enough into debt, the fiscal stimulus effect will no longer be the only way in which more government borrowing affects the economy. 

In reality, things like growth and unemployment are often determined by natural forces rather than government decisions. For example, I suspect that the pattern of higher growth during Democratic administrations cited by Blinder & Watson is at least partly endogenous - recessions cause white working-class voters to ignore social/identity issues and vote for Democrats like Clinton in 1992 and Obama in 2008, allowing those Democrats to take credit for the natural as well as the policy-induced parts of the recovery.

2. Reason 2: The President doesn't control policy.

Charts like those of Bartels and Blinder & Watson, as well as buzzwords like Kudlow's "Bush boom," look only at the party of the President. But Congress is often controlled by a different party. Obama and Clinton faced Republican Congresses for much of their terms in office, and Reagan faced a Democratic House for his entire presidency. Even when the President has a Congress of the same party, it's often difficult for him to push through his desired policies - witness Bush's failure to privatize Social Security, or Clinton's failure to enact fiscal stimulus.

Additionally, a lot of power is held by the states. Much of Obama's stimulus bill actually just went to shore up decreases in state spending. Meanwhile, the Fed controls interest rates, and though the President appoints the Fed chair, he has very little control over what that Fed chair subsequently decides to do. 

3. Policy often acts with a lag.

Cutting taxes does relatively little if spending isn't also cut. That's because if tax cuts aren't eventually matched by spending cuts, then the government has to either hike taxes, or default on its debt. Therefore, if tax cuts don't "starve the beast", their only effect will be through short-run fiscal stimulus. And tax cuts aren't a very efficient form of stimulus.

So, guess what? Tax cuts don't ever seem to lead to spending cuts. "Starve the beast" doesn't work. 

This is just one example of how policy often acts with "long and variable lags". Deregulation is another. Many people believe that Reagan's deregulations led to the boom of the late 1980s, but Carter actually slashed a lot more regulation than Reagan did. It could have taken years for those deregulations to lead to higher growth. 

Any structural policy you want to name - welfare reform, tax cuts, infrastructure spending, research spending, trade treaties - should only have its full effect after a number of years. It takes years for businesses to invest and grow, for trade patterns to shift, and (probably) for worker and consumer behavior to permanently change. Presidents serve at most eight years. So even if presidents controlled policy, and even if policy was very effective, we'd still see many presidents getting credit for their predecessors' deeds.

Obviously there are some big exceptions to this. The President can start a war, and wars can make the economy boom (as in WW2 for America) or wreck it utterly (as in WW2 for everyone else). Given enough power, a President could in theory wreak havoc on the economy, as Hugo Chavez did in Venezuela. In poor countries, a strong President like Deng Xiaoping can push through reforms that change a country's entire economic destiny. 

But in a country that is already rich, where the President is restrained by checks and balances, and where policy changes are not sweeping - i.e., in the United States over the past half century - we would be well-advised not to exaggerate the economic impact of the chief executive.

Wednesday, December 14, 2016

Academic signaling and the post-truth world

Lots of people are freaking out about the "post-truth world" and the "war on science". People are blaming Trump, but I think Trump is just a symptom. 

For one thing, rising distrust of science long predates the current political climate; conservative rejection of climate science is a decades-old phenomenon. It's natural for people to want to disbelieve scientific results that would lead to them making less money. And there's always a tribal element to the arguments over how to use scientific results; conservatives accurately perceive that people who hate capitalism tend to over-emphasize scientific results that imply capitalism is fundamentally destructive.

But I think things are worse now than before. The right's distrust of science has reached knee-jerk levels. And on the left, more people seem willing to embrace things like anti-vax beliefs, and to be overly skeptical of scientific results saying GMOs are safe.

Why is this happening? Well, tribalism has gotten more severe in America, for whatever reason, and tribal reality and cultural cognition are powerful forces. But I also wonder whether a few of science's wounds might be self-inflicted. The incentives for academic researchers seem like they encourage a large volume of well-publicized spurious results. 

The U.S. university system rewards professors who have done prestigious research in the past. That is what gets you tenure. That is what gets you a high salary. That is what gets you the ability to choose what city you want to work in. Exactly why the system rewards this is not quite clear, but it seems likely that some kind of signaling process is involved - profs with prestigious research records bring more prestige to the universities where they work, which helps increase undergrad demand for education there, etc. 

But for whatever reason, this is the incentive: Do prestigious research. That's the incentive not just at the top of the distribution, but for every top-200 school throughout the nation. And volume is rewarded too. So what we have is tens of thousands of academics throughout the nation all trying to publish, publish, publish. 

As the U.S. population expands, the number of undergraduates expands. Given roughly constant productivity in teaching, this means that the number of professors must expand. Which means there is an ever-increasing army of people out there trying to find and report interesting results. 

But there's no guarantee that the supply of interesting results is infinite. In some fields (currently, materials science and neuroscience), there might be plenty to find, but elsewhere (particle physics, monetary theory) the low-hanging fruit might be picked for now. If there are diminishing returns to overall research labor input at any point in time - and history suggests there are - then this means the standards for publishable results must fall, or America will be unable to provide research professors to teach all of its undergrads.
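To make the diminishing-returns point concrete, here's a toy sketch (my own illustration, not a model from the post): if the flow of genuinely interesting findings grows like the square root of total research effort, then findings per professor fall as the professoriate expands, and keeping per-professor publication counts constant requires lowering the bar for what counts as publishable.

```python
import math

# Toy model. Illustrative assumption: the flow of interesting findings
# exhibits diminishing returns to total research effort, here ~ sqrt.
def interesting_findings(n_professors):
    return math.sqrt(n_professors)

for n in [10_000, 40_000, 160_000]:
    per_prof = interesting_findings(n) / n
    print(f"{n} profs -> {per_prof:.4f} interesting findings per prof")
```

Under this assumption, each quadrupling of the professoriate halves the number of interesting findings per professor - so if every professor must still publish at the same rate, the share of not-so-interesting papers has to grow.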

This might be why we have a replication crisis in psychology (and a quieter replication crisis in medicine, and a replication crisis in empirical economics that no one has even noticed yet). It might be why nutrition science changes its recommendations every few months. It might be a big reason for p-hacking, data mining, and specification search. It might be a reason for the proliferation of untestable theories in high-energy physics, finance, macroeconomics, and elsewhere. And it might be a reason for the flood of banal, jargon-drenched unoriginal work in the humanities.

Almost every graduate student and assistant professor I talk to complains about the amount of bullshit that gets published and popularized in their field. Part of this is the healthy skepticism of science, and part is youthful idealism coming into conflict with messy reality. But part might just be low standards for publication and popularization. 

Now, that's in addition to the incentive to get research funding. Corporate sponsorship of research can obviously bias results. And competition for increasingly scarce grant money gives scientists every incentive to oversell their results to granting agencies. Popularization of research in the media, including overstatement of results, probably helps a lot with that.  

I recall John Cochrane once shrugging at bad macro models, saying something like "Well, assistant profs need to publish." OK, but what's the impact of that on public trust in science? The public knows that a lot of psych research is B.S. They know not to trust the latest nutrition advice. They know macroeconomics basically doesn't work at all. They know the effectiveness of many pharmaceuticals has been oversold. These things have little to do with the tribal warfare between liberals and conservatives, but I bet they contribute a bit to the erosion of trust in science. 

Of course, the media (including yours truly) plays a part in this. I try to impose some quality filters by checking the methodologies of the papers I report on. I'd say I toss out about 25% of my articles because I think the paper's methodology is B.S. And even for the ones I report on, I try to mention important caveats and potential methodological weaknesses. But this is an uphill battle. If a thousand unreliable results come my way, I'm going to end up treating a few hundred of them as real.
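A quick back-of-the-envelope check on that last sentence (the 70% catch rate below is my own illustrative assumption, not a figure from the post): even a methodology screen far stricter than tossing 25% of articles still lets bad results through by the hundreds when a thousand come in.

```python
def bad_results_passed(n_bad, catch_rate):
    """Number of unreliable results that survive a methodology screen."""
    return n_bad * (1 - catch_rate)

# Tossing 25% of a thousand unreliable results still passes 750.
print(bad_results_passed(1000, 0.25))  # 750.0

# Even a (hypothetical) screen that catches 70% of bad work passes ~300.
print(bad_results_passed(1000, 0.70))
```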

So if America's professors are really being incentivized to crank out crap, what's the solution? The obvious move is to decouple research from teaching and limit the number of tenured research professorships nationwide. This is already being done to some extent, as universities rely more on lecturers to teach their classes, but maybe it could be accelerated. Another option is to use MOOCs and other online options to allow one professor to teach many more undergrads. 

Many people have bemoaned both of these developments, but by limiting the number of research profs, they might help raise standards for what qualifies as a research finding. That won't fully restore public trust in science - political tribalism is way too powerful a force - but it might help slow its erosion.

Or maybe I'm completely imagining this, and academic papers are no more full of B.S. than they ever were, and it's all just tribalism and excessive media hype. I'm not sure. But it's a thought, anyway.