Politics

Don't Wait for the Government's Economic Data

Unemployment numbers and the consumer price index (CPI) are among the most important economic indicators used by both Wall Street traders and government policy makers. During the government shutdown those statistics weren't being collected or calculated, and now that the government is up and running again it will be a while before the latest numbers are released.

According to the New York Times, the October 4 unemployment report will be released October 22, and the October 16 CPI will come out on Halloween Eve. But you don't have to wait for those numbers. In fact, these numbers are always slow: CPI takes about a month to calculate, so even if it had come out on October 16 as scheduled, the number still would have been out of date. Luckily, there are already other good tools for anticipating what these numbers will be even before they're released. For CPI, there's the Billion Prices Project developed at MIT (now at PriceStats). For more on the Billion Prices Project, check out this post. And for unemployment, we worked through how a model based on search trends can be used to forecast the yet-to-be-released official number in the last post on this blog.


PriceStats (based on MIT's Billion Prices Project) already knows what the CPI number will be.

This is a great opportunity for investors and policy makers to reduce their dependence on glacially paced, antiquated statistics and start incorporating information from newer, faster tools into their decision making.
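To give a flavor of what one of those faster tools looks like, here is a minimal sketch of a search-trends nowcast along the lines of the model discussed in the last post. It is only an illustration of the general idea, not that post's actual model: the search index and unemployment figures below are invented, and in practice you would pair something like a job-search index from Google Trends with the official monthly series.

import numpy as np

# Hypothetical monthly data: a job-search trend index (available every month)
# and the official unemployment rate (the latest month not yet released).
search_index = np.array([62.0, 65.0, 71.0, 68.0, 74.0, 79.0, 83.0])
unemployment = np.array([6.9, 7.0, 7.2, 7.1, 7.3, 7.4])

# Fit a simple linear model on the months where both series are available.
slope, intercept = np.polyfit(search_index[:-1], unemployment, 1)

# Nowcast the not-yet-released month from the latest search data.
nowcast = slope * search_index[-1] + intercept
print(f"Nowcast for the unreleased month: {nowcast:.2f}%")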

 

"Predicting the Present" at the CIA

The CIA is using tools similar to those we teach in the Kellogg Social Dynamics and Networks course to "predict the present" according to an AP article (see also this NPR On the Media interview).

While accurately predicting the future is often impossible, it can be pretty challenging just to know what's happening right now.  Predicting the present is the idea of using new tools to get a faster, better picture of what's going on at this moment.  For example, the US Bureau of Labor Statistics essentially gathers the pricing information that goes into the Consumer Price Index (CPI) by hand (no joke, read how they do it here). This means that the government's measure of CPI (and thus inflation) is always a month behind, which is not good for making policy in a world where decades-old investment banks can collapse in a few days.

To speed the process up, researchers at MIT developed the Billion Prices Project, which, as the name implies, collects massive quantities of price data from across the Internet to get a more rapid estimate of CPI. The measure works, and it is much more responsive than the government's. For example, in the wake of the Lehman collapse, the BPP detected deflationary movement almost immediately, while it took more than a month for those changes to show up in the government's numbers.
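To make that concrete, here is a toy sketch of how an online price index can work: track the same products day after day, average the day-over-day price relatives, and chain them into an index. This is only the general flavor (and the prices below are made up), not the Billion Prices Project's actual methodology.

import numpy as np

# Rows are products scraped from retailer websites; columns are daily prices.
prices = np.array([
    [2.50, 2.50, 2.45, 2.40],
    [9.99, 9.79, 9.79, 9.49],
    [1.20, 1.20, 1.18, 1.15],
])

# Day-over-day price relatives, averaged with a geometric mean across products.
relatives = prices[:, 1:] / prices[:, :-1]
daily_change = np.exp(np.log(relatives).mean(axis=0))

# Chain the daily changes into an index that starts at 100.
index = 100 * np.concatenate([[1.0], np.cumprod(daily_change)])
print(np.round(index, 2))  # a steadily falling index would signal deflationary movement

A real implementation has to handle products entering and leaving the sample, category weights, and noisy scrapes, but even this crude version reacts day by day rather than month by month, which is the point.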

Crowdsourcing the Palin Email Release

Slate reports that several major news outlets, including the Washington Post and the New York Times, are planning to use crowdsourcing to scour the thousands of pages of emails from Sarah Palin's time as Governor of Alaska that will be released on Friday.

In many ways this is a perfect crowdsourcing task.  It would be hugely time-consuming for news reporters to sift through the more than 24,000 pages of email themselves.  And automating this process would be next to impossible, because what counts as "interesting" is very difficult to program into a natural language processor; on the other hand, it is relatively easy for humans to pick out.  The task comes with built-in motivation: first, people are personally interested in reading Palin's emails; second, Palin's detractors are motivated to try to dig up embarrassing information, and supporters will be motivated to respond; and third, finding something interesting comes with the promise of acknowledgement in the pages of a major news outlet.  All this adds up to the fact that you don't need to pay anyone to do this, and do it well.  The biggest potential pitfall is that crowdsourcing relies fundamentally on local information.  Each individual looks through a handful of emails, which is good for finding particular juicy quotes but not so good for identifying larger patterns.  To combat this, the news outlets could rely on wiki-like interfaces where the crowdsourcers could post "leads" that other individuals could add to in order to piece together larger narratives.

One person, one vote?

An article in the New York Times describes recent research by economists Brian Knight and Nathan Schiff on the relative impact of votes from different states in the presidential primaries. They estimate that a vote in an Iowa or New Hampshire primary has as much impact as five Super Tuesday votes. The focus of the Times article is on the policy implications of this impact inequality. One of the interesting things about this research is how "impact" is measured.  What the article doesn't mention is why there is any impact difference in the first place. After all, mathematically, a vote in Iowa or New Hampshire counts just as much as one in New Jersey or Montana.

The way Knight and Schiff estimated "impact" was to look at election polls before and after each primary.  They found that the polls shifted the most after early primaries.  Their theory is that voters are uncertain about the quality of different candidates, but learn (or infer) something about that quality by observing others.  This is kind of like noticing that a lot of people drive a certain type of car and inferring that it must be a pretty good car.  But we could imagine several other stories.  For example, if voters who prefer candidate A perceive candidate B as a lock to win the nomination, then maybe they decide not to vote.  Prospective voters for candidate B, on the other hand, may continue to vote because they enjoy being on the winning side.  Some undecided voters may shift toward candidate B for the same reason.  A question that has perplexed political scientists and economists for decades is why anyone votes in the first place.  A more careful look at these results could shed light on that question.
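To make the measurement idea concrete, here is a rough sketch of that before-and-after comparison. The poll numbers are invented, and this is only the flavor of the approach, not Knight and Schiff's actual estimation strategy.

# (contest, candidate's national poll share shortly before, share shortly after)
primaries = [
    ("Iowa",          28.0, 34.0),
    ("New Hampshire", 34.0, 39.0),
    ("Super Tuesday", 41.0, 42.0),
]

for contest, before, after in primaries:
    shift = after - before
    print(f"{contest:>14}: poll shift of {shift:+.1f} points")

Larger post-contest shifts for the early states are what would drive a larger estimated per-vote impact for Iowa and New Hampshire than for the Super Tuesday states.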

Another Tipping Point Sighting

The New York Times makes another tipping point claim today.  This one seems more believable:  “For many gay rights advocates, the decision amounts to a turning point in the debate — the moment at which opposition to same-sex marriage came to look like bigotry, similar to racial discrimination and the subordination of women.”  Attitudes regarding gay marriage (and segregation, women’s rights, ...) are social norms, and norm formation is a process that social scientists have studied and modeled extensively.  Almost all of these models do exhibit tipping behavior.  Whether or not this is in fact the tipping point for norms regarding the definition of marriage is an empirical question, but such a tipping point almost certainly exists.
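To see why models of norm formation so often produce tipping, here is a minimal sketch in the spirit of Granovetter's classic threshold model. It is an illustration of the mechanism, not a claim about how attitudes toward marriage actually evolved, and the threshold distribution below is made up.

import numpy as np

rng = np.random.default_rng(0)
thresholds = rng.beta(2, 3, size=1000)  # each person's personal tipping threshold

def final_support(initial_share, thresholds):
    """Let people adopt the norm once support exceeds their threshold, until nothing changes."""
    share = initial_share
    for _ in range(1000):
        new_share = np.mean(thresholds <= share)
        if new_share == share:
            break
        share = new_share
    return share

for seed_share in (0.05, 0.10, 0.20, 0.30):
    print(f"start at {seed_share:.0%} -> settle at {final_support(seed_share, thresholds):.0%}")

Below some critical level of initial support the process dies out; above it, the same population cascades to near-universal adoption. That knife-edge is exactly the tipping point these models predict.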