by Ed Dolan
Any econ blogger who has ever written a line about inflation is familiar with ShadowStats. Time and again, readers cite it in comments that are not infrequently paranoid in tone and rude in language. Brief replies that cast doubt on some of the more extreme claims made by ShadowStats fans don’t seem to have much effect. After a recent round of comments, I promised the editor of one website to undertake a thorough deconstruction of ShadowStats. Here is the result.
What ShadowStats Gets Right: The CPI is a Flawed Measure of the Cost of Living
ShadowStats is Williams’ attempt to provide an alternative to the official consumer price index (CPI), which he views as a flawed measure of what members of the general public have in mind when they think of the cost of living. Let me start by saying that although I share the skepticism of many economists about the specific numbers published on ShadowStats, I agree that the official data do not tell the whole story. I support Williams’ attempt to provide an alternative to the official consumer price index that more closely reflects public perceptions of inflation. Here, in his own words, is how Williams explains his undertaking:
In the last 30 years, a growing gap has been obvious between government reporting of inflation, as measured by the consumer price index (CPI), and the perceptions of actual inflation held by the general public. Anecdotal evidence and occasional surveys have indicated that the general public believes inflation is running well above official reporting . . .
Measurement of consumer inflation traditionally reflected assessing the cost of maintaining a constant standard of living, as measured by a fixed-basket of goods. Maintaining a constant standard of living, however, is a concept not popular in current economic literature, and certainly not within the thinking or the lexicon of the Bureau of Labor Statistics (BLS), the government’s statistical agency that estimates and reports on consumer inflation. . . Individuals look to the government’s CPI as a measure of the cost of maintaining a constant standard of living, as well as measuring that cost of living in terms of out-of-pocket expenses. Without meeting those parameters, an inflation measure has limited, if any, use for an individual.
Williams is right about the gap between public perceptions of inflation and official indicators. As a recent series of posts on inflation expectations on the Atlanta Fed’s Macroblog noted, “Inflation surveys of households reveal a remarkably wide range of opinion on future inflation compared to those of professional forecasters. Really, really wide.” According to Macroblog, household expectations of inflation for the coming year consistently average two percentage points higher than those of professional forecasters, and some 13 percent of household respondents report inflation expectations of 10 percent or higher even at a time when professional forecasts fall short of 2 percent.
In technical terminology, we refer to a cost of living index based on the changing cost of a fixed-proportion basket of goods that themselves remain unchanged over time as a Laspeyres index without quality adjustment. Williams is again correct when he says that the official CPI, following mainstream academic thinking, has gradually evolved away from the Laspeyres concept toward a measure of the cost of a changing basket of goods that gives equivalent satisfaction as the prices, quantities, and qualities of the goods that consumers buy change over time.
The substitution issue. One of Williams’ key objections to the CPI is that instead of holding the cost-of-living basket unchanged for long periods, the BLS allows for frequent changes in its composition. Some changes in the consumer market basket occur when goods like audio cassette players become technically obsolete and new goods like cell phones appear on the market, but those are not the ones that Williams takes issue with.
What he finds more objectionable are changes in the composition of the market basket that stem directly from changes in prices, as, for example, when people eat more chicken because beef becomes unaffordably expensive. To many people, fiddling with the market basket to give more weight to the goods whose prices increase least and less to those whose prices increase most sounds like cheating. They see it as if a teacher tried to impress a tenure committee with high student test scores by letting the smart kids take the test several times each while sending their slow-learning classmates home on testing day.
Mainstream economists have a standard response: If we did not account for changed consumption patterns in response to changed prices, they say, we would overstate the cost of maintaining a constant level of satisfaction. Consider an example. Last week you went to the supermarket and bought 5 pounds of chicken at $2 a pound and 5 pounds of steak at $5 a pound, $35 total. This week you go to the supermarket and find that chicken still costs $2 but steak has gone up to $10. There is no question that the new prices leave you worse off than you were the week before, but how do you react?
You would need $60 to buy the same basket of goods that you bought last week for $35. In reality, you might not have that $60 in your wallet or purse, but if I gave you a $60 coupon that you could spend only at the meat counter, you would probably not spend it on the same basket of goods you bought last week. Instead, you might buy, say, 10 pounds of chicken and 4 pounds of steak. However, since $60 would be enough to buy your previous selection if you wanted to, we could conclude that you would change the mix only if the new $60 selection gave you more satisfaction than the original one.
Experience shows that if you put a large number of consumers in this situation and average their behavior, they will shift their consumption toward chicken, even though some individuals might stick with the original mix. Those who shift are better off with $60 and the new prices than with $35 and the old prices, and those who don’t shift are no worse off. In that sense, $60 overstates the increase in income the average consumer would need to reach the same level of satisfaction as before the price change.
Your cost of living has gone up, and that hurts, but just how much has the increase in the price of steak raised your cost of living? By the ratio of 60/35, a 70 percent increase, or by less than that? It depends on what you mean by the cost of living. If you mean the cost of buying a fixed market basket (the popular conception), then the 70% is correct. If you mean the cost of maintaining a fixed level of satisfaction, then 70% is an overstatement.
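The chicken-and-steak arithmetic above can be sketched in a few lines. The "substituted" basket (10 pounds of chicken, 4 pounds of steak) is the hypothetical response assumed in the example, not data:

```python
# Minimal sketch of the fixed-basket (Laspeyres) arithmetic in the text.
old_prices = {"chicken": 2.0, "steak": 5.0}
new_prices = {"chicken": 2.0, "steak": 10.0}
old_basket = {"chicken": 5, "steak": 5}    # last week's purchases
new_basket = {"chicken": 10, "steak": 4}   # a plausible substitution response

def cost(basket, prices):
    """Total cost of a basket at the given prices."""
    return sum(qty * prices[good] for good, qty in basket.items())

old_cost = cost(old_basket, old_prices)            # $35
fixed_basket_cost = cost(old_basket, new_prices)   # $60: old basket, new prices
substituted_cost = cost(new_basket, new_prices)    # also $60, by construction

# The fixed-basket measure of the cost-of-living increase (the "70 percent"):
laspeyres_increase = fixed_basket_cost / old_cost - 1
```

Because the substituted basket costs the same $60 but is freely chosen at the new prices, it gives at least as much satisfaction as the old basket, which is why the 70 percent figure overstates the increase in the cost of maintaining constant satisfaction.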
The quality issue. In addition to adjusting the relative quantities of goods in the consumer market basket over time, the BLS adjusts the CPI for changes in the quality of goods. The rationale for doing so is that failure to account for quality improvements would cause a further overstatement of the increase in spending needed to maintain a constant level of consumer satisfaction.
Consider tires for your car. In the old days, you were lucky if a set of bias-ply tires lasted 30,000 miles. Today, a decent set of radial tires will go 60,000 miles or more, and give you a better ride along the way. So, if the price of a set of tires has increased from $100 to $400, what has been the impact on your cost of living? If you calculate the cost per tire, without accounting for quality, tires are four times more expensive than they used to be. If you calculate the cost per mile, they are only twice as expensive.
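The tire comparison reduces to two ratios, one unadjusted and one adjusted for the quality change (miles of service per set):

```python
# Tire example from the text: per-tire vs. per-mile price comparison.
old_price, old_miles = 100.0, 30_000   # bias-ply set
new_price, new_miles = 400.0, 60_000   # radial set

# Unadjusted: tires cost four times what they used to.
per_tire_ratio = new_price / old_price

# Quality-adjusted: cost per mile of service has only doubled.
per_mile_ratio = (new_price / new_miles) / (old_price / old_miles)
```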
Williams does not necessarily object to adjusting for quality changes when they are objectively measurable, like package size or the number of miles you get from a set of tires. However, he argues that the BLS exaggerates the importance of quality by making adjustments for changes that consumers don’t really care about. In one post, he uses the example of two computers, purchased ten years apart. Yes, the newer computer has many extra features—more memory, a faster processor, a sharper display, and so on, each of which is quantifiable. However, not all consumers care about the new features. If you just use your computer for e-mail and browsing the web, and not for running big financial spreadsheets or high-powered gaming, who cares about processor speed? The old model does the job just as well.
Other issues. Williams has a number of other criticisms of the CPI beyond the substitution and quality issues. In particular, he takes issue with the way the BLS measures housing prices and medical costs. Without going into detail, in both cases Williams favors an out-of-pocket approach to housing and medical costs as being more in tune with the general public’s concept of the cost of living. I think it is fair to say that mainstream economists agree that these two items, which loom large in household budgets, are particularly difficult to measure, although not everyone agrees with the way Williams would like to see them handled. I hope to deal with these issues in a future post, but this one will focus on the basics.
Where ShadowStats goes wrong: How great is the understatement?
No one really denies that the CPI, as presently calculated, understates the rate of inflation compared to a measure based on a fixed basket of unchanged goods. Rather, what many economists, myself included, find hard to accept is Williams’ estimate of the degree of understatement. The following chart, reproduced by permission and updated monthly on ShadowStats.com, claims that since the early 1980s, the CPI has been understating the true rate of inflation by an ever increasing margin that now amounts to some 7 percentage points.
To get a more complete picture, here are the ShadowStats and CPI data in tabular form. (Williams does not permit reproduction of data that he provides to subscribers only, but his inflation estimates can be derived indirectly using the publicly available inflation calculator posted online by Tom R. Halfhill.)
Column (2) of the table gives the annual inflation rate, as calculated by Williams. His numbers agree with the official CPI data in Column (3) for 1980 and before. After that, they are mostly higher. Columns (4) and (5) apply the inflation rates for each year to calculate a price index with a 1980 base-year value of 100. Based on the ShadowStats rates, the index increases from 100 in 1980 to 1235 by 2014. If you apply the official inflation data, the index increases from 100 to 287 over the same period.
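The indexes in Columns (4) and (5) are built by compounding each year's inflation rate onto the base-year value of 100. Here is a sketch of that calculation with illustrative placeholder rates (not Williams' subscriber data):

```python
# Compound a series of annual percentage inflation rates into a price
# index with a base-year value of 100, as in Columns (4) and (5).
def build_index(annual_rates_pct, base=100.0):
    """Return the index path, starting at the base-year value."""
    index = [base]
    for r in annual_rates_pct:
        index.append(index[-1] * (1 + r / 100))
    return index

# Three illustrative years of 10%, 5%, and 3% inflation:
idx = build_index([10.0, 5.0, 3.0])
```

Compounding is also why a gap of several percentage points per year produces such an enormous spread between terminal index values of 287 and 1235 over three and a half decades.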
Like most professional economists, I find the rates of inflation given by ShadowStats to be implausibly high, even if we accept the notion of measuring inflation based on a fixed basket of unchanged goods. Economists at the BLS itself have published a defense of their own methodology, so there is no reason to repeat what they have done. Instead, I would like to make some simple and intuitive crosschecks of the ShadowStats vs. the CPI cost of living estimates. All of these crosschecks will use information from sources outside the control of the BLS or other agencies of the US government.
The grocery price crosscheck. The first crosscheck focuses on the prices of common grocery items. Old newspaper ads are a good nongovernmental source of historical price information. Here is one from 1982:
Pick a selection of items from this ad, then go to your local supermarket and note today’s prices for the comparable items. Now, perform two sets of calculations: First, predict forward to calculate how much each item should cost as of February 2015, using both the ShadowStats estimate of the increase in the cost of living from 1982 to 2014 (10.5 times higher) and the CPI estimate (2.5 times higher). Second, starting with the 2015 prices, predict backward to calculate how much each item should have cost in 1982 according to each of the two cost-of-living indexes.
The following table shows what I got when I performed the experiment, using prices for February 9, 2015 from my own local supermarket, Tom’s in Northport, Michigan. I have adjusted all prices for changes in package size, if any.
For example, a can of tomato sauce that cost $.25 at Piggly Wiggly in 1982 cost $.79 at my local market in early 2015. Starting from the 1982 price, the CPI predicts that it should cost $.61 in 2015 while ShadowStats predicts that it should cost $2.64. Starting from the 2015 price and working backwards, the CPI predicts that it should have cost $.32 in 1982 while ShadowStats predicts that it should have cost $.08. Based on these calculations, we see that the CPI underestimates inflation, as measured by the Tomato Sauce Index: The ratio of the 2015 predicted price of $.61 to the 2015 actual price, $.79, is .77, an underestimate of 23 percent. The ratio of the ShadowStats prediction to the actual price is 3.32, an overstatement of 232 percent. For tuna, both indexes overestimate inflation, the CPI by 34 percent and ShadowStats by 478 percent, and so on.
Continuing in that way, we see that the average underestimate of inflation from the CPI is 9 percent while the average overestimate from ShadowStats is 292 percent. However, we might want to take into account the fact that the 1982 prices given in the ad are sale prices, while those from 2015 are everyday prices that I took right from the shelf at the store. If we were able to use higher, everyday prices from 1982 instead of sale prices as the base for our calculations, it is likely that the CPI would not underestimate food inflation at all, while the overestimate from ShadowStats would be even greater.
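The tomato-sauce arithmetic can be sketched in a few lines. This uses the rounded cumulative factors quoted earlier (2.5 for the CPI, 10.5 for ShadowStats), so the results differ from the table's figures by a cent or two, since the table uses unrounded factors:

```python
# Tomato-sauce crosscheck: forward and backward price predictions
# using rounded 1982-2015 cumulative cost-of-living factors.
price_1982, price_2015 = 0.25, 0.79
cpi_factor, ss_factor = 2.5, 10.5   # rounded factors from the text

forward_cpi = price_1982 * cpi_factor   # predicted 2015 price, CPI
forward_ss = price_1982 * ss_factor     # predicted 2015 price, ShadowStats
backward_cpi = price_2015 / cpi_factor  # implied 1982 price, CPI
backward_ss = price_2015 / ss_factor    # implied 1982 price, ShadowStats

ratio_cpi = forward_cpi / price_2015    # < 1: CPI underestimates this item
ratio_ss = forward_ss / price_2015      # > 1: ShadowStats overestimates it
```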
Some ShadowStats supporters might object that this comparison is unfair because there have been few if any quality changes in the grocery items under consideration. They would say it is only to be expected that the CPI understates inflation less seriously for items that are not subject to quality adjustment. Keep in mind, though, that the BLS applies its controversial hedonic quality adjustment procedure to only 3 percent of the ordinary consumer goods that enter the CPI market basket. (See this source for a list. The BLS also applies quality adjustments to housing, which constitutes another 30 percent of the CPI.) For the vast majority of consumer goods, like groceries, for which the BLS does not make quality adjustments, ShadowStats fails the crosscheck.
The catalog price crosscheck. To get an idea of the accuracy of the CPI for goods that do undergo significant quality changes, we can turn to another source of historical price information: old mail order catalogs. Full reproductions of historical catalogs from Sears, Wards, and other retailers are available online at wishbookweb.com. In a post I wrote a couple of years ago, I proposed this simple thought experiment: If you could choose between shopping online today at today’s prices, or buying from a mail order catalog of the past at past prices but with your present disposable income, what items, if any, would you buy from the past?
For example, would you buy a 25 inch color television from the 1981 Wards Christmas Catalog (p. 344) at $688.88, or would you buy the Samsung 28 inch LED model that I found on Amazon for $219? Based on the 1981 price, ShadowStats would predict that a 25 inch TV should cost $7,712 today, whereas the CPI would predict that it should cost a mere $1,794. Both indexes wildly overstate TV price inflation even without allowing for quality changes, but the CPI does not miss by nearly as much.
Maybe it is unfair to use something high-tech like a TV, so let’s go low tech. How about a pair of ladies’ leather gloves, item A on p. 85 of the 1981 Ward’s catalog, for $13? Compare them to a pair I found on Amazon for $10.70. ShadowStats suggests that a pair of leather gloves should cost $145 today.
You can repeat this experiment for item after item, and you come to the same conclusion: ShadowStats seriously overstates inflation for items of changing quality just as it does for those of unchanging quality.
The physical output crosscheck. Among the other charts published by ShadowStats is one that compares the growth rate of real GDP, as officially reported, with an alternate GDP series that is adjusted using Williams’ higher estimates of inflation. Here is the chart, reproduced with permission, as of March 2015:
The official series shows steady growth of real GDP for the years since 2000 except during the recession years of 2008 and 2009. In contrast, Williams’ series shows negative growth for every year except 2004. Many observers find it hard to believe that real output has been falling almost constantly for the past fifteen years, especially in view of the fact that total hours worked have increased slightly over the period. However, it would be nice to have some independent data, not tainted by inflation estimates, against which we could crosscheck the real GDP numbers.
During the Great Depression and earlier, before the concept of real GDP was invented, policymakers used to resort to purely physical indexes of output to gauge the strength of the economy—things like boxcar loadings and steel output. We can do the same. The next chart shows three such indicators: the number of new cars sold annually, an index of ton-miles of freight transportation, and an index of kilowatt hours of electricity generated.
All three show positive growth on average since 2000 (3.8 percent for autos, 1.4 percent for freight transportation, and 1.1 percent for electricity). It is hard to believe such growth would have been possible during a period when real GDP was falling at an average rate of 2 percent per year, as it has according to the ShadowStats series. Why would people be buying more and more cars if their incomes were falling? Why would railroads and barges be hauling more and more tons of freight if mines and factories were producing less and less output? Why would kWh of electricity per dollar of real GDP be rising by 3 percent per year despite the widespread adoption of more efficient lighting, more efficient electric motors, and more efficient buildings?
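The electricity figure in the last question follows from one line of arithmetic, using the average growth rates quoted above and the GDP decline implied by ShadowStats:

```python
# Where the "3 percent per year" figure comes from: electricity output
# growing 1.1% per year while ShadowStats-implied real GDP falls 2%.
electricity_growth = 0.011        # average annual growth in kWh generated
shadowstats_gdp_growth = -0.02    # ShadowStats-implied real GDP growth

# Growth rate of kWh per dollar of real GDP (ratio of growth factors):
kwh_per_gdp_growth = (1 + electricity_growth) / (1 + shadowstats_gdp_growth) - 1
# roughly 0.032, i.e. about 3 percent per year
```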
The physical output indexes are much more plausible if we accept the official GDP series: A rise in car sales roughly in line with the growth of real incomes, freight transportation lagging real GDP somewhat because service sectors are growing faster than goods-producing sectors, and electricity output growing but lagging real GDP because of more efficient technology.
The interest rate crosscheck. One final crosscheck uses data from financial markets to compare nominal and real interest rates. Nominal interest rates are those that are stated in the ordinary way, in dollars of interest that must be paid annually per dollar borrowed. Real interest rates are adjusted for changes in the purchasing power of money by subtracting the rate of inflation from the nominal rate of interest.
According to a well-established principle of financial economics known as the Fisher effect, nominal interest rates tend to rise as inflation accelerates and fall as it slows. It is easy to understand why. Suppose that a bank would be willing to loan you $1,000 for a year at 3 percent interest if it expected zero inflation during the year, and that you were willing to borrow at that rate. Now suppose that, instead, both you and the bank expected the price level to rise by 5 percent over the year. The bank would no longer be satisfied with a 3 percent nominal rate. The $1,030 you would pay back at the end of the year would be worth less than the $1,000 you borrowed in the first place. The bank’s real rate of return would be 3 percent minus 5 percent = -2 percent. However, if you agreed to pay an inflation premium of 5 percent, bringing the nominal rate up to 8 percent, the bank would get its desired 3 percent real return even taking inflation into account. From your point of view, the loan would still be worth it, especially if you expected your salary to rise with inflation. In that case, coming up with $1,080 at the end of the year would be no harder than it would have been to pay $1,030 if there had been no inflation and no wage increase. (See here for a more detailed explanation of the Fisher effect.)
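The loan arithmetic in the example above can be written out directly:

```python
# Fisher-effect loan example from the text: a bank wanting a 3 percent
# real return when both parties expect 5 percent inflation.
principal = 1000.0
real_rate = 0.03            # the bank's desired real return
expected_inflation = 0.05   # inflation premium added to the nominal rate

nominal_rate = real_rate + expected_inflation   # 8 percent
repayment = principal * (1 + nominal_rate)      # $1,080 due at year end

# Without the premium, the bank's real return would be negative:
real_return_unadjusted = real_rate - expected_inflation   # -2 percent
```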
The Fisher effect doesn’t hold exactly every year. Market conditions change and expectations do not always turn out to be accurate. However, the principle does hold well enough so that the real rate varies less over time than the nominal rate. It also ensures that periods of negative real interest rates on ordinary consumer loans are brief and rare.
Let’s take a look at interest rates over the past 25 years, then, to see what they tell us about ShadowStats vs. the CPI. The next chart uses the interest rate on 30-year conventional mortgages, but almost any other interest rate would give a similar picture. The chart shows the nominal 30-year mortgage rate and two real rates, one calculated by subtracting the inflation rate according to the CPI and the other according to ShadowStats.
What we see is that as inflation slows from the 1980s to the present, the nominal interest rate gradually falls. The real interest rate based on the official CPI remains roughly constant during the 1990s, and then decreases in the 2000s, but decreases less rapidly than the nominal rate and remains positive in every year. That pattern is consistent with the Fisher effect.
The real interest rate based on the ShadowStats inflation rate shows a very different pattern. It decreases more rapidly than the nominal rate and is negative in every year after 1995. This pattern is not consistent with the Fisher effect, nor is it consistent with common sense. In order to believe the ShadowStats real interest rate, we have to believe that banks have been losing money on every dollar of mortgage loans for the past 20 years. Furthermore, far from learning from their mistakes, their loans have become more and more unprofitable as time has gone by. It is simply not credible. ShadowStats fails another crosscheck.
Has Williams Simply Made a Mistake?
The fact that the ShadowStats inflation rate fails every crosscheck makes one wonder whether Williams has simply made some kind of mistake in his calculations. I believe that he has done just that. The mistake, I think, can be found in a table given in a post that represents Williams’ most complete explanation of his methodology. Here is the table in question, reproduced from his website, with the addition of my own lettered column headings for easier reference and the correction of a minor typographical error in Williams’ heading for Column E:
This table is of critical importance because it is the source of 5.1 points of the 7 percentage point gap between the official BLS inflation rate and the ShadowStats inflation rate. Here is how Williams himself summarizes his findings:
The substitution-related alterations to inflation methodologies were made beginning in the mid-1990s. The introduction of major hedonic concepts began in the 1980s. The aggregate impact of the reporting changes since 1980 has been to reduce the reported level of annual CPI inflation by roughly seven percentage points, where 5.1 percentage points come from the BLS’s published estimates of the effects of the individual methodological changes on inflation, shown in the preceding table. The balance comes from ShadowStats estimates of the changes not formally estimated by the BLS. The effects are cumulative going forward in time.
The intention of the table is to estimate the effect on the inflation rate of BLS methodological changes by comparing two different versions of the CPI. The series shown in Column D is the official CPI-U as originally published. The BLS does not revise these official CPI-U data after release even when it later makes methodological changes that would result in a different reported rate. The series in Column B is an experimental “research series” called CPI-U-RS that was devised by BLS economists to show what the index for urban consumers (CPI-U) would have looked like if all of the methodological changes made in the 1980s and 1990s had been in force from 1980 onward. Columns C and E show the year-on-year inflation rate based on CPI-U-RS and CPI-U, respectively.
I agree with Williams that it is possible, in principle, to estimate the impact of the methodological changes in question by comparing CPI-U and CPI-U-RS. For example, imagine that a methodological change made in 1985 reduced the reported inflation rate by 0.8 percentage points per year and another change in 1990 reduced it by an additional 0.5 percentage points. The CPI-U-RS inflation rate would then be 1.3 percentage points lower than the CPI-U inflation rate for each year from 1980 through 1984, when neither change had yet taken effect. From 1985 through 1989, we would expect the CPI-U-RS inflation rate to be 0.5 percentage points lower, because CPI-U for those years would incorporate the 1985 change but not the 1990 change. For 1990 and later years, when both changes were in effect, there would be no difference between the two inflation rates.
In practice, the situation is a little more complicated, because a given methodological change does not necessarily change the annual inflation rate by a fixed number of basis points in every year. Suppose, for example, that the change in question affects only housing costs. The exact amount by which the change slowed or increased reported inflation could vary as the weight assigned to the housing sector changes over time, and also as the rate of housing inflation increases or decreases relative to the average rate of inflation for all goods and services. Still, it seems reasonable to expect that any such sector-specific effects would average out over time. If so, Williams’ assumption that we can calculate the impact of methodological changes by comparing the difference between the inflation rate for CPI-U and CPI-U-RS would, over a number of years, give a reasonable approximation.
Williams labels Column F “Change in annual inflation.” (I prefer to call it the “inflation differential,” since it is not a change from one year to the next, but rather, the difference between the inflation rates derived from CPI-U-RS and CPI-U, but terminology is not the central issue.) A negative entry in this column indicates that CPI-U-RS shows slower inflation than CPI-U. As we would expect, that is the case for most of the early years in the table. After 1999, when all of the controversial methodological changes are fully reflected in the CPI, there is essentially no difference between the two series, which is also what we would expect.
The trouble comes in Column G of the table. That column gives the running totals of the inflation differentials from Column F, which, by 1999, reach 5.1 percentage points. Williams’ mistake is to misinterpret the figures in this column as what he calls the “cumulative annual inflation shortfall,” that is, as a measure of the amount by which the official CPI-U underestimates the rate of inflation that would have been reported if the controversial methodological changes had never been made. However, even by Williams’ own assumptions, that interpretation is incorrect, as we can see by working through a few lines of the table.
Start with the line for 1981. In that year, the officially reported inflation rate, based on CPI-U, was 10.3 percent and the rate based on CPI-U-RS, which assumes that all future methodological changes were already in effect, was 9.5 percent, or 0.8 percentage points lower. Another way to put it is to say that the 0.8 percentage point difference measures the consolidated effects of all methodological changes made in 1982, 1983, 1984, and so on through 1999, when the last of the changes in question came into force.
Next look at the line for 1982, when CPI-U inflation was 6.2 percent and CPI-U-RS inflation was 6 percent. (Because of the way in which he handles rounding errors, Williams records that as a difference of -0.1 percent rather than -0.2 percent. Rounding errors do not play any significant role in my story, but if you want, you can work out the unrounded numbers based on Columns B and D.) That difference reflects the impact of all methodological changes made in 1983, 1984, and so on, that is, the effect of all changes except those that came into effect in the year 1982 itself. If we add the -0.1 from Column F for 1982 to the -0.8 from the same column for 1981 to get -0.9, as Williams does, we are double counting the effects of the changes made in 1983 and later, which, by assumption, were already included in the -0.8 for 1981 and are included again in the -0.1 for 1982.
The problem of double counting is even more obvious if we look at a pair of years like 1988 and 1989, when the Column F differential between CPI-U-RS inflation and CPI-U inflation does not change. (Again, beware of rounding errors in Williams’ table.) By assumption, the differential changes each time there is a change in methodology. The fact that it does not change from 1988 to 1989 implies that no new methodological changes came into effect in 1989. Why, then, does Williams increase his estimate of the cumulative impact of the changes (Column G) from -1.0 percentage points in 1988 to -1.5 percentage points in 1989? To do so double counts the effects of changes subsequent to 1989, which were already included in the numbers for 1988.
Williams’ error becomes more obvious still if we replace the numbers in his table with the simplified numerical example that I gave earlier. In that example, there are just two methodological changes, one in 1985, which reduces the reported rate of inflation by 0.8 percentage points, and another in 1990, which reduces the reported inflation rate by a further 0.5 percentage points. I also add two columns to the table. Column H gives the hypothetical unadjusted rate of inflation, that is, the rate that would be reported if none of the methodological changes had been made. For simplicity, we assume that rate to be a steady 4 percent. Column I gives the true cumulative inflation differential, that is, the difference between the officially reported CPI-U rate (Column E) and the assumed adjusted rate (Column H). Here is the full numerical example:
Several things that the complexity of Williams’ original table obscures become crystal clear in this example:
- Each methodological change has the effect of reducing the inflation rate by a given number of percentage points in the year when it first comes into use, and for all years going forward.
- The total permanent reduction in the inflation rate (true cumulative inflation shortfall) is equal to the sum of the effects of the individual methodological changes, with each change counted only once. (In this example, there are two changes, which slow inflation by 0.8 and 0.5 percentage points, respectively, making a total reduction in the reported inflation rate of -1.3 percentage points.)
- The cumulative inflation reduction can never be greater than the differential given in Column F for years before any of the methodological changes come into use. The simplest way to determine the impact of the entire series of methodological changes is to look at the difference between CPI-U-RS and CPI-U for such a year. In this example, that would mean any year before 1985, when the differential is 1.3 percentage points.
- The method that Williams uses in his original table, a running total of the difference between the CPI-U-RS and CPI-U inflation rates (Column G), seriously overstates the true cumulative inflation shortfall because it counts each change more than once.
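The double counting can be made concrete with a small simulation of the simplified example, using its hypothetical dates and magnitudes rather than real BLS data:

```python
# Two hypothetical methodological changes: a 0.8-point reduction
# starting in 1985 and a 0.5-point reduction starting in 1990,
# against a steady 4 percent "unadjusted" inflation rate.
years = list(range(1980, 1995))
changes = {1985: 0.8, 1990: 0.5}   # start year -> reduction in reported rate
unadjusted = 4.0

# CPI-U analogue: each change lowers the reported rate from its start year on.
cpi_u = {y: unadjusted - sum(d for start, d in changes.items() if y >= start)
         for y in years}

# CPI-U-RS analogue: all changes applied retroactively back to 1980.
cpi_u_rs = {y: unadjusted - sum(changes.values()) for y in years}

# The Column F analogue: the year-by-year inflation differential.
diff = {y: cpi_u_rs[y] - cpi_u[y] for y in years}

running_total = sum(diff.values())       # Williams-style Column G total
true_shortfall = -sum(changes.values())  # each change counted once: -1.3
```

The running total comes out to -9.0 percentage points, even though the true permanent effect of the two changes is only -1.3, because the pre-1985 differential of -1.3 is counted five times over and the -0.5 differential five more.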
The Bottom Line
The bottom line here is that Williams’ use of a running total of inflation differentials to compute a “cumulative inflation shortfall” of 5.1 percentage points exaggerates the true impact of the methodological changes made by the BLS. A better way to estimate the cumulative inflation shortfall would be to look at the differences between CPI-U-RS and CPI-U before 1983, the year when the BLS implemented the first of the changes that it incorporates in the CPI-U-RS series. That approach is not quite as precise when we use real-world numbers, as Williams does in his original table. As explained earlier, the actual data include statistical noise caused by changes in weighting and in relative price changes among sectors. However, we can approximate the true inflation shortfall by averaging the numbers for 1981 and 1982 from Williams’ table, giving an estimate of -0.45 percentage points.
As mentioned above, Williams’ ShadowStats inflation series incorporates an additional 2.0 percentage point correction to reflect methodological changes that are not captured in the CPI-U-RS series. I would like to examine that number more carefully in a future post, but for the sake of discussion, we can let it stand. If so, it appears to me that, based entirely on Williams’ own data, methods, and assumptions, the adjustment for the ShadowStats inflation series should be about 2.45 percentage points below CPI-U, rather than the 7 percentage points he uses.
In my view, Williams’ alternative measure of inflation would be more convincing if he were to make this correction. It would also be less likely to feed the anti-government paranoia of some of his followers, who allege that the BLS falsifies source data and manipulates reported indicators in the way that Argentina and some other countries appear to do.
It is worth noting that Williams himself makes no such claim. He is a fierce critic of BLS methodology, but he acknowledges that the agency follows its own published methods. He argues that the BLS has adopted methods that produce low inflation indicators, but not for motives of short-term partisan politics. Rather, he sees the choice of methodology as driven by a longstanding, bipartisan desire to reduce the cost of Social Security and other inflation-indexed transfer payments. It would be hard to deny that he is at least partly right about that motivation.
In closing, I would like to thank Williams for taking the time to make detailed comments on an earlier draft of this post. Our private dialog has not yet led to a complete resolution of the issues I have raised here, but I hope that he will address them in future public comments. The search for alternative inflation indicators goes on.
First of two parts. Follow this link for Part 2, which discusses the ShadowStats alternate unemployment indicator.