LIA $FACTS$ for February 1997

Lies, Damned Lies and Statistics
or
Surveys and Snake Oil

You come to Japan and realize that you must be better than competitive in order to obtain useful employees; most of the good ones are not willing to give up the security of lifetime employment and the power accumulation that comes with tenure-based promotions. But, what is competitive?

Many companies take the consulting route, paying several million yen for the details of one of the big firms' surveys. In North America, companies have relied on such approaches with good results.

Why wouldn't it be valid here?

First, few surveys, regardless of where they are done, support any reliable statistical inference about the universe they claim to sample. Most surveys reveal facts only about the actual people who answered the questions, and that assumes they told the truth!

When the surveys are expressed that way (40% of our respondents said...) they may be misleading, but they have not reached snake oil status.

They become snake oil when someone suggests that the results can be statistically extrapolated to your company or, worse, when the survey makers actually use statistical terms to describe reliability (we are 97% sure of this!). If your pay or benefits systems are based on such a survey, you know only where you stand compared to the surveyed population, and no one else!

The reality is, a survey is either statistically reliable before the results are ever tabulated, or it is not statistically reliable at all. Surveys are statistically reliable by design, not by result. Calculating means, quartiles, deciles, standard deviations, correlation coefficients and confidence intervals does not make them statistically reliable.

Only random samples have statistical inferential power. Not knowing how a sample was taken, or from whom, does not make it random. A "random" group of people returning their questionnaires is not a random sample; it is a sample of convenience, as in, "we didn't want to work any harder."
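The difference is easy to demonstrate. Below is a minimal sketch with invented numbers (a population in which 30% hold some opinion, where we assume, purely for illustration, that holders of the opinion are three times as likely to return a questionnaire):

```python
import random

random.seed(0)

# Hypothetical population of 100,000 workers; 30% favor merit-based pay.
# (All figures here are invented for this illustration.)
population = [1] * 30_000 + [0] * 70_000
random.shuffle(population)

# A true random sample: every member has an equal chance of selection.
random_sample = random.sample(population, 1_000)

# A "sample of convenience": whoever bothers to return the questionnaire.
# Assume supporters respond 30% of the time, others only 10%.
convenience_sample = [
    x for x in population
    if random.random() < (0.30 if x else 0.10)
]

true_rate = sum(population) / len(population)
random_rate = sum(random_sample) / len(random_sample)
convenience_rate = sum(convenience_sample) / len(convenience_sample)

print(f"True rate:          {true_rate:.0%}")
print(f"Random sample:      {random_rate:.0%}")
print(f"Convenience sample: {convenience_rate:.0%}")
```

Under these invented response rates, the convenience sample substantially overstates support even though every percentage is computed correctly. No amount of arithmetic downstream repairs a biased selection upstream.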

We don't mean to cast a pall on all surveys. The organizations that do sampling for television and political preferences have generally applied proper statistical techniques to their work. Check their professional staff: odds are you'll find an army of real statisticians, not just expensive clerks who have run SPSS or some other statistical software package.

A great example is a recent survey done by the Nikkei, mentioned by Dr. Ohtaki of Mercer during a chamber presentation. The inference drawn was that lifetime employment and seniority-based pay systems are on their way out. The survey found that 42% or so of the respondents felt merit-based pay systems were better than tenure-based ones, and only about 9% thought merit-based systems were bad. In the Q&A, it was revealed that the survey was too small to provide correlations by age or pay, variables that most would agree significantly influence a person's opinion about merit-based pay. The fact was, the survey was based on only about 100 individuals from a select group of companies: a sample of convenience.
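Even setting the convenience-sample problem aside, about 100 respondents buys very little precision. A quick sketch of the textbook margin-of-error calculation for a proportion, under the charitable (and here false) assumption that the sample was random:

```python
import math

# Figures as reported above: roughly 42% of about 100 respondents.
p, n = 0.42, 100

# Textbook standard error and 95% margin of error for a proportion.
# Meaningful ONLY if the sample was drawn at random, which this one
# was not.
se = math.sqrt(p * (1 - p) / n)
margin_95 = 1.96 * se  # about +/- 10 percentage points

print(f"95% margin of error: +/- {margin_95:.1%}")
```

So even a properly random sample of 100 could only place the true figure somewhere between roughly 32% and 52%, and without randomness even that claim cannot be made.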

Our second example is the triennial survey of ACCJ member employment practices sponsored by the Human Resources Committee.

First, the 1993 edition is called "Employment Practices of American Companies in Japan." Is it? The second bullet says "ACCJ member companies." Does a company become "American" merely because it has joined the ACCJ?

Second, the same bullet says "The purpose of the survey is to identify major changes underway in employment practices among ACCJ member companies in Japan, driven by the bubble collapse and recession." This sounds statistical, despite the fact that the survey is merely one of convenience ("and our survey said..."). The companies responding were not a random sample and, thus, the survey cannot be relied on for inferences about the entire membership of the ACCJ. Further, there is not a one-to-one correspondence between prior survey respondents and the current ones. The changes attributed to the "bubble collapse," etc., may not be changes at all; they may merely reflect who was able to answer the survey questions. The ACCJ has a broad spectrum of members; some respond to surveys, some don't, and some sometimes do.

What effect did the bubble collapse and recession have on employment practices? We only know that the bubble collapsed before the survey was conducted and, even if the survey had been statistically valid, we could find no questions in the survey directly dealing with the impact of the bubble collapse. The reality is, despite the stated purpose of the survey, it produced no information about the impact of the bubble collapse, and its results can only be statistically attributed to those who answered the survey, not the membership at large.

So, what's the answer? The professional standing and experience of the people you hire, as always. Whether you're hiring employees or consultants, you need to ask the hard questions. Steer clear of any consultant who offers an existing survey as proof of what you should be doing; something is wrong.

When an existing survey is offered as evidence of what should be done, check the survey participants. If you feel your company compares with them and you also feel the respondents both understood the questions and were forthright, have your consultant use it as a guide.

Without such analysis and judgment, using such a survey as a proxy for an individualized survey is done at your company's peril; you may be right, you may be wrong, with 100% confidence either way.


Copyright 1997 Lohmann International Associates

You have been reading the online edition of LIA $FACTS$, the monthly fax newsletter of Lohmann International Associates. For further information, please visit our home page on the Web or send e-mail to Les Lohmann.

