Roger Skoff, for the first time ever, actually advocates blind testing!
Any of you who have followed my writings, either here or in other publications, should already have a pretty good idea of where I stand on the issues of measurement and double-blind testing as they relate to High End audio. Even those of you who've never read anything else by me, but who know that I founded XLO and was its designer until I sold the company, should have it pretty well figured out: XLO is, after all, an audio cable manufacturer, and like all such companies it has, since time immemorial, been under constant attack by those (including the Audio Engineering Society and any number of audio industry and other professionals) who roundly condemn the entire High End audio cable industry as frauds, charlatans, and purveyors of snake oil and voodoo.
With the cable industry, that position is easy to take: After all, cables are just wire, aren't they? What could possibly be so special about them? And if you buy a receiver, a CD or DVD player, or some other audio or source component, at least at a certain price point, it's likely that the manufacturer will even give you the cable for it free! That must be good enough, mustn't it? Otherwise, why would the manufacturer not just (at least tacitly) recommend it, but actually give you one? And have you seen the prices for all those fancy cables the cable companies want you to buy? Hundreds or even thousands of dollars for just a few feet of wire, when everybody knows you can get wire at any hardware store for just pennies a foot!
Every one of those arguments has been made more times than we can remember, by more people than we can count, and most of them can be easily dismissed. There is one point, though, that's hard to deny and that seems to lend credibility to at least some of the others: That's simply that the cable industry seems always to respond to challenges with anecdotes or theory, when what its critics want is "scientific proof" in the form of either significant measured data or (the darling of the doubters) double-blind testing.
It's not just the cable companies that are challenged, either: At one time or another, any number of other High End audio products (amplifiers, anti-resonance devices, AC power conditioners, a seemingly infinite number of "tweaks" of various kinds, and currently, even CD and DVD sampling rates) have come under the same kinds of attack and their proponents have been equally unable to give their critics the sort of measured or double-blind-tested proof that they want to see.
Philosophically, my position on this issue has always been very simple: if, as is clearly and obviously true, tens or hundreds of thousands of people believe that there are audible differences between (or produced by) certain products, and believe that those differences are great enough for them to spend hundreds or thousands or even tens of thousands of dollars to buy them, then, for the people able to perceive them, those differences must certainly be there. It's sort of like the old saying that "You can fool some of the people some of the time, and most of the people most of the time, but you can't fool all of the people all of the time." Or if not that, perhaps this: If not all of the people in a particular place see a particular thing at a particular time, maybe some of those people were blind or had their eyes closed or were looking in the wrong direction when it happened. The significant thing is not that some people didn't see it, but that everybody else did!
The fact of it is, though, that more than just a philosophical defense is possible: For one thing, as I've pointed out many times, both conversationally and in writing, the range and effective resolution of human hearing are vastly greater than those of any scientific measuring device that I know of. With its minimum audible level (its "threshold of perception") around 30dB absolute and its maximum tolerable level (its "threshold of pain") at about 130dB, the human ear has, to put it into the kind of terms that "scientific" types like, an effective single-scale readout encompassing 100dB, which, if you'll remember that the decibel scale is logarithmic, is an intensity range of 10⁰ to 10¹⁰, or, stating it differently, a full ten orders of magnitude!
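For anyone who wants to check that arithmetic, the standard conversion from a decibel difference to a power (intensity) ratio, applied to the 100dB span just described, works out like this:

$$\frac{I_{\text{pain}}}{I_{\text{threshold}}} \;=\; 10^{\Delta L / 10} \;=\; 10^{100/10} \;=\; 10^{10}$$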
Do you know of any test equipment in any field or discipline that has that kind of single-scale range? If not, and if people can hear something that even the best test equipment can't corroborate, then who cares about "scientific" testing? The problem is not with what people can hear, but with what their equipment can test!
Another point is that, as I've been writing for years, much of the difference we hear between "suspect" products like cables or cable lifters or amplifiers is due to inductive or capacitive discharge effects: the release of out-of-phase energy into the signal path as an electromagnetic field collapses or a capacitor discharges in response to a change of signal phase. Both of these, while they can in unusual circumstances produce out-of-phase artifacts, are usually "seen" simply as cancellations of low-level signal energy at or immediately adjacent to the "zero" line of an oscilloscope trace, resulting in nothing visible other than a slight lessening of total peak-to-peak energy. Even if researchers were to direct their attention to the zero line (as they almost never do) instead of to the more obvious aspects of the trace, there would be nothing there for them to see or document.
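For those who like to see the underlying physics, the textbook relations for the energy stored in a circuit's inductance and capacitance, and the exponential decay that governs how that stored energy is released through a resistance R, are these (standard circuit theory, nothing exotic, and no claim here about how audible the result is in any particular system):

$$E_L = \tfrac{1}{2} L I^2, \qquad E_C = \tfrac{1}{2} C V^2$$

$$i_L(t) = I_0\, e^{-t R / L}, \qquad v_C(t) = V_0\, e^{-t / (R C)}$$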
As to double-blind testing as it's usually applied: it does offer real possibilities for meaningful testing of any number of things, except, unfortunately, High End audio.
The problem is not with the test itself, which, in its simplest form, is nothing more than a tester who doesn't know which is which asking a testee, who also doesn't know, to identify, by the use of his senses or the response of his body, which of two or more different things has been presented to him. A basic rule in any kind of comparative testing is that as many complicating factors as possible must be removed and that, to the extent possible, the only differences between the things being compared must lie in the things themselves.
Testing pharmaceuticals is a perfect use for double-blind testing. From a single pool of testees, all having similar significant characteristics and all having the same medical condition, two test groups are drawn at random. Then a staff of testers who aren't told what they are giving to which group (so they can't unwittingly give away any information and spoil the test) give one group a test medication and give the other group an identical-looking placebo. Finally, the two groups are tracked to see if there are statistically significant differences over time: whether, for example, the test drug produces a greater rate of "cures" of whatever illness or condition than the placebo does.
It's all very simple: one difference (medication or placebo) allows for deciding between two possible outcomes (a statistically significant difference in cure rate or not). Easy. It's sort of like the famous "Pepsi Challenge," except that there each testee is given two unmarked products and asked which he prefers. In either case, though, there's only one easily identifiable difference involved and under test.
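To make the pharmaceutical version concrete, here is a minimal sketch of the kind of arithmetic such a trial ends in. The group sizes and cure counts below are invented purely for illustration, and a real trial would use more careful statistics than this simple two-proportion z-test:

```python
import math

def two_proportion_z_test(cured_a, n_a, cured_b, n_b):
    """Return (z, two-sided p-value) for the difference between two cure rates."""
    p_a, p_b = cured_a / n_a, cured_b / n_b
    p_pool = (cured_a + cured_b) / (n_a + n_b)           # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value
    return z, p_value

# Hypothetical trial: 120 testees per group, drawn at random from one pool.
z, p = two_proportion_z_test(cured_a=78, n_a=120,        # test medication group
                             cured_b=55, n_b=120)        # placebo group
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p says the difference is unlikely to be chance
```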
With audio, isolating and testing only one single factor is virtually impossible: Let's suppose, for example, that you wanted to see if different CD players sound different. How would you do it? People can't listen to CD players directly, so you'd have to play your player-under-test through something. How about an amplifier? Okay, to make sure that it's only the CD player we're listening to, let's be certain to always use the same amplifier with all players. Now, because you still can't just listen to an amplifier, we're going to need something to actually listen to: Speakers? Headphones? Will the ones we choose have sufficient resolving capability to show whatever differences there might be? If we choose speakers, where are we going to play them? In what kind of a room? With what kind of acoustics? For how many people? And if there's going to be more than one person, how many of them are going to be in the "sweet spot"? And what about the others? What will they hear? And will their observations still be of significant value?
Even if we somehow solve all of those problems and come up with a way that each and every one of our testees will hear exactly the same thing under exactly the same circumstances, will each of them have exactly the same level of hearing acuity? Or be in exactly the same mood and state of health, with exactly the same level of interest and receptivity to what he's hearing?
Speaking of interest, it's a known fact that, even with the same music playing, different people will listen to different things: Some will listen to the bass; some to the treble; some will, if it's a song, listen to the words, and others to just the instrumental background. Still others will just listen to the sound: how well it's recorded; its dynamics; its transient "attack and decay"; whether and how well it images and soundstages. In short, even when a bunch of people are listening to the same recording, it's likely that no two of them are listening to the same thing.
What that means, of course, is that, at least for audio, double-blind testing has no value because it simply can't be done: there will always be differences not only between the products but also between the people testing them, and that ideal state of just one single isolated variable can never be achieved.
At least that's what I thought until just very recently, when I remembered a sales technique that XLO used to recommend to its dealers years ago: "When the sale has been made (we told them), and the customer says 'Yes, I'll take it' (a preamp, for example), instead of just wrapping it up and taking the customer's money, say 'Thank you, but do you mind if I try just one more thing?' The customer will always agree, and when he does, without telling him what you are doing, reach behind the preamp and change the cables from whatever brand was hooked up to any other brand (from XLO, for example, to 'Brand X,' or the other way around). Then play the very last thing you played for him one more time, and ask if he hears any difference. If he says 'No,' thank him and sell him his preamp; obviously he has wooden ears, and any further selling attempts will be fruitless. If, on the other hand, he says 'Yes,' tell him that what you did was to change the cables, ask him which ones he preferred, and then sell him a new set of cables (whichever ones he liked best) to go with his new preamp!"
Isn't that the perfect example of (single) blind testing? Same system; same room; same customer; same music; and only one single isolated and unknown variable: the cables!
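And if anyone wanted to take that showroom demo one step further and make it repeatable, the same one-variable swap could simply be run several times, with the listener calling "A" or "B" blind each time; a run of correct calls quickly becomes hard to explain as guessing. Here is a minimal sketch of that bookkeeping; the trial count and score are invented purely for illustration:

```python
from math import comb

def chance_probability(correct, trials):
    """Probability of getting at least `correct` right out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical run: same customer, same system, same music, and the dealer
# either swaps or doesn't swap the cables behind the preamp on each of 10 trials.
trials, correct = 10, 9          # invented numbers, purely for illustration
print(f"Chance of guessing {correct}/{trials}: {chance_probability(correct, trials):.3f}")
```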
It meets every challenge; I would wonder why I didn't think of it sooner, except that I obviously did!