The real danger in blind testing speakers comes from two things, imo:
1. Comparing just two or three speakers exaggerates their good and bad traits, and those differences quickly become recognizable across songs, which biases the rest of the test toward whatever you preferred early on.
2. Assuming that whatever sounds most impressive in short stints will also be preferred over time, without switching to something different enough to create doubt.
The first point is addressed by Toole in his book and was acknowledged by Kevin Voecks of Harman in one of the interviews posted in a thread here the other day.
The second is a difficult one and I'm not sure how we can reliably test for that.
What's the solution for #1? I don't have access to Toole's book, but I should get it if it covers blind testing, since I'd like to someday run a big test involving Revel vs Genelec vs others.
I could include four speakers, for example, but then what's the best methodology for testing them? For example: play the same track on "Speaker ID" #1 through #10 (with the actual speakers behind these IDs randomly sampled from the set of, say, 4 speakers, so we can measure false positives/negatives), and have listeners write down their ratings?
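To make that concrete, here's a rough sketch of how I'd generate the trial schedule and score listener consistency. This is just my own illustration, not anything from Toole or Harman; the function names and the "rating spread across repeats" metric are my assumptions:

```python
import random

def make_schedule(speakers, n_trials, seed=None):
    """Build a randomized presentation order: each trial plays one speaker,
    with every speaker guaranteed to appear at least twice so the repeats
    double as a false-positive/negative probe (does the listener rate the
    SAME speaker the same way twice?)."""
    rng = random.Random(seed)
    assert n_trials >= 2 * len(speakers), "need at least 2 trials per speaker"
    # Two guaranteed appearances each, then fill the remainder at random.
    schedule = list(speakers) * 2
    schedule += [rng.choice(speakers) for _ in range(n_trials - len(schedule))]
    rng.shuffle(schedule)
    return schedule

def consistency(schedule, ratings):
    """Mean rating spread (max - min) within each speaker's repeated trials.
    0 means the listener rated every repeat of a speaker identically;
    larger values mean less self-consistent ratings."""
    by_speaker = {}
    for spk, r in zip(schedule, ratings):
        by_speaker.setdefault(spk, []).append(r)
    spreads = [max(rs) - min(rs) for rs in by_speaker.values() if len(rs) > 1]
    return sum(spreads) / len(spreads)
```

Usage would be something like `make_schedule(["A", "B", "C", "D"], 10, seed=1)` to get the 10 "Speaker ID" slots, have the listener rate each trial (say 1 to 10), then feed the schedule and ratings to `consistency()` to see how stable their judgments are across blind repeats of the same speaker.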
My only worry is that the sheer number of trials would overwhelm the listener. Even if preference differences are heard, the listener may struggle to produce a signal much more reliable than "this speaker sounds better/worse/the same as the previous one you played."