Just to add to what Cosmik was saying (which I agree with): suppose you were to publish this test in a scientific journal and wanted to show that cables can make a difference. What you could do is administer the test several times, getting no result, until lo and behold! - after one of the tests you're finally able to crunch your numbers so as to arrive at statistical significance showing a small cable effect. Here's, for example, a psychologist who was able to show statistically that people can look into the future, published in a top journal (not kidding):
http://psycnet.apa.org/journals/psp/100/3/407/
Of course, that experiment has never been replicated. It was a combination of bad experimental design and stupid use of statistics. But that's the problem with those kinds of studies that rely on statistical significance in a simplistic manner.
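To see why "test repeatedly until something comes out significant" is a trap, here's a quick hypothetical simulation (my own sketch, not from the article above): two groups are drawn from the exact same distribution - i.e. there is no real cable effect at all - yet a lab that keeps re-running the test up to 10 times, stopping as soon as a t-test dips below p < 0.05, will "find" an effect far more often than the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_until_significant(n_attempts=10, n=30, alpha=0.05):
    """Re-run the listening test until one try hits significance."""
    for _ in range(n_attempts):
        # Both groups come from the SAME distribution: no real effect.
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # "discovered" an effect that isn't there
    return False

n_labs = 1000
false_positives = sum(run_until_significant() for _ in range(n_labs))
rate = false_positives / n_labs
print(f"Labs reporting a 'significant' cable effect: {rate:.0%}")
```

With 10 tries at the 5% level, roughly 1 - 0.95^10 ≈ 40% of labs report a significant effect purely by chance. (Group size, number of attempts, and alpha are arbitrary choices here; the inflation happens for any values.)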
(btw, the publication of that article led to some serious soul-searching among psychologists about the statistical methods they accept as valid)
I don't mean that experiments are worthless. But they're hard to do right, and if the effects one finds are very small, they're probably not that important. If one finds big, substantial effects in an experiment, and they can be replicated by other researchers using other approaches, then we're talking.