change the *relative* performance of the detectors under test. That is, if Detector A tested better than Detector B in the air test, would you expect Detector B to test better than Detector A using ferrite, a bucket of dirt, etc., all other things being equal?
If not, then the air test is just as good as anything else in determining the *relative* performance of the two detectors under ideal conditions. That's not to say they would perform the same in the field; there are far too many potentially confounding variables at work. Even finding a more suitable medium to "bench test" a detector is a questionable pursuit, in my opinion, given the real-world presence of confounding variables (e.g., soil type, method of discrimination, presence of trash, ground balancing accuracy, etc.).
And what kind of dirt would you use, and how would you standardize it? Also, there are all kinds of ferrite materials with a wide range of permeabilities. Which do you use? And does it make a difference? Again, in my opinion, these options introduce even more confounding variables (if not standardized) than air, which is relatively uniform with respect to the variables critical to detector performance.
I guess this debate will go on forever. The bottom line is to use whatever method makes the most sense to you logically, and to withhold any real-world performance judgments until you get the detector out in the field.
HH...Thomas