
Speaker Equivalent SINAD Discussion

OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
Amazing work with the spreadsheet calculations, thanks! I think you’ve misunderstood what Olive meant in his LFX description though. In the full AES paper (scroll down for the full paper - much easier to follow than the patent application), he says:


The patent application says “The sound power curve (SP) may be used for the calculation” instead of “is used for the calculation” (my emphasis) in the AES paper. I believe ‘calculation’ in both refers to the -6 dB point of the sound power curve only, and ‘may’ was used in the patent application as it’s describing techniques that may be used to calculate predicted preference ratings. (You’ll notice he uses ‘may’ instead of ‘is’ throughout much of the patent application; this is likely down to legal wording, which has to be very technically precise in a patent.) So, I’m pretty sure you need to use the mean level of the listening window between 300 Hz and 10 kHz for the reference level of the LFX calculation, as stated in the actual AES paper (and the LFX equations in both the paper and patent application).
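For reference, the definitions at issue, as I transcribe them from the paper (the model coefficients are the ones cited there):

$$\mathrm{LFX} = \log_{10}(x_{SP})$$

where x_SP is the first frequency below 300 Hz of the sound power (SP) curve that is -6 dB relative to y_LW, the mean level of the listening window (LW) between 300 Hz and 10 kHz. This feeds the overall model:

$$\text{Preference Rating} = 12.69 - 2.49\,\mathrm{NBD_{ON}} - 2.73\,\mathrm{NBD_{PIR}} - 4.31\,\mathrm{LFX} + 2.32\,\mathrm{SM_{PIR}}$$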


Definitely not the last option, as Olive defines it as “the first frequency x_SP below 300 Hz” (not ‘nearest’ or ‘closest’), so it must be on the same side of the -6 dB point every time. I would say the closest Hz less than the -6 dB point is correct, as the next part of the definition, “that is -6 dB relative to the mean level y_LW”, should I believe be read as ‘at least 6 dB less than’, i.e. the ‘first’ frequency you ‘hit’ moving down the SP curve from 300 Hz that satisfies the condition of being at least 6 dB less than y_LW. Otherwise, taking the closest Hz greater than the -6 dB point would mean the low extension frequency does not meet the condition of being -6 dB relative to y_LW, which would be incorrect according to the LFX definition and formula presented.
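To make the competing readings concrete, here is a quick sketch (illustrative only, not the spreadsheet's formulas; freq and sp are hypothetical ascending arrays of the SP curve's frequencies and levels):

```python
# Sketch of the competing readings. freq: ascending measurement frequencies
# in Hz; sp: sound power levels in dB; y_lw: mean listening-window level
# over 300 Hz - 10 kHz.
import numpy as np

def lfx_frequency(freq, sp, y_lw, method="less_than"):
    target = y_lw - 6.0
    # Points below 300 Hz that have crossed the -6 dB threshold.
    hit = np.flatnonzero((freq < 300) & (sp <= target))
    i = hit.max()  # the first point 'hit' moving DOWN the curve from 300 Hz
    if method == "less_than":
        return freq[i]      # e.g. 38.8184 Hz in the pair discussed below
    return freq[i + 1]      # 'greater than': the adjacent point that has NOT
                            # yet reached -6 dB (e.g. 39.5508 Hz)
```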



Olive states in the NBD definition:

(my emphasis)

This would suggest 11,712.90 Hz should be used as the upper bound as it is within the range 100 Hz-12 kHz, whereas 12,126 Hz is outside this range. The former is also more consistent with the lower bound chosen (101.807 Hz), which is also within, not outside, the prescribed range.
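In code terms, that reading just selects band edges from the measurement points falling inside the range (a sketch, assuming a hypothetical ascending freq array; not the spreadsheet's formula):

```python
# Take the NBD band edges from measurement points that fall inside
# 100 Hz - 12 kHz rather than outside it.
import numpy as np

def nbd_edges(freq, lo=100.0, hi=12000.0):
    inside = freq[(freq >= lo) & (freq <= hi)]
    return float(inside.min()), float(inside.max())

# With the measurement points above, this picks 101.807 Hz and 11,712.90 Hz,
# never the out-of-range 12,126 Hz.
```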

Having said this, are we certain Olive is referring to the lower and upper bounds, and not the center frequencies of the lowest and highest bands, as I previously suggested? I have more reason to think this after seeing some excerpts from Part 1 of his paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results. I don’t have access to the full paper, but found excerpts on this blog by a Chinese acoustic engineer. Here, in reference to this chart from the paper, he quotes Olive as saying:


So at least in these listening tests, Olive has defined bands by their center frequencies, not their lower and upper bounds. This might suggest he did the same in the second paper when devising the preference formula, and so “bands between 100 Hz-12 kHz” actually means ‘bands with center frequencies between 100 Hz-12 kHz’. Maybe @amirm can clarify this with Sean Olive?
Amir should indeed clarify this with Sean.

My issue is thus:

For -6 dB, what if it’s 39.5 Hz? Wouldn’t using 39.5508 Hz be more accurate than 38.8184 Hz?

The same for the frequencies (ignoring whether it’s the center): I chose the start/end points based on what is closest to what is described in the paper.

For LFX, I calculated both.
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,466
Location
Australia
Toole is wrong if he claims that. You become more tolerant and more critical.

Some experts are quoted as though they are infallible. Independent validation is not always available. Circumstances outside of their research scope may be significant. However, their work may be the best we have until we learn more.
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
I added a Radar chart for what I assume are the best/worst scores possible.

EDIT: Added Sensitivity; anyone know how to convert this to a ranking? I have 75/105 as min/max, but since it should use a skewed bell curve, I'm not sure how to do that.
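One guess at what that could look like (all parameters here are hypothetical placeholders, not anything from Olive's work): score sensitivity 0-1 via a skew-normal bump over the 75-105 dB span.

```python
# Hypothetical 'skewed bell curve' ranking for sensitivity.
import numpy as np
from scipy.stats import skewnorm

def sensitivity_score(sens_db, lo=75.0, hi=105.0, skew=4.0):
    dist = skewnorm(a=skew, loc=(lo + hi) / 2, scale=(hi - lo) / 6)
    grid = np.linspace(lo, hi, 1001)
    peak = dist.pdf(grid).max()        # normalize so the best value scores 1
    return float(dist.pdf(sens_db) / peak)

print(sensitivity_score(86.0))
```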
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
OK

@amirm, I believe I'm done (except for the Hz discrepancy); it should auto-calculate when the first tab (spin data) and second tab (PIR data) are pasted. It was a pain in the f:mad:king ass to get the auto aspect going (a whole bunch of indexes and matching), especially since the Revel had fewer Spinorama measurement points than the NHT but the same number for PIR; it should now work regardless of the number of measurement points.

https://docs.google.com/spreadsheets/d/1kcUunOI-EKHh3yX61F9VqgjRt3bMfUTMTbHZUmFBPUQ/edit?usp=sharing
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,775
Likes
242,466
Location
Seattle Area
OK

I believe I'm done (except for the Hz discrepancy); it should auto-calculate when the first tab (spin data) and second tab (PIR data) are pasted. It was a pain in the f:mad:king ass to get the auto aspect going (a whole bunch of indexes and matching), especially since the Revel had fewer Spinorama measurement points than the NHT but the same number for PIR.

https://docs.google.com/spreadsheets/d/1kcUunOI-EKHh3yX61F9VqgjRt3bMfUTMTbHZUmFBPUQ/edit?usp=sharing
Thanks a bunch for your efforts. That spreadsheet though opens with no data in it!
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
Hmmm. How do I do that? I have the data in the clipboard from the txt file but can't paste it in.
You can download it as an Excel sheet.
It's currently set so that only I can edit it (for obvious reasons, as the link is public); if you wish to edit it in Sheets, I would need your email address to grant editing permission.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080
Hmmm. How do I do that? I have the data in the clipboard from the txt file but can't paste it in.

In Google Sheets you can just go to File > Make a copy, then you can edit your own copy within Google Sheets.
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
Thanks. Downloaded and pasted the data into Excel. Am I supposed to leave the headers in there or just the data?
Damn, I just tried it and some of the formatting got screwed up (slightly different formula notation); I'll PM you a link to a duplicate sheet with edit permissions.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080
Thanks. Downloaded and pasted the data into Excel. Am I supposed to leave the headers in there or just the data?

I strongly suggest editing the final formula in cell B1 of the Score tab. As I've said before, in its current form it's incorrect according to Sean Olive's description and formula for the LFX variable in his paper: (LFX!B4) should be replaced with (LFX!D4) in the final formula, i.e. the 'closest Hz less than' method should be used on the LFX tab, not the 'closest Hz greater than' method. Here's why:

For -6 dB, what if it’s 39.5 Hz? Wouldn’t using 39.5508 Hz be more accurate than 38.8184 Hz?

It may be more ‘accurate’ mathematically, but that’s irrelevant if that’s not the method Olive used in his AES paper. (He says ‘first frequency x_SP’ for a reason, not ‘nearest’ or ‘closest’.) We have to match Olive’s calculation method as closely as possible, as the formula correlating these variables with actual preference scores was based on this method; we have no idea how changing the method would change the correlation. And as I said, I think it’s pretty obvious from Olive’s description of LFX that he was using the ‘closest Hz less than’ method. As the formula currently stands in the spreadsheet, using ‘closest Hz greater than’ translates to a ‘-6 dB’ low extension frequency that has not actually reached -6 dB, but sits at a higher amplitude. Using ‘closest Hz less than’ gives you a frequency that has crossed the -6 dB threshold, and so satisfies the criteria of Olive’s formula and description of LFX.

With these corrections, I arrive at a score of 4.86 (up from 4.73) for the Revel C52 (used full-range on its own – I presume for the second score, ‘ignoring LFX’ means ‘score if used with an ideal subwoofer with -6 dB point at 14.5 Hz’, as we discussed previously). The NHT speaker’s score is then also adjusted to 2.39 (up from 2.31 in the original spreadsheet).

As you can see from the effect it has on the scores, this small change in the formula is not inconsequential. I believe 'closest Hz less than' is very likely the correct method, but maybe you could confirm this with Sean Olive to be sure?

@MZKM is there any real reason you're sticking with the 'closest Hz greater than' method?
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
I strongly suggest editing the final formula in cell B1 of the Score tab. As I've said before, in its current form it's incorrect according to Sean Olive's description and formula for the LFX variable in his paper: (LFX!B4) should be replaced with (LFX!D4) in the final formula, i.e. the 'closest Hz less than' method should be used on the LFX tab, not the 'closest Hz greater than' method. Here's why:



It may be more ‘accurate’ mathematically, but that’s irrelevant if that’s not the method Olive used in his AES paper. (He says ‘first frequency x_SP’ for a reason, not ‘nearest’ or ‘closest’.) We have to match Olive’s calculation method as closely as possible, as the formula correlating these variables with actual preference scores was based on this method; we have no idea how changing the method would change the correlation. And as I said, I think it’s pretty obvious from Olive’s description of LFX that he was using the ‘closest Hz less than’ method. As the formula currently stands in the spreadsheet, using ‘closest Hz greater than’ translates to a ‘-6 dB’ low extension frequency that has not reached -6 dB, but sits at a higher amplitude. Using ‘closest Hz less than’ gives you a frequency that has crossed the -6 dB threshold, and so satisfies the criteria of Olive’s formula and description of LFX.

With these corrections, I arrive at a score of 4.86 (up from 4.73) for the Revel C52 (used full-range on its own – I presume for the second score, ‘ignoring LFX’ means ‘score if used with an ideal subwoofer with -6 dB point at 14.5 Hz’, as we discussed previously). The NHT speaker’s score is then also adjusted to 2.39 (up from 2.31 in the original spreadsheet).

As you can see from the effect it has on the scores, this small change in the formula is not inconsequential. I believe 'closest Hz less than' is very likely the correct method, but maybe you could confirm this with Sean Olive to be sure?

@MZKM is there any real reason you're sticking with the 'closest Hz greater than' method?
I’m not sticking with it; I was just leaving it as-is until Sean Olive provides clarification. I used 3 formulas to anticipate this.

I’ll change it now though to get you off my back :).
 

daverosenthal

Member
Forum Donor
Joined
Jan 5, 2020
Messages
40
Likes
104
Wrote a script in Python tonight (a rough sketch follows the list) that:
- Loads an ASR-PIR-data-style text file
- Compensates for the Harman target response
- Computes a reference level via an RMS average of the 150 Hz to 6 kHz band
- Finds the approximate LFX point (-6 dB)
- Defines an operating region (1/2 octave above the LFX point to 1/2 octave below 20 kHz, i.e. ~14 kHz)
- Fits a linear regression to the frequency response trend in the operating region
- Computes a warm/cool tilt factor
- Computes something proportional to SM_PIR
- Produces a graph
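In outline, the pipeline looks something like this (a simplified reconstruction, not the script verbatim; it assumes a two-column text file of frequency in Hz and SPL in dB, a hypothetical filename, and stubs out the Harman compensation):

```python
import numpy as np

def load_pir(path):
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]  # frequency, SPL

def analyze(freq, spl):
    # (Harman target compensation would subtract the target curve here.)

    # Reference level: RMS average over the 150 Hz - 6 kHz band.
    band = (freq >= 150) & (freq <= 6000)
    ref = np.sqrt(np.mean(spl[band] ** 2))

    # Approximate LFX: first frequency moving down from 300 Hz that sits
    # at least 6 dB below the reference level.
    hit = (freq < 300) & (spl <= ref - 6)
    lfx = freq[hit].max() if hit.any() else freq.min()

    # Operating region: 1/2 octave above LFX to 1/2 octave below 20 kHz (~14 kHz).
    region = (freq >= lfx * 2**0.5) & (freq <= 20000 / 2**0.5)

    # Linear regression of SPL vs. log-frequency: slope = warm/cool tilt
    # in dB per decade.
    x = np.log10(freq[region])
    slope, _ = np.polyfit(x, spl[region], 1)

    # Something proportional to SM_PIR: r^2 of the same regression.
    sm = np.corrcoef(x, spl[region])[0, 1] ** 2

    return {"lfx_hz": float(lfx), "tilt_db_per_decade": float(slope),
            "sm_proxy": float(sm)}

freq, spl = load_pir("nht_pir.txt")  # hypothetical filename
print(analyze(freq, spl))
```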

It provides several key factors in how people perceive a speaker in one graph:
  • How low it goes
  • Whether it is "warm or cool"
  • Smoothness of response
I ran it on both the NHT and the Revel (did I miss any that @amirm posted files for?). It is trivial to run for any new speakers measured:

[Attached graphs: NHT.png, Revel.png]
Next step is looking through the Olive preference patent to see if there are enough details to match his model exactly. From what I've seen at a quick glance, I'm skeptical that the coefficients he uses will work with the data we have (differences in the number of measurements per octave, etc.), but I'll look.

Let me know what you think. (I think we should set perfect/excellent/good/ok/poor targets for each number a la the 4 buckets for SINAD!)
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080
I’m not sticking to it, I was just leaving it as-is until clarification from Sean Olive is given. I used 3 formulas to anticipate this.

I’ll change it now though to get you off my back :).

Thanks ;) I didn't mean to come across as aggressive or anything, I just think it's important we get this right, and I honestly have good reason to believe the 'less than' method is what Olive intended, whereas I haven't thought of or seen any good arguments for using the 'greater than' method. Once again your creation of the spreadsheet is massively appreciated, it will be of huge help to the community!
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,253
Likes
11,572
Location
Land O’ Lakes, FL
Stole idea from @BYRTT and added some graphs.

Link for NHT

Sheets can't fill the area between two lines (it fills under every line), so that can't be done nicely.
Sheets also can't plot DI curves on a different scale, so every start point is 0 dB (I am working on this).

It's a mess of tabs, but once I know the automation works nicely, the calculation tabs can be hidden.

[Attached: SPINORAMA.png, Horizontal Directivity.png, Horizontal Directivity (Normalized).png]
They can also be exported as vector PDFs (infinite resolution).

I was never given vertical off-axis data, @amirm could you provide this for the NHT?
 

daverosenthal

Member
Forum Donor
Joined
Jan 5, 2020
Messages
40
Likes
104
Wrote a script in Python tonight that:
...
Next step is looking through the Olive preference patent to see if there are enough details to match his model exactly. From what I've seen at a quick glance, I'm skeptical that the coefficients he uses will work with the data we have (differences in the number of measurements per octave, etc.), but I'll look.

Guys, I'm reading the Olive AES paper and it's really weird. Yes, I know this is heresy. The SM_* "smoothness" model feature appears to use the Pearson regression correlation coefficient 'r' in a way that's, well, charitably, counterintuitive. To wit: a speaker with a highly flat and smooth (i.e. desirable) frequency response would have a very low "smoothness" by this measure, whereas a speaker with a bumpy response and a distinct frequency-dependent tilt would score highly. The stats intuition here is that r is high when the variation in dB is well explained by the variation in frequency. In layman's terms, given a fixed amount of natural "wobble" in the frequency response, the "smoothness" number will be much higher if the response has a non-flat slope. Weird.
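A toy example makes the effect plain (illustrative numbers only, not real speaker data): the same 0.5 dB of random ripple gives a near-zero r^2 on a flat response and a near-one r^2 on a tilted one.

```python
import numpy as np

rng = np.random.default_rng(0)
logf = np.linspace(2.0, 4.2, 200)          # log10 of ~100 Hz to ~16 kHz
wobble = rng.normal(0.0, 0.5, logf.size)   # fixed amount of natural 'wobble'

flat = wobble                              # flat but wobbly response
tilted = -4.0 * (logf - 2.0) + wobble      # same wobble on a downward tilt

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

print("flat   r^2:", r_squared(logf, flat))    # ~0 -> 'not smooth'
print("tilted r^2:", r_squared(logf, tilted))  # ~1 -> 'smooth'
```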

(In the patent, Olive notices the effect of this in the regression: "Variables that have small correlations with preference are smoothness (SM) and slope (SL) when applied to the ON and LW curves", but doesn't seem to realize the cause: on-axis (ON) and listening window (LW) frequency responses tend to be flat, not downward sloping, so the 'r' coefficient disappears.)

The final model is fit from many features with mutual correlation, so the use of this weird SM feature doesn't invalidate the model; it just means we shouldn't think of it as measuring smoothness(!). My guess is that the more fundamental effect of SM_PIR in the final score is steering preference toward speakers with a gradually downward-tilting response. Finally, the NBD_* feature captures a similar concept but appears to be better engineered, which is perhaps why the "smoothness" factor plays only a small role in the final model.

Forgetting for a second about producing one number to rule them all... At this point we know how to measure a few key numerical attributes of a speaker in a way that 1) can be sorted best to worst and 2) very likely relates to listener preference. These are:
  1. Low frequency extension (lower is better)
  2. Narrow-band frequency response variations (less is better. Per Olive: "the narrow band deviation (NBD) metric yields some of the highest correlations with preference...")
  3. Overall slope close to the Harman in-room target (closer is better)
Are there others that fall in this category?
 