
Interesting article by Mitchco.

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,123
Likes
12,314
Location
London

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,696
Likes
37,432
Yes, that was a nicely done and different review of an audio show. Mitchco seems pretty sensible about such things.
 

hvbias

Addicted to Fun and Learning
Joined
Apr 28, 2016
Messages
577
Likes
419
Location
US
Has Toole written anything about audibility of time coherence in loudspeakers? Or any other legit publication would be fine.

Someone straighten me out if I'm not remembering correctly: weren't the Revel Salon Ultima (or Ultima 2) the speakers that did very well in blind tests, even though their time-domain response is nothing to write home about?
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Thanks. Good read. He is confusing what MQA means by "time smearing." It is not the same as "time coherent speakers."
I think he is just taking Meridian at their word when they say:
...timing sensitivity evolved to help us thrive in forest environments. As sounds reach us, microseconds apart, our brains build a 3D sonic ‘picture’. Similarly, at a live performance, we're able to position individual instruments. It’s why live music feels so powerful and a recording seems so flat in comparison. MQA captures this timing information from the master, so it feels like you’re at the performance.
This is pointless if you then go and replay MQA over speakers which are ridiculously "time smearing" through not being time coherent. I agree with him.
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
I think he is just taking Meridian at their word when they say:

"timing sensitivity evolved to help us thrive in forest environments. As sounds reach us, microseconds apart, our brains build a 3D sonic ‘picture’. Similarly, at a live performance, we're able to position individual instruments. It’s why live music feels so powerful and a recording seems so flat in comparison. MQA captures this timing information from the master, so it feels like you’re at the performance."

Live vs. studio comparisons: might as well compare apples to grapefruit.

"Powerful" doesn't suffer compression well; which timing can't restore ...
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Powerful doesn't suffer compression well; which timing can't restore ...
It need not be compression - some recordings are not compressed. But perception of "powerful" may be reduced if transient edges are staggered in their timing.

Don't get me wrong: I think the MQA stuff is all BS. The timing of CD is accurate to nanoseconds for music signals. But if people are going to fall for the hype, it needs pointing out that their speakers are far worse for timing than their digital replay chains.
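The "accurate to nanoseconds" claim is easy to check numerically. A quick sketch (the tone frequency, amplitude, and 10 ns shift are my own illustrative choices): a 10 ns time shift of a near-full-scale 1 kHz tone changes the 16-bit sample values by more than one LSB, so the shift really is encoded in the samples.

```python
import numpy as np

fs = 44_100          # CD sample rate, Hz
f = 1_000.0          # 1 kHz test tone
amp = 30_000         # near full scale for 16-bit audio
delay = 10e-9        # a 10 nanosecond time shift

t = np.arange(fs) / fs                 # one second of samples
a = amp * np.sin(2 * np.pi * f * t)
b = amp * np.sin(2 * np.pi * f * (t - delay))

# Largest sample-value change caused by the 10 ns shift, in 16-bit LSBs
print(np.max(np.abs(a - b)))           # ~1.9 LSB: the shift survives quantisation
```

The 22.7 µs sample period is therefore not the timing-resolution limit for bandlimited signals.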
 

Phelonious Ponk

Addicted to Fun and Learning
Joined
Feb 26, 2016
Messages
859
Likes
215
I wonder about this:

"Integrating subwoofer(s) without sounding boomy (i.e. peaky room modes) takes considerable skill and effort. I feel this is the primary reason why I did not see any subs on exhibit."

But there were full-range speakers. I know we talk a lot about multiple subs and DSP. Have we concluded that deep bass coming from two fixed but uncontrolled points below the midrange and high-frequency drivers is actually easier to integrate than a single sub, even without room correction? Do we think we have to move the bass around, implement it at multiple points and equalize, but if it's attached to the bottom of full-range speakers then it's all good? Or at least better? I would think that a sub or two, at the very least, allows you to position the mains for imaging and move the bass around until it sounds best, subjectively. It seems to me that if all the talk about bass/room problems is accurate, and I believe it is, then full-range speakers with fixed positions for deep bass are simply an inferior implementation compared to isolating the bass into independently movable boxes. Am I missing something? Or is this just another example of audiophiles talking themselves into the sonic superiority of the thing they want to look at: big, impressive speakers?

Tim
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,597
Location
Seattle Area
Has Toole written anything about audibility of time coherence in loudspeakers? Or any other legit publication would be fine.

Someone straighten me out if I'm not remembering correctly, weren't the Revel Salon Ultima (or Ultima 2) the speakers that did very well in blind tests, yet their time domain response is nothing to write home about.
I have asked Kevin Voeks (Director of Product Development at Harman/Revel) why they don't compensate for the time differential. He pointed to Dr. Vanderkooy's work and conversations with him, which showed the differential simply is not audible with music in listening rooms (it can be, to some extent, with headphones or in an anechoic chamber). He said that making the speakers time-aligned would require lower-order crossovers, which would create other problems that were definitely audible.

I have not spoken to Dr. Toole about it but here are some quotes from his book that says the same thing:

"This [results of blind listening tests] suggests that we like flat amplitude
spectra and we don’t like resonances, but we tolerate general phase shift, meaning
that waveform fidelity [both amplitude and phase] is not a requirement.

[...]
Loudspeaker transducers, woofers, midranges, and tweeters behave as
minimum-phase devices within their operating frequency ranges (i.e., the phase
response is calculable from the amplitude response). This means that if the
frequency response is smooth, so is the phase response, and as a result, the
impulse response is unblemished by ringing. When multiple transducers are
combined into a system, the correspondence between amplitude and phase is
modified in the crossover frequency ranges because the transducers are at different
points in space. There are propagation path-length differences to different
measuring/listening points. Delays are non-minimum-phase phenomena. In the
crossover regions, where multiple transducers are radiating, the outputs can
combine in many different ways depending on the orientation of the microphone
or listener to the loudspeaker.

The result is that if one chooses to design a loudspeaker system that
has linear phase, there will be only a very limited range of positions in space
over which it will apply. This constraint can be accommodated for the
direct sound from a loudspeaker, but even a single reflection destroys the relationship.

As has been seen throughout Part One of this book, in all circumstances,
from concert halls to sound reproduction in homes, listeners at best
like or at worst are not deterred by normal reflections in small rooms. Therefore,
it seems that (1) because of reflections in the recording environment there
is little possibility of phase integrity in the recorded signal, (2) there are challenges
in designing loudspeakers that can deliver a signal with phase integrity
over a large angular range, and (3) there is no hope of it reaching a listener in
a normally reflective room. All is not lost, though, because two ears and a brain
seem not to care.

Many investigators over many years have attempted to determine whether
phase shift mattered to sound quality (e.g., Greenfield and Hawksford, 1990;
Hansen and Madsen, 1974a, 1974b; Lipshitz et al., 1982; Van Keulen, 1991).
In every case, it has been shown that if it is audible, it is a subtle effect,
most easily heard through headphones or in an anechoic chamber, using carefully
chosen or contrived signals.
There is quite general agreement that with
music reproduced through loudspeakers in normally reflective rooms, phase
shift is substantially or completely inaudible.
When it has been audible as a
difference, when it is switched in and out, it is not clear that listeners had a
preference.

Others looked at the audibility of group delay (Bilsen and Kievits, 1989; Deer
et al., 1985; Flanagan et al., 2005; Krauss, 1990) and found that the detection
threshold is in the range 1.6 to 2 ms, and more in reflective spaces.
Lipshitz et al. (1982) conclude, “All of the effects described can reasonably
be classified as subtle. We are not, in our present state of knowledge, advocating
that phase linear transducers are a requirement for high-quality sound reproduction.”
Greenfield and Hawksford (1990) observe that phase effects in rooms are
“very subtle effects indeed,” and seem mostly to be spatial rather than timbral.
As to whether phase corrections are needed, without a phase correct recording
process, any listener opinions are of personal preference, not the recognition of
“accurate” reproduction.

In the design of loudspeaker systems, knowing the phase behavior of transducers
is critical to the successful merging of acoustical outputs from multiple
drivers in the crossover regions. Beyond that, it appears to be unimportant.


So I say the research is pretty conclusive and there is really no "problem" here to fix. It is a case of chasing measurements that please the eye instead of the ears. :) The impulse measurements are devoid of room reflections and use only one microphone instead of two ears and a brain. Big difference in acoustics.
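Toole's point that for a minimum-phase transducer "the phase response is calculable from the amplitude response" can be illustrated with the standard real-cepstrum construction. A toy sketch (my own example, not from the book), using an FIR filter known to be minimum phase:

```python
import numpy as np

def min_phase_from_magnitude(mag):
    """Recover the minimum-phase frequency response implied by a
    magnitude response, via the real cepstrum (the Hilbert-transform
    relation between log magnitude and phase)."""
    n = len(mag)
    cep = np.fft.ifft(np.log(mag))   # real cepstrum of log|H|
    fold = np.zeros(n)
    fold[0] = 1.0                    # keep the zeroth quefrency
    fold[1:n // 2] = 2.0             # fold negative quefrencies forward
    fold[n // 2] = 1.0               # Nyquist term kept once
    return np.exp(np.fft.fft(fold * cep))

# A known minimum-phase FIR: h = [1, 0.5] has its zero at z = -0.5,
# inside the unit circle, so its phase is fully determined by |H|.
n = 4096
h = np.zeros(n)
h[0], h[1] = 1.0, 0.5
H = np.fft.fft(h)

H_rec = min_phase_from_magnitude(np.abs(H))
err = np.max(np.abs(np.angle(H) - np.angle(H_rec)))
print(err)   # essentially zero: phase recovered from the magnitude alone
```

This is exactly why a smooth frequency response implies a clean impulse response for a single driver; it is the crossover and driver offsets that break the minimum-phase relationship.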
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,597
Location
Seattle Area
Have we concluded that deep bass coming from two fixed but uncontrolled points below the midrange and high-frequency drivers is actually easier to integrate than a single sub, even without room correction?
Not at all. It is true that two speakers placed at precise locations provide mode cancellation in that dimension. But outside of that, being able to move a sub anywhere with DSP provides a superior solution. To wit, when we used to demonstrate the KEF LS50, we added a Revel sub with DSP correction and put it in a hidden corner. This was a small room -- much smaller than hotel rooms -- and the combined sound was superb. Excellent tight bass and lots of it, seemingly coming from the bookshelf LS50s.

Subs and DSP are not used at shows because most exhibitors don't make subs and don't want to use someone else's product. And in general, high-end audiophiles shop for a pair of speakers, not subs and DSP.
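For anyone wanting to see why sub placement matters so much, the mode frequencies of a rigid rectangular room follow directly from its dimensions. A sketch with made-up dimensions (5 m × 4 m × 2.7 m; real rooms are rarely this ideal):

```python
import itertools
import math

C = 343.0                 # speed of sound, m/s
ROOM = (5.0, 4.0, 2.7)    # hypothetical room dimensions, m

def mode_freqs(dims, max_order=2):
    """Standing-wave (mode) frequencies of a rigid rectangular room:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    freqs = []
    for n in itertools.product(range(max_order + 1), repeat=3):
        if n == (0, 0, 0):
            continue
        f = (C / 2) * math.sqrt(sum((ni / li) ** 2 for ni, li in zip(n, dims)))
        freqs.append((round(f, 1), n))
    return sorted(freqs)

for f, n in mode_freqs(ROOM)[:6]:
    print(f, n)   # 34.3 (1,0,0), 42.9 (0,1,0), 54.9 (1,1,0), ...
```

The lowest modes cluster well below 100 Hz, which is exactly the range a movable, DSP-corrected sub can be positioned to avoid exciting.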
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I have spoken to Kevin Voeks ... [snip: full post quoted above]

I disagree! :)

As in much of audio, a plausible set of words and selective experiments can be put together to justify any convenient stance or belief. I have my own stance and beliefs, based on the idea that we hear the direct sound and 'put aside' the reflections using our highly developed hearing systems. It is very convenient for speaker designers to ignore this, but now that we have DSP, I think there is no excuse.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,597
Location
Seattle Area
As in much of audio, a plausible set of words and selective experiments can be put together to justify any convenient stance or belief.
Not so. The paper by Vanderkooy sets out to correct the notion that phase delay is inaudible: it presents listening tests demonstrating that it *is* audible. It is only in the final conclusions section that he says the conditions under which phase was audible are hard to replicate in real rooms.

[attachments: three excerpts from the Vanderkooy paper]
The paper was published in the peer-reviewed journal of the AES, where Dr. Lipshitz and Dr. Vanderkooy are both luminaries. 99% of the paper sets out to *prove that phase delay is audible.* It is just that with music, loudspeakers, and rooms, it becomes nearly inaudible. And rightly, they point it out:

[attachment: excerpt from the paper's conclusions]


It could not be more balanced than this.
 

hvbias

Addicted to Fun and Learning
Joined
Apr 28, 2016
Messages
577
Likes
419
Location
US
I have spoken to Kevin Voeks ... [snip: full post quoted above]

Thanks very much Amir! Reassuring, since the vast majority of us can't go to fully active crossovers.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Not so. The paper by Vanderkooy ... [snip: full post quoted above]

Well, I don't know the exact details of these listening tests that purport to show that phase and timing don't matter (and I'm not sure I draw the same conclusions as you, Amir, from that report you show), but a question I would ask is: do the experiments compare 'a certain type of phase shift' against 'no phase shift', or 'a certain type of phase shift' against 'a different type of phase shift'? It's like comparing a scene through rippled glass versus clear glass, or rippled glass versus some other rippled glass. I may not be very good at distinguishing between different types of rippled glass, but I might recognise the clear glass a lot better. I suspect that many of these experiments compare two types of rippled glass, having been carried out before meaningful correction was possible, and may have been done in mono, with small speakers with inadequate bass, or with full-range drivers and their ilk.

At the end of the day no one knows whether listening tests actually work anyway: consciously listening for differences may make us less sensitive to hearing them - how can you prove whether this is true or not? So I am happy to remain a member of a very small group of people who correct their speakers (as much as is possible) for phase and timing on the grounds that it is common sense - and it makes speaker design a hell of a lot easier anyway. I like the sound it gives, even if you think it is all in my mind. :)
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,696
Likes
37,432
snip

At the end of the day no one knows whether listening tests actually work anyway: consciously listening for differences may make us less sensitive to hearing them - how can you prove whether this is true or not?

Well listening tests do allow one to pick out some very small differences. Consistently, repeatedly, and reliably. So yes they demonstrably work.

Consciously listening making us less sensitive to hearing differences is less clear-cut, though what little evidence is available suggests the reverse of what you are pondering. It would also conflict with how well the other senses perform under testing. Further, the basic performance envelope mapped out by the various listening tests in the research fits pretty nicely with the physical parameters of our hearing mechanism. The areas of possible disagreement there are small.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Well listening tests do allow one to pick out some very small differences. Consistently, repeatedly, and reliably. So yes they demonstrably work.
But this is a circular argument, because the categorisation of "very small" is derived from the smallest differences a human can pick out repeatedly and reliably while consciously listening for them.

We are dealing with human consciousness. It is entirely possible that humans register even smaller differences, or different categories of distortion, say, when not consciously listening for them - but we shall never know, because conducting the experiment affects the observation. Simply repeating an extract of music may kill our ability to hear the subtlest differences. How can you prove it either way?

It doesn't worry me - I just correct my speakers as much as possible on the assumption that it can only be a good thing.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,597
Location
Seattle Area
Well, I don't know the exact details of these listening tests that purport to show that phase and timing doesn't matter (and I'm not sure I draw the same conclusions as you, Amir, on that report you show), but a question I would ask is: do the experiments compare 'a certain type of phase shift' against 'no phase shift', or is it 'a certain type of phase shift' against 'a different type of phase shift'? Like comparing a scene through rippled glass versus clear glass, or rippled glass versus some other rippled glass. I may not be very good at distinguishing between different types of rippled glass, but I might recognise the clear glass a lot better. I suspect that many of these experiments are comparing two types of rippled glass, having been carried out before meaningful correction was possible, and may have been in mono, with small speakers with inadequate bass, or full range drivers and their ilk.
Listening tests in peer-reviewed papers have all the details one would want. Not having read them is no excuse for suggesting they may not be sound!

That aside, it is easy to intuit why the conclusions are what they are. First, the tests actually show that linear phase delays *are* audible in headphone listening, where the room is eliminated. When we add the room, we by definition introduce linear phase delay. A high frequency, with its shorter wavelength, will experience far more phase distortion than a mid frequency with a longer wavelength. These phase "errors" exist, for the same reason, in a live session, in the recording/mastering room, and in your own room. In other words, you have three sets of them at pretty high levels. The speaker contributes only one component to an existing mix of unknown phase delay (which differs from track to track in your music!). That is one of the reasons the speaker-created phase delay gets lost when music is playing in a room, as opposed to headphones or an anechoic chamber.

The other reason is what I explain here: http://audiosciencereview.com/forum/index.php?threads/perceptual-effects-of-room-reflections.13/

In a nutshell, you have two ears, not one. Instrumentation showing delays from speaker drivers represents one ear, not two. Your ears are at different distances from the speaker and hence experience different levels of phase shift. Your brain is constantly experiencing this in everyday life. Evolutionary traits have caused our brain to set aside this blurring of timing, to use your word, and get to the meat of the matter, which is what the source of sound is. Without this, you would constantly hear echoes as the two ears receive signals at different times, causing us to go mad!

As humans we are trained to think of things like timing in terms of precision, clocks, etc. So it is natural to put huge importance on them. Our hearing and brain, though, are not clocks and don't work the way we think they do. So it is important to put aside our intuition and go by what the research says. And the research says achieving phase accuracy is not important, and it is a futile goal anyway, as nothing in your recording is phase accurate. If the talent didn't hear it in a phase-accurate manner, why would you insist that you do?
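To put rough numbers on the two-ears point, a sketch assuming a 15 cm path-length difference to the far ear (the true figure varies with head size and source angle): the interaural delay is around 0.4 ms, which at high frequencies amounts to several full cycles of phase offset that the brain routinely sets aside.

```python
C = 343.0          # speed of sound, m/s
PATH_DIFF = 0.15   # assumed extra path length to the far ear, m

itd = PATH_DIFF / C          # interaural time difference, seconds
print(round(itd * 1e6))      # 437 microseconds

for f in (100, 1_000, 3_000, 10_000):
    cycles = itd * f         # interaural phase offset, in cycles
    print(f, round(cycles, 2))
# at 3 kHz the two ears already differ by more than one full cycle
```

So a speaker's fraction-of-a-millisecond driver offset is buried under interaural differences of the same order that we experience, and discount, all day long.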
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,597
Location
Seattle Area
We are dealing with human consciousness. It is entirely possible that humans register even smaller differences, or different categories of distortion, say, when not consciously listening for them - but we shall never know, because conducting the experiment affects the observation.
If you are not conscious of it, it doesn't enter into your enjoyment of what you are listening to. If the differences are subconscious, you could never describe them as fidelity differences, for example.

As to your second point, we use references and controls. In this case, we could hear differences with headphones, but not in rooms. The former says the test is capable of finding the electrical effect. The second says in real rooms, that effect is not very important.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
I've never worried about phase issues - yes, I can hear the difference if I deliberately test for such things, but it doesn't get in the way of having the sound 'work' or not - other areas in the quality of the sound are far, far more important, IME.
 