Hearing-Aid Speech Quality Index

    Hearing-Aid Speech Quality Index (HASQI) is a measure of audio quality originally designed to evaluate speech quality for hearing-aid users. It has also been shown to gauge audio quality for non-speech sounds and for listeners without a hearing loss.

    Background

    While audio quality can be gauged through perceptual listening tests, such testing is time-consuming to undertake. Consequently, a number of metrics have been developed to allow audio quality to be evaluated without the need for human listeners. Standardized examples from telephony include PESQ, POLQA, PEVQ and PEAQ. HASQI was originally developed by Kates and Arehart to evaluate how the distortions introduced by hearing aids degrade quality; they published a revised version of the index in 2014.

    Kressner et al. tested a speech corpus different from the dataset used to develop HASQI and showed that the index generalizes well for listeners without a hearing loss, with performance comparable to PESQ. Kendrick et al. showed that HASQI can grade the audio quality of music and of everyday geophonic, biophonic, and anthrophonic sounds, although their study used a more limited set of degradations.

    Method

    HASQI and its 2014 revision are double-ended methods: both a clean reference and the degraded signal are required for evaluation. The index attempts to capture the effects of noise, nonlinear distortion, linear filtering and spectral changes by computing differences or correlations between key audio features. Short-time signal envelopes are examined to quantify the degradation caused by noise and nonlinear distortion, and long-time signal envelopes to quantify the effects of linear filtering. Version 2 of HASQI adds a model that captures some aspects of the peripheral auditory system for both normal-hearing and hearing-impaired listeners.
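
    To make the double-ended idea concrete, here is a minimal Python sketch of an envelope-based comparison. It is not the HASQI algorithm: the Hilbert-envelope feature, the frame sizes, and the use of a single correlation as the quality score are illustrative assumptions only.

```python
# Toy double-ended envelope comparison (NOT the HASQI algorithm).
# It only illustrates the general idea of correlating envelope features
# between a clean reference and a degraded signal.
import numpy as np
from scipy.signal import hilbert


def short_time_envelope(x, frame_len=256, hop=128):
    """Hilbert magnitude envelope averaged over short frames."""
    env = np.abs(hilbert(x))
    n_frames = 1 + (len(env) - frame_len) // hop
    return np.array([env[i * hop: i * hop + frame_len].mean() for i in range(n_frames)])


def toy_envelope_similarity(reference, degraded):
    """Correlate short-time envelopes of reference and degraded signals.

    Returns a value in [0, 1]: 1 means the degraded envelope tracks the
    reference perfectly; lower values indicate envelope distortion.
    """
    n = min(len(reference), len(degraded))
    env_ref = short_time_envelope(reference[:n])
    env_deg = short_time_envelope(degraded[:n])
    r = np.corrcoef(env_ref, env_deg)[0, 1]
    return max(0.0, r)  # clip negative correlations to 0


if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    # Amplitude-modulated tone as a stand-in for a reference signal.
    clean = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(len(clean))
    print(f"envelope similarity: {toy_envelope_similarity(clean, noisy):.3f}")
```

    HASQI itself combines several such terms computed through an auditory model; the sketch is only meant to show that the degraded signal is scored against a clean reference.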

    Kendrick et al. developed a blind (single-ended) method, bHASQI, using machine learning. This enables the audio quality to be evaluated from just the degraded signal without needing the clean reference.
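
    bHASQI's actual features and learning model are not reproduced here; the sketch below only illustrates the general single-ended approach, in which a regressor is trained to map features extracted from the degraded signal alone onto scores produced by a double-ended metric. The feature set, the random-forest model and the placeholder training data are all assumptions made for this example.

```python
# Toy single-ended (blind) quality predictor (NOT bHASQI).
# A regressor learns to map features of the degraded signal alone onto
# quality scores that would otherwise require a clean reference.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def degraded_signal_features(x, frame_len=256):
    """A few crude frame statistics standing in for real perceptual features."""
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.array([rms.mean(), rms.std(), zcr.mean(), zcr.std()])


# Placeholder training set: degraded signals paired with double-ended scores.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(16000) for _ in range(50)]
scores = rng.uniform(0.0, 1.0, size=50)  # stand-in quality labels

X = np.stack([degraded_signal_features(s) for s in signals])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, scores)

# At evaluation time only the degraded signal is needed:
new_degraded = rng.standard_normal(16000)
predicted = model.predict(degraded_signal_features(new_degraded)[None, :])
print(f"predicted quality: {predicted[0]:.3f}")
```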
