Speech acquisition
Speech acquisition focuses on the development of vocal, acoustic and oral language by a child. This includes motor planning and execution, pronunciation, and phonological and articulatory patterns (as opposed to content and grammar, which belong to language).
Speech consists of an organized set of sounds, or phonemes, used to convey meaning, while language is an arbitrary association of symbols used according to prescribed rules to convey meaning. While grammatical and syntactic learning can be seen as part of language acquisition, speech acquisition covers the development of speech perception and speech production over the first years of a child's life. Several models have been proposed to explain the norms of speech sound, or phoneme, acquisition in children.
Development of speech perception
Sensory learning concerning acoustic speech signals begins during pregnancy. Hepper and Shahidullah (1992) described the progression of fetal response to different pure-tone frequencies. They suggested that fetuses respond to 500 Hertz (Hz) at 19 weeks' gestation, to 250 Hz and 500 Hz at 27 weeks' gestation, and to 250, 500, 1000, and 3000 Hz between 33 and 35 weeks' gestation. Lasky and Williams (2005) suggested that fetuses could respond to pure-tone stimuli of 500 Hz as early as 16 weeks.
The newborn is already capable of discerning many phonetic contrasts. This capability may be innate. Speech perception becomes language-specific for vowels at around 6 months, for sound combinations at around 9 months and for language-specific consonants at around 11 months.
Infants detect typical word stress patterns, and use stress to identify words around the age of 8 months.
As an infant grows into a child, their ability to discriminate between speech sounds typically increases. Rvachew (2007) described three developmental stages through which a child comes to recognize adult-like phonological and articulatory representations of sounds. In the first stage, the child is generally unaware of a phonological contrast and produces the contrasting sounds in ways that are acoustically and perceptually similar. In the second stage, the child is aware of the phonological contrast and produces acoustically different variants, but the differences remain imperceptible to adult listeners. Finally, in the third stage, the child is aware of the phonological contrast and produces different sounds that are perceptually and acoustically comparable to adult productions.
It is suggested that a child's perceptual capabilities continue to develop for many years. Hazan and Barrett (2000) suggest that this development can continue into late childhood: 6- to 12-year-old children showed increasing mastery in discriminating synthesized differences in place, manner, and voicing of speech sounds, without yet reaching adult-like accuracy.
Typologies of infant vocalization
Infants are born with the ability to vocalize, most notably through crying. As they grow and develop, infants add more sounds to their inventory. There are two primary typologies of infant vocalization. Typology 1, the Stark Assessment of Early Vocal Development, consists of five phases:
- Reflexive (0 to 2 months of age) – crying, fussing, and vegetative sounds
- Control of phonation (1 to 4 months of age) – consonant-like sounds, clicks, and raspberry sounds
- Expansion (3 to 8 months of age) – isolated vowels, two or more vowels in a row, and squeals
- Basic canonical syllables (5 to 10 months of age) – consonant-vowel (CV) combinations, often repeated (e.g. ba ba ba ba)
- Advanced forms (9 to 18 months of age) – complex combinations of consonants and vowels (e.g. CVC) and jargon
Typology 2, Oller's typology of infant phonations, consists of two primary phases with several substages: non-speech-like vocalizations and speech-like vocalizations. Non-speech-like vocalizations include (a) vegetative sounds, such as burping, and (b) fixed vocal signals, such as crying or laughing. Speech-like vocalizations consist of (a) quasi-vowels, (b) primitive articulation, (c) an expansion stage, and (d) canonical babbling.
Speech sound normative data
Knowing when a speech sound should be accurately produced helps parents and professionals determine when a child may have an articulation disorder. Two traditional methods have been used to compare a child's articulation of speech sounds to chronological age. The first compares the number of correct responses on a standardized articulation test with the normative data for a given age on the same test, which shows how well a child produces sounds compared to same-aged peers. The second compares an individual sound a child produces with the developmental norms for that sound. The second method can be difficult given the differing normative data sets and other factors that affect typical speech development. Many norms are based on the age at which a majority of children (75% or 90%, depending on the study) accurately produce a sound. Using the results from Sander (1972), Templin (1957), and Wellman, Case, Mengert, & Bradbury (1931), the American Speech-Language-Hearing Association suggests the following ages of mastery:
- by age 3: /p, m, h, n, w, b/
- by age 4: /k, g, d, f, j/ (the "y" sound)
- by age 6: /t, ŋ, r, l/
- by age 7: /tʃ, ʃ, dʒ, θ/
- by age 8: /s, z, v, ð, ʒ/
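The age-of-mastery figures above amount to a simple lookup table. As a minimal, purely illustrative sketch (the data structure and function names below are hypothetical, not part of any ASHA tool), the second comparison method can be expressed as checking each error sound against the age by which it is expected:

```python
# Illustrative sketch only: the ages of mastery listed above as a lookup
# table, with helpers for flagging error sounds that are already expected
# at a given age. Names and structure are hypothetical.

AGE_OF_MASTERY = {
    3: ["p", "m", "h", "n", "w", "b"],
    4: ["k", "g", "d", "f", "j"],
    6: ["t", "ŋ", "r", "l"],
    7: ["tʃ", "ʃ", "dʒ", "θ"],
    8: ["s", "z", "v", "ð", "ʒ"],
}

def expected_sounds(age_years: int) -> set[str]:
    """Sounds a majority of children have mastered by the given age."""
    return {sound
            for age, sounds in AGE_OF_MASTERY.items()
            if age <= age_years
            for sound in sounds}

def sounds_of_concern(age_years: int, sounds_in_error: set[str]) -> set[str]:
    """Error sounds that are already expected at this age (the second method above)."""
    return sounds_in_error & expected_sounds(age_years)

# Example: a 5-year-old misarticulating /w/, /f/ and /s/.
# /w/ and /f/ are expected well before age 5; /s/ is not expected until age 8.
print(sounds_of_concern(5, {"w", "f", "s"}))  # -> {'w', 'f'} (set order may vary)
```

A sketch like this makes the age thresholding behind the second method explicit, but it does not capture the caveats above about differing normative data sets.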
Early, Middle, and Late 8s
Shriberg (1993) proposed a model of speech sound acquisition known as the Early, Middle, and Late 8, based on 64 children with speech delay, ages 3 to 6 years. Shriberg proposed that there were three stages of phoneme development. Using a profile of "consonant mastery", he developed the following groupings (a simple lookup over these groups is sketched after the list):
- Early 8 – /m, b, j, n, w, d, p, h/
- Middle 8 – /t, ŋ, k, g, f, v, tʃ, dʒ/
- Late 8 – /ʃ, θ, s, z, ð, l, r, ʒ/
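Purely as an illustration (not part of Shriberg's published procedure), the three groups can be treated as a lookup table for summarizing which of a child's error sounds fall into the Early, Middle, or Late 8; the names in the sketch below are hypothetical:

```python
# Illustrative sketch only: tallying error sounds against Shriberg's
# Early, Middle, and Late 8 groups. Names and structure are hypothetical.
from collections import Counter

EIGHTS = {
    "early":  ["m", "b", "j", "n", "w", "d", "p", "h"],
    "middle": ["t", "ŋ", "k", "g", "f", "v", "tʃ", "dʒ"],
    "late":   ["ʃ", "θ", "s", "z", "ð", "l", "r", "ʒ"],
}
GROUP_OF = {sound: group for group, sounds in EIGHTS.items() for sound in sounds}

def error_profile(sounds_in_error):
    """Count how many of the error sounds fall into each group."""
    return Counter(GROUP_OF[s] for s in sounds_in_error if s in GROUP_OF)

# Example: errors on /w/, /k/, /s/, and /r/ span all three groups.
print(error_profile(["w", "k", "s", "r"]))  # e.g. Counter({'late': 2, 'early': 1, 'middle': 1})
```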
See also
- Auditory processing disorder
- Developmental verbal dyspraxia
- Infantile speech
- Origin of speech
- Speech and language pathology
- Speech processing
- Speech repetition
Further reading
- Friederici, Angela D.; Oberecker, Regine; Brauer, Jens (2011). "Neurophysiological preconditions of syntax acquisition". Psychological Research. 76 (2): 204–11. doi:10.1007/s00426-011-0357-0. PMID 21706312. S2CID 12213347.
- Guenther, Frank H. (1995). "Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production". Psychological Review. 102 (3): 594–621. CiteSeerX 10.1.1.67.3016. doi:10.1037/0033-295X.102.3.594. PMID 7624456. S2CID 10405448.
- Perani, D.; Saccuman, M. C.; Scifo, P.; Anwander, A.; Spada, D.; Baldoli, C.; Poloniato, A.; Lohmann, G.; Friederici, A. D. (2011). "Neural language networks at birth". Proceedings of the National Academy of Sciences. 108 (38): 16056–61. Bibcode:2011PNAS..10816056P. doi:10.1073/pnas.1102991108. JSTOR 41352393. PMC 3179044. PMID 21896765.
- Smith, Anne (2006). "Speech motor development: Integrating muscles, movements, and linguistic units". Journal of Communication Disorders. 39 (5): 331–49. doi:10.1016/j.jcomdis.2006.06.017. PMID 16934286.
- Wilson, Erin; Green, Jordan; Yunusova, Yana; Moore, Christopher (2008). "Task Specificity in Early Oral Motor Development". Seminars in Speech and Language. 29 (4): 257–66. doi:10.1055/s-0028-1103389. PMC 2737457. PMID 19058112.