"Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign."
Phonetics: The study of speech sounds used in a particular language and how they are produced and perceived.
Phonology: The study of the sound system of a language, including the sounds (phonemes) used, their patterns of distribution, and how they interact.
Phonemes: The smallest units of sound in a language that can differentiate meaning (illustrated by the minimal-pair sketch after this list).
Allophones: Different phonetic realizations of a phoneme that occur in different contexts or environments.
Articulatory Phonetics: The study of how speech sounds are produced by the human vocal tract.
Acoustic Phonetics: The study of speech sounds and their physical properties, such as frequency, intensity, and duration.
Auditory Phonetics: The study of how sounds are perceived by the human ear.
Suprasegmentals: The features of speech that go beyond individual sounds and include stress, rhythm, intonation, and tone.
Prosody: The patterns of stress, rhythm, and intonation that give speech its musical quality and contribute to its meaning.
Vowels and Consonants: The two major classes of speech sounds, distinguished by whether airflow through the vocal tract is relatively open (vowels) or constricted (consonants).
Place and Manner of Articulation: The articulatory features that determine where and how a sound is produced in the vocal tract.
Voicing: A property of some sounds that involves vibration of the vocal cords.
Nasalization: The production of a sound with airflow through the nasal cavity, achieved by lowering the velum.
Tones and Tonal Languages: Contrastive differences in pitch that distinguish words or meanings; languages that use pitch this way are called tonal languages.
Phonetic Transcription: The use of symbols to represent speech sounds in written form.
IPA (International Phonetic Alphabet): A standardized system of symbols that can represent the sounds of all human languages, used for transcription and pronunciation guides (see the transcription sketch after this list).
Dialects and Accents: Varieties of a language spoken by different groups, in different geographic areas, or by different social classes, which may have distinct pronunciation patterns.
Language Universals: Patterns of language shared by all human languages, such as the presence of vowels and consonants, or the distinction between nouns and verbs.
Language Typology: The study of similarities and differences among languages, based on the features of their sound systems, grammars, and vocabularies.
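The phoneme and allophone entries above are usually demonstrated with minimal pairs: two words that differ in exactly one sound, such as pat and bat. As a small illustration of how that check can be made mechanical, here is a Python sketch; the toy lexicon, the ASCII symbols standing in for phonemes, and the function names are invented for the example rather than drawn from a real pronunciation dictionary.

```python
from itertools import combinations

# Toy phonemic transcriptions: each word is a list of phoneme symbols.
# The entries are illustrative; real work would load a pronunciation lexicon.
LEXICON = {
    "pat": ["p", "ae", "t"],
    "bat": ["b", "ae", "t"],
    "bad": ["b", "ae", "d"],
    "mat": ["m", "ae", "t"],
    "map": ["m", "ae", "p"],
}

def is_minimal_pair(a, b):
    """True if two transcriptions have equal length and differ in exactly one segment."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def find_minimal_pairs(lexicon):
    """Return every pair of words whose transcriptions form a minimal pair."""
    return [(w1, w2)
            for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2)
            if is_minimal_pair(t1, t2)]

if __name__ == "__main__":
    for w1, w2 in find_minimal_pairs(LEXICON):
        print(f"{w1} ~ {w2}")   # e.g. "pat ~ bat" shows that /p/ and /b/ contrast
```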
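Similarly, to make the idea of transcription with IPA symbols concrete, the snippet below looks up broad IPA transcriptions stored as Unicode strings. The five entries are hand-picked for the example; a real application would load a full pronunciation lexicon instead of this hard-coded dictionary.

```python
# Broad IPA transcriptions for a few English words, stored as Unicode strings.
# Hand-written for illustration; not a substitute for a full pronunciation lexicon.
IPA = {
    "cat":   "kæt",
    "ship":  "ʃɪp",
    "thin":  "θɪn",
    "sing":  "sɪŋ",
    "judge": "dʒʌdʒ",
}

def transcribe(word):
    """Return a broad IPA transcription, or None if the word is not in the dictionary."""
    return IPA.get(word.lower())

if __name__ == "__main__":
    for w in ["cat", "ship", "sing"]:
        print(f"{w} -> /{transcribe(w)}/")
```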
Articulatory Phonetics: The study of how speech sounds are produced with regard to the movement and placement of the articulators (tongue, lips, and other parts of the mouth).
Auditory Phonetics: The study of how sounds are perceived through hearing.
Acoustic Phonetics: The study of the physical properties of the sound waves produced during speech.
Perceptual Phonetics: The study of how sounds are perceived by the human ear and brain.
Experimental Phonetics: The study of speech sounds through experimentation, such as measuring the duration, loudness, and frequency of sounds (see the measurement sketch after this list).
Historical Phonetics: The study of how speech sounds have evolved and changed over time in a language or group of languages.
Contrastive Phonetics: The comparison of the sound systems of two or more languages to identify their differences, often applied in language teaching.
Dialectology: The study of how spoken language varies geographically and socially.
Sociolinguistics: The study of how language use varies within different social groups and in different social contexts.
Cognitive Phonetics: The study of the mental processes involved in perceiving and producing speech sounds.
Pedagogical Phonetics: The study of how to teach pronunciation and improve the communication skills of non-native speakers.
Phonetic Typology: The study of the organization of speech sounds and systems across different languages.
Computational Phonetics: The use of computer algorithms and models to analyze and synthesize speech sounds (see the synthesis sketch after this list).
Applied Phonetics: The practical application of phonetic research in fields such as speech therapy, forensic linguistics, and speech technology.
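As a rough sketch of the measurements mentioned under experimental phonetics, the snippet below computes duration, a root-mean-square loudness proxy, and a fundamental-frequency estimate for a synthetic 150 Hz tone standing in for a recorded vowel. The sample rate, the signal itself, and the 60-400 Hz pitch-search range are assumptions chosen for the example.

```python
import numpy as np

SR = 16000                                   # sample rate in Hz (assumed)

# 0.3 s of a 150 Hz tone used as a stand-in for a recorded vowel.
t = np.arange(int(0.3 * SR)) / SR
signal = 0.3 * np.sin(2 * np.pi * 150 * t)

# Duration: number of samples divided by the sample rate.
duration = len(signal) / SR

# Loudness proxy: root-mean-square amplitude.
rms = np.sqrt(np.mean(signal ** 2))

# Fundamental frequency: lag of the strongest autocorrelation peak
# within an assumed 60-400 Hz pitch range.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lo, hi = SR // 400, SR // 60
f0 = SR / (lo + np.argmax(ac[lo:hi]))

print(f"duration = {duration:.3f} s, RMS = {rms:.3f}, F0 ~ {f0:.1f} Hz")
```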
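To make the synthesis side of computational phonetics concrete, the sketch below generates a vowel-like sound with a deliberately simplified source-filter idea: an impulse train stands in for the glottal source and is passed through two-pole resonators at rough formant frequencies. The sample rate, fundamental frequency, and formant values are illustrative assumptions, not figures taken from the text above.

```python
import numpy as np

SR = 16000            # sample rate in Hz (assumed)
F0 = 120              # fundamental frequency of the source in Hz (assumed)
FORMANTS = [(700, 130), (1200, 70), (2600, 160)]   # (frequency, bandwidth) in Hz,
                                                   # rough values for an /a/-like vowel

def impulse_train(duration, f0, sr):
    """Glottal source approximated as a train of unit impulses at f0."""
    x = np.zeros(int(duration * sr))
    x[::int(sr / f0)] = 1.0
    return x

def resonator(x, freq, bw, sr):
    """Apply a two-pole resonator (a crude formant filter) sample by sample."""
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * freq / sr
    a1, a2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    y[0] = x[0]
    y[1] = x[1] + a1 * y[0]
    for n in range(2, len(x)):
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]
    return y

source = impulse_train(0.5, F0, SR)
speech = source
for freq, bw in FORMANTS:
    speech = resonator(speech, freq, bw, SR)
speech /= np.max(np.abs(speech))      # normalize; `speech` can then be written to a WAV file
```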
"The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved such as how humans plan and execute movements to produce speech (articulatory phonetics), how various movements affect the properties of the resulting sound (acoustic phonetics), or how humans convert sound waves to linguistic information (auditory phonetics)."
"The phoneme is an abstract categorization of phones, and it is also defined as the smallest unit that discerns meaning between sounds in any given language."
"Languages with oral-aural modalities such as English produce speech orally (using the mouth) and perceive speech aurally (using the ears). Sign languages, such as Australian Sign Language (Auslan) and American Sign Language (ASL), have a manual-visual modality, producing speech manually (using the hands) and perceiving speech visually (using the eyes)."
"Language production consists of several interdependent processes which transform a non-linguistic message into a spoken or signed linguistic signal. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection."
"During phonological encoding, the mental representation of the words is assigned their phonological content as a sequence of phonemes to be produced."
"These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced."
"The modification is done by the articulators, with different places and manners of articulation producing different acoustic results. For example, the words tack and sack both begin with alveolar sounds in English, but differ in how far the tongue is from the alveolar ridge."
"The most common airstream mechanism is pulmonic—using the lungs—but the glottis and tongue can also be used to produce airstreams."
"Language perception is the process by which a linguistic signal is decoded and understood by a listener."
"In order to perceive speech, the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words."
"Listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories."
"While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable." Quotes were not provided for questions 11-13 as they do not have specific quotes associated with them.