Purpose: Functional orofacial behaviors vary in their force endpoint and rate of recruitment.
This study assessed the gating of orofacial cutaneous somatosensation during different
cyclic lip force recruitment rates. Understanding how differences in the rate of force
recruitment influence trigeminal system function is an important step toward advancing
knowledge of orofacial sensorimotor control.
Method: Lower lip vibrotactile detection thresholds (LL-VDTs) were sampled in response to
sinusoidal inputs delivered to the lip vermilion at 5, 10, 50, and 150 Hz while adult
participants engaged in a baseline condition (no force), 2 low-level lip force recruitment
tasks differing by rate (0.1 Hz or 2 Hz), and passive displacement of the lip as a
control to approximate the mechanosensory consequences of voluntary movement.
Results: LL-VDTs increased significantly for test frequencies at or below 50 Hz during voluntary
lip force recruitment. LL-VDT shifts were positively related to changes in the rate
of lip force recruitment, whereas passively imposed displacements of the lip were
ineffective in shifting LL-VDTs.
Conclusions: These findings are considered in relation to published reports of force-related sensory
gating in orofacial and limb systems and the potential role of somatosensory gating
along the trigeminal system during orofacial behaviors.

Purpose: Previous research has found that auditory training helps native English speakers to
perceive phonemic vowel length contrasts in Japanese, but their performance did not
reach native levels after training. Given that multimodal information, such as lip
movement and hand gesture, influences many aspects of native language processing,
the authors examined whether multimodal input helps to improve native English speakers'
ability to perceive Japanese vowel length contrasts.
Method: Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only;
(b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training,
participants were given phoneme perception tests that measured their ability to identify
short and long vowels in Japanese (e.g., /kato/ vs. /katoː/).
Results: Although all 4 groups improved from pre- to posttest (replicating previous research),
the participants in the audio-mouth condition improved more than those in the audio-only
condition, whereas the 2 conditions involving hand gestures did not.
Conclusions: Seeing lip movements during training significantly helps learners to perceive difficult
second-language phonemic contrasts, but seeing hand gestures does not. The authors
discuss possible benefits and limitations of using multimodal information in second-language
learning.

Purpose: One popular method to study the motion of oral articulators is 3D electromagnetic
articulography. For many studies, it is important to use an algorithm to decouple
the motion of the tongue and the lower lip from the motion of the mandible. In this
article, the authors describe and compare 4 methods for decoupling jaw motion by using
3D tongue and lower lip data.
Method: A 3D position estimation method (3DPE), an adapted version of the estimated rotation
method (ERM) proposed by Westbury, Lindstrom, and McClean (2002) for 3D recordings,
a linear subtraction method, and a new method called Jaw and Oral Analysis (JOANA)
were evaluated with data recorded from sensors attached to the lower molars, lower
lip, and tongue.
Results: The 3DPE method showed the fewest errors. However, unlike the other methods, it requires
more than one sensor attached to the lower jaw. Among the single-sensor methods, JOANA
was found to be the most comparable to 3DPE.
Conclusion: The findings suggest that JOANA is efficient in decoupling tongue and lower lip motion
from jaw motion, whereas ERM, with its less complicated procedure for attaching the
lower jaw incisor sensor, can be considered a viable alternative.
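The article does not spell out the implementation of the linear subtraction method it evaluates, but its core idea can be sketched roughly as follows: treat the jaw sensor's displacement from a rest position as a rigid translation and subtract it from each tongue or lip sample. This is a minimal, purely illustrative sketch; the function and variable names are hypothetical, and a real method (e.g., ERM or JOANA) would also account for jaw rotation.

```python
# Hypothetical sketch of a linear-subtraction jaw-decoupling step.
# Assumption: jaw motion is approximated as a pure translation, so
# subtracting the jaw sensor's displacement from its rest position
# leaves the articulator's intrinsic (jaw-independent) motion.

def decouple_linear(articulator, jaw, jaw_rest):
    """Remove rigid jaw translation from one 3D articulator sample.

    articulator, jaw, jaw_rest: (x, y, z) positions in millimeters.
    Returns the articulator position with jaw displacement removed.
    """
    jaw_disp = tuple(j - r for j, r in zip(jaw, jaw_rest))
    return tuple(a - d for a, d in zip(articulator, jaw_disp))

# Example: a tongue sensor reads (10, 5, 2) mm while the jaw sensor
# has moved (0, -3, 1) mm from rest; the decoupled tongue position
# is therefore (10, 8, 1) mm.
tongue = (10.0, 5.0, 2.0)
jaw = (1.0, -2.0, 4.0)
jaw_rest = (1.0, 1.0, 3.0)
print(decouple_linear(tongue, jaw, jaw_rest))  # (10.0, 8.0, 1.0)
```

Because this sketch ignores jaw rotation about the temporomandibular joint, it illustrates why the article finds rotation-aware methods such as ERM and JOANA, and the multi-sensor 3DPE approach, preferable to simple subtraction.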