RAAMA
15.9 In every two-way voice communication application that allows the activity of a speaking interlocutor to be identified, can the activity of a signing interlocutor also be identified? Is this rule respected?
Official methodology to test criterion 15.9
Test 1 (15.9)
iOS and Android
- Open the application and start a video call between the two test terminals.
- Produce a spoken activity, and check that the second terminal displays information identifying this activity (for example, a coloured halo around the thumbnail of the active speaker).
- If this is the case:
- look for the presence of a manual mechanism (e.g. a button) that would allow the person signing to indicate that they are signing;
- if not, perform gestures in front of the camera (see note) and check that information is automatically displayed to identify this visual activity.
- If this is the case, the criterion is validated.

Note: in communication applications, identification of spoken activity is not based on recognition of an intelligible verbal message (a word or a sentence, for example), but solely on detection of a sound (a noise, for example). Likewise, visual activity, even when it does not correspond to a meaningful element of sign language, could be detected automatically by the application and would therefore serve as a mechanism for identifying the activity of a person who signs. The test can therefore be performed with gestures even if they carry no meaning in sign language.
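The principle behind the note above can be illustrated with a minimal sketch: just as voice-activity indicators typically react to any sound level rather than to recognized speech, a visual-activity indicator can react to any sufficiently large change between camera frames, meaningful or not. This is a hypothetical illustration, not the implementation of any particular application; frames are modelled as flat lists of grayscale pixel values, and the function names and threshold are assumptions chosen for the example.

```python
# Illustrative sketch (assumed names and threshold, not a real app's API):
# detect "activity" by thresholding raw inter-frame change, the visual
# analogue of sound-level voice-activity detection.

ACTIVITY_THRESHOLD = 8.0  # mean per-pixel change that counts as "activity"


def mean_abs_diff(prev_frame, frame):
    """Average absolute per-pixel difference between two same-size frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)


def is_visually_active(prev_frame, frame, threshold=ACTIVITY_THRESHOLD):
    """True when inter-frame motion exceeds the threshold.

    Any sufficiently large change counts, whether or not it corresponds
    to a meaningful element of sign language -- which is why the test
    can be performed with arbitrary gestures in front of the camera.
    """
    return mean_abs_diff(prev_frame, frame) > threshold


if __name__ == "__main__":
    still = [120] * 64                      # static background, no motion
    moving = [120] * 32 + [200] * 32        # a hand enters half the frame
    print(is_visually_active(still, still))   # no motion detected
    print(is_visually_active(still, moving))  # motion detected
```

Under this model, waving a hand triggers the indicator exactly as a cough triggers a sound-level indicator: detection is content-agnostic in both modalities.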