Educational Interpreters: Considerations for Schools

Educational interpreters are an important part of the educational team and their work in providing language accessibility for Deaf and hard of hearing (DHH) students is critical. However, it’s important for school districts contemplating hiring an American Sign Language (ASL)-English interpreter for their DHH student(s) to consider a few vital factors. First, what is the language level of the DHH student? If the student has strong signed language skills, they may benefit from having the academic information interpreted into a visual language. If, however, the student has strong oral language skills and minimal signed language skills, then perhaps there needs to be a discussion as to the ultimate goal of having an educational interpreter in the classroom. If the goal is for the student to learn some ASL, then simply being provided an interpreter will not help them acquire a new language. Educational interpreters do not provide language instruction, and it would not be fair to expect the DHH student to attempt to acquire a new language while simultaneously trying to take in academic information. Additionally, having information interpreted into a language they barely know will likely be unhelpful. 

Most crucially, if the student has minimal signed language skills and minimal oral language skills, an interpreter may not be beneficial. In fact, providing an educational interpreter to a DHH child with no complete first language may be more harmful than helpful. As Caselli et al. (2020) assert, there is no evidence that DHH children with language deprivation can overcome their language difficulties from a single language model, even if that model is fluent in the language. School-aged DHH children without fluency in any language will not be able to simply acquire a signed language from an educational interpreter. Rather, they need intensive and purposeful language intervention in their most accessible language as well as plenty of language models and same-language peers with whom to interact.

Another important consideration is the skill level of the educational interpreter. In a study by Schick et al. (2005), the authors found that 60% of the interpreters evaluated did not have the skill level necessary to provide DHH students with full access to the curriculum. This may be a result of state-by-state variation in requirements for interpreter skill levels. Many states don’t have standard requirements for educational interpreters, while others have standards that fall gravely short of DHH students’ needs (National Association of Interpreters in Education, 2021). Thus, it is critical that schools properly vet the ASL-English interpreters who may be working with their students by ensuring they have an objective measure of adequate skill level.

This is vital for a few reasons. First, interpreters themselves may not be able to accurately estimate their skills. This is due to a cognitive bias called the Dunning-Kruger Effect: the tendency for less-skilled individuals to rate themselves as highly skilled, and highly skilled individuals to rate themselves as less skilled. Indeed, Fitzmaurice (2020) found that the least skilled interpreters overestimated their skills, while the most skilled interpreters underestimated theirs. Therefore, a score on a standardized test like the Educational Interpreter Performance Assessment (EIPA) can be helpful in offering a more objective evaluation of an interpreter’s skills. Second, less skilled interpreters interpret information less accurately for their DHH students (Schick et al., 2005). The lower the percentage of accurately interpreted information, the less access DHH students have to academic content. Indeed, Schick et al. (1999) found that “many deaf children receive an interpretation of classroom discourse that may distort and inadequately represent the information being communicated” (p. 144).

Our DHH students need and deserve 100% access to academic information at all times, just like their hearing peers. It is our responsibility to ensure that (a) the student is a good candidate for an educational interpreter (if they are not, other educational placements should be discussed), and (b) that interpreter is highly qualified to provide full and complete language access.


References

Caselli, N. C., Hall, W. C., & Henner, J. (2020). American Sign Language interpreters in public schools: An illusion of inclusion that perpetuates language deprivation. Maternal and Child Health Journal.

Fitzmaurice, S. (2020). Educational interpreters and the Dunning-Kruger Effect. Journal of Interpretation, 28(2).

National Association of Interpreters in Education. (2021). State requirements for educational interpreters. https://naiedu.org/state-standards/

Schick, B., Williams, K., & Kupermintz, H. (2005). Look who’s being left behind: Educational interpreters and access to education for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 11(1), 3-20.

Schick, B., Williams, K., & Bolster, L. (1999). Skill levels of educational interpreters working in public schools. Journal of Deaf Studies and Deaf Education, 4(2), 144-155.

Making Decisions Without All the Information

The idea of parents making informed decisions about their Deaf or hard of hearing (DHH) child’s language acquisition is a laudable one— if parents were actually given all the information. However, anecdotal evidence widely suggests that when educating parents of DHH children, professionals tend to omit, dismiss, and twist information. As Kecman (2019) states, “though stakeholders widely acknowledge the benefits of informed choice…it appears that the way information is provided is not always consistent with these recommendations” (p. 8). This may be a conscious or unconscious process, but it is one that professionals must work to ameliorate because it results in information that misleads or biases parents.

In a qualitative study by Eleweke & Rodda (2000), the data showed that parents were most strongly persuaded by the information they received in the period immediately after their child was diagnosed. This relates to the cognitive bias of anchoring. Anchoring bias is the tendency to rely too heavily on an initial piece of presented information to make a decision (Allen et al., 2020). This can then affect all ensuing decisions on the topic. “Anchoring bias is a common cognitive bias and can lead to narrow-minded thinking” (Sharma et al., 2021, p. 4). Therefore, if professionals present parents with misinformation that they anchor to, it may negatively influence all the subsequent decisions they make about their DHH child. The authors note that during this initial period, information is often presented in an unbalanced way. For example, the parents in the study who chose an auditory-oral approach appeared to be misinformed about how a signed language might have benefitted their children, as evidenced by some of their comments about British Sign Language (Eleweke & Rodda, 2000). Thus, it is vital that professionals present all language options equally so as to lessen misconceptions and misperceptions.

The authors also found that parents’ perception of the functions of assistive listening devices (i.e., hearing aids and cochlear implants) influenced their choice of a communication approach. The information provided by specialists may have led the parents to have unrealistic expectations about the devices (Eleweke & Rodda, 2000). Because of the information they were given, the parents who chose the auditory-oral approach appeared to have very high expectations of the utility of the hearing aids and “seemed disappointed that their hopes were not realized” (Eleweke & Rodda, 2000, p. 379). If professionals inform parents about listening devices in a way that implies that they “fix” hearing or make a DHH child like a hearing child, parents may believe this. It is critical that professionals explain to parents that hearing devices are simply technological tools. “Their use does not guarantee any specific outcome” (Szarkowski, 2019, p. 244), and they certainly do not allow DHH children to hear normally. Professionals should not lead parents to believe that the devices are capable of more than they actually are. They must be accurate and truthful in their explanation of how the devices work and what should be expected from them. Perhaps if parents understood the limitations of hearing devices, they would have more realistic expectations for their child’s use of them.

Lastly, the attitudes of service professionals and educational authorities could influence the parents’ decisions regarding communication approach and school choice for their child. The parents who chose British Sign Language expressed that the professionals appeared to follow their own philosophies and would rather “make the child adapt to their system [even if] the system is not adaptable to the child’s needs” (Eleweke & Rodda, 2000, p. 380). Kecman (2019) notes that there are human elements involved in the process of informed choice, such as the— often emotional— decision-making process that parents of DHH children go through, as well as professionals’ attitudes towards deafness and its management. All of this can influence the way that potential options are communicated to parents.

Importantly, the way that information is presented is just as influential as whether it is presented at all. This is related to a cognitive bias called the framing effect: the presentation of information, whether framed positively or negatively, affects people’s decision-making (Kte’pi, 2020). If professionals present information about signed language to parents in a negative or dismissive way (e.g., “It may hinder him from speaking” or “You probably won’t need it”), parents may be less likely to use a signed language with their DHH child. If information is presented in a positive or neutral light (e.g., “It’s very beneficial for all children” or “It’s equally as important as spoken language”), then perhaps parents will be more likely to explore that opportunity.

It is professionals’ duty to accurately and completely explain all language opportunities for DHH children to parents in a balanced way. Only then can parents make a truly informed decision.

References

Allen, J., Miller, B. R., Vido, M. A., Makar, G. A., & Roth, K. R. (2020). Point-of-care ultrasound, anchoring bias, and acute pulmonary embolism: A cautionary tale and report. Radiology Case Reports, 15, 2617-2620.

Eleweke, C. J., & Rodda, M. (2000). Factors contributing to parents' selection of a communication mode to use with their deaf children. American Annals of the Deaf, 145(4), 375-383.

Kecman, E. (2019). Old challenges, changing contexts: Reviewing and reflecting on information provision for parents of children who are deaf or hard-of-hearing. Deafness & Education International, 21(1), 3-24.

Kte’pi, B. (2020). Framing effect (psychology). Salem Press Encyclopedia.

Sharma, R. K., McManus, C., & Kuo, J. H. (2021). Idiopathic thyroid abscess in a healthy 22-year-old female: A case of anchoring bias. Journal of Clinical and Translational Endocrinology: Case Reports, 19.

Szarkowski, A. (2019). Language development in children with cochlear implants: Possibilities and challenges. In N. S. Glickman & W. C. Hall (Eds.), Language deprivation and deaf mental health (pp. 235-262). Routledge.

Evidence-Based Practice and D/HH Children

Throughout my years working as a speech-language pathologist (SLP) with deaf and hard of hearing (DHH) children, I have noticed that many clinicians do not seek evidence-based sources of knowledge to modify and improve their practice. Instead, they tend to rely on old sources of information without questioning their validity. Portney (2020) identifies three sources of knowledge that practitioners have typically relied on:

  1. Tradition - “That’s how we’ve always done it.”

  2. Authority - “That’s what the experts say.”

  3. Experience - “It’s worked for me before.”

This is highly relevant to SLPs and other professionals who work with DHH children. SLPs will often utilize non-evidence-based approaches with DHH children because of tradition. For example, auditory-verbal therapy methods and concepts such as the Ling 6 sound check and the auditory hierarchy are how we’ve always treated DHH children; we inherit that knowledge and accept it as evidence without attempting to validate it. SLPs may also utilize non-evidence-based approaches because of authority. Organizations like the Alexander Graham Bell Association (AGBell) or the Moog Center will support or back a therapy method, and clinicians will rely on that information blindly, without checking whether the evidence has changed. Lastly, SLPs often utilize non-evidence-based approaches with DHH children because of experience. Perhaps they worked with one or two DHH students in the past for whom these methods worked. Therefore, they continue to utilize them for all of their DHH students without attempting to change or modify their practice.

However, the problem with relying on these three sources of knowledge rather than objective research and evidence is that they stagnate our practice as clinicians and hinder our skill growth. As Portney (2020) states, “over time, these are likely to result in a lack of progress, limited understanding of current knowledge, or resistance to change even in the face of evidence” (p. 54). When working with DHH children, it is vital for SLPs to think critically, keep an open mind, seek new evidence, and change their practice to better serve the child.

References

Portney, L. G. (2020). Foundations of clinical research: Applications to evidence-based practice. F. A. Davis Company.

Don't Blame Deaf Kids' English Errors on their ASL

Have you ever blamed a Deaf child’s errors in written or spoken English on the fact that they know ASL? Have you ever heard a colleague make statements about ASL “influencing” a Deaf child’s English production? Let’s take a look at three common statements and why we should avoid saying them:

  1. “My Deaf students always mix up their pronouns. It must be because of ASL.”

Many other languages use pronouns differently than English does. ASL uses non-gendered pronouns. This means that the sign for “he” is the same as the sign for “she.” The Ghanaian language Twi does the same: there is one non-gendered pronoun to refer to any person. If you want to inform the listener of the gender, you have to use the person’s name or say “the woman” to clarify.

In French and other Romance languages, possessive pronouns take the gender of the thing possessed, not the possessor. For example, if I am discussing my sister’s dog, I would say “son chien” (his dog). This is because the possessive follows the gender of the word for “dog,” which is masculine. My sister’s gender does not influence the choice. Or, for a sillier example: as my French-English bilingual cousin stated as a young child, “Emma knows his papa!” Therefore, French speakers or Twi speakers learning English as a second language may make pronoun errors similar to those of Deaf children.

  2. “My Deaf students never use the ‘do’ question properly. It must be because of ASL.”

Many other languages don’t use the “do” question (e.g., “Where do you live?” or “Do you know the time?”). In ASL, one might ask YOU LIKE EAT WHAT? instead of “What do you like to eat?” In Italian, the “do” question is formed simply by dropping the subject from the statement form. For example, to turn the statement “Noi abbiamo zucchero” (We have sugar) into a question, one would drop the “we” and ask “Abbiamo zucchero?” (Have sugar?). Notice how there is no need to add a “do” in the question form. This is because the word for “have” in Italian has the “we” embedded into it. Therefore, Italian speakers learning English as a second language may struggle with the “do” question as well.

In French, a “do” equivalent exists; however, it’s a cumbersome phrase (est-ce que). If speaking quickly or informally, one can avoid using it by simply inverting the subject and verb. For example, to ask if a stranger has the time, one might ask “Avez-vous le temps?” (Have-you the time?). French speakers learning English as a second language may attempt to use this shortcut, which doesn’t really work in (American) English.

  3. “My Deaf students always forget their articles. It must be because of ASL.”

Other languages have different rules for the use of articles (a, an, the). Russian, for example, doesn’t have articles at all. This may lead a Russian speaker learning English as a second language to drop articles in English, too (e.g., “I have dog”). German has many more article forms than English does. An English speaker learning German as a second language may struggle to know when to use the correct article.

Thus, it’s vital to be mindful of “blaming” any student’s English production (written or spoken) on another language. If we, as native English speakers, attempted to learn another language we would probably make mistakes that were influenced by our knowledge of English. There is nothing wrong with that.

So the next time a Deaf child makes an error in English, instead of saying, “Oh, that’s their ASL affecting their English again,” try saying, “I wonder what other languages do that, too.”

The Soccer Scenario

Imagine if there was a belief that every child needed to be phenomenal at soccer. It was believed that soccer was the only way kids could learn teamwork, fitness, and discipline. There was an entire organization dedicated to teaching parents about the importance of soccer. When a baby was born, a soccer professional from this organization met the new parents in the hospital and told them, “Your child will be great at soccer.”

However, when only a handful of graduating high school seniors made it in professional soccer, proponents of the soccer approach doubled down. “Kids who didn’t become skilled at soccer did not practice enough,” they said. “If they had gone to every single practice and done everything their coach told them to do, they would be successful soccer players.”

“But what about kids who just aren’t naturally good at soccer?” others countered. “Can’t they learn teamwork and discipline through marching band or drama club? Or other sports, like tennis or baseball?”

“No. Soccer is the gold standard for child development. It is the only way a kid will ever become successful in life,” the soccer organization insisted.

And so, every year, only a small percentage of students graduated with excellent soccer skills. The soccer organization pushed harder. “We have to train them earlier. Right now, most kids are learning to play soccer at the age of three. We must start them at the age of two. We must teach parents about the importance of bringing their child to every single soccer practice and game. They have to practice at home in addition to that. That is the only way they will become proficient soccer players.”

People started to speak up. “But not every kid needs to play soccer. Some kids will be successful in life without soccer. Other kids will play other sports or do other activities. Soccer isn’t the only way to measure success or teach life skills. Starting kids earlier and forcing them to attend every practice and game will not guarantee that they become successful soccer players.”

The soccer organization quickly silenced these people. “Stop using scare tactics and speaking for all children. Soccer is the only way to go.”

If this scenario sounds hyperbolic, that’s because it is. We cannot predict a child’s success with spoken language (or soccer) from birth. We cannot blame a child’s struggles with spoken language (or soccer) on their attendance at training sessions. We cannot assert that spoken language (or soccer) is the only way for Deaf children to be successful in school or in life. We cannot state that spoken language (or soccer) will work for every single Deaf child. We cannot deny Deaf children the chance to learn and grow in other ways, via other avenues and other languages.



SimCom is Not Inclusive

Every time I assert that Simultaneous Communication (SimCom) does more harm than good, I get pushback. “It’s more inclusive,” someone says. “It gives Deaf people access to my conversation,” someone else states.

The truth is that SimCom is not inclusive at all. In fact, it’s quite the opposite. Numerous studies have shown that when you attempt to speak two languages at the exact same time (e.g., ASL and English), both languages suffer. You end up leaving out a lot of information in both languages (although ASL is typically the language that is most degraded), you make mistakes in grammatical structures that you wouldn’t normally make, and you produce the language(s) much, much slower than you normally would.

To make this point, I found a popular Instagram influencer who used SimCom in her stories. First, I watched without sound and just transcribed what she said in ASL. This is what a Deaf person has access to. Then, I watched the same video again and transcribed what she said in English. This is what a hearing person has access to. See if you can spot the differences:

VIDEO 1

Signs during SimCom: Good morning not good morning. Today was a normal day. Busy. 3:30. I don’t know, quiet work busy. I recently went to post (verb) the post office. Disney got *UNCLEAR* so to passport for that, working. Store day. Not share.

Spoken English: “I was going to say good morning but it’s not good morning. Today has been a normal day that’s so busy and it’s like, my goodness it’s almost 3:30. How is it 3:30?! So *UNCLEAR* been quiet, been working, busy, and we recently went to the post office and we’re working on our Disney cruise, so we had to get passport for the girls. We had to do that, and we had an appointment, and just go go go. So we had a launch today and I know it’s crazy because I haven’t shared it all but we have a lot that came, probably most excited is the key lanyards. They come in 5 different ones, this is kind of like our team collection. Red, orange, and blue and then we did the responder blue and responder red. These are now available.”

VIDEO 2

Signs during SimCom: Show you this cute. Perfect long. Team colors, match school colors, maybe your favorite team. You have a firefighter in your family. Police or firefighter. *UNCLEAR SPELLING*

Spoken English: “Ok so I did want to show you about the cutie lanyard. But I was like ok this is kind of like a badge so I just want to show you where it lays. Your badge is more than likely a little smaller than this but I think it’s the perfect length. I’m really excited about the team colors because they can match your school colors or maybe your favorite teams. I just think it’s really cool. Or if you have a firefighter in your family or a police. Police. To show your support for them. I think it’s really cool. I love the responder clips and lanyard, so that’s kind of fun.”

VIDEO 3

Signs during SimCom: Difference between Luna. Wrong I have to stop texting. Really excited because I’m working with a few new makers. Text me today you have, you have. Not here right now for local makers. Always allow people who have their own designs, who create their own. Got one back from a test today, it passed yay! Can’t wait to show you.

Spoken English: “Also I wanted to show you the difference between the Luna, which has small beads, like a miniature Jane. And then the Jane just to show you because I’ve gotten so many DMs asking me how are they different. So this is the Luna and this is the Jane for reference so you can see the size difference. Obsessed with this new one. Sorry, my phone never stops it’s like ding ding ding ding. Ok so here’s several. I’m really excited we have been working with a few new makers and I actually just got a text message today, “are you hiring, you hiring?” We’re actually not hiring right now for local makers. But we always let people who have their own designs like who created their own clips to send them to us. And we actually just got one back from testing today and it passed so I’m really, really excited and I can’t wait to show you!”

Now, imagine using SimCom to instruct young Deaf children. What do you think their language output will look like if this is their model?

Those Signing Gloves Are Not That Great

I’m sure you’ve seen these signing gloves that have gone viral on the internet. Or these ones. They are worn on the hands and translate signs into speech. Seems like pretty amazing technology, right? Well, the truth is, these gloves are not that great. Here’s why:

  1. As might appear obvious, these gloves translate only the signs themselves. This means that depending on what handshape you’re using and where your hands are in space, the gloves can determine what you’re signing and translate it. There is a glaring problem here, however. If you know American Sign Language, you know that the signs themselves hold only a fraction of the meaning. There is rich language in the eyebrows, the mouth, the body, and the manner in which the signs are produced. For example, moving your eyebrows can change a question to a statement. Signing the same sign harder and faster can denote a synonym. Pulling your lips back indicates something that happened recently, a form of tense. Moving your body from one side to the other can depict a list. All of this would be lost on the gloves, leaving the signer with a robotic, caveman-like production of their ASL.

  2. The programmers of these gloves do not know American Sign Language. In one sample video of how the gloves worked, the creator signed a simple sentence in Signing Exact English (SEE), not ASL. SEE and ASL are very different; while ASL is a language, SEE is not. The programmers not only missed this vital distinction but also programmed the signs incorrectly. In the sample video, the gloves were programmed to say “my” when the person signed “I.” Imagine if I were creating a product that translated French without knowing any French. Or, if I did know some French, imagine I didn’t think to ask native French speakers whether my product made sense when it produced their language. This leads into my next point:

  3. The gloves’ designers are not Deaf, nor did they incorporate the opinions of Deaf people into their design. For a product that purports to help a group of people, it should at the very least enlist the opinion of that group of people. If a Deaf person had been consulted on this project, I am sure they would have made all the points I am making right now.

  4. Any conversation with someone wearing these gloves would be a monologue. That is: the person wearing the gloves would have to be Deaf, right? Imagine they sign to a hearing person and the gloves (somehow) translate the signs into speech. The hearing person receives the message, but how will they respond? If they speak their answer, the Deaf person cannot hear it. The hearing person can’t sign their response back because they clearly don’t know ASL, or they wouldn’t need the Deaf person to be wearing signing gloves. They could write their response, but then why didn’t they just write the whole conversation?

  5. The gloves’ mere existence perpetuates a mindset that Deaf people are required to accommodate hearing folks’ language abilities. Why is it the duty of a Deaf person to wear gloves that “translate” their language when it’s just as easy for a hearing person to learn a few signs? It also perpetuates the idea that ASL is just a bunch of signs. It’s not— in fact, it is composed of a multitude of complex linguistic signals, just like any other language.

So before you share these viral articles about gloves that can translate signs into speech, take a second and ask a Deaf person you know what they think about them. I have a feeling they’ll tell you that those signing gloves are not that great.

ASL Is Not A Fad

ASL is not a fad. Fingerspelling is not in vogue. A language is not something to fetishize, especially when it is being denied to a whole population of children who need it. American Sign Language is the only language that is fully accessible to deaf and hard of hearing (DHH) children. This does not mean that it’s a nice option. It does not mean that it’s a fun tool. It means that it is an absolute necessity for proper brain development.

If you, as a professional, are actively denying DHH children contact with their only accessible language, then you cannot promote ASL on the internet. If you are passively allowing someone else to deny that child access to ASL, you cannot promote ASL on the internet. If you subscribe to a mindset or a theory that deprives DHH children of an accessible language, then you cannot promote ASL on the internet.

To make an analogy, imagine you work with Spanish-speaking children who are forced to stop using their native language in favor of English. You do not speak Spanish and you don’t believe they should either. Every time they are caught speaking Spanish with each other, you punish them. You give them detention and take away their recess. Then, during your prep period, you make fun inspirational posters in Spanish and hand them out to your colleagues.

Or, to put it more poignantly, imagine you work with children who are starving. Children who are emaciated, completely deprived of nutrition. Yet you take out your lunch and eat it in front of them. “Mmmm, this is such a good sandwich,” you rave, offering your colleague a bite.

The act of promoting a language that you do not speak for personal gain is not only cultural appropriation but also a blatant dismissal of our professional obligation to cultural sensitivity. The act of creating fun handouts in a language while you purposefully withhold said language from children who need it is dishonest.

Thousands of DHH children suffer irreparable cognitive and linguistic deficits as a result of being denied access to American Sign Language during their critical language-learning years. This significantly impacts not only their academic achievement but their mental health, social-emotional development, and overall quality of life. If you are not actively advocating for a deaf child’s right to early access to ASL, then you do not have the right to promote the language on the internet for your own gain. ASL is not something you use to increase your social media followers. It is not a passing craze or a fun infatuation. It is a language that is being withheld from the children who need it most.

So before you make a cute handout or poster with fingerspelling from ASL, ask yourself if you are contributing to language deprivation. If your answer is anything other than a resounding no, then drop your project and do your part to advocate for language access for deaf and hard of hearing children.

Hasty Generalization

Hasty generalization is a fallacy in which one reaches a generalization based on insufficient evidence, making a rushed conclusion without considering all of the variables. The fallacy takes the following form:

A is true for B. A is true for C. Therefore, A is true for D, E, F, etc.

A recent study by Chu et al. (2016) reported that “the language abilities of children who communicated solely via listening and spoken language were significantly better than children who used sign language.”

This is a classic case of hasty generalization.

In the study, Chu et al. looked at two groups of children: one that utilized only spoken language after implantation, and one that used total communication after implantation. Spoken language (in this case, English) is a robust language system in and of itself. Total Communication (TC) is not. TC, which has come to mean simultaneous communication (SimCom), is the practice of signing and speaking at the exact same time. This is well known to have negative effects on language development, as it degrades the signal of both languages. TC/SimCom are not languages. They are a form of pidgin: an amalgamation of two languages without a grammar and structure of its own.

What is the ACTUAL conclusion of this study? The language abilities of children who communicated solely via listening and spoken languages were significantly better than children who used a pidgin-like combination of spoken and signed modalities.

Of course they were.

That would be like saying that children who are monolingual English speakers perform better on English language tasks than children who speak Franglais. Therefore, French is the problem. No. Franglais is the problem. Mixing two languages into a non-language is the problem. Comparing real languages to pidgins is the problem.

American Sign Language (ASL) is a fully formed and robust language equal to English. TC and SimCom are NOT. If the study had instead looked at children who use spoken English alone and children who use ASL and English separately in their own forms, they would have found very different results.

How is this hasty generalization? The researchers assumed that because language abilities in English are decreased (A) in children who use total communication (B) and in children who use simultaneous communication (C) then it must be true for children who use American Sign Language (D). This is faulty logic and a perfect example of hasty generalization.

Beware of studies that use hasty generalization. Overgeneralizing negative results from a group of children using TC/SimCom to include a real language like ASL is very bad science. Read these studies carefully. If they are being reported by another source, be sure to find and read the actual article.

The Case for Sign Language

In order to understand the case for sign language, it is important to first understand language development. A typical hearing infant is constantly exposed to language in the spoken modality from the moment they are born. That is, the child cannot turn off their ears and cease the input to the brain. As a result, their brain is receiving continuous stimulation that helps build neuronal connections and shape development.

If a typical hearing infant learns language without effort or explicit teaching, why shouldn’t a deaf child be afforded the same privilege? In the example of the hearing child, the language that he/she is able to learn effortlessly happens to be one of a spoken modality. In the example of the deaf child, the language that he/she is able to learn effortlessly is one of a signed modality. As Glickman asserts in a 2007 study, the only language that a deaf child can acquire naturally and effortlessly is sign language.

Because most deaf children are born to hearing parents, listening and spoken language is the most common modality choice. This means that the child is fitted with hearing aids, or undergoes either unilateral or bilateral cochlear implant surgery, with the purpose of learning to listen and speak. There is one glaring problem with this method: current research has shown that it is not sufficient as a standalone approach for language intervention (Hall et al., 2017). There are a few reasons for this. The first is that hearing aids and cochlear implants, like most technology, are prone to malfunction and failure. For every moment that the child’s aid or implant is not working properly, that child loses precious input to the brain. Sometimes, the internal component of the implant malfunctions. To replace this, the child must undergo another surgery. Moreover, most of the current technology cannot be worn when the child is showering, swimming, sleeping, or playing sports. These are language-learning opportunities that a hearing child naturally receives, but that are eliminated for the deaf child who is learning to listen.

The second reason is the amount of work and therapy required to learn to listen with a hearing aid, and even more so, a cochlear implant. Listening through a cochlear implant is very different from natural hearing. The implant is an array of electrodes that is inserted into the cochlea, or the hearing organ. Normal hearing occurs when pressure waves in the inner ear fluid stimulate the hair cells of the cochlea, which in turn activate the auditory nerve. With a cochlear implant, the stimulation to the auditory nerve is via electrical impulses, bypassing the hair cells of the cochlea. As a result, the brain must overtly learn to interpret what these impulses mean. It must be trained to understand the input. Therefore, while hearing children are effortlessly learning spoken language, implanted deaf children are working overtime to explicitly learn something that their brain has the ability to absorb easily in another modality. To do this requires a rigorous course of doctor’s appointments, audiology appointments, MAPping sessions, and speech and listening therapy. The obvious issue here is that many parents are not able, or perhaps not willing, to bring their child to these vital appointments as frequently as is required.

The third, and most critical, reason is one that is largely overlooked. Cochlear implant technology has improved considerably over the years, and scientists and surgeons highly acclaim the equipment itself. However, there is still no way to predict the reaction of a child’s brain to this technology, despite perfectly functioning equipment. As Humphries et al. (2012) assert, cochlear implants involve not only progress in technology, but the biological interface between technology and the human brain. Some children’s brains simply do not “take” to the unnatural input to the auditory nerve. Children with additional diagnoses or brain differences demonstrate significant difficulty learning to listen with a cochlear implant. Some children’s brains react to the electrical impulses with vertigo, seizure activity, or migraines. Any of these situations might require years to discover, assess, and attempt to resolve. In the interim, the child is not receiving an adequate language signal during their most imperative years.

This is not to say that a child should not receive hearing aids or cochlear implants. It is simply to demonstrate that listening should not be the child’s sole access to language. According to Hall et al. (2017), “many deaf children are significantly delayed in language skills despite their use of cochlear implants. Large-scale longitudinal studies indicate significant variability in cochlear implant-related outcomes when sign language is not used, and there is minimal predictive knowledge of who might and who might not succeed in developing a language foundation using just cochlear implants” (p. 2).

Children using cochlear implants alone simply are not acquiring anything close to language fluency. Therefore, it is important that medical professionals do not give families the false impression that the technology has advanced to the point where spoken language is easily and rapidly accessed by implanted children (Humphries et al., 2012).

If, however, a deaf child is exposed to sign language from an early age, that child will have a natural and effortless language as a foundation for all other learning, including listening and speaking. As Skotara et al. (2012) observed, the acquisition of a sign language as a fully developed natural language within the sensitive developmental period resulted in the establishment of brain systems important in processing the syntax of human language.

If a deaf child is provided nutrition to the brain via sign language, that child will develop typical language and cognitive abilities. By learning a natural first language from birth, basic abstract principles of form and structure are acquired that create the lifelong ability to learn language (Skotara et al., 2012). This forms a foundation for learning listening and spoken language, if desired. If, through sign language, a child has the cognitive understanding and neural mapping for the concept of a tree, for example, that child will be better able to produce the word “tree.” If, through sign language, a child has conceptual knowledge of through, that child will be better able to use the word “through” accurately in a sentence. A brain cannot speak the words for concepts it does not possess. Sign language provides the venue for learning these critical concepts. In fact, research has shown that implanted children who sign demonstrate better speech, language development, and intelligence scores than implanted children who don’t sign (Hall et al., 2017).

Thus, it is vital that a deaf child be provided immediate and frequent access to sign language. This is not in lieu of spoken language, but rather as a prophylactic measure. The two are not mutually exclusive; in fact, they can and should be learned concurrently, as bilingualism has many benefits for brain development. As Humphries et al. (2012) assert, there is no reason for a deaf child to abandon spoken language, if it is accessible to that child, simply because they are also acquiring sign language. With sign language, a deaf child will always have a fully accessible language. Therefore, in the event that their cochlear implant breaks, malfunctions, can’t be worn, or simply doesn’t “click” with their brain, that child still has a language. With sign language as a foundation, a deaf child is able to build other cognitive processes that lead to a lifelong ability to learn and perform on par with their hearing peers.

Well-Written Pseudoscience is Still Not Science

I had the recent misfortune of stumbling upon an article written by a speech-language pathologist who is an auditory-verbal therapist. The content of this heinous piece of work was well-written and appeared professional. However, this skilled façade was masking an overall gross miscarriage of research and information that stems from a history of pseudoscience masquerading as real science. Not only am I ashamed that this woman is a part of my field, but I am positive that I lost brain cells reading her work. Below is the original article with my responses added in bold:

 Why Not Baby Signs?

 Even parents who have chosen a listening and spoken language outcome for their children often ask, “Should we use baby signs?” just to fill the gap during the time from identification to cochlear implantation, or identification to those first spoken words.  If you’re to believe the media hype, every parent, those of children with and without hearing loss, is doing it.  So what could be the problem? I would implore this author to look up the word “hype” as she clearly struggles to understand its meaning. Research-based and evidence-based recommendations are not “hype.” Using the word “hype” to degrade legitimate linguistic and neurological evidence is deceitful and appalling.

However, media hype is just that: hype.  A marketing frenzy created by companies that care way more about their bottom line than your child’s development or any kind of real research, making wildly unsubstantiated claims that baby signs will do everything from increase your child’s IQ to solve world hunger (okay, maybe not that last one).  When we really examine the sources, are baby signs all they’re cracked up to be?  Here is what I discuss with parents: Using hyperbolic statements to demean actual research behind bilingual language development is childish and small. It shows that this author’s only tool against the truth is to mock it, as she has no legitimate counterargument.

If you have chosen a listening and spoken language outcome for your child, start in the direction you mean to go.  Devoting time and energy to learning signs, even baby signs, that you plan on dropping later is taking precious time off task and siphoning your energy away from what you’ve identified as your primary goal: becoming the first and best teacher who can help your child learn to listen and talk.  I believe that parents have the right to choose whatever communication method will work best for their family.  But I advise them:  once you choose a communication method, run after it like crazy and give it 100%. That is how the “success stories” in any communication mode are made. It’s extremely presumptuous to assume that these families will be “dropping” signs later. It is impossible to know what will work for that child’s brain. If the child takes to sign language, then that child will NOT be “dropping” signs later. Taking the speaking/listening method and “running after it like crazy” is how you create language deprivation resulting in permanent neurological deficits. The problem with the idea that you should pigeonhole yourself into only the speaking/listening method is that it causes parents to persist with a language that isn’t working for their child for FAR too long. This results in severe and permanent language deficits and lifelong learning deficits simply because someone like this author convinced parents that forcing a round peg into a square hole with all your might and focus will make it go in.  

Another aspect to consider is that baby signs are not full, complete language.  By only signing key words, parents are providing their child reduced language input, when they have at their disposal a full, fluent language (their native language(s)) already in the home.  If you’re talking to your baby and only signing key words (“Do YOU WANT your BOTTLE?  It’s time to take a DRINK.  Are YOU HUNGRY?) it’s like talking to a dog who only hears, “Wah wah wah wah wah LEASH wah wah WALK wah wah TREAT.”  You’re being the Charlie Brown teacher, and your baby is not building the crucial linguistic connections in the brain for a full language system.  (This is another reason why I encourage parents who choose a sign language approach to become fluent… yesterday). This is, quite simply, disgusting and offensive. YOU WANT BOTTLE with eyebrows raised for DO is grammatically correct in American Sign Language. YOU HUNGRY with eyebrows raised for ARE is grammatically correct in American Sign Language. Just because a language doesn’t follow the syntactic structure of YOUR language does not make it “Charlie Brown teacher talk.” Imagine I said that because in French you say Est-ce que tu veux ton biberon?  and “est-ce que” is not the structure we use in English, therefore French is like gibberish and shouldn’t be used with children. Sound asinine? That’s because it is.

The other assertion in this appalling passage is that parents have to be fluent in American Sign Language in order for their child to learn it. This, again, proves that this author has no knowledge of neurolinguistics and should therefore refrain from commenting on language development. You know when American children of immigrants don’t become fluent in English because their parents don’t speak it fluently? Oh that’s right, that doesn’t happen. That is exactly what this author is subscribing to, even though we know that is exemplary pseudoscience.

The signs taught in baby signs books/videos/DVDs/flashcards (don’t get me started on flashcards) are iconic.  That is, if you’ve ever played a game of charades, you probably know these signs.  They’re signs that make sense because you’re literally acting out or creating a picture of the thing you’re discussing (think about the signs for book, drink, eat, etc.).  If you think about spoken language, there is nothing inherently “book” about the word “book.”  Nothing about how you say “cat” actually means the animal “cat.”  This is an important difference.  We have to help infants and toddlers learn the relationship between words and their referents.  There are non-iconic signs in ASL, but they’re not the ones in the standard baby sign repertoire.  If your goal is spoken English, you’re much better served helping your child establish spoken word-referent connections instead. This paragraph truly shows the level of incompetence this author has surrounding American Sign Language and languages in general. The level of ineptitude displayed in this passage will take me a while to deconstruct, so bear with me.

First of all, ALL languages, spoken and signed, have iconicity (or words/signs that in and of themselves convey their own meaning). That being said, there is a very archaic and unproven belief that iconicity in a language somehow makes it subpar or substandard. This, from a linguistic perspective, is simply untrue and shouldn’t be given any attention. This author’s audacity is the equivalent of me, who knows absolutely no Chinese, saying, “That one Chinese character happens to look like what it means, therefore learning Chinese will hinder your ability to learn English.” Sound ridiculous? That’s because it is.

The examples of signs provided (i.e. book, drink, eat) are somewhat iconic. This is an interesting observation, and that’s the extent of its utility in this context. It has no effect whatsoever on language development any more than learning “baa” which sounds exactly like a sheep, would impair your ability to learn English. In fact, the majority of signs are not iconic at all; nothing about the way you sign “brother” actually means the person “brother.” I would beseech this author to avoid making linguistic assessments of which she has no background or formal knowledge.   

Parents are often sold on the many myths promoted by those who have a significant financial interest in selling baby sign materials.  But do they have any merit?

·          Myth: Baby signs encourage bonding by enabling children to express their needs sooner.  Baby signs serve to decrease parental responsiveness.  There are real, significant, evolutionarily and developmentally important reasons why babies do not talk until they’re around a year old.  Most mothers of infants can identify their baby’s cries and tell you that the infant has distinct sounds for hunger, wetness, or pain.  There’s a purpose for this!  Babies aren’t supposed to tell us what they need — it’s part of the bonding process that helps parents become attuned to their children’s needs.  It may be more convenient for you to have your child “tell” you what he wants, but you are short-circuiting a very important bonding process. Again, I would implore this author to look up the definition of the word “myth.” She has amazingly taken the objective and evidence-based fact that children can produce signs earlier than words and twisted it into blaming parents for being lazy. Preying on parents’ desires for their deaf child to be “normal” by guilting them into pseudoscientific methods is a repulsive practice that needs to stop.  

·          Myth: My child is so smart, he could tell me he wanted more food using sign way before any of the other babies could say it.  This is simple operant conditioning.  If I do X [the sign], I get Y [more food].  You can train a rat to do this.  I don’t think it says much about your child’s long-term intellectual potential.  Isolated signs like this to get what you want are a “trick,” not a full language system. You can’t call something a myth just because it goes against your own personal beliefs. The fact that children can produce signs earlier than spoken words is rooted in objective evidence that has been proven across multiple fields. The fine motor skills of the hands develop prior to the fine motor movements of the lips and tongue.

·          Myth: Because baby signs are marketed as “educational,” they must have value.  Unlike words like “Reduced Fat” or “Caffeine Free,” “Educational” is not a federally regulated label.  Anyone can advertise their products as being “educational” without the slightest hint of research behind them.  At the end of the day, no matter how cute the story is behind the product, or how hard they try to sell you on the idea that this is a “family” production or “by moms, for moms,” these companies care about their bottom line, not your child.  That’s just how capitalism works. No one is “marketing” baby signs. Recommending early access to language for deaf babies and providing parents the resources to do so is simply best practice. Again, because this author has no other counterargument she is resorting to absurdities in an attempt to make a futile point.

So what does the research say about baby signs? Topshee Johnston et al. (2003) performed a comprehensive review of nearly 1,200 studies that had been conducted on baby signs and found that only five showed that baby sign programs had a positive effect on child language… and the positive effects shown in those studies did not last past age two.  An exhaustive review of the evidence showed overwhelmingly neutral/negative effects from baby sign language.  Any positive outcomes noted did not have persistent, long-lasting effects on the child’s language and cognitive development later in life.  By age two, it was impossible to tell the difference between children who had used baby signs and those who had not. For every poorly conducted research article that states this, there is a robust study that states the opposite. This is not a matter of deaf children, but one of basic bilingual language development. Any linguist, developmental scientist, speech-language pathologist, or neurologist worth their salt will tell you that learning a second language NEVER impedes a child’s ability to learn the first language. Ever.  

Kirk et al. (2012) found no evidence to support claims that using baby signing with babies helps to accelerate their language development.  While babies did learn the signs and begin using them before they started talking, they did not learn the associated words any earlier than babies who had not been exposed to baby signs, and did not show any overall enhancement in language development.  The study did find that helping parents become more attentive to their children’s gestures served to increase responsiveness and bonding, but this is a standard part of early intervention in auditory verbal therapy, and not unique to baby sign programs. This study directly contradicts what this author wrote only three paragraphs ago. The study found that helping parents become more attentive to their children’s signs served to increase responsiveness and bonding. Interestingly, this author just stated that “It may be more convenient for you to have your child “tell” you what he wants, but you are short-circuiting a very important bonding process.” I would beware of believing an article written by someone who contradicts herself within the same essay.

In infants with hearing loss who go on to receive cochlear implants, Dr. Susan Nittrouer found that when sign language was used to supplement spoken language, there was no effect on the spoken language of children identified with hearing loss below one year of age. However, for children identified at one year of age or older, there is a negative effect—that is, when you combine spoken language and sign language in children over one year of age, their spoken language suffers.  Basically, if you want to knock yourself out doing baby signs with your infant pre-CI, you’re just exerting energy for no effect on your child’s language.  If you want to use signs after your child receives the CI, you’re working against their listening and spoken language development. Again, the pseudoscience is rich here. Using sign language after your child receives a cochlear implant DOES NOT work against their listening and spoken language development. There is ABSOLUTELY NO EVIDENCE that supports this and these words should never be uttered again.

If you’re interested in reading the original dross, you can find it here.

Why Isn't ASL "Cool" Enough for Deaf Children?

I’m scrolling through my Facebook newsfeed when I see it for the umpteenth time: an article describing how Starbucks will open an ASL-friendly store in October. At least three people have posted the article on my wall or shared it with me. The same goes for the cute Target doormat with “welcome” spelled out in the ASL finger alphabet. And the kids t-shirts with the “I love you” handshape on them. And the video of the college engineering student who designed gloves that simulate ASL signs. And the one of a bride signing a song to her husband or her father at her wedding.

Every day I see these videos, articles, and products going viral. The internet seems to love the idea of American Sign Language. It’s cool. It’s hip. It’s a fun way to communicate. It’s different from the spoken modality that we are all so used to.

However, what most people don’t realize is that ASL is still missing from the one place it is so desperately needed: the brains of young deaf children. An alarming number of deaf children are subjected to inadvertent language deprivation during their critical language-learning period. This means that during the first few years of life, when a child’s brain is most primed and able to learn language, deaf children are not receiving adequate input.  

The repercussions of depriving a young brain of language are severe and long lasting. Children who do not receive access to a robust language signal within the first five years of life demonstrate a variety of potentially irreversible cognitive-linguistic deficits. This includes deficits in the ability to understand language, use language, and organize thoughts into cohesive sentences. Additionally, and perhaps more poignantly, it also includes deficits in cognitive functions such as spatial concepts and awareness, time concepts and sequencing, number sense and counting, and memory.

Language is brain food. A brain with rich language input is like a body with healthy nutritive input. Therefore, depriving a child of language while his or her brain is still developing can permanently and significantly alter that child’s neurological growth.

While hearing aids and cochlear implants are fantastic technology, they are also subject to the unknowns of technology. They break. They malfunction. Children reject them. Sometimes they simply do not connect with the child’s brain for some inexplicable reason. Signed languages are the only languages that are one hundred percent accessible to a deaf child at all times.

So my question is: If ASL is so “cool,” why isn’t it cool enough for a deaf child? Perhaps we should start sharing articles detailing the importance of providing a deaf child early access to a signed language the same way we share the article about an ASL-friendly Starbucks. Perhaps we should infuse deaf children with the same awe and admiration for ASL as we spread around the internet. Perhaps if we did this, we could change a child’s life.

It's a Cochlear Implant, Not a Cochlea

The cochlea is a small, fluid-filled, ice cream swirl-shaped structure in the inner ear. Its inner canals are covered in tiny hair cells. After sound travels through the outer and middle ear, converting from acoustic to mechanical energy, it reaches the cochlea. The mechanical energy from the middle ear bones converts to hydraulic energy when it creates pressure waves on the inner ear fluid of the cochlea. The fluid puts pressure on the tiny hair cells, which activate the auditory nerve. It is at that point that the final conversion of energy occurs, from hydraulic to electrical. The electrical impulses are sent to the brain and interpreted as information.

Like other organs in the body, the cochlea performs an astonishing and uniquely human function. However, unlike other organs in the body, when surgery is performed on the cochlea there is limited concern for bodily rejection.

There is a common misconception that cochlear implants are like eyeglasses. An implant allows you to hear, much like glasses allow you to see. However, the important distinction is that cochlear implants have direct interaction with the brain. As Humphries et al. (2012) state, cochlear implants involve not only progress in technology, but the biological interface between technology and the human brain. And, while the equipment itself may function perfectly, there is no way to predict the reaction of a child’s brain to the technology.

The intentional disregard for this crucial fact is the most dangerous mentality. This type of blatant oversight is not typical with other surgeries, for obvious reasons. When a pacemaker is placed, the recipient is educated extensively on the potential complications, including failure of the device. When an organ is surgically replaced, the chance of the body rejecting the new implant is openly discussed. Recipients of surgically implanted prostheses of any kind are always informed of the risks of failure or rejection. They are never told that their artificial structures are seamless replacements for the original organ.

We owe it to implanted children to do the same when educating their parents. Because a child’s brain is still developing and learning language, device rejection or failure of any kind can result in stunted brain development and language deprivation. Parents must be informed that it is still impossible to know how a child’s brain will react to the implant. Because of this, cochlear implants are not sufficient as a standalone approach for language intervention (Kral et al., 2016). Implanted children must be taught sign language as a preventative measure to ensure proper brain development.

A cochlear implant is a man-made device that is surgically implanted. Just as a pacemaker does not replace the function of the heart, a cochlear implant can never fully replace the function of the cochlea. And just like a pacemaker, its recipients must be properly educated about the repercussions of its potential rejection.

When Sam Found Language

I will never forget the day that I met Sam*. He was tall and shy, with dark tousled hair. He came into my room tentatively and sat still and quiet in his chair.

"Hi, buddy," I greeted him. He smiled shyly.

"How are you?"

He smiled again.

I pointed to myself and signed my sign name. Jen. Then, I pointed to him and gestured for him to introduce himself.

"Eoh," he said. 

How old are you? I signed. He stared at me. I signed, You. Age? Another blank stare. I signed, 7? 8? 9? Sam squinted, confused. I grabbed a blank piece of paper and wrote the numbers down, gesturing for him to point to one. He shrugged.

Under the numbers I scribbled out the alphabet. I pointed to the first letter.

"What letter is this?" I asked, enunciating clearly. Sam shook his head. I covered everything but the first row of letters. Where is B? I signed. Sam shrugged.

I had to figure out how to get in. When Sam looked away, I noticed that his cochlear implants had New York Yankees stickers on them.

Do you like baseball? I signed. Again, a blank stare. I grabbed my iPad and Google image searched pictures of the New York Yankees. When he saw them, his eyes lit up. He grinned and jumped out of his chair. He pointed furiously to the pictures and then perfectly imitated a pitcher's throw.

Yeah! Baseball! I signed.

He copied my sign. Baseball.

After that first session, I began to infuse Sam with language: American Sign Language. We started with the finger alphabet. We practiced forming the letters with our hands, matching them to the written letters, spelling our names and items in the room. 

What are your sisters' names? I asked. Sam shrugged. After an email to his mother and some practice, Sam could tell me: C-A-S-E-Y and H-A-N-N-A-H.

We learned colors and numbers. We learned shapes, animals, and food. We worked on answering questions.

Are the Yankees going to lose tonight? I signed.

No! He signed sharply, giggling.

In the early sessions, there was a lot of gesturing. A lot of manipulatives. A lot of real-life examples. We tasted honey to learn sticky. We left a teddy bear sleeping in the corner of my room to learn hibernate. We got in and out of boxes to learn prepositions. We stepped on leaves to learn crunchy. With this newfound language, Sam's previous use of tantrums came to a halt. A playful personality started to show through.

Sam proved to be a quick learner. We used sign language to build his literacy skills. Soon, he could read and write simple sentences. He began learning harder language concepts.

Why did the Titanic sink? I signed.

Because too many compartments filled with water, he responded.

Once we had a strong foundation for language, we began to target speech production in CV and CVC words. Sam had a diagnosis of apraxia of speech. This meant that his brain wasn't properly informing his mouth how to move for speech. When he would grope, his mouth unsure of how to produce the phonemes, I would show him the sign. With that visual, he was able to produce the word. We built up to CVCVC words with carrier phrases, so that Sam was able to make functional statements and requests in spoken English.

When I look at him now, four years later, sitting among his classmates in my push-in session, I am overwhelmed by how far he has come. His dark hair is still tousled. His cochlear implants still have Yankees stickers on them. But now, when I ask him a question, instead of a blank stare or shrug, his long arm shoots into the air, bouncing with impatience to respond.

I call on him.

White light is a division of seven colors, he signs.

That's right. That's how we see a rainbow. I smile.

Sam came to me like most of my other students do: severely language deprived. He was eight years old, with bilateral cochlear implants, unable to speak, sign, read, or write. A developmentally and cognitively typical child, he was using tantrums to communicate. 

When he was given a visual language that his brain so desperately craved, he was finally able to blossom into the curious, goofy, and capable child that he is today.

 

*name changed