Kyle Gardner

Alexa who? How Voice Is Changing The Landscape For The Future Of Language

Today, people fear that language will be diminished as a result of social media. However, the rise of voice over the last few years suggests they should not be worried. If "voice" is a term you are hearing for the first time, you are not alone: voice refers to any content that is consumed through the spoken word. With the growing popularity of podcasts and the rise of the Amazon Alexa and other assistants, there is increasing attention on voice, on how our content will be consumed in the future, and on the possible implications for language and language development. Although voice is a fairly new technology, it has already begun to affect the way we live our everyday lives.


The largest area of consumed content in voice is podcasts. Over the last five years, podcasts have been consumed at an enormous rate; people can listen to content on any subject they can imagine, from linguistics to politics to sports and everything in between. According to social media expert and VaynerMedia CEO Gary Vaynerchuk, "More 16-30-year-olds are listening to podcasts instead of music during their commutes because it saves them time" (Vaynerchuk, 2018). In 2016, Nielsen found that audio streaming was up 76 percent from 2015, with 251.9 billion streams, more than 80 billion more than video (Nielsen, 2016). This increase is due in part to our nature as humans: we want to do things faster and more efficiently. Podcasts are popular because of the multitasking they allow; you can inform yourself on a subject while responding to emails or texts.

Podcasts are even being used to help L2 learners better comprehend the spoken word in their non-native language. In Kara McBride's chapter in Electronic Discourse in Language Learning and Language Teaching, she explains that in the past, if an L2 learner wanted to listen to recordings of material they were learning in class, those recordings would "mostly be the property of language laboratories, requiring students to go to the technology" (McBride, 156). Now the material can travel with the learner in the form of a podcast, to be listened to at any time, without the time constraints of a language laboratory that could negatively affect the learner. Earlier in the course, we learned about the Affective Filter: if a language learner's Affective Filter is high, circumstances such as pressure to communicate in the L2, stress, or fear of embarrassment are inhibiting their ability to learn.
However, when a language learner's Affective Filter is low, they are comfortable in their environment and, as a result, learn the language better. With the rise of podcasts, L2 learners' Affective Filters should be lower, since they can bring materials with them, consume them at their convenience, and listen as many times as they want.


Amazon released its "Alexa" assistant in late 2014, and Google followed with the "Google Home" in late 2016. Both products include a talking assistant that can respond to requests you make in the language of your choice. In contrast to the unnatural and sometimes artificial environment that social media creates, voice and audio are "by far the most natural interface for humans to interact" (Vaynerchuk, 2018). As humans, our instinct is to talk and listen; no matter how good at texting you are, it will always be faster and more efficient to communicate by speaking. Voice assistants like Alexa and Google Home let people be more efficient in their lives: instead of searching the internet on a phone, many people turn to their assistants to find what they need. As of 2016, one-fifth of all Google searches were voice searches, as Google CEO Sundar Pichai announced during his Google I/O keynote. People at home use full sentences, generally in the form of questions, to talk to their assistants. For some, this is a nice change from being buried in a smartphone. People can now perform routine household tasks such as cooking or cleaning with an assistant by their side that can call, text, search the web, play audio, and even tell a joke.


In 2006, if someone had said that a billion-dollar company would be built on the back of an application on your phone, you would have laughed in their face.

Thirteen years later, a little application called Facebook is worth hundreds of billions of dollars. A similar pattern is emerging with the current trend in voice. Individuals and companies have started to create "Alexa Skills," apps built on the Alexa platform that help with your everyday life through voice commands. Alexa Skills are still extremely new; as of 2018, only about 50,000 exist, compared to the 2.2 million apps on the Apple App Store. Although it is early in the process, some incredible Alexa Skills have been developed that are helping people become more efficient every day: there are now skills that will remind you to feed your dog, or even turn off your lights if you ask. The whole purpose behind Alexa and Alexa Skills is to let us operate without friction, the overhead we create when we carry around a smartphone or laptop. As a society, we have always tried to build things with less and less friction, and the future of voice, with Alexa and technologies that have not even been built yet, will focus on giving humans the least friction and the most efficiency. With voice, humans can easily communicate through their most natural form: speaking. In the past, there have been attempts at becoming more frictionless, with products such as Google Glass and Snapchat Spectacles, which both failed miserably. The Amazon Alexa and the Google Home are the first products to succeed on a platform that will almost certainly be used in the future.
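At its core, an Alexa Skill is a set of handlers that respond to recognized "intents" (what the user asked for) and their "slots" (the details, like a time or a room). The sketch below is a toy illustration of that dispatch pattern, not Amazon's actual ask-sdk API; every name in it (Skill, FeedDogReminderIntent, LightsOffIntent) is invented for the example.

```python
# Toy sketch of how an Alexa-style skill routes a spoken request to an
# intent handler. Illustrative only; not the real Alexa Skills Kit API.

class Skill:
    def __init__(self):
        self.handlers = {}

    def intent(self, name):
        """Decorator that registers a handler for a named intent."""
        def register(fn):
            self.handlers[name] = fn
            return fn
        return register

    def handle(self, intent_name, slots=None):
        """Dispatch a recognized intent (plus its slots) to its handler."""
        handler = self.handlers.get(intent_name)
        if handler is None:
            return "Sorry, I don't know how to do that yet."
        return handler(slots or {})

skill = Skill()

@skill.intent("FeedDogReminderIntent")
def feed_dog(slots):
    time = slots.get("time", "6 PM")
    return f"Okay, I'll remind you to feed your dog at {time}."

@skill.intent("LightsOffIntent")
def lights_off(slots):
    room = slots.get("room", "the living room")
    return f"Turning off the lights in {room}."

# "Alexa, remind me to feed my dog at 7 PM"
print(skill.handle("FeedDogReminderIntent", {"time": "7 PM"}))
# "Alexa, turn off the lights"
print(skill.handle("LightsOffIntent"))
```

The real platform does the hard part, turning raw speech into an intent name and slot values, before anything like `handle` is ever called; the skill developer mostly writes the handlers.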


Since voice is the future of how we communicate with others and consume content, humans will begin to speak more. Over the ten years since the beginning of smartphones, humans have communicated verbally less and less. According to the Pew Research Center, 72% of teenagers text regularly, and one in three sends more than 100 texts per day (Stewart, 2013). With voice, we are headed toward a world where texting will decline rapidly. As stated before, voice and audio are the most natural interface for humans to interact, and since we are always looking for ways to be more efficient, texting will decline because our natural voice can communicate a thought faster than our fingers can. An Alexa Skill with the technology to text through voice has already been developed: "Mastermind." Mastermind works by users asking Alexa to "Text John Smith," and it also includes many other capabilities, including sending emails, initiating phone calls, ringing your phone, sharing your location, reading app notifications, and sending articles from your PC to Mastermind to have Alexa read them. Since Alexa was created, there has been a large increase in focus on voice, from companies to content creators. Audio is already being consumed at a rate much higher than any other source of content, and businesses have noticed these trends. Jeopardy, a popular TV trivia show, has created an Alexa Skill where users respond to the questions they are asked the same way they would on the show, in a "What is..." manner. A paradigm shift is occurring right now, from playing games on our phones with our hands to a more interactive way through voice technology.


Alexa Skills have even tapped into the area of language development. In August 2018, Amazon rolled out the new "Cleo" skill, developed to help the Alexa device better understand the intricate dialects of certain countries. One such country is India, which has many local languages, including Hindi, Tamil, Marathi, Kannada, Bengali, Telugu, and Gujarati, among others. Cleo works by letting users respond to Alexa in their chosen language. The skill consists of rounds in which Alexa asks you to say five or more things in your language, varying from the extremely specific to the random. This process helps Alexa understand particular phrases in local languages, because the user speaks the way he or she would when talking with a group of friends. The skill was developed by a team of linguists and data scientists at Amazon who hope to offer voice communication to people all over the world, no matter what language they speak. Janet Slifka, Senior Manager of Research Science for Alexa Artificial Intelligence at Amazon, said that what they are "looking to do with Cleo is to find an avenue where our customers can help us develop Alexa further" (Singh, 2018). The Cleo skill cannot yet fully speak these local languages; however, Amazon is building a large community of native speakers who can work as developers and improve Alexa's understanding of these languages over time. In the same way humans acquire a second language, Alexa will have to fully understand concepts like pronunciation, patterns, and word choice to know how to use the languages in development.
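The round structure Cleo uses can be pictured as a simple prompt-and-collect loop: the device reads out a handful of prompts, records the user's utterance for each, and stores the prompt/utterance pairs for later model training. The sketch below is a guess at that structure for illustration only; the actual prompts, round size, and storage are Amazon's, not shown here.

```python
# Illustrative sketch of a Cleo-style data-collection round. All prompts
# and details are invented; on a real device, listen() would record audio.

def run_round(prompts, listen):
    """Ask the user to say each prompt; return (prompt, utterance) pairs."""
    collected = []
    for prompt in prompts:
        utterance = listen(prompt)  # stand-in for recording the user
        collected.append((prompt, utterance))
    return collected

prompts = [
    "Say how you would greet a friend.",
    "Say the name of your favorite dish.",
    "Ask what time it is.",
    "Describe today's weather.",
    "Say goodbye.",
]

# Stand-in microphone: return a canned "recording" for each prompt.
canned = {p: f"<recording for: {p}>" for p in prompts}
samples = run_round(prompts, canned.get)
print(len(samples))  # 5 prompt/utterance pairs collected this round
```

The value of the pairing is that each recording arrives already labeled with what the speaker was asked to say, which is exactly the kind of supervised data a speech model needs.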


The future of how we learn language could be through voice technology as well. Linguists have developed an Alexa Skill called "Daily Dose," where users get daily audio lessons: 10-15 minute sessions in which both a native speaker and an English speaker explain the lesson. The incredible part about this resource is that you are given automatic feedback when your responses are incorrect; this reinforcement allows for better development of the language being learned. The skill also includes categories where you can learn grammar, culture, and new verbs, among other things. In a recent study, Gilbert Dizon at Himeji Dokkyo University in Japan concluded that intelligent personal assistants such as the Amazon Alexa can support second language learning. The study took four Japanese students who were learning English and had Alexa ask them questions in English; the students could ask Alexa what certain words or phrases meant if they did not understand. About fifty percent of the time, Alexa was able to fully comprehend what the students were saying and provide feedback to the questions it was asked, and all four students said in their comments that Alexa helped them significantly. According to Dizon, "The results of this study suggest that Alexa has the potential to support L2 development by providing implicit feedback" (Dizon, 2017). The students mentioned that the most help came with pronunciation: "all of the students mentioned that they could receive pronunciation feedback through Alexa...Alexa promoted learner effectiveness by providing valuable feedback in a key area of oral development" (Dizon, 2017). Dizon's study is the most up-to-date research available on language learning through the use of a voice assistant such as Alexa.
It is fascinating that Alexa was able to help these students in such a significant way. We have barely scratched the surface of how voice will be used to impact language development and language learning in the near future.


A world in which we communicate through voice will bring tremendous increases in our efficiency as humans, as well as a paradigm shift from the way we have been consuming content to one where we listen to content and interact with it in ways people would never have imagined.










References

Nielsen. (2016). 2016 U.S. Music Year-End Report. Retrieved December 6, 2018, from https://www.nielsen.com/us/en/insights/reports/2017/2016-music-us-year-end-report.html

Abraham, L. B., & Williams, L. (2011). Electronic discourse in language learning and language teaching. Amsterdam: John Benjamins Pub.

Bentahar, A. (2017, November 07). 2017 Will Be The Year Of Voice Search. Retrieved from https://www.forbes.com/sites/forbesagencycouncil/2017/01/03/2017-will-be-the-year-of-voice-search/#1150e83d12c5

Dizon, G. (2017). Using Intelligent Personal Assistants for Second Language Learning: A Case Study of Alexa. TESOL Journal, 8(4), 811-830. doi:10.1002/tesj.353

Google says 20 percent of mobile queries are voice searches. (2016, May 24). Retrieved December 6, 2018, from https://searchengineland.com/google-reveals-20-percent-queries-voice-queries-249917

Hartmans, A. (2018, March 20). Amazon wants your help teaching Alexa new languages - and it could help in its fight against Google. Retrieved December 6, 2018, from https://www.businessinsider.com/cleo-new-alexa-skill-for-teaching-foreign-languages-2018-3

Stewart, E. (2013). Does cell phone use really affect our communication skills? Retrieved December 6, 2018, from https://lhslance.org/2013/features/cell-phone-use-really-affect-communication-skills/

Vaynerchuk, G. (2018). The Rise of Audio & Voice. Retrieved December 6, 2018, from https://www.garyvaynerchuk.com/rise-audio-voice/

Singh, J. (2018, August 14). Amazon Now Lets Users in India Teach Alexa Local Languages. Retrieved December 6, 2018, from https://gadgets.ndtv.com/apps/news/amazon-alexa-cleo-skill-teach-hindi-other-local-indian-languages-1900197
