==== References ====
{{anchor:aaron:AAR1}}Aaron, A., Eide, E., & Pitrelli, J. F. (2005). Conversational computers. Scientific American, 292(6), 64–69. [[https://doi.org/10.1038/scientificamerican0605-64]]

{{anchor:adlin:ADL1}}Adlin, X., & Pruitt, J. (2010). The essential persona lifecycle: Your guide to building and using personas. Waltham, MA: Morgan Kaufmann. [[https://learning.oreilly.com/library/view/the-essential-persona/9780123814180/xhtml/title.html]]

{{anchor:ahlén:AHL1}}Ahlén, S., Kaiser, L., & Olvera, E. (2004). Are you listening to your Spanish speakers? Speech Technology, 9(4), 10-15. [[https://doi.org/10.1007/s10772-005-4759-5]]

{{anchor:ainsworth1992:AIN1}}Ainsworth, W. A., & Pratt, S. R. (1992). Feedback strategies for error correction in speech recognition systems. International Journal of Man-Machine Studies, 36, 833–842. [[https://doi.org/10.1016/0020-7373(92)90075-V]]

{{anchor:ainsworth1993:AIN2}}Ainsworth, W. A., & Pratt, S. R. (1993). Comparing error correction strategies in speech recognition systems. In C. Baber & J. M. Noyes (Eds.), Interactive speech technology: Human factors issues in the application of speech input/output to computers (pp. 131–135). London, UK: Taylor & Francis. [[https://www.amazon.com/Interactive-Speech-Technology-Application-Computers/dp/074840127X]]

{{anchor:alwan:ALW1}}Alwan, J., & Suhm, B. (2010). Beyond best practices: A data-driven approach to maximizing self-service. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 99–105). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:attwater:ATT1}}Attwater, D. (2008). Speech and touch-tone in harmony [PowerPoint Slides]. Paper presented at SpeechTek 2008. New York, NY: SpeechTek.

{{anchor:baddeley:BAD1}}Baddeley, A. D., & Hitch, G. (1974). Is working memory still working? American Psychologist, 56, 851-864. [[https://doi.org/10.1037/0003-066X.56.11.851]]
  
{{anchor:bailey:BAI1}}Bailey, R. W. (1989). Human performance engineering: Using human factors/ergonomics to achieve computer system usability. Englewood Cliffs, NJ: Prentice-Hall. [[https://www.amazon.com/Human-Performance-Engineering-Ergonomics-Usability/dp/0134451805]]

{{anchor:bailly:BAI2}}Bailly, G. (2003). Close shadowing natural versus synthetic speech. International Journal of Speech Technology, 6, 11–19. [[https://doi.org/10.1023/A:1021091720511]]

{{anchor:balentine1999:BAL1}}Balentine, B. (1999). Re-engineering the speech menu. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 205-235). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679/]]

{{anchor:balentine2006:BAL2}}Balentine, B. (2006). The power of the pause. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 89-91). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]

{{anchor:balentine2007:BAL3}}Balentine, B. (2007). It’s better to be a good machine than a bad person. Annapolis, MD: ICMI Press. [[https://www.amazon.com/Better-Good-Machine-Than-Person/dp/1932558098]]

{{anchor:balentine2010:BAL4}}Balentine, B. (2010). Next-generation IVR avoids first-generation user interface mistakes. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 71–74). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:balentine1997:BAL5}}Balentine, B., Ayer, C. M., Miller, C. L., & Scott, B. L. (1997). Debouncing the speech button: A sliding capture window device for synchronizing turn-taking. International Journal of Speech Technology, 2, 7–19. [[https://doi.org/10.1007/BF02539819]]

{{anchor:balentine2001:BAL6}}Balentine, B., & Morgan, D. P. (2001). How to build a speech recognition application: A style guide for telephony dialogues, 2nd edition. San Ramon, CA: EIG Press. [[https://www.amazon.com/How-Build-Speech-Recognition-Application/dp/0967127823]]
  
{{anchor:barkin:BAR1}}Barkin, E. (2009). But is it natural? Speech Technology, 14(2), 21–24. [[http://search.proquest.com/docview/212198708]]

{{anchor:beattie:BEA1}}Beattie, G. W., & Barnard, P. J. (1979). The temporal structure of natural telephone conversations (directory enquiry calls). Linguistics, 17, 213–229. [[https://doi.org/10.1515/ling.1979.17.3-4.213]]

{{anchor:berndt:BER1}}Berndt, R. S., Mitchum, C., Burton, M., & Haendiges, A. (2004). Comprehension of reversible sentences in aphasia: The effects of verb meaning. Cognitive Neuropsychology, 21, 229–245. [[https://doi.org/10.1080/02643290342000456]]

{{anchor:bitner:BIT1}}Bitner, M. J., Ostrom, A. L., & Meuter, M. L. (2002). Implementing successful self-service technologies. Academy of Management Executive, 16(4), 96–108. [[https://doi.org/10.5465/ame.2002.8951333]]

{{anchor:bloom2005:BLO1}}Bloom, J., Gilbert, J. E., Houwing, T., Hura, S., Issar, S., Kaiser, L., et al. (2005). Ten criteria for measuring effective voice user interfaces. Speech Technology, 10(9), 31–35. [[https://www.speechtechmag.com/Articles/Editorial/Feature/Ten-Criteria-for-Measuring-Effective-Voice-User-Interfaces-29443.aspx]]

{{anchor:bloom1999:BLO2}}Bloom, R., Pick, L., Borod, J., Rorie, K., Andelman, F., Obler, L., Sliwinski, M., Campbell, A., Tweedy, J., & Welkowitz, J. (1999). Psychometric aspects of verbal pragmatic ratings. Brain and Language, 68, 553–565. [[https://doi.org/10.1006/brln.1999.2128]]

{{anchor:boretz:BOR1}}Boretz, A. (2009). VUI standards: The great debate. Speech Technology, 14(8), 14-19. [[http://search.proquest.com/docview/212191853]]

{{anchor:boyce2008:BOY1}}Boyce, S. J. (2008). User interface design for natural language systems: From research to reality. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 43–80). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]

{{anchor:boyce2010:BOY2}}Boyce, S., & Viets, M. (2010). When is it my turn to talk?: Building smart, lean menus. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 108–112). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:broadbent:BRO1}}Broadbent, D. E. (1977). Language and ergonomics. Applied Ergonomics, 8, 15–18. [[https://doi.org/10.1016/0003-6870(77)90111-9]]

{{anchor:bryne:BYR1}}Byrne, B. (2003). “Conversational” isn’t always what you think it is. Speech Technology, 8(4), 16–19. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=30039]]

{{anchor:callejas:CAL1}}Callejas, Z., & López-Cózar, R. (2008). Relations between de-facto criteria in the evaluation of a spoken dialogue system. Speech Communication, 50, 646-665. [[https://doi.org/10.1016/j.specom.2008.04.004]]
  
{{anchor:calteaux:CAL2}}Calteaux, K., Grover, A., & van Huyssteen, G. (2012). Business drivers and design choices for multilingual IVRs: A government service delivery case study. Retrieved from [[http://www.mica.edu.vn/sltu2012/files/proceedings/7.pdf]]

{{anchor:chang:CHA1}}Chang, C. (2006). When service fails: The role of the salesperson and the customer. Psychology & Marketing, 23(3), 203–224. [[https://doi.org/10.1002/mar.20096]]

{{anchor:chapanis:CHA2}}Chapanis, A. (1988). Some generalizations about generalization. Human Factors, 30, 253-267. [[https://doi.org/10.1177/001872088803000301]]

{{anchor:clark1996:CLA1}}Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press. [[https://www.amazon.com/Using-Language-Herbert-H-Clark-ebook/dp/B016MYWOUG]]

{{anchor:clark2004:CLA2}}Clark, H. H. (2004). Pragmatics of language performance. In L. R. Horn & G. Ward (Eds.), Handbook of pragmatics (pp. 365–382). Oxford, UK: Blackwell. [[https://doi.org/10.1002/9780470756959.ch16]]

{{anchor:cohen:COH1}}Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Boston, MA: Addison-Wesley. [[https://learning.oreilly.com/library/view/voice-user-interface/0321185765]]

{{anchor:commarford:COM1}}Commarford, P. M., & Lewis, J. R. (2005). Optimizing the pause length before presentation of global navigation commands. In Proceedings of HCI International 2005: Volume 2—The management of information: E-business, the Web, and mobile computing (pp. 1–7). St. Louis, MO: Mira Digital Publication. [[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.508.6365]]

{{anchor:commarford2008:COM2}}Commarford, P. M., Lewis, J. R., Al-Awar Smither, J., & Gentzler, M. D. (2008). A comparison of broad versus deep auditory menu structures. Human Factors, 50(1), 77-89. [[https://doi.org/10.1518/001872008X250665]]

{{anchor:couper:COU1}}Couper, M. P., Singer, E., & Tourangeau, R. (2004). Does voice matter? An interactive voice response (IVR) experiment. Journal of Official Statistics, 20(3), 551–570. [[http://search.proquest.com/docview/1266795179]]
  
-CunninghamLF., Young, C. E., & GerladinaJH. (2008). Consumer views of self-service technologiesThe Service Industries ​Journal, ​28(6)719-732.+{{anchor:​crystal:​CRY1}} CrystalTH., & HouseAS. (1990). Articulation rate and the duration ​of syllables and stress groups in connected speech. Journal ​of the Acoustical Society of America88101–112. [[https://​doi.org/​10.1121/​1.399955]]
  
-DahlD. (2006). Point/​counter point on personasSpeech Technology11(1), 18–21.+{{anchor:​cunningham:​CUN1}}CunninghamL. F., Young, C. E., & Gerladina, J. H. (2008). Consumer views of self-service technologiesThe Service Industries Journal28(6), 719-732[[https://​doi.org/​10.1080/​02642060801988522]] ​
  
-DamperR. I., & Gladstone, K. (2007). Experiences of usability evaluation of the IMAGINE speech-based interaction systemInternational Journal of Speech Technology, ​94150.+{{anchor:​dahl:​DAH1}}DahlD. (2006). Point/​counter point on personas. Speech Technology, ​11(1)1821. [[https://​www.speechtechmag.com/​Articles/​ReadArticle.aspx?​ArticleID=29584]]
  
-Damper, R. I., & SoonklangT. (2007). ​Subjective ​evaluation of techniques for proper name pronunciationIEEE Transactions on Audio, ​Speech, ​and Language Processing15(8), 2213-2221.+{{anchor:​damperg2007:​DAM1}}Damper, R. I., & GladstoneK. (2007). ​Experiences of usability ​evaluation of the IMAGINE speech-based interaction systemInternational Journal of Speech ​Technology941–50[[https://​doi.org/​10.1007/​s10772-006-9003-4]]
  
-DavidsonN., McInnes, F., & JackM. A. (2004). Usability ​of dialogue design strategies ​for automated surname capture. Speech ​Communication4355–70.+{{anchor:​dampers2007:​DAM2}}DamperRI., & SoonklangT. (2007). Subjective evaluation ​of techniques ​for proper name pronunciationIEEE Transactions on Audio, ​Speech, ​and Language Processing15(8), 2213-2221. [[https://​doi.org/​10.1109/​TASL.2007.904192]]
  
-DoughertyM(2010)What’s universally availablebut rarely used? In WMeisel (Ed.), Speech in the User Interface: Lessons from Experience ​(pp. 117-120). VictoriaCanadaTMA Associates.+{{anchor:​davidson:​DAV1}}DavidsonN., McInnes, F., & Jack, MA. (2004). Usability of dialogue design strategies for automated surname capture. Speech Communication43, 55–70. [[https://​doi.org/​10.1016/​j.specom.2004.02.002]]
  
-DuludeL. (2002). Automated telephone answering systems and agingBehaviour and Information Technology21(3), 171–184.+{{anchor:​dougherty:​DOU1}}DoughertyM. (2010). What’s universally available, but rarely used? In WMeisel (Ed.)Speech in the User Interface: Lessons from Experience ​(pp. 117-120). VictoriaCanada: TMA Associates. [[https://​www.amazon.com/​Speech-User-Interface-Lessons-Experience/​dp/​1426926227]]
  
-Durrande-MoreauA. (1999). Waiting for service: Ten years of empirical researchInternational Journal of Service Industry Management10(2), 171–189.+{{anchor:​dulude:​DUL1}}DuludeL. (2002). Automated telephone answering systems and agingBehaviour and Information Technology21(3), 171–184. [[https://​doi.org/​10.1080/​0144929021000013482]]
  
-EdworthyJ. & Hellier, E. (2006). Complex nonverbal auditory signals and speech warningsIn (WogalterM. S., Ed.) Handbook of Warnings ​(pp. 199-220). Mahwah, NJLawrence Erlbaum.+{{anchor:​durrande-moreau:​DUR1}}Durrande-MoreauA. (1999). Waiting for service: Ten years of empirical researchInternational Journal of Service Industry Management10(2), 171–189[[https://​doi.org/​10.1108/​09564239910264334]]
  
-Enterprise Integration Group. (2000). Speech Recognition 1999 R&D Program: User interface design recommendations final reportSan RamonCAAuthor.+{{anchor:​edworthy:​EDW1}}Edworthy,​ J. & Hellier, E. (2006). Complex nonverbal auditory signals and speech warningsIn (WogalterM. S., Ed.) Handbook of Warnings (pp. 199-220). Mahwah, NJLawrence Erlbaum. [[https://​www.amazon.com/​Handbook-Warnings-Human-Factors-Ergonomics-ebook/​dp/​B07CSSLTTJ]]
  
{{anchor:enterprise:ENT1}}Enterprise Integration Group. (2000). Speech Recognition 1999 R&D Program: User interface design recommendations final report. San Ramon, CA: Author.

{{anchor:ervin-tripp:ERV1}}Ervin-Tripp, S. (1993). Conversational discourse. In J. B. Gleason & N. B. Ratner (Eds.), Psycholinguistics (pp. 238–270). Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642]]

{{anchor:evans:EVA1}}Evans, D. G., Draffan, E. A., James, A., & Blenkhorn, P. (2006). Do text-to-speech synthesizers pronounce correctly? A preliminary study. In K. Miesenberger et al. (Eds.), Proceedings of ICCHP (pp. 855–862). Berlin, Germany: Springer-Verlag. [[https://doi.org/10.1007/11788713_124]]

{{anchor:ferreira:FER1}}Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 164–203. [[https://doi.org/10.1016/S0010-0285(03)00005-7]]

{{anchor:fosler-lussier:FOS1}}Fosler-Lussier, E., Amdal, I., & Juo, H. J. (2005). A framework for predicting speech recognition errors. Speech Communication, 46, 153–170. [[https://doi.org/10.1016/j.specom.2005.03.003]]

{{anchor:frankish:FRA1}}Frankish, C., & Noyes, J. (1990). Sources of human error in data entry tasks using speech input. Human Factors, 32(6), 697–716. [[https://doi.org/10.1177/001872089003200607]]

{{anchor:fried:FRI1}}Fried, J., & Edmondson, R. (2006). How customer perceived latency measures success in voice self-service. Business Communications Review, 36(3), 26–32. [[http://www.webtorials.com/main/resource/papers/BCR/paper101/fried-03-06.pdf]]

{{anchor:fröhlich:FRO1}}Fröhlich, P. (2005). Dealing with system response times in interactive speech applications. In Proceedings of CHI 2005 (pp. 1379–1382). Portland, OR: ACM. [[https://doi.org/10.1145/1056808.1056921]]

{{anchor:fromkin:FRO2}}Fromkin, V., Rodman, R., & Hyams, N. (1998). An introduction to language (6th ed.). Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Introduction-Language-6th-Sixth/dp/B0035E4B26]]

{{anchor:gardner-bonneau1992:GAR1}}Gardner-Bonneau, D. J. (1992). Human factors in interactive voice response applications: “Common sense” is an uncommon commodity. Journal of the American Voice I/O Society, 12, 1-12.

{{anchor:gardner-bonneau1999:GAR2}}Gardner-Bonneau, D. (1999). Guidelines for speech-enabled IVR application design. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 147-162). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679]]

{{anchor:garrett:GAR3}}Garrett, M. F. (1990). Sentence processing. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 133–176). Cambridge, MA: MIT Press. [[https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339]]
  
{{anchor:giebutowksi:GIE1}}Giebutowski, J. (2017, December 18). Multilingual IVR: 5 Big Ways to Get It Exactly WRONG. Marketing Messages. Retrieved from [[https://www.marketingmessages.com/multilingual-ivr-5-big-ways-to-get-it-exactly-wrong]]

{{anchor:gleason:GLE1}}Gleason, J. B., & Ratner, N. B. (1993). Psycholinguistics. Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642]]

{{anchor:goodwin:GOO1}}Goodwin, A. (2018, February 21). 5 Multilingual IVR Tips to Take Your Business Global [Web log post]. Retrieved from [[https://www.west.com/blog/interactive-services/multilingual-ivr-take-business-global]]
  
{{anchor:gould:GOU1}}Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., & Schoonard, J. (1987). The 1984 Olympics message system: A test of behavioral principles of system design. Communications of the ACM, 30, 758-769. [[https://doi.org/10.1145/30401.30402]]

{{anchor:graham2005:GRA1}}Graham, G. M. (2005). Voice branding in America. Alpharetta, GA: Vivid Voices. [[https://www.amazon.com/Voice-Branding-America-Marcus-Graham/dp/0975989502]]

{{anchor:graham2010:GRA2}}Graham, G. M. (2010). Speech recognition, the brand and the voice: How to choose a voice for your application. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 93–98). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:grice:GRI1}}Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics, volume 3: Speech acts (pp. 41–58). New York, NY: Academic Press. [[https://www.amazon.com/Syntax-Semantics-3-Speech-Acts/dp/0127854231]]

{{anchor:guinn:GUI1}}Guinn, I. (2010). You can’t think of everything: The importance of tuning speech applications. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 89–92). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:hafner:HAF1}}Hafner, K. (2004, Sept. 9). A voice with personality, just trying to help. The New York Times. Retrieved from [[www.nytimes.com/2004/09/09/technology/circuits/09emil.html]]

{{anchor:halstead-nussloch:HAL1}}Halstead-Nussloch, R. (1989). The design of phone-based interfaces for consumers. In Proceedings of CHI 1989 (pp. 347–352). Austin, TX: ACM. [[https://doi.org/10.1016/0003-6870(91)90015-A]]

{{anchor:harris:HAR1}}Harris, R. A. (2005). Voice interaction design: Crafting the new conversational speech systems. San Francisco, CA: Morgan Kaufmann. [[https://www.amazon.com/Voice-Interaction-Design-Conversational-Technologies-ebook/dp/B001CPLXXK]]

{{anchor:heins:HEI1}}Heins, R., Franzke, M., Durian, M., & Bayya, A. (1997). Turn-taking as a design principle for barge-in in spoken language systems. International Journal of Speech Technology, 2, 155-164. [[https://doi.org/10.1007/BF02208827]]

{{anchor:henton:HEN1}}Henton, C. (2003). The name game: Pronunciation puzzles for TTS. Speech Technology, 8(5), 32-35. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29501]]

{{anchor:hone:HON1}}Hone, K. S., & Graham, R. (2000). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(3–4), 287–303. [[https://doi.org/10.1017/S1351324900002497]]
  
{{anchor:houwing:HOU1}}Houwing, T., & Greiner, P. (2005). Design issues in multilingual applications. (SPEECH-WORLD[TM]) (interactive voice response systems). Customer Interaction Solutions, 23(12), 88–93. Retrieved from [[http://search.proquest.com/docview/208150344]]

{{anchor:huang:HUA1}}Huang, X., Acero, A., & Hon, H. (2001). Spoken language processing: A guide to theory, algorithm and system development. Upper Saddle River, NJ: Prentice Hall. [[https://www.amazon.com/Spoken-Language-Processing-Algorithm-Development/dp/0130226165]]

{{anchor:huguenard:HUG1}}Huguenard, B. R., Lurch, F. J., Junker, B. W., Patz, R. J., & Kass, R. E. (1997). Working-memory failure in phone-based interaction. ACM Transactions on Computer-Human Interaction, 4(2), 67–102. [[https://doi.org/10.1145/254945.254947]]

{{anchor:hunter:HUN1}}Hunter, P. (2009). More isn't better, but (help me with) something else is. From the design-outloud blog. [[http://blog.design-outloud.com/2009]]

{{anchor:hura2008:HUR1}}Hura, S. L. (2008). What counts as VUI? Speech Technology, 13(9), 7. [[http://search.proquest.com/docview/212185822/]]

{{anchor:hura2010:HUR2}}Hura, S. L. (2010). My big fat main menu: The case for strategically breaking the rules. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 113-116). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:jain:JAI1}}Jain, A. K., & Pankanti, S. (2008). Beyond fingerprinting. Scientific American, 299(3), 78-81. [[https://doi.org/10.1038/scientificamerican0908-78]]

{{anchor:jelinek:JEL1}}Jelinek, F. (1997). Statistical methods for speech recognition. Cambridge, MA: MIT Press. [[https://www.amazon.com/Frederick-Jelinek-Statistical-Methods-Recognition/dp/B008VS12VO]]

{{anchor:joe:JOE1}}Joe, R. (2007). The elements of style. Speech Technology, 12(8), 20–24. [[http://search.proquest.com/docview/212188958]]

{{anchor:johnstone:JOH1}}Johnstone, A., Berry, U., Nguyen, T., & Asper, A. (1994). There was a long pause: Influencing turn-taking behaviour in human-human and human-computer spoken dialogues. International Journal of Human-Computer Studies, 41, 383–411. [[https://doi.org/10.1006/ijhc.1995.1018]]
  
{{anchor:kaiser:KAI1}}Kaiser, L., Krogh, P., Leathem, C., McTernan, F., Nelson, C., Parks, M. C., & Turney, S. (2008). Thinking outside the box: Designing for the overall user experience. From the 2008 Workshop on the Maturation of VUI.

{{anchor:karray:KAR1}}Karray, L., & Martin, A. (2003). Towards improving speech detection robustness for speech recognition in adverse conditions. Speech Communication, 40, 261–276. [[https://doi.org/10.1016/S0167-6393(02)00066-3]]

{{anchor:kaushanksy:KAU1}}Kaushansky, K. (2006). Voice authentication – not just another speech application. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 139-142). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]

{{anchor:klatt:KLA1}}Klatt, D. (1987). Review of text-to-speech conversion for English. Journal of the Acoustical Society of America, 82, 737–793. Audio samples available at [[www.cs.indiana.edu/rhythmsp/ASA/Contents.html]]. [[https://doi.org/10.1121/1.395275]]

{{anchor:kleijnen:KLE1}}Kleijnen, M., de Ruyter, K., & Wetzels, M. (2007). An assessment of value creation in mobile service delivery and the moderating role of time consciousness. Journal of Retailing, 83(1), 33–46. [[https://doi.org/10.1016/j.jretai.2006.10.004]]

{{anchor:klie2007:KLI1}}Klie, L. (2007). It’s a persona, not a personality. Speech Technology, 12(5), 22–26. [[http://search.proquest.com/docview/212204672]]

{{anchor:klie2010:KLI2}}Klie, L. (2010). When in Rome. Speech Technology, 15(3), 20-24. [[http://search.proquest.com/docview/325176389/]]

{{anchor:knott:KNO1}}Knott, B. A., Bushey, R. R., & Martin, J. M. (2004). Natural language prompts for an automated call router: Examples increase the clarity of user responses. In Proceedings of the Human Factors and Ergonomics Society 48th annual meeting (pp. 736–739). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120404800407]]

{{anchor:kortum2006:KOR1}}Kortum, P., & Peres, S. C. (2006). An exploration of the use of complete songs as auditory progress bars. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 2071–2075). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120605001776]]

{{anchor:kortum2007:KOR2}}Kortum, P., & Peres, S. C. (2007). A survey of secondary activities of telephone callers who are put on hold. In Proceedings of the Human Factors and Ergonomics Society 51st annual meeting (pp. 1153–1157). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120705101821]]

{{anchor:kortum2005:KOR3}}Kortum, P., Peres, S. C., Knott, B. A., & Bushey, R. (2005). The effect of auditory progress bars on consumer’s estimation of telephone wait time. In Proceedings of the Human Factors and Ergonomics Society 49th annual meeting (pp. 628–632). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120504900406]]
  
{{anchor:kotan:KOT1}}Kotan, C., & Lewis, J. R. (2006). Investigation of confirmation strategies for speech recognition applications. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 728–732). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000524]]

{{anchor:kotelly2003:KOT2}}Kotelly, B. (2003). The art and business of speech recognition: Creating the noble voice. Boston, MA: Pearson Education. [[https://www.amazon.com/Art-Business-Speech-Recognition-Creating/dp/0321154924]]

{{anchor:kotelly2006:KOT3}}Kotelly, B. (2006). Six tips for better branding. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 61-64). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]

{{anchor:krahmer:KRA1}}Krahmer, E., Swerts, M., Theune, M., & Weegels, M. (2001). Error detection in spoken human-machine interaction. International Journal of Speech Technology, 4, 19–30. [[https://doi.org/10.1023/A:1009648614566]]

{{anchor:lai:LAI1}}Lai, J., Karat, C.-M., & Yankelovich, N. (2008). Conversational speech interfaces and technology. In A. Sears & J. A. Jacko (Eds.), The human-computer interaction handbook: Fundamentals, evolving technologies, and emerging applications (pp. 381-391). New York, NY: Lawrence Erlbaum. [[https://www.amazon.com/Human-Computer-Interaction-Handbook-Fundamentals-Technologies-ebook/dp/B0083V45J0]]

{{anchor:larson:LAR1}}Larson, J. A. (2005). Ten guidelines for designing a successful voice user interface. Speech Technology, 10(1), 51-53. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29608]]

{{anchor:leppik2005:LEP1}}Leppik, P. (2005). Does forcing callers to use self-service work? Quality Times, 22, 1-3. Downloaded 2/18/2009 from [[http://www.vocalabs.com/resources/newsletter/newsletter22.html]]

{{anchor:leppik2006:LEP2}}Leppik, P. (2006). Developing metrics part 1: Bad metrics. The Customer Service Survey. Retrieved from [[www.vocalabs.com/resources/blog/C834959743/E20061205170807/index.html]]

{{anchor:leppik2012:LEP3}}Leppik, P. (2012). The customer frustration index. Golden Valley, MN: Vocal Laboratories. Downloaded 7/23/2012 from [[http://www.vocalabs.com/download-ncss-cross-industry-report-customer-frustration-index-q2-2012]]

{{anchor:leppikl2005:LEP4}}Leppik, P., & Leppik, D. (2005). Gourmet customer service: A scientific approach to improving the caller experience. Eden Prairie, MN: VocaLabs. [[https://www.amazon.com/Gourmet-Customer-Service-Scientific-Experience/dp/0976405504]]
  
{{anchor:lewis1982:LEW1}}Lewis, J. R. (1982). Testing small system customer set-up. In Proceedings of the Human Factors Society 26th Annual Meeting (pp. 718-720). Santa Monica, CA: Human Factors Society. [[https://doi.org/10.1177/154193128202600810]]

{{anchor:lewis2004:LEW2}}Lewis, J. R. (2004). Effect of speaker and sampling rate on MOS-X ratings of concatenative TTS voices. In Proceedings of the Human Factors and Ergonomics Society (pp. 759-763). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120404800504]]

{{anchor:lewis2005:LEW3}}Lewis, J. R. (2005). Frequency distributions for names and unconstrained words associated with the letters of the English alphabet. In Proceedings of HCI International 2005: Posters (pp. 1–5). St. Louis, MO: Mira Digital Publication. Available at [[http://drjim.0catch.com/hcii05-368-wordfrequency.pdf]]

{{anchor:lewis2006:LEW4}}Lewis, J. R. (2006). Effectiveness of various automated readability measures for the competitive evaluation of user documentation. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 624–628). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000501]]

{{anchor:lewis2007:LEW5}}Lewis, J. R. (2007). Advantages and disadvantages of press or say <x> speech user interfaces (Tech. Rep. BCR-UX-2007-0002. Retrieved from [[http://drjim.0catch.com/2007_AdvantagesAndDisadvantagesOfPressOrSaySpeechUserInter.pdf]]). Boca Raton, FL: IBM Corp.

{{anchor:lewis2008:LEW6}}Lewis, J. R. (2008). Usability evaluation of a speech recognition IVR. In T. Tullis & B. Albert (Eds.), Measuring the user experience, Chapter 10: Case studies (pp. 244–252). Amsterdam, Netherlands: Morgan-Kaufman. [[https://www.amazon.com/Measuring-User-Experience-Interactive-Technologies/dp/0123735580]]

{{anchor:lewis2011:LEW7}}Lewis, J. R. (2011). Practical speech user interface design. Boca Raton, FL: CRC Press, Taylor & Francis Group. [[https://www.amazon.com/Practical-Speech-Interface-Factors-Ergonomics-ebook/dp/B008KZ6TAM]]

{{anchor:lewis2012:LEW8}}Lewis, J. R. (2012). Usability testing. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics, 4th ed. (pp. 1267-1312). New York, NY: John Wiley. [[https://www.amazon.com/Handbook-Factors-Ergonomics-Gavriel-Salvendy/dp/0470528389]]

{{anchor:lewis2003:LEW9}}Lewis, J. R., & Commarford, P. M. (2003). Developing a voice-spelling alphabet for PDAs. IBM Systems Journal, 42(4), 624–638. Available at [[http://drjim.0catch.com/2003_DevelopingAVoiceSpellingAlphabetForPDAs.pdf]]

{{anchor:lewisc2008:LEW10}}Lewis, J. R., Commarford, P. M., Kennedy, P. J., & Sadowski, W. J. (2008). Handheld electronic devices. In C. Melody Carswell (Ed.), Reviews of Human Factors and Ergonomics, Vol. 4 (pp. 105-148). Santa Monica, CA: Human Factors and Ergonomics Society. Available at [[http://drjim.0catch.com/2008_HandheldElectronicDevices.pdf]]

{{anchor:lewisc2006:LEW11}}Lewis, J. R., Commarford, P. M., & Kotan, C. (2006). Web-based comparison of two styles of auditory presentation: All TTS versus rapidly mixed TTS and recordings. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 723–727). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000523]]

{{anchor:lewis1997:LEW12}}Lewis, J. R., Potosnak, K. M., & Magyar, R. L. (1997). Keys and keyboards. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of Human-Computer Interaction (pp. 1285-1315). Amsterdam: Elsevier. Available at [[http://drjim.0catch.com/1997_KeysAndKeyboards.pdf]]

{{anchor:lewis2000:LEW13}}Lewis, J. R., Simone, J. E., & Bogacz, M. (2000). Designing common functions for speech-only user interfaces: Rationales, sample dialogs, potential uses for event counting, and sample grammars (Tech. Report 29.3287, available at [[http://drjim.0catch.com/always-ral.pdf]]). Raleigh, NC: IBM Corp.
  
{{anchor:liberman:LIB1}}Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368. [[https://doi.org/10.1037/h0044417]]

{{anchor:litman:LIT1}}Litman, D., Hirschberg, J., & Swerts, M. (2006). Characterizing and predicting corrections in spoken dialogue systems. Computational Linguistics, 32(3), 417–438. [[https://doi.org/10.1162/coli.2006.32.3.417]]

{{anchor:lombard:LOM1}}Lombard, E. (1911). Le signe de l’elevation de la voix. Annales des maladies de l’oreille et du larynx, 37, 101–199. [[http://paul.sobriquet.net/wp-content/uploads/2007/02/lombard-1911-p-h-mason-2006.pdf]]

{{anchor:machado:MAC1}}Machado, S., Duarte, E., Teles, J., Reis, L., & Rebelo, F. (2012). Selection of a voice for a speech signal for personalized warnings: The effect of speaker's gender and voice pitch. Work, 41, 3592-3598. [[https://doi.org/10.3233/WOR-2012-0670-3592]]

{{anchor:margulies2005:MAR1}}Margulies, E. (2005). Adventures in turn-taking: Notes on success and failure in turn cue coupling. In AVIOS 2005 proceedings (pp. 1–10). San Jose, CA: AVIOS.

{{anchor:margulies1990:MAR2}}Margulies, M. K. (1980). Effects of talker differences on speech intelligibility in the hearing impaired. Doctoral dissertation, City University of New York.

{{anchor:marics:MAR3}}Marics, M. A., & Engelbeck, G. (1997). Designing voice menu applications for telephones. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction, 2nd edition (pp. 1085-1102). Amsterdam, Netherlands: Elsevier. [[https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626]]

{{anchor:markowitz:MAR4}}Markowitz, J. (2010). VUI concepts for speaker verification. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 161-166). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:massaro:MAS1}}Massaro, D. (1975). Preperceptual images, processing time, and perceptual units in speech perception. In D. Massaro (Ed.), Understanding language: An information-processing analysis of speech perception, reading, and psycholinguistics (pp. 125–150). New York, NY: Academic Press. [[https://www.amazon.com/Understanding-Language-Information-Processing-Perception-Psycholinguistics-ebook/dp/B01JOZRWWA]]
  
{{anchor:mcinnesa1999:MCI1}}McInnes, F., Attwater, D., Edgington, M. D., Schmidt, M. S., & Jack, M. A. (1999). User attitudes to concatenated natural speech and text-to-speech synthesis in an automated information service. In Proceedings of Eurospeech99 (pp. 831–834). Budapest, Hungary: ESCA. [[https://www.isca-speech.org/archive/archive_papers/eurospeech_1999/e99_0831.pdf]]

{{anchor:mcinnesn1999:MCI2}}McInnes, F. R., Nairn, I. A., Attwater, D. J., Edgington, M. D., & Jack, M. A. (1999). A comparison of confirmation strategies for fluent telephone dialogues. Edinburgh, UK: Centre for Communication Interface Research. [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.3649&rep=rep1&type=pdf]]

{{anchor:mckellin:MCK1}}McKellin, W. H., Shahin, K., Hodgson, M., Jamieson, J., & Pichora-Fuller, K. (2007). Pragmatics of conversation and communication in noisy settings. Journal of Pragmatics, 39, 2159–2184. [[https://doi.org/10.1016/j.pragma.2006.11.012]]

{{anchor:mckienzie:MCK2}}McKienzie, J. (2009). Menu pauses: How long? [PowerPoint Slides]. Paper presented at SpeechTek 2009. New York, NY: SpeechTek.

{{anchor:mctear:MCT1}}McTear, M., O’Neill, I., Hanna, P., & Liu, X. (2005). Handling errors and determining confirmation strategies—an object based approach. Speech Communication, 45, 249–269. [[https://doi.org/10.1016/j.specom.2004.11.006]]

{{anchor:miller1956:MIL1}}Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63, 81-97. [[http://www2.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA%20Magical%20Seven%20Psych%20Review%201955.pdf]]

{{anchor:miller1962:MIL2}}Miller, G. A. (1962). Some psychological studies of grammar. American Psychologist, 17, 748–762. [[http://search.proquest.com/docview/1289830820/]]

{{anchor:minker:MIN1}}Minker, W., Pitterman, J., Pitterman, A., Strauß, P.-M., & Bühler, D. (2007). Challenges in speech-based human-computer interaction. International Journal of Speech Technology, 10, 109–119. [[https://doi.org/10.1007/s10772-009-9023-y]]

{{anchor:mościcki:MOS1}}Mościcki, E. K., Elkins, E. F., Baum, H. M., & McNamara, P. M. (1985). Hearing loss in the elderly: An epidemiologic study of the Framingham Heart Study cohort. Ear and Hearing Journal, 6, 184-190. [[https://doi.org/10.1097/00003446-198507000-00003]]
  
-NovickD. G., Hansen, B., Sutton, S., & MarshallC. R. (1999). Limiting factors of automated ​telephone ​dialoguesIn D. Gardner-Bonneau ​(Ed.), Human factors and voice interactive systems (pp. 163186)Boston, MAKluwer Academic Publishers.+{{anchor:​munichor:​MUN1}}MunichorN., & RafaeliA. (2007). Numbers or apologies? Customer reactions to telephone ​waiting time fillersJournal of Applied Psychology, 92(2), 511518[[https://​doi.org/​10.1037/​0021-9010.92.2.511]]
  
-Ogden, W. C., & Bernick, P. (1997). Using natural language interfaces. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 137–161). Amsterdam, Netherlands: Elsevier.+{{anchor:nairne:NAI1}}Nairne, J. (2002). Remembering over the short-term: The case against the standard model. Annual Review of Psychology, 53, 53-81. [[http://search.proquest.com/docview/205754757]]

-Ostendorf, M., Kannan, A., Austin, S., Kimball, O., Schwartz, R., & Rohlicek, J. R. (1991). Integration of diverse recognition methodologies through reevaluation of n-best sentence hypotheses. In Proceedings of DARPA Workshop on Speech and Natural Language (pp. 83-87). Stroudsburg, PA: Association for Computational Linguistics. <http://acl.ldc.upenn.edu/H/H91/H91-1013.pdf>+{{anchor:nass2005:NAS1}}Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press. [[https://www.amazon.com/Wired-Speech-Activates-Human-Computer-Relationship-ebook/dp/B001949SMM]]

-Osuna, E. E. (1985). The psychological cost of waiting. Journal of Mathematical Psychology, 29, 82–105.+{{anchor:nass2010:NAS2}}Nass, C., & Yen, C. (2010). The man who lied to his laptop: What machines teach us about human relationships. New York, NY: Penguin Group.
+[[https://www.amazon.com/Man-Who-Lied-His-Laptop/dp/1617230049]]

-Parkinson, F. (2012). Alphanumeric Confirmation & User Data. Presentation at SpeechTek 2012, available at http://www.speechtek.com/2012/Presentations.aspx (search for Parkinson in Session B102).+{{anchor:németh:NEM1}}Németh, G., Kiss, G., Zainkó, C., Olaszy, G., & Tóth, B. (2008). Speech generation in mobile phones. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 163–191). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]
  
-Pieraccini, R. (2010). Continuous automated speech tuning and the return of statistical grammars. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 255–259). Victoria, Canada: TMA Associates.+{{anchor:north:NOR1}}North, A. C., Hargreaves, D. J., & McKendrick, J. (1999). Music and on-hold waiting time. British Journal of Psychology, 90, 161–164. [[https://doi.org/10.1348/000712699161215]]

-Pieraccini, R. (2012). The voice in the machine: Building computers that understand speech. Cambridge, MA: MIT Press.+{{anchor:novick:NOV1}}Novick, D. G., Hansen, B., Sutton, S., & Marshall, C. R. (1999). Limiting factors of automated telephone dialogues. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 163–186). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679]]

-Polkosky, M. D. (2001). User preference for system processing tones (Tech. Rep. 29.3436). Raleigh, NC: IBM.+{{anchor:ogden:OGD1}}Ogden, W. C., & Bernick, P. (1997). Using natural language interfaces. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 137–161). Amsterdam, Netherlands: Elsevier. [[https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626]]

-Polkosky, M. D. (2002). Initial psychometric evaluation of the Pragmatic Rating Scale for Dialogues (Tech. Report 29.3634). Boca Raton, FL: IBM.+{{anchor:ostendorf:OST1}}Ostendorf, M., Kannan, A., Austin, S., Kimball, O., Schwartz, R., & Rohlicek, J. R. (1991). Integration of diverse recognition methodologies through reevaluation of n-best sentence hypotheses. In Proceedings of DARPA Workshop on Speech and Natural Language (pp. 83-87). Stroudsburg, PA: Association for Computational Linguistics. [[http://acl.ldc.upenn.edu/H/H91/H91-1013.pdf]]

-Polkosky, M. D. (2005a). Toward a social-cognitive psychology of speech technology: Affective responses to speech-based e-service. Unpublished doctoral dissertation, University of South Florida.+{{anchor:osuna:OSU1}}Osuna, E. E. (1985). The psychological cost of waiting. Journal of Mathematical Psychology, 29, 82–105. [[https://doi.org/10.1016/0022-2496(85)90020-3]]

-Polkosky, M. D. (2005b). What is speech usability, anyway? Speech Technology, 10(9), 22–25.+{{anchor:parkinson:PAR1}}Parkinson, F. (2012). Alphanumeric Confirmation & User Data. Presentation at SpeechTek 2012, available at [[http://www.speechtek.com/2012/Presentations.aspx]] (search for Parkinson in Session B102).
  
-Polkosky, M. D. (2006). Respect: It’s not what you say, it’s how you say it. Speech Technology, 11(5), 16–21.+{{anchor:pieraccini2010:PIE1}}Pieraccini, R. (2010). Continuous automated speech tuning and the return of statistical grammars. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 255–259). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

-Polkosky, M. D. (2008). Machines as mediators: The challenge of technology for interpersonal communication theory and research. In E. Konjin (Ed.), Mediated interpersonal communication (pp. 34–57). New York, NY: Routledge.+{{anchor:pieraccini2012:PIE2}}Pieraccini, R. (2012). The voice in the machine: Building computers that understand speech. Cambridge, MA: MIT Press. [[https://www.amazon.com/Voice-Machine-Building-Computers-Understand/dp/0262533294]]

-Polkosky, M. D., & Lewis, J. R. (2002). Effect of auditory waiting cues on time estimation in speech recognition telephony applications. International Journal of Human-Computer Interaction, 14, 423–446.+{{anchor:polkosky2001:POL1}}Polkosky, M. D. (2001). User preference for system processing tones (Tech. Rep. 29.3436). Raleigh, NC: IBM. [[https://www.researchgate.net/publication/240626208_User_Preference_for_Turntaking_Tones_2_Participant_Source_Issues_and_Additional_Data]]

-Polkosky, M. D., & Lewis, J. R. (2003). Expanding the MOS: Development and psychometric evaluation of the MOS-R and MOS-X. International Journal of Speech Technology, 6, 161–182.+{{anchor:polkosky2002:POL2}}Polkosky, M. D. (2002). Initial psychometric evaluation of the Pragmatic Rating Scale for Dialogues (Tech. Report 29.3634). Boca Raton, FL: IBM.

-Ramos, L. (1993). The effects of on-hold telephone music on the number of premature disconnections to a statewide protective services abuse hot line. Journal of Music Therapy, 30(2), 119–129.+{{anchor:polkosky2005a:POL3}}Polkosky, M. D. (2005a). Toward a social-cognitive psychology of speech technology: Affective responses to speech-based e-service. Unpublished doctoral dissertation, University of South Florida. [[https://scholarcommons.usf.edu/etd/819/]]

-Reeves, B., & Nass, C. (2003). The media equation: How people treat computers, television, and new media like real people and places. Chicago, IL: University of Chicago Press.+{{anchor:polkosky2005b:POL4}}Polkosky, M. D. (2005b). What is speech usability, anyway? Speech Technology, 10(9), 22–25. [[https://www.speechtechmag.com/Articles/Editorial/Features/What-Is-Speech-Usability-Anyway-29601.aspx]]
  
-Reinders, M., Dabholkar, P. A., & Frambach, R. T. (2008). Consequences of forcing consumers to use technology-based self-service. Journal of Service Research, 11(2), 107-123.+{{anchor:polkosky2006:POL5}}Polkosky, M. D. (2006). Respect: It’s not what you say, it’s how you say it. Speech Technology, 11(5), 16–21. [[https://www.speechtechmag.com/Articles/Editorial/Features/Ivy-League-IVR-29587.aspx]]

-Resnick, M. & Sanchez, J. (2004). Effects of organizational scheme and labeling on task performance in product-centered and user-centered web sites. Human Factors, 46, 104-117.+{{anchor:polkosky2008:POL6}}Polkosky, M. D. (2008). Machines as mediators: The challenge of technology for interpersonal communication theory and research. In E. Konjin (Ed.), Mediated interpersonal communication (pp. 34–57). New York, NY: Routledge. [[https://www.amazon.com/Mediated-Interpersonal-Communication-Leas/dp/0805863044]]

-Roberts, F., Francis, A. L., & Morgan, M. (2006). The interaction of inter-turn silence with prosodic cues in listener perceptions of “trouble” in conversation. Speech Communication, 48, 1079–1093.+{{anchor:polkoskyl2002:POL7}}Polkosky, M. D., & Lewis, J. R. (2002). Effect of auditory waiting cues on time estimation in speech recognition telephony applications. International Journal of Human-Computer Interaction, 14, 423–446. [[https://doi.org/10.1080/10447318.2002.9669128]]

-Rolandi, W. (2003). When you don’t know what you don’t know. Speech Technology, 8(4), 28.+{{anchor:polkosky2003:POL8}}Polkosky, M. D., & Lewis, J. R. (2003). Expanding the MOS: Development and psychometric evaluation of the MOS-R and MOS-X. International Journal of Speech Technology, 6, 161–182. [[https://doi.org/10.1023/A:1022390615396]]

-Rolandi, W. (2004a). Improving customer service with speech. Speech Technology, 9(5), 14.+{{anchor:ramos:RAM1}}Ramos, L. (1993). The effects of on-hold telephone music on the number of premature disconnections to a statewide protective services abuse hot line. Journal of Music Therapy, 30(2), 119–129. [[https://doi.org/10.1093/jmt/30.2.119]]

-Rolandi, W. (2004b). Rolandi's razor. Speech Technology, 9(4), 39.+{{anchor:reeves:REE1}}Reeves, B., & Nass, C. (2003). The media equation: How people treat computers, television, and new media like real people and places. Chicago, IL: University of Chicago Press. [[https://www.amazon.com/Equation-Reeves-Clifford-Language-Paperback/dp/B00E2RJ3GE]]
  
-Rolandi, W. (2005). The impotence of being earnest. Speech Technology, 10(1), 22.+{{anchor:reinders:REI1}}Reinders, M., Dabholkar, P. A., & Frambach, R. T. (2008). Consequences of forcing consumers to use technology-based self-service. Journal of Service Research, 11(2), 107-123. [[https://doi.org/10.1177/1094670508324297]]

-Rolandi, W. (2006). The alpha bail. Speech Technology, 11(1), 56.+{{anchor:resnick:RES1}}Resnick, M. & Sanchez, J. (2004). Effects of organizational scheme and labeling on task performance in product-centered and user-centered web sites. Human Factors, 46, 104-117. [[https://doi.org/10.1518/hfes.46.1.104.30390]]

-Rolandi, W. (2007a). Aligning customer and company goals through VUI. Speech Technology, 12(2), 6.+{{anchor:roberts:ROB1}}Roberts, F., Francis, A. L., & Morgan, M. (2006). The interaction of inter-turn silence with prosodic cues in listener perceptions of “trouble” in conversation. Speech Communication, 48, 1079–1093. [[https://doi.org/10.1016/j.specom.2006.02.001]]

-Rolandi, W. (2007b). The pains of main are plainly VUI’s bane. Speech Technology, 12(1), 6.+{{anchor:rolandi2003:ROL1}}Rolandi, W. (2003). When you don’t know what you don’t know. Speech Technology, 8(4), 28. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/When-You-Dont-Know-When-You-Dont-Know-29821.aspx]]

-Rolandi, W. (2007c). The persona craze nears an end. Speech Technology, 12(5), 9.+{{anchor:rolandi2004a:ROL2}}Rolandi, W. (2004a). Improving customer service with speech. Speech Technology, 9(5), 14. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Improving-Customer-Service-with-Speech-31763.aspx]]

-Rosenbaum, S. (1989). Usability evaluations versus usability testing: When and why? IEEE Transactions on Professional Communication, 32, 210-216.+{{anchor:rolandi2004b:ROL3}}Rolandi, W. (2004b). Rolandi's razor. Speech Technology, 9(4), 39. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Rolandi%27s-Razor-29820.aspx]]
  
-Rosenfeld, R., Olsen, D., & Rudnicky, A. (2001). Universal speech interfaces. Interactions, 8(6), 34-44.+{{anchor:rolandi2005:ROL4}}Rolandi, W. (2005). The impotence of being earnest. Speech Technology, 10(1), 22. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Impotence-of-Being-Earnest-29816.aspx]]

-Sadowski, W. J. (2001). Capabilities and limitations of Wizard of Oz evaluations of speech user interfaces. In Proceedings of HCI International 2001: Usability evaluation and interface design (pp. 139–142). Mahwah, NJ: Lawrence Erlbaum.+{{anchor:rolandi2006:ROL5}}Rolandi, W. (2006). The alpha bail. Speech Technology, 11(1), 56. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Alpha-Bail-30090.aspx]]

-Sadowski, W. J., & Lewis, J. R. (2001). Usability evaluation of the IBM WebSphere “WebVoice” demo (Tech. Rep. 29.3387, available at drjim.0catch.com/vxmllive1-ral.pdf). West Palm Beach, FL: IBM Corp.+{{anchor:rolandi2007a:ROL6}}Rolandi, W. (2007a). Aligning customer and company goals through VUI. Speech Technology, 12(2), 6. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Aligning-Customer-and-Company-Goals-Through-VUI-29800.aspx]]

-Sauro, J. (2009). Estimating productivity: Composite operators for keystroke level modeling. In Jacko, J.A. (Ed.), Proceedings of the 13th International Conference on Human–Computer Interaction, HCII 2009 (pp. 352-361). Berlin, Germany: Springer-Verlag.+{{anchor:rolandi2007b:ROL7}}Rolandi, W. (2007b). The pains of main are plainly VUI’s bane. Speech Technology, 12(1), 6. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Pains-of-Main-Are-Plainly-VUIs-Bane-29801.aspx]]

-Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Burlington, MA: Morgan Kaufmann.+{{anchor:rolandi2007c:ROL8}}Rolandi, W. (2007c). The persona craze nears an end. Speech Technology, 12(5), 9. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Persona-Craze-Nears-an-End-36315.aspx]]

-Schegloff, E. A. (2000). Overlapping talk and the organization of turn-taking for conversation. Language in Society, 29, 1–63.+{{anchor:rosenbaum:ROS1}}Rosenbaum, S. (1989). Usability evaluations versus usability testing: When and why? IEEE Transactions on Professional Communication, 32, 210-216. [[https://doi.org/10.1109/47.44533]]
  
-Schoenborn, C. A., & Marano, M. (1988). Current estimates from the national health interview survey: United States 1987. In Vital and Health Statistics, series 10, #166. Washington, D.C.: Government Printing Office.+{{anchor:rosenfeld:ROS2}}Rosenfeld, R., Olsen, D., & Rudnicky, A. (2001). Universal speech interfaces. Interactions, 8(6), 34-44. [[https://doi.org/10.1145/384076.384085]]

-Sheeder, T., & Balogh, J. (2003). Say it like you mean it: Priming for structure in caller responses to a spoken dialog system. International Journal of Speech Technology, 6, 103–111.+{{anchor:sadowski2001:SAD1}}Sadowski, W. J. (2001). Capabilities and limitations of Wizard of Oz evaluations of speech user interfaces. In Proceedings of HCI International 2001: Usability evaluation and interface design (pp. 139–142). Mahwah, NJ: Lawrence Erlbaum. [[https://www.amazon.com/Usability-Evaluation-Interface-Design-Engineering/dp/0805836071]]

-Schumacher, R. M., Jr., Hardzinski, M. L., & Schwartz, A. L. (1995). Increasing the usability of interactive voice response systems: Research and guidelines for phone-based interfaces. Human Factors, 37, 251–264.+{{anchor:sadowskil2001:SAD2}}Sadowski, W. J., & Lewis, J. R. (2001). Usability evaluation of the IBM WebSphere “WebVoice” demo (Tech. Rep. 29.3387, available at [[drjim.0catch.com/vxmllive1-ral.pdf]]). West Palm Beach, FL: IBM Corp.

-Shinn, P. (2009). Getting persona – IVR voice gender, intelligibility & the aging. In Speech Strategy News (November, pp. 37-39).+{{anchor:sauro2009:SAU1}}Sauro, J. (2009). Estimating productivity: Composite operators for keystroke level modeling. In Jacko, J.A. (Ed.), Proceedings of the 13th International Conference on Human–Computer Interaction, HCII 2009 (pp. 352-361). Berlin, Germany: Springer-Verlag. [[https://doi.org/10.1007/978-3-642-02574-7_40]]

-Shinn, P., Basson, S. H., & Margulies, M. (2009). The impact of IVR voice talent selection on intelligibility. Presentation at SpeechTek 2009. Available at <www.speechtek.com/2009/program.aspx?SessionID=2386>.+{{anchor:sauro2012:SAU2}}Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Burlington, MA: Morgan Kaufmann. [[https://learning.oreilly.com/library/view/quantifying-the-user/9780123849687/]]

-Shriver, S., & Rosenfeld, R. (2002). Keywords for a universal speech interface. In Proceedings of CHI 2002 (pp. 726-727). Minneapolis, MN: ACM.+{{anchor:schegloff:SCH1}}Schegloff, E. A. (2000). Overlapping talk and the organization of turn-taking for conversation. Language in Society, 29, 1–63. [[https://doi.org/10.1017/S0047404500001019]]
  
-Skantze, G. (2005). Exploring human error recovery strategies: Implications for spoken dialogue systems. Speech Communication, 45, 325–341.+{{anchor:schoenborn:SCH2}}Schoenborn, C. A., & Marano, M. (1988). Current estimates from the national health interview survey: United States 1987. In Vital and Health Statistics, series 10, #166. Washington, D.C.: Government Printing Office. [[https://www.cdc.gov/nchs/data/series/sr_10/sr10_166.pdf]]

-Spiegel, M. F. (1997). Advanced database preprocessing and preparations that enable telecommunication services based on speech synthesis. Speech Communication, 23, 51–62.+{{anchor:schumacher:SCH3}}Schumacher, R. M., Jr., Hardzinski, M. L., & Schwartz, A. L. (1995). Increasing the usability of interactive voice response systems: Research and guidelines for phone-based interfaces. Human Factors, 37, 251–264. [[https://doi.org/10.1518/001872095779064672]]

-Spiegel, M. F. (2003a). Proper name pronunciations for speech technology applications. International Journal of Speech Technology, 6, 419-427.+{{anchor:sheeder:SHE1}}Sheeder, T., & Balogh, J. (2003). Say it like you mean it: Priming for structure in caller responses to a spoken dialog system. International Journal of Speech Technology, 6, 103–111. [[https://doi.org/10.1023/A:1022326328600]]

-Spiegel, M. F. (2003b). The difficulties with names: Overcoming barriers to personal voice services. Speech Technology, 8(3), 12-15.+{{anchor:shinn2009:SHI1}}Shinn, P. (2009). Getting persona – IVR voice gender, intelligibility & the aging. In Speech Strategy News (November, pp. 37-39).

-Stivers, T.; Enfield, N. J.; Brown, P.; Englert, C.; Hayashi, M.; Heinemann, T.; Hoymann, G.; Rossano, F.; de Ruiter, J. P.; Yoon, K.-E.; Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106 (26), 10587-10592.+{{anchor:shinnb2009:SHI2}}Shinn, P., Basson, S. H., & Margulies, M. (2009). The impact of IVR voice talent selection on intelligibility. Presentation at SpeechTek 2009. Available at [[www.speechtek.com/2009/program.aspx?SessionID=2386]].

-Suhm, B. (2008). IVR usability engineering using guidelines and analyses of end-to-end calls. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 1-41). New York, NY: Springer.+{{anchor:shriver:SHR1}}Shriver, S., & Rosenfeld, R. (2002). Keywords for a universal speech interface. In Proceedings of CHI 2002 (pp. 726-727). Minneapolis, MN: ACM. [[http://www.cs.cmu.edu/~roni/papers/ShriverRosenfeld02b.pdf]]
  
-Suhm, B., Freeman, B., & Getty, D. (2001). Curing the menu blues in touch-tone voice interfaces. In Proceedings of CHI 2001 (pp. 131-132). The Hague, Netherlands: ACM.+{{anchor:skantze:SKA1}}Skantze, G. (2005). Exploring human error recovery strategies: Implications for spoken dialogue systems. Speech Communication, 45, 325–341. [[https://doi.org/10.1016/j.specom.2004.11.005]]

-Suhm, B., Bers, J., McCarthy, D., Freeman, B., Getty, D., Godfrey, K., & Peterson, P. (2002). A comparative study of speech in the call center: Natural language call routing vs. touch-tone menus. In Proceedings of CHI 2002 (pp. 283–290). Minneapolis, MN: ACM.+{{anchor:spiegel1997:SPI1}}Spiegel, M. F. (1997). Advanced database preprocessing and preparations that enable telecommunication services based on speech synthesis. Speech Communication, 23, 51–62. [[https://doi.org/10.1016/S0167-6393(97)00039-3]]

-Toledano, D. T., Pozo, R. F., Trapote, Á. H., & Gómez, L. H. (2006). Usability evaluation of multi-modal biometric verification systems. Interacting with Computers, 18, 1101-1122.+{{anchor:spiegel2003a:SPI2}}Spiegel, M. F. (2003a). Proper name pronunciations for speech technology applications. International Journal of Speech Technology, 6, 419-427. [[https://doi.org/10.1023/A:1025721319650]]

-Tomko, S., Harris, T. K., Toth, A., Sanders, J., Rudnicky, A., & Rosenfeld, R. (2005). Towards efficient human machine speech communication: The speech graffiti project. ACM Transactions on Speech and Language Processing, 2(1), 1-27.+{{anchor:spiegel2003b:SPI3}}Spiegel, M. F. (2003b). The difficulties with names: Overcoming barriers to personal voice services. Speech Technology, 8(3), 12-15. [[https://www.speechtechmag.com/Articles/Editorial/Feature/The-Difficulties-with-Names-29614.aspx]]

-Torres, F., Hurtado, L. F., García, F., Sanchis, E., & Segarra, E. (2005). Error handling in a stochastic dialog system through confidence measures. Speech Communication, 45, 211–229.+{{anchor:stivers:STI1}}Stivers, T.; Enfield, N. J.; Brown, P.; Englert, C.; Hayashi, M.; Heinemann, T.; Hoymann, G.; Rossano, F.; de Ruiter, J. P.; Yoon, K.-E.; Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106 (26), 10587-10592. [[https://doi.org/10.1073/pnas.0903616106]]

-Turunen, M., Hakulinen, J., & Kainulainen, A. (2006). Evaluation of a spoken dialogue system with usability tests and long-term pilot studies: Similarities and differences. In Proceedings of the 9th International Conference on Spoken Language Processing (pp. 1057-1060). Pittsburgh, PA: ICSLP.+{{anchor:studio52:STU1}}Studio52. (2019, April 9). 5 Reasons why your IVR should be multilingual. Retrieved from [[https://studio52.tv/5-reasons-why-your-ivr-should-be-multilingual]]
  
-Unzicker, D. K. (1999). The psychology of being put on hold: An exploratory study of service quality. Psychology & Marketing, 16(4), 327–350.+{{anchor:suhm2008:SUH1}}Suhm, B. (2008). IVR usability engineering using guidelines and analyses of end-to-end calls. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 1-41). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]

-Vacca, J. R. (2007). Biometric technologies and verification systems. Burlington, MA: Elsevier.+{{anchor:suhm2001:SUH2}}Suhm, B., Freeman, B., & Getty, D. (2001). Curing the menu blues in touch-tone voice interfaces. In Proceedings of CHI 2001 (pp. 131-132). The Hague, Netherlands: ACM. [[https://doi.org/10.1145/634067.634147]]

-Virzi, R. A., & Huitema, J. S. (1997). Telephone-based menus: Evidence that broader is better than deeper. In Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting (pp. 315-319). Santa Monica, CA: Human Factors and Ergonomics Society.+{{anchor:suhm2002:SUH3}}Suhm, B., Bers, J., McCarthy, D., Freeman, B., Getty, D., Godfrey, K., & Peterson, P. (2002). A comparative study of speech in the call center: Natural language call routing vs. touch-tone menus. In Proceedings of CHI 2002 (pp. 283–290). Minneapolis, MN: ACM. [[https://doi.org/10.1145/503376.503427]]

-Voice Messaging User Interface Forum. (1990). Specification document. Cedar Knolls, NJ: Probe Research.+{{anchor:toledano:TOL1}}Toledano, D. T., Pozo, R. F., Trapote, Á. H., & Gómez, L. H. (2006). Usability evaluation of multi-modal biometric verification systems. Interacting with Computers, 18, 1101-1122. [[https://doi.org/10.1016/j.intcom.2006.01.004]]

-Walker, M. A., Fromer, J., Di Fabbrizio, G., Mestel, C., & Hindle, D. (1998). What can I say?: Evaluating a spoken language interface to email. In Proceedings of CHI 1998 (pp. 582–589). Los Angeles, CA: ACM.+{{anchor:tomko:TOM1}}Tomko, S., Harris, T. K., Toth, A., Sanders, J., Rudnicky, A., & Rosenfeld, R. (2005). Towards efficient human machine speech communication: The speech graffiti project. ACM Transactions on Speech and Language Processing, 2(1), 1-27. [[https://doi.org/10.1145/1075389.1075391]]

-Watt, W. C. (1968). Habitability. American Documentation, 19(3), 338–351.+{{anchor:torres:TOR1}}Torres, F., Hurtado, L. F., García, F., Sanchis, E., & Segarra, E. (2005). Error handling in a stochastic dialog system through confidence measures. Speech Communication, 45, 211–229. [[https://doi.org/10.1016/j.specom.2004.10.014]]
  
-Weegels, M. F. (2000). Users’ conceptions of voice-operated information services. International Journal of Speech Technology, 3, 75–82.+{{anchor:turunen:TUR1}}Turunen, M., Hakulinen, J., & Kainulainen, A. (2006). Evaluation of a spoken dialogue system with usability tests and long-term pilot studies: Similarities and differences. In Proceedings of the 9th International Conference on Spoken Language Processing (pp. 1057-1060). Pittsburgh, PA: ICSLP. [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.4349&rep=rep1&type=pdf]]

-Wilkie, J., McInnes, F., Jack, M. A., & Littlewood, P. (2007). Hidden menu options in automated human-computer telephone dialogues: Dissonance in the user’s mental model. Behaviour & Information Technology, 26(6), 517-534.+{{anchor:unzicker:UNZ1}}Unzicker, D. K. (1999). The psychology of being put on hold: An exploratory study of service quality. Psychology & Marketing, 16(4), 327–350. [[https://doi.org/10.1002/(SICI)1520-6793(199907)16:4<327::AID-MAR4>3.0.CO;2-G]]

-Williams, J. D., & Witt, S. M. (2004). A comparison of dialog strategies for call routing. International Journal of Speech Technology, 7, 9–24.+{{anchor:vacca:VAC1}}Vacca, J. R. (2007). Biometric technologies and verification systems. Burlington, MA: Elsevier. [[https://www.amazon.com/Biometric-Technologies-Verification-Systems-Vacca/dp/0750679670]]

-Wilson, T. P., & Zimmerman, D. H. (1986). The structure of silence between turns in two-party conversation. Discourse Processes, 9, 375–390.+{{anchor:virzi:VIR1}}Virzi, R. A., & Huitema, J. S. (1997). Telephone-based menus: Evidence that broader is better than deeper. In Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting (pp. 315-319). Santa Monica, CA: Human Factors and Ergonomics Society. [[http://search.proquest.com/docview/235451367]]

-Wolters, M., Georgila, K., Moore, J. D., Logie, R. H., MacPherson, S. E., & Watson, M. (2009). Reducing working memory load in spoken dialogue systems. Interacting with Computers, 21, 276-287.+{{anchor:voice:VOI1}}Voice Messaging User Interface Forum. (1990). Specification document. Cedar Knolls, NJ: Probe Research.

-Wright, L. E., Hartley, M. W., & Lewis, J. R. (2002). Conditional probabilities for IBM Voice Browser 2.0 alpha and alphanumeric recognition (Tech. Rep. 29.3498. Retrieved from http://drjim.0catch.com/alpha2-acc.pdf). West Palm Beach, FL: IBM.+{{anchor:walker:WAL1}}Walker, M. A., Fromer, J., Di Fabbrizio, G., Mestel, C., & Hindle, D. (1998). What can I say?: Evaluating a spoken language interface to email. In Proceedings of CHI 1998 (pp. 582–589). Los Angeles, CA: ACM. [[http://www.difabbrizio.com/papers/chi98-elvis.pdf]]
  
-Yagil, D. (2001). Ingratiation and assertiveness in the service provider-customer dyad. Journal of Service Research, 3(4), 345–353.+{{anchor:watt:WAT1}}Watt, W. C. (1968). Habitability. American Documentation, 19(3), 338–351. [[https://doi.org/10.1002/asi.5090190324]]

-Yang, F., & Heeman, P. A. (2010). Initiative conflicts in task-oriented dialogue. Computer Speech and Language, 24, 175–189.+{{anchor:weegels:WEE1}}Weegels, M. F. (2000). Users’ conceptions of voice-operated information services. International Journal of Speech Technology, 3, 75–82. [[https://doi.org/10.1023/A:1009633011507]]

-Yellin, E. (2009). Your call is (not that) important to us: Customer service and what it reveals about our world and our lives. New York, NY: Free Press.+{{anchor:wilkie:WIL1}}Wilkie, J., McInnes, F., Jack, M. A., & Littlewood, P. (2007). Hidden menu options in automated human-computer telephone dialogues: Dissonance in the user’s mental model. Behaviour & Information Technology, 26(6), 517-534. [[https://doi.org/10.1080/01449290600717783]]

-Yudkowsky, M. (2008). The creepiness factor. Speech Technology, 13(8), 4.+{{anchor:williams:WIL2}}Williams, J. D., & Witt, S. M. (2004). A comparison of dialog strategies for call routing. International Journal of Speech Technology, 7, 9–24. [[https://doi.org/10.1023/B:IJST.0000004803.47697.bd]]

-Yuschik, M. (2008). Silence locations and durations in dialog management. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 231-253). New York, NY: Springer.+{{anchor:wilson:WIL3}}Wilson, T. P., & Zimmerman, D. H. (1986). The structure of silence between turns in two-party conversation. Discourse Processes, 9, 375–390. [[https://doi.org/10.1080/01638538609544649]]

-Zoltan-Ford, E. (1991). How to get people to say and type what computers can understand. International Journal of Man-Machine Studies, 34, 527–547.+{{anchor:wolters:WOL1}}Wolters, M., Georgila, K., Moore, J. D., Logie, R. H., MacPherson, S. E., & Watson, M. (2009). Reducing working memory load in spoken dialogue systems. Interacting with Computers, 21, 276-287. [[https://doi.org/10.1016/j.intcom.2009.05.009]]

-Zurif, E. B. (1990). Language and the brain. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 177–198). Cambridge, MA: MIT Press.+{{anchor:wright:WRI1}}Wright, L. E., Hartley, M. W., & Lewis, J. R. (2002). Conditional probabilities for IBM Voice Browser 2.0 alpha and alphanumeric recognition (Tech. Rep. 29.3498. Retrieved from [[http://drjim.0catch.com/alpha2-acc.pdf]]). West Palm Beach, FL: IBM.
 + 
+{{anchor:yagil:YAG1}}Yagil, D. (2001). Ingratiation and assertiveness in the service provider-customer dyad. Journal of Service Research, 3(4), 345–353. [[https://doi.org/10.1177/109467050134007]]
+
+{{anchor:yang:YAN1}}Yang, F., & Heeman, P. A. (2010). Initiative conflicts in task-oriented dialogue. Computer Speech and Language, 24, 175–189. [[https://doi.org/10.1016/j.csl.2009.04.003]]
+
+{{anchor:yellin:YEL1}}Yellin, E. (2009). Your call is (not that) important to us: Customer service and what it reveals about our world and our lives. New York, NY: Free Press. [[https://www.amazon.com/Your-Call-Not-That-Important/dp/1416546898]]
+
+{{anchor:yudkowsky:YUD1}}Yudkowsky, M. (2008). The creepiness factor. Speech Technology, 13(8), 4. [[https://www.speechtechmag.com/Articles/Archives/Industry-View/The-Creepiness-Factor-51037.aspx]]
+
+{{anchor:yuschik:YUS1}}Yuschik, M. (2008). Silence locations and durations in dialog management. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 231-253). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]
+
+{{anchor:zoltan-ford:ZOL1}}Zoltan-Ford, E. (1991). How to get people to say and type what computers can understand. International Journal of Man-Machine Studies, 34, 527–547. [[http://www.speech.kth.se/~edlund/bielefeld/references/zoltan-ford-1991.pdf]]
+
+{{anchor:zurif:ZUR1}}Zurif, E. B. (1990). Language and the brain. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 177–198). Cambridge, MA: MIT Press. [[https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339]]