==== References ====
{{anchor:aaron:AAR1}}Aaron, A., Eide, E., & Pitrelli, J. F. (2005). Conversational computers. Scientific American, 292(6), 64–69. [[https://doi.org/10.1038/scientificamerican0605-64]]
  
{{anchor:adlin:ADL1}}Adlin, X., & Pruitt, J. (2010). The essential persona lifecycle: Your guide to building and using personas. Waltham, MA: Morgan Kaufmann. [[https://learning.oreilly.com/library/view/the-essential-persona/9780123814180/xhtml/title.html]]
  
{{anchor:ahlén:AHL1}}Ahlén, S., Kaiser, L., & Olvera, E. (2004). Are you listening to your Spanish speakers? Speech Technology, 9(4), 10–15. [[https://doi.org/10.1007/s10772-005-4759-5]]
  
{{anchor:ainsworth1992:AIN1}}Ainsworth, W. A., & Pratt, S. R. (1992). Feedback strategies for error correction in speech recognition systems. International Journal of Man-Machine Studies, 36, 833–842. [[https://doi.org/10.1016/0020-7373(92)90075-V]]
  
{{anchor:ainsworth1993:AIN2}}Ainsworth, W. A., & Pratt, S. R. (1993). Comparing error correction strategies in speech recognition systems. In C. Baber & J. M. Noyes (Eds.), Interactive speech technology: Human factors issues in the application of speech input/output to computers (pp. 131–135). London, UK: Taylor & Francis. [[https://www.amazon.com/Interactive-Speech-Technology-Application-Computers/dp/074840127X]]
  
{{anchor:alwan:ALW1}}Alwan, J., & Suhm, B. (2010). Beyond best practices: A data-driven approach to maximizing self-service. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 99–105). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:attwater:ATT1}}Attwater, D. (2008). Speech and touch-tone in harmony [PowerPoint Slides]. Paper presented at SpeechTek 2008. New York, NY: SpeechTek.
  
{{anchor:baddeley:BAD1}}Baddeley, A. D., & Hitch, G. (1974). Is working memory still working? American Psychologist, 56, 851–864. [[https://doi.org/10.1037/0003-066X.56.11.851]]
  
{{anchor:bailey:BAI1}}Bailey, R. W. (1989). Human performance engineering: Using human factors/ergonomics to achieve computer system usability. Englewood Cliffs, NJ: Prentice-Hall. [[https://www.amazon.com/Human-Performance-Engineering-Ergonomics-Usability/dp/0134451805]]
  
{{anchor:bailly:BAI2}}Bailly, G. (2003). Close shadowing natural versus synthetic speech. International Journal of Speech Technology, 6, 11–19. [[https://doi.org/10.1023/A:1021091720511]]
  
{{anchor:balentine1999:BAL1}}Balentine, B. (1999). Re-engineering the speech menu. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 205–235). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679/]]
  
{{anchor:balentine2006:BAL2}}Balentine, B. (2006). The power of the pause. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 89–91). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]
  
{{anchor:balentine2007:BAL3}}Balentine, B. (2007). It’s better to be a good machine than a bad person. Annapolis, MD: ICMI Press. [[https://www.amazon.com/Better-Good-Machine-Than-Person/dp/1932558098]]
  
{{anchor:balentine2010:BAL4}}Balentine, B. (2010). Next-generation IVR avoids first-generation user interface mistakes. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 71–74). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:balentine1997:BAL5}}Balentine, B., Ayer, C. M., Miller, C. L., & Scott, B. L. (1997). Debouncing the speech button: A sliding capture window device for synchronizing turn-taking. International Journal of Speech Technology, 2, 7–19. [[https://doi.org/10.1007/BF02539819]]
  
{{anchor:balentine2001:BAL6}}Balentine, B., & Morgan, D. P. (2001). How to build a speech recognition application: A style guide for telephony dialogues, 2nd edition. San Ramon, CA: EIG Press. [[https://www.amazon.com/How-Build-Speech-Recognition-Application/dp/0967127823]]
  
{{anchor:barkin:BAR1}}Barkin, E. (2009). But is it natural? Speech Technology, 14(2), 21–24. [[http://search.proquest.com/docview/212198708]]
  
{{anchor:beattie:BEA1}}Beattie, G. W., & Barnard, P. J. (1979). The temporal structure of natural telephone conversations (directory enquiry calls). Linguistics, 17, 213–229. [[https://doi.org/10.1515/ling.1979.17.3-4.213]]
  
{{anchor:berndt:BER1}}Berndt, R. S., Mitchum, C., Burton, M., & Haendiges, A. (2004). Comprehension of reversible sentences in aphasia: The effects of verb meaning. Cognitive Neuropsychology, 21, 229–245. [[https://doi.org/10.1080/02643290342000456]]
  
{{anchor:bitner:BIT1}}Bitner, M. J., Ostrom, A. L., & Meuter, M. L. (2002). Implementing successful self-service technologies. Academy of Management Executive, 16(4), 96–108. [[https://doi.org/10.5465/ame.2002.8951333]]
  
{{anchor:bloom2005:BLO1}}Bloom, J., Gilbert, J. E., Houwing, T., Hura, S., Issar, S., Kaiser, L., et al. (2005). Ten criteria for measuring effective voice user interfaces. Speech Technology, 10(9), 31–35. [[https://www.speechtechmag.com/Articles/Editorial/Feature/Ten-Criteria-for-Measuring-Effective-Voice-User-Interfaces-29443.aspx]]
  
{{anchor:bloom1999:BLO2}}Bloom, R., Pick, L., Borod, J., Rorie, K., Andelman, F., Obler, L., Sliwinski, M., Campbell, A., Tweedy, J., & Welkowitz, J. (1999). Psychometric aspects of verbal pragmatic ratings. Brain and Language, 68, 553–565. [[https://doi.org/10.1006/brln.1999.2128]]
  
{{anchor:boretz:BOR1}}Boretz, A. (2009). VUI standards: The great debate. Speech Technology, 14(8), 14–19. [[http://search.proquest.com/docview/212191853]]
  
{{anchor:boyce2008:BOY1}}Boyce, S. J. (2008). User interface design for natural language systems: From research to reality. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 43–80). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]
  
{{anchor:boyce2010:BOY2}}Boyce, S., & Viets, M. (2010). When is it my turn to talk?: Building smart, lean menus. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 108–112). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:broadbent:BRO1}}Broadbent, D. E. (1977). Language and ergonomics. Applied Ergonomics, 8, 15–18. [[https://doi.org/10.1016/0003-6870(77)90111-9]]
  
{{anchor:bryne:BYR1}}Byrne, B. (2003). “Conversational” isn’t always what you think it is. Speech Technology, 8(4), 16–19. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=30039]]
  
{{anchor:callejas:CAL1}}Callejas, Z., & López-Cózar, R. (2008). Relations between de-facto criteria in the evaluation of a spoken dialogue system. Speech Communication, 50, 646–665. [[https://doi.org/10.1016/j.specom.2008.04.004]]
  
{{anchor:calteaux:CAL2}}Calteaux, K., Grover, A., & van Huyssteen, G. (2012). Business drivers and design choices for multilingual IVRs: A government service delivery case study. Retrieved from [[http://www.mica.edu.vn/sltu2012/files/proceedings/7.pdf]]
  
{{anchor:chang:CHA1}}Chang, C. (2006). When service fails: The role of the salesperson and the customer. Psychology & Marketing, 23(3), 203–224. [[https://doi.org/10.1002/mar.20096]]
  
{{anchor:chapanis:CHA2}}Chapanis, A. (1988). Some generalizations about generalization. Human Factors, 30, 253–267. [[https://doi.org/10.1177/001872088803000301]]
  
{{anchor:clark1996:CLA1}}Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press. [[https://www.amazon.com/Using-Language-Herbert-H-Clark-ebook/dp/B016MYWOUG]]
  
{{anchor:clark2004:CLA2}}Clark, H. H. (2004). Pragmatics of language performance. In L. R. Horn & G. Ward (Eds.), Handbook of pragmatics (pp. 365–382). Oxford, UK: Blackwell. [[https://doi.org/10.1002/9780470756959.ch16]]
  
{{anchor:cohen:COH1}}Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Boston, MA: Addison-Wesley. [[https://learning.oreilly.com/library/view/voice-user-interface/0321185765]]
  
{{anchor:commarford:COM1}}Commarford, P. M., & Lewis, J. R. (2005). Optimizing the pause length before presentation of global navigation commands. In Proceedings of HCI International 2005: Volume 2—The management of information: E-business, the Web, and mobile computing (pp. 1–7). St. Louis, MO: Mira Digital Publication. [[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.508.6365]]
  
{{anchor:commarford2008:COM2}}Commarford, P. M., Lewis, J. R., Al-Awar Smither, J., & Gentzler, M. D. (2008). A comparison of broad versus deep auditory menu structures. Human Factors, 50(1), 77–89. [[https://doi.org/10.1518/001872008X250665]]
  
{{anchor:couper:COU1}}Couper, M. P., Singer, E., & Tourangeau, R. (2004). Does voice matter? An interactive voice response (IVR) experiment. Journal of Official Statistics, 20(3), 551–570. [[http://search.proquest.com/docview/1266795179]]
  
{{anchor:crystal:CRY1}}Crystal, T. H., & House, A. S. (1990). Articulation rate and the duration of syllables and stress groups in connected speech. Journal of the Acoustical Society of America, 88, 101–112. [[https://doi.org/10.1121/1.399955]]
  
{{anchor:cunningham:CUN1}}Cunningham, L. F., Young, C. E., & Gerladina, J. H. (2008). Consumer views of self-service technologies. The Service Industries Journal, 28(6), 719–732. [[https://doi.org/10.1080/02642060801988522]]
  
{{anchor:dahl:DAH1}}Dahl, D. (2006). Point/counter point on personas. Speech Technology, 11(1), 18–21. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29584]]
  
{{anchor:damperg2007:DAM1}}Damper, R. I., & Gladstone, K. (2007). Experiences of usability evaluation of the IMAGINE speech-based interaction system. International Journal of Speech Technology, 9, 41–50. [[https://doi.org/10.1007/s10772-006-9003-4]]
  
{{anchor:dampers2007:DAM2}}Damper, R. I., & Soonklang, T. (2007). Subjective evaluation of techniques for proper name pronunciation. IEEE Transactions on Audio, Speech, and Language Processing, 15(8), 2213–2221. [[https://doi.org/10.1109/TASL.2007.904192]]
  
{{anchor:davidson:DAV1}}Davidson, N., McInnes, F., & Jack, M. A. (2004). Usability of dialogue design strategies for automated surname capture. Speech Communication, 43, 55–70. [[https://doi.org/10.1016/j.specom.2004.02.002]]
  
{{anchor:dougherty:DOU1}}Dougherty, M. (2010). What’s universally available, but rarely used? In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 117–120). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:dulude:DUL1}}Dulude, L. (2002). Automated telephone answering systems and aging. Behaviour and Information Technology, 21(3), 171–184. [[https://doi.org/10.1080/0144929021000013482]]
  
{{anchor:durrande-moreau:DUR1}}Durrande-Moreau, A. (1999). Waiting for service: Ten years of empirical research. International Journal of Service Industry Management, 10(2), 171–189. [[https://doi.org/10.1108/09564239910264334]]
  
{{anchor:edworthy:EDW1}}Edworthy, J., & Hellier, E. (2006). Complex nonverbal auditory signals and speech warnings. In M. S. Wogalter (Ed.), Handbook of warnings (pp. 199–220). Mahwah, NJ: Lawrence Erlbaum. [[https://www.amazon.com/Handbook-Warnings-Human-Factors-Ergonomics-ebook/dp/B07CSSLTTJ]]
  
{{anchor:enterprise:ENT1}}Enterprise Integration Group. (2000). Speech Recognition 1999 R&D Program: User interface design recommendations final report. San Ramon, CA: Author.
  
{{anchor:ervin-tripp:ERV1}}Ervin-Tripp, S. (1993). Conversational discourse. In J. B. Gleason & N. B. Ratner (Eds.), Psycholinguistics (pp. 238–270). Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642]]
  
{{anchor:evans:EVA1}}Evans, D. G., Draffan, E. A., James, A., & Blenkhorn, P. (2006). Do text-to-speech synthesizers pronounce correctly? A preliminary study. In K. Miesenberger et al. (Eds.), Proceedings of ICCHP (pp. 855–862). Berlin, Germany: Springer-Verlag. [[https://doi.org/10.1007/11788713_124]]
  
{{anchor:ferreira:FER1}}Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 164–203. [[https://doi.org/10.1016/S0010-0285(03)00005-7]]
  
{{anchor:fosler-lussier:FOS1}}Fosler-Lussier, E., Amdal, I., & Juo, H. J. (2005). A framework for predicting speech recognition errors. Speech Communication, 46, 153–170. [[https://doi.org/10.1016/j.specom.2005.03.003]]
  
{{anchor:frankish:FRA1}}Frankish, C., & Noyes, J. (1990). Sources of human error in data entry tasks using speech input. Human Factors, 32(6), 697–716. [[https://doi.org/10.1177/001872089003200607]]
  
{{anchor:fried:FRI1}}Fried, J., & Edmondson, R. (2006). How customer perceived latency measures success in voice self-service. Business Communications Review, 36(3), 26–32. [[http://www.webtorials.com/main/resource/papers/BCR/paper101/fried-03-06.pdf]]
  
{{anchor:fröhlich:FRO1}}Fröhlich, P. (2005). Dealing with system response times in interactive speech applications. In Proceedings of CHI 2005 (pp. 1379–1382). Portland, OR: ACM. [[https://doi.org/10.1145/1056808.1056921]]
  
{{anchor:fromkin:FRO2}}Fromkin, V., Rodman, R., & Hyams, N. (1998). An introduction to language (6th ed.). Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Introduction-Language-6th-Sixth/dp/B0035E4B26]]
  
{{anchor:gardner-bonneau1992:GAR1}}Gardner-Bonneau, D. J. (1992). Human factors in interactive voice response applications: “Common sense” is an uncommon commodity. Journal of the American Voice I/O Society, 12, 1–12.
  
{{anchor:gardner-bonneau1999:GAR2}}Gardner-Bonneau, D. (1999). Guidelines for speech-enabled IVR application design. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 147–162). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679]]
  
{{anchor:garrett:GAR3}}Garrett, M. F. (1990). Sentence processing. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 133–176). Cambridge, MA: MIT Press. [[https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339]]
  
{{anchor:giebutowksi:GIE1}}Giebutowski, J. (2017, December 18). Multilingual IVR – 5 Big Ways to Get It Exactly WRONG. Marketing Messages. Retrieved from [[https://www.marketingmessages.com/multilingual-ivr-5-big-ways-to-get-it-exactly-wrong]]
  
{{anchor:gleason:GLE1}}Gleason, J. B., & Ratner, N. B. (1993). Psycholinguistics. Fort Worth, TX: Harcourt Brace Jovanovich. [[https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642]]
  
{{anchor:goodwin:GOO1}}Goodwin, A. (2018, February 21). 5 Multilingual IVR Tips to Take Your Business Global [Web log post]. Retrieved from [[https://www.west.com/blog/interactive-services/multilingual-ivr-take-business-global]]
  
{{anchor:gould:GOU1}}Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., & Schoonard, J. (1987). The 1984 Olympics message system: A test of behavioral principles of system design. Communications of the ACM, 30, 758–769. [[https://doi.org/10.1145/30401.30402]]
  
{{anchor:graham2005:GRA1}}Graham, G. M. (2005). Voice branding in America. Alpharetta, GA: Vivid Voices. [[https://www.amazon.com/Voice-Branding-America-Marcus-Graham/dp/0975989502]]
  
{{anchor:graham2010:GRA2}}Graham, G. M. (2010). Speech recognition, the brand and the voice: How to choose a voice for your application. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 93–98). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:grice:GRI1}}Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics, volume 3: Speech acts (pp. 41–58). New York, NY: Academic Press. [[https://www.amazon.com/Syntax-Semantics-3-Speech-Acts/dp/0127854231]]
  
{{anchor:guinn:GUI1}}Guinn, I. (2010). You can’t think of everything: The importance of tuning speech applications. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 89–92). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:hafner:HAF1}}Hafner, K. (2004, Sept. 9). A voice with personality, just trying to help. The New York Times. Retrieved from [[www.nytimes.com/2004/09/09/technology/circuits/09emil.html]]
  
{{anchor:halstead-nussloch:HAL1}}Halstead-Nussloch, R. (1989). The design of phone-based interfaces for consumers. In Proceedings of CHI 1989 (pp. 347–352). Austin, TX: ACM. [[https://doi.org/10.1016/0003-6870(91)90015-A]]
  
{{anchor:harris:HAR1}}Harris, R. A. (2005). Voice interaction design: Crafting the new conversational speech systems. San Francisco, CA: Morgan Kaufmann. [[https://www.amazon.com/Voice-Interaction-Design-Conversational-Technologies-ebook/dp/B001CPLXXK]]
  
{{anchor:heins:HEI1}}Heins, R., Franzke, M., Durian, M., & Bayya, A. (1997). Turn-taking as a design principle for barge-in in spoken language systems. International Journal of Speech Technology, 2, 155–164. [[https://doi.org/10.1007/BF02208827]]
  
{{anchor:henton:HEN1}}Henton, C. (2003). The name game: Pronunciation puzzles for TTS. Speech Technology, 8(5), 32–35. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29501]]
  
{{anchor:hone:HON1}}Hone, K. S., & Graham, R. (2000). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(3–4), 287–303. [[https://doi.org/10.1017/S1351324900002497]]
  
{{anchor:houwing:HOU1}}Houwing, T., & Greiner, P. (2005). Design issues in multilingual applications. (SPEECH-WORLD[TM]) (interactive voice response systems). Customer Interaction Solutions, 23(12), 88–93. Retrieved from [[http://search.proquest.com/docview/208150344]]
  
{{anchor:huang:HUA1}}Huang, X., Acero, A., & Hon, H. (2001). Spoken language processing: A guide to theory, algorithm and system development. Upper Saddle River, NJ: Prentice Hall. [[https://www.amazon.com/Spoken-Language-Processing-Algorithm-Development/dp/0130226165]]
  
{{anchor:huguenard:HUG1}}Huguenard, B. R., Lurch, F. J., Junker, B. W., Patz, R. J., & Kass, R. E. (1997). Working-memory failure in phone-based interaction. ACM Transactions on Computer-Human Interaction, 4(2), 67–102. [[https://doi.org/10.1145/254945.254947]]
  
{{anchor:hunter:HUN1}}Hunter, P. (2009). More isn't better, but (help me with) something else is. From the design-outloud blog. [[http://blog.design-outloud.com/2009]]
  
{{anchor:hura2008:HUR1}}Hura, S. L. (2008). What counts as VUI? Speech Technology, 13(9), 7. [[http://search.proquest.com/docview/212185822/]]
  
{{anchor:hura2010:HUR2}}Hura, S. L. (2010). My big fat main menu: The case for strategically breaking the rules. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 113–116). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:jain:JAI1}}Jain, A. K., & Pankanti, S. (2008). Beyond fingerprinting. Scientific American, 299(3), 78–81. [[https://doi.org/10.1038/scientificamerican0908-78]]
  
{{anchor:jelinek:JEL1}}Jelinek, F. (1997). Statistical methods for speech recognition. Cambridge, MA: MIT Press. [[https://www.amazon.com/Frederick-Jelinek-Statistical-Methods-Recognition/dp/B008VS12VO]]
  
{{anchor:joe:JOE1}}Joe, R. (2007). The elements of style. Speech Technology, 12(8), 20–24. [[http://search.proquest.com/docview/212188958]]
  
{{anchor:johnstone:JOH1}}Johnstone, A., Berry, U., Nguyen, T., & Asper, A. (1994). There was a long pause: Influencing turn-taking behaviour in human-human and human-computer spoken dialogues. International Journal of Human-Computer Studies, 41, 383–411. [[https://doi.org/10.1006/ijhc.1995.1018]]
  
{{anchor:kaiser:KAI1}}Kaiser, L., Krogh, P., Leathem, C., McTernan, F., Nelson, C., Parks, M. C., & Turney, S. (2008). Thinking outside the box: Designing for the overall user experience. From the 2008 Workshop on the Maturation of VUI.
  
{{anchor:karray:KAR1}}Karray, L., & Martin, A. (2003). Towards improving speech detection robustness for speech recognition in adverse conditions. Speech Communication, 40, 261–276. [[https://doi.org/10.1016/S0167-6393(02)00066-3]]
  
{{anchor:kaushanksy:KAU1}}Kaushansky, K. (2006). Voice authentication – not just another speech application. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 139–142). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]
  
{{anchor:klatt:KLA1}}Klatt, D. (1987). Review of text-to-speech conversion for English. Journal of the Acoustical Society of America, 82, 737–793. Audio samples available at <[[www.cs.indiana.edu/rhythmsp/ASA/Contents.html]]>. [[https://doi.org/10.1121/1.395275]]
  
{{anchor:kleijnen:KLE1}}Kleijnen, M., de Ruyter, K., & Wetzels, M. (2007). An assessment of value creation in mobile service delivery and the moderating role of time consciousness. Journal of Retailing, 83(1), 33–46. [[https://doi.org/10.1016/j.jretai.2006.10.004]]
  
{{anchor:klie2007:KLI1}}Klie, L. (2007). It’s a persona, not a personality. Speech Technology, 12(5), 22–26. [[http://search.proquest.com/docview/212204672]]
  
{{anchor:klie2010:KLI2}}Klie, L. (2010). When in Rome. Speech Technology, 15(3), 20–24. [[http://search.proquest.com/docview/325176389/]]
  
{{anchor:knott:KNO1}}Knott, B. A., Bushey, R. R., & Martin, J. M. (2004). Natural language prompts for an automated call router: Examples increase the clarity of user responses. In Proceedings of the Human Factors and Ergonomics Society 48th annual meeting (pp. 736–739). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120404800407]]
  
{{anchor:kortum2006:KOR1}}Kortum, P., & Peres, S. C. (2006). An exploration of the use of complete songs as auditory progress bars. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 2071–2075). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120605001776]]
  
{{anchor:kortum2007:KOR2}}Kortum, P., & Peres, S. C. (2007). A survey of secondary activities of telephone callers who are put on hold. In Proceedings of the Human Factors and Ergonomics Society 51st annual meeting (pp. 1153–1157). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120705101821]]
  
{{anchor:kortum2005:KOR3}}Kortum, P., Peres, S. C., Knott, B. A., & Bushey, R. (2005). The effect of auditory progress bars on consumer’s estimation of telephone wait time. In Proceedings of the Human Factors and Ergonomics Society 49th annual meeting (pp. 628–632). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120504900406]]
  
{{anchor:kotan:KOT1}}Kotan, C., & Lewis, J. R. (2006). Investigation of confirmation strategies for speech recognition applications. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 728–732). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000524]]
  
{{anchor:kotelly2003:KOT2}}Kotelly, B. (2003). The art and business of speech recognition: Creating the noble voice. Boston, MA: Pearson Education. [[https://www.amazon.com/Art-Business-Speech-Recognition-Creating/dp/0321154924]]
  
{{anchor:kotelly2006:KOT3}}Kotelly, B. (2006). Six tips for better branding. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 61–64). Victoria, Canada: TMA Associates. [[https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737]]
  
{{anchor:krahmer:KRA1}}Krahmer, E., Swerts, M., Theune, M., & Weegels, M. (2001). Error detection in spoken human-machine interaction. International Journal of Speech Technology, 4, 19–30. [[https://doi.org/10.1023/A:1009648614566]]
  
{{anchor:lai:LAI1}}Lai, J., Karat, C.-M., & Yankelovich, N. (2008). Conversational speech interfaces and technology. In A. Sears & J. A. Jacko (Eds.), The human-computer interaction handbook: Fundamentals, evolving technologies, and emerging applications (pp. 381–391). New York, NY: Lawrence Erlbaum. [[https://www.amazon.com/Human-Computer-Interaction-Handbook-Fundamentals-Technologies-ebook/dp/B0083V45J0]]
  
{{anchor:larson:LAR1}}Larson, J. A. (2005). Ten guidelines for designing a successful voice user interface. Speech Technology, 10(1), 51–53. [[https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29608]]
  
{{anchor:leppik2005:LEP1}}Leppik, P. (2005). Does forcing callers to use self-service work? Quality Times, 22, 1–3. Downloaded 2/18/2009 from [[http://www.vocalabs.com/resources/newsletter/newsletter22.html]]
  
{{anchor:leppik2006:LEP2}}Leppik, P. (2006). Developing metrics part 1: Bad metrics. The Customer Service Survey. Retrieved from [[www.vocalabs.com/resources/blog/C834959743/E20061205170807/index.html]]
  
{{anchor:leppik2012:LEP3}}Leppik, P. (2012). The customer frustration index. Golden Valley, MN: Vocal Laboratories. Downloaded 7/23/2012 from [[http://www.vocalabs.com/download-ncss-cross-industry-report-customer-frustration-index-q2-2012]]
  
{{anchor:leppikl2005:LEP4}}Leppik, P., & Leppik, D. (2005). Gourmet customer service: A scientific approach to improving the caller experience. Eden Prairie, MN: VocaLabs. [[https://www.amazon.com/Gourmet-Customer-Service-Scientific-Experience/dp/0976405504]]
  
{{anchor:lewis1982:LEW1}}Lewis, J. R. (1982). Testing small system customer set-up. In Proceedings of the Human Factors Society 26th annual meeting (pp. 718–720). Santa Monica, CA: Human Factors Society. [[https://doi.org/10.1177/154193128202600810]]
  
{{anchor:lewis2004:LEW2}}Lewis, J. R. (2004). Effect of speaker and sampling rate on MOS-X ratings of concatenative TTS voices. In Proceedings of the Human Factors and Ergonomics Society (pp. 759–763). Santa Monica, CA: HFES. [[https://doi.org/10.1177/154193120404800504]]
  
{{anchor:lewis2005:LEW3}}Lewis, J. R. (2005). Frequency distributions for names and unconstrained words associated with the letters of the English alphabet. In Proceedings of HCI International 2005: Posters (pp. 1–5). St. Louis, MO: Mira Digital Publication. Available at [[http://drjim.0catch.com/hcii05-368-wordfrequency.pdf]]
  
{{anchor:lewis2006:LEW4}}Lewis, J. R. (2006). Effectiveness of various automated readability measures for the competitive evaluation of user documentation. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 624–628). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000501]]
  
{{anchor:lewis2007:LEW5}}Lewis, J. R. (2007). Advantages and disadvantages of press or say <x> speech user interfaces (Tech. Rep. BCR-UX-2007-0002. Retrieved from [[http://drjim.0catch.com/2007_AdvantagesAndDisadvantagesOfPressOrSaySpeechUserInter.pdf]]). Boca Raton, FL: IBM Corp.
  
{{anchor:lewis2008:LEW6}}Lewis, J. R. (2008). Usability evaluation of a speech recognition IVR. In T. Tullis & B. Albert (Eds.), Measuring the user experience, Chapter 10: Case studies (pp. 244–252). Amsterdam, Netherlands: Morgan-Kaufman. [[https://www.amazon.com/Measuring-User-Experience-Interactive-Technologies/dp/0123735580]]
  
{{anchor:lewis2011:LEW7}}Lewis, J. R. (2011). Practical speech user interface design. Boca Raton, FL: CRC Press, Taylor & Francis Group. [[https://www.amazon.com/Practical-Speech-Interface-Factors-Ergonomics-ebook/dp/B008KZ6TAM]]
  
{{anchor:lewis2012:LEW8}}Lewis, J. R. (2012). Usability testing. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics, 4th ed. (pp. 1267–1312). New York, NY: John Wiley. [[https://www.amazon.com/Handbook-Factors-Ergonomics-Gavriel-Salvendy/dp/0470528389]]
  
{{anchor:lewis2003:LEW9}}Lewis, J. R., & Commarford, P. M. (2003). Developing a voice-spelling alphabet for PDAs. IBM Systems Journal, 42(4), 624–638. Available at [[http://drjim.0catch.com/2003_DevelopingAVoiceSpellingAlphabetForPDAs.pdf]]
  
{{anchor:lewisc2008:LEW10}}Lewis, J. R., Commarford, P. M., Kennedy, P. J., & Sadowski, W. J. (2008). Handheld electronic devices. In C. Melody Carswell (Ed.), Reviews of Human Factors and Ergonomics, Vol. 4 (pp. 105–148). Santa Monica, CA: Human Factors and Ergonomics Society. Available at [[http://drjim.0catch.com/2008_HandheldElectronicDevices.pdf]]
  
{{anchor:lewisc2006:LEW11}}Lewis, J. R., Commarford, P. M., & Kotan, C. (2006). Web-based comparison of two styles of auditory presentation: All TTS versus rapidly mixed TTS and recordings. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 723–727). Santa Monica, CA: Human Factors and Ergonomics Society. [[https://doi.org/10.1177/154193120605000523]]
  
{{anchor:lewis1997:LEW12}}Lewis, J. R., Potosnak, K. M., & Magyar, R. L. (1997). Keys and keyboards. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of Human-Computer Interaction (pp. 1285–1315). Amsterdam: Elsevier. Available at [[http://drjim.0catch.com/1997_KeysAndKeyboards.pdf]]
  
{{anchor:lewis2000:LEW13}}Lewis, J. R., Simone, J. E., & Bogacz, M. (2000). Designing common functions for speech-only user interfaces: Rationales, sample dialogs, potential uses for event counting, and sample grammars (Tech. Report 29.3287, available at <[[http://drjim.0catch.com/always-ral.pdf]]>). Raleigh, NC: IBM Corp.
  
{{anchor:liberman:LIB1}}Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368. [[https://doi.org/10.1037/h0044417]]
  
{{anchor:litman:LIT1}}Litman, D., Hirschberg, J., & Swerts, M. (2006). Characterizing and predicting corrections in spoken dialogue systems. Computational Linguistics, 32(3), 417–438. [[https://doi.org/10.1162/coli.2006.32.3.417]]
  
{{anchor:lombard:LOM1}}Lombard, E. (1911). Le signe de l’elevation de la voix. Annales des maladies de l’oreille et du larynx, 37, 101–199. [[http://paul.sobriquet.net/wp-content/uploads/2007/02/lombard-1911-p-h-mason-2006.pdf]]
  
{{anchor:machado:MAC1}}Machado, S., Duarte, E., Teles, J., Reis, L., & Rebelo, F. (2012). Selection of a voice for a speech signal for personalized warnings: The effect of speaker's gender and voice pitch. Work, 41, 3592–3598. [[https://doi.org/10.3233/WOR-2012-0670-3592]]
  
{{anchor:margulies2005:MAR1}}Margulies, E. (2005). Adventures in turn-taking: Notes on success and failure in turn cue coupling. In AVIOS 2005 proceedings (pp. 1–10). San Jose, CA: AVIOS.
  
{{anchor:margulies1990:MAR2}}Margulies, M. K. (1980). Effects of talker differences on speech intelligibility in the hearing impaired. Doctoral dissertation, City University of New York.
  
{{anchor:marics:MAR3}}Marics, M. A., & Engelbeck, G. (1997). Designing voice menu applications for telephones. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction, 2nd edition (pp. 1085–1102). Amsterdam, Netherlands: Elsevier. [[https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626]]
  
{{anchor:markowitz:MAR4}}Markowitz, J. (2010). VUI concepts for speaker verification. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 161–166). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]
  
{{anchor:massaro:MAS1}}Massaro, D. (1975). Preperceptual images, processing time, and perceptual units in speech perception. In D. Massaro (Ed.), Understanding language: An information-processing analysis of speech perception, reading, and psycholinguistics (pp. 125–150). New York, NY: Academic Press. [[https://www.amazon.com/Understanding-Language-Information-Processing-Perception-Psycholinguistics-ebook/dp/B01JOZRWWA]]
  
{{anchor:mcinnesa1999:MCI1}}McInnes, F., Attwater, D., Edgington, M. D., Schmidt, M. S., & Jack, M. A. (1999). User attitudes to concatenated natural speech and text-to-speech synthesis in an automated information service. In Proceedings of Eurospeech99 (pp. 831–834). Budapest, Hungary: ESCA. [[https://www.isca-speech.org/archive/archive_papers/eurospeech_1999/e99_0831.pdf]]
  
{{anchor:mcinnesn1999:MCI2}}McInnes, F. R., Nairn, I. A., Attwater, D. J., Edgington, M. D., & Jack, M. A. (1999). A comparison of confirmation strategies for fluent telephone dialogues. Edinburgh, UK: Centre for Communication Interface Research. [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.3649&rep=rep1&type=pdf]]
  
{{anchor:mckellin:MCK1}}McKellin, W. H., Shahin, K., Hodgson, M., Jamieson, J., & Pichora-Fuller, K. (2007). Pragmatics of conversation and communication in noisy settings. Journal of Pragmatics, 39, 2159–2184. [[https://doi.org/10.1016/j.pragma.2006.11.012]]
  
{{anchor:mckienzie:MCK2}}McKienzie, J. (2009). Menu pauses: How long? [PowerPoint Slides]. Paper presented at SpeechTek 2009. New York, NY: SpeechTek.
  
{{anchor:mctear:MCT1}}McTear, M., O’Neill, I., Hanna, P., & Liu, X. (2005). Handling errors and determining confirmation strategies—an object based approach. Speech Communication, 45, 249–269. [[https://doi.org/10.1016/j.specom.2004.11.006]]
  
{{anchor:miller1956:MIL1}}Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63, 81–97. [[http://www2.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA%20Magical%20Seven%20Psych%20Review%201955.pdf]]
  
{{anchor:miller1962:MIL2}}Miller, G. A. (1962). Some psychological studies of grammar. American Psychologist, 17, 748–762. [[http://search.proquest.com/docview/1289830820/]]
  
{{anchor:minker:MIN1}}Minker, W., Pitterman, J., Pitterman, A., Strauß, P.-M., & Bühler, D. (2007). Challenges in speech-based human-computer interaction. International Journal of Speech Technology, 10, 109–119. [[https://doi.org/10.1007/s10772-009-9023-y]]
  
-NémethG., Kiss, G., ZainkóC., OlaszyG., & TóthB. (2008). Speech generation ​in mobile phonesIn D. Gardner-Bonneau & HEBlanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 163–191). New York, NY: Springer.+{{anchor:​mościcki:​MOS1}}MościckiE.K., ElkinsE. F., BaumH. M., & McNamaraP. M. (1985). Hearing loss in the elderly: An epidemiologic study of the Framingham Heart Study cohortEar and Hearing Journal, 6, 184-190[[https://​doi.org/10.1097/​00003446-198507000-00003]]
  
-NorthA. C., Hargreaves, D. J., & McKendrickJ. (1999). Music and on-hold ​waiting time. British ​Journal of Psychology, ​90161164.+{{anchor:​munichor:​MUN1}}MunichorN., & RafaeliA. (2007). Numbers or apologies? Customer reactions to telephone ​waiting time fillers. Journal of Applied ​Psychology, ​92(2)511518. [[https://​doi.org/​10.1037/​0021-9010.92.2.511]]
  
-NovickD. G., Hansen, B., Sutton, S., & Marshall, C. R. (1999). Limiting factors of automated telephone dialogues. In D. Gardner-Bonneau (Ed.)Human factors and voice interactive systems (pp163–186)Boston, MA: Kluwer Academic Publishers.+{{anchor:​nairne:​NAI1}}NairneJ. (2002). Remembering over the short-term: The case against the standard modelAnnual Review of Psychology, 5353-81[[http://​search.proquest.com/​docview/​205754757]]
  
-OgdenW. C., & BernickP. (1997). Using natural language interfaces. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer ​interaction (pp. 137–161)AmsterdamNetherlandsElsevier.+{{anchor:​nass2005:​NAS1}}Nass, C., & BraveS. (2005). Wired for speech: How voice activates and advances the human-computer ​relationshipCambridgeMAMIT Press[[https://​www.amazon.com/​Wired-Speech-Activates-Human-Computer-Relationship-ebook/​dp/​B001949SMM]] ​
  
-OstendorfM., Kannan, A., Austin, S., Kimball, O., Schwartz, R., & RohlicekJ. R. (1991). Integration of diverse recognition methodologies through reevaluation of n-best sentence hypotheses. In Proceedings of DARPA Workshop on Speech and Natural Language (pp. 83-87)StroudsburgPAAssociation for Computational Linguistics<http://acl.ldc.upenn.edu/H/H91/H91-1013.pdf>​+{{anchor:​nass2010:​NAS2}}NassC., & YenC. (2010). The man who lied to his laptop: What machines teach us about human relationshipsNew YorkNYPenguin Group. 
 +[[https://www.amazon.com/Man-Who-Lied-His-Laptop/dp/1617230049]]
  
{{anchor:németh:NEM1}}Németh, G., Kiss, G., Zainkó, C., Olaszy, G., & Tóth, B. (2008). Speech generation in mobile phones. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 163–191). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]

{{anchor:north:NOR1}}North, A. C., Hargreaves, D. J., & McKendrick, J. (1999). Music and on-hold waiting time. British Journal of Psychology, 90, 161–164. [[https://doi.org/10.1348/000712699161215]]

{{anchor:novick:NOV1}}Novick, D. G., Hansen, B., Sutton, S., & Marshall, C. R. (1999). Limiting factors of automated telephone dialogues. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 163–186). Boston, MA: Kluwer Academic Publishers. [[https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679]]

{{anchor:ogden:OGD1}}Ogden, W. C., & Bernick, P. (1997). Using natural language interfaces. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 137–161). Amsterdam, Netherlands: Elsevier. [[https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626]]

{{anchor:ostendorf:OST1}}Ostendorf, M., Kannan, A., Austin, S., Kimball, O., Schwartz, R., & Rohlicek, J. R. (1991). Integration of diverse recognition methodologies through reevaluation of n-best sentence hypotheses. In Proceedings of DARPA Workshop on Speech and Natural Language (pp. 83-87). Stroudsburg, PA: Association for Computational Linguistics. [[http://acl.ldc.upenn.edu/H/H91/H91-1013.pdf]]

{{anchor:osuna:OSU1}}Osuna, E. E. (1985). The psychological cost of waiting. Journal of Mathematical Psychology, 29, 82–105. [[https://doi.org/10.1016/0022-2496(85)90020-3]]

{{anchor:parkinson:PAR1}}Parkinson, F. (2012). Alphanumeric Confirmation & User Data. Presentation at SpeechTek 2012, available at [[http://www.speechtek.com/2012/Presentations.aspx]] (search for Parkinson in Session B102).

{{anchor:pieraccini2010:PIE1}}Pieraccini, R. (2010). Continuous automated speech tuning and the return of statistical grammars. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 255–259). Victoria, Canada: TMA Associates. [[https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227]]

{{anchor:pieraccini2012:PIE2}}Pieraccini, R. (2012). The voice in the machine: Building computers that understand speech. Cambridge, MA: MIT Press. [[https://www.amazon.com/Voice-Machine-Building-Computers-Understand/dp/0262533294]]

{{anchor:polkosky2001:POL1}}Polkosky, M. D. (2001). User preference for system processing tones (Tech. Rep. 29.3436). Raleigh, NC: IBM. [[https://www.researchgate.net/publication/240626208_User_Preference_for_Turntaking_Tones_2_Participant_Source_Issues_and_Additional_Data]]

{{anchor:polkosky2002:POL2}}Polkosky, M. D. (2002). Initial psychometric evaluation of the Pragmatic Rating Scale for Dialogues (Tech. Report 29.3634). Boca Raton, FL: IBM.

{{anchor:polkosky2005a:POL3}}Polkosky, M. D. (2005a). Toward a social-cognitive psychology of speech technology: Affective responses to speech-based e-service. Unpublished doctoral dissertation, University of South Florida. [[https://scholarcommons.usf.edu/etd/819/]]

{{anchor:polkosky2005b:POL4}}Polkosky, M. D. (2005b). What is speech usability, anyway? Speech Technology, 10(9), 22–25. [[https://www.speechtechmag.com/Articles/Editorial/Features/What-Is-Speech-Usability-Anyway-29601.aspx]]

{{anchor:polkosky2006:POL5}}Polkosky, M. D. (2006). Respect: It’s not what you say, it’s how you say it. Speech Technology, 11(5), 16–21. [[https://www.speechtechmag.com/Articles/Editorial/Features/Ivy-League-IVR-29587.aspx]]

{{anchor:polkosky2008:POL6}}Polkosky, M. D. (2008). Machines as mediators: The challenge of technology for interpersonal communication theory and research. In E. Konjin (Ed.), Mediated interpersonal communication (pp. 34–57). New York, NY: Routledge. [[https://www.amazon.com/Mediated-Interpersonal-Communication-Leas/dp/0805863044]]

{{anchor:polkoskyl2002:POL7}}Polkosky, M. D., & Lewis, J. R. (2002). Effect of auditory waiting cues on time estimation in speech recognition telephony applications. International Journal of Human-Computer Interaction, 14, 423–446. [[https://doi.org/10.1080/10447318.2002.9669128]]

{{anchor:polkosky2003:POL8}}Polkosky, M. D., & Lewis, J. R. (2003). Expanding the MOS: Development and psychometric evaluation of the MOS-R and MOS-X. International Journal of Speech Technology, 6, 161–182. [[https://doi.org/10.1023/A:1022390615396]]
  
{{anchor:ramos:RAM1}}Ramos, L. (1993). The effects of on-hold telephone music on the number of premature disconnections to a statewide protective services abuse hot line. Journal of Music Therapy, 30(2), 119–129. [[https://doi.org/10.1093/jmt/30.2.119]]

{{anchor:reeves:REE1}}Reeves, B., & Nass, C. (2003). The media equation: How people treat computers, television, and new media like real people and places. Chicago, IL: University of Chicago Press. [[https://www.amazon.com/Equation-Reeves-Clifford-Language-Paperback/dp/B00E2RJ3GE]]

{{anchor:reinders:REI1}}Reinders, M., Dabholkar, P. A., & Frambach, R. T. (2008). Consequences of forcing consumers to use technology-based self-service. Journal of Service Research, 11(2), 107-123. [[https://doi.org/10.1177/1094670508324297]]

{{anchor:resnick:RES1}}Resnick, M., & Sanchez, J. (2004). Effects of organizational scheme and labeling on task performance in product-centered and user-centered web sites. Human Factors, 46, 104-117. [[https://doi.org/10.1518/hfes.46.1.104.30390]]

{{anchor:roberts:ROB1}}Roberts, F., Francis, A. L., & Morgan, M. (2006). The interaction of inter-turn silence with prosodic cues in listener perceptions of “trouble” in conversation. Speech Communication, 48, 1079–1093. [[https://doi.org/10.1016/j.specom.2006.02.001]]

{{anchor:rolandi2003:ROL1}}Rolandi, W. (2003). When you don’t know what you don’t know. Speech Technology, 8(4), 28. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/When-You-Dont-Know-When-You-Dont-Know-29821.aspx]]

{{anchor:rolandi2004a:ROL2}}Rolandi, W. (2004a). Improving customer service with speech. Speech Technology, 9(5), 14. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Improving-Customer-Service-with-Speech-31763.aspx]]

{{anchor:rolandi2004b:ROL3}}Rolandi, W. (2004b). Rolandi's razor. Speech Technology, 9(4), 39. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Rolandi%27s-Razor-29820.aspx]]

{{anchor:rolandi2005:ROL4}}Rolandi, W. (2005). The impotence of being earnest. Speech Technology, 10(1), 22. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Impotence-of-Being-Earnest-29816.aspx]]

{{anchor:rolandi2006:ROL5}}Rolandi, W. (2006). The alpha bail. Speech Technology, 11(1), 56. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Alpha-Bail-30090.aspx]]

{{anchor:rolandi2007a:ROL6}}Rolandi, W. (2007a). Aligning customer and company goals through VUI. Speech Technology, 12(2), 6. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Aligning-Customer-and-Company-Goals-Through-VUI-29800.aspx]]

{{anchor:rolandi2007b:ROL7}}Rolandi, W. (2007b). The pains of main are plainly VUI’s bane. Speech Technology, 12(1), 6. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Pains-of-Main-Are-Plainly-VUIs-Bane-29801.aspx]]

{{anchor:rolandi2007c:ROL8}}Rolandi, W. (2007c). The persona craze nears an end. Speech Technology, 12(5), 9. [[https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Persona-Craze-Nears-an-End-36315.aspx]]

{{anchor:rosenbaum:ROS1}}Rosenbaum, S. (1989). Usability evaluations versus usability testing: When and why? IEEE Transactions on Professional Communication, 32, 210-216. [[https://doi.org/10.1109/47.44533]]

{{anchor:rosenfeld:ROS2}}Rosenfeld, R., Olsen, D., & Rudnicky, A. (2001). Universal speech interfaces. Interactions, 8(6), 34-44. [[https://doi.org/10.1145/384076.384085]]

{{anchor:sadowski2001:SAD1}}Sadowski, W. J. (2001). Capabilities and limitations of Wizard of Oz evaluations of speech user interfaces. In Proceedings of HCI International 2001: Usability evaluation and interface design (pp. 139–142). Mahwah, NJ: Lawrence Erlbaum. [[https://www.amazon.com/Usability-Evaluation-Interface-Design-Engineering/dp/0805836071]]

{{anchor:sadowskil2001:SAD2}}Sadowski, W. J., & Lewis, J. R. (2001). Usability evaluation of the IBM WebSphere “WebVoice” demo (Tech. Rep. 29.3387, available at [[drjim.0catch.com/vxmllive1-ral.pdf]]). West Palm Beach, FL: IBM Corp.
  
{{anchor:sauro2009:SAU1}}Sauro, J. (2009). Estimating productivity: Composite operators for keystroke level modeling. In Jacko, J. A. (Ed.), Proceedings of the 13th International Conference on Human–Computer Interaction, HCII 2009 (pp. 352-361). Berlin, Germany: Springer-Verlag. [[https://doi.org/10.1007/978-3-642-02574-7_40]]

{{anchor:sauro2012:SAU2}}Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Burlington, MA: Morgan Kaufmann. [[https://learning.oreilly.com/library/view/quantifying-the-user/9780123849687/]]

{{anchor:schegloff:SCH1}}Schegloff, E. A. (2000). Overlapping talk and the organization of turn-taking for conversation. Language in Society, 29, 1–63. [[https://doi.org/10.1017/S0047404500001019]]

{{anchor:schoenborn:SCH2}}Schoenborn, C. A., & Marano, M. (1988). Current estimates from the national health interview survey: United States 1987. In Vital and Health Statistics, series 10, #166. Washington, D.C.: Government Printing Office. [[https://www.cdc.gov/nchs/data/series/sr_10/sr10_166.pdf]]

{{anchor:schumacher:SCH3}}Schumacher, R. M., Jr., Hardzinski, M. L., & Schwartz, A. L. (1995). Increasing the usability of interactive voice response systems: Research and guidelines for phone-based interfaces. Human Factors, 37, 251–264. [[https://doi.org/10.1518/001872095779064672]]

{{anchor:sheeder:SHE1}}Sheeder, T., & Balogh, J. (2003). Say it like you mean it: Priming for structure in caller responses to a spoken dialog system. International Journal of Speech Technology, 6, 103–111. [[https://doi.org/10.1023/A:1022326328600]]

{{anchor:shinn2009:SHI1}}Shinn, P. (2009). Getting persona – IVR voice gender, intelligibility & the aging. In Speech Strategy News (November, pp. 37-39).

{{anchor:shinnb2009:SHI2}}Shinn, P., Basson, S. H., & Margulies, M. (2009). The impact of IVR voice talent selection on intelligibility. Presentation at SpeechTek 2009. Available at [[www.speechtek.com/2009/program.aspx?SessionID=2386]].

{{anchor:shriver:SHR1}}Shriver, S., & Rosenfeld, R. (2002). Keywords for a universal speech interface. In Proceedings of CHI 2002 (pp. 726-727). Minneapolis, MN: ACM. [[http://www.cs.cmu.edu/~roni/papers/ShriverRosenfeld02b.pdf]]

{{anchor:skantze:SKA1}}Skantze, G. (2005). Exploring human error recovery strategies: Implications for spoken dialogue systems. Speech Communication, 45, 325–341. [[https://doi.org/10.1016/j.specom.2004.11.005]]

{{anchor:spiegel1997:SPI1}}Spiegel, M. F. (1997). Advanced database preprocessing and preparations that enable telecommunication services based on speech synthesis. Speech Communication, 23, 51–62. [[https://doi.org/10.1016/S0167-6393(97)00039-3]]

{{anchor:spiegel2003a:SPI2}}Spiegel, M. F. (2003a). Proper name pronunciations for speech technology applications. International Journal of Speech Technology, 6, 419-427. [[https://doi.org/10.1023/A:1025721319650]]

{{anchor:spiegel2003b:SPI3}}Spiegel, M. F. (2003b). The difficulties with names: Overcoming barriers to personal voice services. Speech Technology, 8(3), 12-15. [[https://www.speechtechmag.com/Articles/Editorial/Feature/The-Difficulties-with-Names-29614.aspx]]

{{anchor:stivers:STI1}}Stivers, T.; Enfield, N. J.; Brown, P.; Englert, C.; Hayashi, M.; Heinemann, T.; Hoymann, G.; Rossano, F.; de Ruiter, J. P.; Yoon, K.-E.; Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106 (26), 10587-10592. [[https://doi.org/10.1073/pnas.0903616106]]

{{anchor:studio52:STU1}}Studio52. (2019, April 9). 5 Reasons why your IVR should be multilingual. Retrieved from [[https://studio52.tv/5-reasons-why-your-ivr-should-be-multilingual]]

{{anchor:suhm2008:SUH1}}Suhm, B. (2008). IVR usability engineering using guidelines and analyses of end-to-end calls. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 1-41). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]

{{anchor:suhm2001:SUH2}}Suhm, B., Freeman, B., & Getty, D. (2001). Curing the menu blues in touch-tone voice interfaces. In Proceedings of CHI 2001 (pp. 131-132). The Hague, Netherlands: ACM. [[https://doi.org/10.1145/634067.634147]]
  
{{anchor:suhm2002:SUH3}}Suhm, B., Bers, J., McCarthy, D., Freeman, B., Getty, D., Godfrey, K., & Peterson, P. (2002). A comparative study of speech in the call center: Natural language call routing vs. touch-tone menus. In Proceedings of CHI 2002 (pp. 283–290). Minneapolis, MN: ACM. [[https://doi.org/10.1145/503376.503427]]

{{anchor:toledano:TOL1}}Toledano, D. T., Pozo, R. F., Trapote, Á. H., & Gómez, L. H. (2006). Usability evaluation of multi-modal biometric verification systems. Interacting with Computers, 18, 1101-1122. [[https://doi.org/10.1016/j.intcom.2006.01.004]]

{{anchor:tomko:TOM1}}Tomko, S., Harris, T. K., Toth, A., Sanders, J., Rudnicky, A., & Rosenfeld, R. (2005). Towards efficient human machine speech communication: The speech graffiti project. ACM Transactions on Speech and Language Processing, 2(1), 1-27. [[https://doi.org/10.1145/1075389.1075391]]

{{anchor:torres:TOR1}}Torres, F., Hurtado, L. F., García, F., Sanchis, E., & Segarra, E. (2005). Error handling in a stochastic dialog system through confidence measures. Speech Communication, 45, 211–229. [[https://doi.org/10.1016/j.specom.2004.10.014]]

{{anchor:turunen:TUR1}}Turunen, M., Hakulinen, J., & Kainulainen, A. (2006). Evaluation of a spoken dialogue system with usability tests and long-term pilot studies: Similarities and differences. In Proceedings of the 9th International Conference on Spoken Language Processing (pp. 1057-1060). Pittsburgh, PA: ICSLP. [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.4349&rep=rep1&type=pdf]]

{{anchor:unzicker:UNZ1}}Unzicker, D. K. (1999). The psychology of being put on hold: An exploratory study of service quality. Psychology & Marketing, 16(4), 327–350. [[https://doi.org/10.1002/(SICI)1520-6793(199907)16:4<327::AID-MAR4>3.0.CO;2-G]]

{{anchor:vacca:VAC1}}Vacca, J. R. (2007). Biometric technologies and verification systems. Burlington, MA: Elsevier. [[https://www.amazon.com/Biometric-Technologies-Verification-Systems-Vacca/dp/0750679670]]

{{anchor:virzi:VIR1}}Virzi, R. A., & Huitema, J. S. (1997). Telephone-based menus: Evidence that broader is better than deeper. In Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting (pp. 315-319). Santa Monica, CA: Human Factors and Ergonomics Society. [[http://search.proquest.com/docview/235451367]]

{{anchor:voice:VOI1}}Voice Messaging User Interface Forum. (1990). Specification document. Cedar Knolls, NJ: Probe Research.

{{anchor:walker:WAL1}}Walker, M. A., Fromer, J., Di Fabbrizio, G., Mestel, C., & Hindle, D. (1998). What can I say?: Evaluating a spoken language interface to email. In Proceedings of CHI 1998 (pp. 582–589). Los Angeles, CA: ACM. [[http://www.difabbrizio.com/papers/chi98-elvis.pdf]]

{{anchor:watt:WAT1}}Watt, W. C. (1968). Habitability. American Documentation, 19(3), 338–351. [[https://doi.org/10.1002/asi.5090190324]]

{{anchor:weegels:WEE1}}Weegels, M. F. (2000). Users’ conceptions of voice-operated information services. International Journal of Speech Technology, 3, 75–82. [[https://doi.org/10.1023/A:1009633011507]]

{{anchor:wilkie:WIL1}}Wilkie, J., McInnes, F., Jack, M. A., & Littlewood, P. (2007). Hidden menu options in automated human-computer telephone dialogues: Dissonance in the user’s mental model. Behaviour & Information Technology, 26(6), 517-534. [[https://doi.org/10.1080/01449290600717783]]

{{anchor:williams:WIL2}}Williams, J. D., & Witt, S. M. (2004). A comparison of dialog strategies for call routing. International Journal of Speech Technology, 7, 9–24. [[https://doi.org/10.1023/B:IJST.0000004803.47697.bd]]

{{anchor:wilson:WIL3}}Wilson, T. P., & Zimmerman, D. H. (1986). The structure of silence between turns in two-party conversation. Discourse Processes, 9, 375–390. [[https://doi.org/10.1080/01638538609544649]]

{{anchor:wolters:WOL1}}Wolters, M., Georgila, K., Moore, J. D., Logie, R. H., MacPherson, S. E., & Watson, M. (2009). Reducing working memory load in spoken dialogue systems. Interacting with Computers, 21, 276-287. [[https://doi.org/10.1016/j.intcom.2009.05.009]]

{{anchor:wright:WRI1}}Wright, L. E., Hartley, M. W., & Lewis, J. R. (2002). Conditional probabilities for IBM Voice Browser 2.0 alpha and alphanumeric recognition (Tech. Rep. 29.3498. Retrieved from [[http://drjim.0catch.com/alpha2-acc.pdf]]). West Palm Beach, FL: IBM.
  
{{anchor:yagil:YAG1}}Yagil, D. (2001). Ingratiation and assertiveness in the service provider-customer dyad. Journal of Service Research, 3(4), 345–353. [[https://doi.org/10.1177/109467050134007]]

{{anchor:yang:YAN1}}Yang, F., & Heeman, P. A. (2010). Initiative conflicts in task-oriented dialogue. Computer Speech and Language, 24, 175–189. [[https://doi.org/10.1016/j.csl.2009.04.003]]

{{anchor:yellin:YEL1}}Yellin, E. (2009). Your call is (not that) important to us: Customer service and what it reveals about our world and our lives. New York, NY: Free Press. [[https://www.amazon.com/Your-Call-Not-That-Important/dp/1416546898]]

{{anchor:yudkowsky:YUD1}}Yudkowsky, M. (2008). The creepiness factor. Speech Technology, 13(8), 4. [[https://www.speechtechmag.com/Articles/Archives/Industry-View/The-Creepiness-Factor-51037.aspx]]

{{anchor:yuschik:YUS1}}Yuschik, M. (2008). Silence locations and durations in dialog management. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 231-253). New York, NY: Springer. [[https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X]]

{{anchor:zoltan-ford:ZOL1}}Zoltan-Ford, E. (1991). How to get people to say and type what computers can understand. International Journal of Man-Machine Studies, 34, 527–547. [[http://www.speech.kth.se/~edlund/bielefeld/references/zoltan-ford-1991.pdf]]

{{anchor:zurif:ZUR1}}Zurif, E. B. (1990). Language and the brain. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 177–198). Cambridge, MA: MIT Press. [[https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339]]