
References

AAR1

Aaron, A., Eide, E., & Pitrelli, J. F. (2005). Conversational computers. Scientific American, 292(6), 64–69. https://doi.org/10.1038/scientificamerican0605-64

ADL1

Adlin, T., & Pruitt, J. (2010). The essential persona lifecycle: Your guide to building and using personas. Waltham, MA: Morgan Kaufmann. https://learning.oreilly.com/library/view/the-essential-persona/9780123814180/xhtml/title.html

AHL1

Ahlén, S., Kaiser, L., & Olvera, E. (2004). Are you listening to your Spanish speakers? Speech Technology, 9(4), 10–15. https://doi.org/10.1007/s10772-005-4759-5

AIN1

Ainsworth, W. A., & Pratt, S. R. (1992). Feedback strategies for error correction in speech recognition systems. International Journal of Man-Machine Studies, 36, 833–842. https://doi.org/10.1016/0020-7373(92)90075-V

AIN2

Ainsworth, W. A., & Pratt, S. R. (1993). Comparing error correction strategies in speech recognition systems. In C. Baber & J. M. Noyes (Eds.), Interactive speech technology: Human factors issues in the application of speech input/output to computers (pp. 131–135). London, UK: Taylor & Francis. https://www.amazon.com/Interactive-Speech-Technology-Application-Computers/dp/074840127X

ALW1

Alwan, J., & Suhm, B. (2010). Beyond best practices: A data-driven approach to maximizing self-service. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 99–105). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

ATT1

Attwater, D. (2008). Speech and touch-tone in harmony [PowerPoint Slides]. Paper presented at SpeechTek 2008. New York, NY: SpeechTek.

BAD1

Baddeley, A. D. (2001). Is working memory still working? American Psychologist, 56, 851–864. https://doi.org/10.1037/0003-066X.56.11.851

BAI1

Bailey, R. W. (1989). Human performance engineering: Using human factors/ergonomics to achieve computer system usability. Englewood Cliffs, NJ: Prentice-Hall. https://www.amazon.com/Human-Performance-Engineering-Ergonomics-Usability/dp/0134451805

BAI2

Bailly, G. (2003). Close shadowing natural versus synthetic speech. International Journal of Speech Technology, 6, 11–19. https://doi.org/10.1023/A:1021091720511

BAL1

Balentine, B. (1999). Re-engineering the speech menu. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 205–235). Boston, MA: Kluwer Academic Publishers. https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679/

BAL2

Balentine, B. (2006). The power of the pause. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 89–91). Victoria, Canada: TMA Associates. https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737

BAL3

Balentine, B. (2007). It’s better to be a good machine than a bad person. Annapolis, MD: ICMI Press. https://www.amazon.com/Better-Good-Machine-Than-Person/dp/1932558098

BAL4

Balentine, B. (2010). Next-generation IVR avoids first-generation user interface mistakes. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 71–74). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

BAL5

Balentine, B., Ayer, C. M., Miller, C. L., & Scott, B. L. (1997). Debouncing the speech button: A sliding capture window device for synchronizing turn-taking. International Journal of Speech Technology, 2, 7–19. https://doi.org/10.1007/BF02539819

BAL6

Balentine, B., & Morgan, D. P. (2001). How to build a speech recognition application: A style guide for telephony dialogues, 2nd edition. San Ramon, CA: EIG Press. https://www.amazon.com/How-Build-Speech-Recognition-Application/dp/0967127823

BAR1

Barkin, E. (2009). But is it natural? Speech Technology, 14(2), 21–24. http://search.proquest.com/docview/212198708

BEA1

Beattie, G. W., & Barnard, P. J. (1979). The temporal structure of natural telephone conversations (directory enquiry calls). Linguistics, 17, 213–229. https://doi.org/10.1515/ling.1979.17.3-4.213

BER1

Berndt, R. S., Mitchum, C., Burton, M., & Haendiges, A. (2004). Comprehension of reversible sentences in aphasia: The effects of verb meaning. Cognitive Neuropsychology, 21, 229–245. https://doi.org/10.1080/02643290342000456

BIT1

Bitner, M. J., Ostrom, A. L., & Meuter, M. L. (2002). Implementing successful self-service technologies. Academy of Management Executive, 16(4), 96–108. https://doi.org/10.5465/ame.2002.8951333

BLO1

Bloom, J., Gilbert, J. E., Houwing, T., Hura, S., Issar, S., Kaiser, L., et al. (2005). Ten criteria for measuring effective voice user interfaces. Speech Technology, 10(9), 31–35. https://www.speechtechmag.com/Articles/Editorial/Feature/Ten-Criteria-for-Measuring-Effective-Voice-User-Interfaces-29443.aspx

BLO2

Bloom, R., Pick, L., Borod, J., Rorie, K., Andelman, F., Obler, L., Sliwinski, M., Campbell, A., Tweedy, J., & Welkowitz, J. (1999). Psychometric aspects of verbal pragmatic ratings. Brain and Language, 68, 553–565. https://doi.org/10.1006/brln.1999.2128

BOR1

Boretz, A. (2009). VUI standards: The great debate. Speech Technology, 14(8), 14–19. http://search.proquest.com/docview/212191853

BOY1

Boyce, S. J. (2008). User interface design for natural language systems: From research to reality. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 43–80). New York, NY: Springer. https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X

BOY2

Boyce, S., & Viets, M. (2010). When is it my turn to talk?: Building smart, lean menus. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 108–112). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

BRO1

Broadbent, D. E. (1977). Language and ergonomics. Applied Ergonomics, 8, 15–18. https://doi.org/10.1016/0003-6870(77)90111-9

BYR1

Byrne, B. (2003). “Conversational” isn’t always what you think it is. Speech Technology, 8(4), 16–19. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=30039

CAL1

Callejas, Z., & López-Cózar, R. (2008). Relations between de-facto criteria in the evaluation of a spoken dialogue system. Speech Communication, 50, 646–665. https://doi.org/10.1016/j.specom.2008.04.004

CAL2

Calteaux, K., Grover, A., & van Huyssteen, G. (2012). Business drivers and design choices for multilingual IVRs: A government service delivery case study. Retrieved from http://www.mica.edu.vn/sltu2012/files/proceedings/7.pdf

CHA1

Chang, C. (2006). When service fails: The role of the salesperson and the customer. Psychology & Marketing, 23(3), 203–224. https://doi.org/10.1002/mar.20096

CHA2

Chapanis, A. (1988). Some generalizations about generalization. Human Factors, 30, 253–267. https://doi.org/10.1177/001872088803000301

CLA1

Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press. https://www.amazon.com/Using-Language-Herbert-H-Clark-ebook/dp/B016MYWOUG

CLA2

Clark, H. H. (2004). Pragmatics of language performance. In L. R. Horn & G. Ward (Eds.), Handbook of pragmatics (pp. 365–382). Oxford, UK: Blackwell. https://doi.org/10.1002/9780470756959.ch16

COH1

Cohen, M. H., Giangola, J. P., & Balogh, J. (2004). Voice user interface design. Boston, MA: Addison-Wesley. https://learning.oreilly.com/library/view/voice-user-interface/0321185765

COM1

Commarford, P. M., & Lewis, J. R. (2005). Optimizing the pause length before presentation of global navigation commands. In Proceedings of HCI International 2005: Volume 2—The management of information: E-business, the Web, and mobile computing (pp. 1–7). St. Louis, MO: Mira Digital Publication. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.508.6365

COM2

Commarford, P. M., Lewis, J. R., Al-Awar Smither, J., & Gentzler, M. D. (2008). A comparison of broad versus deep auditory menu structures. Human Factors, 50(1), 77–89. https://doi.org/10.1518/001872008X250665

COU1

Couper, M. P., Singer, E., & Tourangeau, R. (2004). Does voice matter? An interactive voice response (IVR) experiment. Journal of Official Statistics, 20(3), 551–570. http://search.proquest.com/docview/1266795179

CRY1

Crystal, T. H., & House, A. S. (1990). Articulation rate and the duration of syllables and stress groups in connected speech. Journal of the Acoustical Society of America, 88, 101–112. https://doi.org/10.1121/1.399955

CUN1

Cunningham, L. F., Young, C. E., & Gerlach, J. H. (2008). Consumer views of self-service technologies. The Service Industries Journal, 28(6), 719–732. https://doi.org/10.1080/02642060801988522

DAH1

Dahl, D. (2006). Point/counter point on personas. Speech Technology, 11(1), 18–21. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29584

DAM1

Damper, R. I., & Gladstone, K. (2007). Experiences of usability evaluation of the IMAGINE speech-based interaction system. International Journal of Speech Technology, 9, 41–50. https://doi.org/10.1007/s10772-006-9003-4

DAM2

Damper, R. I., & Soonklang, T. (2007). Subjective evaluation of techniques for proper name pronunciation. IEEE Transactions on Audio, Speech, and Language Processing, 15(8), 2213–2221. https://doi.org/10.1109/TASL.2007.904192

DAV1

Davidson, N., McInnes, F., & Jack, M. A. (2004). Usability of dialogue design strategies for automated surname capture. Speech Communication, 43, 55–70. https://doi.org/10.1016/j.specom.2004.02.002

DOU1

Dougherty, M. (2010). What’s universally available, but rarely used? In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 117–120). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

DUL1

Dulude, L. (2002). Automated telephone answering systems and aging. Behaviour and Information Technology, 21(3), 171–184. https://doi.org/10.1080/0144929021000013482

DUR1

Durrande-Moreau, A. (1999). Waiting for service: Ten years of empirical research. International Journal of Service Industry Management, 10(2), 171–189. https://doi.org/10.1108/09564239910264334

EDW1

Edworthy, J., & Hellier, E. (2006). Complex nonverbal auditory signals and speech warnings. In M. S. Wogalter (Ed.), Handbook of warnings (pp. 199–220). Mahwah, NJ: Lawrence Erlbaum. https://www.amazon.com/Handbook-Warnings-Human-Factors-Ergonomics-ebook/dp/B07CSSLTTJ

ENT1

Enterprise Integration Group. (2000). Speech Recognition 1999 R&D Program: User interface design recommendations final report. San Ramon, CA: Author.

ERV1

Ervin-Tripp, S. (1993). Conversational discourse. In J. B. Gleason & N. B. Ratner (Eds.), Psycholinguistics (pp. 238–270). Fort Worth, TX: Harcourt Brace Jovanovich. https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642

EVA1

Evans, D. G., Draffan, E. A., James, A., & Blenkhorn, P. (2006). Do text-to-speech synthesizers pronounce correctly? A preliminary study. In K. Miesenberger et al. (Eds.), Proceedings of ICCHP (pp. 855–862). Berlin, Germany: Springer-Verlag. https://doi.org/10.1007/11788713_124

FER1

Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 164–203. https://doi.org/10.1016/S0010-0285(03)00005-7

FOS1

Fosler-Lussier, E., Amdal, I., & Kuo, H. J. (2005). A framework for predicting speech recognition errors. Speech Communication, 46, 153–170. https://doi.org/10.1016/j.specom.2005.03.003

FRA1

Frankish, C., & Noyes, J. (1990). Sources of human error in data entry tasks using speech input. Human Factors, 32(6), 697–716. https://doi.org/10.1177/001872089003200607

FRI1

Fried, J., & Edmondson, R. (2006). How customer perceived latency measures success in voice self-service. Business Communications Review, 36(3), 26–32. http://www.webtorials.com/main/resource/papers/BCR/paper101/fried-03-06.pdf

FRO1

Fröhlich, P. (2005). Dealing with system response times in interactive speech applications. In Proceedings of CHI 2005 (pp. 1379–1382). Portland, OR: ACM. https://doi.org/10.1145/1056808.1056921

FRO2

Fromkin, V., Rodman, R., & Hyams, N. (1998). An introduction to language (6th ed.). Fort Worth, TX: Harcourt Brace Jovanovich. https://www.amazon.com/Introduction-Language-6th-Sixth/dp/B0035E4B26

GAR1

Gardner-Bonneau, D. J. (1992). Human factors in interactive voice response applications: “Common sense” is an uncommon commodity. Journal of the American Voice I/O Society, 12, 1–12.

GAR2

Gardner-Bonneau, D. (1999). Guidelines for speech-enabled IVR application design. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 147–162). Boston, MA: Kluwer Academic Publishers. https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679

GAR3

Garrett, M. F. (1990). Sentence processing. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 133–176). Cambridge, MA: MIT Press. https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339

GIE1

Giebutowski, J. (2017, December 18). Multilingual IVR: 5 big ways to get it exactly wrong [Web log post]. Marketing Messages. Retrieved from https://www.marketingmessages.com/multilingual-ivr-5-big-ways-to-get-it-exactly-wrong

GLE1

Gleason, J. B., & Ratner, N. B. (1993). Psycholinguistics. Fort Worth, TX: Harcourt Brace Jovanovich. https://www.amazon.com/Psycholinguistics-Nan-Bernstein-Ratner/dp/0030559642

GOO1

Goodwin, A. (2018, February 21). 5 Multilingual IVR Tips to Take Your Business Global [Web log post]. Retrieved from https://www.west.com/blog/interactive-services/multilingual-ivr-take-business-global

GOU1

Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., & Schoonard, J. (1987). The 1984 Olympics message system: A test of behavioral principles of system design. Communications of the ACM, 30, 758–769. https://doi.org/10.1145/30401.30402

GRA1

Graham, G. M. (2005). Voice branding in America. Alpharetta, GA: Vivid Voices. https://www.amazon.com/Voice-Branding-America-Marcus-Graham/dp/0975989502

GRA2

Graham, G. M. (2010). Speech recognition, the brand and the voice: How to choose a voice for your application. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 93–98). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

GRI1

Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics, volume 3: Speech acts (pp. 41–58). New York, NY: Academic Press. https://www.amazon.com/Syntax-Semantics-3-Speech-Acts/dp/0127854231

GUI1

Guinn, I. (2010). You can’t think of everything: The importance of tuning speech applications. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 89–92). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

HAF1

Hafner, K. (2004, September 9). A voice with personality, just trying to help. The New York Times. Retrieved from www.nytimes.com/2004/09/09/technology/circuits/09emil.html

HAL1

Halstead-Nussloch, R. (1989). The design of phone-based interfaces for consumers. In Proceedings of CHI 1989 (pp. 347–352). Austin, TX: ACM.

HAR1

Harris, R. A. (2005). Voice interaction design: Crafting the new conversational speech systems. San Francisco, CA: Morgan Kaufmann. https://www.amazon.com/Voice-Interaction-Design-Conversational-Technologies-ebook/dp/B001CPLXXK

HEI1

Heins, R., Franzke, M., Durian, M., & Bayya, A. (1997). Turn-taking as a design principle for barge-in in spoken language systems. International Journal of Speech Technology, 2, 155–164. https://doi.org/10.1007/BF02208827

HEN1

Henton, C. (2003). The name game: Pronunciation puzzles for TTS. Speech Technology, 8(5), 32–35. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29501

HON1

Hone, K. S., & Graham, R. (2000). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(3–4), 287–303. https://doi.org/10.1017/S1351324900002497

HOU1

Houwing, T., & Greiner, P. (2005). Design issues in multilingual applications. Customer Interaction Solutions, 23(12), 88–93. Retrieved from http://search.proquest.com/docview/208150344

HUA1

Huang, X., Acero, A., & Hon, H. (2001). Spoken language processing: A guide to theory, algorithm and system development. Upper Saddle River, NJ: Prentice Hall. https://www.amazon.com/Spoken-Language-Processing-Algorithm-Development/dp/0130226165

HUG1

Huguenard, B. R., Lerch, F. J., Junker, B. W., Patz, R. J., & Kass, R. E. (1997). Working-memory failure in phone-based interaction. ACM Transactions on Computer-Human Interaction, 4(2), 67–102. https://doi.org/10.1145/254945.254947

HUN1

Hunter, P. (2009). More isn't better, but (help me with) something else is [Web log post]. Retrieved from http://blog.design-outloud.com/2009

HUR1

Hura, S. L. (2008). What counts as VUI? Speech Technology, 13(9), 7. http://search.proquest.com/docview/212185822/

HUR2

Hura, S. L. (2010). My big fat main menu: The case for strategically breaking the rules. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 113–116). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

JAI1

Jain, A. K., & Pankanti, S. (2008). Beyond fingerprinting. Scientific American, 299(3), 78–81. https://doi.org/10.1038/scientificamerican0908-78

JEL1

Jelinek, F. (1997). Statistical methods for speech recognition. Cambridge, MA: MIT Press. https://www.amazon.com/Frederick-Jelinek-Statistical-Methods-Recognition/dp/B008VS12VO

JOE1

Joe, R. (2007). The elements of style. Speech Technology, 12(8), 20–24. http://search.proquest.com/docview/212188958

JOH1

Johnstone, A., Berry, U., Nguyen, T., & Asper, A. (1994). There was a long pause: Influencing turn-taking behaviour in human-human and human-computer spoken dialogues. International Journal of Human-Computer Studies, 41, 383–411. https://doi.org/10.1006/ijhc.1995.1018

KAI1

Kaiser, L., Krogh, P., Leathem, C., McTernan, F., Nelson, C., Parks, M. C., & Turney, S. (2008). Thinking outside the box: Designing for the overall user experience. Paper presented at the 2008 Workshop on the Maturation of VUI.

KAR1

Karray, L., & Martin, A. (2003). Towards improving speech detection robustness for speech recognition in adverse conditions. Speech Communication, 40, 261–276. https://doi.org/10.1016/S0167-6393(02)00066-3

KAU1

Kaushansky, K. (2006). Voice authentication – not just another speech application. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 139–142). Victoria, Canada: TMA Associates. https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737

KLA1

Klatt, D. (1987). Review of text-to-speech conversion for English. Journal of the Acoustical Society of America, 82, 737–793. Audio samples available at www.cs.indiana.edu/rhythmsp/ASA/Contents.html. https://doi.org/10.1121/1.395275

KLE1

Kleijnen, M., de Ruyter, K., & Wetzels, M. (2007). An assessment of value creation in mobile service delivery and the moderating role of time consciousness. Journal of Retailing, 83(1), 33–46. https://doi.org/10.1016/j.jretai.2006.10.004

KLI1

Klie, L. (2007). It’s a persona, not a personality. Speech Technology, 12(5), 22–26. http://search.proquest.com/docview/212204672

KLI2

Klie, L. (2010). When in Rome. Speech Technology, 15(3), 20–24. http://search.proquest.com/docview/325176389/

KNO1

Knott, B. A., Bushey, R. R., & Martin, J. M. (2004). Natural language prompts for an automated call router: Examples increase the clarity of user responses. In Proceedings of the Human Factors and Ergonomics Society 48th annual meeting (pp. 736–739). Santa Monica, CA: Human Factors and Ergonomics Society. https://doi.org/10.1177/154193120404800407

KOR1

Kortum, P., & Peres, S. C. (2006). An exploration of the use of complete songs as auditory progress bars. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 2071–2075). Santa Monica, CA: HFES. https://doi.org/10.1177/154193120605001776

KOR2

Kortum, P., & Peres, S. C. (2007). A survey of secondary activities of telephone callers who are put on hold. In Proceedings of the Human Factors and Ergonomics Society 51st annual Meeting (pp. 1153–1157). Santa Monica, CA: HFES. https://doi.org/10.1177/154193120705101821

KOR3

Kortum, P., Peres, S. C., Knott, B. A., & Bushey, R. (2005). The effect of auditory progress bars on consumer’s estimation of telephone wait time. In Proceedings of the Human Factors and Ergonomics Society 49th annual meeting (pp. 628–632). Santa Monica, CA: HFES. https://doi.org/10.1177/154193120504900406

KOT1

Kotan, C., & Lewis, J. R. (2006). Investigation of confirmation strategies for speech recognition applications. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 728–732). Santa Monica, CA: Human Factors and Ergonomics Society. https://doi.org/10.1177/154193120605000524

KOT2

Kotelly, B. (2003). The art and business of speech recognition: Creating the noble voice. Boston, MA: Pearson Education. https://www.amazon.com/Art-Business-Speech-Recognition-Creating/dp/0321154924

KOT3

Kotelly, B. (2006). Six tips for better branding. In W. Meisel (Ed.), VUI Visions: Expert Views on Effective Voice User Interface Design (pp. 61–64). Victoria, Canada: TMA Associates. https://www.amazon.com/VUI-Visions-Expert-Effective-Interface/dp/1412083737

KRA1

Krahmer, E., Swerts, M., Theune, M., & Weegels, M. (2001). Error detection in spoken human-machine interaction. International Journal of Speech Technology, 4, 19–30. https://doi.org/10.1023/A:1009648614566

LAI1

Lai, J., Karat, C.-M., & Yankelovich, N. (2008). Conversational speech interfaces and technology. In A. Sears & J. A. Jacko (Eds.), The human-computer interaction handbook: Fundamentals, evolving technologies, and emerging applications (pp. 381–391). New York, NY: Lawrence Erlbaum. https://www.amazon.com/Human-Computer-Interaction-Handbook-Fundamentals-Technologies-ebook/dp/B0083V45J0

LAR1

Larson, J. A. (2005). Ten guidelines for designing a successful voice user interface. Speech Technology, 10(1), 51–53. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=29608

LEP1

Leppik, P. (2005). Does forcing callers to use self-service work? Quality Times, 22, 1–3. Retrieved February 18, 2009, from http://www.vocalabs.com/resources/newsletter/newsletter22.html

LEP2

Leppik, P. (2006). Developing metrics part 1: Bad metrics. The Customer Service Survey. Retrieved from www.vocalabs.com/resources/blog/C834959743/E20061205170807/index.html

LEP3

Leppik, P. (2012). The customer frustration index. Golden Valley, MN: Vocal Laboratories. Retrieved July 23, 2012, from http://www.vocalabs.com/download-ncss-cross-industry-report-customer-frustration-index-q2-2012

LEP4

Leppik, P., & Leppik, D. (2005). Gourmet customer service: A scientific approach to improving the caller experience. Eden Prairie, MN: VocaLabs. https://www.amazon.com/Gourmet-Customer-Service-Scientific-Experience/dp/0976405504

LEW1

Lewis, J. R. (1982). Testing small system customer set-up. In Proceedings of the Human Factors Society 26th annual meeting (pp. 718–720). Santa Monica, CA: Human Factors Society. https://doi.org/10.1177/154193128202600810

LEW2

Lewis, J. R. (2004). Effect of speaker and sampling rate on MOS-X ratings of concatenative TTS voices. In Proceedings of the Human Factors and Ergonomics Society 48th annual meeting (pp. 759–763). Santa Monica, CA: HFES. https://doi.org/10.1177/154193120404800504

LEW3

Lewis, J. R. (2005). Frequency distributions for names and unconstrained words associated with the letters of the English alphabet. In Proceedings of HCI International 2005: Posters (pp. 1–5). St. Louis, MO: Mira Digital Publication. Available at http://drjim.0catch.com/hcii05-368-wordfrequency.pdf

LEW4

Lewis, J. R. (2006). Effectiveness of various automated readability measures for the competitive evaluation of user documentation. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 624–628). Santa Monica, CA: Human Factors and Ergonomics Society. https://doi.org/10.1177/154193120605000501

LEW5

Lewis, J. R. (2007). Advantages and disadvantages of press or say <x> speech user interfaces (Tech. Rep. BCR-UX-2007-0002). Boca Raton, FL: IBM Corp. Retrieved from http://drjim.0catch.com/2007_AdvantagesAndDisadvantagesOfPressOrSaySpeechUserInter.pdf

LEW6

Lewis, J. R. (2008). Usability evaluation of a speech recognition IVR. In T. Tullis & B. Albert (Eds.), Measuring the user experience, Chapter 10: Case studies (pp. 244–252). Amsterdam, Netherlands: Morgan-Kaufman. https://www.amazon.com/Measuring-User-Experience-Interactive-Technologies/dp/0123735580

LEW7

Lewis, J. R. (2011). Practical speech user interface design. Boca Raton, FL: CRC Press, Taylor & Francis Group. https://www.amazon.com/Practical-Speech-Interface-Factors-Ergonomics-ebook/dp/B008KZ6TAM

LEW8

Lewis, J. R. (2012). Usability testing. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (4th ed.) (pp. 1267–1312). New York, NY: John Wiley. https://www.amazon.com/Handbook-Factors-Ergonomics-Gavriel-Salvendy/dp/0470528389

LEW9

Lewis, J. R., & Commarford, P. M. (2003). Developing a voice-spelling alphabet for PDAs. IBM Systems Journal, 42(4), 624–638. Available at http://drjim.0catch.com/2003_DevelopingAVoiceSpellingAlphabetForPDAs.pdf

LEW10

Lewis, J. R., Commarford, P. M., Kennedy, P. J., & Sadowski, W. J. (2008). Handheld electronic devices. In C. M. Carswell (Ed.), Reviews of human factors and ergonomics, Vol. 4 (pp. 105–148). Santa Monica, CA: Human Factors and Ergonomics Society. Available at http://drjim.0catch.com/2008_HandheldElectronicDevices.pdf

LEW11

Lewis, J. R., Commarford, P. M., & Kotan, C. (2006). Web-based comparison of two styles of auditory presentation: All TTS versus rapidly mixed TTS and recordings. In Proceedings of the Human Factors and Ergonomics Society 50th annual meeting (pp. 723–727). Santa Monica, CA: Human Factors and Ergonomics Society. https://doi.org/10.1177/154193120605000523

LEW12

Lewis, J. R., Potosnak, K. M., & Magyar, R. L. (1997). Keys and keyboards. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 1285–1315). Amsterdam, Netherlands: Elsevier. Available at http://drjim.0catch.com/1997_KeysAndKeyboards.pdf

LEW13

Lewis, J. R., Simone, J. E., & Bogacz, M. (2000). Designing common functions for speech-only user interfaces: Rationales, sample dialogs, potential uses for event counting, and sample grammars (Tech. Rep. 29.3287, available at http://drjim.0catch.com/always-ral.pdf). Raleigh, NC: IBM Corp.

LIB1

Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368. https://doi.org/10.1037/h0044417

LIT1

Litman, D., Hirschberg, J., & Swerts, M. (2006). Characterizing and predicting corrections in spoken dialogue systems. Computational Linguistics, 32(3), 417–438. https://doi.org/10.1162/coli.2006.32.3.417

LOM1

Lombard, E. (1911). Le signe de l’élévation de la voix [The sign of the elevation of the voice]. Annales des maladies de l’oreille et du larynx, 37, 101–119. http://paul.sobriquet.net/wp-content/uploads/2007/02/lombard-1911-p-h-mason-2006.pdf

MAC1

Machado, S., Duarte, E., Teles, J., Reis, L., & Rebelo, F. (2012). Selection of a voice for a speech signal for personalized warnings: The effect of speaker's gender and voice pitch. Work, 41, 3592-3598. https://doi.org/10.3233/WOR-2012-0670-3592

MAR1

Margulies, E. (2005). Adventures in turn-taking: Notes on success and failure in turn cue coupling. In AVIOS 2005 proceedings (pp. 1–10). San Jose, CA: AVIOS.

MAR2

Margulies, M. K. (1980). Effects of talker differences on speech intelligibility in the hearing impaired. Doctoral dissertation, City University of New York.

MAR3

Marics, M. A., & Engelbeck, G. (1997). Designing voice menu applications for telephones. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction, 2nd edition (pp. 1085–1102). Amsterdam, Netherlands: Elsevier. https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626

MAR4

Markowitz, J. (2010). VUI concepts for speaker verification. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 161–166). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

MAS1

Massaro, D. (1975). Preperceptual images, processing time, and perceptual units in speech perception. In D. Massaro (Ed.), Understanding language: An information-processing analysis of speech perception, reading, and psycholinguistics (pp. 125–150). New York, NY: Academic Press. https://www.amazon.com/Understanding-Language-Information-Processing-Perception-Psycholinguistics-ebook/dp/B01JOZRWWA

MCI1

McInnes, F., Attwater, D., Edgington, M. D., Schmidt, M. S., & Jack, M. A. (1999). User attitudes to concatenated natural speech and text-to-speech synthesis in an automated information service. In Proceedings of Eurospeech99 (pp. 831–834). Budapest, Hungary: ESCA. https://www.isca-speech.org/archive/archive_papers/eurospeech_1999/e99_0831.pdf

MCI2

McInnes, F. R., Nairn, I. A., Attwater, D. J., Edgington, M. D., & Jack, M. A. (1999). A comparison of confirmation strategies for fluent telephone dialogues. Edinburgh, UK: Centre for Communication Interface Research. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.473.3649&rep=rep1&type=pdf

MCK1

McKellin, W. H., Shahin, K., Hodgson, M., Jamieson, J., & Pichora-Fuller, K. (2007). Pragmatics of conversation and communication in noisy settings. Journal of Pragmatics, 39, 2159–2184. https://doi.org/10.1016/j.pragma.2006.11.012

MCK2

McKienzie, J. (2009). Menu pauses: How long? [PowerPoint Slides]. Paper presented at SpeechTek 2009. New York, NY: SpeechTek.

MCT1

McTear, M., O’Neill, I., Hanna, P., & Liu, X. (2005). Handling errors and determining confirmation strategies—an object based approach. Speech Communication, 45, 249–269. https://doi.org/10.1016/j.specom.2004.11.006

MIL1

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. http://www2.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA%20Magical%20Seven%20Psych%20Review%201955.pdf

MIL2

Miller, G. A. (1962). Some psychological studies of grammar. American Psychologist, 17, 748–762. http://search.proquest.com/docview/1289830820/

MIN1

Minker, W., Pittermann, J., Pittermann, A., Strauß, P.-M., & Bühler, D. (2007). Challenges in speech-based human-computer interaction. International Journal of Speech Technology, 10, 109–119. https://doi.org/10.1007/s10772-009-9023-y

MOS1

Mościcki, E. K., Elkins, E. F., Baum, H. M., & McNamara, P. M. (1985). Hearing loss in the elderly: An epidemiologic study of the Framingham Heart Study cohort. Ear and Hearing, 6, 184–190. https://doi.org/10.1097/00003446-198507000-00003

MUN1

Munichor, N., & Rafaeli, A. (2007). Numbers or apologies? Customer reactions to telephone waiting time fillers. Journal of Applied Psychology, 92(2), 511–518. https://doi.org/10.1037/0021-9010.92.2.511

NAI1

Nairne, J. (2002). Remembering over the short-term: The case against the standard model. Annual Review of Psychology, 53, 53–81. http://search.proquest.com/docview/205754757

NAS1

Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press. https://www.amazon.com/Wired-Speech-Activates-Human-Computer-Relationship-ebook/dp/B001949SMM

NAS2

Nass, C., & Yen, C. (2010). The man who lied to his laptop: What machines teach us about human relationships. New York, NY: Penguin Group. https://www.amazon.com/Man-Who-Lied-His-Laptop/dp/1617230049

NEM1

Németh, G., Kiss, G., Zainkó, C., Olaszy, G., & Tóth, B. (2008). Speech generation in mobile phones. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems (2nd ed.) (pp. 163–191). New York, NY: Springer. https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X

NOR1

North, A. C., Hargreaves, D. J., & McKendrick, J. (1999). Music and on-hold waiting time. British Journal of Psychology, 90, 161–164. https://doi.org/10.1348/000712699161215

NOV1

Novick, D. G., Hansen, B., Sutton, S., & Marshall, C. R. (1999). Limiting factors of automated telephone dialogues. In D. Gardner-Bonneau (Ed.), Human factors and voice interactive systems (pp. 163–186). Boston, MA: Kluwer Academic Publishers. https://www.amazon.com/Factors-Interactive-International-Engineering-Computer/dp/0792384679

OGD1

Ogden, W. C., & Bernick, P. (1997). Using natural language interfaces. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 137–161). Amsterdam, Netherlands: Elsevier. https://www.amazon.com/Handbook-Human-Computer-Interaction-Second-Helander-dp-0444818626/dp/0444818626

OST1

Ostendorf, M., Kannan, A., Austin, S., Kimball, O., Schwartz, R., & Rohlicek, J. R. (1991). Integration of diverse recognition methodologies through reevaluation of n-best sentence hypotheses. In Proceedings of DARPA Workshop on Speech and Natural Language (pp. 83-87). Stroudsburg, PA: Association for Computational Linguistics. http://acl.ldc.upenn.edu/H/H91/H91-1013.pdf

OSU1

Osuna, E. E. (1985). The psychological cost of waiting. Journal of Mathematical Psychology, 29, 82–105. https://doi.org/10.1016/0022-2496(85)90020-3

PAR1

Parkinson, F. (2012). Alphanumeric confirmation & user data [PowerPoint Slides]. Paper presented at SpeechTek 2012. New York, NY: SpeechTek. Available at http://www.speechtek.com/2012/Presentations.aspx (search for Parkinson in Session B102).

PIE1

Pieraccini, R. (2010). Continuous automated speech tuning and the return of statistical grammars. In W. Meisel (Ed.), Speech in the user interface: Lessons from experience (pp. 255–259). Victoria, Canada: TMA Associates. https://www.amazon.com/Speech-User-Interface-Lessons-Experience/dp/1426926227

PIE2

Pieraccini, R. (2012). The voice in the machine: Building computers that understand speech. Cambridge, MA: MIT Press. https://www.amazon.com/Voice-Machine-Building-Computers-Understand/dp/0262533294

POL1

Polkosky, M. D. (2001). User preference for system processing tones (Tech. Rep. 29.3436). Raleigh, NC: IBM. https://www.researchgate.net/publication/240626208_User_Preference_for_Turntaking_Tones_2_Participant_Source_Issues_and_Additional_Data

POL2

Polkosky, M. D. (2002). Initial psychometric evaluation of the Pragmatic Rating Scale for Dialogues (Tech. Rep. 29.3634). Boca Raton, FL: IBM.

POL3

Polkosky, M. D. (2005a). Toward a social-cognitive psychology of speech technology: Affective responses to speech-based e-service. Unpublished doctoral dissertation, University of South Florida. https://scholarcommons.usf.edu/etd/819/

POL4

Polkosky, M. D. (2005b). What is speech usability, anyway? Speech Technology, 10(9), 22–25. https://www.speechtechmag.com/Articles/Editorial/Features/What-Is-Speech-Usability-Anyway-29601.aspx

POL5

Polkosky, M. D. (2006). Respect: It’s not what you say, it’s how you say it. Speech Technology, 11(5), 16–21. https://www.speechtechmag.com/Articles/Editorial/Features/Ivy-League-IVR-29587.aspx

POL6

Polkosky, M. D. (2008). Machines as mediators: The challenge of technology for interpersonal communication theory and research. In E. Konjin (Ed.), Mediated interpersonal communication (pp. 34–57). New York, NY: Routledge. https://www.amazon.com/Mediated-Interpersonal-Communication-Leas/dp/0805863044

POL7

Polkosky, M. D., & Lewis, J. R. (2002). Effect of auditory waiting cues on time estimation in speech recognition telephony applications. International Journal of Human-Computer Interaction, 14, 423–446. https://doi.org/10.1080/10447318.2002.9669128

POL8

Polkosky, M. D., & Lewis, J. R. (2003). Expanding the MOS: Development and psychometric evaluation of the MOS-R and MOS-X. International Journal of Speech Technology, 6, 161–182. https://doi.org/10.1023/A:1022390615396

RAM1

Ramos, L. (1993). The effects of on-hold telephone music on the number of premature disconnections to a statewide protective services abuse hot line. Journal of Music Therapy, 30(2), 119–129. https://doi.org/10.1093/jmt/30.2.119

REE1

Reeves, B., & Nass, C. (2003). The media equation: How people treat computers, television, and new media like real people and places. Chicago, IL: University of Chicago Press. https://www.amazon.com/Equation-Reeves-Clifford-Language-Paperback/dp/B00E2RJ3GE

REI1

Reinders, M., Dabholkar, P. A., & Frambach, R. T. (2008). Consequences of forcing consumers to use technology-based self-service. Journal of Service Research, 11(2), 107-123. https://doi.org/10.1177/1094670508324297

RES1

Resnick, M. & Sanchez, J. (2004). Effects of organizational scheme and labeling on task performance in product-centered and user-centered web sites. Human Factors, 46, 104-117. https://doi.org/10.1518/hfes.46.1.104.30390

ROB1

Roberts, F., Francis, A. L., & Morgan, M. (2006). The interaction of inter-turn silence with prosodic cues in listener perceptions of “trouble” in conversation. Speech Communication, 48, 1079–1093. https://doi.org/10.1016/j.specom.2006.02.001

ROL1

Rolandi, W. (2003). When you don’t know what you don’t know. Speech Technology, 8(4), 28. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/When-You-Dont-Know-When-You-Dont-Know-29821.aspx

ROL2

Rolandi, W. (2004a). Improving customer service with speech. Speech Technology, 9(5), 14. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Improving-Customer-Service-with-Speech-31763.aspx

ROL3

Rolandi, W. (2004b). Rolandi's razor. Speech Technology, 9(4), 39. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Rolandi%27s-Razor-29820.aspx

ROL4

Rolandi, W. (2005). The impotence of being earnest. Speech Technology, 10(1), 22. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Impotence-of-Being-Earnest-29816.aspx

ROL5

Rolandi, W. (2006). The alpha bail. Speech Technology, 11(1), 56. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Alpha-Bail-30090.aspx

ROL6

Rolandi, W. (2007a). Aligning customer and company goals through VUI. Speech Technology, 12(2), 6. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/Aligning-Customer-and-Company-Goals-Through-VUI-29800.aspx

ROL7

Rolandi, W. (2007b). The pains of main are plainly VUI’s bane. Speech Technology, 12(1), 6. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Pains-of-Main-Are-Plainly-VUIs-Bane-29801.aspx

ROL8

Rolandi, W. (2007c). The persona craze nears an end. Speech Technology, 12(5), 9. https://www.speechtechmag.com/Articles/Archives/The-Human-Factor/The-Persona-Craze-Nears-an-End-36315.aspx

ROS1

Rosenbaum, S. (1989). Usability evaluations versus usability testing: When and why? IEEE Transactions on Professional Communication, 32, 210-216. https://doi.org/10.1109/47.44533

ROS2

Rosenfeld, R., Olsen, D., & Rudnicky, A. (2001). Universal speech interfaces. Interactions, 8(6), 34-44. https://doi.org/10.1145/384076.384085

SAD1

Sadowski, W. J. (2001). Capabilities and limitations of Wizard of Oz evaluations of speech user interfaces. In Proceedings of HCI International 2001: Usability evaluation and interface design (pp. 139–142). Mahwah, NJ: Lawrence Erlbaum. https://www.amazon.com/Usability-Evaluation-Interface-Design-Engineering/dp/0805836071

SAD2

Sadowski, W. J., & Lewis, J. R. (2001). Usability evaluation of the IBM WebSphere “WebVoice” demo (Tech. Rep. 29.3387, available at drjim.0catch.com/vxmllive1-ral.pdf). West Palm Beach, FL: IBM Corp.

SAU1

Sauro, J. (2009). Estimating productivity: Composite operators for keystroke level modeling. In Jacko, J.A. (Ed.), Proceedings of the 13th International Conference on Human–Computer Interaction, HCII 2009 (pp. 352-361). Berlin, Germany: Springer-Verlag. https://doi.org/10.1007/978-3-642-02574-7_40

SAU2

Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Burlington, MA: Morgan Kaufmann. https://learning.oreilly.com/library/view/quantifying-the-user/9780123849687/

SCH1

Schegloff, E. A. (2000). Overlapping talk and the organization of turn-taking for conversation. Language in Society, 29, 1–63. https://doi.org/10.1017/S0047404500001019

SCH2

Schoenborn, C. A., & Marano, M. (1988). Current estimates from the national health interview survey: United States 1987. In Vital and Health Statistics, series 10, #166. Washington, D.C.: Government Printing Office. https://www.cdc.gov/nchs/data/series/sr_10/sr10_166.pdf

SCH3

Schumacher, R. M., Jr., Hardzinski, M. L., & Schwartz, A. L. (1995). Increasing the usability of interactive voice response systems: Research and guidelines for phone-based interfaces. Human Factors, 37, 251–264. https://doi.org/10.1518/001872095779064672

SHE1

Sheeder, T., & Balogh, J. (2003). Say it like you mean it: Priming for structure in caller responses to a spoken dialog system. International Journal of Speech Technology, 6, 103–111. https://doi.org/10.1023/A:1022326328600

SHI1

Shinn, P. (2009). Getting persona – IVR voice gender, intelligibility & the aging. Speech Strategy News (November), 37–39.

SHI2

Shinn, P., Basson, S. H., & Margulies, M. (2009). The impact of IVR voice talent selection on intelligibility. Presentation at SpeechTek 2009. Available at http://www.speechtek.com/2009/program.aspx

SHR1

Shriver, S., & Rosenfeld, R. (2002). Keywords for a universal speech interface. In Proceedings of CHI 2002 (pp. 726-727). Minneapolis, MN: ACM. http://www.cs.cmu.edu/~roni/papers/ShriverRosenfeld02b.pdf

SKA1

Skantze, G. (2005). Exploring human error recovery strategies: Implications for spoken dialogue systems. Speech Communication, 45, 325–341. https://doi.org/10.1016/j.specom.2004.11.005

SPI1

Spiegel, M. F. (1997). Advanced database preprocessing and preparations that enable telecommunication services based on speech synthesis. Speech Communication, 23, 51–62. https://doi.org/10.1016/S0167-6393(97)00039-3

SPI2

Spiegel, M. F. (2003a). Proper name pronunciations for speech technology applications. International Journal of Speech Technology, 6, 419-427. https://doi.org/10.1023/A:1025721319650

SPI3

Spiegel, M. F. (2003b). The difficulties with names: Overcoming barriers to personal voice services. Speech Technology, 8(3), 12-15. https://www.speechtechmag.com/Articles/Editorial/Feature/The-Difficulties-with-Names-29614.aspx

STI1

Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., de Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences, 106(26), 10587–10592. https://doi.org/10.1073/pnas.0903616106

STU1

Studio52. (2019, April 9). 5 Reasons why your IVR should be multilingual. Retrieved from https://studio52.tv/5-reasons-why-your-ivr-should-be-multilingual

SUH1

Suhm, B. (2008). IVR usability engineering using guidelines and analyses of end-to-end calls. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 1-41). New York, NY: Springer. https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X

SUH2

Suhm, B., Freeman, B., & Getty, D. (2001). Curing the menu blues in touch-tone voice interfaces. In Proceedings of CHI 2001 (pp. 131-132). The Hague, Netherlands: ACM. https://doi.org/10.1145/634067.634147

SUH3

Suhm, B., Bers, J., McCarthy, D., Freeman, B., Getty, D., Godfrey, K., & Peterson, P. (2002). A comparative study of speech in the call center: Natural language call routing vs. touch-tone menus. In Proceedings of CHI 2002 (pp. 283–290). Minneapolis, MN: ACM. https://doi.org/10.1145/503376.503427

TOL1

Toledano, D. T., Pozo, R. F., Trapote, Á. H., & Gómez, L. H. (2006). Usability evaluation of multi-modal biometric verification systems. Interacting with Computers, 18, 1101-1122. https://doi.org/10.1016/j.intcom.2006.01.004

TOM1

Tomko, S., Harris, T. K., Toth, A., Sanders, J., Rudnicky, A., & Rosenfeld, R. (2005). Towards efficient human machine speech communication: The speech graffiti project. ACM Transactions on Speech and Language Processing, 2(1), 1-27. https://doi.org/10.1145/1075389.1075391

TOR1

Torres, F., Hurtado, L. F., García, F., Sanchis, E., & Segarra, E. (2005). Error handling in a stochastic dialog system through confidence measures. Speech Communication, 45, 211–229. https://doi.org/10.1016/j.specom.2004.10.014

TUR1

Turunen, M., Hakulinen, J., & Kainulainen, A. (2006). Evaluation of a spoken dialogue system with usability tests and long-term pilot studies: Similarities and differences. In Proceedings of the 9th International Conference on Spoken Language Processing (pp. 1057-1060). Pittsburgh, PA: ICSLP. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.4349&rep=rep1&type=pdf

UNZ1

Unzicker, D. K. (1999). The psychology of being put on hold: An exploratory study of service quality. Psychology & Marketing, 16(4), 327–350. https://doi.org/10.1002/(SICI)1520-6793(199907)16:4<327::AID-MAR4>3.0.CO;2-G

VAC1

Vacca, J. R. (2007). Biometric technologies and verification systems. Burlington, MA: Elsevier. https://www.amazon.com/Biometric-Technologies-Verification-Systems-Vacca/dp/0750679670

VIR1

Virzi, R. A., & Huitema, J. S. (1997). Telephone-based menus: Evidence that broader is better than deeper. In Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting (pp. 315-319). Santa Monica, CA: Human Factors and Ergonomics Society. http://search.proquest.com/docview/235451367

VOI1

Voice Messaging User Interface Forum. (1990). Specification document. Cedar Knolls, NJ: Probe Research.

WAL1

Walker, M. A., Fromer, J., Di Fabbrizio, G., Mestel, C., & Hindle, D. (1998). What can I say?: Evaluating a spoken language interface to email. In Proceedings of CHI 1998 (pp. 582–589). Los Angeles, CA: ACM. http://www.difabbrizio.com/papers/chi98-elvis.pdf

WAT1

Watt, W. C. (1968). Habitability. American Documentation, 19(3), 338–351. https://doi.org/10.1002/asi.5090190324

WEE1

Weegels, M. F. (2000). Users’ conceptions of voice-operated information services. International Journal of Speech Technology, 3, 75–82. https://doi.org/10.1023/A:1009633011507

WIL1

Wilkie, J., McInnes, F., Jack, M. A., & Littlewood, P. (2007). Hidden menu options in automated human-computer telephone dialogues: Dissonance in the user’s mental model. Behaviour & Information Technology, 26(6), 517-534. https://doi.org/10.1080/01449290600717783

WIL2

Williams, J. D., & Witt, S. M. (2004). A comparison of dialog strategies for call routing. International Journal of Speech Technology, 7, 9–24. https://doi.org/10.1023/B:IJST.0000004803.47697.bd

WIL3

Wilson, T. P., & Zimmerman, D. H. (1986). The structure of silence between turns in two-party conversation. Discourse Processes, 9, 375–390. https://doi.org/10.1080/01638538609544649

WOL1

Wolters, M., Georgila, K., Moore, J. D., Logie, R. H., MacPherson, S. E., & Watson, M. (2009). Reducing working memory load in spoken dialogue systems. Interacting with Computers, 21, 276-287. https://doi.org/10.1016/j.intcom.2009.05.009

WRI1

Wright, L. E., Hartley, M. W., & Lewis, J. R. (2002). Conditional probabilities for IBM Voice Browser 2.0 alpha and alphanumeric recognition (Tech. Rep. 29.3498, retrieved from http://drjim.0catch.com/alpha2-acc.pdf). West Palm Beach, FL: IBM.

YAG1

Yagil, D. (2001). Ingratiation and assertiveness in the service provider-customer dyad. Journal of Service Research, 3(4), 345–353. https://doi.org/10.1177/109467050134007

YAN1

Yang, F., & Heeman, P. A. (2010). Initiative conflicts in task-oriented dialogue. Computer Speech and Language, 24, 175–189. https://doi.org/10.1016/j.csl.2009.04.003

YEL1

Yellin, E. (2009). Your call is (not that) important to us: Customer service and what it reveals about our world and our lives. New York, NY: Free Press. https://www.amazon.com/Your-Call-Not-That-Important/dp/1416546898

YUD1

Yudkowsky, M. (2008). The creepiness factor. Speech Technology, 13(8), 4. https://www.speechtechmag.com/Articles/Archives/Industry-View/The-Creepiness-Factor-51037.aspx

YUS1

Yuschik, M. (2008). Silence locations and durations in dialog management. In D. Gardner-Bonneau & H. E. Blanchard (Eds.), Human factors and voice interactive systems, 2nd edition (pp. 231-253). New York, NY: Springer. https://www.amazon.com/Factors-Interactive-Systems-Communication-Technology/dp/038725482X

ZOL1

Zoltan-Ford, E. (1991). How to get people to say and type what computers can understand. International Journal of Man-Machine Studies, 34, 527–547. http://www.speech.kth.se/~edlund/bielefeld/references/zoltan-ford-1991.pdf

ZUR1

Zurif, E. B. (1990). Language and the brain. In D. N. Osherson & H. Lasnik (Eds.), Language: An invitation to cognitive science (pp. 177–198). Cambridge, MA: MIT Press. https://www.amazon.com/Invitation-Cognitive-Science-Vol-Language/dp/0262650339