2020 Board Election Results

We’re pleased to announce that Jamey White (Concentrix), Amy Goodwin (Verizon), and Crispin Reedy (Versay) will be returning to the board. These members will serve a two-year term.

Daniel O’Sullivan: Challenges with Implementing Adaptive Interfaces for Voice Self Service

Join us for our February Brown Bag session featuring Daniel O’Sullivan. Registration is free for all interested through Eventbrite.

Tuesday, February 11 at 12 PM – 1 PM CST
Register For This Event

This presentation provides the audience with an overview of the challenges and benefits of implementing voice interfaces that adapt in real time based on user behavior. Production results from deployments at banks, a travel service, and a health insurer are presented. Learn when (and whether) it makes economic sense to apply adaptive principles to various types of voice applications.

About Daniel O’Sullivan
Dan is the Founder and CEO of Gyst Technologies, a software development company dedicated to improving the efficiency and productivity of speech-related technologies. This is Dan’s third speech technology startup; he successfully sold both prior companies. Dan has an undergraduate degree in Electrical Engineering from the Dublin Institute of Technology in Ireland and a Master’s in Computer Science from NYU Tandon School of Engineering. He is also a former Member of Technical Staff at Bell Labs, has more than 20 years of experience in the call center/speech/IVR business, and is the current Programming Chair and a Director at the MIT Enterprise Forum New York City.

Call for Nominations – 2020 ACIxD Board Elections

The board of the Association for Conversational Interaction Design (ACIxD) is preparing for the annual election of new board members.

Three seats are up for re-election. The current members whose terms are complete are Crispin Reedy (Versay), Amy Goodwin (Verizon), and Jamey White (Concentrix). Members who have a year left to serve are Helen Vanscoy (PTP), Mark Smolensky (AT&T), Dawn Harpster (PTP), Kristie Goss Flenord (Concentrix), Shelley Moore (Verizon), and Leslie Carroll Walker (Concentrix).

Any number of candidates may run. You can nominate yourself or someone else (we definitely encourage self-nomination — if nominating someone else, please first make sure they’ve agreed to serve if elected).

If you would like to run for the board in this election, please email contact@acixd.org by Friday, February 14th. In that email, please include a brief statement about your candidacy, which we will publish to members during the election. Please limit statements to 500 words or fewer.

After the election, the board will determine its officers (President, Secretary, Treasurer, etc.). In terms of time commitment, the board meets by phone once every month, with additional meetings called as necessary. Apart from this, the workload varies, depending on current ACIxD activities. Some weeks as little as 1 hour is required; at other times, 5 or more hours/week may be required.

Elections will be conducted via online ballot, starting February 17th and ending March 2nd. Candidates must be current voting members of ACIxD. If you’re not a member and want to participate in the election, just join at the Professional level. If you have any questions about your membership status for the election, email us at contact@acixd.org.

Thank you very much, and we look forward to hearing from you!

ACIxD 2020 Election Committee

Deborah Dahl: Finding Entities for Natural Language Applications

Join us for our January Brown Bag session featuring Deborah Dahl. Registration is free for all interested through Eventbrite.

Tuesday, January 21 at 12 PM – 1 PM CST
Register For This Event

Natural language toolkits like Google Dialogflow, Microsoft LUIS, and the Alexa Skills Kit are powerful tools for developing natural language applications, but they assume that developers start out with a set of entities and intents in mind. The tools themselves give developers a way to execute their designs, but the designs are left up to the developer. This talk will discuss how to find entities and intents in the first place.

There are two complementary scenarios. In the first (and best) case, there is an existing corpus of utterances collected from sources such as call center logs, chatbot logs, logs from earlier versions of the system, and even simulations. For that scenario, we discuss generic tools that are available for identifying potential entities in a corpus. For example, using part-of-speech tagging, we can identify proper nouns, which often have a good chance of being the values of entities. In the other scenario, where there isn’t a preexisting corpus, potential entities can be found by reviewing the concepts in other resources: back-end resources such as APIs and databases, customer service websites, and information about where users are clicking on websites.

Starting development with a good set of entities already established will significantly reduce time spent reannotating, testing, and redoing back-end integrations.
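As a minimal sketch of the corpus-based idea described above (this is not code from the talk), the toy heuristic below stands in for real part-of-speech tagging: it flags capitalized, non-sentence-initial tokens as candidate proper nouns and counts them across a hypothetical set of call-center utterances. A production pipeline would use an actual tagger (e.g., NLTK or spaCy) rather than capitalization alone.

```python
import re
from collections import Counter

def candidate_entities(utterances):
    """Count capitalized tokens that are not sentence-initial.

    Such tokens are often proper nouns, and proper nouns are good
    candidates for entity values (cities, product names, etc.).
    A real implementation would use a part-of-speech tagger instead
    of this capitalization heuristic.
    """
    counts = Counter()
    for utt in utterances:
        for i, tok in enumerate(utt.split()):
            word = re.sub(r"[^\w']", "", tok)  # strip punctuation
            if i > 0 and word[:1].isupper():   # skip sentence-initial tokens
                counts[word] += 1
    return counts

# Hypothetical utterances, as might come from call center logs
log = [
    "I want to fly to Boston next Tuesday",
    "Is there a flight from Boston to Denver",
    "Book me on the Tuesday flight to Denver",
]
print(candidate_entities(log).most_common(3))
# → [('Boston', 2), ('Tuesday', 2), ('Denver', 2)]
```

The candidates surfaced this way (here, cities and days of the week) would then be reviewed by a designer and grouped into entity types before any annotation begins.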

About Deborah Dahl
I focus on designing and building innovative applications of speech and natural language technology. I work with all kinds of customers, including startups, large enterprises, and government agencies. I frequently speak at industry conferences such as the Conversational Interaction Conference, Voice Summit, and SpeechTEK. I also have extensive experience in speech, multimodal, and accessibility standards activities in the World Wide Web Consortium, having served as Chair of the Multimodal Interaction Working Group. I am a member of the Board of Directors of AVIOS (the Applied Voice Input Output Society), a member of the Editorial Board of Speech Technology Magazine, and a co-chair of the SpeechTEK conference program. I have over 30 years of experience in speech and natural language technologies, including work on research, defense, government, and commercial systems. In addition to my three books, I have published many technical papers and book chapters.

George Salazar: Challenges of Implementing Voice Control for Space Applications

In our November Brown Bag, George Salazar, Human-Computer Technical Discipline Lead, Avionics Systems, NASA/Johnson Space Center, speaks to us about designing voice control applications for astronauts. Registration is free for all interested through Eventbrite.

Tuesday, November 5 at 12 PM – 1 PM CST
Register For This Event

This presentation provides the audience with an overview of the challenges of implementing voice control in space applications, spanning the hardware, the software, the environment, and, most importantly, the astronaut. Past voice control applications in space are reviewed. Learn how to apply key lessons from these applications to applications here on Earth.

Mr. George Salazar received his Bachelor of Science in Electrical Engineering from the University of Houston and his Master of Science in Systems Engineering from Southern Methodist University. He has over 35 years of experience in telemetry, communications, speech control, command and data handling, audio, displays and controls, intelligent lighting, project management, and systems engineering. He has been involved in the design of advanced telemetry, speech recognition, and intelligent systems, work for which he has received various patents. He is currently serving at NASA’s Johnson Space Center as the Human Computer Interface Technical Discipline Lead, developing advanced human interfaces, as well as serving as the Displays and Controls Subsystem Manager for the Commercial Crew Program. He is a registered professional engineer in the state of Texas. In addition, he holds Expert Systems Engineering Professional certification from the International Council on Systems Engineering.