Join us for our January Brown Bag session featuring Deborah Dahl. Registration is free and open to all through Eventbrite.
Tuesday, January 21 at 12 PM – 1 PM CST
Register For This Event
Natural language toolkits like Google Dialogflow, Microsoft LUIS, and the Alexa Skills Kit are powerful tools for developing natural language applications, but they assume that developers start out with a set of entities and intents in mind. The tools give developers a way to execute their designs, but the designs themselves are left up to the developer. This talk will discuss how to find entities and intents in the first place.

There are two complementary scenarios. In the first, there is an existing corpus of utterances collected from sources such as call center logs, chatbot logs, logs from earlier versions of the system, and even simulations. This is the best case, and for it we discuss generic tools for identifying potential entities from a corpus. For example, using part-of-speech tagging, we can identify proper nouns, which are often good candidates for entity values. In the second scenario, where there is no preexisting corpus, potential entities can be found by reviewing the concepts that appear in other resources, such as back-end APIs and databases, customer service websites, and data about where users click on websites. Starting development with a well-established set of entities significantly reduces the time spent reannotating, testing, and redoing back-end integrations.
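To make the corpus-driven scenario concrete, here is a minimal sketch of the proper-noun idea described above. It assumes a Python environment with spaCy and its small English model installed, and uses a few made-up utterances standing in for call center or chatbot logs; the talk itself does not prescribe any particular toolkit.

```python
# Sketch: mine candidate entity values from a corpus of utterances
# by part-of-speech tagging and collecting proper nouns.
# spaCy is used here purely as an illustration.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical corpus, e.g. exported from call center or chatbot logs.
utterances = [
    "I want to fly from Boston to Denver on Friday",
    "Does United have anything to Denver tomorrow?",
    "Book me a seat on Delta out of Boston",
]

candidates = Counter()
for doc in nlp.pipe(utterances):
    for token in doc:
        if token.pos_ == "PROPN":  # proper nouns are often entity values
            candidates[token.text] += 1

# Frequent proper nouns (Boston, Denver, United, Delta) suggest
# entities such as "city" and "airline" worth defining explicitly.
print(candidates.most_common(10))
```

In practice, the frequent proper nouns surfaced this way would then be grouped by hand (cities, airlines, product names, and so on) into candidate entity types before annotation and back-end integration begin.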
About Deborah Dahl
I focus on designing and building innovative applications of speech and natural language technology. I work with all kinds of customers, including startups, large enterprises, and government agencies. I frequently speak at industry conferences such as the Conversational Interaction Conference, Voice Summit, and SpeechTEK. I also have extensive experience in speech, multimodal, and accessibility standards activities in the World Wide Web Consortium, having served as Chair of the Multimodal Interaction Working Group. I am a member of the Board of Directors of AVIOS (the Applied Voice Input Output Society), a member of the Editorial Board of Speech Technology Magazine, and a co-chair of the SpeechTEK conference program. I have over 30 years of experience in speech and natural language technologies, including work on research, defense, government, and commercial systems. In addition to my three books, I have published many technical papers and book chapters.
1 Comment
I registered for this event but have not received call-in information.