Machine Learning-based NLP models typically take a long time to produce accurate results, which can make enterprises hesitant to adopt Conversational AI. Kore.ai mitigates this hesitancy with a unique triple-engine approach that combines Machine Learning, Fundamental Meaning, and Knowledge Graph engines for maximum conversation accuracy with little upfront training. This approach enables virtual assistants to understand and process multi-sentence messages, multiple intents, contextual references, patterns, idiomatic sentences, and more. The XO Platform uses the three engines as shown in the following figure:
Figure 1. Three-engine approach of XO Platform
- The Machine Learning Engine uses ML algorithms and queries training data to determine the best match for a user’s intent, and searches for patterns to train and tune the NL engine. This lets the engine leverage large sets of training data from the start.
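The matching step can be illustrated with a minimal sketch. This is not Kore.ai's implementation: the intents, example utterances, and the bag-of-words cosine similarity used here are all illustrative stand-ins for whatever ML algorithm the engine actually applies to its training data.

```python
from collections import Counter
import math

# Hypothetical training data: example utterances labeled with intents.
TRAINING = {
    "check balance": ["what is my balance", "show my account balance"],
    "transfer funds": ["send money to a friend", "transfer funds to savings"],
}

def vectorize(text):
    """Bag-of-words term frequencies for a lowercased utterance."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_intent(utterance):
    """Return the (intent, score) pair most similar to the utterance."""
    query = vectorize(utterance)
    scores = {
        intent: max(cosine(query, vectorize(u)) for u in examples)
        for intent, examples in TRAINING.items()
    }
    return max(scores.items(), key=lambda kv: kv[1])
```

For example, `best_intent("show my balance")` resolves to the `check balance` intent because that utterance overlaps most with its training examples. A production engine would use a trained classifier rather than raw token overlap, but the shape of the task is the same: score the user's input against training data and keep the best match.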
- The Fundamental Meaning Engine understands semantics, grammar, rules, and language nuances. It allows a virtual assistant to derive meaning by parsing a user’s input for word position, conjugation, capitalization, plurality, and other factors, recognizing the intent and extracting the key entities, or field data, needed to complete a task.
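A toy parser conveys the idea of meaning- and position-based analysis. The task, verb table, and prepositional patterns below are invented for illustration; the actual engine applies far richer linguistic rules.

```python
import re

# Hypothetical task: booking a flight. A semantic rule ties the verb
# to the intent; positional rules (the words after "from" and "to")
# tie tokens to the entities the task needs.
INTENT_VERBS = {"book": "book_flight", "cancel": "cancel_flight"}

def parse(utterance):
    """Return (intent, entities) extracted from a single utterance."""
    text = utterance.lower()
    tokens = text.split()
    intent = next((INTENT_VERBS[t] for t in tokens if t in INTENT_VERBS), None)
    entities = {}
    m = re.search(r"\bfrom (\w+)", text)
    if m:
        entities["source"] = m.group(1)
    m = re.search(r"\bto (\w+)", text)
    if m:
        entities["destination"] = m.group(1)
    return intent, entities
```

Here `parse("Book a flight from Paris to Rome")` yields the `book_flight` intent with `paris` and `rome` as the source and destination entities, which is the kind of field data a task needs to run to completion.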
- The Knowledge Graph Engine provides an ontology of domain terms and the relationships between them, creating a framework that limits false positives. Synonyms for each term extend the identification possibilities without requiring large numbers of alternative training questions. The Ontology Generator can identify the key domain terms in Q&As and generate an ontology automatically.
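The false-positive-limiting role of the ontology can be sketched as follows. The terms, synonyms, and FAQs are hypothetical; the point is that a question qualifies only when every ontology term on its path (or a synonym) appears in the user's input, so synonyms widen coverage without loosening the match.

```python
# Hypothetical ontology: each domain term maps to a set of synonyms.
ONTOLOGY = {
    "payment": {"pay", "bill", "invoice"},
    "card": {"credit", "debit", "visa"},
    "address": {"location"},
}

# Each FAQ is tagged with the ontology terms that lead to it.
FAQS = {
    "How do I pay my card bill?": {"payment", "card"},
    "How do I update my address?": {"address"},
}

def matching_faqs(utterance):
    """Return FAQs whose ontology terms all appear (directly or via
    a synonym) in the utterance; missing terms block the match."""
    words = set(utterance.lower().split())
    return [
        question
        for question, terms in FAQS.items()
        if all(t in words or (ONTOLOGY.get(t, set()) & words) for t in terms)
    ]
```

With this data, the input "pay my visa" matches only the card-bill FAQ: "visa" satisfies the `card` term through its synonym set, while the address FAQ is excluded because none of its terms appear.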
All three Kore.ai engines deliver their findings to the Kore.ai Ranking and Resolver component as either exact matches or probable matches. The Ranking and Resolver component then determines the top scorer of the entire NLP computation.
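The resolution step might be sketched like this. The findings, scores, and tie-breaking rule are assumptions for illustration, not Kore.ai's actual scoring model: exact matches take priority over probable ones, and aggregate score across engines decides the winner.

```python
# Hypothetical findings from the three engines: each reports an
# (intent, score) pair flagged as an exact or a probable match.
findings = [
    {"engine": "ML", "intent": "check_balance", "score": 0.82, "exact": False},
    {"engine": "FM", "intent": "check_balance", "score": 0.74, "exact": False},
    {"engine": "KG", "intent": "faq_fees", "score": 0.61, "exact": False},
]

def resolve(findings):
    """Pick the winning intent: exact matches beat probable ones,
    then the aggregate score across engines breaks the tie."""
    exact = [f for f in findings if f["exact"]]
    pool = exact if exact else findings
    totals = {}
    for f in pool:
        totals[f["intent"]] = totals.get(f["intent"], 0.0) + f["score"]
    return max(totals.items(), key=lambda kv: kv[1])
```

With the sample findings above, `check_balance` wins because two engines agree on it and their combined score outweighs the Knowledge Graph's single probable match.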