A multi-purpose platform for conversational AI agents, providing the functionality needed to simulate natural conversation in diverse situations. Our platform employs tools such as intention recognition, interaction control, motion generation, conversational proficiency assessment, and Human-in-the-Loop machine learning, together with dialogue control software that choreographs all platform functions.
Conversation patterns currently supported by our platform include:
In the near future, we plan to support a variety of conversation patterns adaptable for use across industries. Our platform will enable efficient and cost-effective joint decision-making between humans and AI agents at point-of-sale and retail sites.
For our dialogue system, we have designed and developed the Duplex Communication and Transactional Streaming Framework (DUCTS, developed jointly with the Intelligent Framework Research Institute), an open-source communication framework that allows multiple modules to be processed robustly and without delay, even when accessed by many users simultaneously.
A voice search engine (pull-based information transmission), such as a smart speaker, can be built on a conventional web paradigm, since it operates by 1) receiving a user’s voice query and 2) responding with the appropriate search result. In a more “conversational” setting, however, it is not obvious whether the system or the user should speak next, which makes regular, linear server-client communication difficult to manage.
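The pull-based paradigm described above can be sketched as an ordinary request/response function: one query in, one result out, with no question of who speaks next. This is a minimal illustrative sketch; the function name and the lookup table are hypothetical, not part of any real system.

```python
# Pull-based ("voice search") paradigm: every interaction is one complete
# request/response cycle, just like a conventional web API call.
# All names and data here are illustrative.

def voice_search(query: str) -> str:
    """Return a search result for a single user query."""
    index = {
        "weather tomorrow": "Sunny, high of 25.",
        "nearest station": "Two blocks north.",
    }
    return index.get(query, "No results found.")

print(voice_search("weather tomorrow"))  # → Sunny, high of 25.
```

In this model the user always initiates and the system always answers, which is exactly the assumption that breaks down in free-form conversation.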
DUCTS natively supports conversational interaction and advanced asynchronous communication. It consists of a server and clients (currently compatible with mobile devices, the Web, and VR/AR) that communicate in fine-grained data units, such as those delimited by utterance clauses. DUCTS supports text, voice, and multimodal data, and the cloud system scales with the number of concurrent user connections.
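The duplex, clause-level exchange described above can be sketched with plain asyncio: both sides hold a send and a receive channel, the user streams an utterance clause by clause, and the system may respond before the utterance is complete. This is a hedged sketch only, assuming two in-memory queues in place of a real DUCTS transport; all names, clauses, and the barge-in rule are illustrative.

```python
import asyncio

async def user(to_system: asyncio.Queue, from_system: asyncio.Queue):
    # The user streams an utterance in fine-grained units (clauses).
    for clause in ["I'm looking for", "a blue jacket", "in size M"]:
        await to_system.put(clause)
        await asyncio.sleep(0)          # yield so the system can react mid-utterance
    await to_system.put(None)           # end-of-utterance marker
    replies = []
    while (msg := await from_system.get()) is not None:
        replies.append(msg)
    return replies

async def system(from_user: asyncio.Queue, to_user: asyncio.Queue):
    # The system listens clause by clause and may speak before the user finishes.
    clauses = []
    while (clause := await from_user.get()) is not None:
        clauses.append(clause)
        if clause == "a blue jacket":   # illustrative mid-utterance acknowledgement
            await to_user.put("A jacket, got it.")
    await to_user.put("Searching for: " + " ".join(clauses))
    await to_user.put(None)

async def main():
    up, down = asyncio.Queue(), asyncio.Queue()
    replies, _ = await asyncio.gather(user(up, down), system(up, down))
    return replies

print(asyncio.run(main()))
```

Because neither coroutine waits for a full request/response cycle, either party can take the floor at any point, which is the property a linear server-client protocol cannot express.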