This section provides sizing recommendations for the Kore.ai XO Platform.
Several factors affect the sizing of a Kore.ai XO Platform deployment, including the number of concurrent conversation sessions and the number of messages per session. Because of its modular, microservices-based design, the Kore.ai XO Platform can be scaled easily to meet customer requirements.
A conversation session is defined as an uninterrupted interaction between the bot and a user. Users include customers who interact with the Kore.ai XO Platform and agents who use the platform for assistance. Sessions are created for all interactions with the virtual assistant and are used in multiple analytics dashboards in the Bot Builder and the Bot Admin Console.
The Kore.ai XO Platform uses a microservices architecture with several modular components, such as RabbitMQ, Redis, Presto, and MongoDB. Each of these components can be scaled independently based on usage. The CPU and memory requirements for these components are aggregated and provided in Table 2.
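To illustrate how per-component requirements might be aggregated into the totals shown in Table 2, the following sketch sums CPU and memory across the components named above. The component names come from the platform architecture; the CPU and memory figures are placeholder assumptions for illustration, not validated sizing data.

```python
# Hypothetical per-component resource requests for a Kore.ai XO deployment.
# The component names come from the platform architecture; the CPU (cores)
# and memory (GiB) values are placeholder assumptions, not validated figures.
components = {
    "RabbitMQ": {"cpu": 2, "memory_gib": 4},
    "Redis":    {"cpu": 2, "memory_gib": 8},
    "Presto":   {"cpu": 4, "memory_gib": 16},
    "MongoDB":  {"cpu": 4, "memory_gib": 16},
}

def aggregate(components):
    """Sum CPU and memory requirements across all components."""
    total_cpu = sum(c["cpu"] for c in components.values())
    total_mem = sum(c["memory_gib"] for c in components.values())
    return total_cpu, total_mem

cpu, mem = aggregate(components)
print(f"Aggregate requirement: {cpu} vCPU, {mem} GiB")
# Aggregate requirement: 12 vCPU, 44 GiB
```

Scaling an individual component (for example, adding Redis replicas) would simply change its entry before the aggregation step.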
Kore.ai uses pretrained models and a rules engine to determine the intent of the user request. Because no large datasets must be trained, there is no significant storage requirement. Enterprise-class flash storage in the PowerEdge servers is sufficient to satisfy the IOPS requirements of Kore.ai.
We recommend three sizing deployments for our validated design: Minimum production, Mainstream, and High performance. The following table describes the recommended sizing for each deployment:
Table 2. Recommended sizing for the Kore.ai XO Platform
| Deployment | Number of concurrent sessions | CPU and memory requirements |
|---|---|---|
| Minimum production | 100 | |
| Mainstream | 2000 | |
| High performance | 5500 | |
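The tiers above can be read as session-capacity bands. As a hypothetical illustration (not part of the validated design), a helper could map an expected concurrent-session count to a deployment tier, treating each tier's session count from Table 2 as its upper bound:

```python
# Hypothetical helper mapping expected concurrent conversation sessions to
# one of the three deployment tiers in Table 2. The thresholds come from the
# table; treating them as upper bounds is an assumption for illustration.
def recommend_tier(concurrent_sessions: int) -> str:
    if concurrent_sessions <= 100:
        return "Minimum production"
    if concurrent_sessions <= 2000:
        return "Mainstream"
    if concurrent_sessions <= 5500:
        return "High performance"
    return "Custom sizing required"

print(recommend_tier(1500))  # Mainstream
```

Workloads beyond 5,500 concurrent sessions fall outside the three validated tiers and would require custom sizing.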