Avaamo Announces Reference Architecture to Scale Conversational AI Deployments

Avaamo’s reference architecture will allow enterprises to focus on the benefits of AI for better customer relationships and a streamlined workforce

Avaamo, a deep-learning software firm that specialises in conversational interfaces for high-impact enterprise problems, announced that it will unveil a reference architecture for artificial intelligence (AI) applications. The architecture will offer enterprises wide-ranging, out-of-the-box configurations that simplify scaling conversational AI to millions of customer interactions.

Designed to make computers understand humans, Avaamo’s conversational AI platform enables large enterprises to deploy high-impact conversational assistants. It offers vertical-specific solutions across several industries, including finance, healthcare and telecommunications.

The platform was assessed on a standard production dual-socket server based on Intel Xeon Gold 6140 processors to determine optimal server configurations for different workloads. This enables businesses to rapidly deploy enterprise-grade virtual assistants on existing data-center resources, without significant investment or a steep learning curve.

Focus on AI benefits

Speaking about the development, Ram Menon, CEO, Avaamo, said, “Our goal from the beginning has been to design and deliver conversational AI technology that effortlessly becomes part of large enterprises. Working with Intel we’ve been able to create a reference architecture that’s a one-stop shop for large enterprises looking to massively scale their in-house conversational AI deployments. This technology will allow enterprises to focus on the benefits of AI for better customer relationships and a streamlined workforce.”

The conversational AI platform has been optimised for Intel technologies and is designed to address the traditional cold-start problem in AI by:

  • Ingesting unclassified data
  • Performing unsupervised machine learning (ML) model creation
  • Optimising the model for runtime execution
  • Enhancing the ML model with customer-specific knowledge resources
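
The four steps above can be illustrated with a minimal sketch. This is not Avaamo's implementation, which is not public; every function and name here is hypothetical, and a simple token-overlap clustering stands in for real unsupervised ML:

```python
# Hypothetical sketch of the cold-start pipeline described above.
# All names are illustrative; the real system's internals are not public.
from collections import Counter

def ingest(raw_utterances):
    """Step 1: ingest unclassified data -- normalise raw text."""
    return [u.strip().lower() for u in raw_utterances if u.strip()]

def build_model(utterances, threshold=0.5):
    """Step 2: unsupervised model creation -- greedy clustering by
    token overlap (a toy stand-in for real unsupervised ML)."""
    clusters = []
    for u in utterances:
        tokens = set(u.split())
        for c in clusters:
            overlap = len(tokens & c["tokens"]) / max(len(tokens | c["tokens"]), 1)
            if overlap >= threshold:
                c["members"].append(u)
                c["tokens"] |= tokens
                break
        else:
            clusters.append({"tokens": set(tokens), "members": [u]})
    return clusters

def optimise(clusters):
    """Step 3: optimise for runtime -- keep only each cluster's
    most frequent tokens as lightweight match keywords."""
    for c in clusters:
        counts = Counter(t for m in c["members"] for t in m.split())
        c["keywords"] = {t for t, _ in counts.most_common(5)}
    return clusters

def enhance(clusters, knowledge):
    """Step 4: enhance with customer-specific knowledge, here a
    simple synonym map supplied by the customer."""
    for c in clusters:
        c["keywords"] |= {knowledge[k] for k in c["keywords"] if k in knowledge}
    return clusters
```

Running the pipeline end to end on a few utterances, e.g. `enhance(optimise(build_model(ingest(data))), {"password": "credentials"})`, yields intent-like clusters enriched with domain synonyms, which is the shape of output a cold-started assistant needs before any labelled training data exists.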

A single server in a 72-core configuration can handle up to 100 concurrent sessions. This gives larger enterprises considerable flexibility to share powerful Intel hardware across both standard and AI-specific computing workloads.
