MongoDB Inc.

12/02/2024 | News release | Distributed by Public on 12/02/2024 10:22

The MongoDB AI Applications Program: Delivering Customer Value

When people ask me about MongoDB, I tell them that they've probably interacted with MongoDB without realizing it. In fact, many of the world's leading companies, including 70% of the Fortune 100, are powered by MongoDB. Everything we do at MongoDB is about serving our customers, but that often happens in the background, where our work is invisible to many users.

In my case, that means building an ecosystem of partners who enable customer innovation. A recent example is how MongoDB teamed up with Amazon Web Services (AWS) and Amazon Bedrock to help Base39, a Brazilian fintech provider, automate loan analysis, decreasing decision time from three days to one hour and reducing cost per loan analysis by 96%. And there's the Indian company IndiaDataHub, which joined the MongoDB AI Applications Program (MAAP) to access AI expertise, in-depth support, and a full spectrum of technologies to enhance AI functionality within IndiaDataHub's analytics platform. This includes connecting relevant data in MongoDB with Meta's AI models to perform sentiment analysis on text datasets.

I could go on and on; after all, tens of thousands of MongoDB customers have success stories like these. Enabling customer success is precisely why we launched MAAP last summer, and why the program has evolved since.

Customers tell us that they want to take advantage of AI, but they're unsure how to navigate a fast-moving market, how to control costs, and how to unlock business value from their AI investments. So with MAAP, MongoDB offers customers a full AI stack and an integrated set of professional services to help them keep pace with the latest innovations, identify the best AI use cases, and future-proof their AI investments.

With today's announcement, Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured have joined the MAAP partner network, which now comprises 22 companies. This means the MAAP ecosystem (founded with Accenture, Anthropic, Anyscale, Arcee AI, AWS, Cohere, Credal, Fireworks AI, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI) offers customers additional cutting-edge AI integrations and solutions, and more ways to set them on the path to AI success.

CentralReach: Making an impact on autism with AI

More than 150 customers have already gotten involved with MAAP, but I'm particularly excited to share the work of CentralReach.

CentralReach provides an AI-powered electronic medical record (EMR) platform that is designed to improve outcomes for children and adults diagnosed with autism and related intellectual and developmental disabilities (IDD).

Prior to working with MongoDB and MAAP, CentralReach was looking for an experienced partner to further connect and aggregate its more than 4 billion financial and clinical data points across its suite of solutions.

CentralReach leveraged MongoDB's document model to aggregate the company's diverse forms of information, from assessments to clinical data collection, so the company could build rich AI-assisted solutions on top of its database. Meanwhile, MAAP partners helped CentralReach design and optimize multiple layers of its comprehensive buildout. All of this will enable CentralReach to support initiatives such as value-based outcome measurement, clinical supervision, and care delivery efficacy. With these new data layers in place, providers will be able to make substantial improvements to their clinical delivery to optimize care for all those they serve.
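To make the document-model idea concrete, here is a minimal, hypothetical sketch of how two differently shaped records, an assessment and a clinical observation, can coexist in a single collection. All field names are invented for illustration and are not CentralReach's actual schema; a plain Python list stands in for a MongoDB collection.

```python
# Hypothetical, simplified documents; not CentralReach's actual schema.
assessment = {
    "type": "assessment",
    "learner_id": "L-1042",
    "date": "2024-05-14",
    "scores": {"communication": 3, "social": 2},
}
session_note = {
    "type": "clinical_data",
    "learner_id": "L-1042",
    "date": "2024-05-15",
    "observations": ["responded to verbal prompt",
                     "completed task independently"],
}

# Both shapes can live in one collection, so downstream AI features can
# query a learner's full history in a single place.
collection = [assessment, session_note]  # stand-in for a MongoDB collection
history = [doc for doc in collection if doc["learner_id"] == "L-1042"]
print(len(history))  # 2
```

Because the two documents need not share a schema, new record types can be aggregated into the same store without migrations, which is what makes building AI features on top of heterogeneous clinical data tractable.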

"As a mission-driven organization, CentralReach is always looking to innovate on behalf of the clinical professionals-and the more than 350,000 autism and IDD learners-that we serve globally," said Chris Sullens, CEO of CentralReach. "So being able to lean on MongoDBs database technology and draw on the collective expertise of the MAAP partner network-in addition to MongoDB's tech expertise and services-to help us improve outcomes for our customers and their clients worldwide has been invaluable."

Working backward from customer needs

The addition of Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured to the MAAP partner network offers customers additional technology and AI support options. It also builds on MongoDB's larger partner ecosystem, which is designed to give customers flexibility and choice.

By working closely with our partners on product launches, integrations, and real-world challenges, MongoDB has developed a deeper understanding of the challenges facing customers, and can give them the resources and confidence to move forward with groundbreaking technology like AI.

Examples of support MAAP has offered customers include:

  • Guidance on chunking strategies for an AI-native healthcare provider that delivers patient recommendations based on complex data sources

  • Collaboration on advanced retrieval techniques to improve response accuracy for a large consultancy automating manual research

  • Evaluation of embedding models for multi-modal data stores for a well-known automaker developing diagnostic applications

  • Guidance on architectures for complex agentic workflows for a mature enterprise technology provider augmenting customer service workflows

One way we offer this support is through the MAAP Center of Excellence (CoE). The MAAP CoE comprises AI technical experts from across MongoDB and the MAAP partner ecosystem who collaborate with customers to understand their challenges, technical requirements, and timelines. The MAAP CoE can then recommend custom full-stack architectures and implementation best practices, optimized for the customer's specific use case and requirements.

Indeed, customization is intrinsic to MAAP: MongoDB and our MAAP partners will meet customers wherever they are to help them achieve their goals. For example, if an organization wants to fully own its AI application development, MongoDB and partners can provide guidance and expertise. And in cases where customers want hands-on support, we can help speed projects with professional services.

Ultimately, we want MAAP customers-and anyone who works with MongoDB's partner ecosystem at large-to feel empowered to own their application development, and to transform challenges into opportunities. Let's build the next big thing together!

To learn more about building AI-powered apps with MongoDB, see MongoDB's AI Resources Hub, the Partner Ecosystem Catalog, or visit the MAAP page. And check out our partner Confluent's own blog post about MAAP!

AI-Powered Call Centers: A New Era of Customer Service

Customer satisfaction is critical for insurance companies. Studies have shown that companies with superior customer experiences consistently outperform their peers. In fact, McKinsey found that life and property/casualty insurers with superior customer experiences saw 20% and 65% increases in Total Shareholder Return, respectively, over five years. A satisfied customer is a loyal customer: they are 80% more likely to renew their policies, directly contributing to sustainable growth.

However, one major challenge faced by many insurance companies is the inefficiency of their call centers. Agents often struggle to quickly locate and deliver accurate information to customers, leading to frustration and dissatisfaction. This article explores how Dataworkz and MongoDB can transform call center operations. By converting call recordings into searchable vectors (numerical representations of data points in a multi-dimensional space), businesses can quickly access relevant information and improve customer service. We'll dig into how the integration of Amazon Transcribe, Cohere, and MongoDB Atlas Vector Search, as well as Dataworkz's RAG-as-a-service platform, is achieving this transformation.

From call recordings to vectors: A data-driven approach

Customer service interactions are goldmines of valuable insights. By analyzing call recordings, we can identify successful resolution strategies and uncover frequently asked questions. In turn, by making this information, which is often buried in audio files, accessible to agents, we can help them give customers faster and more accurate assistance. However, the vast volume and unstructured nature of these audio files make it challenging to extract actionable information efficiently.
To address this challenge, we propose a pipeline that leverages AI and analytics to transform raw audio recordings into vectors, as shown in Figure 1:

  • Storage of raw audio files: Past call recordings are stored in their original audio format.

  • Processing of the audio files with AI and analytics services (such as Amazon Transcribe Call Analytics): speech-to-text conversion, summarization of content, and vectorization.

  • Storage of vectors and metadata: The generated vectors and associated metadata (e.g., call timestamps, agent information) are stored in an operational data store.

Figure 1: Customer service call insight extraction and vectorization flow

Once the data is stored in vector format within the operational data store, it becomes accessible for real-time applications. This data can be consumed directly through vector search or integrated into a retrieval-augmented generation (RAG) architecture, a technique that combines the capabilities of large language models (LLMs) with external knowledge sources to generate more accurate and informative outputs.

Introducing Dataworkz: Simplifying RAG implementation

Building RAG pipelines can be cumbersome and time-consuming for developers who must learn yet another stack of technologies. Especially in this initial phase, where companies want to experiment and move fast, it is essential to leverage tools that abstract complexity and don't require deep knowledge of each component, so teams can experiment with and realize the benefits of RAG quickly. Dataworkz offers a powerful and composable RAG-as-a-service platform that streamlines the process of building RAG applications for enterprises. To operationalize RAG effectively, organizations need to master five key capabilities:

  • ETL for LLMs: Dataworkz connects with diverse data sources and formats, transforming the data to make it ready for consumption by generative AI applications.
  • Indexing: The platform breaks down data into smaller chunks and creates embeddings that capture semantics, storing them in a vector database.

  • Retrieval: Dataworkz ensures the retrieval of accurate information in response to user queries, a critical part of the RAG process.

  • Synthesis: The retrieved information is then used to build the context for a foundational model, generating responses grounded in reality.

  • Monitoring: With many moving parts in the RAG system, Dataworkz provides robust monitoring capabilities essential for production use cases.

Dataworkz's intuitive point-and-click interface (as seen in Video 1) simplifies RAG implementation, allowing enterprises to quickly operationalize AI applications. The platform offers flexibility and choice in data connectors, embedding models, vector stores, and language models. Additionally, tools like A/B testing ensure the quality and reliability of generated responses. This combination of ease of use, optionality, and quality assurance is a key tenet of Dataworkz's "RAG as a Service" offering.

Diving deeper: System architecture and functionalities

Now that we've looked at the components of the pre-processing pipeline, let's explore the proposed real-time system architecture in detail. It comprises the following modules and functions (see Figure 2):

  • Amazon Transcribe, which receives the audio coming from the customer's phone and converts it into text.

  • Cohere's embedding model, served through Amazon Bedrock, which vectorizes the text coming from Transcribe.

  • MongoDB Atlas Vector Search, which receives the query vector and returns a document containing the most semantically similar FAQ in the database.

Figure 2: System architecture and modules

Here are a couple of FAQs we used for the demo:

Q: "Can you explain the different types of coverage available for my home insurance?"
A: "Home insurance typically includes coverage for the structure of your home, your personal belongings, liability protection, and additional living expenses in case you need to temporarily relocate. I can provide more detailed information on each type if you'd like." Q: "What is the process for adding a new driver to my auto insurance policy?" A: "To add a new driver to your auto insurance policy, I'll need some details about the driver, such as their name, date of birth, and driver's license number. We can add them to your policy over the phone, or you can do it through our online portal." Note that the question is reported just for reference, and it's not used for retrieval. The actual question is provided by the user through the voice interface and then matched in real-time with the answers in the database using Vector Search. This information is finally presented to the customer service operator in text form (see Fig. 3). The proposed architecture is simple but very powerful, easy to implement, and effective. Moreover, it can serve as a foundation for more advanced use cases that require complex interactions, such as agentic workflows , and iterative and multi-step processes that combine LLMs and hybrid search to complete sophisticated tasks. Figure 3: App interface, displaying what has been asked by the customer (left) and how the information is presented to the customer service operator (right) This solution not only impacts human operator workflows but can also underpin chatbots and voicebots, enabling them to provide more relevant and contextual customer responses. Building a better future for customer service By seamlessly integrating analytical and operational data streams, insurance companies can significantly enhance both operational efficiency and customer satisfaction. Our system empowers businesses to optimize staffing, accelerate inquiry resolution, and deliver superior customer service through data-driven, real-time insights. 
To embark on your own customer service transformation, explore our GitHub repository and take advantage of the Dataworkz free tier.
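For reference, against Atlas itself the FAQ lookup in this architecture would typically be issued as a "$vectorSearch" aggregation stage. The sketch below only builds the pipeline document; the index name "faq_index" and the field names "vector", "question", and "answer" are assumptions for illustration, not the demo's actual schema.

```python
# Hypothetical names: index "faq_index", vector field "vector",
# stored fields "question" and "answer".
def faq_search_pipeline(query_vector, limit=1):
    """Build an aggregation pipeline for an Atlas Vector Search lookup."""
    return [
        {"$vectorSearch": {
            "index": "faq_index",
            "path": "vector",
            "queryVector": query_vector,
            "numCandidates": 100,   # breadth of the approximate search
            "limit": limit,         # how many FAQs to return
        }},
        {"$project": {
            "_id": 0,
            "question": 1,
            "answer": 1,
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]

pipeline = faq_search_pipeline([0.12, -0.07, 0.33])
# Against a live cluster this would run as: db.faqs.aggregate(pipeline)
```

The "numCandidates" value trades recall for latency in the approximate nearest-neighbor search; values well above "limit" are the usual starting point.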

New Course for Building AI Applications with MongoDB on AWS

Developers everywhere want to expand the limits of what they can build with new generative AI technologies. But the AI market and its offerings have evolved so quickly that for many developers, keeping up can feel overwhelming. As we've entered the AI era, MongoDB and Amazon Web Services (AWS) have built upon our eight-year partnership to deliver technology integrations, like MongoDB Atlas's integrations with Amazon Bedrock and Amazon Q Developer (formerly CodeWhisperer), that simplify the process of building and deploying gen AI applications. By combining MongoDB's integrated operational and vector database capabilities with AWS's AI infrastructure solutions, our goal is to make it easier for our developer community to innovate with AI. So, to help developers get started, we're launching a new, free MongoDB Learning Badge focused on Building AI Applications with MongoDB on AWS.

Building AI with MongoDB on AWS

This is MongoDB University's first AWS Learning Badge. With it, we've focused on teaching developers how Amazon Bedrock and Atlas work together, including how to create a knowledge base in Amazon Bedrock, configure a knowledge base to use Atlas, inspect how a query is answered, create an agent to answer questions based on data in Atlas, and configure guardrails that support responsible agentic behavior. In short, developers will learn how to remove the heavy lifting of infrastructure configuration and integration so they can get up and running with innovative new semantic search and RAG applications faster.

Amazon Bedrock is a fully managed service from AWS that offers a choice of high-performing foundation models from leading AI companies via a single API, along with a broad set of capabilities organizations need to build secure, high-performing AI applications. Developers can connect Bedrock to MongoDB Atlas for blazing-fast vector searches and secure vector storage with minimal coding.
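As a taste of what the course covers, configuring a Bedrock knowledge base to use Atlas largely comes down to pointing its storage configuration at your cluster. The sketch below reflects our reading of the CreateKnowledgeBase API's MongoDB Atlas options; every value is a placeholder, and the exact field set should be verified against the current AWS documentation.

```python
# Sketch of the storage section of a Bedrock CreateKnowledgeBase request
# pointing at MongoDB Atlas. All values are placeholders; verify field
# names against the current AWS Bedrock API documentation.
storage_configuration = {
    "type": "MONGO_DB_ATLAS",
    "mongoDbAtlasConfiguration": {
        "endpoint": "example-cluster.mongodb.net",      # placeholder host
        "databaseName": "bedrock_db",
        "collectionName": "knowledge_base",
        "vectorIndexName": "vector_index",
        # Placeholder ARN for the Secrets Manager secret holding the
        # Atlas credentials Bedrock uses to connect.
        "credentialsSecretArn": "arn:aws:secretsmanager:region:acct:secret:name",
        "fieldMapping": {
            "vectorField": "embedding",   # where Bedrock writes vectors
            "textField": "text",          # where Bedrock writes chunk text
            "metadataField": "metadata",  # where Bedrock writes source info
        },
    },
}

# With boto3, a dict of this shape would be passed as storageConfiguration
# to a bedrock-agent client's create_knowledge_base(...) call.
print(storage_configuration["type"])
```

The field mapping is the piece the course spends time on: it tells Bedrock which fields in your Atlas collection to populate, and the vector index named here must already exist on the vector field.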
With the integration, developers can use their proprietary data alongside industry-leading foundation models to launch AI applications that deliver hyper-intelligent, hyper-relevant results. Tens of thousands of customers are running MongoDB Atlas on AWS, and many have already embarked successfully on cutting-edge AI journeys. Take Scalestack, for example, which used MongoDB Atlas Vector Search to build a RAG-powered AI copilot named Spotlight, and is now using Bedrock's customizable models to enhance Spotlight's relevance and performance. Meanwhile, Base39, a Brazilian fintech provider, used MongoDB Atlas and Amazon Bedrock to automate loan analysis, decreasing decision time from three days to one hour and reducing cost per loan analysis by 96%.

Badge up with MongoDB

MongoDB Learning Badges are a powerful way to demonstrate your dedication to continuous learning. These digital credentials not only validate your educational accomplishments but also stand as a testament to your expertise and skill. Whether you're a seasoned developer, an aspiring data scientist, or an enthusiastic student, earning a MongoDB badge can elevate your professional profile and unlock new opportunities in your field.

Learn, prepare, and earn

Complete the Learning Badge Path and pass a brief assessment to earn your badge. Upon completion, you'll receive an email with your official Credly badge and digital certificate, ready to share on social media, in email signatures, or on your resume. Additionally, you'll gain inclusion in the Credly Talent Directory, where you will be visible to recruiters from top employers.

Millions of builders have been trained through MongoDB University courses. Join them and get started building your AI future with MongoDB Atlas and AWS. And if you're attending AWS re:Invent 2024, come find MongoDB at Booth #824. The first 100 people to receive their learning badge will receive a special gift!

Start learning today