09/25/2024 | Press release
Artificial intelligence is revolutionizing the way we work and innovate. The collaboration between Dell Technologies and Meta continues to push the boundaries of what's possible in the AI ecosystem. Our work with Meta centers on creating on-premises AI infrastructure that integrates seamlessly with Llama models and the newly launched Llama Stack. This relationship enables organizations to develop sophisticated AI solutions, free from the complexities and constraints associated with cloud-only environments.
As leaders in the industry, we're proud to be the first on-premises infrastructure provider to offer standardized AI solutions optimized for the Llama ecosystem.1 Our approach sets a new standard in open-source AI integration, delivering innovative solutions and fostering collaboration.
Dell AI Solutions with Llama are carefully designed to complement the Llama Stack and dramatically simplify the developer experience with Llama models. The Llama Stack defines and standardizes the software building blocks needed to build generative AI applications. It supports various stages of the AI development lifecycle, from initial model training to fine-tuning, through deployment in production environments. The latest updates to Llama Stack include tight integration with the full suite of PyTorch libraries, enabling turnkey deployment of retrieval augmented generation (RAG) and tooling-enabled applications.
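The RAG pattern that Llama Stack standardizes can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the Llama Stack API: the bag-of-words "embedding" and the document list are placeholders for a real embedding model and vector store, and the assembled prompt is what would be sent to a Llama model for generation.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real deployment would use an
    # embedding model served through the stack.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the user question with retrieved context; this prompt is
    # what a generation model would receive.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The PowerEdge XE9680 supports up to eight GPUs.",
    "Llama Guard screens model inputs and outputs.",
    "RAG grounds model answers in retrieved documents.",
]
print(build_prompt("How many GPUs does the XE9680 support?", docs))
```

The retrieval step is what distinguishes RAG from plain prompting: the model answers from documents the enterprise controls rather than from its training data alone.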
By combining Llama Stack with Dell's AI Factory, organizations gain enterprise-grade infrastructure that makes it easy to prototype and build agent-based AI applications with Llama models. Powered by the Dell PowerEdge XE9680 and equipped with NVIDIA H100 GPUs, this reference architecture delivers the performance and scalability needed to simplify application development.
Dell AI Solutions with Llama offer exceptional computational power, reliability and scalability, making it easy to implement agentic workflows that involve multiple models and memory banks collaborating to solve complex business problems. The reference architecture with the Llama Stack simplifies the implementation of enterprise use cases like inferencing, RAG and synthetic data generation. Integrating these with Llama Guard models allows enterprises to establish guardrails for added safety.
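The guarded agentic workflow described above can be sketched in a few lines. This is a conceptual illustration only: `guard`, `planner` and `worker` are hypothetical stand-ins for a Llama Guard safety check, a planning model and a worker model, and the `memory` list stands in for a memory bank shared across steps.

```python
def guard(text: str) -> bool:
    # Stand-in for Llama Guard: reject requests containing disallowed terms.
    blocked = {"exploit", "malware"}
    return not any(term in text.lower() for term in blocked)

def planner(task: str) -> list[str]:
    # Stand-in for a planning model: split a task into steps.
    return [s.strip() for s in task.split(";") if s.strip()]

def worker(step: str, memory: list[str]) -> str:
    # Stand-in for a worker model: "solve" a step and record the result
    # in the shared memory bank.
    result = f"done: {step}"
    memory.append(result)
    return result

def run(task: str) -> list[str]:
    # Guardrail first, then plan, then execute each step against memory.
    if not guard(task):
        return ["request rejected by guardrail"]
    memory: list[str] = []
    for step in planner(task):
        worker(step, memory)
    return memory

print(run("summarize Q3 report; draft customer email"))
```

The key point is the composition: a safety model screens the request before any worker model runs, and each step's output lands in a memory bank the next step can read.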
"Our collaboration with Dell Technologies is transforming the AI landscape by integrating cutting-edge Llama models with market-leading Dell AI Solutions. Dell offers standardized on-premises deployment solutions for the Llama Stack, optimized for seamless integration with existing frameworks. Together, we enable large enterprises to deploy and scale the Llama Stack effortlessly, empowering them to retain control over data within their own data center."
-Ahmad Al-Dahle, Head of GenAI, Meta
AI application development can present numerous challenges, from high costs and scalability issues to difficulty in procuring necessary resources. Choosing the right architecture requires careful model selection and optimization, while the complexity of integrating and maintaining AI solutions further complicates the process. These hurdles necessitate a strategic approach combining technical proficiency and resource investment to achieve success.
Dell AI Solutions with Llama address these challenges and provide several benefits for data scientists, developers and IT decision-makers.
These benefits translate into improved efficiency, reduced costs and a competitive advantage in the rapidly evolving AI landscape.
Meta is committed to building advanced open models to accelerate generative AI adoption. The release of Llama 3.2 marks a significant advancement in AI capabilities, empowering developers to create a new generation of AI experiences that are private, efficient and responsible. Llama 3.2 introduces a versatile suite of multilingual models, ranging from 1B to 90B parameters, capable of processing both text and images. These models include lightweight text-only options (1B and 3B) as well as vision LLMs (11B and 90B), supporting long context lengths and optimized for inference with grouped-query attention. Designed for accessibility, privacy and responsible innovation, Llama 3.2 provides enterprises with a powerful tool to deploy AI across diverse environments and use cases.
Highlights of Llama 3.2 include:
Llama 3.2 11B and 90B support image reasoning, including document-level understanding, such as interpreting charts and graphs, and captioning images. Llama 3.2 also supports real-time and batch inference, optimized for both interactive and high-throughput workloads. Deployed with Dell's AI Factory solutions, these new multimodal models can be used securely for applications such as detecting manufacturing defects, improving healthcare diagnostic accuracy and streamlining retail inventory management.
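The difference between interactive and high-throughput serving comes down to how requests reach the model. A minimal sketch of the batch side, under the assumption of a model endpoint that accepts a list of prompts per forward pass (`fake_model` below is a hypothetical stand-in for such an endpoint):

```python
from typing import Callable

def batch_infer(requests: list[str],
                model: Callable[[list[str]], list[str]],
                max_batch: int = 8) -> list[str]:
    # Group requests into fixed-size batches so the accelerator processes
    # several prompts per forward pass (the high-throughput mode), rather
    # than one request at a time (the interactive, low-latency mode).
    outputs: list[str] = []
    for i in range(0, len(requests), max_batch):
        outputs.extend(model(requests[i:i + max_batch]))
    return outputs

def fake_model(batch: list[str]) -> list[str]:
    # Hypothetical stand-in for a deployed Llama endpoint.
    return [f"reply to: {p}" for p in batch]

print(batch_infer(["a", "b", "c"], fake_model, max_batch=2))
```

Real serving stacks add dynamic batching and per-request latency budgets on top of this idea, but the throughput-versus-latency trade-off is the same.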
Dell Technologies offers guidance on AI target use cases, data management requirements, operational skills and processes. Dell Services experts can assist you at every stage of your GenAI journey and drive faster, more holistic business outcomes with Llama models. Our AI services experts work with your team to define the key opportunities, challenges and priorities.
Unsure where to start? Contact your Dell sales rep for a free half-day facilitated workshop to determine how your business can benefit from GenAI.
Learn more about Dell AI Solutions here.
Together, Dell and Meta are committed to fostering an open and inclusive AI community, where developers and businesses can collaborate and innovate. Whether you're looking to enhance existing AI workflows or pioneer new solutions, Dell and Meta's collaboration offers the resources and support needed to turn your AI vision into reality.
1 Based on internal analysis, September 2024.