Deploying LLM Applications - From Production to Performance Optimization

  • Dec 17 2024
  • Length: 20 mins
  • Podcast


  • Summary

  • Learn how to successfully deploy Large Language Model (LLM) applications in this practical chapter. Discover key techniques for adapting data pipelines, such as semantic caching and feature injection, and explore strategies that optimize inference for faster processing and lower latency while keeping cloud platform costs under control.
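    A minimal sketch of the semantic-caching idea mentioned above: reuse a stored response when a new prompt is similar enough to one seen before, skipping a repeat LLM call. The bag-of-words similarity, the `SemanticCache` class, and the 0.9 threshold are all illustrative assumptions, not details from the chapter; a real system would use a sentence-embedding model.

    ```python
    import math
    import re
    from collections import Counter

    def embed(text):
        # Toy "embedding": a bag-of-words count vector. A production system
        # would use a real embedding model; this stand-in only illustrates
        # the caching logic.
        return Counter(re.findall(r"\w+", text.lower()))

    def cosine(a, b):
        dot = sum(count * b.get(token, 0) for token, count in a.items())
        norm_a = math.sqrt(sum(c * c for c in a.values()))
        norm_b = math.sqrt(sum(c * c for c in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    class SemanticCache:
        """Return a cached response when a new prompt is close enough
        to an earlier one, avoiding a redundant LLM call."""

        def __init__(self, threshold=0.9):
            self.threshold = threshold
            self.entries = []  # list of (embedding, response) pairs

        def get(self, prompt):
            query = embed(prompt)
            for emb, response in self.entries:
                if cosine(query, emb) >= self.threshold:
                    return response  # cache hit: no LLM call needed
            return None  # cache miss: caller invokes the LLM, then put()

        def put(self, prompt, response):
            self.entries.append((embed(prompt), response))

    cache = SemanticCache(threshold=0.9)
    cache.put("What is the capital of France?", "Paris")
    hit = cache.get("what is the capital of france")   # near-duplicate phrasing
    miss = cache.get("Explain prompt chaining")        # unrelated prompt
    ```

    Here `hit` returns the cached "Paris" answer because the rephrased prompt maps to the same token vector, while the unrelated prompt falls through to `None` and would trigger a fresh model call.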

    The chapter also covers important aspects of user interface design and explains how to orchestrate AI agents for complex tasks using methods like prompt splitting and chaining to interact efficiently with external data.
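    The prompt splitting and chaining mentioned above can be sketched as a pipeline where a complex task is broken into smaller prompts and each step's output is fed into the next step's template. The `fake_llm` stand-in and the step templates below are hypothetical placeholders, not the chapter's actual prompts.

    ```python
    def fake_llm(prompt):
        # Hypothetical stand-in for a real LLM API call; it simply echoes
        # the prompt so the chaining flow is visible in the output.
        return f"[response to: {prompt}]"

    def chain(steps, user_input, llm=fake_llm):
        # Prompt chaining: run the steps in order, substituting each
        # step's output into the next step's {input} slot.
        result = user_input
        for template in steps:
            result = llm(template.format(input=result))
        return result

    steps = [
        "Extract the key entities from: {input}",
        "Draft a one-sentence summary using: {input}",
    ]

    out = chain(steps, "LLMs cut latency with semantic caching.")
    ```

    The final string nests both step prompts, showing how each stage consumed the previous stage's output; with a real model, each call would return generated text instead of an echo.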

    Finally, see how user-friendly platforms simplify the development and deployment process, helping you bring LLM-powered applications to production faster and more effectively.


