Episodes

  • From tinyML to the edge of AI: Introducing the EDGE AI FOUNDATION
    Nov 25 2024

    Discover how the EDGE AI FOUNDATION is evolving from its roots in the tinyML Foundation to becoming a central hub for innovation and collaboration. Learn how initiatives like EDGE AI Labs and the EDGE AIP program are bridging the gap between academia and industry, training future AI leaders while tackling the ethical challenges of responsible AI development.

    Explore the transformative potential of generative AI on edge devices, from providing vital healthcare diagnostics in remote areas to enabling adaptive robotics in factories. We'll highlight compelling reasons for companies to engage with the EDGE AI FOUNDATION, which offers unparalleled access to cutting-edge research, top talent, and a voice in shaping the industry's future. As we navigate through real-life scenarios and ethical considerations, you'll see why the urgency and opportunity surrounding edge AI are something you don't want to miss.

    Join us on this journey to ensure the benefits of AI are shared widely and responsibly by visiting edgeaifoundation.org.

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    14 mins
  • Unveiling the Technological Breakthroughs of ExecuTorch with Meta's Chen Lai
    Nov 21 2024

    Unlock the secrets to deploying machine learning models on edge devices with Chen Lai from the PyTorch Edge team at Meta. Discover how ExecuTorch, a brainchild of the PyTorch team, is transforming edge deployment by addressing challenges like memory constraints and hardware diversity. Get an insider's view of the technical collaborations with tech giants like Apple, Arm, Qualcomm, and MediaTek, which are revolutionizing the deployment of advanced language models like Llama on platforms such as iOS and Android. With Chen's expert insights, explore the fascinating process of converting PyTorch models into executable programs optimized for performance, stability, and broad hardware compatibility, ensuring seamless integration from server to edge environments.

    Immerse yourself in the world of ExecuTorch within the PyTorch ecosystem, where deploying machine learning models becomes effortless even without extensive hardware knowledge. Learn how key components like torch.export and TorchAO capture compute graphs and support quantization, elevating edge deployment capabilities. Discover how torchchat facilitates large language model inference on various devices, ensuring compatibility with popular models from Hugging Face. As we wrap up, hear about the community impact of Meta's ExecuTorch initiative, showcasing a commitment to innovation and collaboration. Chen shares his passion and dedication to advancing edge computing, leaving a lasting impression on listeners eager for the next wave of technological breakthroughs.
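
The quantization mentioned above is central to fitting models on edge hardware. As a rough illustration of the underlying idea only (a conceptual sketch, not the actual ExecuTorch or TorchAO API), the following maps float weights to int8 with a per-tensor scale:

```python
# Illustrative symmetric int8 post-training quantization -- the kind of
# size reduction edge toolchains apply. Conceptual sketch only, not the
# ExecuTorch/TorchAO API.

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each int8 value needs 1 byte instead of 4 for float32, at the cost
# of a small reconstruction error.
```

The trade-off sketched here (4x smaller weights for bounded rounding error) is what makes quantization so attractive under the memory constraints the episode discusses.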

    Learn more about the tinyML Foundation - tinyml.org

    31 mins
  • Revolutionizing TinyML: Integrating Large Language Models for Enhanced Efficiency
    Nov 14 2024

    Unlock the future of TinyML by learning how to harness the power of large language models, as we sit down with Roberto Morabito to dissect this intriguing technological convergence. Discover how the collaborative efforts with EURECOM and the University of Helsinki are shaping a groundbreaking framework designed to elevate TinyML's lifecycle management. We promise to unravel the complexities and opportunities that stem from integrating these technologies, focusing on the essential role of prompt templates and the dynamic challenges posed by hardware constraints. Through a proof-of-concept demonstration, we bring you invaluable insights into resource consumption, potential bottlenecks, and the exciting prospect of automating lifecycle stages.

    Our conversation ventures into optimizing language models for end devices, delving into the transformative potential of Arduinos and single-board computers in enhancing efficiency and slashing costs. Roberto shares his expertise on the nuances of model conversion across varying hardware capabilities, revealing the impact this has on success rates. The episode crescendos with a compelling discussion on automating industrial time series forecasting, underscoring the critical need for adaptive solutions to maintain accuracy and efficiency. Through Roberto's expert insights, listeners are invited to explore the forefront of technology that is poised to revolutionize industrial applications.

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    27 mins
  • Harnessing Edge AI: Transforming Industries with Advanced Transformer Models with Dave McCarthy of IDC and Pete Bernard of tinyML Foundation
    Nov 7 2024

    Unlock the transformative potential of edge computing with the insights of industry experts Dave McCarthy from IDC and Pete Bernard. Ever wondered how advanced transformer models are catalyzing technological leaps at the edge? This episode promises to enlighten you on the nuances of AI-ready infrastructure, pushing the boundaries of autonomous operations in multi-cloud and multi-edge environments. With an emphasis on trust, security, and sustainability, our guests illuminate the strategic importance of optimizing edge designs and the benefits of hybrid and multi-cloud strategies.

    Explore the dynamic world of Edge AI as we dissect the complexities of heavy and light edge scenarios, particularly within industrial contexts. Dave and Pete help navigate the shift from centralized systems to the cutting-edge distributed frameworks necessary for processing the explosion of data generated outside traditional data centers. Discover how Edge AI and TinyML are reshaping industries by empowering smarter devices and solutions, pushing AI workloads from the cloud to resource-constrained environments for improved efficiency and real-time data processing.

    Dive into the fascinating migration of AI workloads from the cloud to the edge, driven by the demands of smart cities and critical infrastructure. Our experts share insights from global surveys, examining how inference is increasingly shifting to the edge, while training remains cloud-based. Listen in as we explore the evolving edge AI hardware landscape, cost-effective solutions, and the burgeoning interest in specialized models. Uncover emerging generative AI use cases poised to revolutionize various sectors, and gain a glimpse into the future opportunities and challenges in the ever-evolving landscape of edge AI. Join us for a riveting discussion that promises to leave you informed and inspired.

    Learn more about the tinyML Foundation - tinyml.org

    34 mins
  • Transforming the Edge with Generative AI: Unraveling Innovations Beyond Chatbots with Danilo Pau, IEEE Fellow from STMicroelectronics
    Oct 31 2024

    Generative AI is poised to transform the edge, but what does this mean for technology, innovation, and everyday life? Join us for an enlightening discussion led by Danilo Pau, featuring a distinguished panel of experts including Dave, Roberto, Chen, Alok, Seung-Yang, Arniban, and Alberto. They promise to unravel the mysteries beyond large language models and chatbots, offering fresh insights into the interplay of algorithms, software, chips, and methodologies from an EDA perspective. The conversation aims to democratize generative AI, making its groundbreaking potential accessible to all, and sparking inspiration throughout the tinyML community.

    With gratitude to the TinyML Foundation for their invaluable support, this episode builds on the momentum of previous forums. Reflecting on the foundation laid by Davis Sawyer's inspiring March session, we explore how generative AI is not just a cloud-based innovation but a technology set to revolutionize the edge. Expect to hear how these developments could impact everyone's work and life, and gain a glimpse into the collective vision for the future from our esteemed speakers. Don’t miss the chance to be part of this vibrant exchange, where innovation is not just discussed but propelled forward.

    Learn more about the tinyML Foundation - tinyml.org

    7 mins
  • Revolutionizing Weather Forecasting with Acoustic Smart Technology
    Oct 27 2024

    Unlock the secrets to revolutionary weather forecasting with our latest episode featuring Jonah Beysens, the brilliant mind behind the Aurora project. Imagine a world where remote and resource-limited areas can access reliable weather data without the hassle of maintaining traditional mechanical stations. Learn how the Aurora, an acoustic smart weather station, is making this possible. Born from the TinyML Challenge 2022, Aurora employs a TinyML board with a microphone to classify wind and rain intensity, ensuring robustness and ease of deployment. Jonah walks us through the journey of crafting a real-world dataset and shares the collaborative spirit behind making this data open for community-driven innovation.

    Discover how the shift towards low-cost, acoustics-based devices is reshaping weather forecasting, offering enhanced spatial resolution with multiple ground stations. Jonah sheds light on the collaborative efforts to refine prediction models with open datasets, emphasizing the profound global impact, especially in developing nations where agriculture depends heavily on accurate forecasts. As we discuss the ongoing work to make weather data more accessible worldwide, we highlight the role of community and open access in driving forward weather-related technologies. Join us in exploring how these innovative solutions promise timely, dependable forecasts, paving the way for a future where environmental data is a shared resource for all.

    Learn more about the tinyML Foundation - tinyml.org

    25 mins
  • Revolutionizing Nano-UAV Technology with Cutting-Edge On-Device Learning Strategies
    Oct 20 2024

    Unlock the secrets of edge computing and on-device learning with Elias Cereda, a trailblazing PhD student at the Dalle Molle Institute for Artificial Intelligence in Italy, as we explore the transformative potential of AI in nano-UAV technology. Discover how Elias is pioneering solutions to the challenges of domain shifts in AI models, particularly those trained in simulations and deployed in real-world scenarios. From enhancing privacy and security through on-device learning to mastering the intricacies of human pose estimation for autonomous navigation, this episode is a treasure trove of insights. We dive into the practicalities of deploying systems using the Crazyflie 2.1 drone alongside GreenWaves' sophisticated RISC-V GAP8 and GAP9 chips, all while managing tight power constraints.

    Take a closer look at the complexities of on-device and online learning with Elias as we delve into the nuances of continual learning. Learn how maintaining a balance between retaining previous knowledge and integrating new data is crucial in the field. Elias introduces us to a self-supervised metric that operates without initial labels, cleverly utilizing odometry data to refine model predictions. By considering human movement as measurement noise and refining data through post-processing techniques, the episode unveils strategies to enhance model performance without explicit tracking.

    For those eager to delve deeper into the methodologies discussed, Elias points to a comprehensive paper that promises to enrich your understanding of this cutting-edge technology.

    Learn more about the tinyML Foundation - tinyml.org

    22 mins
  • Deploying TinyML Models at Scale: Insights on Monitoring and Automation with Alessandro Grande of Edge Impulse
    Oct 13 2024

    Unlock the secrets of deploying TinyML models in real-world scenarios with Alessandro Grande, Head of Product at Edge Impulse. Curious about how TinyML has evolved since its early days? Alessandro takes us through a journey from his initial demos at Arm to the sophisticated, scalable deployments we see today. Learn why continuous model monitoring is not just important but essential for the reliability and functionality of machine learning applications, especially in large-scale IoT deployments. Alessandro shares actionable insights on how to maintain a continuous lifecycle for ML models to handle unpredictable changes and ensure sustained success.
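
The continuous monitoring Alessandro stresses above boils down to comparing what a deployed model sees now against what it saw at deployment time. A minimal sketch of that idea (the monitored signal and threshold are illustrative, not Edge Impulse's actual mechanism):

```python
# Minimal sketch of continuous model monitoring: flag drift when the
# recent mean of a monitored signal (e.g. prediction confidence) moves
# too far from its deployment-time baseline. Real systems use richer
# statistics; the threshold here is illustrative.

def drift_detected(baseline, recent, threshold=0.1):
    """Return True if the recent mean deviates from the baseline mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(recent) - mean(baseline)) > threshold

baseline_conf = [0.92, 0.95, 0.90, 0.93]   # confidences at deployment
healthy = [0.91, 0.94, 0.92]               # similar distribution: no alert
drifting = [0.60, 0.55, 0.62]              # degraded confidence: alert
```

In a large IoT fleet, a check like this would run per device and feed the CI/CD pipeline discussed below, triggering data collection or retraining when it fires.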

    Delve into the intricacies of health-related use cases with a spotlight on the HIFE AI cough monitoring system. Discover best practices for data collection and preparation, including identifying outliers and leveraging generative AI models such as GPT-4o for efficient data labeling. We also emphasize the importance of building scalable infrastructure for automated ML development. Learn how continuous integration and continuous deployment (CI/CD) pipelines can enhance the lifecycle management of ML models, ensuring security and scalability from day one. This episode is a treasure trove of practical advice for anyone tackling the challenges of deploying ML models in diverse environments.

    Learn more about the tinyML Foundation - tinyml.org

    21 mins