Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google
Manage episode 441634742 series 3574631
Curious how you can run a colossal 405-billion-parameter model on a device with a footprint of a mere 2 billion parameters? Join us with Mahesh Yadav from Google as he shares his journey from developing small devices to working with massive language models. Mahesh reveals the groundbreaking possibilities of operating large models on minimal hardware, making internet-free edge AI a reality even on devices as small as a smartwatch. This eye-opening discussion is packed with insights into the future of AI and edge computing that you don't want to miss.
Explore the strategic shifts by tech giants in the language model arena with Mahesh and our hosts. We dissect Microsoft's Phi model and its investment in OpenAI, along with Google's development of Gemma, exploring how increasing the parameters in large language models leads to emergent behaviors like logical reasoning and translation. Delving into the technical and financial implications of these advancements, we also address privacy concerns and the critical need for cost-effective model optimization in enterprise environments handling sensitive data.
Advancements in edge AI training take center stage as Mahesh unpacks the latest techniques for model size reduction. Learn about synthetic data generation and the use of quantization, pruning, and distillation to shrink models without losing accuracy. Mahesh also highlights practical applications of small language models in enterprise settings, from contract management to sentiment analysis, and discusses the challenges of deploying these models on edge devices. Tune in to discover cutting-edge strategies for model compression and adaptation, and how startups are leveraging base models with specialized adapters to revolutionize the AI landscape.
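To make the size-reduction techniques mentioned above concrete, here is a minimal sketch of symmetric 8-bit post-training quantization, one of the methods Mahesh discusses. The function names and the toy weight matrix are illustrative, not taken from the episode; real deployments use framework tooling (e.g. per-channel scales, calibration data) rather than this per-tensor toy.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Storage drops 4x (float32 -> int8); the worst-case rounding error
# per weight is bounded by half the scale step.
print(q.dtype, float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-7)
```

This is why a 405B-parameter model can shrink dramatically before accuracy suffers: most of the information in each weight survives coarse rounding, and pruning and distillation compound the savings further.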
Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org
Chapters
1. Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google (00:00:00)
2. Edge AI Development and Challenges (00:00:37)
3. Edge AI With Small Language Models (00:13:43)
4. Advancements in Edge AI Training (00:22:53)
5. Techniques for Model Size Reduction (00:27:15)
6. Applications of Small Language Models (00:37:40)
7. Discussion on NVIDIA, ONNX, and Acceleration (00:41:05)
8. Model Compression and Adaptation Techniques (00:53:57)