The New Stack Podcast
Content provided by The New Stack Podcast and The New Stack. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The New Stack Podcast and The New Stack or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
306 episodes
All episodes
Clockwork began with a narrow goal—keeping clocks synchronized across servers—but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork’s technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance and fault tolerance. Its flagship feature, FleetIQ, can reroute traffic around failing switches, preventing costly interruptions that might otherwise force teams to restart training from hours-old checkpoints. Although the company originated from Stanford research focused on clock synchronization for financial institutions, the team eventually recognized that packet-timing data could underpin powerful network telemetry and dynamic traffic control. By integrating with NVIDIA NCCL, TCP and RDMA libraries, Clockwork can not only measure congestion but also actively manage GPU communication to enhance both uptime and training efficiency. Learn more from The New Stack about the latest in Clockwork: Clockwork’s FleetIQ Aims To Fix AI’s Costly Network Bottleneck What Happens When 116 Makers Reimagine the Clock?
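For readers unfamiliar with the underlying technique, here is a minimal sketch of the classic four-timestamp exchange (the basis of NTP-style synchronization) that lets packet timing reveal both clock offset and network delay. This is a generic illustration with made-up numbers, not Clockwork's proprietary algorithm.

```python
# Sketch of the four-timestamp exchange (as in NTP) that underlies
# clock synchronization and latency estimation from packet timing.
# Generic technique only; the timestamps below are hypothetical.

def estimate_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: client send, t2: server receive, t3: server send, t4: client receive.
    Returns (estimated clock offset, round-trip network delay) in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # time actually spent on the wire
    return offset, delay

# Made-up timestamps: the server clock runs ~5 ms ahead and the
# network adds ~2 ms round trip.
offset, delay = estimate_offset_and_delay(10.000, 10.006, 10.0061, 10.0021)
print(f"offset ~ {offset * 1e3:.2f} ms, delay ~ {delay * 1e3:.2f} ms")
```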
At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an “in-between” space—needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups. Jupyter Deploy simplifies this by orchestrating an entire encrypted stack—using Docker, Terraform, OAuth2, and Let’s Encrypt—with minimal setup, removing the need to manually manage 15–20 cloud components. While it offers an easy on-ramp, Guinegagne notes that long-term use still requires some cloud understanding. Built by AWS’s AI Open Source team but deliberately vendor-neutral, it uses a template-based approach, enabling community-contributed deployment recipes for any cloud. Led by Brian Granger, the project aims to join the official Jupyter ecosystem, with future plans including Kubernetes integration for enterprise scalability. Learn more from The New Stack about the latest in Jupyter AI development: Introduction to Jupyter Notebooks for Developers Display AI-Generated Images in a Jupyter Notebook
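To make the orchestration idea concrete, here is a minimal sketch of the pattern such a tool automates: a thin CLI wrapper driving Terraform so users never hand-manage the underlying cloud resources. This is a generic illustration, not Jupyter Deploy's actual implementation; the template directory, variable name, and domain are invented.

```python
# Generic sketch of template-based deployment: wrap Terraform so a
# single call stands in for managing 15-20 cloud components by hand.
# Not Jupyter Deploy's real internals; paths and variables are made up.
import subprocess

def deploy(template_dir: str, domain: str) -> None:
    """Initialize and apply a Terraform template for a Jupyter stack."""
    subprocess.run(["terraform", "init"], cwd=template_dir, check=True)
    subprocess.run(
        ["terraform", "apply", "-auto-approve", f"-var=domain={domain}"],
        cwd=template_dir,
        check=True,
    )

deploy("templates/aws-docker-letsencrypt", "notebooks.example.org")
```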
In an interview at JupyterCon, Brian Granger — co-creator of Project Jupyter and senior principal technologist at AWS — reflected on Jupyter’s evolution and how AI is redefining open source sustainability. Originally inspired by physics’ modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning. Now, AI is accelerating development itself: Granger described rewriting Jupyter Server in Go, complete with tests, in just 30 minutes using an AI coding agent — a task once considered impossible. This shift challenges traditional notions of technical debt and could reshape how large open source projects evolve. Jupyter’s 2017 ACM Software System Award placed it among computing’s greats, but also underscored its global responsibility. Granger emphasized that sustaining Jupyter’s mission — empowering human reasoning, collaboration, and innovation — remains the team’s top priority in the AI era. Learn more from The New Stack about the latest in Jupyter AI development: Introduction to Jupyter Notebooks for Developers Display AI-Generated Images in a Jupyter Notebook
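The kernel message protocol Granger credits for Jupyter's longevity is easy to see in action. A minimal sketch using the jupyter_client package, assuming a local python3 kernel is installed: start a kernel, send an execute_request over the protocol, and read back the reply.

```python
# Drive a Jupyter kernel over the kernel message protocol using the
# jupyter_client package. Assumes `pip install jupyter_client ipykernel`.
from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")  # manager + blocking client
msg_id = kc.execute("1 + 1")                      # sends an execute_request
reply = kc.get_shell_msg(timeout=10)              # corresponding execute_reply
print(reply["content"]["status"])                 # 'ok' on success
kc.stop_channels()
km.shutdown_kernel()
```

Any frontend that speaks this protocol can talk to any kernel, which is the modularity the interview highlights.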
Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas: customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as “Claude Code” or “OpenAI Codex,” to coexist in one chat. Developers can even build and share their own personas as local or pip-installable packages. This flexibility was enabled by splitting Jupyter AI’s previously large, complex codebase into smaller, modular packages, allowing users to install or replace components as needed. Looking ahead, Qiu envisions Jupyter AI as an “ecosystem of AI personas,” enabling multi-agent collaboration where different personas handle roles like data science, engineering, and testing. With contributors from AWS, Apple, Quansight, and others, the project is poised to expand into a diverse, community-driven AI ecosystem. Learn more from The New Stack about the latest in Jupyter AI development: Introduction to Jupyter Notebooks for Developers Display AI-Generated Images in a Jupyter Notebook
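A rough sketch of what a shareable persona package could look like. The BasePersona class and method names below are invented for illustration; consult the jupyter-ai documentation for the real extension API.

```python
# Hypothetical persona shape: a named assistant that can coexist with
# others in one chat. The class and method names here are assumptions,
# not the real jupyter-ai API.
class BasePersona:
    """Assumed extension point; not the actual jupyter-ai base class."""
    name: str = "base"

    def respond(self, message: str) -> str:
        raise NotImplementedError

class DebuggingPersona(BasePersona):
    name = "debugger"

    def respond(self, message: str) -> str:
        # A real persona would call an LLM here; we stub that out.
        return f"[{self.name}] Analyzing traceback in: {message[:60]}..."

# Multiple named personas coexisting in one chat, as described above.
personas = {p.name: p for p in (DebuggingPersona(),)}
print(personas["debugger"].respond("ZeroDivisionError in cell 3"))
```

Packaging each persona separately (locally or on PyPI) is what lets the ecosystem grow without bloating the core, per the modular-packages point above.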
Stop Writing Code, Start Writing Docs (1:03:25)
In this episode of The New Stack Podcast, hosts Alex Williams and Frederic Lardinois spoke with Keith Ballinger, Vice President and General Manager of Developer Experience for Google Cloud Platform (GCP), about the evolution of agentic coding tools and the future of programming. Ballinger, a hands-on executive who still codes, discussed Gemini CLI, Google’s response to tools like Claude Code, and his broader philosophy on how developers should work with AI. He emphasized that these tools are in their “first inning” and that developers must “slow down to speed up” by writing clear guides, focusing on architecture, and documenting intent—treating AI as a collaborative coworker rather than a one-shot solution. Ballinger reflected on his early AI experiences, from Copilot at GitHub to modern agentic systems that automate tool use. He also explored the resurgence of the command line as an AI interface and predicted that programming will increasingly shift from writing code to expressing intent. Ultimately, he envisions a future where great programmers are great writers, focusing on clarity, problem decomposition, and design rather than syntax. Learn more from The New Stack about the latest in Google AI development: Why PyTorch Gets All the Love Lightning AI Brings a PyTorch Copilot to Its Development Environment Ray Comes to the PyTorch Foundation
At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch. Antiga emphasized that PyTorch, originally designed to be “Pythonic” and researcher-friendly, has remained central across major AI shifts — from early neural networks to today’s generative AI boom — powering not just model training but also inference systems such as vLLM and SGLang used in production chatbots. Its flexibility also makes it ideal for reinforcement learning, now commonly used to fine-tune large language models (LLMs). On the PyTorch Foundation, Antiga noted that while it recently expanded to include projects like vLLM, DeepSpeed, and Ray, the goal isn’t to become a vast umbrella organization. Instead, the focus is on user experience and success within the PyTorch ecosystem. Learn more from The New Stack about the latest in PyTorch: Why PyTorch Gets All the Love Lightning AI Brings a PyTorch Copilot to Its Development Environment Ray Comes to the PyTorch Foundation
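A tiny example of the eager, "Pythonic" style Antiga credits for PyTorch's staying power: define a model and take one training step with plain Python control flow. Standard PyTorch API on toy data; nothing here is specific to Lightning AI.

```python
# One eager-mode training step in standard PyTorch, on a toy batch.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(16, 4), torch.randn(16, 1)  # toy inputs and targets
opt.zero_grad()
loss = loss_fn(model(x), y)   # forward pass runs eagerly, line by line
loss.backward()               # autograd computes gradients
opt.step()                    # update parameters
print(f"loss: {loss.item():.4f}")
```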
Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning. With the rise of large language models (LLMs), the company shifted toward agentic AI, introducing a library of specialized agents—like DevOps, SRE, AppSec, and FinOps agents—that operate behind a unified interface called Harness AI. These agents assist in building production pipelines, not deploying code directly, ensuring human oversight remains critical for compliance and security. Bansal emphasizes that AI in development isn't replacing people but accelerating workflows to meet tighter timelines. He also notes strong enterprise adoption, with even large, traditionally slower-moving organizations embracing AI integration. On the topic of an AI bubble, Bansal sees it as a natural part of innovation, akin to the dot-com era, where market excitement can still lead to meaningful long-term transformation despite short-term volatility. Learn more from The New Stack about the latest in Harness' AI approach to software development: Harness AI Tackles Software Development’s Real Bottleneck Harnessing AI To Elevate Automated Software Testing
The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lacks a secure way to mediate communication between agents and tools. Enter Agentgateway, donated to the Linux Foundation, which provides governance, observability, and security for agent-to-agent and agent-to-tool traffic. Written in Rust, it supports protocols like MCP and A2A and integrates with Kubernetes Gateway API and inference gateways. Lin Sun, Solo.io’s head of open source, explained that Agentgateway allows developers to control which tools agents can access—offering flexibility to expose only tested or approved tools. This enables fine-grained policy enforcement and resilience in agent communication, similar to how service meshes manage microservice traffic. Agentgateway ensures secure and selective tool exposure, supporting scalable and secure agent ecosystems. Major players like AWS and Microsoft are also engaging in its development. Learn more from The New Stack about the latest in open source projects like Agentgateway: Why Tech Giants Are Backing the New Agentgateway Project AI Agents Are Creating a New Security Nightmare for Enterprises and Startups Five Steps to Build AI Agents that Actually Deliver Business Results
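An illustrative sketch of the kind of policy a gateway in this position enforces: only let an agent call tools that appear on an approved allowlist. This is a generic mock of the pattern, not Agentgateway's API (Agentgateway itself is written in Rust and sits on the wire between agents and tools).

```python
# Generic tool-allowlist policy check, the pattern a gateway applies
# to agent-to-tool traffic. Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    agent: str
    allowed_tools: set[str] = field(default_factory=set)

    def authorize(self, tool: str) -> bool:
        """Allow the call only if the tool is on this agent's allowlist."""
        return tool in self.allowed_tools

policy = ToolPolicy(agent="support-bot", allowed_tools={"search_docs"})

for tool in ("search_docs", "delete_database"):
    verdict = "allow" if policy.authorize(tool) else "deny"
    print(f"{policy.agent} -> {tool}: {verdict}")
```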
David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using large language models (LLMs) to enhance productivity. Sentry's AI-powered tool, Seer, uses GenAI to help developers debug more efficiently by identifying root causes and summarizing complex system data, mimicking some functions of senior engineers. However, Cramer emphasizes that human oversight remains essential, describing the current stage as "human in the loop" AI, useful for speeding up code reviews and identifying overlooked bugs. Cramer also addressed Sentry's shift from open source to fair source licensing due to frustration over third parties commercializing their software without contributing back. Sentry now uses Functional Source Licensing, which becomes Apache 2.0 after two years. This move aims to strike a balance between openness and preventing exploitation, while maintaining accessibility for users and avoiding fragmented product versions. Learn more from The New Stack about the latest in Sentry and David Cramer's thoughts on AI development: Install Sentry to Monitor Live Applications Frontend Development Challenges for 2021
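For context on the data Seer works from: Sentry hooks into an application with a one-time init call, after which errors are captured and reported. This uses the real sentry_sdk API; the DSN below is a placeholder you would replace with your project's own.

```python
# Minimal Sentry setup: init once, then exceptions flow to Sentry,
# where tooling like Seer can analyze them. Placeholder DSN.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder
    traces_sample_rate=0.1,  # sample 10% of transactions for performance data
)

try:
    1 / 0
except ZeroDivisionError:
    sentry_sdk.capture_exception()  # captures the active exception
```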
Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor's background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear's new agent-specific API played a key role in enabling this integration, providing agents like Cursor with context-aware sessions to interact efficiently with the platform. Developers can now offload tasks such as fixing issues, updating documentation, or managing dependencies to the Cursor agent. However, both Linear’s Tom Moor and Cursor’s Andrew Milich emphasized the importance of giving agents clear, thoughtful input. Simply assigning vague tasks like “@cursor, fix this” isn’t effective—developers still need to guide the agent with relevant context, such as links to similar pull requests. Milich and Moor also discussed the growing value and adoption of autonomous agents, and hinted at a future where more companies build agent-specific APIs to support these tools. The full interview is available via podcast or YouTube. Learn more from The New Stack about the latest in AI and development in Cursor AI and Linear: Install Cursor and Learn Programming With AI Help Using Cursor AI as Part of Your Development Workflow Anti-Agile Project Tracker Linear the Latest to Take on Jira
In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team’s strong Kubernetes and Triton expertise. The company recently switched from GitHub Copilot to the AI coding assistant Windsurf, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn’t mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk. Learn more from The New Stack about the latest in AI and development in ServiceNow: ServiceNow Launches a Control Tower for AI Agents ServiceNow Acquires Data.World To Expand Its AI Data Strategy
The European Union’s Cyber Resilience Act (CRA) takes effect in October 2026, with the remaining requirements following in December 2027, and introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "CRob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources. Developers often have no visibility into how their code is used, yet they’re increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem. Learn more from The New Stack about Open Source Security: Open Source Propels the Fall of Security by Obscurity There Is Just One Way To Do Open Source Security: Together
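To make the SBOM requests maintainers receive concrete, here is a minimal CycloneDX-style bill of materials as plain JSON. The field names follow the public CycloneDX spec; the single component listed is a made-up example.

```python
# Minimal CycloneDX-style SBOM document. Field names per the public
# CycloneDX 1.5 spec; the component entry is an arbitrary example.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.32.0",
            "purl": "pkg:pypi/requests@2.32.0",  # package URL identifier
        }
    ],
}
print(json.dumps(sbom, indent=2))
```

In practice such files are generated by tooling rather than written by hand, but downstream CRA compliance requests ultimately ask maintainers or vendors to produce exactly this kind of inventory.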
In this week’s The New Stack Agents, Zach Lloyd, founder and CEO of Warp, discussed the launch of Warp Code, the latest evolution of the Warp terminal into a full agentic development environment. Originally launched in 2022 to modernize the terminal, Warp now integrates powerful AI agents to help developers write, debug, and ship code. Key new features include a built-in file editor, project-structuring tools, agent-driven code review, and WARP.md files that guide agent behavior. Recognizing developers’ hesitation to trust AI-generated code, Warp emphasizes transparency and control, enabling users to inspect and steer the agent’s work in real time through "persistent input" and task list updates. While Warp supports terminal workflows, Lloyd says it’s now better viewed as an AI coding platform. Interestingly, the launch announcement was delivered from horseback in a Western-themed ad, reflecting Warp’s desire to stand out in a crowded field of conventional tech product rollouts. The quirky “Code on Warp” (C.O.W.) branding captured attention and embodied their unique approach. Learn more from The New Stack about the latest in AI and Warp: Warp Goes Agentic: A Developer Walk-Through of Warp 2.0 Developer Review of Warp for Windows, an AI Terminal App How AI Can Help You Learn the Art of Programming
In a recent episode of The New Stack Agents from the Open Source Summit in Amsterdam, Jim Zemlin, executive director of the Linux Foundation, discussed the evolving landscape of open source AI. While the Linux Foundation has helped build ecosystems like the CNCF for cloud-native computing, there's no unified umbrella foundation yet for open source AI. Existing efforts include the PyTorch Foundation and LF AI & Data, but AI development is still fragmented across models, tooling, and standards. Zemlin highlighted the industry's shift from foundational models to open-weight models and now toward inference stacks and agentic AI. He suggested a collective effort may eventually form but cautioned against forcing structure too early, stressing the importance of not hindering innovation. Foundations, he said, must balance scale with agility. On the debate over what qualifies as "open source" in AI, Zemlin adopted a pragmatic view, acknowledging the costs of creating frontier models. He supports open-weight models and believes fully open models, from data to deployment, may emerge over time. Learn more from The New Stack about the latest in AI and open source, AI in China, Europe's AI and security regulations, and more: Open Source Is Not Local Source, and the Case for Global Cooperation US Blocks Open Source ‘Help’ From These Countries Open Source Is Worth Defending
Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. However, to reach that point, enterprises must align their data strategies with their AI ambitions. Many have rushed into AI fearing obsolescence, but without preparing their data infrastructure, they're at risk of failure. Current legacy systems are not designed for the massive concurrency demands of agentic AI, potentially leading to underperformance. Verma emphasizes the need to move beyond siloed or "swim lane" databases toward unified, high-performance data platforms tailored for the scale and complexity of the AI era. Learn more from The New Stack about the latest evolution in AI infrastructure: How To Use AI To Design Intelligent, Adaptable Infrastructure How to Support Developers in Building AI Workloads