Global AI Weekly
Issue number: 143 | Tuesday, March 31, 2026
Highlights
ChatGPT Ads Hit $100M in 6 Weeks
OpenAI’s quiet ad experiment inside ChatGPT is already a breakout success, surpassing $100 million in annualized revenue just six weeks after launch. Even more striking, this comes from a fraction of users, hinting at massive untapped upside. With 600+ advertisers onboard and global expansion imminent, AI is rapidly becoming the next major ad platform. The real question: can OpenAI scale monetization without eroding the trust that made ChatGPT indispensable?
reuters.com
Sora: OpenAI closes AI video app and cancels $1bn Disney deal
OpenAI has shut down Sora, its AI video app, less than two years after a debut that reverberated through the media industry. Alongside the closure, OpenAI has canceled its $1 billion deal with Disney, marking a notable strategic shift. Next steps remain unclear, but the moves suggest a reevaluation of priorities inside the company.
bbc.com
Anthropic wins injunction against Trump administration over Defense Department saga
A federal judge has ruled in favor of Anthropic, ordering the Trump administration to roll back recent restrictions imposed on the AI company. The decision comes as part of an ongoing dispute involving the Defense Department. This injunction marks a significant step for Anthropic as it navigates regulatory challenges in its work on cutting-edge AI technologies.
techcrunch.com
Research
Learning to Drive with Natural Language Instructions
This video explores Vega, a framework that trains autonomous vehicles to follow natural language driving instructions. It shows how Vega translates everyday language into concrete driving behaviors, enabling intuitive interaction between humans and machines, and highlights its potential to simplify communication and make self-driving systems more adaptable.
huggingface.co
Vectorizing the Trie: Efficient Constrained Decoding for LLM-based Generative Retrieval on Accelerators
This paper introduces a novel approach to enhance the efficiency of constrained decoding in large language model (LLM)-based generative retrieval systems. By vectorizing the trie data structure, the method optimizes decoding processes for accelerators like GPUs, enabling faster and more resource-efficient performance. The approach demonstrates significant advancements in handling constraints during text generation while maximizing hardware utilization.
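To make the core idea concrete, here is a minimal illustrative sketch (not the paper's implementation; the toy vocabulary, function names, and array layout are invented for this example). A trie of valid token sequences is flattened into a dense transition table, so the "which tokens are allowed next?" lookup for an entire batch of beams becomes a single vectorized gather rather than a per-step pointer walk:

```python
import numpy as np

VOCAB = 6  # toy vocabulary size

def build_trie_table(sequences, vocab=VOCAB):
    """Encode a set of valid token sequences as a dense transition table.
    table[node, token] = child node id, or -1 if that token is disallowed."""
    table = [np.full(vocab, -1, dtype=np.int64)]  # node 0 is the root
    for seq in sequences:
        node = 0
        for tok in seq:
            if table[node][tok] == -1:
                table[node][tok] = len(table)  # allocate a new child node
                table.append(np.full(vocab, -1, dtype=np.int64))
            node = int(table[node][tok])
    return np.stack(table)  # shape (num_nodes, vocab)

def allowed_mask(table, states):
    """Boolean mask of allowed next tokens for a whole batch of trie states.
    One gather over the table serves every beam at once -- accelerator friendly."""
    return table[states] != -1  # shape (batch, vocab)

def step(table, states, tokens):
    """Advance each beam's state by its chosen token (tokens must be allowed)."""
    return table[states, tokens]

# Valid "documents" are the token sequences [1, 2, 3] and [1, 4].
table = build_trie_table([[1, 2, 3], [1, 4]])
states = np.zeros(2, dtype=np.int64)           # two beams start at the root
mask = allowed_mask(table, states)             # only token 1 is legal first
states = step(table, states, np.array([1, 1]))
mask2 = allowed_mask(table, states)            # now tokens 2 and 4 are legal
```

In a decoding loop, the mask would be applied to the model's logits (setting disallowed entries to negative infinity) before sampling, keeping generation inside the trie at every step.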
arxiv.org
Video
Securing AI-based development workflows
In this episode of the Made for Dev Show, host Sammy chats with Oleg Šelajev from Docker's Developer Relations team about enhancing the security of AI-based development workflows. They explore Docker Hardened Images (DHI), which are minimal and secure production-ready images for popular tools, as well as the new VM-based Docker Sandboxes that isolate AI agents to protect host systems and sensitive data. Key features like network proxy injection of credentials, granular control over network access, and the versatility of the MCP Gateway are highlighted as effective measures to safeguard AI projects and maintain tight security protocols. Whether you're running in "YOLO mode" or managing a team’s AI tools, this episode provides actionable insights for developers.
youtube.com
How to Pass Context in an Agentic AI Flow
Grant Miller breaks down how to effectively manage and pass context in agentic AI systems, focusing on context engineering, task history, and orchestration strategies. He highlights ways to optimize workflows and enhance AI interactions using dynamic, multi-system solutions. This guide is ideal for those looking to improve the efficiency and functionality of their AI-driven processes.
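One common pattern the video touches on can be sketched as an explicit context object that accumulates task history as it is handed from one agent step to the next. This is an illustrative sketch only; the names (`AgentContext`, the two toy "agents") and the fixed orchestration order are hypothetical, not taken from the video:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    goal: str
    history: list = field(default_factory=list)  # (step_name, result) pairs

    def record(self, step_name: str, result: str) -> None:
        self.history.append((step_name, result))

    def transcript(self) -> str:
        """Render accumulated history into a prompt fragment for the next agent."""
        return "\n".join(f"[{name}] {result}" for name, result in self.history)

def research_agent(ctx: AgentContext) -> AgentContext:
    # A real system would call an LLM here, with ctx.transcript() in the prompt.
    ctx.record("research", f"found 3 sources about: {ctx.goal}")
    return ctx

def writer_agent(ctx: AgentContext) -> AgentContext:
    # The writer sees everything earlier steps recorded via the transcript.
    ctx.record("write", f"drafted summary from prior steps:\n{ctx.transcript()}")
    return ctx

ctx = AgentContext(goal="trie-based constrained decoding")
for agent_step in (research_agent, writer_agent):  # fixed orchestration order
    ctx = agent_step(ctx)
```

Keeping context in one typed object, rather than scattered across prompts, makes it straightforward to trim, summarize, or route history differently per agent as workflows grow.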
youtube.com
Articles
Updates to GitHub Copilot interaction data usage policy
Starting April 24, GitHub will use interaction data from Copilot Free, Pro, and Pro+ users to enhance its AI models. Users who prefer not to share their data have the option to opt out of this data usage. This update aims to improve Copilot’s functionality while providing users with control over their data preferences.
github.blog
Stanford study outlines dangers of asking AI chatbots for personal advice
A Stanford study explores the risks of relying on AI chatbots for personal advice, shedding light on how their agreeable tendencies might lead to harmful outcomes. The research highlights concerns about these bots potentially reinforcing biases, misinformation, or unhealthy behaviors when providing guidance. It raises questions about the ethical design and application of AI in sensitive, human-centered scenarios.
techcrunch.com
Upcoming Events
AgentCamp - Coming to a City Near You
AgentCamp continues to grow as a global series of hands-on gatherings dedicated to building and experimenting with AI agents. These community-driven events bring developers, founders, and AI enthusiasts together for practical sessions, collaborative building, and open exchange of ideas. Hosted in cities around the world, AgentCamp focuses on real-world experimentation, giving participants the space to prototype agent workflows, explore emerging tools, and learn directly from peers working at the edge of autonomous AI. Join the community to build, share, and help advance what AI agents can do in practice.
globalai.community
Code
How to Build Your Own Claude Code Skill
Every developer has routines and habits around coding, from writing commit messages to reviewing code or preparing pull requests. This guide walks through capturing those workflows as a reusable Claude Code skill. By making your processes explicit and intentional, you can build smoother, more efficient habits that make coding more effective and enjoyable.
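For a sense of the shape such a skill takes, here is a minimal sketch of a skill definition file. This is an assumption-laden example, not taken from the guide: the directory path, frontmatter fields, and instruction text are illustrative and may differ from the exact format the guide uses.

```markdown
<!-- .claude/skills/commit-messages/SKILL.md (illustrative path) -->
---
name: commit-messages
description: Write commit messages following my team's conventions.
---

When asked to write a commit message:

1. Summarize the change in an imperative subject line under 50 characters.
2. Leave a blank line, then explain *why* the change was made, not just what.
3. Reference the related issue number if one appears in the branch name.
```

The idea is that once a routine is written down this way, the agent can be pointed at it by name instead of you re-explaining the convention in every session.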
freecodecamp.org
Claude Code auto mode: a safer way to skip permissions
Anthropic introduces Claude Code auto mode, positioned as a safer alternative to skipping permission prompts outright. The feature is designed to reduce interruptions while keeping agent behavior reliable and under user control, part of Anthropic's stated focus on building tools that are both effective and secure.
anthropic.com
Podcast
Last Week in AI
Last Week in AI brings you concise weekly updates on the most important happenings in the world of artificial intelligence. From breakthroughs and advancements to policy changes and industry trends, stay informed with everything you need to know about AI in an easy-to-digest format. Perfect for anyone wanting to keep up with the fast-paced evolution of technology.
open.spotify.com