Global AI Weekly
Issue number: 137 | Tuesday, February 17, 2026
Highlights
ChatGPT rolls out ads
OpenAI is introducing ads to its ChatGPT platform as a way to generate revenue and support the ongoing development and expansion of its AI technology. This move comes after previous testing of app suggestions faced criticism for resembling unwanted ads. As the company continues to grow, monetizing its popular chatbot has become essential to sustain its operations.
techcrunch.com
Microsoft AI chief confirms plan to ditch OpenAI
Mustafa Suleyman, co-founder of Google DeepMind and head of Microsoft AI, revealed that Microsoft is planning to reduce its reliance on OpenAI. He pointed to concerns over OpenAI's financial stability, which appear to be influencing this strategic shift. The move highlights Microsoft's effort to establish greater independence in the competitive AI landscape.
windowscentral.com
AI Cold War: OpenAI Alleges DeepSeek ‘Distillation’ Tactics
OpenAI has warned U.S. lawmakers that Chinese AI startup DeepSeek may be using so-called distillation methods to rapidly train its next-gen models by extracting outputs from advanced U.S. systems, potentially bypassing access controls and “free-riding” on years of research investment. According to a memo reviewed by Reuters, OpenAI says DeepSeek staff used obfuscated routing and scripts to collect model outputs, reigniting tech-race tensions between American AI firms and Chinese rivals.
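For readers unfamiliar with the term, distillation means training one model to imitate another model's outputs. Below is a minimal, hypothetical sketch of that idea in PyTorch; the toy linear "teacher" and "student", the temperature, and the KL-based loss are illustrative assumptions, not a description of DeepSeek's or OpenAI's actual systems.

```python
# Minimal illustration of output-based knowledge distillation: a "student"
# model is trained to match the output distribution of a "teacher" model.
# Toy linear models stand in for real LLMs.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim = 100, 32

teacher = nn.Linear(dim, vocab)   # frozen stand-in for the stronger model
student = nn.Linear(dim, vocab)   # cheaper model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0                 # softens the teacher distribution

for step in range(200):
    x = torch.randn(64, dim)                 # stand-in "prompts"
    with torch.no_grad():
        teacher_logits = teacher(x)          # collected teacher outputs
    student_logits = student(x)

    # KL divergence between softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```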
reuters.com
Research
FullStack-Agent: Enhancing Agentic Full-Stack Web Coding via Development-Oriented Testing and Repository Back-Translation
This paper introduces FullStack-Agent, a framework designed to improve agentic full-stack web development through development-oriented testing and repository back-translation. The testing component evaluates coding agents with checks that align with the actual development process, improving their efficiency and reliability, while repository back-translation is used to refine the agents' understanding and performance on web development tasks.
arxiv.org
DeepGen 1.0: A Lightweight Unified Multimodal Model for Advancing Image Generation and Editing
This paper introduces DeepGen 1.0, a lightweight and unified multimodal model designed to enhance image generation and editing capabilities. It focuses on providing advanced techniques while maintaining efficiency and accessibility. The model aims to push the boundaries of creative visual outputs by combining innovative approaches in a streamlined framework.
huggingface.co
AI Meets Particle Physics: GPT-5.2 Breaks New Ground
OpenAI announced that GPT-5.2 played a central role in deriving a new theoretical physics result, publishing a preprint showing that a gluon scattering amplitude previously assumed to be zero can in fact be nonzero under specific conditions. The model first conjectured a simple general formula, which was then proven and verified through human-AI collaboration. This milestone highlights AI’s growing capability to assist at the frontier of scientific research.
openai.com
Video
Securing AI Agents with Zero Trust
Jeff Crume explores how Zero Trust principles enhance the security of AI agents by protecting autonomous systems and securing non-human identities against potential threats like prompt injection. With a focus on safeguarding innovation, he emphasizes the importance of implementing robust, AI-driven defenses to ensure secure operations. Learn how these cutting-edge security measures bolster trust and resilience in today's AI-driven landscape.
youtube.com
What is OpenRAG? Unlocking the Future of RAG in Generative AI
David Jones-Gilardi explores how OpenRAG leverages technologies like Retrieval Augmented Generation (RAG), Docling, OpenSearch, and Langflow to optimize generative AI workflows. This open-source solution is designed to enhance the precision, efficiency, and adaptability of AI systems, offering innovative advancements for cutting-edge applications. A fresh perspective on how AI tools can work smarter and more effectively.
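As background on the core retrieve-then-generate pattern the video builds on, here is a minimal, self-contained sketch; the embed() and generate() placeholders and the toy document list are assumptions for illustration, not OpenRAG's, OpenSearch's, Docling's, or Langflow's actual APIs.

```python
# Minimal retrieval-augmented generation (RAG) loop: embed documents,
# retrieve the closest ones to a query, and prepend them to the prompt.
# embed() and generate() are hypothetical placeholders for a real
# embedding model and LLM call.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: character-frequency vector (stand-in for a real model)."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "OpenSearch stores and indexes the document embeddings.",
    "Docling converts PDFs and office files into clean text chunks.",
    "Langflow lets you compose retrieval and generation steps visually.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)        # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    return f"[answer grounded in]: {prompt[:120]}..."

query = "How are documents prepared for indexing?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```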
youtube.com
Articles
8× Smarter, Not Harder: Nvidia’s LLM Cost Breakthrough
Researchers at Nvidia have introduced a technique called Dynamic Memory Sparsification (DMS) that slashes large language model reasoning memory costs by up to eight times, without compromising accuracy. By intelligently compressing the KV cache that models use during long reasoning tasks, DMS lets LLMs “think” longer and handle more concurrent requests with far lower GPU memory pressure. This could cut inference costs dramatically and make advanced AI more efficient to deploy at scale.
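As a rough illustration of what KV-cache compression means in practice, here is a toy sketch that keeps only the most-attended cache entries; the compress_kv_cache() function, the attention-mass scoring heuristic, and the 1/8 keep ratio are assumptions for illustration, not Nvidia's actual DMS algorithm.

```python
# Toy illustration of KV-cache compression: keep only the cache entries
# whose keys receive the most attention from the current query, evicting
# the rest to bound memory during long generations.
import torch

def compress_kv_cache(keys, values, query, keep_ratio=0.125):
    """keys/values: (seq_len, d); query: (d,). Keeps ~1/8 of entries (~8x cut)."""
    seq_len, d = keys.shape
    keep = max(1, int(seq_len * keep_ratio))

    # Score each cached position by its attention weight for the current query.
    attn = torch.softmax(keys @ query / d ** 0.5, dim=0)   # (seq_len,)
    top = torch.topk(attn, keep).indices.sort().values     # preserve original order

    return keys[top], values[top]

seq_len, d = 4096, 64
keys, values = torch.randn(seq_len, d), torch.randn(seq_len, d)
query = torch.randn(d)

small_k, small_v = compress_kv_cache(keys, values, query)
print(keys.numel() + values.numel(), "->", small_k.numel() + small_v.numel(), "cached floats")
```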
venturebeat.com
Shadow AI practices: A wakeup call for enterprises
Executives may be focused on broader AI strategies, but shadow AI practices are already taking root within organizations, subtly altering risk landscapes. These unapproved tools and systems operate outside the boundaries of official policies, creating vulnerabilities that enterprises are often unprepared to address. It's a stark reminder for organizations to act swiftly and tighten controls before these hidden risks escalate further.
cio.com
Anthropic's AI‑built C compiler is not all that impressive
Anthropic's AI-built C compiler has sparked mixed reactions, with some fans hailing it as groundbreaking, while developers remain less impressed. The tool has limitations and doesn’t quite deliver the level of functionality or innovation many had hoped for. It highlights the growing gap between public excitement for AI advancements and the practical concerns of professionals working with them.
theregister.com
Upcoming Events
AgentCon - The AI Agents World Tour Continues in 2026
AgentCon continues into 2026 with the AI Agents World Tour—one-day, developer-focused conferences dedicated to autonomous AI agents. Building on a successful run of events, the tour expands to even more cities worldwide, from San Francisco to Singapore and beyond. Join leading engineers, researchers, and builders to explore cutting-edge agent architectures, real-world use cases, and emerging best practices. Connect with the global AI community and help shape the future of autonomous AI.
globalai.community
Code
GitHub Agentic Workflows are now in technical preview
GitHub Agentic Workflows introduce a way to automate repository tasks using AI agents directly within GitHub Actions. Instead of relying on complex YAML, users can write workflows in simple Markdown, allowing AI to handle the more intelligent and dynamic aspects of the process. This feature, now in technical preview, simplifies workflow creation and enhances automation efficiency.
github.blog
Codex on Fire: Meet GPT-5.3-Codex-Spark
OpenAI just dropped GPT-5.3-Codex-Spark, an ultra-fast real-time coding AI now in research preview for ChatGPT Pro users. Built as a leaner variant of GPT-5.3-Codex and powered by low-latency Cerebras hardware, Spark delivers near-instant code generation (1,000+ tokens/sec) and tight interactive workflows for edits, logic tweaks, and interface refinements. It complements Codex’s long-horizon capabilities with a new real-time mode, redefining how developers iterate and build with AI.
openai.com
Podcast
Intelligent Machines
The Intelligent Machines podcast focuses on the transformative impact of AI on our daily lives as it becomes integrated into devices like cars, smartphones, and appliances. It explores the promises and challenges of this technological revolution through conversations with AI pioneers, inventors, and innovators. Listeners gain insights into what’s real versus exaggerated while preparing for a future shaped by intelligent machines.
open.spotify.com