AI Pulse · Industry Intel

AI Industry Intel: The Strategic News That Actually Moves the Frontier (Builders Edition)

Strategic AI news with a builder's read. $25B Amazon-Anthropic, custom AI chips, Claude Mythos and Project Glasswing, 75% of Google's code is AI, Harvard switching to Claude, Figma's chatbot moment. Updated on a rolling basis.

Mike Kwal
· 20 min read
Claude Just Became Infrastructure — Amazon doubled down on Anthropic ($25B); Anthropic is designing its own AI chips; 75% of new Google code is AI-written; big-7 security firms shipped Opus 4.7. INDUSTRY INTEL — MAY 2026. By Mike Kwal.

What’s covered in this report

  • The Anthropic-Amazon-CoreWeave money map — $25B in equity, $100B in compute, $30B/year in GPUs. Where the AI infrastructure is actually being built.
  • The chip wars expand — Anthropic joins Apple, Google, and Amazon in designing custom AI silicon. Why this matters for Claude’s future cost and speed.
  • Project Glasswing and the “Claude Mythos” — the model Anthropic reportedly held back. What we know, what we don’t, and how to think about it.
  • Distribution shocks — 75% of Google’s code is now AI-written, Gemini is in 4 million GM cars, and Harvard ditched ChatGPT for Claude.
  • Vendor crashes — Figma’s stock took a hit when Claude Design launched. The pattern is clear: every “AI feature inside a SaaS” is now a target.
  • The big-7 security stack — CrowdStrike, Microsoft, Palo Alto, SentinelOne, Deloitte, Accenture, PwC all deployed Claude Opus 4.7. Enterprise AI is no longer a maybe.

🚀 Plug this into Claude Code or Google Antigravity

Don’t want to read all this? Get the one-click implementation pack: download the spec, drop it into Claude Code, and let it implement on your site. Industry monitoring agent setup, the 4-question agency filter, client trust-signal templates, retainer-client email drafts for major shifts, and the tool-stack durability log — all included.

Get stuck? Want hands-on Q&A, weekly office hours, or help applying this to your specific Shopify / WordPress / Webflow site? That’s what the Talk-to-Build community is for — a technical support community for designers and creative directors building with AI.

I’m Mike Kwal. I run a daily AI Pulse video where I read the news so you don’t have to. This page is the longer-form version — the one where I slow down, connect the dots, and tell you what each headline actually means for someone building with AI.

This is not the full firehose. It’s the stuff that changes how I should build this week. I update this page every time the industry moves, which lately is most days.

Let’s go.


1. Amazon doubled down on Anthropic. $25B equity. $100B in cloud.

What happened. In late April 2026, Amazon and Anthropic announced an extension of their partnership: a fresh $25 billion equity round into Anthropic on top of Amazon’s prior investment, plus a multi-year commitment that pushes Anthropic’s total AWS compute spend past $100 billion. Anthropic also publicly committed to AWS Trainium chips as its primary training hardware.

Why it matters. The biggest question in AI right now is not “which model is smartest.” It’s “who can afford to keep training the next one?” Frontier models cost billions to train. Anthropic just got a runway long enough to keep up with OpenAI and Google for the next several model generations. AWS, in turn, locked in the customer that single-handedly justifies a decade of data-center buildout.

My read. If I’m a builder, this means Claude isn’t going anywhere. I pick my tools assuming Anthropic is here for the long haul. The same way I don’t worry about AWS disappearing, I don’t pick a different LLM provider out of fear that Claude won’t exist in 18 months. I build on the ecosystem with the deepest pockets. Right now that’s Claude on AWS.

Sources: Anthropic blog, Reuters coverage.


2. Anthropic is building its own AI chips. Apple, Google, and Amazon already do.

What happened. Reports surfaced in late April that Anthropic has hired silicon engineers and started designing custom AI accelerators. They join Apple (M-series, Apple Neural Engine), Google (TPU), and Amazon (Trainium / Inferentia) in the custom-chip game. Nvidia GPUs are still the workhorse — but every serious AI lab now wants its own backup plan.

Why it matters. Custom chips do two things: cut inference cost and break the Nvidia bottleneck. Right now, getting enough H100s or B200s is the single biggest constraint on how fast a model can scale. Owning your own chip line removes that pressure.

My read. This is a long game — chip design takes years. But the directional message I take from it is: the cost of using Claude in my client work will keep going down. The economics are pointed in my favor. AI features I can’t currently afford to bake into every client site will, within 12-24 months, become cheap enough to embed everywhere. I plan my client proposals with that in mind. The expensive AI feature I’m pricing today is the table-stakes feature I’ll be giving away next year.

Sources: The Information report on Anthropic silicon team, SemiAnalysis on AI chip economics.


3. The Anthropic + CoreWeave deal: $30 billion a year in GPU compute.

What happened. Anthropic and CoreWeave signed a multi-year infrastructure deal reported at roughly $30 billion annualized — one of the largest pure-compute deals ever announced. CoreWeave is the GPU specialist born out of crypto mining; today it’s a top-three Nvidia customer and the cloud of choice for frontier AI labs.

Why it matters. Anthropic now has compute coming from three sources: AWS (primary), Google Cloud (existing), and CoreWeave (new bulk capacity). That’s diversified supply for the most expensive ingredient in AI. It also tells you where the GPU shortage is heading: there is no shortage if you have $30B/year to spend, and a real one if you don’t.

My read. For me as a builder, this changes nothing about the API I call — but it changes everything about who can compete. The frontier model game is now a capital game. The good news? I don’t have to play it. I ride on top of it. My job is to build with the frontier, not to the frontier. That’s where the leverage is for a one-person agency like mine.

Sources: CoreWeave-Anthropic announcement, Bloomberg coverage of the deal size.


4. The Claude Mythos: “Project Glasswing” and the model they didn’t release.

What happened. Through April and early May, a story circulated — first in researcher circles, then in mainstream tech press — about an internal Anthropic model nicknamed “Project Glasswing.” The claim: Anthropic trained a model that scored too high on certain bio and cyber capability evals to release publicly. They’re keeping it internal and using a smaller, safer version (Opus 4.7) as the public release.

Anthropic has not officially confirmed every detail of the story. What they have confirmed publicly, in their Responsible Scaling Policy and several blog posts, is that they have eval thresholds at which they would not deploy a model. That part is real. The exact name “Glasswing” and the specific eval scores are still part rumor, part journalism.

Why it matters. Whether the rumor is 100% accurate or 70% accurate, the underlying story is true: AI labs now routinely train models they don’t ship. The frontier of capability is ahead of the frontier of public access. The gap is governed by safety reviews, eval thresholds, and government coordination.

My read. I don’t get distracted by the dragon. The model I can use today — Claude Opus 4.7 — is already wildly capable for what most of my client sites need. People obsessing over “the model they’re hiding” are usually procrastinating on shipping with the model they have. If you can talk it, you can build it — with the model that’s already in your browser tab.

The interesting takeaway is structural: I expect more “released vs. internal” gaps over the next 24 months. The gap will widen, not narrow. I plan my products and client work around what’s reliably available, not what’s whispered about.

Sources: Anthropic Responsible Scaling Policy, TIME on AI safety thresholds.


5. 75% of new code at Google is written by AI.

What happened. Sundar Pichai stated on Google’s Q1 2026 earnings call that more than 75% of the new code being written at Google is now AI-generated. A year ago that number was around 25%. Engineers review and approve, but the first draft is increasingly Claude, Gemini, or internal Google AI.

Why it matters. Google is the biggest software shop in the world. If 75% of their code is AI-written, the same shift is happening — quietly — at every other software company. The question for builders isn’t whether AI writes your code. It’s whether you’ve built the review and security workflow to catch what AI gets wrong.

My read. This is why I keep saying: the bottleneck moved. It used to be writing code. Now it’s reviewing code, testing code, and securing code. That’s where senior engineers earn their keep — and that’s where tools like Claude Security (covered in my Claude Security post) plug in directly.

If I’m a one-person builder shipping client websites, this is good news for me. The big shops have to figure out review workflows for thousands of engineers. I only have to figure it out for myself. I can move faster than any agency three times my size.
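The review-workflow point above can be sketched in code. This is a minimal, illustrative gate for AI-generated changes — the check names and pass/fail rules are my own assumptions, not any real CI tool's API; in practice each check would wrap a linter, a test runner, or a secret scanner.

```python
# Minimal sketch of an "AI code review gate": an AI-written change ships
# only if every required check passes. Check names are illustrative.

def review_gate(results: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, failed_checks) for a set of named check outcomes."""
    required = ["lint", "unit_tests", "secret_scan", "license_scan"]
    failed = [name for name in required if not results.get(name, False)]
    return (len(failed) == 0, failed)

# Example: a patch that passes lint and tests but trips the secret scanner.
approved, failed = review_gate(
    {"lint": True, "unit_tests": True, "secret_scan": False, "license_scan": True}
)
print(approved)  # False
print(failed)    # ['secret_scan']
```

The point is the shape, not the specific checks: for a solo builder, the whole "review workflow" can be one short script that runs before anything AI-written reaches a client site.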

Sources: Google Q1 2026 earnings transcript, Bloomberg coverage of Pichai comments.


6. Gemini is now in 4 million GM cars.

What happened. General Motors announced that Google Gemini is being deployed across roughly 4 million GM vehicles as the in-car AI assistant. Drivers can talk to Gemini for navigation, music, and climate, and increasingly to control connected services and ask general questions. It’s a Google bet that voice-first AI in the car is the next big interface after the phone.

Why it matters. Cars are the next mass-market AI surface. People drive an average of 60 minutes a day. That’s an hour of voice-AI time per user — and most of it doesn’t involve a screen. The behavioral shift is real: people are getting comfortable talking to AI instead of typing into it.

My read. This is huge for Answer Engine Optimization — and it’s how I sell AEO to design clients who don’t think they need it. When someone asks Gemini-in-the-car “who does branding near me” or “recommend a great pizza in the Annex,” the AI’s answer is the answer. There are no blue links. There is no “scroll past the ads.” If my client’s Shopify or WordPress site isn’t structured to be cited by AI engines (see my AEO Pack), they’re invisible in the car.

The interface for finding businesses just changed under my feet. I’m structuring every client site I ship from this week forward to be machine-readable first.
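"Machine-readable first" usually starts with structured data. Here's a minimal sketch that emits a schema.org LocalBusiness JSON-LD block of the kind AI answer engines read — the business details and the helper name are made up for illustration; a real build would inject this into the Shopify or WordPress template head.

```python
# Minimal sketch: generate a schema.org LocalBusiness JSON-LD snippet so
# an AI answer engine has clean, citable facts about the business.
import json

def local_business_jsonld(name: str, service: str, city: str, url: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": name,
        "description": f"{service} in {city}",
        "areaServed": city,
        "url": url,
    }
    # Wrapped in the script tag a CMS template would inject into <head>.
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

snippet = local_business_jsonld(
    "Annex Pizza Co.", "Wood-fired pizza", "Toronto", "https://example.com"
)
print(snippet)
```

That's the whole idea: the facts a voice assistant needs — who, what, where — stated in a format it can parse, instead of buried in a hero image.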

Sources: GM press release on Gemini integration, Reuters on automotive AI deployment.


7. Harvard ditched ChatGPT for Claude.

What happened. Harvard University announced an institution-wide enterprise deal with Anthropic, providing Claude access to faculty, staff, and students — replacing prior recommendations around ChatGPT for many use cases. Several other elite universities (Yale, Princeton, MIT) have similar deals in flight.

Why it matters. Universities are leading indicators. Where Harvard, Yale, and MIT go, mid-tier universities follow within 12 months. K-12 follows within 24. The “default AI” for the next generation of professionals is being chosen right now — and Claude just won a major round.

My read. A lot of designers and agency owners ask me “which model should I use?” The answer used to be “ChatGPT for ease, Claude for quality.” That’s outdated. In May 2026, Claude is winning the institutional bake-offs. If my client’s audience is in education, healthcare, or any regulated industry, I build for Claude first. Their users will already have access to Claude through their employer or their school — so the content, schema, and AEO structure I ship needs to read clean to Claude before anything else.

Sources: Harvard Crimson on the Anthropic deal, Inside Higher Ed coverage.


8. Figma’s stock dropped when Claude Design launched. This will keep happening.

What happened. When Anthropic shipped Claude Design — a feature where you describe a UI in plain English and Claude generates production-grade design files — Figma’s stock fell sharply in after-hours trading. Investors did the math: if Claude can generate Figma-quality designs from a prompt, what is the design-tool moat actually worth?

Figma still has the better collaboration story, the design-system maturity, and the entrenched workflows. But the floor under “tool that turns descriptions into designs” just dropped to $0.

Why it matters. This is a pattern, not a one-off. Every SaaS that has “AI-generated content” as its core feature is sitting on top of an LLM that any user could call directly for a fraction of the price. The middle layer is shrinking.

My read. I’m a designer. I live in Figma. So when I see this headline, my first reaction isn’t “Figma is dead” — it’s “my software stack is going to keep shifting under me, and I need to bet on principles, not tools.” The principle is talk it, ship it. The tool I do that in is going to change every 12-18 months. That’s fine. I make sure my craft — taste, hierarchy, brand systems, conversion design — lives in my head, not in a vendor’s UI.

If I’m running an agency, I ask one question every quarter: what is each tool in my stack doing for my team that I couldn’t get by talking to Claude or Gemini directly? If the answer is “not much,” I’m paying for a wrapper that’s about to get unwrapped. If the answer is “real workflow, real collaboration, real client handoff” — Figma is fine. For now.

Sources: TechCrunch on Claude Design launch, The Verge on Figma stock movement.


9. OpenAI’s GPT-5.5 Cyber: the first AI with a “job title.”

What happened. OpenAI launched GPT-5.5 Cyber — a specialized variant of GPT-5.5 fine-tuned for cybersecurity work (vulnerability analysis, incident response, threat hunting). It’s being marketed not as “a model” but as a role — the first AI sold with a specific job title attached.

Why it matters. This is the same pattern as Claude Security on the Anthropic side. The big AI labs are no longer competing on generic IQ. They’re competing on vertical performance — who has the best AI for security, the best AI for legal, the best AI for medical, the best AI for design.

My read. Generic AI is becoming a commodity. Specialized AI is where the next wave of value lives — and that’s exactly the productization play I’m running for design clients. I don’t try to build a generic AI assistant — I build the AI flow for one specific job done by one specific person. “AI website intake for solo dental practices.” “AI product-photo cleanup for Etsy sellers.” “AI client onboarding for personal injury firms.” The narrower the job, the more I can charge for the design and automation around it.

OpenAI just blessed the strategy at the model layer. I run the same play at the website-and-workflow layer.

Sources: OpenAI GPT-5.5 Cyber announcement, Wall Street Journal on AI specialization.


10. Netlify’s Frontend-Design Skill turns prompts into shippable UI.

What happened. Netlify shipped a Frontend-Design Skill as part of their AI agent runners. You describe the UI you want, the agent generates production-grade, accessible, responsive components — and deploys them to a Netlify preview URL automatically. No design tools. No copy-paste from Claude. End-to-end prompt → deployed UI.

Why it matters. This collapses the distance between “I have an idea” and “there’s a working website at a real URL.” For builders who don’t have a designer, this is a massive unlock. For agencies that charge by the design hour, it’s a margin compression event.

My read. The path from idea to live website is now under 30 minutes for someone who knows the tools. This is exactly what “if you can talk it, you can build it” looks like in practice. If I’m not using tools like this on my client work — Netlify’s skill, Webflow’s AI builder, Vercel’s v0 — I’m working harder than I need to and pricing my time wrong.

The tactical takeaway: the next time a client sends me a landing page brief, I don’t open Figma first. I open Netlify. I talk it through. I ship the preview the same hour, then I bring Figma in for the polish pass. That’s the new sequence.

Sources: Netlify blog on Frontend-Design Skill, Smashing Magazine on AI-driven frontend.


11. The big-7 just deployed Claude Opus 4.7 across security.

What happened. When Anthropic launched Claude Security in public beta on May 4, 2026, the launch list of integrations and customers was the headline. CrowdStrike, Microsoft Security, Palo Alto Networks, and SentinelOne are integrating Claude Security into their own products. Deloitte, Accenture, and PwC are deploying it for their enterprise client base. That’s seven of the most important security and consulting firms on Earth, all in on the same model release.

Why it matters. Enterprise security is the most conservative buyer in the world. CISOs don’t adopt new tools casually — every adoption is months of compliance review, contract negotiation, and risk modeling. When seven of them ship the same model on day one of public beta, it means the back-channel work has been going on for months. The market validated Claude Opus 4.7 before the public got to.

My read. If I’m a designer building websites or apps for enterprise clients, this is my green light. I literally tell prospects: “this site was built and reviewed using the same Claude Opus 4.7 that runs Microsoft Security and PwC’s audit stack.” That’s a real trust signal in a client conversation, and it’s the kind of line that closes a deal when the buyer is nervous about “AI-built” anything.

For solo designers like me, the same logic flips upside down. I can run, in my own browser, the same model that PwC sells to Fortune 500 boards. The asymmetry is mine to use — and the price I quote my client doesn’t have to look like a Big 4 invoice.

Sources: Business Standard on Claude Security launch partners, Help Net Security, DevOps.com on enterprise integrations.


The frontier of AI capability is moving faster than the frontier of public attention. The headlines I skip this week are the building blocks of next year’s normal.


My $0.02 — How I read these signals as a designer

I’m a designer running an agency. None of these news items directly tell me how to design better. None of them give me a new color theory, a new typography rule, or a new way to balance a hero section. But every single one of them quietly changes the rules I work under — what I can promise clients, what I can charge, what stack I can bet on, and which pitches close.

Here’s how I actually read these signals as a designer running an agency for design clients.

$25B Amazon-into-Anthropic = Claude is now infrastructure. When the platform behind my AI tooling has $100B of compute committed, I can stop hedging. I bet my agency stack on Claude. That means I write my client proposals assuming Claude is around in three years, the same way I assume AWS is around in three years. I stop caveating “if this AI provider is still here” in my SOWs. That’s a small thing, but it tightens my pricing and my confidence.

Anthropic building its own chips = my tool prices stop ambushing me. Custom silicon means inference costs trend down, not up. Translation for my agency: the AI features I’m pricing into client packages today won’t suddenly double in cost mid-retainer. I can offer fixed-fee monthly continuity packages that include AI workflows without sweating the markup math every quarter.

75% of Google’s code is AI = the bar for “AI-built work” just rose. If Sundar’s flagship company writes most of its code with AI, my client deliverables — the AI-assisted Webflow builds, the WordPress AI plugins I install, the Shopify AI features I configure — better feel as polished as anything Google ships. I can’t sell “AI-built” as an excuse for sloppy. The standard is “you can’t tell where the AI ended and the designer began.”

Harvard switching to Claude = I tell clients which models I use, and why. When elite institutions pick a model, my clients notice. So now in my discovery calls I explicitly say: “I built this site with Claude Opus 4.7 — the same model Harvard chose, the same model running Microsoft Security.” That’s a trust ladder. I don’t make my clients guess what’s under the hood.

Figma’s chatbot moment = I bet on principles, not tools. Figma getting hit by Claude Design is the warning shot. Every tool in my design stack will keep shifting. So I don’t fall in love with any one tool. I fall in love with the principle — talk it, ship it. That principle survives every tool migration. I tell my team to learn the craft, not the software.

Big-7 deploying Claude Opus 4.7 = my single most powerful trust signal. When I’m in a client conversation and they’re nervous about AI-built work, I say: “the same model running Microsoft Security and PwC’s audit stack is reading my code.” That sentence does more for my close rate than any portfolio piece.

That’s the lens. Industry news is never just industry news. It’s the rule change for my agency, my pricing, and the next conversation I have with a client.


What does this mean for me as a builder?

Should I switch from ChatGPT to Claude?
Don’t switch — add. Most serious builders run both. Claude tends to win on long context, careful reasoning, and code review. ChatGPT tends to win on speed, image generation, and general “first draft” work. Use the right tool for the right step.

Is now a bad time to learn AI tools because they keep changing?
The opposite. The tools change weekly, but the underlying skill — talking to AI clearly, breaking work into prompts, reviewing output — barely changes. Learn the skill on whatever tool is in front of you today. The skill carries.

Are big-tech AI deals just a bubble?
Some of it is bubble money. Most of it isn’t. AWS doesn’t sign $100B compute deals on bubble logic — they sign them because the underlying demand from enterprise customers is real and growing. The bubble fear is which AI companies survive, not whether AI itself is real.

Will custom AI chips make Claude cheaper for me?
Yes — over a 12-24 month window. The economics of custom silicon flow downhill: lower training cost → lower inference cost → lower API cost → lower price for the apps I build on top. I plan products assuming AI compute gets 5-10x cheaper over the next two years.
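The planning math behind that answer can be written down. All numbers below are assumptions for illustration — not published pricing — but the shape of the curve is the point:

```python
# Back-of-envelope sketch of the "compute gets cheaper" planning math.
# Assumes cost drops by a constant factor every two years (illustrative).

def projected_cost(cost_today: float, factor: float, years: float) -> float:
    """Per-use cost after `years`, if compute gets `factor`x cheaper every 2 years."""
    return cost_today / (factor ** (years / 2))

# A feature costing $0.40 per user session today:
print(round(projected_cost(0.40, 5, 2), 3))   # 5x cheaper in 2 years  -> 0.08
print(round(projected_cost(0.40, 10, 2), 3))  # 10x cheaper in 2 years -> 0.04
```

So a feature that's a line item on a proposal today is, under these assumptions, a rounding error in two years — which is exactly why it's safe to price it into fixed-fee packages now.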

Should I worry about a “secret Glasswing-class model” being released suddenly?
No. Even if such a model exists, it would be released gradually with safeguards. The bigger risk is the opposite: spending so much energy speculating about future models that I never ship with the model that’s already in front of me.

My SaaS tool just added “AI features.” Should I be worried it’s a wrapper?
Maybe. Ask the test question: “if I could talk to Claude or Gemini directly, would I still pay for this tool?” If yes — I’m paying for real workflow. If no — I’m paying for a UI on top of a model I already have access to. Plan accordingly.

How do you keep up with all of this?
I run a daily AI Pulse video. 30-60 seconds, one news item, one builder takeaway. Follow along on Instagram (@mikekwal) or YouTube. The news that lands here on the blog is the stuff worth long-term reference.


Want help applying this?

Four ways to go deeper:

  • Build with Builders. Join the Talk-to-Build community to learn how to earn with AI, download our AI Skills, and build real, sellable assets for website design and Shopify stores — Gen-AI images, cinematic AI videos, conversational AI office secretaries — for SMBs that want the outcomes but don’t have time to learn the skills.
  • Done-for-you. MK-Way builds AEO-ready websites and apps for design agencies and founders who want it shipped fast.
  • Quick question. DM me on Instagram. I read every message.
  • B2B / strategy. Connect on LinkedIn for deeper conversations about AI in design and agency work.

This page is part of the AI Pulse Asset Pack series. Mike Kwal updates it every time the industry shifts in a way that changes how a builder should work. If you commented “INFRA,” “INVEST,” “MYTHOS,” “CHIPS,” “DRIVE,” “DESIGN,” “CYBER,” or “NETLIFY” on one of his videos — this is the field report. Bookmark it.

Last updated: May 7, 2026.