
The Vercel incident is a reminder that AI tools are a supply chain risk. That does not mean “do not use AI tools.” I use them, and I think they are useful. It does mean teams need to stop treating them like harmless browser tabs.
Vercel’s April 2026 security bulletin says the incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. The attacker used that access to take over the employee’s Google Workspace account, then pivoted into Vercel systems. The important part is not that the tool had AI in the name. It is that the tool had access.
Modern AI tools connect to email, code, tickets, documents, browsers, terminals, cloud dashboards, and internal systems. That access is exactly why they are useful, and exactly why they need to be treated seriously. If a tool can read secrets, query systems, trigger workflows, or bridge between accounts, it is part of the security boundary whether the team admits it or not.
The answer is not panic. It is normal security discipline: MFA, passkeys, restricted OAuth access, secret rotation when exposure is possible, activity log review, least privilege, and a clear understanding of which tools can access which systems. AI tools are now part of the development supply chain, so they should get the same scrutiny as CI providers, deployment tools, package registries, browser extensions, and SaaS integrations.
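That last point, knowing which tools can access which systems, can start as something as simple as a maintained inventory that gets reviewed like any other dependency list. A minimal sketch in Python (the tool names, access categories, and the `SENSITIVE` set are invented for illustration, not drawn from any real product):

```python
# Hypothetical inventory mapping third-party tools to the systems they can reach.
# Access categories that put a tool inside the security boundary.
SENSITIVE = {"email", "source_code", "secrets", "cloud_dashboard", "terminal"}

# Illustrative entries; a real inventory would be maintained alongside
# vendor reviews and OAuth grant audits.
tools = {
    "ai-notetaker": {"email", "calendar"},
    "ai-coding-agent": {"source_code", "terminal", "secrets"},
    "ci-provider": {"source_code", "cloud_dashboard"},
}


def in_security_boundary(access: set) -> bool:
    """A tool is inside the boundary if it touches any sensitive system."""
    return bool(access & SENSITIVE)


for name, access in sorted(tools.items()):
    if in_security_boundary(access):
        exposed = sorted(access & SENSITIVE)
        print(f"{name}: treat as a supply-chain dependency (can reach {exposed})")
```

Even a file like this forces the conversation the bulletin implies was missing: which accounts a tool can pivot from, and what it can reach once it does.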