Ofri Peretz

Originally published at ofriperetz.dev

Hardening AI Agents: The Vercel AI Static Analysis Standard

AI-native applications require a new security paradigm. Here is the first automated static analysis standard for the Vercel AI SDK, catching prompt injection and unsafe agent patterns in CI/CD before they ship.

Quick Install

npm install --save-dev eslint-plugin-vercel-ai-security

Flat Config

// eslint.config.js
import vercelAI from "eslint-plugin-vercel-ai-security";

export default [vercelAI.configs.recommended];

Run ESLint

npx eslint .

You'll see output like:

src/chat.ts
  8:3  error  🔒 CWE-77 OWASP:LLM01 | Unvalidated prompt input
              Risk: Prompt injection vulnerability
              Fix: Use validated prompt: sanitizePrompt(userInput)

src/agent.ts
  24:5 error  🔒 OWASP:LLM08 | Tool missing confirmation gate
              Risk: AI agent can execute arbitrary actions
              Fix: Add await requireUserConfirmation() before execution
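
The fix hints reference helpers such as sanitizePrompt and requireUserConfirmation. Assuming these are project-level functions rather than plugin or SDK exports, the rules only check that a validation or confirmation step runs before the risky call; the implementation is yours. A minimal sketch of a confirmation gate (names and the in-memory queue are illustrative, not a prescribed API):

// Hypothetical confirmation gate. The rule checks that a gate runs before a
// tool executes; how approvals are stored and surfaced is up to you.
type PendingAction = {
  toolName: string;
  args: unknown;
  approve: () => void;
  reject: (reason: string) => void;
};

// In-memory queue for demo purposes; a real app would persist approvals
// and surface them in its UI.
const pendingApprovals: PendingAction[] = [];

export function requireUserConfirmation(
  toolName: string,
  args: unknown,
): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    pendingApprovals.push({
      toolName,
      args,
      approve: () => resolve(),
      reject: (reason) => reject(new Error(`Action rejected: ${reason}`)),
    });
  });
}

A tool's execute handler would then await requireUserConfirmation("deleteFile", { path }) before performing the side effect.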

Rule Overview

Category          | Rules | Examples
------------------|-------|-----------------------------------------------
Prompt Injection  | 4     | Unvalidated input, dynamic system prompts
Data Exfiltration | 3     | System prompt leaks, sensitive data in prompts
Agent Safety      | 3     | Missing tool confirmation, unlimited steps
Resource Limits   | 4     | Token limits, timeouts, abort signals
RAG Security      | 2     | Content validation (sketch below), embedding verification
Output Safety     | 3     | Output filtering, validation
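
The RAG Security rules treat retrieved documents as untrusted input: content pulled from a vector store can carry injection payloads just like user input. A minimal sketch of a validation step, assuming validateRetrievedContent is a helper you write yourself (the specific checks are illustrative):

// Hypothetical validation for retrieved chunks before they are interpolated
// into a prompt. Adapt the length cap and patterns to your threat model.
const MAX_CHUNK_LENGTH = 2_000;
const SUSPICIOUS_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /disregard the system prompt/i,
];

export function validateRetrievedContent(chunks: string[]): string[] {
  return chunks
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0 && chunk.length <= MAX_CHUNK_LENGTH)
    .filter((chunk) => !SUSPICIOUS_PATTERNS.some((p) => p.test(chunk)));
}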

Quick Wins

Before

// ❌ Prompt Injection Risk
const { text } = await generateText({
  model: openai("gpt-4"),
  prompt: userInput, // Unvalidated!
});

After

// ✅ Validated Input
const { text } = await generateText({
  model: openai("gpt-4"),
  prompt: sanitizePrompt(userInput),
  maxTokens: 1000,
  abortSignal: AbortSignal.timeout(30000),
});
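
sanitizePrompt above is not shipped by the SDK; it stands for whatever validation you apply before user input reaches the model. A minimal sketch, assuming a simple length cap plus rejection of obvious injection markers:

// Hypothetical prompt sanitizer. The rule only requires that user input is
// validated before it reaches generateText; these checks are a starting
// point, not a complete defense.
const MAX_PROMPT_LENGTH = 4_000;

export function sanitizePrompt(input: string): string {
  // Strip control characters and cap the length.
  const cleaned = input.replace(/[\u0000-\u001f\u007f]/g, " ").trim();
  const capped = cleaned.slice(0, MAX_PROMPT_LENGTH);
  if (/ignore (all )?previous instructions/i.test(capped)) {
    throw new Error("Prompt rejected: possible injection attempt");
  }
  return capped;
}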

Before

// ❌ Unlimited Agent
const { text } = await generateText({
  model: openai("gpt-4"),
  tools: dangerousTools,
});

After

// ✅ Limited Agent
const { text } = await generateText({
  model: openai("gpt-4"),
  tools: safeTools,
  maxSteps: 5,
});
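
What makes safeTools "safe" is scope: narrow descriptions, schema-validated arguments, and handlers that cannot reach outside an allowed boundary. A sketch of one such tool, assuming the AI SDK's tool helper with the v4-style parameters option (field names vary between SDK versions) and zod for the schema:

import { tool } from "ai";
import { z } from "zod";
import { readFile } from "node:fs/promises";
import path from "node:path";

// Hypothetical read-only tool, restricted to a single directory.
const SAFE_ROOT = path.resolve("./docs");

export const safeTools = {
  readDoc: tool({
    description: "Read a documentation file from the docs directory",
    parameters: z.object({ relativePath: z.string() }),
    execute: async ({ relativePath }) => {
      const resolved = path.resolve(SAFE_ROOT, relativePath);
      // Refuse any path that escapes the allowed directory.
      if (!resolved.startsWith(SAFE_ROOT + path.sep)) {
        throw new Error("Path outside the allowed directory");
      }
      return readFile(resolved, "utf8");
    },
  }),
};

Paired with maxSteps, this bounds both what the agent can touch and how long it can keep acting.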

Available Presets

// Security-focused configuration
vercelAI.configs.recommended;

// Full OWASP LLM Top 10 coverage
vercelAI.configs["owasp-llm-top-10"];

OWASP LLM Top 10 Mapping

Customizing Rules

// eslint.config.js
import vercelAI from "eslint-plugin-vercel-ai-security";

export default [
  vercelAI.configs.recommended,
  {
    rules: {
      // Configure max steps
      "vercel-ai/require-max-steps": ["error", { maxSteps: 10 }],

      // Make RAG validation a warning
      "vercel-ai/require-rag-content-validation": "warn",
    },
  },
];
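
Because flat config entries accept files globs, you can also limit the plugin to the parts of the codebase that actually call the AI SDK. Assuming the recommended preset is a single flat-config object (as the usage above suggests), spreading it into a scoped entry looks like this; the globs are examples:

// eslint.config.js
import vercelAI from "eslint-plugin-vercel-ai-security";

export default [
  {
    // Only lint the directories that talk to the AI SDK.
    files: ["src/ai/**/*.ts", "src/agents/**/*.ts"],
    ...vercelAI.configs.recommended,
  },
];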

Quick Reference

# Install
npm install --save-dev eslint-plugin-vercel-ai-security

# Config (eslint.config.js)
import vercelAI from 'eslint-plugin-vercel-ai-security';
export default [vercelAI.configs.recommended];

# Run
npx eslint .

Resources

📦 npm: eslint-plugin-vercel-ai-security
📖 Full Rule List
📖 OWASP LLM Mapping

⭐ Star on GitHub


The Interlace ESLint Ecosystem
Interlace is a high-fidelity suite of static code analyzers designed to automate security, performance, and reliability for the modern Node.js stack. With over 330 rules across 18 specialized plugins, it provides 100% coverage for OWASP Top 10, LLM Security, and Database Hardening.

Explore the full Documentation

Β© 2026 Ofri Peretz. All rights reserved.


Build Securely.
I'm Ofri Peretz, a Security Engineering Leader and the architect of the Interlace Ecosystem. I build static analysis standards that automate security and performance for Node.js fleets at scale.

ofriperetz.dev | LinkedIn | GitHub
