Mohammad Ali Abdul Wahed
I Built an AI Agent Showcase with Laravel AI SDK — Here’s How You Can Do It

How Laravel’s new AI SDK makes building production-ready AI features surprisingly simple (with mock mode for instant demos)
Yesterday, Laravel released their official AI SDK. As someone who’s been watching the AI integration space closely, I knew this was going to be a game-changer for PHP developers.
So I did what any excited developer would do at 11 PM: I built a complete showcase application in one night.

The result? A fully functional AI agent with chat, image generation, text-to-speech, and vector search — all working in your browser right now, no API keys required.

🔗 Live Demo: https://laravel-ai-showcase.onrender.com/
💻 GitHub: https://github.com/aliabdm/laravel-ai-showcase

Here’s exactly how I built it, and how you can do the same.

The Problem with AI Integration
Let me be honest: integrating AI into applications has traditionally been a pain. You’re juggling multiple SDKs, managing API credentials, handling streaming responses, dealing with rate limits, and building fallback systems.

Most developers want to experiment with AI features, but the barrier to entry is high. You need API keys, credit cards, and often hours of setup before you can even see your first result.

Laravel’s AI SDK solves this brilliantly.

What We’re Building
A complete AI showcase with:

✨ AI Chat with real-time streaming
🎨 Image Generation
🔊 Text-to-Speech
🔍 Vector Search with embeddings
🎯 Mock Mode — try everything without API keys
🔄 Session-based mode switching for instant demos
Step 1: Project Setup & Installation
Let’s start fresh:

# Create new Laravel project
laravel new ai-showcase
cd ai-showcase

# Install Laravel AI SDK
composer require laravel/ai

# Publish configuration
php artisan vendor:publish --tag=ai-config
The configuration is beautifully simple:

// config/ai.php
return [
    'mode' => env('AI_MODE', 'mock'), // 'mock' or 'real'

    'providers' => [
        'gemini' => [
            'api_key' => env('GEMINI_API_KEY'),
        ],
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
        ],
    ],
];
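Mode switching is driven by a single environment variable, so the relevant .env entries simply mirror the config above:

# .env
# Start in mock mode; switch to "real" once you have API keys.
AI_MODE=mock

GEMINI_API_KEY=
OPENAI_API_KEY=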
Step 2: The Secret Sauce — Mock Provider
Here’s what makes this project special: a Mock Provider that lets anyone try the app instantly.

No API keys. No setup. Just immediate results.

// app/Ai/Providers/MockProvider.php
namespace App\Ai\Providers;

class MockProvider
{
    public function text(string $prompt): string
    {
        return "Hello! I'm a mock AI assistant. You asked: {$prompt}";
    }

    public function streamText(string $prompt): \Generator
    {
        $response = $this->text($prompt);
        $words = explode(' ', $response);

        foreach ($words as $word) {
            yield $word . ' ';
            usleep(100000); // 100ms delay for realistic streaming
        }
    }

    public function image(string $prompt): object
    {
        return (object) [
            'url' => 'https://placehold.co/600x400?text=' . urlencode($prompt),
            'prompt' => $prompt,
        ];
    }

    public function audio(string $text): object
    {
        // Mock TTS (used by the text-to-speech endpoint below): point the url
        // at any static audio file bundled with the app.
        return (object) [
            'url' => asset('audio/mock-speech.mp3'),
            'text' => $text,
        ];
    }
}
This simple class saves hours of API setup and makes the app instantly demoable.
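One detail the snippets gloss over is how $this->mockProvider reaches the controllers. A minimal sketch of the wiring (my assumption, not necessarily how the showcase does it) is a container binding plus constructor injection:

// app/Providers/AppServiceProvider.php
public function register(): void
{
    // Share one MockProvider instance across the app.
    $this->app->singleton(\App\Ai\Providers\MockProvider::class);
}

// In any controller that needs it:
public function __construct(private \App\Ai\Providers\MockProvider $mockProvider)
{
}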

Step 3: Smart Mode Switching
Users need to switch between Mock and Real modes seamlessly. Here’s how:

// app/Http/Middleware/AiModeMiddleware.php
namespace App\Http\Middleware;

class AiModeMiddleware
{
    public function handle($request, $next)
    {
        if ($request->session()->has('ai_mode')) {
            config(['ai.mode' => $request->session()->get('ai_mode')]);
        }

        return $next($request);
    }
}
And the controller:

// app/Http/Controllers/ModeController.php
namespace App\Http\Controllers;

use Illuminate\Http\Request;

class ModeController extends Controller
{
    public function switch(Request $request)
    {
        $request->validate(['mode' => 'required|in:mock,real']);

        $request->session()->put('ai_mode', $request->mode);

        return response()->json([
            'success' => true,
            'mode' => $request->mode,
            'message' => "Switched to {$request->mode} mode",
        ]);
    }
}
Users can now toggle modes with a single click. No server restart needed.
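The route and middleware wiring isn't shown above, so here's roughly how I'd hook it up. The /ai/mode/switch path matches the tests later in the post; the middleware registration assumes a Laravel 11+ bootstrap/app.php (older versions would append it in the HTTP kernel instead):

// routes/web.php
use App\Http\Controllers\ModeController;
use Illuminate\Support\Facades\Route;

Route::post('/ai/mode/switch', [ModeController::class, 'switch']);

// bootstrap/app.php (Laravel 11+) — inside the Application::configure() chain
use App\Http\Middleware\AiModeMiddleware;
use Illuminate\Foundation\Configuration\Middleware;

->withMiddleware(function (Middleware $middleware) {
    // Run the session-based mode override on every web request.
    $middleware->web(append: [AiModeMiddleware::class]);
})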

Step 4: Building AI Features
Chat Feature
public function chat(Request $request)
{
    $request->validate(['message' => 'required|string']);

    if (config('ai.mode') === 'mock') {
        $response = $this->mockProvider->text($request->message);
    } else {
        $response = \Laravel\Ai\Facades\Ai::text($request->message);
    }

    return response()->json([
        'response' => (string) $response,
        'mode' => config('ai.mode'),
    ]);
}
Image Generation
public function generateImage(Request $request)
{
    $request->validate(['prompt' => 'required|string']);

    if (config('ai.mode') === 'mock') {
        $image = $this->mockProvider->image($request->prompt);
    } else {
        $image = \Laravel\Ai\Facades\Ai::image($request->prompt)
            ->landscape()
            ->generate();
    }

    return response()->json(['url' => $image->url]);
}
Text-to-Speech
public function textToSpeech(Request $request)
{
    $request->validate(['text' => 'required|string']);

    if (config('ai.mode') === 'mock') {
        $audio = $this->mockProvider->audio($request->text);
    } else {
        $audio = \Laravel\Ai\Facades\Ai::audio($request->text)
            ->female()
            ->generate();
    }

    return response()->json(['url' => $audio->url]);
}
Notice the pattern? Every feature checks the mode and gracefully switches between mock and real AI.
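Since every controller repeats the same check, one refactor I'd consider (my sketch, not part of the showcase) is pulling it into a small trait:

// app/Ai/Concerns/ResolvesAiProvider.php — hypothetical helper, not part of the showcase
namespace App\Ai\Concerns;

use App\Ai\Providers\MockProvider;

trait ResolvesAiProvider
{
    // Returns plain text from whichever backend the current mode selects.
    protected function aiText(string $prompt): string
    {
        if (config('ai.mode') === 'mock') {
            return app(MockProvider::class)->text($prompt);
        }

        return (string) \Laravel\Ai\Facades\Ai::text($prompt);
    }
}

With that in place, the chat action body shrinks to a single $this->aiText($request->message) call.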

Step 5: Real-Time Streaming (The Highlight!)
This is where it gets exciting. Real-time streaming makes AI responses feel instant and modern:

// app/Http/Controllers/StreamingController.php
public function streamWords(Request $request)
{
    $request->validate(['prompt' => 'required|string']);

    return response()->stream(function () use ($request) {
        $provider = config('ai.mode') === 'mock'
            ? $this->mockProvider
            : null;

        if ($provider) {
            foreach ($provider->streamText($request->prompt) as $chunk) {
                echo "data: " . json_encode(['chunk' => $chunk]) . "\n\n";
                // Guard ob_flush(): it warns when no output buffer is active.
                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            }
        } else {
            $stream = \Laravel\Ai\Facades\Ai::stream($request->prompt);
            foreach ($stream as $chunk) {
                echo "data: " . json_encode(['chunk' => (string) $chunk]) . "\n\n";
                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            }
        }

        echo "data: " . json_encode(['done' => true]) . "\n\n";
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
    ]);
}
Frontend JavaScript to consume the stream:

async function streamResponse(prompt) {
    const response = await fetch('/streaming/words', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
        },
        body: JSON.stringify({ prompt })
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let output = document.getElementById('output');
    output.textContent = '';

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        const chunk = decoder.decode(value);
        const lines = chunk.split('\n');

        lines.forEach(line => {
            if (line.startsWith('data: ')) {
                const data = JSON.parse(line.slice(6));
                if (data.chunk) {
                    output.textContent += data.chunk;
                }
            }
        });
    }
}
Users see responses appear word-by-word in real-time. The UX improvement is dramatic.
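One deployment gotcha with Server-Sent Events: reverse proxies like nginx buffer responses by default, which turns the "stream" back into one big chunk. If you hit that, a common fix (not part of the controller above) is sending one extra header:

// Same response()->stream() call as before; $callback stands in for the closure shown above.
return response()->stream($callback, 200, [
    'Content-Type' => 'text/event-stream',
    'Cache-Control' => 'no-cache',
    'X-Accel-Buffering' => 'no', // ask nginx-style proxies not to buffer the stream
]);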

Step 6: Vector Search with Embeddings
For semantic search capabilities:

// Migration
Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');

    if ($this->vectorAvailable()) {
        $table->vector('embedding', dimensions: 768)->index();
    } else {
        $table->text('embedding')->nullable();
    }

    $table->timestamps();
});
Search implementation:

public function semanticSearch(Request $request)
{
    $request->validate(['query' => 'required|string']);

    // Use input('query') here: $request->query is the query-string bag, not the input value.
    $query = $request->input('query');

    if (config('ai.mode') === 'mock') {
        return response()->json([
            'results' => [
                ['title' => 'Mock Result', 'content' => 'Related to: ' . $query],
            ],
        ]);
    }

    $results = Article::query()
        ->whereVectorSimilarTo('embedding', $query)
        ->limit(10)
        ->get();

    return response()->json(['results' => $results]);
}
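In mock mode the search above just returns canned results, so the mock provider never needs real vectors. If you want mock mode to exercise the storage path as well, one hypothetical extension (this embedding() method is mine, not the SDK's) is a deterministic pseudo-embedding:

// app/Ai/Providers/MockProvider.php — hypothetical addition for exercising vector storage
public function embedding(string $text, int $dimensions = 768): array
{
    // Deterministic pseudo-vector: the same text always yields the same values,
    // so demos and tests stay reproducible. Not semantically meaningful.
    $vector = [];
    for ($i = 0; $i < $dimensions; $i++) {
        $vector[] = (crc32($i . '|' . $text) % 1000) / 1000;
    }

    return $vector;
}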
Step 7: Bulletproof Error Handling
Production apps need graceful degradation:

private function shouldUseMock(): bool
{
    return config('ai.mode') === 'mock'
        || !class_exists('Laravel\Ai\Facades\Ai')
        || !$this->hasApiKeys();
}

private function hasApiKeys(): bool
{
    // Note: env() returns null once config is cached in production; reading these via config() is safer.
    return !empty(env('GEMINI_API_KEY'))
        || !empty(env('OPENAI_API_KEY'));
}

// In controllers
try {
    if ($this->shouldUseMock()) {
        $response = $this->mockProvider->text($prompt);
    } else {
        $response = \Laravel\Ai\Facades\Ai::text($prompt);
    }

    return response()->json([
        'success' => true,
        'response' => $response,
        'mode' => $this->shouldUseMock() ? 'mock' : 'real',
    ]);
} catch (\Exception $e) {
    return response()->json([
        'success' => false,
        'error' => $e->getMessage(),
    ], 500);
}
The app never breaks. It always falls back gracefully.

Step 8: Testing Everything
// tests/Feature/AiShowcaseTest.php
class AiShowcaseTest extends TestCase
{
    /** @test */
    public function mock_mode_works_without_api_keys()
    {
        config(['ai.mode' => 'mock']);

        $response = $this->postJson('/chat/demo', [
            'message' => 'Hello world',
        ]);

        $response->assertStatus(200)
            ->assertJson(['success' => true, 'mode' => 'mock']);
    }

    /** @test */
    public function mode_switching_works_via_session()
    {
        $response = $this->postJson('/ai/mode/switch', ['mode' => 'real']);

        $response->assertStatus(200);
        $this->assertEquals('real', session('ai_mode'));
    }
}
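Because the mock provider is deterministic, the other endpoints are just as easy to pin down. For example (the /image/generate route is an assumption; use whatever route points at generateImage()):

/** @test */
public function mock_image_generation_returns_a_placeholder()
{
    config(['ai.mode' => 'mock']);

    $response = $this->postJson('/image/generate', ['prompt' => 'A calm lake']);

    // The mock provider builds a placehold.co URL from the prompt, so the exact value is predictable.
    $response->assertStatus(200)
        ->assertJsonPath('url', 'https://placehold.co/600x400?text=' . urlencode('A calm lake'));
}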
Run tests:

php artisan test --filter AiShowcaseTest
Key Lessons Learned

1. Mock Mode is Essential
   - Lets anyone try your app instantly
   - Zero API costs during development
   - Deterministic responses for testing
   - Perfect for demos and documentation

2. Session-Based Switching is Powerful
   - Users toggle modes without a server restart
   - Great for A/B testing
   - Clear visual feedback builds trust

3. Streaming Transforms UX
   - Users see responses immediately
   - Perceived performance is dramatically better
   - Modern users expect real-time feedback

4. Graceful Degradation is Non-Negotiable
   - Always have a fallback
   - Never break the user experience
   - Clear error messages help debugging

What We Built
✅ Complete AI Showcase with 4 core features
✅ Dual Mode System (Mock/Real) for flexibility
✅ Real-time Streaming with Server-Sent Events
✅ Session-based Switching for instant demos
✅ Comprehensive Testing with full coverage
✅ Production Ready with proper error handling

Try It Yourself
git clone https://github.com/aliabdm/laravel-ai-showcase.git
cd laravel-ai-showcase
composer install
cp .env.example .env
php artisan key:generate
php artisan serve

Visit http://localhost:8000

All features work immediately in Mock Mode!

Live Demo: https://laravel-ai-showcase.onrender.com/
(Note: It’s on Render free tier, so first load may take a minute)

Conclusion
The Laravel AI SDK makes building AI-powered applications incredibly simple. What would have taken days of integration work now takes hours.

By implementing Mock Mode, session-based switching, and real-time streaming, we’ve created a showcase that:

Works instantly for demos
Scales to production
Provides excellent developer experience
Teaches best practices
The key lesson? Always build with both development and production in mind. Mock modes and graceful fallbacks aren’t just nice-to-have — they’re essential for professional AI applications.

What’s Next?
I’m planning follow-up articles on:

Deploying with Docker
Performance optimization techniques
Extending with custom AI tools
Production security considerations
Monitoring and logging AI usage
Want the complete code? Check out the repository and star it if you find it useful!

Questions? Drop them in the comments — I’d love to help you build your own AI showcase!

Built with ❤️ using Laravel AI SDK