Every Laravel project I touch these days has the same request somewhere on the backlog: "add AI." Usually what people mean is "call the OpenAI API and pray." But if you've ever tried to maintain a raw Http::post() to an LLM endpoint in production, you know how quickly that falls apart. No structure, no type safety, no way to swap providers, and prompts scattered across your codebase like breadcrumbs.
Laravel now has proper tooling for this. Two options worth your time: Prism, the battle-tested community package, and the brand-new official Laravel AI SDK. Here's how I use them, and the patterns that actually hold up.
1. Pick your weapon
Prism (prism-php/prism) has been around longer and feels very Laravel-native. Fluent API, multi-provider support (OpenAI, Anthropic, Gemini, Ollama), structured output with schema objects, and a solid tool system. If you need production stability today, this is the safe bet.
Laravel AI SDK (laravel/ai) is Taylor's official answer. It's still 0.x, but the architecture is clean: agent classes that implement contracts, artisan generators, middleware support, and structured output via JSON Schema. If you're starting fresh and don't mind riding the early wave, this is where the ecosystem is heading.
Install either one:
```shell
composer require prism-php/prism
# or
composer require laravel/ai
```

I'll show examples from both. The patterns are transferable.
2. Text generation that doesn't embarrass you
The simplest use case: generate text with a system prompt. Here's where most teams stop, and where most teams go wrong. A raw string concatenation is not a prompt strategy.
With Prism:
```php
// app/Services/ProductDescriptionService.php
use Prism\Prism\Facades\Prism;

class ProductDescriptionService
{
    public function generate(string $name, string $features): string
    {
        $response = Prism::text()
            ->using('anthropic', 'claude-sonnet-4-6')
            ->withSystemPrompt(
                'You write concise product descriptions for an e-commerce store. '
                . 'Max 2 sentences. No fluff. No exclamation marks.'
            )
            ->withPrompt("Product: {$name}\nFeatures: {$features}")
            ->asText();

        return $response->text;
    }
}
```

With the Laravel AI SDK:
```php
// app/Agents/ProductWriter.php
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;

class ProductWriter implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You write concise product descriptions for an e-commerce store. '
            . 'Max 2 sentences. No fluff. No exclamation marks.';
    }
}

// Usage
$description = ProductWriter::make()
    ->prompt("Product: {$name}\nFeatures: {$features}")
    ->text;
```

Both are clean. Both keep the prompt out of your controller. The key pattern: wrap every AI call in a dedicated service or agent class. When the model changes, when the prompt needs tuning, when you need to add caching, you change one file.
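That "one file" claim is easy to make concrete. Here's a minimal sketch of a caching decorator around a Prism-backed service — the CachedProductDescriptionService name and cache-key scheme are my own, not from either package:

```php
// Hypothetical decorator: identical inputs hit the API once per day
// instead of once per request.
use Illuminate\Support\Facades\Cache;

class CachedProductDescriptionService
{
    public function __construct(private ProductDescriptionService $inner) {}

    public function generate(string $name, string $features): string
    {
        // Key the cache on the exact inputs that shape the prompt.
        $key = 'product-description:' . md5($name . '|' . $features);

        return Cache::remember(
            $key,
            now()->addDay(),
            fn () => $this->inner->generate($name, $features)
        );
    }
}
```

Bind it in the container in place of the raw service and nothing else in the codebase changes.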
3. Structured output: stop parsing strings
The moment you need the AI to return data, not prose, you need structured output. This is the difference between a prototype and a production feature.
Say you're building a support ticket classifier:
```php
// app/Services/TicketClassifier.php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\NumberSchema;

class TicketClassifier
{
    public function classify(string $ticketBody): array
    {
        $schema = new ObjectSchema(
            name: 'ticket_classification',
            description: 'Classification of a support ticket',
            properties: [
                new StringSchema('category', 'The ticket category: billing, technical, account, other'),
                new StringSchema('priority', 'Priority level: low, medium, high, critical'),
                new NumberSchema('confidence', 'Confidence score between 0 and 1'),
                new StringSchema('summary', 'One-sentence summary of the issue'),
            ],
            requiredFields: ['category', 'priority', 'confidence', 'summary']
        );

        $response = Prism::structured()
            ->using('openai', 'gpt-4o')
            ->withSchema($schema)
            ->withPrompt("Classify this support ticket:\n\n{$ticketBody}")
            ->asStructured();

        return $response->structured;
        // ['category' => 'billing', 'priority' => 'high', 'confidence' => 0.92, 'summary' => '...']
    }
}
```

No regex. No json_decode and hoping. The model is constrained to your schema. If you need an enum, define an enum. If you need a number in a range, constrain it. This is what makes AI features reliable enough to feed into your business logic.
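"If you need an enum, define an enum" works on the PHP side too. A small sketch of validating the model's output at the boundary before it touches business logic — the enum and names here are illustrative, not part of either package:

```php
// Map the raw string from the model onto a backed enum, failing loudly
// on anything outside the allowed set.
enum TicketCategory: string
{
    case Billing = 'billing';
    case Technical = 'technical';
    case Account = 'account';
    case Other = 'other';
}

// Pretend this came back from the structured call.
$data = ['category' => 'billing', 'priority' => 'high'];

$category = TicketCategory::tryFrom($data['category'])
    ?? throw new InvalidArgumentException('Model returned an unknown category.');
// $category is now TicketCategory::Billing
```

Even with schema-constrained output, treating the response as untrusted input at the edge costs two lines and removes a whole class of silent failures.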
The Laravel AI SDK equivalent uses the HasStructuredOutput contract:
```php
// app/Agents/TicketClassifier.php
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasStructuredOutput;
use Laravel\Ai\Promptable;
use Illuminate\Contracts\JsonSchema\JsonSchema;

class TicketClassifier implements Agent, HasStructuredOutput
{
    use Promptable;

    public function instructions(): string
    {
        return 'You classify support tickets by category and priority.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'category' => $schema->string()->enum(['billing', 'technical', 'account', 'other'])->required(),
            'priority' => $schema->string()->enum(['low', 'medium', 'high', 'critical'])->required(),
            'confidence' => $schema->number()->minimum(0)->maximum(1)->required(),
            'summary' => $schema->string()->required(),
        ];
    }
}
```

4. Tools: let the model call your code
This is where things get genuinely powerful. Tools let the AI call functions in your application, making it context-aware without stuffing everything into the prompt.
```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Facades\Tool;

$orderLookup = Tool::as('lookup_order')
    ->for('Look up an order by order number')
    ->withStringParameter('order_number', 'The order number to look up')
    ->using(function (string $order_number): string {
        $order = Order::where('number', $order_number)->first();

        if (! $order) {
            return "Order {$order_number} not found.";
        }

        return json_encode([
            'status' => $order->status,
            'total' => $order->total,
            'shipped_at' => $order->shipped_at?->toDateString(),
            'items' => $order->items->count(),
        ]);
    });

$response = Prism::text()
    ->using('anthropic', 'claude-sonnet-4-6')
    ->withSystemPrompt('You are a customer support assistant. Use the tools to look up real data. Never guess.')
    ->withTools([$orderLookup])
    ->withMaxSteps(3)
    ->withPrompt("Customer asks: Where is my order #A1234?")
    ->asText();
```

The model decides when to call the tool, processes the result, and formulates the response. Your Order model stays the single source of truth. The AI doesn't invent data because it has a real function to call.
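One pattern worth layering on top: scope tool closures to the authenticated user, so the model physically cannot fetch someone else's data no matter what the prompt says. A hedged sketch, assuming a $customer with an orders() relation — the scoping is mine, not part of Prism:

```php
use Prism\Prism\Facades\Tool;

// $customer is the authenticated user (hypothetical variable name).
$orderLookup = Tool::as('lookup_order')
    ->for("Look up one of the current customer's orders by order number")
    ->withStringParameter('order_number', 'The order number to look up')
    ->using(function (string $order_number) use ($customer): string {
        // Querying through the relationship means the closure can only
        // ever see this customer's orders.
        $order = $customer->orders()->where('number', $order_number)->first();

        return $order
            ? json_encode(['status' => $order->status, 'total' => $order->total])
            : "Order {$order_number} not found.";
    });
```

Authorization belongs in the tool, not in the prompt. "Never look up other customers' orders" is a suggestion to the model; a relationship-scoped query is a guarantee.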
5. Prompts are code, treat them like it
The biggest mistake I see: prompts as inline strings. The moment you have more than one AI feature, you need a prompt management strategy. Here's what works:
```php
// app/Prompts/TicketClassifierPrompt.php
class TicketClassifierPrompt
{
    public static function system(): string
    {
        return <<<'PROMPT'
        You classify support tickets for a SaaS platform.

        Rules:
        - Category must be one of: billing, technical, account, other
        - Priority is based on business impact, not customer emotion
        - Critical: service down or data loss. High: blocked workflow. Medium: degraded experience. Low: questions.
        - Confidence below 0.7 means you're unsure. Flag it.
        PROMPT;
    }

    public static function user(string $body): string
    {
        return "Classify this ticket:\n\n{$body}";
    }
}
```

Dedicated classes. Version-controlled. Testable. When you iterate on your prompts (and you will, constantly), you get a clean diff in your PR.
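"Testable" is literal: because the prompt is just a class returning strings, a plain PHPUnit test can pin the rules that matter without touching any API. A sketch:

```php
// tests/Unit/TicketClassifierPromptTest.php
use PHPUnit\Framework\TestCase;

class TicketClassifierPromptTest extends TestCase
{
    public function test_system_prompt_pins_the_category_list(): void
    {
        $prompt = TicketClassifierPrompt::system();

        // If someone edits the categories or the confidence rule,
        // this fails in CI before it fails in production.
        $this->assertStringContainsString('billing, technical, account, other', $prompt);
        $this->assertStringContainsString('Confidence below 0.7', $prompt);
    }
}
```

It looks trivial, but it catches the most common prompt regression: a well-meaning edit that silently drops a constraint your downstream code depends on.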
The sharp edges
A few things that will bite you if you're not careful:
Rate limits. Every provider has them. Wrap your AI calls in a queued job with retries and exponential backoff. Don't call an LLM synchronously in a web request unless it's behind a loading state.
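A minimal sketch of that job shape, assuming a Ticket model and the TicketClassifier service from earlier — $tries and backoff() are standard Laravel queue features:

```php
// app/Jobs/ClassifyTicketJob.php
use App\Models\Ticket;
use App\Services\TicketClassifier;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ClassifyTicketJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public int $tries = 3;

    public function __construct(public int $ticketId) {}

    // Escalating backoff: wait 30s, then 2min, then 10min between attempts,
    // which rides out most provider rate-limit windows.
    public function backoff(): array
    {
        return [30, 120, 600];
    }

    public function handle(TicketClassifier $classifier): void
    {
        $ticket = Ticket::findOrFail($this->ticketId);

        // Assumes category/priority/etc. are fillable on the model.
        $ticket->update($classifier->classify($ticket->body));
    }
}
```

Dispatch it from the controller and return immediately; the classification lands on the ticket when the provider cooperates.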
Cost. Structured output with tools can chain multiple API calls. A single withMaxSteps(5) can trigger 5 round-trips. Monitor your usage. Set hard limits.
Testing. Both Prism and Laravel AI SDK support faking responses in tests. Use it. Don't hit real APIs in your test suite. Prism has Prism::fake(), the official SDK has its own test helpers.
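For Prism, a faked structured response looks roughly like this — I'm going from Prism's testing docs as I recall them, so check the fake class names against your installed version:

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Testing\StructuredResponseFake;

public function test_it_classifies_a_billing_ticket(): void
{
    // Queue up a canned response; no HTTP call is made.
    Prism::fake([
        StructuredResponseFake::make()->withStructured([
            'category' => 'billing',
            'priority' => 'high',
            'confidence' => 0.92,
            'summary' => 'Customer was double-charged.',
        ]),
    ]);

    $result = (new TicketClassifier)->classify('I was charged twice this month.');

    $this->assertSame('billing', $result['category']);
}
```

Note what this tests: not the model's judgment, but your plumbing — that the schema, the service, and whatever consumes the array all agree on shape.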
Prompt injection. If your prompt includes user input, assume they're trying to break out of your instructions. Separate system prompts from user content. Never interpolate user input into system instructions.
AI in Laravel isn't magic. It's plumbing. Good plumbing: typed responses, dedicated service classes, schema-constrained output, versioned prompts. Bad plumbing: Http::post('openai...') in a controller with a string prompt. The tooling is finally good enough to build on. Use it properly.