If you are a PHP developer looking for a practical way to start with AI, this is a good place to begin. We will build a real Laravel feature that drafts support replies, then harden it with practical guardrails you can use in production.
AI for PHP developers: what you are building today
We are building a support-reply assistant for a Laravel app:
- Input: customer question + optional account context.
- Output: a clean suggested reply for your support agent.
- Value: faster first response time, better consistency, less repetitive writing.
You can use this exact pattern in:
- SaaS support dashboards
- Ecommerce back-office tools
- Internal operations panels
Build the Laravel support assistant in 20 minutes
1) Install dependencies
```bash
composer require guzzlehttp/guzzle
```

2) Add environment variables

```env
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=gpt-5
```

3) Add OpenAI config in config/services.php

```php
'openai' => [
    'key' => env('OPENAI_API_KEY'),
    'model' => env('OPENAI_MODEL', 'gpt-5'),
],
```

4) Create a service class
```php
<?php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class SupportReplyService
{
    public function suggestReply(string $customerMessage, string $accountContext = ''): string
    {
        $instructions = <<<PROMPT
        You are a senior customer support assistant.
        Write concise, polite replies in plain English.
        If the request needs account-specific action, ask one clear follow-up question.
        Never invent refunds, discounts, or policy exceptions.
        PROMPT;

        $input = "Account context: {$accountContext}\n\nCustomer message: {$customerMessage}";

        $response = Http::withToken(config('services.openai.key'))
            ->timeout(30)
            ->post('https://api.openai.com/v1/responses', [
                'model' => config('services.openai.model'),
                'instructions' => $instructions,
                'input' => $input,
            ])
            ->throw()
            ->json();

        // In the raw Responses API JSON, the text lives under output[].content[].text;
        // `output_text` is a convenience property of the official SDKs, not the raw payload.
        return data_get($response, 'output.0.content.0.text', 'Sorry, I could not generate a reply.');
    }
}
```

5) Add a controller and route
```php
<?php

namespace App\Http\Controllers;

use App\Services\SupportReplyService;
use Illuminate\Http\Request;

class SupportAssistantController extends Controller
{
    public function __invoke(Request $request, SupportReplyService $service)
    {
        $data = $request->validate([
            'customer_message' => ['required', 'string', 'max:5000'],
            'account_context' => ['nullable', 'string', 'max:5000'],
        ]);

        $reply = $service->suggestReply(
            $data['customer_message'],
            $data['account_context'] ?? ''
        );

        return back()->withInput()->with('suggested_reply', $reply);
    }
}
```

Then register the routes in routes/web.php:

```php
use App\Http\Controllers\SupportAssistantController;

Route::view('/support-assistant', 'support-assistant');
Route::post('/support-assistant', SupportAssistantController::class);
```

6) Minimal Blade view
```blade
<form method="POST" action="/support-assistant">
    @csrf

    <label>Customer message</label>
    <textarea name="customer_message" required>{{ old('customer_message') }}</textarea>

    <label>Account context (optional)</label>
    <textarea name="account_context">{{ old('account_context') }}</textarea>

    <button type="submit">Generate reply</button>
</form>

@if (session('suggested_reply'))
    <h2>Suggested reply</h2>
    <pre>{{ session('suggested_reply') }}</pre>
@endif
```

Real-world scenario: support desk for an ecommerce SaaS
Imagine your customer writes:
"My order sync failed again and the dashboard says webhook signature mismatch."
A good generated reply can:
- acknowledge the issue,
- request one necessary diagnostic detail,
- provide one immediate next step,
- avoid overpromising.
This keeps responses fast while still letting a human review before sending.
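For a scenario like this, the account context should come from a curated whitelist of fields, not the raw account record. A minimal sketch (the helper name and field names are hypothetical, chosen for this example):

```php
<?php

// Hypothetical helper: build the account context string from a curated
// whitelist of fields instead of dumping the raw account payload.
function buildAccountContext(array $account): string
{
    // Only these fields are ever sent to the model; secrets and PII stay out.
    $allowed = ['plan', 'integration', 'last_sync_status', 'webhook_endpoint_set'];

    $lines = [];
    foreach ($allowed as $field) {
        if (array_key_exists($field, $account)) {
            $lines[] = "{$field}: {$account[$field]}";
        }
    }

    return implode("\n", $lines);
}

$context = buildAccountContext([
    'plan'                 => 'Pro',
    'integration'          => 'Shopify',
    'last_sync_status'     => 'failed: webhook signature mismatch',
    'webhook_secret'       => 'whsec_abc123', // present in the record, never forwarded
    'webhook_endpoint_set' => 'yes',
]);

echo $context;
```

Anything not on the whitelist, including the webhook secret, simply never reaches the prompt.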
Common mistakes when starting with AI in PHP
- Sending full raw database payloads instead of curated context fields.
- Using vague prompts like "answer this customer," which leads to inconsistent tone.
- Skipping request timeouts and `->throw()` handling, then debugging silent failures.
- Returning AI output directly to customers without human review in v1.
- Hardcoding model IDs in code instead of using environment config.
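Silent failures often come from assuming the response shape. In the Responses API, `output` is an array of items that can include non-message entries (for example, reasoning items), so a defensive extractor should scan for the first message rather than hardcoding index 0. A sketch, assuming the documented payload shape:

```php
<?php

// Sketch: defensively extract the reply text from a decoded Responses API
// payload. `output` can contain non-message items, so scan for the first
// message instead of assuming it sits at index 0.
function extractReplyText(array $response, string $fallback): string
{
    foreach ($response['output'] ?? [] as $item) {
        if (($item['type'] ?? null) !== 'message') {
            continue;
        }
        foreach ($item['content'] ?? [] as $part) {
            if (($part['type'] ?? null) === 'output_text' && isset($part['text'])) {
                return $part['text'];
            }
        }
    }

    // Nothing usable came back: return the safe fallback instead of crashing.
    return $fallback;
}

// Minimal payload shaped like a Responses API result.
$payload = [
    'output' => [
        ['type' => 'reasoning', 'summary' => []],
        ['type' => 'message', 'content' => [
            ['type' => 'output_text', 'text' => 'Thanks for reaching out!'],
        ]],
    ],
];

echo extractReplyText($payload, 'Sorry, I could not generate a reply.');
```

Dropping this into the service class in place of the single `data_get()` call makes the happy path and the failure path both explicit.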
Production checklist before you ship
- Add server-side validation for every input field.
- Enforce timeout and retry policy for API calls.
- Log request IDs and latency for debugging.
- Keep prompts versioned (for example, in `config/ai.php`).
- Add a manual review step for customer-facing replies.
- Track acceptance metrics: draft acceptance rate, response time, escalation rate.
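One way to keep prompts versioned, as the checklist suggests, is a config file holding named prompt versions, so a prompt change shows up as a reviewable diff. This layout for `config/ai.php` is a sketch, not a Laravel convention:

```php
<?php

// config/ai.php (hypothetical layout): versioned prompts, selected by key.
return [
    // Which prompt version is live; switchable per environment.
    'prompt_version' => env('AI_PROMPT_VERSION', 'support_reply.v2'),

    'prompts' => [
        'support_reply.v1' => 'You are a customer support assistant. Reply politely.',
        'support_reply.v2' => <<<PROMPT
        You are a senior customer support assistant.
        Write concise, polite replies in plain English.
        If the request needs account-specific action, ask one clear follow-up question.
        Never invent refunds, discounts, or policy exceptions.
        PROMPT,
    ],
];
```

The service would then read `config('ai.prompts.' . config('ai.prompt_version'))`, and logging the active version alongside each request ID makes regressions easy to trace.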
FAQ
1) Should I use the Responses API or Chat Completions in new Laravel projects?
For most new builds, start with the Responses API: it is the modern path and supports advanced workflows you will likely need later.
2) Which model should I use first?
Start with a reliable general model via `OPENAI_MODEL`, then benchmark quality and cost on your own prompts before scaling.
3) Do I need vector databases on day one?
No. Start with single-turn or short-context workflows first. Add retrieval only when your use case needs large external knowledge.
Series navigation
- Previous: All articles
- Foundation: AI for PHP and Web Developers: Complete 6-Part Series
- Next: Laravel OpenAI API Integration: Build a Production-Ready Endpoint
Continue with the series
If this walkthrough was useful, subscribe for the next five implementation-focused parts of this series.