Most teams struggle with AI in PHP because the output is inconsistent, not because API calls are hard. In this PHP prompt engineering guide, we will build a robust extraction flow that returns validated JSON every time.
Why structured output matters in real Laravel apps
If your output feeds invoices, dashboards, or automations, free-form text is risky. You need a strict contract:
- same keys every run,
- validated types,
- safe fallback when parsing fails.
This is where prompt design and Laravel validation work together.
Define the output contract before writing prompts
Start by defining what your app expects, not what the model might produce.
```php
$rules = [
    'invoice_number' => ['required', 'string', 'max:100'],
    'invoice_date' => ['nullable', 'date'],
    'due_date' => ['nullable', 'date'],
    'amount' => ['required', 'numeric', 'min:0'],
    'currency' => ['required', 'string', 'size:3'],
    'vendor_name' => ['required', 'string', 'max:200'],
    'needs_human_review' => ['required', 'boolean'],
];
```

Now your prompt can explicitly target this shape.
Build the extraction service with strict instructions
```php
<?php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class InvoiceExtractionService
{
    public function extractRaw(string $emailBody): string
    {
        $instructions = <<<PROMPT
        Return ONLY valid JSON.
        Allowed keys: invoice_number, invoice_date, due_date, amount, currency, vendor_name, needs_human_review.
        Rules:
        - currency must be 3-letter uppercase.
        - amount must be numeric.
        - if uncertain, set needs_human_review=true.
        No markdown. No explanation text.
        PROMPT;

        $response = Http::withToken(config('services.openai.key'))
            ->timeout(45)
            ->post('https://api.openai.com/v1/responses', [
                'model' => config('services.openai.model'),
                'instructions' => $instructions,
                'input' => $emailBody,
                'text' => [
                    'format' => [
                        'type' => 'json_object',
                    ],
                ],
            ])
            ->throw()
            ->json();

        // The raw Responses API payload nests generated text inside the
        // output array; the top-level output_text shortcut exists only in
        // the official SDKs, not in the plain HTTP response.
        return (string) data_get($response, 'output.0.content.0.text', '{}');
    }
}
```

Validate, normalize, and fail safely in Laravel
```php
<?php

namespace App\Actions;

use App\Services\InvoiceExtractionService;
use Illuminate\Support\Arr;
use Illuminate\Support\Facades\Validator;
use Illuminate\Validation\ValidationException;

class ExtractInvoiceAction
{
    public function __construct(private InvoiceExtractionService $service)
    {
    }

    public function execute(string $emailBody): array
    {
        $raw = $this->service->extractRaw($emailBody);
        $decoded = json_decode($raw, true);

        if (! is_array($decoded)) {
            // Fail closed: a non-JSON response never reaches validation.
            throw ValidationException::withMessages([
                'payload' => ['Model response was not valid JSON.'],
            ]);
        }

        // Whitelist the expected keys so unknown fields are dropped.
        $normalized = Arr::only($decoded, [
            'invoice_number',
            'invoice_date',
            'due_date',
            'amount',
            'currency',
            'vendor_name',
            'needs_human_review',
        ]);

        $normalized['currency'] = strtoupper((string) ($normalized['currency'] ?? ''));

        $validator = Validator::make($normalized, [
            'invoice_number' => ['required', 'string', 'max:100'],
            'invoice_date' => ['nullable', 'date'],
            'due_date' => ['nullable', 'date'],
            'amount' => ['required', 'numeric', 'min:0'],
            'currency' => ['required', 'string', 'size:3'],
            'vendor_name' => ['required', 'string', 'max:200'],
            'needs_human_review' => ['required', 'boolean'],
        ]);

        if ($validator->fails()) {
            throw ValidationException::withMessages($validator->errors()->toArray());
        }

        return $validator->validated();
    }
}
```

Fallback flow (important): if validation fails, push the email into a manual review queue and do not auto-write accounting records.
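As a minimal sketch of that fallback from the calling side: the `Invoice` model, the `ReviewInvoiceEmail` queued job, and the queue name below are illustrative assumptions, not part of the service above.

```php
use App\Actions\ExtractInvoiceAction;
use App\Jobs\ReviewInvoiceEmail;   // hypothetical queued job
use App\Models\Invoice;            // hypothetical Eloquent model
use Illuminate\Validation\ValidationException;

function handleIncomingEmail(string $emailBody, ExtractInvoiceAction $action): void
{
    try {
        $invoice = $action->execute($emailBody);

        // Low risk: only a draft record is created; a human still posts it.
        Invoice::create($invoice + ['status' => 'draft']);
    } catch (ValidationException $e) {
        // Fail closed: route the raw email plus the errors to manual review.
        ReviewInvoiceEmail::dispatch($emailBody, $e->errors())
            ->onQueue('manual-review');
    }
}
```

The key design choice is that the catch branch never writes to accounting tables; invalid payloads only ever produce review work.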
Real-world scenario: invoice inbox automation
A finance team receives 2,000 vendor emails monthly. Manual extraction adds hours of repetitive work.
Using the flow above:
- low-risk invoices auto-fill draft records,
- ambiguous emails route to human review,
- validation prevents broken payloads from touching core tables.
This is how AI improves speed without compromising data quality.
Common mistakes in prompt engineering for PHP
- Mixing natural-language explanation with machine JSON in one output.
- Changing key names in prompts without updating validators.
- Accepting unknown fields instead of whitelisting expected keys.
- Failing open when JSON decode fails.
- Not versioning prompts alongside application releases.
Production checklist
- Keep prompt text versioned (for example, invoice_extract_v3).
- Validate every model output server-side.
- Route invalid payloads to manual review.
- Capture sample failures for regression tests.
- Track parse-success rate and schema-violation rate.
- Alert when violation rate crosses threshold.
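One way to act on the "capture sample failures" item: replay captured model outputs through the action in a regression test. The fixtures path and test class below are assumptions for illustration; the stub keeps the test from ever calling the real API.

```php
<?php

namespace Tests\Unit;

use App\Actions\ExtractInvoiceAction;
use App\Services\InvoiceExtractionService;
use Illuminate\Validation\ValidationException;
use Tests\TestCase;

class ExtractInvoiceActionRegressionTest extends TestCase
{
    public function test_previously_captured_bad_payloads_still_fail_closed(): void
    {
        // Each fixture is a raw model output that broke parsing or validation once.
        $fixtures = glob(base_path('tests/fixtures/invoice_failures/*.json'));

        foreach ($fixtures as $fixture) {
            $service = $this->createStub(InvoiceExtractionService::class);
            $service->method('extractRaw')->willReturn(file_get_contents($fixture));

            try {
                (new ExtractInvoiceAction($service))->execute('any email body');
                $this->fail("Payload {$fixture} should have been rejected.");
            } catch (ValidationException $e) {
                $this->assertNotEmpty($e->errors());
            }
        }
    }
}
```

Every new production failure becomes one more fixture file, so prompt changes can be rolled out with evidence instead of hope.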
FAQ
1) Should I parse output with regex if JSON fails?
No. Regex patching creates fragile systems. Fail safely and route to review.
2) Can I ask for both explanation and JSON?
For machine workflows, keep response JSON-only. Ask for explanation in a separate call if needed.
3) How often should prompts change?
Only when needed. Version them and run regression tests before rolling out.
Series navigation and references
- Previous: Laravel OpenAI API Integration: Build a Production-Ready Endpoint
- Foundation: AI for PHP and Web Developers: Complete 6-Part Series
- Next: RAG in Laravel: Query Your Docs with Embeddings
If this article was useful, the next part shows how to ground answers using your own docs.