
Laravel AI Evaluation
Real-call LLM evals for Laravel AI

Make sure your agents respond how you want them to.

Quick Start

1) Install

```bash
composer require --dev larswiegers/laravel-ai-evaluation
```

2) Configure your run mode

If you run your evals through Pest, register the tests/AgentEvals directory in tests/Pest.php:

```php
pest()->extend(Tests\TestCase::class)->in('Feature', 'AgentEvals');
```

No additional setup is required.

3) Generate an eval file

```bash
# Generate a Pest-based eval
php artisan make:ai-evals refund-policy --type=pest

# Generate a standalone eval
php artisan make:ai-evals refund-policy --type=standalone
```

The command scaffolds a starter file you can edit for your agent and expectations.

4) Run

```bash
# Run Pest-based evals
vendor/bin/pest tests/AgentEvals

# Run standalone evals
php artisan ai-evals:run
```

5) Configure summary output

Enable summaries and choose the format in your .env (or CI environment):

```env
AI_EVAL_SUMMARY=true
AI_EVAL_SUMMARY_FORMAT=text
AI_EVAL_SUMMARY_CURRENCY=USD
```

```env
AI_EVAL_SUMMARY=true
AI_EVAL_SUMMARY_FORMAT=json
AI_EVAL_SUMMARY_CURRENCY=USD
```
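In CI it can be easier to export these variables for a single job than to edit .env. A minimal sketch, using the variable names from the block above (the eval command itself is assumed to read them and is left commented out):

```shell
# Export the summary settings for this CI job instead of editing .env.
export AI_EVAL_SUMMARY=true
export AI_EVAL_SUMMARY_FORMAT=json
export AI_EVAL_SUMMARY_CURRENCY=USD

# Show what the eval run will see.
env | grep '^AI_EVAL_' | sort

# php artisan ai-evals:run   # the actual run, assumed to pick these up
```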

6) Get the summary output

Run your evals and check the end of the output:

With AI_EVAL_SUMMARY_FORMAT=text:

```text
$ vendor/bin/pest tests/AgentEvals

AI Eval Summary
Passed: 12
Failed: 1
Prompt tokens: 7,842
Completion tokens: 1,966
Total tokens: 9,808
Estimated cost: $0.07 USD
```

With AI_EVAL_SUMMARY_FORMAT=json:

```text
$ php artisan ai-evals:run

{"passed":12,"failed":1,"tokens":{"prompt":7842,"completion":1966,"total":9808},"cost":{"amount":0.07,"currency":"USD"}}
```
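The JSON format is handy for gating CI on eval results. A hedged sketch that parses the summary shape shown above (it assumes python3 is available on the runner; in a real pipeline you would capture the command's output instead of hard-coding the sample):

```shell
# Hypothetical CI gate: fail the job when any eval failed.
# Sample summary copied from the output above, standing in for the real run.
summary='{"passed":12,"failed":1,"tokens":{"prompt":7842,"completion":1966,"total":9808},"cost":{"amount":0.07,"currency":"USD"}}'

# Pull the "failed" count out of the JSON.
failed=$(printf '%s' "$summary" | python3 -c 'import json,sys; print(json.load(sys.stdin)["failed"])')
echo "failed evals: $failed"

if [ "$failed" -gt 0 ]; then
    echo "Eval gate would fail this build."
fi
```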