More examples of using LLM via Inference class
ddebowczyk committed Oct 3, 2024
1 parent 5411bf8 commit 2c2a88f
Showing 16 changed files with 579 additions and 4 deletions.
@@ -113,7 +113,7 @@ $events = $instructor
->request(
messages: $report,
responseModel: Sequence::of(ProjectEvent::class),
model: 'claude-3-haiku-20240307', //'claude-3-sonnet-20240229',
model: 'claude-3-5-sonnet-20240620', // 'claude-3-haiku-20240307'
prompt: 'Extract a list of project events with all the details from the provided input in JSON format using schema: <|json_schema|>',
mode: Mode::Json,
examples: [['input' => 'Acme Insurance project to implement SalesTech CRM solution is currently in RED status due to delayed delivery of document production system, led by 3rd party vendor - Alfatech. Customer (Acme) is discussing the resolution with the vendor. Production deployment plan has been finalized on Aug 15th and awaiting customer approval.', 'output' => [["type" => "object", "title" => "sequenceOfProjectEvent", "description" => "A sequence of ProjectEvent", "properties" => ["list" => [["title" => "Absorbing delay by deploying extra resources", "description" => "System integrator (SysCorp) are working to absorb some of the delay by deploying extra resources to speed up development when the doc production is done.", "type" => "action", "status" => "open", "stakeholders" => [["name" => "SysCorp", "role" => "system integrator", "details" => "System integrator",],], "date" => "2021-09-01",], ["title" => "Finalization of production deployment plan", "description" => "Production deployment plan has been finalized on Aug 15th and awaiting customer approval.", "type" => "progress", "status" => "open", "stakeholders" => [["name" => "Acme", "role" => "customer", "details" => "Customer",],], "date" => "2021-08-15",],],]]]]],
1 change: 1 addition & 0 deletions docs/cookbook/examples/extras/image_to_data_anthropic.mdx
@@ -57,6 +57,7 @@ $receipt = (new Instructor)->withConnection('anthropic')->respond(
input: Image::fromFile(__DIR__ . '/receipt.png'),
responseModel: Receipt::class,
prompt: 'Extract structured data from the receipt. Return result as JSON following this schema: <|json_schema|>',
model: 'claude-3-5-sonnet-20240620',
mode: Mode::Json,
options: ['max_tokens' => 4096]
);
2 changes: 1 addition & 1 deletion docs/cookbook/examples/extras/llm.mdx
@@ -5,7 +5,7 @@ docname: 'llm'

## Overview

LLM class offers access to LLM APIs and convenient methods to execute
`Inference` class offers access to LLM APIs and convenient methods to execute
model inference, incl. chat completions, tool calling or JSON output
generation.
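
For orientation, here is a minimal sketch of a plain chat completion with the `Inference` class. It mirrors the pattern used in the examples added in this commit; the `toText()` helper (assumed here to return the completion text) is not shown in this diff.

```php
<?php
// Minimal sketch, not part of the original diff; assumes toText() returns
// the completion text and that the default connection is configured.
require 'vendor/autoload.php';

use Cognesy\Instructor\Extras\LLM\Inference;

$answer = (new Inference)
    ->withConnection('openai') // optional, default is set in /config/llm.php
    ->create(
        messages: [['role' => 'user', 'content' => 'What is the capital of Germany?']],
        options: ['max_tokens' => 64],
    )
    ->toText();

echo "ASSISTANT: $answer\n";
?>
```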

58 changes: 58 additions & 0 deletions docs/cookbook/examples/extras/llm_json.mdx
@@ -0,0 +1,58 @@
---
title: 'Working directly with LLMs and JSON - JSON mode'
docname: 'llm_json'
---

## Overview

When working with the `Inference` class, you can also generate JSON output
from the model inference. This is useful, for example, when you need to
process the response in a structured way or when you want to store the
elements of the response in a database.

The `Inference` class supports multiple inference modes, like `Tools`, `Json`,
`JsonSchema` or `MdJson`, which gives you the flexibility to choose the best
approach for your use case.
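
As a quick reference, the mode is selected via the `Mode` enum passed to `create()`. A sketch of the values used across these cookbook examples (see the dedicated pages for the full calls):

```php
<?php
use Cognesy\Instructor\Enums\Mode;

// Modes used in these examples (sketch only):
Mode::Tools;      // JSON returned as tool / function call arguments (llm_tools)
Mode::Json;       // provider JSON mode (this example)
Mode::JsonSchema; // provider-enforced JSON Schema (llm_json_schema)
Mode::MdJson;     // JSON inside a Markdown code block, prompt-based (llm_md_json)
```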

## Example

In this example we will use OpenAI JSON mode, which guarantees that the
response will be valid JSON.

It does not, however, guarantee compliance with a specific schema (for some
providers, including OpenAI). We can work around this by providing an example
of the expected JSON output in the prompt.

> NOTE: Some model providers allow you to specify a JSON schema for the model
to follow via the `schema` parameter of `response_format`. OpenAI does not
support this feature in JSON mode (only in JSON Schema mode).

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;

// regular API, allows to customize inference options
$data = (new Inference)
->withConnection('openai') // optional, default is set in /config/llm.php
->create(
messages: [['role' => 'user', 'content' => 'What is capital of France? \
Respond with JSON data containing name, population and year of founding. \
Example: {"name": "Berlin", "population": 3700000, "founded": 1237}']],
responseFormat: [
'type' => 'json_object',
],
options: ['max_tokens' => 64],
mode: Mode::Json,
)
->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);

?>
```
73 changes: 73 additions & 0 deletions docs/cookbook/examples/extras/llm_json_schema.mdx
@@ -0,0 +1,73 @@
---
title: 'Working directly with LLMs and JSON - JSON Schema mode'
docname: 'llm_json_schema'
---

## Overview

When working with the `Inference` class, you can also generate JSON output
from the model inference. This is useful, for example, when you need to
process the response in a structured way or when you want to store the
elements of the response in a database.

## Example

In this example we will use OpenAI JSON Schema mode, which guarantees
that the response will be valid JSON matching the provided schema.

> NOTE: JSON Schema mode with guaranteed structured outputs is not
supported by all language model providers.

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;

// regular API, allows to customize inference options
$data = (new Inference)
->withConnection('openai')
->create(
messages: [['role' => 'user', 'content' => 'What is capital of France? \
Respond with JSON data.']],
responseFormat: [
'type' => 'json_schema',
'description' => 'City data',
'json_schema' => [
'name' => 'city_data',
'schema' => [
'type' => 'object',
'description' => 'City information',
'properties' => [
'name' => [
'type' => 'string',
'description' => 'City name',
],
'founded' => [
'type' => 'integer',
'description' => 'Founding year',
],
'population' => [
'type' => 'integer',
'description' => 'Current population',
],
],
'additionalProperties' => false,
'required' => ['name', 'founded', 'population'],
],
'strict' => true,
],
],
options: ['max_tokens' => 64],
mode: Mode::JsonSchema,
)
->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);

?>
```
48 changes: 48 additions & 0 deletions docs/cookbook/examples/extras/llm_md_json.mdx
@@ -0,0 +1,48 @@
---
title: 'Working directly with LLMs and JSON - MdJSON mode'
docname: 'llm_md_json'
---

## Overview

When working with the `Inference` class, you can also generate JSON output
from the model inference. This is useful, for example, when you need to
process the response in a structured way or when you want to store the
elements of the response in a database.

## Example

In this example we will use the emulation mode `MdJson`, which tries to
force the model to generate JSON output by asking it to respond
with a JSON object within a Markdown code block.

This is useful for models which do not support JSON output directly.

We will also provide an example of the expected JSON output in the prompt
to guide the model in generating the correct response.

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;

// regular API, allows to customize inference options
$data = (new Inference)
->withConnection('openai')
->create(
messages: [['role' => 'user', 'content' => 'What is capital of France? \
Respond with JSON data containing name, population and year of founding. \
Example: {"name": "Berlin", "population": 3700000, "founded": 1237}']],
options: ['max_tokens' => 64],
mode: Mode::MdJson,
)
->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);

?>
```
76 changes: 76 additions & 0 deletions docs/cookbook/examples/extras/llm_tools.mdx
@@ -0,0 +1,76 @@
---
title: 'Working directly with LLMs and JSON - Tools mode'
docname: 'llm_tools'
---

## Overview

When working with the `Inference` class, you can also generate JSON output
from the model inference. This is useful, for example, when you need to
process the response in a structured way or when you want to store the
elements of the response in a database.

## Example

In this example we will use OpenAI tools mode, in which the model generates
JSON containing the arguments for a function call. This way we can make the
model produce a JSON object with a specific structure of parameters.

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;

// regular API, allows to customize inference options
$data = (new Inference)
->withConnection('openai')
->create(
messages: [['role' => 'user', 'content' => 'What is capital of France? \
Respond with function call.']],
tools: [[
'type' => 'function',
'function' => [
'name' => 'extract_data',
'description' => 'Extract city data',
'parameters' => [
'type' => 'object',
'description' => 'City information',
'properties' => [
'name' => [
'type' => 'string',
'description' => 'City name',
],
'founded' => [
'type' => 'integer',
'description' => 'Founding year',
],
'population' => [
'type' => 'integer',
'description' => 'Current population',
],
],
'required' => ['name', 'founded', 'population'],
'additionalProperties' => false,
],
],
]],
toolChoice: [
'type' => 'function',
'function' => [
'name' => 'extract_data'
]
],
options: ['max_tokens' => 64],
mode: Mode::Tools,
)
->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);

?>
```
@@ -72,7 +72,7 @@ $instructor = (new Instructor)
->respond(
messages: $text,
responseModel: User::class,
);
);
echo "\nTEXT: $text\n";
assert($counter->input > 0);
assert($counter->output > 0);
5 changes: 5 additions & 0 deletions docs/mint.json
@@ -197,6 +197,11 @@
"cookbook/examples/extras/image_to_data_anthropic",
"cookbook/examples/extras/image_to_data_gemini",
"cookbook/examples/extras/llm",
"cookbook/examples/extras/llm_json",
"cookbook/examples/extras/llm_json_schema",
"cookbook/examples/extras/llm_md_json",
"cookbook/examples/extras/llm_tools",
"cookbook/examples/extras/llm_json_schema",
"cookbook/examples/extras/transcription_to_tasks",
"cookbook/examples/extras/translate_ui_fields",
"cookbook/examples/extras/web_to_objects"
2 changes: 1 addition & 1 deletion examples/A05_Extras/LLM/run.php
@@ -5,7 +5,7 @@

## Overview

LLM class offers access to LLM APIs and convenient methods to execute
`Inference` class offers access to LLM APIs and convenient methods to execute
model inference, incl. chat completions, tool calling or JSON output
generation.

58 changes: 58 additions & 0 deletions examples/A05_Extras/LLMJson/run.php
@@ -0,0 +1,58 @@
---
title: 'Working directly with LLMs and JSON - JSON mode'
docname: 'llm_json'
---

## Overview

When working with the `Inference` class, you can also generate JSON output
from the model inference. This is useful, for example, when you need to
process the response in a structured way or when you want to store the
elements of the response in a database.

The `Inference` class supports multiple inference modes, like `Tools`, `Json`,
`JsonSchema` or `MdJson`, which gives you the flexibility to choose the best
approach for your use case.

## Example

In this example we will use OpenAI JSON mode, which guarantees that the
response will be valid JSON.

It does not, however, guarantee compliance with a specific schema (for some
providers, including OpenAI). We can work around this by providing an example
of the expected JSON output in the prompt.

> NOTE: Some model providers allow you to specify a JSON schema for the model
to follow via the `schema` parameter of `response_format`. OpenAI does not
support this feature in JSON mode (only in JSON Schema mode).

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;

// regular API, allows to customize inference options
$data = (new Inference)
->withConnection('openai') // optional, default is set in /config/llm.php
->create(
messages: [['role' => 'user', 'content' => 'What is capital of France? \
Respond with JSON data containing name, population and year of founding. \
Example: {"name": "Berlin", "population": 3700000, "founded": 1237}']],
responseFormat: [
'type' => 'json_object',
],
options: ['max_tokens' => 64],
mode: Mode::Json,
)
->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);

?>
```