Support the new "Structured Outputs" #961

Open

andehr opened this issue Aug 7, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@andehr
Contributor

andehr commented Aug 7, 2024

First check

  • I added a descriptive title to this issue.
  • I used the GitHub search to look for a similar issue and didn't find it.
  • I searched the Marvin documentation for this feature.

Describe the current behavior

Not yet released, but as of the merge of PR #957, we can specify the response format of messages returned from OpenAI as a JSON object.

But as of August 6th, OpenAI has extended the possible values of `response_format`: you can now specify any Pydantic base model, and OpenAI guarantees that responses conform to its schema.

See the OpenAI guide on Structured Outputs.
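For reference, a minimal sketch of the OpenAI-side API (roughly following the guide's example; assumes a recent `openai` Python SDK with the `client.beta.chat.completions.parse` helper, and the model/schema names are illustrative):

```python
from openai import OpenAI
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


client = OpenAI()

# Pass a Pydantic model directly as the response format; the SDK converts it
# to a JSON schema and parses the reply back into the model.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)

event = completion.choices[0].message.parsed  # a CalendarEvent instance
```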

Describe the proposed behavior

Therefore, it would be very useful to extend Marvin's support so that we could do:

```python
Assistant(instructions=some_instructions, response_format=SomePydanticModel)
```
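For illustration, a rough sketch of what that could look like end to end. The `response_format` keyword on Marvin's `Assistant` is the proposed addition, not an existing parameter, and `TicketTriage` is a made-up schema:

```python
from pydantic import BaseModel

from marvin.beta.assistants import Assistant


class TicketTriage(BaseModel):
    summary: str
    severity: str


# Proposed: pass a Pydantic model as `response_format`, so every reply from
# the assistant is guaranteed to conform to TicketTriage's schema.
assistant = Assistant(
    instructions="Triage incoming support tickets.",
    model="gpt-4o-2024-08-06",       # Structured Outputs requires this model or later
    response_format=TicketTriage,    # the new keyword proposed in this issue
)
```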

Example Use

No response

Additional context

No response

@andehr andehr added the enhancement New feature or request label Aug 7, 2024
@zzstoatzz
Collaborator

zzstoatzz commented Aug 10, 2024

hi @andehr, I agree we could leverage this pretty nicely, not only in assistants but pretty much everywhere.

do you have any interest in making a contribution to this effect? I'd be happy to work with you on it.

@andehr
Contributor Author

andehr commented Aug 11, 2024

Great! I'm interested, but I'm on holiday for August. If you haven't made a start before I'm back, I'll have a go at a work-in-progress PR as soon as I can!

The main caveat I noticed after posting the above is that the model must be gpt-4o-2024-08-06 or later to work with Structured Outputs, so it might be worth bumping the default model version again.
