diff --git a/.changeset/config.json b/.changeset/config.json
index aa37f6e..feb2e4d 100644
--- a/.changeset/config.json
+++ b/.changeset/config.json
@@ -2,7 +2,7 @@
"$schema": "https://unpkg.com/@changesets/config@1.6.3/schema.json",
"changelog": [
"@changesets/changelog-github",
- { "repo": "propology/hopfield" }
+ { "repo": "EnjoinHQ/hopfield" }
],
"commit": false,
"access": "public",
diff --git a/.changeset/wise-clocks-greet.md b/.changeset/wise-clocks-greet.md
new file mode 100644
index 0000000..397f987
--- /dev/null
+++ b/.changeset/wise-clocks-greet.md
@@ -0,0 +1,5 @@
+---
+"hopfield": patch
+---
+
+**Feature:** Updated the model names for OpenAI to be more complete and up to date.
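For context, the file added above follows the Changesets format: YAML front matter naming the package and bump type, followed by a human-readable summary. A minimal sketch of writing one by hand (normally generated interactively by `npx changeset`; the filename here is made up):

```shell
# A changeset is YAML front matter plus a summary. The front matter maps
# each affected package to a semver bump type (patch/minor/major).
mkdir -p .changeset
cat > .changeset/example-patch.md <<'EOF'
---
"hopfield": patch
---

Example summary of the change.
EOF
```

When the PR merges, the Changesets bot aggregates these files into version bumps and changelog entries.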
diff --git a/.env.example b/.env.example
index abd38a3..ff036db 100644
--- a/.env.example
+++ b/.env.example
@@ -1,2 +1 @@
VITE_OPENAI_API_KEY=""
-VITE_REPLICATE_API_KEY=""
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 8780f1c..7ef9683 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -2,11 +2,11 @@
Thanks for your interest in contributing to Hopfield! Please take a moment to review this document **before submitting a pull request.**
-If you want to contribute, but aren't sure where to start, you can create a [new discussion](https://github.com/propology/hopfield/discussions).
+If you want to contribute, but aren't sure where to start, you can create a [new discussion](https://github.com/EnjoinHQ/hopfield/discussions).
> **Note**
> **Please ask first before starting work on any significant new features.**
>
-> It's never a fun experience to have your pull request declined after investing time and effort into a new feature. To avoid this from happening, we request that contributors create a [feature request](https://github.com/propology/hopfield/discussions/new?category=ideas) to first discuss any API changes or significant new ideas.
+> It's never a fun experience to have your pull request declined after investing time and effort into a new feature. To avoid this from happening, we request that contributors create a [feature request](https://github.com/EnjoinHQ/hopfield/discussions/new?category=ideas) to first discuss any API changes or significant new ideas.
@@ -38,13 +38,13 @@ This guide covers more advanced topics. Pick the topics based on your needs.
To start contributing to the project, clone it to your local machine using git:
```bash
-git clone https://github.com/propology/hopfield.git
+git clone https://github.com/EnjoinHQ/hopfield.git
```
Or the [GitHub CLI](https://cli.github.com):
```bash
-gh repo clone propology/hopfield
+gh repo clone EnjoinHQ/hopfield
```
@@ -154,7 +154,7 @@ The first time a PR with a changeset is merged after a release, a new PR will au
### Creating a snapshot release
-If a PR has changesets, you can create a [snapshot release](https://github.com/changesets/changesets/blob/main/docs/snapshot-releases.md) by [manually dispatching](https://github.com/propology/hopfield/actions/workflows/snapshot.yml) the Snapshot workflow. This publishes a tagged version to npm with the PR branch name and timestamp.
+If a PR has changesets, you can create a [snapshot release](https://github.com/changesets/changesets/blob/main/docs/snapshot-releases.md) by [manually dispatching](https://github.com/EnjoinHQ/hopfield/actions/workflows/snapshot.yml) the Snapshot workflow. This publishes a tagged version to npm with the PR branch name and timestamp.
↑ back to top
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 63067f0..5d515c4 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -58,7 +58,7 @@ body:
label: Validations
description: Before submitting this issue, please make sure you do the following.
options:
- - label: Checked there isn't [already an issue](https://github.com/propology/hopfield/issues) that exists for the bug you encountered.
+ - label: Checked there isn't [already an issue](https://github.com/EnjoinHQ/hopfield/issues) that exists for the bug you encountered.
required: true
- - label: Read the [Contributing Guide](https://github.com/propology/hopfield/blob/main/.github/CONTRIBUTING.md).
+ - label: Read the [Contributing Guide](https://github.com/EnjoinHQ/hopfield/blob/main/.github/CONTRIBUTING.md).
required: true
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
index 7a4bfcc..c7b56c4 100644
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -1,8 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Question
- url: https://github.com/propology/hopfield/discussions/new?category=q-a
+ url: https://github.com/EnjoinHQ/hopfield/discussions/new?category=q-a
about: Ask questions and discuss with other community members.
- name: Feature Request
- url: https://github.com/propology/hopfield/discussions/new?category=ideas
+ url: https://github.com/EnjoinHQ/hopfield/discussions/new?category=ideas
  about: Request features or brainstorm ideas for new functionality.
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index 7a83c82..59d8aa9 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -4,6 +4,6 @@ What changes are made in this PR? Is it a feature or a bug fix?
## Additional Information
-- [ ] I read the [contributing guide](https://github.com/propology/hopfield/blob/main/.github/CONTRIBUTING.md)
+- [ ] I read the [contributing guide](https://github.com/EnjoinHQ/hopfield/blob/main/.github/CONTRIBUTING.md)
- [ ] I added documentation related to the changes made.
- [ ] I added or updated tests related to the changes made.
diff --git a/.vscode/settings.json b/.vscode/settings.json
index b627d0a..3b5adb0 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -4,8 +4,8 @@
"typescript.tsdk": "node_modules/typescript/lib",
"typescript.enablePromptUseWorkspaceTsdk": true,
"editor.codeActionsOnSave": {
- "quickfix.biome": true,
- "source.organizeImports.biome": true
+ "quickfix.biome": "explicit",
+ "source.organizeImports.biome": "explicit"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
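The settings change above tracks a VS Code deprecation: boolean values for `editor.codeActionsOnSave` were replaced by string severities, where `"explicit"` runs the action only on explicit (user-initiated) saves rather than on auto-save. A minimal sketch of the resulting settings, assuming the Biome extension is installed:

```json
{
  "editor.codeActionsOnSave": {
    "quickfix.biome": "explicit",
    "source.organizeImports.biome": "explicit"
  }
}
```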
diff --git a/LICENSE b/LICENSE
index acdc39c..81540bf 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
MIT License
-Copyright (c) 2023-present Propology
+Copyright (c) 2023-present Enjoin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
diff --git a/README.md b/README.md
index 1fe2e6f..9aac83b 100644
--- a/README.md
+++ b/README.md
@@ -262,8 +262,7 @@ Oh, and liking Typescript is a nice-to-have.
## Community
-If you have questions or need help, reach out to the community at the [Hopfield GitHub Discussions](https://github.com/propology/hopfield/discussions)
-or join the [Propology Discord](https://discord.gg/2hag5fc6) and check out the `π-hopfield` channel.
+If you have questions or need help, reach out to the community in the [Hopfield GitHub Discussions](https://github.com/EnjoinHQ/hopfield/discussions).
diff --git a/examples/cloudflare-worker/src/index.test.ts b/examples/cloudflare-worker/src/index.test.ts
index 68ccb57..fda99e6 100644
--- a/examples/cloudflare-worker/src/index.test.ts
+++ b/examples/cloudflare-worker/src/index.test.ts
@@ -1,4 +1,4 @@
-import { UnstableDevWorker, unstable_dev } from 'wrangler';
+import { type UnstableDevWorker, unstable_dev } from 'wrangler';
import { describe, beforeAll, afterAll, it, expect } from 'vitest';
describe('Worker', () => {
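The edit above switches `UnstableDevWorker` to an inline type-only import. As an illustrative sketch (not part of the patch, and using `node:fs` rather than wrangler): type-only specifiers are erased from the emitted JavaScript, so a type imported this way can never cause a runtime load of the module's value.

```typescript
// The `type` modifier marks Dirent as type-only: it is erased at compile
// time, while readdirSync stays a normal runtime import from "node:fs".
import { type Dirent, readdirSync } from "node:fs";

const entries: Dirent[] = readdirSync(".", { withFileTypes: true });
console.log(Array.isArray(entries)); // prints true
```

This matters for bundlers and `verbatimModuleSyntax`-style settings, where a plain value import of a type would otherwise emit a spurious runtime dependency.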
diff --git a/examples/next-13/src/app/docs.ts b/examples/next-13/src/app/docs.ts
index f53dd57..1511925 100644
--- a/examples/next-13/src/app/docs.ts
+++ b/examples/next-13/src/app/docs.ts
@@ -1,202 +1,4 @@
export const docs = `---
-description: "A detailed guide on seamlessly fetching and streaming data directly into React components."
-title: "Next.js App Router with Hopfield"
----
-
-# Next.js App Router
-
-Hopfield empowers developers to seamlessly fetch and stream data directly into Next.js React Server Components.
-
-## Overview
-
-Hopfield streaming chat provides a \`readableStream()\` which can be used to build recursive React Server Components.
-
-The \`readableStream()\` from Hopfield's streaming chat provider returns a [ReadableStream](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) (available in Node 18+, or it can be polyfilled with a library like [web-streams-polyfill](https://www.npmjs.com/package/web-streams-polyfill)).
-
-::: info Non-streaming
-
-If you are not interested in using streaming, you can use the non-streaming chat provider easily with a simple RSC
-that awaits the full response from chat.get(). This is not shown below, but is a much simpler integration that does not
-include any custom code for streaming token by token.
-
-:::
-
-### Backpressure
-
-The readable stream handles backpressure with a pull-based approach. See our [tests](https://github.com/propology/hopfield/blob/main/src/utils.test.ts) for how Hopfield handles backpressure. For a more detailed explanation on "backpressure" and how it factors into streaming LLM responses, please see the
-[vercel/ai docs](https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation).
-
-## Usage
-
-Here's how to use Hopfield with a recursive React Server Component using Suspense:
-
-\`\`\`tsx
-import { Suspense } from "react";
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-
-// Set up the OpenAI client
-const openaiClient = new OpenAI({ apiKey: "OPENAI_API_KEY" });
-// Pass the OpenAI client into Hopfield
-const hopfield = hop.client(openai).provider(openaiClient);
-// Create a streaming chat provider
-const chat = hopfield.chat("gpt-3.5-turbo-16k-0613").streaming();
-
-export type ChatResponseProps = {
- prompt: string;
-};
-
-export async function ChatResponse({ prompt }: ChatResponseProps) {
- // construct messages with hop.inferMessageInput
-  const messages: hop.inferMessageInput<typeof chat>[] = [
- {
- role: "system",
- content: "You are a helpful AI assistant.",
- },
- {
- role: "user",
- content: prompt,
- },
- ];
-
- const response = await chat.get(
- { messages: messages },
- {
- onChunk: async (value) => {
-      console.log(\`Received chunk type: \${value.choices[0].__type}\`);
- // do something on the server with each individual chunk as it is
- // streamed in
- },
- onDone: async (chunks) => {
-      console.log(\`Total chunks received: \${chunks.length}\`);
- // do something on the server when the chat completion is done
- // this can be caching the response, storing in a database, etc.
- //
- // chunks is an array of all the streamed responses, so you
- // can access the raw content and combine how you'd like
- },
- // if you are using function calling, you can also add a onFunctionCall
- // here with zod-parsed arguments
- }
- );
-
- // pass the readableStream to the RSC
-  return <Tokens stream={response.readableStream()} />;
-}
-
-type Props = {
- /**
- * A ReadableStream produced by Hopfield.
- */
-  stream: ReadableStream<hop.inferResult<typeof chat>>;
-};
-
-/**
- * A React Server Component that recursively renders a stream of tokens.
- */
-async function Tokens(props: Props) {
- const { stream } = props;
- const reader = stream.getReader();
-
-  return (
-    <Suspense>
-      <RecursiveTokens reader={reader} />
-    </Suspense>
-  );
-}
-
-type RecursiveTokensProps = {
-  reader: ReadableStreamDefaultReader<hop.inferResult<typeof chat>>;
-};
-
-async function RecursiveTokens({ reader }: RecursiveTokensProps) {
- const { done, value } = await reader.read();
-
- if (done) {
- return null;
- }
-
-  return (
-    <>
-      {value.choices[0].__type === "content" ? (
-        value.choices[0].delta.content
-      ) : (
-        <></>
-      )}
-      <Suspense fallback={<LoadingDots />}>
-        <RecursiveTokens reader={reader} />
-      </Suspense>
-    </>
-  );
-}
-
-// This can be any loading indicator you want, which gets appended to the end
-// of the tokens while waiting for the next token to be streamed
-const LoadingDots = () => <span>...</span>;
-\`\`\`
-
-We create a recursive React Server Component which uses Suspense boundaries to await each token,
-and show a fallback loading indicator where the next token will be rendered.
-
-See our [Next 13 RSC example](https://next-13.hopfield.ai) for a real-world integration
-using Vercel, similar to this quick example.
-
-### Dive Deeper
-
-To deepen your understanding of how Streaming works, and how it can be further utilized within your application,
-refer to the [Streaming Chat](/chat/streaming) section.
-'
-
-'---
-description: "Comparisons between Hopfield's features and features from similar libraries."
-title: "Comparisons"
----
-
-# Comparisons
-
-No other library does what Hopfield does (inferring static LLM TypeScript types from Zod schemas), but there are some similarities with other libraries. This page compares Hopfield to other libraries.
-
-Comparisons strive to be as accurate and as unbiased as possible. If you use any of these libraries and feel the information could be improved, feel free to suggest changes.
-
-## vercel/ai
-
-[**ai**](https://github.com/vercel/ai) is a framework for AI-powered applications with React, Svelte, Vue, and Solid. They provide hooks to easily integrate
-with a streaming text response (StreamingTextResponse) and allow a callback for function calling, as well as simple, drop-in components for React and other
-frameworks.
-
-**Hopfield** provides a subset of these features, and focuses solely on the API interactions, and **not** on providing React components.
-
-Below is a comparison of the library features:
-
-| | **ai** | **Hopfield** |
-| ---------------------- | ----------------------------------------- | ---------------------------------------------- |
-| **React Components** | Easy, inflexible UI components & hooks | No UI components or hooks |
-| **Typed Functions** | Streaming function calls with loose types | Strict function call types with Zod validation |
-| **Framework Examples** | Multiple | Multiple |
-| **Chat Providers** | Multiple | OpenAI, with support for others coming |
-
-## Langchain.js
-
-[**Langchain.js**](https://github.com/hwchase17/langchainjs) is a framework for developing applications powered by language models
-with Javascript. Developers usually use Langchain to develop apps which connect to internal tools (like internal knowledge bases,
-LLM demos, and generally in trusted environments).
-
-**Hopfield** is a TypeScript library that provides a subset of Langchain's features,
-prioritizing inferring static types from LLM input, alongside runtime response validation and static typing.
-
-Below is a comparison of the library features:
-
-| | **Langchain.js** | **Hopfield** |
-| -------------------- | ------------------------------------------- | ------------------------------------------------------------------------ |
-| **Prompt Templates** | Opaque | Use string template types for type inference |
-| **Type Generation** | Loose types with some Typescript helpers | Static types with Zod validation |
-| **Function Calling** | Starter templates, with some Zod validation | Validation-driven, composable functions |
-| **Connectors/Tools** | Many, with various integrations | Only a select few, with examples (actively being developed) |
-| **Dependencies** | Many, with non-optional peer dependencies | Few, with strict bundle splitting to avoid unnecessary peer dependencies |
-'
-
-'---
description: "Quickly add Hopfield to your TypeScript project."
title: "Getting Started"
---
@@ -322,293 +124,51 @@ Now that you're all set up, you are ready to dive in to the docs further!
'
'---
-description: "Typescript-first LLM framework with static type inference, testability, and composability."
-head:
- - - meta
- - name: keywords
- content: ai, openai, zod, gpt, llm, ai-tools
-title: "Hopfield: Typescript-first LLM framework with static type inference, testability, and composability."
-titleTemplate: false
+description: "Comparisons between Hopfield's features and features from similar libraries."
+title: "Comparisons"
---
-
+No other library does what Hopfield does (inferring static LLM TypeScript types from Zod schemas), but there are some similarities with other libraries. This page compares Hopfield to other libraries.
-
-
-
-
+Comparisons strive to be as accurate and as unbiased as possible. If you use any of these libraries and feel the information could be improved, feel free to suggest changes.
-
+## vercel/ai
-Hopfield is a Typescript-first large language model framework with static type inference, testability, and composability.
-Easily validate LLM responses and inputs with strong types. Flexible abstractions
-with best practices baked in.
+[**ai**](https://github.com/vercel/ai) is a framework for AI-powered applications with React, Svelte, Vue, and Solid. They provide hooks to easily integrate
+with a streaming text response (StreamingTextResponse) and allow a callback for function calling, as well as simple, drop-in components for React and other
+frameworks.
-Add it to your project, along with any peer dependencies:
-
-::: code-group
-
-\`\`\`bash [bun]
-bun i hopfield
-\`\`\`
-
-\`\`\`bash [pnpm]
-pnpm add hopfield
-\`\`\`
-
-\`\`\`bash [npm]
-npm i hopfield
-\`\`\`
-
-:::
-
-### ready, set, hop
-
-See how easy it is to add composable, type-safe LLM features with Hopfield:
-
-::: code-group
-
-\`\`\`ts twoslash [main.ts]
-// @filename: openai.ts
-export const SupportCategoryEnum = z.enum([
- "ACCOUNT_ISSUES",
- "BILLING_AND_PAYMENTS",
- "TECHNICAL_SUPPORT",
- "FEATURE_REQUESTS",
- "BUG_REPORTS",
- "PRODUCT_INQUIRIES",
- "PASSWORD_RESET",
- "SECURITY_ISSUES",
- "SERVICE_OUTAGES",
- "SETUP_AND_INSTALLATION",
- "TROUBLESHOOTING",
- "USER_GUIDES_AND_MANUALS",
- "WARRANTY_AND_REPAIRS",
- "ORDER_TRACKING",
- "DELIVERY_ISSUES",
- "RETURN_AND_REFUND",
- "ACCOUNT_DELETION",
- "PRIVACY_CONCERNS",
- "COMPLIANCE_QUERY",
- "TRAINING_AND_CERTIFICATIONS",
- "PARTNER_SUPPORT",
- "DEVELOPER_TOOLS",
- "API_SUPPORT",
- "PERFORMANCE_ISSUES",
- "DATA_ISSUES",
- "UPGRADE_ISSUES",
- "MIGRATION_ASSISTANCE",
- "SYSTEM_COMPATIBILITY",
- "PAYMENT_GATEWAY_SUPPORT",
- "SYSTEM_MAINTENANCE",
- "RELEASE_NOTES",
- "OTHERS",
-]);
-
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-import z from "zod";
-
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const categoryDescription = hopfield
- .template()
- .enum("The category of the message.");
-
-const classifyMessage = hopfield.function({
- name: "classifyMessage",
- description: "Triage an incoming support message.",
- parameters: z.object({
- summary: z.string().describe("The summary of the message."),
- category: SupportCategoryEnum.describe(categoryDescription),
- }),
-});
-
-export const chat = hopfield.chat().functions([classifyMessage]);
-
-// @filename: main.ts
-import z from "zod";
-import { SupportCategoryEnum } from "./openai";
-const handleMessageWithCategory = async (
-  category: z.infer<typeof SupportCategoryEnum>,
- message: string
-) => {};
-
-// ---cut---
-import hop from "hopfield";
-import { chat } from "./openai";
-
-const incomingUserMessage = "How do I reset my password?";
-
-const messages: hop.inferMessageInput<typeof chat>[] = [
- {
- content: incomingUserMessage,
- role: "user",
- },
-];
-
-const parsed = await chat.get({
- messages,
-});
-
-if (parsed.choices[0].__type === "function_call") {
- // ^?
- const category = parsed.choices[0].message.function_call.arguments.category;
- await handleMessageWithCategory(category, incomingUserMessage);
- // ^?
-}
-\`\`\`
-
-\`\`\`ts twoslash [openai.ts]
-export const SupportCategoryEnum = z.enum([
- "ACCOUNT_ISSUES",
- "BILLING_AND_PAYMENTS",
- "TECHNICAL_SUPPORT",
- "FEATURE_REQUESTS",
- "BUG_REPORTS",
- "PRODUCT_INQUIRIES",
- "PASSWORD_RESET",
- "SECURITY_ISSUES",
- "SERVICE_OUTAGES",
- "SETUP_AND_INSTALLATION",
- "TROUBLESHOOTING",
- "USER_GUIDES_AND_MANUALS",
- "WARRANTY_AND_REPAIRS",
- "ORDER_TRACKING",
- "DELIVERY_ISSUES",
- "RETURN_AND_REFUND",
- "ACCOUNT_DELETION",
- "PRIVACY_CONCERNS",
- "COMPLIANCE_QUERY",
- "TRAINING_AND_CERTIFICATIONS",
- "PARTNER_SUPPORT",
- "DEVELOPER_TOOLS",
- "API_SUPPORT",
- "PERFORMANCE_ISSUES",
- "DATA_ISSUES",
- "UPGRADE_ISSUES",
- "MIGRATION_ASSISTANCE",
- "SYSTEM_COMPATIBILITY",
- "PAYMENT_GATEWAY_SUPPORT",
- "SYSTEM_MAINTENANCE",
- "RELEASE_NOTES",
- "OTHERS",
-]);
-
-// ---cut---
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-import z from "zod";
-
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const categoryDescription = hopfield
- .template()
- .enum("The category of the message.");
-
-const classifyMessage = hopfield.function({
- name: "classifyMessage",
- description: "Triage an incoming support message.",
- parameters: z.object({
- summary: z.string().describe("The summary of the message."),
- category: SupportCategoryEnum.describe(categoryDescription),
- // ^?
- }),
-});
-
-export const chat = hopfield.chat().functions([classifyMessage]);
-\`\`\`
-
-:::
-
-## TL;DR
-
-Hopfield might be a good fit for your project if:
-
-- You build with Typescript/Javascript, and have your database schemas in these languages (e.g. [Prisma](https://www.prisma.io/) and/or [Next.js](https://nextjs.org/)).
-- You don't need a heavyweight LLM orchestration framework that ships with a ton of dependencies you'll never use.
-- You're using OpenAI function calling and/or custom tools, and want Typescript-native features for them (e.g. validations w/ [Zod](https://github.com/colinhacks/zod)).
-- You're building complex LLM interactions which use memory & [RAG](https://www.promptingguide.ai/techniques/rag), evaluation, and orchestration (_coming soon™_).
-- You want best-practice, extensible templates, which use [string literal types](https://www.typescriptlang.org/docs/handbook/2/template-literal-types.html)
-  under the hood for transparency.
-
-Oh, and liking Typescript is a nice-to-have.
-
-## Guiding principles
-
-- We are Typescript-first, and only support TS (or JS) - with services like [Replicate](https://replicate.com/) or [OpenAI](https://platform.openai.com/docs/introduction), why do you need Python?
-- We provide a simple, ejectable interface with common LLM use-cases. This is aligned 1-1 with LLM provider abstractions, like OpenAI's.
-- We explicitly _don't_ provide a ton of custom tools (please don't ask for too many) outside of the building blocks and simple examples provided. Other frameworks provide these, but when you use them, you soon realize the tool you want is very use-case specific.
-- We (will) provide evaluation frameworks which let you simulate user scenarios and backend interactions with the LLM, including multi-turn conversations and function calling.
-- We support Node.js, Vercel Edge Functions, Cloudflare Workers, and more (oh and even web, if you like giving away API keys).
-
-## Community
-
-If you have questions or need help, reach out to the community at the [Hopfield GitHub Discussions](https://github.com/propology/hopfield/discussions)
-or join the [Propology Discord](https://discord.gg/2hag5fc6) and check out the π-hopfield channel.
+**Hopfield** provides a subset of these features, and focuses solely on the API interactions, and **not** on providing React components.
-
-
-
+Below is a comparison of the library features:
-## Learn more
+| | **ai** | **Hopfield** |
+| ---------------------- | ----------------------------------------- | ---------------------------------------------- |
+| **React Components** | Easy, inflexible UI components & hooks | No UI components or hooks |
+| **Typed Functions** | Streaming function calls with loose types | Strict function call types with Zod validation |
+| **Framework Examples** | Multiple | Multiple |
+| **Chat Providers** | Multiple | OpenAI, with support for others coming |
-Read the [Getting Started](/guide/getting-started) guide to learn more about how to use Hopfield.
+## Langchain.js
-### Inspiration
+[**Langchain.js**](https://github.com/hwchase17/langchainjs) is a framework for developing applications powered by language models
+with Javascript. Developers usually use Langchain to develop apps which connect to internal tools (like internal knowledge bases,
+LLM demos, and generally in trusted environments).
-Shoutout to these projects which inspired us:
+**Hopfield** is a TypeScript library that provides a subset of Langchain's features,
+prioritizing inferring static types from LLM input, alongside runtime response validation and static typing.
-- [Zod](https://github.com/colinhacks/zod)
-- [zod-to-json-schema](https://github.com/StefanTerdell/zod-to-json-schema)
-- [Autochain](https://github.com/Forethought-Technologies/AutoChain)
-- [Langchain.js](https://github.com/hwchase17/langchainjs)
-- [simpleaichat](https://github.com/minimaxir/simpleaichat)
-- [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT)
-- [abitype](https://github.com/wagmi-dev/abitype)
+Below is a comparison of the library features:
-If you like Hopfield, go star them on Github too.
+| | **Langchain.js** | **Hopfield** |
+| -------------------- | ------------------------------------------- | ------------------------------------------------------------------------ |
+| **Prompt Templates** | Opaque | Use string template types for type inference |
+| **Type Generation** | Loose types with some Typescript helpers | Static types with Zod validation |
+| **Function Calling** | Starter templates, with some Zod validation | Validation-driven, composable functions |
+| **Connectors/Tools** | Many, with various integrations | Only a select few, with examples (actively being developed) |
+| **Dependencies** | Many, with non-optional peer dependencies | Few, with strict bundle splitting to avoid unnecessary peer dependencies |
'
'---
@@ -624,8 +184,8 @@ guarantees with Zod, and composability across providers.
::: info API Providers
We currently only support OpenAI, but are
-working on adding further providers. Reach out on [Discord](https://discord.gg/2hag5fc6) or
-[Github Discussions](https://github.com/propology/hopfield/discussions) if you have any suggestions!
+working on adding further providers. Reach out on
+[Github Discussions](https://github.com/EnjoinHQ/hopfield/discussions) if you have any suggestions!
:::
@@ -787,114 +347,26 @@ const thirdEmbeddingLength = response.data[2].embedding.length;
'
'---
-description: "Hopfield makes streaming with LLM function calling seamless."
-title: "Chat - Functions with Streaming"
+description: "An overview of working with chat models in Hopfield."
+title: "Overview of Chat Models"
---
-# Functions with Streaming
+# Chat
-Hopfield makes it easy to use streaming with function calling.
-You define validation-driven functions which get passed to the LLM.
+Hopfield also provides simple APIs for interacting with chat models. It has different API providers with type
+guarantees with Zod.
+
+::: info API Providers
+
+We currently only support OpenAI, but are
+working on adding further providers. Reach out on
+[Github Discussions](https://github.com/EnjoinHQ/hopfield/discussions) if you have any suggestions!
+
+:::
## Usage
-Use streaming function calling like:
-
-\`\`\`ts twoslash
-const takeAction = async (
- name: string,
- args: {
- location: string;
- unit: "celsius" | "fahrenheit";
- }
-) => {};
-// ---cut---
-import z from "zod";
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const weatherFunction = hopfield.function({
- name: "getCurrentWeather",
- description: "Get the current weather in a given location",
- parameters: z.object({
- location: z.string().describe("The city and state, e.g. San Francisco, CA"),
- unit: z
- .enum(["celsius", "fahrenheit"])
- .describe(hopfield.template().enum("The unit for the temperature.")),
- }),
-});
-
-const chat = hopfield.chat().streaming().functions([weatherFunction]);
-
-const messages: hop.inferMessageInput<typeof chat>[] = [
- {
- role: "user",
- content: "What's the weather in San Jose?",
- },
-];
-
-const response = await chat.get(
- {
- messages,
- },
- {
- onChunk(chunk) {
-      console.log(\`Received chunk type: \${chunk.choices[0].__type}\`);
- // do something on the server with each individual chunk as it is
- // streamed in
- },
- onDone(chunks) {
-      console.log(\`Total chunks received: \${chunks.length}\`);
- // do something on the server when the chat completion is done
- // this can be caching the response, storing in a database, etc.
- //
- // chunks is an array of all the streamed responses, so you
- // can access the raw content and combine how you'd like
- },
- async onFunctionCall(fn) {
- // do something based on the function call result - this
- // is parsed by your function definition with zod, and
- // the arguments are coerced into the object shape you expect
- await takeAction(fn.name, fn.arguments);
- // ^?
- },
- }
-);
-\`\`\`
-
-::: info Feedback
-
-To influence these features, reach out on [Discord](https://discord.gg/2hag5fc6) or
-[Github Discussions](https://github.com/propology/hopfield/discussions).
-We want your feedback!
-
-:::
-'
-
-'---
-description: "An overview of working with chat models in Hopfield."
-title: "Overview of Chat Models"
----
-
-# Chat
-
-Hopfield also provides simple APIs for interacting with chat models. It has different API providers with type
-guarantees with Zod.
-
-::: info API Providers
-
-We currently only support OpenAI, but are
-working on adding further providers. Reach out on [Discord](https://discord.gg/2hag5fc6) or
-[Github Discussions](https://github.com/propology/hopfield/discussions) if you have any suggestions!
-
-:::
-
-## Usage
-
-Check out how we type responses:
+Check out how we type responses:
 \`\`\`ts twoslash
import hop from "hopfield";
@@ -985,6 +457,94 @@ in influencing this!
:::
'
+'---
+description: "Hopfield makes streaming with LLM function calling seamless."
+title: "Chat - Functions with Streaming"
+---
+
+# Functions with Streaming
+
+Hopfield makes it easy to use streaming with function calling.
+You define validation-driven functions which get passed to the LLM.
+
+## Usage
+
+Use streaming function calling like:
+
+\`\`\`ts twoslash
+const takeAction = async (
+ name: string,
+ args: {
+ location: string;
+ unit: "celsius" | "fahrenheit";
+ }
+) => {};
+// ---cut---
+import z from "zod";
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const weatherFunction = hopfield.function({
+ name: "getCurrentWeather",
+ description: "Get the current weather in a given location",
+ parameters: z.object({
+ location: z.string().describe("The city and state, e.g. San Francisco, CA"),
+ unit: z
+ .enum(["celsius", "fahrenheit"])
+ .describe(hopfield.template().enum("The unit for the temperature.")),
+ }),
+});
+
+const chat = hopfield.chat().streaming().functions([weatherFunction]);
+
+const messages: hop.inferMessageInput<typeof chat>[] = [
+ {
+ role: "user",
+ content: "What's the weather in San Jose?",
+ },
+];
+
+const response = await chat.get(
+ {
+ messages,
+ },
+ {
+ onChunk(chunk) {
+      console.log(\`Received chunk type: \${chunk.choices[0].__type}\`);
+ // do something on the server with each individual chunk as it is
+ // streamed in
+ },
+ onDone(chunks) {
+      console.log(\`Total chunks received: \${chunks.length}\`);
+ // do something on the server when the chat completion is done
+ // this can be caching the response, storing in a database, etc.
+ //
+ // chunks is an array of all the streamed responses, so you
+ // can access the raw content and combine how you'd like
+ },
+ async onFunctionCall(fn) {
+ // do something based on the function call result - this
+ // is parsed by your function definition with zod, and
+ // the arguments are coerced into the object shape you expect
+ await takeAction(fn.name, fn.arguments);
+ // ^?
+ },
+ }
+);
+\`\`\`
+
+::: info Feedback
+
+To influence these features, reach out on
+[Github Discussions](https://github.com/EnjoinHQ/hopfield/discussions).
+We want your feedback!
+
+:::
+'
+
'---
description: "Deep dive into how to get streaming chat responses with Hopfield."
title: "Chat - Streaming"
@@ -1061,14 +621,16 @@ See how to use streaming results combined with type-driven prompt templates in t
'
'---
-description: "Deep dive into how to get non-streaming chat responses with Hopfield."
-title: "Chat - Non-streaming"
+description: "Hopfield makes LLM function calling seamless."
+title: "Chat - Functions"
---
-# Non-streaming chat
+# Functions
-Hopfield provides a simple way to interact with chat models. You can use different
-API providers with type guarantees with Zod.
+Hopfield lets you define validation-driven functions which can be passed to the LLM.
+Function definitions are transformed into JSON schema with
+[zod-to-json-schema](https://github.com/StefanTerdell/zod-to-json-schema), so
+the LLM can use them as tools.
## Usage
@@ -1078,179 +640,65 @@ ts twoslash
import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";
+import z from "zod";
const hopfield = hop.client(openai).provider(new OpenAI());
-const chat = hopfield.chat();
+const weatherFunction = hopfield.function({
+ name: "getCurrentWeather",
+ description: "Get the current weather in a given location",
+ parameters: z.object({
+ location: z.string().describe("The city and state, e.g. San Francisco, CA"),
+ unit: z
+ .enum(["celsius", "fahrenheit"])
+ .describe(hopfield.template().enum("The unit for the temperature.")),
+ }),
+});
+
+const chat = hopfield.chat().functions([weatherFunction]);
const messages: hop.inferMessageInput<typeof chat>[] = [
{
role: "user",
- content: "How do you count to ten?",
+ content: "What's the weather in Phoenix, AZ?",
},
];
const response = await chat.get({
messages,
+ temperature: 0,
});
-const responseType = response.choices[0].__type;
-// ^?
-if (responseType === "stop") {
- const message = response.choices[0].message;
- // ^?
-}
-
-
-## Parameters
-
-### Model Name
-
-The model name to use for the embedding.
+const choice = response.choices[0];
-ts
-const hopfield = hop.client(openai).provider(new OpenAI());
+if (choice.__type === "function_call") {
+ // ^?
+ const functionParams = choice.message.function_call;
+ // ^?
+}
-const chat = hopfield.chat("gpt-4-0613"); // [!code focus]
+The input function definition will be validated to make sure that:
-#### OpenAI
+1. Descriptions are provided for every argument.
+2. No error-prone types are used in parameters (for OpenAI, this includes ZodTuple, ZodBigInt, and ZodAny).
+3. If a type in the JSON schema performs better with a templated description (like enum), it is checked against the template.
-The default model name is shown below. To override this, you must use
-a model which is enabled on your OpenAI account.
+All of these checks are entirely customizable and can be overridden/disabled.
-ts twoslash
-import type { DefaultOpenAIChatModelName } from "hopfield/openai";
-// ^?
+## Parameters
+### Function Definition
-All possible model names are shown below (reach out if we are missing one!)
+The function takes a name, a description, and a Zod schema for the parameters
+which can be passed into it. All three fields are required, and are used to
+construct the JSON schema definition for the function, which is passed to the LLM.
ts twoslash
-import type { OpenAIChatModelName } from "hopfield/openai";
-// ^?
-
-
----
-
-### Response Count
-
-The number of chat responses to be returned (this is usually referred to as n).
-For all providers, this defaults to 1.
-This is capped at 20.
-
-ts
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const chat = hopfield.chat("gpt-4-0613", 10); // [!code focus]
-
-
-The response can then be safely used:
-
-ts twoslash
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const chat = hopfield.chat("gpt-4-0613", 10);
-// ---cut---
-const messages: hop.inferMessageInput[] = [
- {
- role: "user",
- content: "What's the best way to get a bunch of chat responses?",
- },
-];
-
-const response = await chat.get({
- messages,
-});
-
-const chatCount = response.choices.length;
-// ^?
-
-'
-
-'---
-description: "Hopfield makes LLM function calling seamless."
-title: "Chat - Functions"
----
-
-# Functions
-
-Hopfield lets you define validation-driven functions which can be passed to the LLM.
-This lets you clearly build functions, which get transformed to JSON schema with
-[zod-to-json-schema](https://github.com/StefanTerdell/zod-to-json-schema), so
-the LLM can use these as tools.
-
-## Usage
-
-Use chat models from OpenAI:
-
-ts twoslash
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-import z from "zod";
-
-const hopfield = hop.client(openai).provider(new OpenAI());
-
-const weatherFunction = hopfield.function({
- name: "getCurrentWeather",
- description: "Get the current weather in a given location",
- parameters: z.object({
- location: z.string().describe("The city and state, e.g. San Francisco, CA"),
- unit: z
- .enum(["celsius", "fahrenheit"])
- .describe(hopfield.template().enum("The unit for the temperature.")),
- }),
-});
-
-const chat = hopfield.chat().functions([weatherFunction]);
-
-const messages: hop.inferMessageInput[] = [
- {
- role: "user",
- content: "What's the weather in Phoenix, AZ?",
- },
-];
-
-const response = await chat.get({
- messages,
- temperature: 0,
-});
-
-const choice = response.choices[0];
-
-if (choice.__type === "function_call") {
- // ^?
- const functionParams = choice.message.function_call;
- // ^?
-}
-
-
-The input function definition will be validated to make sure that:
-
-1. Descriptions are provided for every argument.
-2. No error-prone types are used in parameters (for OpenAI, this includes ZodTuple, ZodBigInt, and ZodAny).
-3. If a type in the JSON schema performs better with a templated description (like enum), it is checked against the template.
-
-All of these checks are entirely customizable and can be overridden/disabled.
-
-## Parameters
-
-### Function Definition
-
-The function takes a name, description, and a Zod schema
-for the parameters which can be passed into it. These are all required fields to define a function,
-and are used to construct the JSON schema definition for the function, to be passed to the LLM.
-
-ts twoslash
-import hop from "hopfield";
-import openai from "hopfield/openai";
-import OpenAI from "openai";
-import z from "zod";
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+import z from "zod";
export const SupportCategoryEnum = z.enum([
"ACCOUNT_ISSUES",
@@ -1330,28 +778,557 @@ if (response.choices[0].__type === "function_call") {
// ^?
}
+'
+
+'---
+description: "Deep dive into how to get non-streaming chat responses with Hopfield."
+title: "Chat - Non-streaming"
+---
+
+# Non-streaming chat
+
+Hopfield provides a simple way to interact with chat models. You can use different
+API providers with type guarantees with Zod.
+
+## Usage
+
+Use chat models from OpenAI:
+
+ts twoslash
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const chat = hopfield.chat();
+
+const messages: hop.inferMessageInput<typeof chat>[] = [
+ {
+ role: "user",
+ content: "How do you count to ten?",
+ },
+];
+
+const response = await chat.get({
+ messages,
+});
+
+const responseType = response.choices[0].__type;
+// ^?
+if (responseType === "stop") {
+ const message = response.choices[0].message;
+ // ^?
+}
+
+
+## Parameters
-### Options
+### Model Name
-The function also allows an optional options parameter, which lets you override the runtime checks for the
-schema. This includes the checks for requiring descriptions on Zod schema parameters, as well as overriding the
-list of "disabled types", which are Zod types which typically produce unreliable results from an LLM.
+The model name to use for the chat completion.
ts
-type HopfieldFunctionOptions = {
- /**
- * Allows descriptions to not be checked on the function parameters. This defaults to true.
- */
- requireDescriptions?: boolean;
- /**
- * Allows you override or disable "unstable" types, which are types that do not typically
- * produce good results with a given model. These are defined on a per-model basis.
- *
- * Set to false to allow all "unstable" types.
- */
- disabledTypes?: ZodFirstPartyTypeKind[] | false;
-};
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const chat = hopfield.chat("gpt-4-0613"); // [!code focus]
+
+
+#### OpenAI
+
+The default model name is shown below. To override this, you must use
+a model which is enabled on your OpenAI account.
+
+ts twoslash
+import type { DefaultOpenAIChatModelName } from "hopfield/openai";
+// ^?
+
+
+All possible model names are shown below (reach out if we are missing one!).
+
+ts twoslash
+import type { OpenAIChatModelName } from "hopfield/openai";
+// ^?
+
+
+---
+
+### Response Count
+
+The number of chat responses to be returned (this is usually referred to as n).
+For all providers, this defaults to 1.
+This is capped at 20.
+
+ts
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const chat = hopfield.chat("gpt-4-0613", 10); // [!code focus]
+
+
+The response can then be safely used:
+
+ts twoslash
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const chat = hopfield.chat("gpt-4-0613", 10);
+// ---cut---
+const messages: hop.inferMessageInput<typeof chat>[] = [
+ {
+ role: "user",
+ content: "What's the best way to get a bunch of chat responses?",
+ },
+];
+
+const response = await chat.get({
+ messages,
+});
+
+const chatCount = response.choices.length;
+// ^?
+
+'
+
+'---
+description: "Typescript-first LLM framework with static type inference, testability, and composability."
+head:
+ - - meta
+ - name: keywords
+ content: ai, openai, zod, gpt, llm, ai-tools
+title: "Hopfield: Typescript-first LLM framework with static type inference, testability, and composability."
+titleTemplate: false
+---
+
+
+
+
+
+Hopfield is a Typescript-first large language model framework with static type inference, testability, and composability.
+Easily validate LLM responses and inputs with strong types. Flexible abstractions
+with best practices baked in.
+
+Add it to your project, along with any peer dependencies:
+
+::: code-group
+
+bash [bun]
+bun i hopfield
+
+
+bash [pnpm]
+pnpm add hopfield
+
+
+bash [npm]
+npm i hopfield
+
+
+:::
+
+### ready, set, hop
+
+See how easy it is to add composable, type-safe LLM features with Hopfield:
+
+::: code-group
+
+ts twoslash [main.ts]
+// @filename: openai.ts
+export const SupportCategoryEnum = z.enum([
+ "ACCOUNT_ISSUES",
+ "BILLING_AND_PAYMENTS",
+ "TECHNICAL_SUPPORT",
+ "FEATURE_REQUESTS",
+ "BUG_REPORTS",
+ "PRODUCT_INQUIRIES",
+ "PASSWORD_RESET",
+ "SECURITY_ISSUES",
+ "SERVICE_OUTAGES",
+ "SETUP_AND_INSTALLATION",
+ "TROUBLESHOOTING",
+ "USER_GUIDES_AND_MANUALS",
+ "WARRANTY_AND_REPAIRS",
+ "ORDER_TRACKING",
+ "DELIVERY_ISSUES",
+ "RETURN_AND_REFUND",
+ "ACCOUNT_DELETION",
+ "PRIVACY_CONCERNS",
+ "COMPLIANCE_QUERY",
+ "TRAINING_AND_CERTIFICATIONS",
+ "PARTNER_SUPPORT",
+ "DEVELOPER_TOOLS",
+ "API_SUPPORT",
+ "PERFORMANCE_ISSUES",
+ "DATA_ISSUES",
+ "UPGRADE_ISSUES",
+ "MIGRATION_ASSISTANCE",
+ "SYSTEM_COMPATIBILITY",
+ "PAYMENT_GATEWAY_SUPPORT",
+ "SYSTEM_MAINTENANCE",
+ "RELEASE_NOTES",
+ "OTHERS",
+]);
+
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+import z from "zod";
+
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const categoryDescription = hopfield
+ .template()
+ .enum("The category of the message.");
+
+const classifyMessage = hopfield.function({
+ name: "classifyMessage",
+ description: "Triage an incoming support message.",
+ parameters: z.object({
+ summary: z.string().describe("The summary of the message."),
+ category: SupportCategoryEnum.describe(categoryDescription),
+ }),
+});
+
+export const chat = hopfield.chat().functions([classifyMessage]);
+
+// @filename: main.ts
+import z from "zod";
+import { SupportCategoryEnum } from "./openai";
+const handleMessageWithCategory = async (
+  category: z.infer<typeof SupportCategoryEnum>,
+ message: string
+) => {};
+
+// ---cut---
+import hop from "hopfield";
+import { chat } from "./openai";
+
+const incomingUserMessage = "How do I reset my password?";
+
+const messages: hop.inferMessageInput<typeof chat>[] = [
+ {
+ content: incomingUserMessage,
+ role: "user",
+ },
+];
+
+const parsed = await chat.get({
+ messages,
+});
+
+if (parsed.choices[0].__type === "function_call") {
+ // ^?
+ const category = parsed.choices[0].message.function_call.arguments.category;
+ await handleMessageWithCategory(category, incomingUserMessage);
+ // ^?
+}
+
+
+ts twoslash [openai.ts]
+export const SupportCategoryEnum = z.enum([
+ "ACCOUNT_ISSUES",
+ "BILLING_AND_PAYMENTS",
+ "TECHNICAL_SUPPORT",
+ "FEATURE_REQUESTS",
+ "BUG_REPORTS",
+ "PRODUCT_INQUIRIES",
+ "PASSWORD_RESET",
+ "SECURITY_ISSUES",
+ "SERVICE_OUTAGES",
+ "SETUP_AND_INSTALLATION",
+ "TROUBLESHOOTING",
+ "USER_GUIDES_AND_MANUALS",
+ "WARRANTY_AND_REPAIRS",
+ "ORDER_TRACKING",
+ "DELIVERY_ISSUES",
+ "RETURN_AND_REFUND",
+ "ACCOUNT_DELETION",
+ "PRIVACY_CONCERNS",
+ "COMPLIANCE_QUERY",
+ "TRAINING_AND_CERTIFICATIONS",
+ "PARTNER_SUPPORT",
+ "DEVELOPER_TOOLS",
+ "API_SUPPORT",
+ "PERFORMANCE_ISSUES",
+ "DATA_ISSUES",
+ "UPGRADE_ISSUES",
+ "MIGRATION_ASSISTANCE",
+ "SYSTEM_COMPATIBILITY",
+ "PAYMENT_GATEWAY_SUPPORT",
+ "SYSTEM_MAINTENANCE",
+ "RELEASE_NOTES",
+ "OTHERS",
+]);
+
+// ---cut---
+import hop from "hopfield";
+import openai from "hopfield/openai";
+import OpenAI from "openai";
+import z from "zod";
+
+const hopfield = hop.client(openai).provider(new OpenAI());
+
+const categoryDescription = hopfield
+ .template()
+ .enum("The category of the message.");
+
+const classifyMessage = hopfield.function({
+ name: "classifyMessage",
+ description: "Triage an incoming support message.",
+ parameters: z.object({
+ summary: z.string().describe("The summary of the message."),
+ category: SupportCategoryEnum.describe(categoryDescription),
+ // ^?
+ }),
+});
+
+export const chat = hopfield.chat().functions([classifyMessage]);
+
+
+:::
+
+## TL;DR
+
+Hopfield might be a good fit for your project if:
+
+- ποΈ You build with Typescript/Javascript, and have your database schemas in these languages (e.g. [Prisma](https://www.prisma.io/) and/or [Next.js](https://nextjs.org/)).
+- πͺ¨ You don't need a heavyweight LLM orchestration framework that ships with a ton of dependencies you'll never use.
+- π€ You're using OpenAI function calling and/or custom tools, and want Typescript-native features for them (e.g. validations w/ [Zod](https://github.com/colinhacks/zod)).
+- π¬ You're building complex LLM interactions which use memory & [RAG](https://www.promptingguide.ai/techniques/rag), evaluation, and orchestration (_coming soonβ’_).
+- π You want best-practice, extensible templates, which use [string literal types](https://www.typescriptlang.org/docs/handbook/2/template-literal-types.html)
+ under the hood for transparency.
+
+Oh, and liking Typescript is a nice-to-have.
+
+## Guiding principles
+
+- π We are Typescript-first, and only support TS (or JS) - with services like [Replicate](https://replicate.com/) or [OpenAI](https://platform.openai.com/docs/introduction), why do you need Python?
+- π€ We provide a simple, ejectable interface with common LLM use-cases. This is aligned 1-1 with LLM provider abstractions, like OpenAI's.
+- πͺ’ We explicitly _don't_ provide a ton of custom tools (please don't ask for too many π ) outside of the building blocks and simple examples provided. Other frameworks provide these, but when you use them, you soon realize the tool you want is very use-case specific.
+- π§ͺ We (will) provide evaluation frameworks which let you simulate user scenarios and backend interactions with the LLM, including multi-turn conversations and function calling.
+- πΆ We support Node.js, Vercel Edge Functions, Cloudflare Workers, and more (oh and even web, if you like giving away API keys).
+
+## Community
+
+If you have questions or need help, reach out to the community in the [Hopfield GitHub Discussions](https://github.com/EnjoinHQ/hopfield/discussions).
+
+