
Ollama server resolves to host.docker.internal regardless of endpoint set in .env file #1877

Open · 3 tasks done
lavinir opened this issue Oct 13, 2024 · 0 comments
lavinir commented Oct 13, 2024

Describe the bug
I'm trying to run Letta in Docker, connected to an Ollama service running on the same host. I'm using a .env file with the following variables:
LETTA_LLM_ENDPOINT=http://192.168.xx.xx:11434
LETTA_LLM_ENDPOINT_TYPE=ollama
LETTA_LLM_MODEL=llama3.2:3b-instruct-q8_0
LETTA_LLM_CONTEXT_WINDOW=8192
LETTA_EMBEDDING_ENDPOINT=http://192.168.xx.xx:11434
LETTA_EMBEDDING_ENDPOINT_TYPE=ollama
LETTA_EMBEDDING_MODEL=mxbai-embed-large
LETTA_EMBEDDING_DIM=512
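
For anyone reproducing this, an Ollama instance is expected to be reachable at that address. A quick sanity check against the standard Ollama API (IP redacted as above):

  curl http://192.168.xx.xx:11434/api/tags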

The server loads fine, and I configure an agent and persona via the web interface. Then, when attempting to start a chat with the agent, I receive the following error on the server:

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x72f4fd942ba0>: Failed to resolve 'host.docker.internal' ([Errno -2] Name or service not known)"))

For some reason it's trying to use host.docker.internal even though the URL is overridden in the .env file.
I also ran docker inspect on the running container and confirmed that the correct environment settings have been applied.
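
For reference, roughly the checks I used (the container and service names are placeholders; adjust to whatever your compose setup names them):

  docker inspect --format '{{json .Config.Env}}' <letta-container>
  docker compose exec <letta-service> env | grep LETTA_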

Please describe your setup

  • Installed with docker compose
  • Running on Linux (Ubuntu)

If you're not using OpenAI, please provide additional information on your local LLM setup:

Local LLM details

If you are trying to run Letta with local LLMs, please provide the following information:

  • Model: llama3.2:3b-instruct-q8_0
  • Backend: Ollama
  • OS: Ubuntu Linux VM (CPU only)