How can i connect privateGPT to my local llama API? #1983

Answered by rajkaran27
rajkaran27 asked this question in Q&A

Managed to solve this: open settings.py under private_gpt/settings, scroll down to line 223, and change the API URL.

```python
class OllamaSettings(BaseModel):
    api_base: str = Field(
        "ollama_url",
        description="Base URL of Ollama API. Example: 'http://localhost:11434'.",
    )
    embedding_api_base: str = Field(
        "ollama_url",
        description="Base URL of Ollama embedding API. Example: 'http://localhost:11434'.",
    )
    llm_model: str = Field(
        None,
        description="Model to use. Example: 'llama2-uncensored'.",
    )
    embedding_model: str = Field(
        None,
        description="Model to use. Example: 'nomic-embed-text'.",
    )
    keep_alive: str = Field(
        "5m",
        description="Time the model will stay loaded in memory after a request. examples: 5…
```
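For reference, the same override can usually be made without editing source: privateGPT reads these fields from its YAML settings profiles, so a profile along the following lines should point both the LLM and the embedding calls at a local Ollama instance. This is a minimal sketch, assuming privateGPT's standard `settings-ollama.yaml` profile mechanism; the model names and URLs are placeholders you would replace with your own.

```yaml
# Hypothetical override profile (e.g. settings-ollama.yaml), assuming
# privateGPT's YAML settings mechanism; keys mirror the OllamaSettings
# fields shown above.
ollama:
  api_base: http://localhost:11434            # your local Ollama API
  embedding_api_base: http://localhost:11434  # embedding endpoint
  llm_model: llama2-uncensored                # placeholder model name
  embedding_model: nomic-embed-text           # placeholder model name
  keep_alive: 5m
```

If your checkout supports profiles, launching with the matching profile selected (e.g. `PGPT_PROFILES=ollama`) lets these values replace the defaults in `OllamaSettings`, which keeps local URL changes out of version-controlled code.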
