Setup Trouble - LLocalSearch appears to have some issue fetching Ollama models #126

@kuthalam

Description

Hi all! Hopefully this is the right template to use. As the title says, I am having trouble getting LLocalSearch to load models downloaded through Ollama on a 2023 MacBook Pro (Apple M2 Pro chip). On the Settings screen, the dropdown under "The agent chain is using the model" is blank and shows no options. I expected it to list my Ollama models.

I followed issue #117 for setup. OpenWebUI runs in a container, is configured successfully, and is accessible on port 3000. Ollama is installed directly on my local machine and is running on port 11434. Attached are my .env file and docker-compose.yaml (renamed to .txt, since GitHub only accepts those). Additionally, here is some output from docker container logs llocalsearch-backend-1:

Mar 21 15:13:10.996 INF app/main.go:33 created example session
Mar 21 15:13:11.001 INF app/main.go:36 Starting the server
Mar 21 15:13:11.003 INF app/apiServer.go:222 Starting server at http://localhost:8080
Mar 21 15:13:14.403 INF app/apiServer.go:213 Chat list sent
Mar 21 15:13:14.411 ERR app/apiServer.go:105 Error getting models
Mar 21 15:13:14.593 INF app/apiServer.go:170 Loaded Chat id=tutorial "message count"=2

Would anyone happen to know where I might be going wrong or if I missed a config setting? I have no advanced knowledge of Docker, just a little experience running containers. Please let me know if there is anything else I should provide and thank you for your time!
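In case it is relevant, here is the part of my .env that I suspect matters most. The `OLLAMA_HOST` variable name and the `host.docker.internal` address are my guesses from skimming other issues, since the backend runs in a container while Ollama runs on the host (so `localhost` inside the container would point at the container itself, not at Ollama). I have not confirmed this is the exact variable LLocalSearch expects:

```
# .env (sketch, not verified): point the containerized backend at Ollama on the host.
# On Docker for Mac, host.docker.internal resolves to the host machine.
OLLAMA_HOST=http://host.docker.internal:11434
```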

docker-compose.txt
env.txt

Labels: question (Further information is requested)