Local LLM

Chatling currently supports only hosted/cloud models (OpenAI, Claude, Gemini, etc.). Would it be possible to add support for custom endpoints or local models, for instance by letting users define a custom LLM endpoint?
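To illustrate the request: many local runtimes (e.g. Ollama, llama.cpp's server) expose an OpenAI-compatible chat-completions API, so "custom endpoint" support could be as simple as a configurable base URL. The sketch below is purely illustrative; the endpoint URL, model name, and payload shape are assumptions based on the OpenAI-compatible convention, not Chatling's actual configuration.

```python
import json
import urllib.request

# Hypothetical custom endpoint, e.g. a local Ollama server
# exposing an OpenAI-compatible chat-completions route.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a chat-completion request against the custom endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Hello")
```

Because the wire format matches the hosted providers Chatling already calls, pointing the existing client at a user-supplied base URL would cover both local models and self-hosted gateways.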


Status: Feedbacks In Review
Board: 💡 Feature Request
Date: About 11 hours ago
Author: Webwerkplaats
