Local LLM

Chatling currently only supports hosted/cloud models (OpenAI, Claude, Gemini, etc.). Would it be possible to add support for custom endpoints or local models, for instance by letting users define a custom LLM endpoint?
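For context, a minimal sketch of what such a custom-endpoint setting could look like. Most local runtimes (Ollama, the llama.cpp server, vLLM) expose an OpenAI-compatible chat completions API, so a single base-URL plus model-name field would likely cover them. The URL, model name, and helper below are illustrative assumptions, not part of Chatling.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-compatible endpoint.

    `base_url` might be e.g. "http://localhost:11434/v1" for a local
    Ollama server (assumption: the runtime speaks the OpenAI chat API).
    """
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

# Example: point at a hypothetical local server instead of a cloud API.
url, body = build_chat_request("http://localhost:11434/v1", "llama3", "Hello!")
```

Because the request shape is identical to the hosted OpenAI API, supporting this would mostly mean making the base URL (and API key) configurable rather than integrating each runtime separately.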

Status: Completed
Board: πŸ’‘ Feature Request
Date: 21 days ago
Author: Webwerkplaats
