Latest commit: takatost · 9ae91a2ec3 · feat: optimize xinference request max token key and stop reason (#998) · 2 years ago
| File | Commit | Message | Last updated |
| --- | --- | --- | --- |
| __init__.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| azure_chat_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| azure_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| chat_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| fake.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| huggingface_endpoint_llm.py | a76fde3d23 | feat: optimize hf inference endpoint (#975) | 2 years ago |
| open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| openllm.py | 866ee5da91 | fix: openllm generate cutoff (#945) | 2 years ago |
| replicate_llm.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| spark.py | f42e7d1a61 | feat: add spark v2 support (#885) | 2 years ago |
| tongyi_llm.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| wenxin.py | c4d759dfba | fix: wenxin error not raise when stream mode (#884) | 2 years ago |
| xinference_llm.py | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2 years ago |