| File | Commit | Message | Last change |
|---|---|---|---|
| __init__.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| azure_chat_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| azure_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| chat_open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| fake.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| huggingface_endpoint_llm.py | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 1 year ago |
| open_ai.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| openllm.py | 866ee5da91 | fix: openllm generate cutoff (#945) | 1 year ago |
| replicate_llm.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| spark.py | f42e7d1a61 | feat: add spark v2 support (#885) | 1 year ago |
| tongyi_llm.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| wenxin.py | c4d759dfba | fix: wenxin error not raise when stream mode (#884) | 1 year ago |
| xinference_llm.py | 2d9616c29c | fix: xinference last token being ignored (#1013) | 1 year ago |