takatost 9ae91a2ec3 feat: optimize xinference request max token key and stop reason (#998) 1 year ago
__init__.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
anthropic_provider.py 9adbeadeec feat: claude paid optimize (#890) 1 year ago
azure_openai_provider.py 1bd0a76a20 feat: optimize error raise (#820) 1 year ago
base.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
chatglm_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
hosted.py 9adbeadeec feat: claude paid optimize (#890) 1 year ago
huggingface_hub_provider.py 9b247fccd4 feat: adjust hf max tokens (#979) 1 year ago
minimax_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
openai_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
openllm_provider.py 6c832ee328 fix: remove openllm pypi package because of this package too large (#931) 1 year ago
replicate_provider.py 95b179fb39 fix: replicate text generation model validate (#923) 1 year ago
spark_provider.py f42e7d1a61 feat: add spark v2 support (#885) 1 year ago
tongyi_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
wenxin_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
xinference_provider.py 9ae91a2ec3 feat: optimize xinference request max token key and stop reason (#998) 1 year ago