Server · 7 months ago
parent revision d5b7c5b2b1

BIN
__pycache__/embed.cpython-310.pyc


BIN
__pycache__/get_vector_db.cpython-310.pyc


BIN
__pycache__/query.cpython-310.pyc


File changes will not be shown because they are too large
+ 0 - 3
app.py


BIN
chroma/cd7cb5a8-0622-4833-a6a2-d812be9d5da4/length.bin


+ 1 - 1
get_vector_db.py

@@ -7,7 +7,7 @@ COLLECTION_NAME = os.getenv('COLLECTION_NAME', 'siwei_ai')
 TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL', 'nomic-embed-text')
 
 def get_vector_db():
-    embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL,show_progress=True)
+    embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL,show_progress=True,num_gpu=0)
 
     db = Chroma(
         collection_name=COLLECTION_NAME,
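
The only functional change here is forcing the embedding model onto the CPU via num_gpu=0. A minimal sketch of how the full get_vector_db() might read after this commit, assuming the langchain_community imports and a persist_directory setting that the hunk does not show:

    import os
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma

    CHROMA_PATH = os.getenv('CHROMA_PATH', 'chroma')  # assumed default, not visible in the hunk
    COLLECTION_NAME = os.getenv('COLLECTION_NAME', 'siwei_ai')
    TEXT_EMBEDDING_MODEL = os.getenv('TEXT_EMBEDDING_MODEL', 'nomic-embed-text')

    def get_vector_db():
        # num_gpu=0 asks the Ollama server to run the embedding model on CPU only
        embedding = OllamaEmbeddings(model=TEXT_EMBEDDING_MODEL, show_progress=True, num_gpu=0)

        db = Chroma(
            collection_name=COLLECTION_NAME,
            persist_directory=CHROMA_PATH,  # assumed; the diff cuts off before this line
            embedding_function=embedding,
        )
        return db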

+ 1 - 1
query.py

@@ -21,7 +21,7 @@ def get_prompt():
     )
 
     template = """Answer the question in Chinese based ONLY on the following context:
-    {context}
+    {context} and output strictly in markdown format,Line breaks use <br>
     Question: {question}
     """
 

+ 1 - 1
requirements1.txt

@@ -38,7 +38,7 @@ Flask==3.0.3
 flatbuffers==24.3.25
 fonttools==4.53.0
 frozenlist==1.4.1
-fsspec==2024.6.1
+fsspec==2024.2.0
 google-api-core==2.19.1
 google-auth==2.30.0
 google-cloud-vision==3.7.2
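
The only change to requirements1.txt is pinning fsspec back to 2024.2.0. A quick check of which version actually got installed after re-running pip against this file, assuming a standard Python 3.8+ environment:

    from importlib.metadata import version

    # Prints the resolved fsspec version; expected to be 2024.2.0 after this commit
    print(version("fsspec"))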

Some files were not shown because the diff is too large