import { CodeGroup } from '@/app/components/develop/code.tsx'
import { Row, Col, Properties, Property, Heading, SubProperty, Paragraph } from '@/app/components/develop/md.tsx'
# Dataset API
Creates an empty dataset.
### Request Body
- name (string) Dataset name
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "name"
}'
```
```json {{ title: 'Response' }}
{
"id": "",
"name": "name",
"description": null,
"provider": "vendor",
"permission": "only_me",
"data_source_type": null,
"indexing_technique": null,
"app_count": 0,
"document_count": 0,
"word_count": 0,
"created_by": "",
"created_at": 1695636173,
"updated_by": "",
"updated_at": 1695636173,
"embedding_model": null,
"embedding_model_provider": null,
"embedding_available": null
}
```
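The id returned here is the dataset_id used by the endpoints below. A minimal shell sketch that captures it (assumes the jq CLI is available; DATASET_ID is an illustrative variable name):
```bash {{ title: 'cURL' }}
# Create a dataset and keep its id for subsequent calls (sketch; assumes jq).
DATASET_ID=$(curl --silent --location --request POST 'https://api.dify.ai/v1/datasets' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{"name": "name"}' | jq -r '.id')
echo "created dataset: ${DATASET_ID}"
```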
---
Lists datasets.
### Path Query
- page Page number
- limit Number of items returned; default 20, range 1-100
```bash {{ title: 'cURL' }}
curl --location --request GET 'https://api.dify.ai/v1/datasets?page=1&limit=20' \
--header 'Authorization: Bearer {api_key}'
```
```json {{ title: 'Response' }}
{
"data": [
{
"id": "",
"name": "name",
"description": "desc",
"permission": "only_me",
"data_source_type": "upload_file",
"indexing_technique": "",
"app_count": 2,
"document_count": 10,
"word_count": 1200,
"created_by": "",
"created_at": "",
"updated_by": "",
"updated_at": ""
},
...
],
"has_more": true,
"limit": 20,
"total": 50,
"page": 1
}
```
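Because the response includes has_more, paging can be driven directly from it. A hedged sketch (assumes jq; the loop itself is not part of the API):
```bash {{ title: 'cURL' }}
# Page through every dataset, printing names, until has_more is false (sketch).
page=1
while :; do
  resp=$(curl --silent --location --request GET "https://api.dify.ai/v1/datasets?page=${page}&limit=100" \
  --header 'Authorization: Bearer {api_key}')
  echo "$resp" | jq -r '.data[].name'
  [ "$(echo "$resp" | jq -r '.has_more')" = "true" ] || break
  page=$((page + 1))
done
```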
---
This API creates a new document from text in an existing dataset.
### Path Params
- dataset_id (string) Dataset ID
### Request Body
- name (string) Document name
- text (string) Document content
- indexing_technique (string) Index mode
  - high_quality High quality: embed with the embedding model and build a vector database index
  - economy Economy: build an inverted index using the keyword table index
- process_rule (object) Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom (a custom-mode sketch follows the response example below)
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule; enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, and tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether this rule is selected; if no document ID is passed in, the default value is used
    - segmentation (object) Segmentation rules
      - separator Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
      - max_tokens Maximum segment length in tokens; defaults to 1000
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "text",
"text": "text",
"indexing_technique": "high_quality",
"process_rule": {
"mode": "automatic"
}
}'
```
```json {{ title: 'Response' }}
{
"document": {
"id": "",
"position": 1,
"data_source_type": "upload_file",
"data_source_info": {
"upload_file_id": ""
},
"dataset_process_rule_id": "",
"name": "text.txt",
"created_from": "api",
"created_by": "",
"created_at": 1695690280,
"tokens": 0,
"indexing_status": "waiting",
"error": null,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"archived": false,
"display_status": "queuing",
"word_count": 0,
"hit_count": 0,
"doc_form": "text_model"
},
"batch": ""
}
```
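If automatic cleaning is not suitable, mode can be set to custom with explicit rules. A sketch that reuses the rules schema documented above (the rule and segmentation values are illustrative):
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "text",
    "text": "text",
    "indexing_technique": "high_quality",
    "process_rule": {
        "mode": "custom",
        "rules": {
            "pre_processing_rules": [
                {"id": "remove_extra_spaces", "enabled": true},
                {"id": "remove_urls_emails", "enabled": true}
            ],
            "segmentation": {"separator": "###", "max_tokens": 500}
        }
    }
}'
```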
---
This API creates a new document from an uploaded file in an existing dataset.
### Path Params
- dataset_id (string) Dataset ID
### Request Body
- original_document_id (string) Source document ID (optional)
  - Used to re-upload the document or modify its cleaning and segmentation configuration; missing information is copied from the source document
  - The source document cannot be an archived document
  - When original_document_id is passed in, the request updates the existing document; process_rule is optional and, if omitted, the source document's segmentation settings are used by default
  - When original_document_id is not passed in, the request creates a new document, and process_rule is required
- file (multipart/form-data) The file to upload
- indexing_technique (string) Index mode
  - high_quality High quality: embed with the embedding model and build a vector database index
  - economy Economy: build an inverted index using the keyword table index
- process_rule (object) Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule; enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, and tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether this rule is selected; if no document ID is passed in, the default value is used
    - segmentation (object) Segmentation rules
      - separator Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
      - max_tokens Maximum segment length in tokens; defaults to 1000
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_file' \
--header 'Authorization: Bearer {api_key}' \
--form 'data="{\"name\":\"Dify\",\"indexing_technique\":\"high_quality\",\"process_rule\":{\"rules\":{\"pre_processing_rules\":[{\"id\":\"remove_extra_spaces\",\"enabled\":true},{\"id\":\"remove_urls_emails\",\"enabled\":true}],\"segmentation\":{\"separator\":\"###\",\"max_tokens\":500}},\"mode\":\"custom\"}}";type=text/plain' \
--form 'file=@"/path/to/file"'
```
```json {{ title: 'Response' }}
{
"document": {
"id": "",
"position": 1,
"data_source_type": "upload_file",
"data_source_info": {
"upload_file_id": ""
},
"dataset_process_rule_id": "",
"name": "Dify.txt",
"created_from": "api",
"created_by": "",
"created_at": 1695308667,
"tokens": 0,
"indexing_status": "waiting",
"error": null,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"archived": false,
"display_status": "queuing",
"word_count": 0,
"hit_count": 0,
"doc_form": "text_model"
},
"batch": ""
}
```
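When the automatic rules are sufficient, the data form field can be reduced accordingly; the document name then presumably defaults to the uploaded filename (a sketch under that assumption):
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/document/create_by_file' \
--header 'Authorization: Bearer {api_key}' \
--form 'data="{\"indexing_technique\":\"high_quality\",\"process_rule\":{\"mode\":\"automatic\"}}";type=text/plain' \
--form 'file=@"/path/to/file"'
```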
---
This API updates a document in an existing dataset using text.
### Path Params
- dataset_id (string) Dataset ID
- document_id (string) Document ID
### Request Body
- name (string) Document name (optional)
- text (string) Document content (optional)
- process_rule (object) Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule; enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, and tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether this rule is selected; if no document ID is passed in, the default value is used
    - segmentation (object) Segmentation rules
      - separator Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
      - max_tokens Maximum segment length in tokens; defaults to 1000
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "name",
"text": "text"
}'
```
```json {{ title: 'Response' }}
{
"document": {
"id": "",
"position": 1,
"data_source_type": "upload_file",
"data_source_info": {
"upload_file_id": ""
},
"dataset_process_rule_id": "",
"name": "name.txt",
"created_from": "api",
"created_by": "",
"created_at": 1695308667,
"tokens": 0,
"indexing_status": "waiting",
"error": null,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"archived": false,
"display_status": "queuing",
"word_count": 0,
"hit_count": 0,
"doc_form": "text_model"
},
"batch": ""
}
```
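To re-segment the document while updating its text, include process_rule as documented above; a sketch with illustrative custom rules:
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_text' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "name",
    "text": "text",
    "process_rule": {
        "mode": "custom",
        "rules": {
            "pre_processing_rules": [
                {"id": "remove_extra_spaces", "enabled": true}
            ],
            "segmentation": {"separator": "###", "max_tokens": 500}
        }
    }
}'
```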
---
This API updates a document in an existing dataset using a file.
### Path Params
- dataset_id (string) Dataset ID
- document_id (string) Document ID
### Request Body
- name (string) Document name (optional)
- file (multipart/form-data) The file to upload
- process_rule (object) Processing rules
  - mode (string) Cleaning and segmentation mode: automatic / custom
  - rules (object) Custom rules (in automatic mode, this field is empty)
    - pre_processing_rules (array[object]) Preprocessing rules
      - id (string) Unique identifier for the preprocessing rule; enumerated values:
        - remove_extra_spaces Replace consecutive spaces, newlines, and tabs
        - remove_urls_emails Delete URLs and email addresses
      - enabled (bool) Whether this rule is selected; if no document ID is passed in, the default value is used
    - segmentation (object) Segmentation rules
      - separator Custom segment delimiter; currently only one delimiter may be set. Defaults to \n
      - max_tokens Maximum segment length in tokens; defaults to 1000
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/update_by_file' \
--header 'Authorization: Bearer {api_key}' \
--form 'data="{\"name\":\"Dify\",\"indexing_technique\":\"high_quality\",\"process_rule\":{\"rules\":{\"pre_processing_rules\":[{\"id\":\"remove_extra_spaces\",\"enabled\":true},{\"id\":\"remove_urls_emails\",\"enabled\":true}],\"segmentation\":{\"separator\":\"###\",\"max_tokens\":500}},\"mode\":\"custom\"}}";type=text/plain' \
--form 'file=@"/path/to/file"'
```
```json {{ title: 'Response' }}
{
"document": {
"id": "",
"position": 1,
"data_source_type": "upload_file",
"data_source_info": {
"upload_file_id": ""
},
"dataset_process_rule_id": "",
"name": "Dify.txt",
"created_from": "api",
"created_by": "",
"created_at": 1695308667,
"tokens": 0,
"indexing_status": "waiting",
"error": null,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"archived": false,
"display_status": "queuing",
"word_count": 0,
"hit_count": 0,
"doc_form": "text_model"
},
"batch": "20230921150427533684"
}
```
---
Gets the indexing status (progress) of the documents in a batch.
### Path Params
- dataset_id (string) Dataset ID
- batch (string) Batch number of the uploaded documents
```bash {{ title: 'cURL' }}
curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{batch}/indexing-status' \
--header 'Authorization: Bearer {api_key}'
```
```json {{ title: 'Response' }}
{
"data":[{
"id": "",
"indexing_status": "indexing",
"processing_started_at": 1681623462.0,
"parsing_completed_at": 1681623462.0,
"cleaning_completed_at": 1681623462.0,
"splitting_completed_at": 1681623462.0,
"completed_at": null,
"paused_at": null,
"error": null,
"stopped_at": null,
"completed_segments": 24,
"total_segments": 100
}]
}
```
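Since indexing runs asynchronously, a client typically polls this endpoint with the batch value returned by the create/update calls. A hedged sketch (assumes jq; treating completed and error as the terminal statuses is an assumption):
```bash {{ title: 'cURL' }}
# Poll the first document in the batch until indexing finishes (sketch; assumes jq).
while :; do
  status=$(curl --silent --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{batch}/indexing-status' \
  --header 'Authorization: Bearer {api_key}' | jq -r '.data[0].indexing_status')
  echo "indexing_status: ${status}"
  case "$status" in completed|error) break ;; esac
  sleep 2
done
```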
---
Deletes a document.
### Path Params
- dataset_id (string) Dataset ID
- document_id (string) Document ID
```bash {{ title: 'cURL' }}
curl --location --request DELETE 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}' \
--header 'Authorization: Bearer {api_key}'
```
```json {{ title: 'Response' }}
{
"result": "success"
}
```
---
Lists documents in a dataset.
### Path Params
- dataset_id (string) Dataset ID
### Path Query
- keyword (string) Search keyword; currently only searches document names (optional)
- page (integer) Page number (optional)
- limit (integer) Number of items returned; default 20, range 1-100 (optional)
```bash {{ title: 'cURL' }}
curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents' \
--header 'Authorization: Bearer {api_key}'
```
```json {{ title: 'Response' }}
{
"data": [
{
"id": "",
"position": 1,
"data_source_type": "file_upload",
"data_source_info": null,
"dataset_process_rule_id": null,
"name": "dify",
"created_from": "",
"created_by": "",
"created_at": 1681623639,
"tokens": 0,
"indexing_status": "waiting",
"error": null,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"archived": false
}
],
"has_more": false,
"limit": 20,
"total": 9,
"page": 1
}
```
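With the query parameters applied, the same call can filter by document name (parameter names follow the Path Query list above):
```bash {{ title: 'cURL' }}
curl --location --request GET 'https://api.dify.ai/v1/datasets/{dataset_id}/documents?keyword=dify&page=1&limit=20' \
--header 'Authorization: Bearer {api_key}'
```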
---
Adds segments to a document.
### Path Params
- dataset_id (string) Dataset ID
- document_id (string) Document ID
### Request Body
- segments (object list) Segment contents
  - content (text) Text content / question content (required)
  - answer (text) Answer content; pass this value when the dataset is in QA mode (optional)
  - keywords (list) Keywords (optional)
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
"segments": [
{
"content": "1",
"answer": "1",
"keywords": ["a"]
}
]
}'
```
```json {{ title: 'Response' }}
{
"data": [{
"id": "",
"position": 1,
"document_id": "",
"content": "1",
"answer": "1",
"word_count": 25,
"tokens": 0,
"keywords": [
"a"
],
"index_node_id": "",
"index_node_hash": "",
"hit_count": 0,
"enabled": true,
"disabled_at": null,
"disabled_by": null,
"status": "completed",
"created_by": "",
"created_at": 1695312007,
"indexing_at": 1695312007,
"completed_at": 1695312007,
"error": null,
"stopped_at": null
}],
"doc_form": "text_model"
}
```
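For a dataset that is not in QA mode, answer is simply omitted; a minimal sketch:
```bash {{ title: 'cURL' }}
curl --location --request POST 'https://api.dify.ai/v1/datasets/{dataset_id}/documents/{document_id}/segments' \
--header 'Authorization: Bearer {api_key}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "segments": [
        {
            "content": "1",
            "keywords": ["a"]
        }
    ]
}'
```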
---
### Error message
- **document_indexing**: Document indexing failed
- **provider_not_initialize**: Embedding model is not configured
- **not_found**: Document does not exist
- **dataset_name_duplicate**: Duplicate dataset name
- **provider_quota_exceeded**: Model quota exceeded
- **dataset_not_initialized**: The dataset has not been initialized yet
- **unsupported_file_type**: Unsupported file type
  - Currently supported: txt, markdown, md, pdf, html, htm, xlsx, docx, csv
- **too_many_files**: Too many files; currently only a single file can be uploaded at a time
- **file_too_large**: The file is too large; files must be below 15 MB by default, depending on your environment configuration
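These codes arrive in the JSON error body. The shape below is illustrative only; the exact field names, status code, and message wording are assumptions rather than a documented schema:
```json {{ title: 'Response' }}
{
  "code": "dataset_name_duplicate",
  "message": "<human-readable description of the error>",
  "status": 409
}
```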