|
@@ -1,52 +1,52 @@
|
|
|
[简体中文](train_cn.md) | English
|
|
|
|
|
|
-# PaddleRS Training API Description
|
|
|
+# PaddleRS Training APIs
|
|
|
|
|
|
-**Trainers** (or model trainers) encapsulate model training, validation, quantization, and dynamic graph inference, defined in files of `paddlers/tasks/` directory. For user convenience, PaddleRS provides trainers that inherits from the parent class [`BaseModel`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/base.py) for all supported models, and provides several apis externally. The types of trainers corresponding to change detection, scene classification, target detection, image restoration and image segmentation tasks are respectively `BaseChangeDetector`、`BaseClassifier`、`BaseDetector`、`BaseRestorer` and `BaseSegmenter`。This document describes the initialization function of the trainer and `train()`、`evaluate()` API。
|
|
|
+**Trainers** (or model trainers) encapsulate model training, validation, quantization, and dynamic graph inference, and are defined in files in the `paddlers/tasks/` directory. For user convenience, PaddleRS provides trainers that inherit from the parent class [`BaseModel`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/base.py) for all supported models, and exposes several APIs. The types of trainers corresponding to change detection, scene classification, object detection, image restoration, and image segmentation tasks are, respectively, `BaseChangeDetector`, `BaseClassifier`, `BaseDetector`, `BaseRestorer`, and `BaseSegmenter`. This document describes how to initialize the trainers as well as how to use the APIs.
|
|
|
|
|
|
-## Initialize the Trainer
|
|
|
+## Initialize Trainers
|
|
|
|
|
|
-All trainers support default parameter construction (that is, no parameters are passed in when the object is constructed), in which case the constructed trainer object applies to three-channel RGB data.
|
|
|
+All trainers support construction with default parameters (that is, no parameters are passed in when the object is constructed), in which case the constructed trainer object is suitable for three-channel RGB data.
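
As a minimal sketch of default construction, the snippet below builds two trainers with no arguments. The model names (`BIT`, `UNet`) and the `pdrs.tasks.cd`/`pdrs.tasks.seg` aliases follow the PaddleRS tutorials and are only illustrative; substitute the models you actually use.

```python
# Minimal sketch: default construction of two trainers.
# Model names and module aliases follow the PaddleRS tutorials (illustrative only).
import paddlers as pdrs

cd_model = pdrs.tasks.cd.BIT()     # change detection trainer with default arguments
seg_model = pdrs.tasks.seg.UNet()  # semantic segmentation trainer with default arguments
```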
|
|
|
|
|
|
-### Initialize `BaseChangeDetector` Subclass Object
|
|
|
+### Initialize `BaseChangeDetector` Objects
|
|
|
|
|
|
-- The `num_classes`、`use_mixed_loss` and `in_channels` parameters are generally supported, indicating the number of model output categories, whether to use preset mixing losses, and the number of input channels, respectively. Some subclasses, such as `DSIFN`, do not yet support `in_channels`.
|
|
|
+- The `num_classes`, `use_mixed_loss`, and `in_channels` parameters are generally supported, indicating the number of model output categories, whether to use pre-defined mixed losses, and the number of input channels, respectively. Some subclasses, such as `DSIFN`, do not yet support `in_channels`.
|
|
|
- `use_mixed_loss` will be deprecated in the future, so it is not recommended.
|
|
|
-- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a dictionary, where the values for the keys `types` and `coef` are two equal-length lists representing the loss function object (a callable object) and the weight of the loss function, respectively. For example: `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}`. It is equivalent to calculating the following loss function in the training process: `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are model output and ground-truth labels, respectively.
|
|
|
-- Different subclasses support model-related input parameters. For details, you can refer to [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/cd) and [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py).
|
|
|
+- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a dictionary, where the values for the keys `types` and `coef` are two equal-length lists holding the loss function objects (callable objects) and their weights, respectively. For example, `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}` is equivalent to computing the following loss during training: `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are the model output and the ground-truth labels, respectively.
|
|
|
+- Different subclasses support different model-related input parameters. For details, you can refer to [this document](../intro/model_cons_params_en.md). A construction example is sketched below.
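
To illustrate the `losses` format described above, the following sketch constructs a change detection trainer with a weighted combination of two losses. The `BIT` trainer and the loss classes imported from `paddlers.models.ppseg.models.losses` are assumptions based on the PaddleRS tutorials; replace them with the model and losses you actually use.

```python
# Sketch: a change detection trainer with a custom weighted loss.
# The model name and loss classes are assumptions based on the PaddleRS tutorials.
import paddlers as pdrs
from paddlers.models.ppseg.models import losses

model = pdrs.tasks.cd.BIT(
    num_classes=2,
    losses={
        # Equivalent to 1.0 * CrossEntropyLoss + 0.5 * DiceLoss during training.
        'types': [losses.CrossEntropyLoss(), losses.DiceLoss()],
        'coef': [1.0, 0.5],
    })
```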
|
|
|
|
|
|
-### Initialize `BaseClassifier` Subclass Object
|
|
|
+### Initialize `BaseClassifier` Objects
|
|
|
|
|
|
-- The `num_classes` and `use_mixed_loss` parameters are generally supported, indicating the number of model output categories, whether to use preset mixing losses.
|
|
|
+- The `num_classes` and `use_mixed_loss` parameters are generally supported, indicating the number of model output categories and whether to use pre-defined mixed losses, respectively.
|
|
|
- `use_mixed_loss` will be deprecated in the future, so it is not recommended.
|
|
|
- Specify the loss function used during model training through the `losses` parameter. The passed argument needs to be an object of type `paddlers.models.clas_losses.CombinedLoss`.
|
|
|
-- Different subclasses support model-related input parameters. For details, you can refer to [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/clas) and [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/classifier.py).
|
|
|
+- Different subclasses support different model-related input parameters. For details, you can refer to [this document](../intro/model_cons_params_en.md). A construction example is sketched below.
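
A hedged sketch of scene classifier construction follows. The `HRNet_W18_C` model name is an assumption based on the PaddleRS model zoo and is used here only as an example; the number of classes should match your dataset.

```python
# Sketch: a scene classification trainer.
# The model name is an assumption based on the PaddleRS model zoo (illustrative only).
import paddlers as pdrs

model = pdrs.tasks.clas.HRNet_W18_C(
    num_classes=21)  # number of scene categories in your dataset
```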
|
|
|
|
|
|
-### Initialize `BaseDetector` Subclass Object
|
|
|
+### Initialize `BaseDetector` Objects
|
|
|
|
|
|
-- Generally, the `num_classes` and `backbone` parameters can be set to indicate the number of output categories of the model and the type of backbone network used, respectively. Compared with other tasks, the trainer of object detection task supports more initialization parameters, including network structure, loss function, post-processing strategy and so on.
|
|
|
-- Different from tasks such as segmentation, classification and change detection, detection tasks do not support the loss function specified through the `losses` parameter. However, for some trainers such as `PPYOLO`, the loss function can be customized by `use_iou_loss` and other parameters.
|
|
|
-- Different subclasses support model-related input parameters. For details, you can refer to [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/det) and [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py).
|
|
|
+- Generally, the `num_classes` and `backbone` parameters can be set to indicate the number of output categories of the model and the type of backbone network used, respectively. Compared with other tasks, object detection trainers support more initialization parameters, covering the network structure, loss functions, post-processing strategies, and so on.
|
|
|
+- Different from tasks such as segmentation, classification, and change detection, object detection trainers do not support specifying the loss function through the `losses` parameter. However, for some trainers such as `PPYOLO`, the loss function can be customized via `use_iou_loss` and other parameters.
|
|
|
+- Different subclasses support different model-related input parameters. For details, you can refer to [this document](../intro/model_cons_params_en.md). A construction example is sketched below.
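
The sketch below constructs an object detection trainer with an explicit backbone. The `PPYOLO` trainer and the `'ResNet50_vd_dcn'` backbone name are assumptions based on the PaddleRS tutorials; consult the trainer definition for the backbones each detector accepts.

```python
# Sketch: an object detection trainer with an explicit backbone.
# The model and backbone names are assumptions based on the PaddleRS tutorials.
import paddlers as pdrs

model = pdrs.tasks.det.PPYOLO(
    num_classes=10,               # number of object categories in your dataset
    backbone='ResNet50_vd_dcn')   # backbone network used by the detector
```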
|
|
|
|
|
|
-### Initialize `BaseRestorer` Subclass Object
|
|
|
+### Initialize `BaseRestorer` Objects
|
|
|
|
|
|
-- Generally support setting `sr_factor` parameter, representing the scaling factor in image super resolution; for models that do not support super resolution rebuild tasks, `sr_factor` is set to `None`.
|
|
|
-- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a callable object or dictionary. `losses` specified manually must have the same format as the the subclass `default_loss()` method.
|
|
|
+- The `sr_factor` parameter is generally supported, representing the scaling factor in image super-resolution tasks. For models that do not support super-resolution reconstruction, `sr_factor` should be set to `None`.
|
|
|
+- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a callable object or dictionary. The specified `losses` must have the same format as the return value of the `default_loss()` method.
|
|
|
- The `min_max` parameter can specify the numerical range of model input and output. If `None`, the default range of values for the class is used.
|
|
|
-- Different subclasses support model-related input parameters. For details, you can refer to [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/res) and [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/restorer.py).
|
|
|
+- Different subclasses support different model-related input parameters. For details, you can refer to [this document](../intro/model_cons_params_en.md). A construction example is sketched below.
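
A hedged sketch of restorer construction follows. The `DRN` model name is an assumption based on the PaddleRS model zoo; any super-resolution model listed there can be used in its place.

```python
# Sketch: a super-resolution trainer with a 4x scaling factor.
# The model name is an assumption based on the PaddleRS model zoo (illustrative only).
import paddlers as pdrs

model = pdrs.tasks.res.DRN(sr_factor=4)  # upscale the input by a factor of 4
```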
|
|
|
|
|
|
-### Initialize `BaseSegmenter` Subclass Object
|
|
|
+### Initialize `BaseSegmenter` Objects
|
|
|
|
|
|
-- The parameters `in_channels`, `num_classes`, and `use_mixed_loss` are generally supported, indicating the number of input channels, the number of output categories, and whether the preset mixing loss is used.
|
|
|
+- The parameters `in_channels`, `num_classes`, and `use_mixed_loss` are generally supported, indicating the number of input channels, the number of output categories, and whether to use pre-defined mixed losses, respectively.
|
|
|
- `use_mixed_loss` will be deprecated in the future, so it is not recommended.
|
|
|
-- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a dictionary, where the values for the keys `types` and `coef` are two equal-length lists representing the loss function object (a callable object) and the weight of the loss function, respectively. For example: `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}`. It is equivalent to calculating the following loss function in the training process: `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are model output and ground-truth labels, respectively.
|
|
|
-- Different subclasses support model-related input parameters. For details, you can refer to [model definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/seg) and [trainer definitions](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmentor.py).
|
|
|
+- Specify the loss function used during model training through the `losses` parameter. `losses` needs to be a dictionary, where the values for the keys `types` and `coef` are two equal-length lists holding the loss function objects (callable objects) and their weights, respectively. For example, `losses={'types': [LossType1(), LossType2()], 'coef': [1.0, 0.5]}` is equivalent to computing the following loss during training: `1.0*LossType1()(logits, labels)+0.5*LossType2()(logits, labels)`, where `logits` and `labels` are the model output and the ground-truth labels, respectively.
|
|
|
+- Different subclasses support different model-related input parameters. For details, you can refer to [this document](../intro/model_cons_params_en.md). A construction example is sketched below.
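
The sketch below constructs a segmentation trainer for multispectral imagery with a custom loss, using the `in_channels`, `num_classes`, and `losses` parameters described above. The `UNet` model name and the loss classes imported from `paddlers.models.ppseg.models.losses` are assumptions based on the PaddleRS tutorials.

```python
# Sketch: a semantic segmentation trainer for 10-band imagery with a custom loss.
# The model name and loss classes are assumptions based on the PaddleRS tutorials.
import paddlers as pdrs
from paddlers.models.ppseg.models import losses

model = pdrs.tasks.seg.UNet(
    in_channels=10,   # number of bands in the input imagery
    num_classes=5,    # number of segmentation categories
    losses={
        'types': [losses.CrossEntropyLoss()],
        'coef': [1.0],
    })
```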
|
|
|
|
|
|
## `train()`
|
|
|
|
|
|
### `BaseChangeDetector.train()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def train(self,
|
|
@@ -67,7 +67,7 @@ def train(self,
|
|
|
resume_checkpoint=None):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
@@ -76,20 +76,20 @@ The meanings of each parameter are as follows:
|
|
|
|`train_batch_size`|`int`|Batch size used during training.|`2`|
|
|
|
|`eval_dataset`|`paddlers.datasets.CDDataset` \| `None`|Validation dataset.|`None`|
|
|
|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the optimizer defined by default is used.|`None`|
|
|
|
-|`save_interval_epochs`|`int`|Number of intervals epochs of the model stored during training.|`1`|
|
|
|
-|`log_interval_steps`|`int`|Number of steps (i.e., the number of iterations) between printing logs during training.|`2`|
|
|
|
-|`save_dir`|`str`|Path to save the model.|`'output'`|
|
|
|
-|`pretrain_weights`|`str` \| `None`|Name/path of the pre-training weight. If `None`, the pre-training weight is not used.|`None`|
|
|
|
+|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is evaluated and saved during training.|`1`|
|
|
|
+|`log_interval_steps`|`int`|Interval, in steps (i.e., iterations), at which training logs are printed.|`2`|
|
|
|
+|`save_dir`|`str`|Path to save checkpoints.|`'output'`|
|
|
|
+|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weight is used.|`None`|
|
|
|
|`learning_rate`|`float`|Learning rate used during training, for default optimizer.|`0.01`|
|
|
|
|`lr_decay_power`|`float`|Learning rate attenuation coefficient, for default optimizer.|`0.9`|
|
|
|
-|`early_stop`|`bool`|Whether the early stop policy is enabled during training.|`False`|
|
|
|
-|`early_stop_patience`|`int`|`patience` parameters when the early stop policy is enabled (refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|
|
|
-|`use_vdl`|`bool`|Whether to enable VisualDL log.|`True`|
|
|
|
-|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports continuing training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
+|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|
|
|
+|`early_stop_patience`|`int`|`patience` parameter when the early stopping policy is enabled. Please refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py) for more details.|`5`|
|
|
|
+|`use_vdl`|`bool`|Whether to enable VisualDL.|`True`|
|
|
|
+|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports resuming training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
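
Putting these parameters together, the sketch below shows a typical training call. It assumes that `model` is a `BaseChangeDetector` subclass object constructed as described earlier and that `train_dataset` and `eval_dataset` are existing `paddlers.datasets.CDDataset` objects; the concrete values are only illustrative.

```python
# Sketch: training a change detection model with the parameters documented above.
# `model`, `train_dataset`, and `eval_dataset` are assumed to exist already.
model.train(
    num_epochs=50,
    train_dataset=train_dataset,
    train_batch_size=4,
    eval_dataset=eval_dataset,
    save_interval_epochs=5,    # evaluate and save the model every 5 epochs
    log_interval_steps=10,     # print a log line every 10 iterations
    save_dir='output/bit',
    learning_rate=0.002,       # used by the default optimizer
    use_vdl=True)              # enable VisualDL logging
```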
|
|
|
|
|
|
### `BaseClassifier.train()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def train(self,
|
|
@@ -110,7 +110,7 @@ def train(self,
|
|
|
resume_checkpoint=None):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
@@ -119,20 +119,20 @@ The meanings of each parameter are as follows:
|
|
|
|`train_batch_size`|`int`|Batch size used during training.|`2`|
|
|
|
|`eval_dataset`|`paddlers.datasets.ClasDataset` \| `None`|Validation dataset.|`None`|
|
|
|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the optimizer defined by default is used.|`None`|
|
|
|
-|`save_interval_epochs`|`int`|Number of intervals epochs of the model stored during training.|`1`|
|
|
|
-|`log_interval_steps`|`int`|Number of steps (i.e., the number of iterations) between printing logs during training.|`2`|
|
|
|
-|`save_dir`|`str`|Path to save the model.|`'output'`|
|
|
|
-|`pretrain_weights`|`str` \| `None`|Name/path of the pre-training weight. If `None`, the pre-training weight is not used.|`'IMAGENET'`|
|
|
|
+|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is evaluated and saved during training.|`1`|
|
|
|
+|`log_interval_steps`|`int`|Interval, in steps (i.e., iterations), at which training logs are printed.|`2`|
|
|
|
+|`save_dir`|`str`|Path to save checkpoints.|`'output'`|
|
|
|
+|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weight is used.|`'IMAGENET'`|
|
|
|
|`learning_rate`|`float`|Learning rate used during training, for default optimizer.|`0.1`|
|
|
|
|`lr_decay_power`|`float`|Learning rate attenuation coefficient, for default optimizer.|`0.9`|
|
|
|
-|`early_stop`|`bool`|Whether the early stop policy is enabled during training.|`False`|
|
|
|
-|`early_stop_patience`|`int`|`patience` parameters when the early stop policy is enabled (refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|
|
|
-|`use_vdl`|`bool`|Whether to enable VisualDL log.|`True`|
|
|
|
-|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports continuing training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
+|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|
|
|
+|`early_stop_patience`|`int`|`patience` parameter when the early stopping policy is enabled. Please refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py) for more details.|`5`|
|
|
|
+|`use_vdl`|`bool`|Whether to enable VisualDL.|`True`|
|
|
|
+|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports resuming training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
|
|
|
### `BaseDetector.train()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def train(self,
|
|
@@ -158,34 +158,34 @@ def train(self,
|
|
|
resume_checkpoint=None):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`num_epochs`|`int`|Number of epochs to train.||
|
|
|
|`train_dataset`|`paddlers.datasets.COCODetDataset` \| `paddlers.datasets.VOCDetDataset` |Training dataset.||
|
|
|
-|`train_batch_size`|`int`|Batch size used during training.(For multi-card training, total batch size for all equipment).|`64`|
|
|
|
+|`train_batch_size`|`int`|Batch size used during training.|`64`|
|
|
|
|`eval_dataset`|`paddlers.datasets.COCODetDataset` \| `paddlers.datasets.VOCDetDataset` \| `None`|Validation dataset.|`None`|
|
|
|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the optimizer defined by default is used.|`None`|
|
|
|
-|`save_interval_epochs`|`int`|Number of intervals epochs of the model stored during training.|`1`|
|
|
|
-|`log_interval_steps`|`int`|Number of steps (i.e., the number of iterations) between printing logs during training.|`10`|
|
|
|
-|`save_dir`|`str`|Path to save the model.|`'output'`|
|
|
|
-|`pretrain_weights`|`str` \| `None`|Name/path of the pre-training weight. If `None`, the pre-training weight is not used.|`'IMAGENET'`|
|
|
|
+|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is evaluated and saved during training.|`1`|
|
|
|
+|`log_interval_steps`|`int`|Interval, in steps (i.e., iterations), at which training logs are printed.|`10`|
|
|
|
+|`save_dir`|`str`|Path to save checkpoints.|`'output'`|
|
|
|
+|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weight is used.|`'IMAGENET'`|
|
|
|
|`learning_rate`|`float`|Learning rate used during training, for default optimizer.|`0.001`|
|
|
|
|`warmup_steps`|`int`|Number of [warm-up](https://www.mdpi.com/2079-9292/10/16/2029/htm) rounds used by the default optimizer.|`0`|
|
|
|
-|`warmup_start_lr`|`int`|Default initial learning rate used by the warm-up phase of the optimizer.|`0`|
|
|
|
-|`lr_decay_epochs`|`list` \| `tuple`|Milestones of learning rate decline of the default optimizer, in terms of epoch. That is, which epoch the decay of the learning rate occurs.|`(216, 243)`|
|
|
|
+|`warmup_start_lr`|`int`|Initial learning rate used by the default optimizer during the warm-up phase.|`0`|
|
|
|
+|`lr_decay_epochs`|`list` \| `tuple`|Milestones of the learning rate decay of the default optimizer, in epochs. That is, the epochs at which the learning rate is decayed.|`(216, 243)`|
|
|
|
|`lr_decay_gamma`|`float`|Learning rate attenuation coefficient, for default optimizer.|`0.1`|
|
|
|
-|`metric`|`str` \| `None`|Evaluation metrics, can be `'VOC'`、`COCO` or `None`. If `None`, the evaluation index to be used is automatically determined according to the format of the dataset.|`None`|
|
|
|
-|`use_ema`|`bool`|Whether to enable [exponential moving average strategy](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/models/ppdet/optimizer.py) to update model weight parameters.|`False`|
|
|
|
-|`early_stop`|`bool`|Whether the early stop policy is enabled during training.|`False`|
|
|
|
-|`early_stop_patience`|`int`|`patience` parameters when the early stop policy is enabled (refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|
|
|
-|`use_vdl`|`bool`|Whether to enable VisualDL log.|`True`|
|
|
|
-|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports continuing training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
+|`metric`|`str` \| `None`|Evaluation metrics, which can be `'VOC'`, `'COCO'`, or `None`. If `None`, the evaluation metrics will be automatically determined according to the format of the dataset.|`None`|
|
|
|
+|`use_ema`|`bool`|Whether to enable [exponential moving average strategy](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/models/ppdet/optimizer.py) to update model weights.|`False`|
|
|
|
+|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|
|
|
+|`early_stop_patience`|`int`|`patience` parameter when the early stopping policy is enabled. Please refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py) for more details.|`5`|
|
|
|
+|`use_vdl`|`bool`|Whether to enable VisualDL.|`True`|
|
|
|
+|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports resuming training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
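
The sketch below shows a training call that exercises the warm-up and stepwise decay parameters of the default optimizer. It assumes that `model` is a `BaseDetector` subclass object and that `train_dataset` and `eval_dataset` are existing detection datasets; the concrete values are only illustrative.

```python
# Sketch: training an object detector with warm-up and stepwise learning rate decay.
# `model`, `train_dataset`, and `eval_dataset` are assumed to exist already.
model.train(
    num_epochs=270,
    train_dataset=train_dataset,
    train_batch_size=8,
    eval_dataset=eval_dataset,
    pretrain_weights='IMAGENET',   # start from pretrained backbone weights
    learning_rate=0.001,
    warmup_steps=500,              # warm up during the first 500 iterations
    warmup_start_lr=0.0,           # learning rate at the start of warm-up
    lr_decay_epochs=(216, 243),    # decay the learning rate at these epochs
    lr_decay_gamma=0.1,            # multiply the learning rate by 0.1 at each milestone
    save_dir='output/ppyolo')
```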
|
|
|
|
|
|
### `BaseRestorer.train()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def train(self,
|
|
@@ -197,7 +197,7 @@ def train(self,
|
|
|
save_interval_epochs=1,
|
|
|
log_interval_steps=2,
|
|
|
save_dir='output',
|
|
|
- pretrain_weights='CITYSCAPES',
|
|
|
+ pretrain_weights=None,
|
|
|
learning_rate=0.01,
|
|
|
lr_decay_power=0.9,
|
|
|
early_stop=False,
|
|
@@ -206,7 +206,7 @@ def train(self,
|
|
|
resume_checkpoint=None):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
@@ -215,20 +215,20 @@ The meanings of each parameter are as follows:
|
|
|
|`train_batch_size`|`int`|Batch size used during training.|`2`|
|
|
|
|`eval_dataset`|`paddlers.datasets.ResDataset` \| `None`|Validation dataset.|`None`|
|
|
|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the optimizer defined by default is used.|`None`|
|
|
|
-|`save_interval_epochs`|`int`|Number of intervals epochs of the model stored during training.|`1`|
|
|
|
-|`log_interval_steps`|`int`|Number of steps (i.e., the number of iterations) between printing logs during training.|`2`|
|
|
|
-|`save_dir`|`str`|Path to save the model.|`'output'`|
|
|
|
-|`pretrain_weights`|`str` \| `None`|Name/path of the pre-training weight. If `None`, the pre-training weight is not used.|`'CITYSCAPES'`|
|
|
|
+|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is evaluated and saved during training.|`1`|
|
|
|
+|`log_interval_steps`|`int`|Interval, in steps (i.e., iterations), at which training logs are printed.|`2`|
|
|
|
+|`save_dir`|`str`|Path to save checkpoints.|`'output'`|
|
|
|
+|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weight is used.|`None`|
|
|
|
|`learning_rate`|`float`|Learning rate used during training, for default optimizer.|`0.01`|
|
|
|
|`lr_decay_power`|`float`|Learning rate attenuation coefficient, for default optimizer.|`0.9`|
|
|
|
-|`early_stop`|`bool`|Whether the early stop policy is enabled during training.|`False`|
|
|
|
-|`early_stop_patience`|`int`|`patience` parameters when the early stop policy is enabled (refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|
|
|
-|`use_vdl`|`bool`|Whether to enable VisualDL log.|`True`|
|
|
|
-|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports continuing training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
+|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|
|
|
+|`early_stop_patience`|`int`|`patience` parameter when the early stopping policy is enabled. Please refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py) for more details.|`5`|
|
|
|
+|`use_vdl`|`bool`|Whether to enable VisualDL.|`True`|
|
|
|
+|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports resuming training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
|
|
|
### `BaseSegmenter.train()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def train(self,
|
|
@@ -249,7 +249,7 @@ def train(self,
|
|
|
resume_checkpoint=None):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
@@ -258,36 +258,36 @@ The meanings of each parameter are as follows:
|
|
|
|`train_batch_size`|`int`|Batch size used during training.|`2`|
|
|
|
|`eval_dataset`|`paddlers.datasets.SegDataset` \| `None`|Validation dataset.|`None`|
|
|
|
|`optimizer`|`paddle.optimizer.Optimizer` \| `None`|Optimizer used during training. If `None`, the optimizer defined by default is used.|`None`|
|
|
|
-|`save_interval_epochs`|`int`|Number of intervals epochs of the model stored during training.|`1`|
|
|
|
-|`log_interval_steps`|`int`|Number of steps (i.e., the number of iterations) between printing logs during training.|`2`|
|
|
|
-|`save_dir`|`str`|Path to save the model.|`'output'`|
|
|
|
-|`pretrain_weights`|`str` \| `None`|Name/path of the pre-training weight. If `None`, the pre-training weight is not used.|`'CITYSCAPES'`|
|
|
|
+|`save_interval_epochs`|`int`|Interval, in epochs, at which the model is evaluated and saved during training.|`1`|
|
|
|
+|`log_interval_steps`|`int`|Interval, in steps (i.e., iterations), at which training logs are printed.|`2`|
|
|
|
+|`save_dir`|`str`|Path to save checkpoints.|`'output'`|
|
|
|
+|`pretrain_weights`|`str` \| `None`|Name/path of the pretrained weights. If `None`, no pretrained weight is used.|`'CITYSCAPES'`|
|
|
|
|`learning_rate`|`float`|Learning rate used during training, for default optimizer.|`0.01`|
|
|
|
|`lr_decay_power`|`float`|Learning rate attenuation coefficient, for default optimizer.|`0.9`|
|
|
|
-|`early_stop`|`bool`|Whether the early stop policy is enabled during training.|`False`|
|
|
|
-|`early_stop_patience`|`int`|`patience` parameters when the early stop policy is enabled (refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py)).|`5`|
|
|
|
-|`use_vdl`|`bool`|Whether to enable VisualDL log.|`True`|
|
|
|
-|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports continuing training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
+|`early_stop`|`bool`|Whether to enable the early stopping policy during training.|`False`|
|
|
|
+|`early_stop_patience`|`int`|`patience` parameter when the early stopping policy is enabled. Please refer to [`EarlyStop`](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/utils/utils.py) for more details.|`5`|
|
|
|
+|`use_vdl`|`bool`|Whether to enable VisualDL.|`True`|
|
|
|
+|`resume_checkpoint`|`str` \| `None`|Checkpoint path. PaddleRS supports resuming training from checkpoints (including model weights and optimizer weights stored during previous training), but note that `resume_checkpoint` and `pretrain_weights` must not be set to values other than `None` at the same time.|`None`|
|
|
|
|
|
|
## `evaluate()`
|
|
|
|
|
|
### `BaseChangeDetector.evaluate()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def evaluate(self, eval_dataset, batch_size=1, return_details=False):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`eval_dataset`|`paddlers.datasets.CDDataset`|Validation dataset.||
|
|
|
-|`batch_size`|`int`|Batch size used in the evaluation (for multi-card training, the batch size is totaled for all devices).|`1`|
|
|
|
+|`batch_size`|`int`|Batch size used in evaluation (for multi-card evaluation, this is the total batch size for all devices).|`1`|
|
|
|
|`return_details`|`bool`|Whether to return detailed information.|`False`|
|
|
|
|
|
|
-If `return_details` is `False`(default), output a `collections.OrderedDict` object. For the 2-category change detection task, the output contains the following key-value pairs:
|
|
|
+If `return_details` is `False` (default), outputs a `collections.OrderedDict` object. For the binary change detection task, the output contains the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"iou": the IoU metric of the change class,
|
|
@@ -296,7 +296,7 @@ If `return_details` is `False`(default), output a `collections.OrderedDict` obje
|
|
|
"kappa": kappa coefficient}
|
|
|
```
|
|
|
|
|
|
-For the multi-category change detection task, the output contains the following key-value pairs:
|
|
|
+For the multi-class change detection task, the output contains the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"miou": mIoU metric,
|
|
@@ -307,27 +307,26 @@ For the multi-category change detection task, the output contains the following
|
|
|
"category_F1score": F1 score of each category}
|
|
|
```
|
|
|
|
|
|
-If `return_details` is `True`, return a binary set of two dictionaries in which the first element is the metric mentioned above and the second element is a dictionary containing only one key, and the value of the `'confusion_matrix'` key is the confusion matrix stored in the python build-in list.
|
|
|
-
|
|
|
+If `return_details` is `True`, returns two dictionaries: the first contains the metrics mentioned above, and the second has a single key, `'confusion_matrix'`, whose value is the confusion matrix.
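
A short usage sketch follows. It assumes that `model` is a trained `BaseChangeDetector` subclass object and that `eval_dataset` is an existing `paddlers.datasets.CDDataset`; only keys documented above are accessed.

```python
# Sketch: evaluating a trained change detection model.
# `model` and `eval_dataset` are assumed to exist already.
metrics = model.evaluate(eval_dataset, batch_size=1)
print(metrics['iou'], metrics['kappa'])

# With return_details=True, the confusion matrix is returned as well.
metrics, details = model.evaluate(eval_dataset, batch_size=1, return_details=True)
confusion_matrix = details['confusion_matrix']
```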
|
|
|
|
|
|
|
|
|
### `BaseClassifier.evaluate()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def evaluate(self, eval_dataset, batch_size=1, return_details=False):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`eval_dataset`|`paddlers.datasets.ClasDataset`|Validation dataset.||
|
|
|
-|`batch_size`|`int`|Batch size used in the evaluation (for multi-card training, the batch size is totaled for all devices).|`1`|
|
|
|
-|`return_details`|`bool`|*Do not manually set this parameter in the current version.*|`False`|
|
|
|
+|`batch_size`|`int`|Batch size used in evaluation (for multi-card evaluation, this is the total batch size for all devices).|`1`|
|
|
|
+|`return_details`|`bool`|*Do not manually set this parameter in the current version.*|`False`|
|
|
|
|
|
|
-output a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
+Outputs a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"top1": top1 accuracy,
|
|
@@ -336,7 +335,7 @@ output a `collections.OrderedDict` object, including the following key-value pai
|
|
|
|
|
|
### `BaseDetector.evaluate()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def evaluate(self,
|
|
@@ -346,22 +345,22 @@ def evaluate(self,
|
|
|
return_details=False):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`eval_dataset`|`paddlers.datasets.COCODetDataset` \| `paddlers.datasets.VOCDetDataset`|Validation dataset.||
|
|
|
-|`batch_size`|`int`|Batch size used in the evaluation (for multi-card training, the batch size is totaled for all devices).|`1`|
|
|
|
-|`metric`|`str` \| `None`|Evaluation metrics, can be `'VOC'`、`COCO` or `None`. If `None`, the evaluation index to be used is automatically determined according to the format of the dataset.|`None`|
|
|
|
+|`batch_size`|`int`|Batch size used in evaluation (for multi-card evaluation, this is the total batch size for all devices).|`1`|
|
|
|
+|`metric`|`str` \| `None`|Evaluation metrics, which can be `'VOC'`, `'COCO'`, or `None`. If `None`, the evaluation metrics will be automatically determined according to the format of the dataset.|`None`|
|
|
|
|`return_details`|`bool`|Whether to return detailed information.|`False`|
|
|
|
|
|
|
-If `return_details` is `False`(default), return a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
+If `return_details` is `False` (default), returns a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"bbox_mmap": mAP of predicted result}
|
|
|
```
|
|
|
|
|
|
-If `return_details` is `True`, return a binary set of two dictionaries, where the first dictionary is the above evaluation index and the second dictionary contains the following three key-value pairs:
|
|
|
+If `return_details` is `True`, returns two dictionaries: the first contains the evaluation metrics above, and the second contains the following three key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"gt": dataset annotation information,
|
|
@@ -371,21 +370,21 @@ If `return_details` is `True`, return a binary set of two dictionaries, where th
|
|
|
|
|
|
### `BaseRestorer.evaluate()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def evaluate(self, eval_dataset, batch_size=1, return_details=False):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`eval_dataset`|`paddlers.datasets.ResDataset`|Validation dataset.||
|
|
|
-|`batch_size`|`int`|Batch size used in the evaluation (for multi-card training, the batch size is totaled for all devices).|`1`|
|
|
|
-|`return_details`|`bool`|*Do not manually set this parameter in the current version.*|`False`|
|
|
|
+|`batch_size`|`int`|Batch size used in evaluation (for multi-card evaluation, this is the total batch size for all devices).|`1`|
|
|
|
+|`return_details`|`bool`|*Do not manually set this parameter in the current version.*|`False`|
|
|
|
|
|
|
-Output a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
+Outputs a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"psnr": PSNR metric,
|
|
@@ -394,21 +393,21 @@ Output a `collections.OrderedDict` object, including the following key-value pai
|
|
|
|
|
|
### `BaseSegmenter.evaluate()`
|
|
|
|
|
|
-Interface format:
|
|
|
+Interface:
|
|
|
|
|
|
```python
|
|
|
def evaluate(self, eval_dataset, batch_size=1, return_details=False):
|
|
|
```
|
|
|
|
|
|
-The meanings of each parameter are as follows:
|
|
|
+The meaning of each parameter is as follows:
|
|
|
|
|
|
|Parameter Name|Type|Parameter Description|Default Value|
|
|
|
|-------|----|--------|-----|
|
|
|
|`eval_dataset`|`paddlers.datasets.SegDataset`|Validation dataset.||
|
|
|
-|`batch_size`|`int`|Batch size used in the evaluation (for multi-card training, the batch size is totaled for all devices).|`1`|
|
|
|
+|`batch_size`|`int`|Batch size used in evaluation (for multi-card evaluation, this is the total batch size for all devices).|`1`|
|
|
|
|`return_details`|`bool`|Whether to return detailed information.|`False`|
|
|
|
|
|
|
-If `return_details` is `False`(default), return a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
+If `return_details` is `False` (default), returns a `collections.OrderedDict` object, including the following key-value pairs:
|
|
|
|
|
|
```
|
|
|
{"miou": mIoU metric,
|
|
@@ -419,4 +418,4 @@ If `return_details` is `False`(default), return a `collections.OrderedDict` obje
|
|
|
"category_F1score": F1 score of each category}
|
|
|
```
|
|
|
|
|
|
-If `return_details` is `True`, return a binary set of two dictionaries in which the first element is the metric mentioned above and the second element is a dictionary containing only one key, and the value of the `'confusion_matrix'` key is the confusion matrix stored in the python build-in list.
|
|
|
+If `return_details` is `True`, returns two dictionaries: the first contains the metrics mentioned above, and the second has a single key, `'confusion_matrix'`, whose value is the confusion matrix.
|