Merge remote-tracking branch 'official/develop' into add_examples

Bobholamovic 2 years ago
parent
commit
ca0e54d1c7

+ 4 - 28
.github/workflows/build_and_test.yaml → .github/workflows/build.yaml

@@ -1,4 +1,4 @@
-name: build and test
+name: build
 
 on:
   push:
@@ -17,7 +17,7 @@ concurrency:
   cancel-in-progress: true
 
 jobs:
-  build_and_test_cpu:
+  build_cpu:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
@@ -53,29 +53,5 @@ jobs:
           python -m pip install -e .
       - name: Install GDAL
         run: python -m pip install ${{ matrix.gdal-whl-url }}
-      - name: Run unittests
-        run: |
-          cd tests
-          bash run_fast_tests.sh
-        shell: bash
-
-  build_and_test_cuda102:
-    runs-on: ubuntu-18.04
-    container:
-      image: registry.baidubce.com/paddlepaddle/paddle:2.3.1-gpu-cuda10.2-cudnn7
-    steps:
-      - uses: actions/checkout@v3
-      - name: Upgrade pip
-        run: python3.7 -m pip install pip --upgrade --user
-      - name: Install PaddleRS
-        run: |
-          python3.7 -m pip install -r requirements.txt
-          python3.7 -m pip install -e .
-      - name: Install GDAL
-        run: python3.7 -m pip install https://versaweb.dl.sourceforge.net/project/gdal-wheels-for-linux/GDAL-3.4.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl
-      # Do not run unittests, because there is NO GPU in the machine.
-      # - name: Run unittests
-      #   run: |
-      #     cd tests
-      #     bash run_fast_tests.sh
-      #   shell: bash
+      - name: Test installation
+        run: python -c "import paddlers; print(paddlers.__version__)"
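
Editor's note: the new step replaces the removed unit-test jobs with an import smoke test. A rough local equivalent of that check, written generically so it runs anywhere (in the workflow the target is `paddlers`; the stdlib `json` module stands in below):

```python
import importlib

def smoke_test(module_name):
    # Mirror the CI step: import the package and read its version,
    # failing loudly (ImportError) if the installation is broken.
    mod = importlib.import_module(module_name)
    return getattr(mod, "__version__", "unknown")

# CI runs the equivalent of smoke_test("paddlers"); `json` is a stand-in.
print(smoke_test("json"))
```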

+ 3 - 3
README.md

@@ -4,11 +4,11 @@
     <img src="./docs/images/logo.png" align="middle" width = "500" />
   </p>
 
-  **基于飞桨框架开发的高性能遥感影像处理开发套件,帮助您端到端完成从数据预处理模型部署的全流程遥感深度学习应用。**
+  **飞桨高性能遥感影像开发套件,端到端完成从数据到部署的全流程遥感应用。**
 
   <!-- [![version](https://img.shields.io/github/release/PaddlePaddle/PaddleRS.svg)](https://github.com/PaddlePaddle/PaddleRS/releases) -->
   [![license](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
-  [![build status](https://github.com/PaddlePaddle/PaddleRS/actions/workflows/build_and_test.yaml/badge.svg?branch=develop)](https://github.com/PaddlePaddle/PaddleRS/actions)
+  [![build status](https://github.com/PaddlePaddle/PaddleRS/actions/workflows/build.yaml/badge.svg?branch=develop)](https://github.com/PaddlePaddle/PaddleRS/actions)
   ![python version](https://img.shields.io/badge/python-3.7+-orange.svg)
   ![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
 </div>
@@ -179,7 +179,7 @@ PaddleRS目录树中关键部分如下:
 * 如果您发现任何PaddleRS存在的问题或是对PaddleRS有建议, 欢迎通过[GitHub Issues](https://github.com/PaddlePaddle/PaddleRS/issues)向我们提出。
 * 欢迎加入PaddleRS微信群
 <div align="center">
-<img src="./docs/images/wechat.png"  width = "150" />  
+<img src="https://user-images.githubusercontent.com/21275753/186310647-603f4b1c-5bbe-4b0d-a645-328d85789a5d.png"  width = "150" />  
 </div>
 
 ## 使用教程 <img src="./docs/images/teach.png" width="30"/>

+ 1 - 1
docs/apis/train.md

@@ -25,7 +25,7 @@
 
 ### 初始化`BaseSegmenter`子类对象
 
-- 一般支持设置`input_channel`、`num_classes`以及`use_mixed_loss`参数,分别表示输入通道数、输出类别数以及是否使用预置的混合损失。部分模型如`FarSeg`暂不支持对`input_channel`参数的设置。
+- 一般支持设置`in_channels`、`num_classes`以及`use_mixed_loss`参数,分别表示输入通道数、输出类别数以及是否使用预置的混合损失。部分模型如`FarSeg`暂不支持对`in_channels`参数的设置。
 - `use_mixed_loss`参将在未来被弃用,因此不建议使用。
 - 不同的子类支持与模型相关的输入参数,详情请参考[模型定义](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/rs_models/seg)和[训练器定义](https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmentor.py)。
 

BIN
docs/images/whole_picture.png


+ 6 - 6
paddlers/deploy/predictor.py

@@ -146,7 +146,7 @@ class Predictor(object):
         return predictor
 
     def preprocess(self, images, transforms):
-        preprocessed_samples = self._model._preprocess(
+        preprocessed_samples = self._model.preprocess(
             images, transforms, to_tensor=False)
         if self._model.model_type == 'classifier':
             preprocessed_samples = {'image': preprocessed_samples[0]}
@@ -172,12 +172,12 @@ class Predictor(object):
     def postprocess(self, net_outputs, topk=1, ori_shape=None, transforms=None):
         if self._model.model_type == 'classifier':
             true_topk = min(self._model.num_classes, topk)
-            if self._model._postprocess is None:
+            if self._model.postprocess is None:
                 self._model.build_postprocess_from_labels(topk)
-            # XXX: Convert ndarray to tensor as self._model._postprocess requires
+            # XXX: Convert ndarray to tensor as self._model.postprocess requires
             assert len(net_outputs) == 1
             net_outputs = paddle.to_tensor(net_outputs[0])
-            outputs = self._model._postprocess(net_outputs)
+            outputs = self._model.postprocess(net_outputs)
             class_ids = map(itemgetter('class_ids'), outputs)
             scores = map(itemgetter('scores'), outputs)
             label_names = map(itemgetter('label_names'), outputs)
@@ -187,7 +187,7 @@ class Predictor(object):
                 'label_names_map': n,
             } for l, s, n in zip(class_ids, scores, label_names)]
         elif self._model.model_type in ('segmenter', 'change_detector'):
-            label_map, score_map = self._model._postprocess(
+            label_map, score_map = self._model.postprocess(
                 net_outputs,
                 batch_origin_shape=ori_shape,
                 transforms=transforms.transforms)
@@ -200,7 +200,7 @@ class Predictor(object):
                 k: v
                 for k, v in zip(['bbox', 'bbox_num', 'mask'], net_outputs)
             }
-            preds = self._model._postprocess(net_outputs)
+            preds = self._model.postprocess(net_outputs)
         else:
             logging.error(
                 "Invalid model type {}.".format(self._model.model_type),
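
Editor's note: these hunks switch `Predictor` from the underscored helpers to public `preprocess`/`postprocess`. Since `Predictor` lives in a different module than the trainers, the leading underscore had marked a cross-module contract as private. A minimal, framework-free sketch of that relationship (all names hypothetical):

```python
class FakeModel:
    """Stand-in for a trainer: its pre/postprocess hooks are public
    because another class, not just the model itself, calls them."""

    def preprocess(self, images):
        return [im.lower() for im in images]

    def postprocess(self, net_outputs):
        return {"labels": net_outputs}


class FakePredictor:
    def __init__(self, model):
        self._model = model

    def predict(self, images):
        batch = self._model.preprocess(images)
        return self._model.postprocess(batch)


print(FakePredictor(FakeModel()).predict(["A", "B"]))  # → {'labels': ['a', 'b']}
```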

+ 6 - 6
paddlers/tasks/change_detector.py

@@ -111,10 +111,10 @@ class BaseChangeDetector(BaseModel):
         if mode == 'test':
             origin_shape = inputs[2]
             if self.status == 'Infer':
-                label_map_list, score_map_list = self._postprocess(
+                label_map_list, score_map_list = self.postprocess(
                     net_out, origin_shape, transforms=inputs[3])
             else:
-                logit_list = self._postprocess(
+                logit_list = self.postprocess(
                     logit, origin_shape, transforms=inputs[3])
                 label_map_list = []
                 score_map_list = []
@@ -142,7 +142,7 @@ class BaseChangeDetector(BaseModel):
                 raise ValueError("Expected label.ndim == 4 but got {}".format(
                     label.ndim))
             origin_shape = [label.shape[-2:]]
-            pred = self._postprocess(
+            pred = self.postprocess(
                 pred, origin_shape, transforms=inputs[3])[0]  # NCHW
             intersect_area, pred_area, label_area = ppseg.utils.metrics.calculate_area(
                 pred, label, self.num_classes)
@@ -547,7 +547,7 @@ class BaseChangeDetector(BaseModel):
             images = [img_file]
         else:
             images = img_file
-        batch_im1, batch_im2, batch_origin_shape = self._preprocess(
+        batch_im1, batch_im2, batch_origin_shape = self.preprocess(
             images, transforms, self.model_type)
         self.net.eval()
         data = (batch_im1, batch_im2, batch_origin_shape, transforms.transforms)
@@ -658,7 +658,7 @@ class BaseChangeDetector(BaseModel):
         dst_data = None
         print("GeoTiff saved in {}.".format(save_file))
 
-    def _preprocess(self, images, transforms, to_tensor=True):
+    def preprocess(self, images, transforms, to_tensor=True):
         self._check_transforms(transforms, 'test')
         batch_im1, batch_im2 = list(), list()
         batch_ori_shape = list()
@@ -730,7 +730,7 @@ class BaseChangeDetector(BaseModel):
             batch_restore_list.append(restore_list)
         return batch_restore_list
 
-    def _postprocess(self, batch_pred, batch_origin_shape, transforms):
+    def postprocess(self, batch_pred, batch_origin_shape, transforms):
         batch_restore_list = BaseChangeDetector.get_transforms_shape_info(
             batch_origin_shape, transforms)
         if isinstance(batch_pred, (tuple, list)) and self.status == 'Infer':

+ 9 - 11
paddlers/tasks/classifier.py

@@ -62,7 +62,7 @@ class BaseClassifier(BaseModel):
         self.metrics = None
         self.losses = losses
         self.labels = None
-        self._postprocess = None
+        self.postprocess = None
         if params.get('with_net', True):
             params.pop('with_net', None)
             self.net = self.build_net(**params)
@@ -122,13 +122,12 @@ class BaseClassifier(BaseModel):
         net_out = net(inputs[0])
 
         if mode == 'test':
-            return self._postprocess(net_out)
+            return self.postprocess(net_out)
 
         outputs = OrderedDict()
         label = paddle.to_tensor(inputs[1], dtype="int64")
 
         if mode == 'eval':
-            # print(self._postprocess(net_out)[0])  # for test
             label = paddle.unsqueeze(label, axis=-1)
             metric_dict = self.metrics(net_out, label)
             outputs['top1'] = metric_dict["top1"]
@@ -177,13 +176,13 @@ class BaseClassifier(BaseModel):
         label_dict = dict()
         for i, label in enumerate(self.labels):
             label_dict[i] = label
-        self._postprocess = build_postprocess({
+        self.postprocess = build_postprocess({
             "name": "Topk",
             "topk": topk,
             "class_id_map_file": None
         })
         # Add class_id_map from model.yml
-        self._postprocess.class_id_map = label_dict
+        self.postprocess.class_id_map = label_dict
 
     def train(self,
               num_epochs,
@@ -248,8 +247,7 @@ class BaseClassifier(BaseModel):
         if self.losses is None:
             self.losses = self.default_loss()
         self.metrics = self.default_metric()
-        self._postprocess = self.default_postprocess(train_dataset.label_list)
-        # print(self._postprocess.class_id_map)
+        self.postprocess = self.default_postprocess(train_dataset.label_list)
 
         if optimizer is None:
             num_steps_each_epoch = train_dataset.num_samples // train_batch_size
@@ -457,12 +455,12 @@ class BaseClassifier(BaseModel):
             images = [img_file]
         else:
             images = img_file
-        batch_im, batch_origin_shape = self._preprocess(images, transforms,
-                                                        self.model_type)
+        batch_im, batch_origin_shape = self.preprocess(images, transforms,
+                                                       self.model_type)
         self.net.eval()
         data = (batch_im, batch_origin_shape, transforms.transforms)
 
-        if self._postprocess is None:
+        if self.postprocess is None:
             self.build_postprocess_from_labels()
 
         outputs = self.run(self.net, data, 'test')
@@ -483,7 +481,7 @@ class BaseClassifier(BaseModel):
             }
         return prediction
 
-    def _preprocess(self, images, transforms, to_tensor=True):
+    def preprocess(self, images, transforms, to_tensor=True):
         self._check_transforms(transforms, 'test')
         batch_im = list()
         batch_ori_shape = list()

+ 53 - 249
paddlers/tasks/object_detector.py

@@ -249,6 +249,34 @@ class BaseDetector(BaseModel):
                 Defaults to None.
         """
 
+        args = self._pre_train(locals())
+        return self._real_train(**args)
+
+    def _pre_train(self, in_args):
+        return in_args
+
+    def _real_train(self,
+                    num_epochs,
+                    train_dataset,
+                    train_batch_size=64,
+                    eval_dataset=None,
+                    optimizer=None,
+                    save_interval_epochs=1,
+                    log_interval_steps=10,
+                    save_dir='output',
+                    pretrain_weights='IMAGENET',
+                    learning_rate=.001,
+                    warmup_steps=0,
+                    warmup_start_lr=0.0,
+                    lr_decay_epochs=(216, 243),
+                    lr_decay_gamma=0.1,
+                    metric=None,
+                    use_ema=False,
+                    early_stop=False,
+                    early_stop_patience=5,
+                    use_vdl=True,
+                    resume_checkpoint=None):
+
         if self.status == 'Infer':
             logging.error(
                 "Exported inference model does not support training.",
@@ -582,16 +610,16 @@ class BaseDetector(BaseModel):
         else:
             images = img_file
 
-        batch_samples = self._preprocess(images, transforms)
+        batch_samples = self.preprocess(images, transforms)
         self.net.eval()
         outputs = self.run(self.net, batch_samples, 'test')
-        prediction = self._postprocess(outputs)
+        prediction = self.postprocess(outputs)
 
         if isinstance(img_file, (str, np.ndarray)):
             prediction = prediction[0]
         return prediction
 
-    def _preprocess(self, images, transforms, to_tensor=True):
+    def preprocess(self, images, transforms, to_tensor=True):
         self._check_transforms(transforms, 'test')
         batch_samples = list()
         for im in images:
@@ -608,7 +636,7 @@ class BaseDetector(BaseModel):
 
         return batch_samples
 
-    def _postprocess(self, batch_pred):
+    def postprocess(self, batch_pred):
         infer_result = {}
         if 'bbox' in batch_pred:
             bboxes = batch_pred['bbox']
@@ -879,108 +907,24 @@ class PicoDet(BaseDetector):
         self.fixed_input_shape = image_shape
         return self._define_input_spec(image_shape)
 
-    def train(self,
-              num_epochs,
-              train_dataset,
-              train_batch_size=64,
-              eval_dataset=None,
-              optimizer=None,
-              save_interval_epochs=1,
-              log_interval_steps=10,
-              save_dir='output',
-              pretrain_weights='IMAGENET',
-              learning_rate=.001,
-              warmup_steps=0,
-              warmup_start_lr=0.0,
-              lr_decay_epochs=(216, 243),
-              lr_decay_gamma=0.1,
-              metric=None,
-              use_ema=False,
-              early_stop=False,
-              early_stop_patience=5,
-              use_vdl=True,
-              resume_checkpoint=None):
-        """
-        Train the model.
-
-        Args:
-            num_epochs (int): Number of epochs.
-            train_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset): 
-                Training dataset.
-            train_batch_size (int, optional): Total batch size among all cards used in 
-                training. Defaults to 64.
-            eval_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset|None, optional): 
-                Evaluation dataset. If None, the model will not be evaluated during training 
-                process. Defaults to None.
-            optimizer (paddle.optimizer.Optimizer|None, optional): Optimizer used for 
-                training. If None, a default optimizer will be used. Defaults to None.
-            save_interval_epochs (int, optional): Epoch interval for saving the model. 
-                Defaults to 1.
-            log_interval_steps (int, optional): Step interval for printing training 
-                information. Defaults to 10.
-            save_dir (str, optional): Directory to save the model. Defaults to 'output'.
-            pretrain_weights (str|None, optional): None or name/path of pretrained 
-                weights. If None, no pretrained weights will be loaded. 
-                Defaults to 'IMAGENET'.
-            learning_rate (float, optional): Learning rate for training. Defaults to .001.
-            warmup_steps (int, optional): Number of steps of warm-up training. 
-                Defaults to 0.
-            warmup_start_lr (float, optional): Start learning rate of warm-up training. 
-                Defaults to 0..
-            lr_decay_epochs (list|tuple, optional): Epoch milestones for learning 
-                rate decay. Defaults to (216, 243).
-            lr_decay_gamma (float, optional): Gamma coefficient of learning rate decay. 
-                Defaults to .1.
-            metric (str|None, optional): Evaluation metric. Choices are {'VOC', 'COCO', None}. 
-                If None, determine the metric according to the  dataset format. 
-                Defaults to None.
-            use_ema (bool, optional): Whether to use exponential moving average 
-                strategy. Defaults to False.
-            early_stop (bool, optional): Whether to adopt early stop strategy. 
-                Defaults to False.
-            early_stop_patience (int, optional): Early stop patience. Defaults to 5.
-            use_vdl(bool, optional): Whether to use VisualDL to monitor the training 
-                process. Defaults to True.
-            resume_checkpoint (str|None, optional): Path of the checkpoint to resume
-                training from. If None, no training checkpoint will be resumed. At most
-                Aone of `resume_checkpoint` and `pretrain_weights` can be set simultaneously.
-                Defaults to None.
-        """
-
+    def _pre_train(self, in_args):
+        optimizer = in_args['optimizer']
         if optimizer is None:
-            num_steps_each_epoch = len(train_dataset) // train_batch_size
+            num_steps_each_epoch = len(in_args['train_dataset']) // in_args[
+                'train_batch_size']
             optimizer = self.default_optimizer(
                 parameters=self.net.parameters(),
-                learning_rate=learning_rate,
-                warmup_steps=warmup_steps,
-                warmup_start_lr=warmup_start_lr,
-                lr_decay_epochs=lr_decay_epochs,
-                lr_decay_gamma=lr_decay_gamma,
-                num_steps_each_epoch=num_steps_each_epoch,
+                learning_rate=in_args['learning_rate'],
+                warmup_steps=in_args['warmup_steps'],
+                warmup_start_lr=in_args['warmup_start_lr'],
+                lr_decay_epochs=in_args['lr_decay_epochs'],
+                lr_decay_gamma=in_args['lr_decay_gamma'],
+                num_steps_each_epoch=num_steps_each_epoch,
                 reg_coeff=4e-05,
                 scheduler='Cosine',
-                num_epochs=num_epochs)
-        super(PicoDet, self).train(
-            num_epochs=num_epochs,
-            train_dataset=train_dataset,
-            train_batch_size=train_batch_size,
-            eval_dataset=eval_dataset,
-            optimizer=optimizer,
-            save_interval_epochs=save_interval_epochs,
-            log_interval_steps=log_interval_steps,
-            save_dir=save_dir,
-            pretrain_weights=pretrain_weights,
-            learning_rate=learning_rate,
-            warmup_steps=warmup_steps,
-            warmup_start_lr=warmup_start_lr,
-            lr_decay_epochs=lr_decay_epochs,
-            lr_decay_gamma=lr_decay_gamma,
-            metric=metric,
-            use_ema=use_ema,
-            early_stop=early_stop,
-            early_stop_patience=early_stop_patience,
-            use_vdl=use_vdl,
-            resume_checkpoint=resume_checkpoint)
+                num_epochs=in_args['num_epochs'])
+            in_args['optimizer'] = optimizer
+        return in_args
 
 
 class YOLOv3(BaseDetector):
@@ -1372,82 +1316,12 @@ class FasterRCNN(BaseDetector):
         super(FasterRCNN, self).__init__(
             model_name='FasterRCNN', num_classes=num_classes, **params)
 
-    def train(self,
-              num_epochs,
-              train_dataset,
-              train_batch_size=64,
-              eval_dataset=None,
-              optimizer=None,
-              save_interval_epochs=1,
-              log_interval_steps=10,
-              save_dir='output',
-              pretrain_weights='IMAGENET',
-              learning_rate=.001,
-              warmup_steps=0,
-              warmup_start_lr=0.0,
-              lr_decay_epochs=(216, 243),
-              lr_decay_gamma=0.1,
-              metric=None,
-              use_ema=False,
-              early_stop=False,
-              early_stop_patience=5,
-              use_vdl=True,
-              resume_checkpoint=None):
-        """
-        Train the model.
-
-        Args:
-            num_epochs (int): Number of epochs.
-            train_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset): 
-                Training dataset.
-            train_batch_size (int, optional): Total batch size among all cards used in 
-                training. Defaults to 64.
-            eval_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset|None, optional): 
-                Evaluation dataset. If None, the model will not be evaluated during training 
-                process. Defaults to None.
-            optimizer (paddle.optimizer.Optimizer|None, optional): Optimizer used for 
-                training. If None, a default optimizer will be used. Defaults to None.
-            save_interval_epochs (int, optional): Epoch interval for saving the model. 
-                Defaults to 1.
-            log_interval_steps (int, optional): Step interval for printing training 
-                information. Defaults to 10.
-            save_dir (str, optional): Directory to save the model. Defaults to 'output'.
-            pretrain_weights (str|None, optional): None or name/path of pretrained 
-                weights. If None, no pretrained weights will be loaded. 
-                Defaults to 'IMAGENET'.
-            learning_rate (float, optional): Learning rate for training. Defaults to .001.
-            warmup_steps (int, optional): Number of steps of warm-up training. 
-                Defaults to 0.
-            warmup_start_lr (float, optional): Start learning rate of warm-up training. 
-                Defaults to 0..
-            lr_decay_epochs (list|tuple, optional): Epoch milestones for learning 
-                rate decay. Defaults to (216, 243).
-            lr_decay_gamma (float, optional): Gamma coefficient of learning rate decay. 
-                Defaults to .1.
-            metric (str|None, optional): Evaluation metric. Choices are {'VOC', 'COCO', None}. 
-                If None, determine the metric according to the  dataset format. 
-                Defaults to None.
-            use_ema (bool, optional): Whether to use exponential moving average 
-                strategy. Defaults to False.
-            early_stop (bool, optional): Whether to adopt early stop strategy. 
-                Defaults to False.
-            early_stop_patience (int, optional): Early stop patience. Defaults to 5.
-            use_vdl(bool, optional): Whether to use VisualDL to monitor the training 
-                process. Defaults to True.
-            resume_checkpoint (str|None, optional): Path of the checkpoint to resume
-                training from. If None, no training checkpoint will be resumed. At most
-                Aone of `resume_checkpoint` and `pretrain_weights` can be set simultaneously.
-                Defaults to None.
-        """
-
+    def _pre_train(self, in_args):
+        train_dataset = in_args['train_dataset']
         if train_dataset.pos_num < len(train_dataset.file_list):
+            # In-place modification
             train_dataset.num_workers = 0
-        super(FasterRCNN, self).train(
-            num_epochs, train_dataset, train_batch_size, eval_dataset,
-            optimizer, save_interval_epochs, log_interval_steps, save_dir,
-            pretrain_weights, learning_rate, warmup_steps, warmup_start_lr,
-            lr_decay_epochs, lr_decay_gamma, metric, use_ema, early_stop,
-            early_stop_patience, use_vdl, resume_checkpoint)
+        return in_args
 
     def _compose_batch_transform(self, transforms, mode='train'):
         if mode == 'train':
@@ -2214,82 +2088,12 @@ class MaskRCNN(BaseDetector):
         super(MaskRCNN, self).__init__(
             model_name='MaskRCNN', num_classes=num_classes, **params)
 
-    def train(self,
-              num_epochs,
-              train_dataset,
-              train_batch_size=64,
-              eval_dataset=None,
-              optimizer=None,
-              save_interval_epochs=1,
-              log_interval_steps=10,
-              save_dir='output',
-              pretrain_weights='IMAGENET',
-              learning_rate=.001,
-              warmup_steps=0,
-              warmup_start_lr=0.0,
-              lr_decay_epochs=(216, 243),
-              lr_decay_gamma=0.1,
-              metric=None,
-              use_ema=False,
-              early_stop=False,
-              early_stop_patience=5,
-              use_vdl=True,
-              resume_checkpoint=None):
-        """
-        Train the model.
-
-        Args:
-            num_epochs (int): Number of epochs.
-            train_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset): 
-                Training dataset.
-            train_batch_size (int, optional): Total batch size among all cards used in 
-                training. Defaults to 64.
-            eval_dataset (paddlers.datasets.COCODetDataset|paddlers.datasets.VOCDetDataset|None, optional): 
-                Evaluation dataset. If None, the model will not be evaluated during training 
-                process. Defaults to None.
-            optimizer (paddle.optimizer.Optimizer|None, optional): Optimizer used for 
-                training. If None, a default optimizer will be used. Defaults to None.
-            save_interval_epochs (int, optional): Epoch interval for saving the model. 
-                Defaults to 1.
-            log_interval_steps (int, optional): Step interval for printing training 
-                information. Defaults to 10.
-            save_dir (str, optional): Directory to save the model. Defaults to 'output'.
-            pretrain_weights (str|None, optional): None or name/path of pretrained 
-                weights. If None, no pretrained weights will be loaded. 
-                Defaults to 'IMAGENET'.
-            learning_rate (float, optional): Learning rate for training. Defaults to .001.
-            warmup_steps (int, optional): Number of steps of warm-up training. 
-                Defaults to 0.
-            warmup_start_lr (float, optional): Start learning rate of warm-up training. 
-                Defaults to 0..
-            lr_decay_epochs (list|tuple, optional): Epoch milestones for learning 
-                rate decay. Defaults to (216, 243).
-            lr_decay_gamma (float, optional): Gamma coefficient of learning rate decay. 
-                Defaults to .1.
-            metric (str|None, optional): Evaluation metric. Choices are {'VOC', 'COCO', None}. 
-                If None, determine the metric according to the  dataset format. 
-                Defaults to None.
-            use_ema (bool, optional): Whether to use exponential moving average 
-                strategy. Defaults to False.
-            early_stop (bool, optional): Whether to adopt early stop strategy. 
-                Defaults to False.
-            early_stop_patience (int, optional): Early stop patience. Defaults to 5.
-            use_vdl(bool, optional): Whether to use VisualDL to monitor the training 
-                process. Defaults to True.
-            resume_checkpoint (str|None, optional): Path of the checkpoint to resume
-                training from. If None, no training checkpoint will be resumed. At most
-                Aone of `resume_checkpoint` and `pretrain_weights` can be set simultaneously.
-                Defaults to None.
-        """
-
+    def _pre_train(self, in_args):
+        train_dataset = in_args['train_dataset']
         if train_dataset.pos_num < len(train_dataset.file_list):
+            # In-place modification
             train_dataset.num_workers = 0
-        super(MaskRCNN, self).train(
-            num_epochs, train_dataset, train_batch_size, eval_dataset,
-            optimizer, save_interval_epochs, log_interval_steps, save_dir,
-            pretrain_weights, learning_rate, warmup_steps, warmup_start_lr,
-            lr_decay_epochs, lr_decay_gamma, metric, use_ema, early_stop,
-            early_stop_patience, use_vdl, resume_checkpoint)
+        return in_args
 
     def _compose_batch_transform(self, transforms, mode='train'):
         if mode == 'train':
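
Editor's note: the detector refactor above replaces each subclass's duplicated `train` override (full signature plus docstring) with a `_pre_train` hook: `train` gathers its arguments, passes them through the hook, then calls the shared `_real_train`. A paddle-free sketch of the pattern (class and argument names simplified, not the actual API):

```python
class BaseDetectorLike:
    def train(self, num_epochs, optimizer=None, learning_rate=0.001):
        # Collect this call's arguments, let the subclass hook adjust
        # them, then run the shared training body once.
        args = dict(locals())
        args.pop("self")
        args = self._pre_train(args)
        return self._real_train(**args)

    def _pre_train(self, in_args):
        return in_args  # default hook: arguments pass through unchanged

    def _real_train(self, num_epochs, optimizer, learning_rate):
        return {"epochs": num_epochs, "optimizer": optimizer,
                "lr": learning_rate}


class PicoDetLike(BaseDetectorLike):
    def _pre_train(self, in_args):
        # As in the PicoDet hunk: build a default optimizer only when
        # the caller did not supply one.
        if in_args["optimizer"] is None:
            in_args["optimizer"] = "cosine-default"
        return in_args


print(PicoDetLike().train(12)["optimizer"])  # → cosine-default
```

The subclass tweaks arguments in one small method instead of re-declaring and re-documenting the whole `train` signature, which is what removed ~250 lines from this file.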

+ 11 - 11
paddlers/tasks/segmenter.py

@@ -111,10 +111,10 @@ class BaseSegmenter(BaseModel):
         if mode == 'test':
             origin_shape = inputs[1]
             if self.status == 'Infer':
-                label_map_list, score_map_list = self._postprocess(
+                label_map_list, score_map_list = self.postprocess(
                     net_out, origin_shape, transforms=inputs[2])
             else:
-                logit_list = self._postprocess(
+                logit_list = self.postprocess(
                     logit, origin_shape, transforms=inputs[2])
                 label_map_list = []
                 score_map_list = []
@@ -142,7 +142,7 @@ class BaseSegmenter(BaseModel):
                 raise ValueError("Expected label.ndim == 4 but got {}".format(
                     label.ndim))
             origin_shape = [label.shape[-2:]]
-            pred = self._postprocess(
+            pred = self.postprocess(
                 pred, origin_shape, transforms=inputs[2])[0]  # NCHW
             intersect_area, pred_area, label_area = ppseg.utils.metrics.calculate_area(
                 pred, label, self.num_classes)
@@ -521,8 +521,8 @@ class BaseSegmenter(BaseModel):
             images = [img_file]
         else:
             images = img_file
-        batch_im, batch_origin_shape = self._preprocess(images, transforms,
-                                                        self.model_type)
+        batch_im, batch_origin_shape = self.preprocess(images, transforms,
+                                                       self.model_type)
         self.net.eval()
         data = (batch_im, batch_origin_shape, transforms.transforms)
         outputs = self.run(self.net, data, 'test')
@@ -626,7 +626,7 @@ class BaseSegmenter(BaseModel):
         dst_data = None
         print("GeoTiff saved in {}.".format(save_file))
 
-    def _preprocess(self, images, transforms, to_tensor=True):
+    def preprocess(self, images, transforms, to_tensor=True):
         self._check_transforms(transforms, 'test')
         batch_im = list()
         batch_ori_shape = list()
@@ -693,7 +693,7 @@ class BaseSegmenter(BaseModel):
             batch_restore_list.append(restore_list)
         return batch_restore_list
 
-    def _postprocess(self, batch_pred, batch_origin_shape, transforms):
+    def postprocess(self, batch_pred, batch_origin_shape, transforms):
         batch_restore_list = BaseSegmenter.get_transforms_shape_info(
             batch_origin_shape, transforms)
         if isinstance(batch_pred, (tuple, list)) and self.status == 'Infer':
@@ -781,7 +781,7 @@ class BaseSegmenter(BaseModel):
 
 class UNet(BaseSegmenter):
     def __init__(self,
-                 input_channel=3,
+                 in_channels=3,
                  num_classes=2,
                  use_mixed_loss=False,
                  losses=None,
@@ -794,7 +794,7 @@ class UNet(BaseSegmenter):
         })
         super(UNet, self).__init__(
             model_name='UNet',
-            input_channel=input_channel,
+            input_channel=in_channels,
             num_classes=num_classes,
             use_mixed_loss=use_mixed_loss,
             losses=losses,
@@ -803,7 +803,7 @@ class UNet(BaseSegmenter):
 
 class DeepLabV3P(BaseSegmenter):
     def __init__(self,
-                 input_channel=3,
+                 in_channels=3,
                  num_classes=2,
                  backbone='ResNet50_vd',
                  use_mixed_loss=False,
@@ -822,7 +822,7 @@ class DeepLabV3P(BaseSegmenter):
         if params.get('with_net', True):
             with DisablePrint():
                 backbone = getattr(ppseg.models, backbone)(
-                    input_channel=input_channel, output_stride=output_stride)
+                    input_channel=in_channels, output_stride=output_stride)
         else:
             backbone = None
         params.update({

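The constructor rename above (`input_channel` → `in_channels` in the public API, while the underlying network keeps `input_channel`) is a plain keyword-forwarding pattern. A minimal self-contained sketch of that pattern, with hypothetical class names standing in for the PaddleRS wrappers:

```python
# Hypothetical stand-ins for the core network and its task wrapper; only the
# keyword-forwarding pattern mirrors the diff above, not the real PaddleRS API.
class _CoreNet:
    def __init__(self, input_channel, num_classes):
        self.input_channel = input_channel
        self.num_classes = num_classes


class UNetWrapper:
    def __init__(self, in_channels=3, num_classes=2):
        # Public API exposes `in_channels`; the core net still expects
        # `input_channel`, so the wrapper translates between the two names.
        self.net = _CoreNet(input_channel=in_channels, num_classes=num_classes)


m = UNetWrapper(in_channels=10, num_classes=5)
print(m.net.input_channel)  # 10
```

Callers (and configs such as `unet.yaml` below) only ever see the new `in_channels` name; the old name survives solely at the wrapper/backbone boundary.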
+ 1 - 1
requirements.txt

@@ -1,4 +1,4 @@
-paddleslim >= 2.2.1
+paddleslim >= 2.2.1,<2.3.3
 visualdl >= 2.1.1
 opencv-contrib-python == 4.3.0.38
 numba == 0.53.1

+ 4 - 4
test_tipc/configs/seg/unet/unet.yaml

@@ -5,7 +5,7 @@ _base_: ../_base_/rsseg.yaml
 save_dir: ./test_tipc/output/seg/unet/
 
 model: !Node
-    type: UNet
-        args:
-            input_channel: 10
-            num_classes: 5
+       type: UNet
+       args:
+           in_channels: 10
+           num_classes: 5

+ 6 - 6
test_tipc/infer.py

@@ -143,7 +143,7 @@ class TIPCPredictor(object):
         return config
 
     def preprocess(self, images, transforms):
-        preprocessed_samples = self._model._preprocess(
+        preprocessed_samples = self._model.preprocess(
             images, transforms, to_tensor=False)
         if self._model.model_type == 'classifier':
             preprocessed_samples = {'image': preprocessed_samples[0]}
@@ -169,12 +169,12 @@ class TIPCPredictor(object):
     def postprocess(self, net_outputs, topk=1, ori_shape=None, transforms=None):
         if self._model.model_type == 'classifier':
             true_topk = min(self._model.num_classes, topk)
-            if self._model._postprocess is None:
+            if self._model.postprocess is None:
                 self._model.build_postprocess_from_labels(topk)
-            # XXX: Convert ndarray to tensor as self._model._postprocess requires
+            # XXX: Convert ndarray to tensor as self._model.postprocess requires
             assert len(net_outputs) == 1
             net_outputs = paddle.to_tensor(net_outputs[0])
-            outputs = self._model._postprocess(net_outputs)
+            outputs = self._model.postprocess(net_outputs)
             class_ids = map(itemgetter('class_ids'), outputs)
             scores = map(itemgetter('scores'), outputs)
             label_names = map(itemgetter('label_names'), outputs)
@@ -184,7 +184,7 @@ class TIPCPredictor(object):
                 'label_names_map': n,
             } for l, s, n in zip(class_ids, scores, label_names)]
         elif self._model.model_type in ('segmenter', 'change_detector'):
-            label_map, score_map = self._model._postprocess(
+            label_map, score_map = self._model.postprocess(
                 net_outputs,
                 batch_origin_shape=ori_shape,
                 transforms=transforms.transforms)
@@ -197,7 +197,7 @@ class TIPCPredictor(object):
                 k: v
                 for k, v in zip(['bbox', 'bbox_num', 'mask'], net_outputs)
             }
-            preds = self._model._postprocess(net_outputs)
+            preds = self._model.postprocess(net_outputs)
         else:
             logging.error(
                 "Invalid model type {}.".format(self._model.model_type),

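The classifier branch above reshapes a list of per-sample dicts into the predictor's flat result schema using `operator.itemgetter` and `zip`. A runnable sketch of that reshaping (the field values are made up for illustration):

```python
from operator import itemgetter

# Hypothetical postprocess outputs: one dict per sample, shaped like the
# classifier branch in the diff above (values are illustrative only).
outputs = [
    {'class_ids': [3], 'scores': [0.91], 'label_names': ['water']},
    {'class_ids': [1], 'scores': [0.77], 'label_names': ['forest']},
]

# Pull each field out of every per-sample dict...
class_ids = map(itemgetter('class_ids'), outputs)
scores = map(itemgetter('scores'), outputs)
label_names = map(itemgetter('label_names'), outputs)

# ...then zip the columns back together into the flat result schema.
preds = [{
    'class_ids_map': l,
    'scores_map': s,
    'label_names_map': n,
} for l, s, n in zip(class_ids, scores, label_names)]

print(preds[0]['label_names_map'])  # ['water']
```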
+ 3 - 8
tests/rs_models/test_cd_models.py

@@ -12,11 +12,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import platform
 from itertools import cycle
 
 import paddlers
-from rs_models.test_model import TestModel
+from rs_models.test_model import TestModel, allow_oom
 
 __all__ = [
     'TestBITModel', 'TestCDNetModel', 'TestChangeStarModel', 'TestDSAMNetModel',
@@ -202,6 +201,7 @@ class TestSNUNetModel(TestCDModel):
         ]   # yapf: disable
 
 
+@allow_oom
 class TestSTANetModel(TestCDModel):
     MODEL_CLASS = paddlers.rs_models.cd.STANet
 
@@ -216,6 +216,7 @@ class TestSTANetModel(TestCDModel):
         ]   # yapf: disable
 
 
+@allow_oom
 class TestChangeFormerModel(TestCDModel):
     MODEL_CLASS = paddlers.rs_models.cd.ChangeFormer
 
@@ -226,9 +227,3 @@ class TestChangeFormerModel(TestCDModel):
             dict(**base_spec, decoder_softmax=True),
             dict(**base_spec, embed_dim=56)
         ]   # yapf: disable
-
-
-# HACK:FIXME: We observe an OOM error when running TestSTANetModel.test_forward() on a Windows machine.
-# Currently, we do not perform this test.
-if platform.system() == 'Windows':
-    TestSTANetModel.test_forward = lambda self: None

+ 34 - 2
tests/rs_models/test_model.py

@@ -18,6 +18,8 @@ import paddle
 import numpy as np
 from paddle.static import InputSpec
 
+import inspect
+from paddlers.utils import logging
 from testing_utils import CommonTest
 
 
@@ -37,20 +38,26 @@ class _TestModelNamespace:
             for i, (
                     input, model, target
             ) in enumerate(zip(self.inputs, self.models, self.targets)):
-                with self.subTest(i=i):
+                try:
                     if isinstance(input, list):
                         output = model(*input)
                     else:
                         output = model(input)
                     self.check_output(output, target)
+                except Exception:
+                    logging.warning(f"Model built with spec {i} failed!")
+                    raise
 
         def test_to_static(self):
             for i, (
                     input, model, target
             ) in enumerate(zip(self.inputs, self.models, self.targets)):
-                with self.subTest(i=i):
+                try:
                     static_model = paddle.jit.to_static(
                         model, input_spec=self.get_input_spec(model, input))
+                except Exception:
+                    logging.warning(f"Model with spec {i} failed to convert to static graph!")
+                    raise
 
         def check_output(self, output, target):
             pass
@@ -117,4 +124,29 @@ class _TestModelNamespace:
             return input_spec
 
 
+def allow_oom(cls):
+    def _deco(func):
+        def _wrapper(self, *args, **kwargs):
+            try:
+                func(self, *args, **kwargs)
+            except (SystemError, RuntimeError, OSError, MemoryError) as e:
+                # XXX: This may not cover all OOM cases.
+                msg = str(e)
+                if "Out of memory error" in msg \
+                    or "(External) CUDNN error(4), CUDNN_STATUS_INTERNAL_ERROR." in msg \
+                    or isinstance(e, MemoryError):
+                    logging.warning("An OOM error has been ignored.")
+                else:
+                    raise
+
+        return _wrapper
+
+    for key, value in inspect.getmembers(cls):
+        if key.startswith('test'):
+            value = _deco(value)
+            setattr(cls, key, value)
+
+    return cls
+
+
 TestModel = _TestModelNamespace.TestModel

+ 41 - 0
tests/run_ci_dev.sh

@@ -0,0 +1,41 @@
+#!/bin/bash
+
+rm -rf /usr/local/python2.7.15/bin/python
+rm -rf /usr/local/python2.7.15/bin/pip
+ln -s /usr/local/bin/python3.7 /usr/local/python2.7.15/bin/python
+ln -s /usr/local/bin/pip3.7 /usr/local/python2.7.15/bin/pip
+export PYTHONPATH=`pwd`
+
+python -m pip install --upgrade pip --ignore-installed
+# python -m pip install --upgrade numpy --ignore-installed
+python -m pip uninstall paddlepaddle-gpu -y
+if [[ ${branch} == 'develop' ]]; then
+    echo "checkout develop!"
+    python -m pip install ${paddle_dev} --no-cache-dir
+else
+    echo "checkout release!"
+    python -m pip install ${paddle_release} --no-cache-dir
+fi
+
+echo -e '*****************paddle_version*****'
+python -c 'import paddle;print(paddle.version.commit)'
+echo -e '*****************paddlers_version****'
+git rev-parse HEAD
+
+pip install -r requirements.txt --ignore-installed
+pip install -e .
+pip install https://versaweb.dl.sourceforge.net/project/gdal-wheels-for-linux/GDAL-3.4.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl
+
+git clone https://github.com/LDOUBLEV/AutoLog
+cd AutoLog
+pip install -r requirements.txt
+python setup.py bdist_wheel
+pip install ./dist/auto_log*.whl
+cd ..
+
+unset http_proxy https_proxy
+
+set -e
+
+cd tests/
+bash run_fast_tests.sh

+ 13 - 0
tests/run_tipc_lite.sh

@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+
+cd ..
+
+for config in $(ls test_tipc/configs/*/*/train_infer_python.txt); do
+    bash test_tipc/prepare.sh ${config} lite_train_lite_infer
+    bash test_tipc/test_train_inference_python.sh ${config} lite_train_lite_infer
+    task="$(basename $(dirname $(dirname ${config})))"
+    model="$(basename $(dirname ${config}))"
+    if grep -q 'failed' "test_tipc/output/${task}/${model}/lite_train_lite_infer/results_python.log"; then
+        exit 1
+    fi
+done
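The loop above recovers the task and model names from each config path of the form `test_tipc/configs/<task>/<model>/train_infer_python.txt` with nested `basename`/`dirname` calls. A standalone illustration of just that extraction, using a made-up path:

```shell
#!/usr/bin/env bash
# Illustration of the path parsing used in run_tipc_lite.sh; the config
# path below is an example, not a file that necessarily exists.
config="test_tipc/configs/seg/unet/train_infer_python.txt"

# Two dirname calls climb from the file to the <task> directory;
# one dirname call stops at the <model> directory.
task="$(basename $(dirname $(dirname ${config})))"   # -> seg
model="$(basename $(dirname ${config}))"             # -> unet

echo "${task}/${model}"
```

The same `task`/`model` pair is then used to locate `results_python.log` under `test_tipc/output/`, so a `failed` marker in any model's log aborts the whole run with a nonzero exit code.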