Training DeepLabv3+ on Your Own Dataset


Posted by AspenStars on March 6, 2019

Shared feeling will become the core of a process of affirmation.

— Gustave Le Bon, The Crowd

From aquariusjay:

We will add Cityscapes experiments from the updated DeepLabv3+ paper soon.

Some important hyper-parameters we use when training on the train_fine set: learning rate = 1e-2, training crop_size = 769x769, and training iterations = 90K. Most importantly, you need to fine-tune the batch norm parameters during training, which, however, requires a large batch size.
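For concreteness, here is roughly what such a run looks like with the deeplab code in tensorflow/models/research. Treat it as a sketch: the flag names follow train.py around the time of this post (newer versions take the crop size as a single `--train_crop_size="769,769"` flag), and every `${PATH_TO_*}` value is a placeholder to fill in for your own setup.

```bash
# Rough sketch of a full Cityscapes train_fine run with the hyper-parameters above.
# fine_tune_batch_norm=true only makes sense with a large batch, which in practice
# means spreading it over several GPUs (see --num_clones in train.py).
python deeplab/train.py \
  --logtostderr \
  --dataset="cityscapes" \
  --train_split="train_fine" \
  --model_variant="xception_65" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --train_crop_size=769 \
  --train_crop_size=769 \
  --base_learning_rate=0.01 \
  --training_number_of_steps=90000 \
  --fine_tune_batch_norm=true \
  --train_batch_size=16 \
  --tf_initial_checkpoint="${PATH_TO_IMAGENET_PRETRAINED_CHECKPOINT}" \
  --train_logdir="${PATH_TO_TRAIN_LOGDIR}" \
  --dataset_dir="${PATH_TO_CITYSCAPES_TFRECORD}"
```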

Given the limited resources at hand, we would suggest you simply fine-tune from our provided checkpoints, whose batch-norm parameters have already been trained (i.e., train with a smaller learning rate, set fine_tune_batch_norm = false, and use more training iterations since the learning rate is small). If you really would like to train the model yourself, we would suggest the following (a combined example is sketched after the list):

  1. Set output_stride = 16 or maybe even 32 (remember to change the flag atrous_rates accordingly, e.g., atrous_rates = [3, 6, 9] for output_stride = 32).
  2. Use as many GPUs as possible (change the flag num_clones in train.py) and set train_batch_size as large as possible.
  3. Adjust the train_crop_size in train.py. Maybe set it to be smaller, e.g., 513x513 (or even 321x321), so that you could use a larger batch size.
  4. Use a smaller network backbone, such as MobileNet-v2, which will be supported soon.
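
Putting these suggestions together, a limited-resource run that fine-tunes from a released checkpoint might look roughly like this. The flag names again follow deeplab/train.py, while the concrete learning rate, iteration count, crop size, and GPU/batch settings are illustrative placeholders of mine, not values recommended by aquariusjay.

```bash
# Rough sketch of the limited-resource route: fine-tune from a released checkpoint
# with a small learning rate, frozen batch norm, more iterations, output_stride=16,
# a smaller crop, and the batch split across GPUs. Numeric values are placeholders.
python deeplab/train.py \
  --logtostderr \
  --dataset="cityscapes" \
  --train_split="train_fine" \
  --model_variant="xception_65" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --train_crop_size=513 \
  --train_crop_size=513 \
  --base_learning_rate=0.001 \
  --training_number_of_steps=150000 \
  --fine_tune_batch_norm=false \
  --num_clones=2 \
  --train_batch_size=8 \
  --tf_initial_checkpoint="${PATH_TO_RELEASED_CITYSCAPES_CHECKPOINT}" \
  --train_logdir="${PATH_TO_TRAIN_LOGDIR}" \
  --dataset_dir="${PATH_TO_CITYSCAPES_TFRECORD}"
```

The two settings that matter most here are starting from a checkpoint whose batch-norm statistics are already trained and keeping fine_tune_batch_norm=false, since the effective batch size in this setup is too small to re-estimate them reliably.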