io.github.betaseg / csbdeep_unet_train / 0.1.0

CSBDeep Unet Train

CSBDeep Unet Train Cover Image
An album solution to train a Unet with CSBDeep.
Tags
bdv, cellsketch, segmentation, annotation
Citation
Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature Methods 15.12 (2018): 1090-1097.
Solution written by
Martin Weigert
Jan Philipp Albrecht
License of solution
MIT

Arguments

--root
Root folder of your data. The data structure must match the layout specified in the documentation! (default value: PARAMETER_VALUE)
--use_augmentation
Whether to use augmentation. If enabled, the data is probabilistically flipped, rotated, elastically deformed, intensity-scaled, and noised. (default value: 1)
--limit_gpu_memory
The absolute amount of GPU memory to allocate, in megabytes. (default value: 12000)
--patch_size
Patch size of each training instance. Must be given as a comma-separated string. (default value: 48,128,128)
--unet_n_depth
The depth of the network. (default value: 4)
--batch_size
Batch size to use during training. (default value: 2)
--normalize_mi_ma
Min and max values to use for normalization; if not given, the 1st and 99.8th percentiles of each image are used. (default value: )
--unet_pool_size
The pool size of the network. Must be given as a comma-separated string, with as many values as the U-Net is deep. (default value: 2,2,2)
--train_class_weight
The class weights for the binary cross-entropy loss; the first weight applies to the negative class. Must be given as a comma-separated string. (default value: 1,1)
--epochs
Number of epochs to train for. (default value: 300)
--steps_per_epoch
Number of steps per epoch. (default value: 512)
--train_reduce_lr_factor
Factor by which to reduce the learning rate over time. (default value: 0.5)
--train_reduce_lr_patience
Patience after which to start learning rate reduction. (default value: 50)
--use_gpu_for_aug
Whether to use the GPU for augmentations. Only enable this if you have a GPU available that is compatible with TensorFlow 2.0. (default value: 1)
--dry
Dry run (don't create any output files/folders). (default value: 0)
--num_workers
Number of threads to use for training. On Windows, multiprocessing will be deactivated. (default value: 4)
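
A hypothetical invocation combining several of these arguments; the root path is a placeholder, and all other values shown are simply the documented defaults:

# example training run with several arguments set explicitly
album run csbdeep_unet_train \
    --root /path/to/your/data \
    --patch_size 48,128,128 \
    --unet_n_depth 4 \
    --unet_pool_size 2,2,2 \
    --batch_size 2 \
    --epochs 300 \
    --steps_per_epoch 512 \
    --use_augmentation 1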

Usage instructions

Please follow this link for details on how to install and run this solution.

CSBDeep Segmentation Solution for Album

Introduction

This album solution uses the CSBDeep toolbox to train a U-Net model to perform segmentation from the command line.

The extensive documentation of CSBDeep can be found at http://csbdeep.bioimagecomputing.com/doc/.

This solution consists of two parts:

  1. CSBDeep-train: This solution is used to train the model
  2. CSBDeep-predict: This solution is used to perform inference/prediction.

Example: 3D segmentation of the Golgi apparatus with a 3D U-Net


This example demonstrates how to use the solution to train a 3D U-Net model to perform semantic segmentation of the Golgi apparatus from 3D FIB-SEM data. The procedure is described in the paper:

Müller, Andreas, et al. "3D FIB-SEM reconstruction of microtubule–organelle interaction in whole primary mouse β cells." Journal of Cell Biology 220.2 (2021).

Download the example data (or adapt your own data into the same format)

wget https://syncandshare.desy.de/index.php/s/FikPy4k2FHS5L4F/download/data_golgi.zip
unzip data_golgi.zip

which should result in the following folder structure:

data_golgi
├── train
│   ├── images
│   └── masks
└── val
    ├── images
    └── masks
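
If you adapt your own data, one way to create the same layout is sketched below; the source paths and the .tif file type are placeholders, not requirements from this solution:

# create the expected directory layout
mkdir -p data_golgi/train/images data_golgi/train/masks
mkdir -p data_golgi/val/images data_golgi/val/masks
# copy your raw volumes and the corresponding label masks (hypothetical source paths)
cp /path/to/raw_train/*.tif data_golgi/train/images/
cp /path/to/masks_train/*.tif data_golgi/train/masks/
cp /path/to/raw_val/*.tif data_golgi/val/images/
cp /path/to/masks_val/*.tif data_golgi/val/masks/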

Installation

Make sure album is already installed. If not, download and install it as described here. Also, don't forget to add the catalog to your album installation, so you can install the solutions from the catalog.

Install the CSBDeep-train solution by using the graphical user interface (GUI) of album or by running the following command in the terminal:

album install io.github.betaseg:csbdeep_unet_train:0.1.0

For prediction, install the CSBDeep-predict solution:

album install io.github.betaseg:csbdeep_unet_predict:0.1.0

How to use

CSBDeep-train

To run the training, set the parameters in the GUI or adapt this example for command line usage:

conda activate album
album run csbdeep_unet_train --root /data/csbdeep_unet_train/data_golgi --epochs 3 --steps_per_epoch 5
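
To check your parameters and data layout without writing any output files or folders, you can first do a dry run using the --dry flag (same placeholder path as above):

# dry run: validate arguments and data layout, create no outputs
album run csbdeep_unet_train --root /data/csbdeep_unet_train/data_golgi --dry 1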

During training, a browser tab opens and shows the training progress. The program terminates once training has finished and the tab has been closed.

CSBDeep-predict

To perform inference/prediction, set the parameters in the GUI or adapt this example for command line usage:

conda activate album
album run csbdeep_unet_predict --input /data/csbdeep_unet_train/data_golgi/val/images --outdir /data/segmentations --model /models/2023_07_04-15_06_33_unet

For further options and default values, please refer to the corresponding info page of the solution:

album info csbdeep_unet_predict

Further documentation

For further options, parameters, and default values of the training solution, please refer to its info page:

album info csbdeep_unet_train

Hardware requirements

As a minimum requirement, we recommend a GPU with at least 8 GB of memory to run this solution.
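
On Linux with NVIDIA drivers installed, one quick way to check the name and total memory of your GPU before training (this assumes the nvidia-smi tool is on your PATH) is shown below; the --limit_gpu_memory argument can then be adjusted accordingly:

# print GPU name and total memory
nvidia-smi --query-gpu=name,memory.total --format=csv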

Citation & License

This solution is licensed under the BSD 3-Clause License.

If you use this solution, please cite the following paper:

Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature Methods 15.12 (2018): 1090-1097. doi: 10.1038/s41592-018-0216-7