Easy-Augment-Batch-DL with Vessel Data#

This notebook demonstrates how to use the napari-easy-augment-batch-dl plugin to train deep learning models on vessel segmentation data.

Background#

The easy-augment-batch-dl plugin provides a streamlined workflow for training deep learning models with minimal data. It has panels that allow you to:

  1. Draw labels

  2. Augment labels

  3. Train/predict with different architectures

Semantic vs Instance Segmentation#

Semantic Segmentation: Every pixel is classified into a category (background, vessel, cell, etc.).

Instance Segmentation: Individual objects are separated (vessel 1, vessel 2, cell 1, cell 2, etc.).

For this vessel example, we’ll use semantic segmentation with multiple classes.
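To make the distinction concrete, here is a minimal numpy sketch (illustrative only, not part of the plugin) contrasting a semantic label image, where the pixel value encodes the class, with an instance label image, where the pixel value encodes the object id.

import numpy as np

# semantic labels: the value is the class id
# (0 = unlabeled, 1 = background, 2 = vessel)
semantic_labels = np.array([[1, 1, 2, 2],
                            [1, 1, 2, 2],
                            [1, 2, 2, 1]], dtype=np.uint8)

# instance labels: the value is the object id, so two separate
# vessels get ids 1 and 2 even though they are the same class
instance_labels = np.array([[0, 0, 1, 1],
                            [0, 0, 1, 1],
                            [0, 2, 2, 0]], dtype=np.uint8)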

Labeling Strategy#

Sparse Labeling: Not every pixel has to be labeled. However, some background must be labeled so that background pixels can be distinguished from unlabeled pixels. For example, if there were 2 foreground classes, use label 1 for background, 2 for “class 1”, and 3 for “class 2”. Unlabeled pixels (0) will be ignored.

Dense Labeling: Every foreground pixel needs to be labeled; however, the background does not. If there were 2 foreground classes, use 1 for “class 1” and 2 for “class 2”, and the remaining pixels (0) will be treated as background.
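As a minimal illustration (hypothetical arrays, not plugin code), the two conventions for an image with two foreground classes would look like this:

import numpy as np

# sparse labeling: 0 = unlabeled (ignored during training),
# 1 = background, 2 = "class 1", 3 = "class 2"
sparse_labels = np.array([[1, 0, 2, 2],
                          [0, 0, 2, 0],
                          [1, 3, 3, 0]], dtype=np.uint8)

# dense labeling: 0 = background, 1 = "class 1", 2 = "class 2";
# every foreground pixel is labeled, background can stay 0
dense_labels = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 0],
                         [0, 2, 2, 0]], dtype=np.uint8)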

Augmentation#

The plugin includes a panel with various augmentation options. Simply check the desired augmentation types (Horizontal Flip, Random Resize, Random Adjust Color, Elastic Deformation, etc.) and click “Augment All Images” or “Augment Single” to automatically generate additional training data from your labeled images.
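The plugin applies these transforms through the GUI, but if you want to see what the individual augmentations do to an image/label pair, a library such as albumentations can reproduce similar operations. The snippet below is an illustrative sketch using albumentations, not the plugin’s internal augmentation code.

import numpy as np
import albumentations as A

# a dummy image/label pair (replace with your own data)
image = np.random.randint(0, 255, (256, 256), dtype=np.uint8)
labels = np.zeros((256, 256), dtype=np.uint8)
labels[100:150, 100:150] = 1

# transforms roughly corresponding to the checkboxes in the panel
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomScale(scale_limit=0.2, p=0.5),
    A.ElasticTransform(p=0.5),
])

# the same spatial transform is applied to the image and the labels
augmented = transform(image=image, mask=labels)
aug_image, aug_labels = augmented["image"], augmented["mask"]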

Train and Predict#

After generating augmented patches, you can train a model. Use the dropdown menu to select your desired architecture (in the screenshot, Monai UNet is selected), then click “Train Network”. Once training is complete, you can use your trained model to predict on new images. You also have the option to load a previously trained model if available.
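For reference, the Monai UNet architecture selected in the dropdown can also be instantiated directly from MONAI. The configuration below is a minimal sketch with assumed channel sizes and an assumed number of output classes; the plugin’s actual training settings may differ.

from monai.networks.nets import UNet

# 2D semantic segmentation UNet with 4 output channels, one per class
# (background, barrier, vessel, cell); unlabeled pixels (0) would be
# ignored in the loss -- these choices are assumptions for illustration
model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=4,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)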

The screenshot below shows Napari-Easy-Augment-Batch-DL open with semantic labels (1 = background, 2 = barrier, 3 = vessel, 4 = cell) drawn.

Easy Augment Label Semantic

Import and check versions#

import napari
import numpy as np
from napari_easy_augment_batch_dl import easy_augment_batch_dl

# for troubleshooting, print the napari and numpy versions. This can give us clues if there are dependency issues
print('napari version', napari.__version__)
print('numpy version', np.__version__)
raster_geometry not imported.  This is only needed for the ellipsoid rendering in apply_stardist
c:\Users\bnort\work\ImageJ2022\tnia\notebooks-and-napari-widgets-for-dl\pixi\microsam_cellposesam\.pixi\envs\default\Lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm
No module named 'segment_everything'
napari version 0.6.6
numpy version 2.3.4

Start Napari and Easy Augment Batch DL#

Start Napari, add the Easy-Augment-Batch-DL widget to the viewer, and load the images from the parent directory.

viewer = napari.Viewer()

batch_dl = easy_augment_batch_dl.NapariEasyAugmentBatchDL(viewer)

viewer.window.add_dock_widget(
    batch_dl
)

parent_path = r'..\..\data\vessel_3D_lightsheet'

batch_dl.load_image_directory(parent_path)
C:\Users\bnort\AppData\Local\Temp\ipykernel_2184\1010478278.py:3: DeprecationWarning: The 'label_only' parameter is deprecated. Please use the 'mode' parameter instead.
  batch_dl = easy_augment_batch_dl.NapariEasyAugmentBatchDL(viewer)
No module named 'segment_everything'
Found framework CellPoseInstanceFramework
creating new log file
2025-11-18 19:25:56,443 [INFO] WRITING LOG OUTPUT TO C:\Users\bnort\.cellpose\run.log
2025-11-18 19:25:56,444 [INFO] 
cellpose version: 	4.0.7 
platform:       	win32 
python version: 	3.11.14 
torch version:  	2.6.0
2025-11-18 19:25:56,446 [WARNING] model_type argument is not used in v4.0.1+. Ignoring this argument...
2025-11-18 19:25:56,665 [INFO] ** TORCH CUDA version installed and working. **
2025-11-18 19:25:56,666 [INFO] >>>> using GPU (CUDA)
2025-11-18 19:25:58,533 [INFO] >>>> loading model C:\Users\bnort\.cellpose\models\cpsam
Found framework RandomForestFramework
Found framework VesselsSemanticFramework
Found framework MicroSamInstanceFramework
Exception occurred when creating filenames dataset  AsyncGroup.create_array() got an unexpected keyword argument 'object_codec'
Error creating ml labels and features: 'filenames'
Error creating ml_labels: 'DeepLearningProject' object has no attribute 'ml_labels'
Random Forest ML may not work properly
Adding object boxes layer
Adding predicted object boxes layer
Adding label boxes
Data changed
Data changed
Adding object boxes
Adding predicted object boxes
Setting object box classes
Setting predicted object box classes