2025 Updates and Advances#
This workshop is an update to the 2024 workshop.
Question: Does the 2024 workshop still work?#
Answer: The 2024 instructions worked perfectly in 2024, but may not work reliably in 2025 due to dependency drift: when dependencies are unpinned or given only vague version ranges, their transitive dependencies change over time. Rather than chasing these moving targets, this update first introduces new technologies that lock dependencies precisely, making workshops reproducible years later (we’ll eventually update the other sections to use technologies such as Pixi and Appose).
Question: Does the 2024 workshop still produce the same results?#
Answer: I don’t know; it may not. Over the last year I’ve run into situations where the same code, run in different environments created from the same installation instructions, produced different results.
For example, Cellpose: with the move to Cellpose-SAM, newer versions can produce different results than older ones.
A word of warning about generalist models: even after controlling the environment perfectly, you could get different results if the model itself was updated.
New technologies we will learn today#
These tools address the core challenges that make computational tutorials fragile over time:
Pixi#
Pixi solves the “it worked last year but breaks today” problem. Unlike conda environments, which can drift over time or break when loosely specified dependencies change, Pixi creates locked, reproducible environments from a single pixi.toml file.
The key is the pixi.lock file. Even if someone writes a sloppy pixi.toml with loose version constraints, the lock file captures the exact versions that actually worked. This lock file is what makes true reproduction possible.
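To make this concrete, here is a minimal pixi.toml sketch for a workshop-style environment. The project name and package choices are illustrative assumptions, not the actual workshop manifest:

```toml
[project]
name = "dl-workshop-2025"  # hypothetical name, not the real workshop manifest
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64", "win-64"]

[dependencies]
# Loose constraints like these are exactly what pixi.lock protects against:
python = "3.10.*"
numpy = "*"

[pypi-dependencies]
cellpose = "*"
```

Running pixi install resolves these constraints and records the exact solved versions in pixi.lock; committing that lock file alongside pixi.toml is what lets someone rebuild the identical environment years later.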
This means:
Examples from 2024 will still run in 2030
No more “conda environment not found” errors
Faster setup - one pixi install command gets everything
Cross-platform consistency
Exact reproducibility thanks to locked dependency versions
Pixi makes our deep learning workflows future-proof by eliminating the dependency hell that often breaks older tutorials.
Appose#
Appose lets us run conflicting deep learning models in the same notebook without environment clashes. Instead of choosing between Cellpose v3 vs v4, or SAM vs MicroSAM, we can use them all.
This means:
Compare multiple deep learning frameworks side by side easily
No more “this breaks that” dependency conflicts
Future frameworks can be added to comparisons without breaking existing ones
Notebooks become model/framework comparison laboratories
Appose makes our examples resilient to the fast-changing deep learning landscape by isolating each model in its own environment while keeping them accessible from one place.
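As a rough illustration, the following sketch follows the pattern shown in the Appose README: a script is handed to a worker running in a separate process and environment, and results come back through the task’s outputs. The exact function names (appose.system, env.python, task.wait_for) are assumptions based on that README and may differ between Appose versions:

```python
# Minimal Appose sketch, modeled on the example in the Appose README.
# API names here are assumptions and may vary across Appose versions.
import appose

# Handle to an environment. Here we use the system Python for simplicity;
# in practice each model would get its own isolated environment
# (e.g. one built with Pixi per framework).
env = appose.system()

with env.python() as python:
    # The script runs inside the worker's own process and environment,
    # so its dependencies never clash with the calling notebook's.
    task = python.task("5 + 6")
    task.wait_for()
    # Per the README convention, the script's final expression
    # is exposed under the "result" key.
    print(task.outputs["result"])  # 11
```

In the workshop setting, the same pattern would give each segmentation framework (for example, Cellpose v3 and Cellpose v4) its own worker, so their incompatible dependency sets never have to coexist in a single environment.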
Code#
Code for the entire workshop can be found here. Please clone this repository before the course. Today we will focus on the notebooks in this section.
Jupyter book view#
The Jupyter book view of the workshop can be found here.
Data#
Dropbox Link: Download all the data and place it at the top level of the repository in the data directory.
The 2025 update to the workshop uses Jupyter Notebooks, Napari, Cellpose, StarDist, and MicroSAM, among other tools. In addition, the following projects are used; we will use Pixi to install them.
Projects we use#
tnia-python - This is a general utility project for image processing including deep learning utilities.
easy-augment-batch-dl - A plugin to perform deep learning on small to medium-sized image sets with U-Nets, Cellpose, StarDist, SAM, and friends. This plugin is particularly useful for performing deep learning with a small number of labels and augmentation, and for experimenting with different deep learning frameworks.
napari-ai-lab - A collection of plugins and utilities for ND segmentation with Cellpose, StarDist, SAM and more. The motivation for this newer project is to construct a better foundation for handling ND datasets and complicated dependency management (via Appose and Pixi).
Some examples may be ‘under construction’#
