SizeModel#

class cellpose_omni.models.SizeModel(cp_model, device=None, pretrained_size=None, **kwargs)[source]#

Bases: object

Linear regression model for determining the size of objects in an image, used to rescale the image before input to cp_model. Uses the styles computed by cp_model.

Parameters
  • cp_model (UnetModel or CellposeModel) -- model from which to get styles

  • device (mxnet device (optional, default mx.cpu())) -- where cellpose model is saved (mx.gpu() or mx.cpu())

  • pretrained_size (str) -- path to pretrained size model

  • omni (bool) -- whether or not to use distance-based size metrics corresponding to 'omni' model
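
A minimal construction sketch: the model_type, gpu flag, and pretrained_size path below are illustrative assumptions following the upstream Cellpose constructor conventions, not values prescribed by this class.

>>> from cellpose_omni import models
>>> # Cellpose model whose style vectors feed the size regression
>>> cp = models.CellposeModel(gpu=False, model_type='cyto')
>>> # hypothetical path to a saved linear size model on disk
>>> sz = models.SizeModel(cp, pretrained_size='size_cyto.npy')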

Methods Summary

eval(x[, channels, channel_axis, normalize, ...])

Evaluation for SizeModel.

train(train_data, train_labels[, test_data, ...])

Train the size model with images train_data to estimate a linear model from styles to diameters.

Methods Documentation

eval(x, channels=None, channel_axis=None, normalize=True, invert=False, augment=False, tile=True, batch_size=8, progress=None, interp=True, omni=False)[source]#

Evaluation for SizeModel. Uses images x to produce styles, or a provided style input, to predict the size of objects in each image.

Object size estimation is done in two steps:

  1. Use a linear regression model to predict size from the style vector of the image.

  2. Resize the image to the predicted size and run CellposeModel to get output masks.

Take the median object size of the predicted masks as the final predicted size.

Parameters
  • x (list or array of images) -- can be list of 2D/3D images, or array of 2D/3D images

  • channels (list (optional, default None)) -- list of channels, either of length 2 or of length number of images by 2. First element of list is the channel to segment (0=grayscale, 1=red, 2=green, 3=blue). Second element of list is the optional nuclear channel (0=none, 1=red, 2=green, 3=blue). For instance, to segment grayscale images, input [0,0]. To segment images with cells in green and nuclei in blue, input [2,3]. To segment one grayscale image and one image with cells in green and nuclei in blue, input [[0,0], [2,3]].

  • channel_axis (int (optional, default None)) -- if None, the channel dimension is determined automatically where possible

  • normalize (bool (default, True)) -- normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel

  • invert (bool (optional, default False)) -- invert image pixel intensity before running network

  • augment (bool (optional, default False)) -- tiles image with overlapping tiles and flips overlapped regions to augment

  • tile (bool (optional, default True)) -- tiles the image to keep GPU/CPU memory usage limited (recommended)

  • progress (pyqt progress bar (optional, default None)) -- to return progress bar status to GUI

Returns

  • diam (array, float) -- final estimated diameters from images x (or from the style input) after running both steps

  • diam_style (array, float) -- estimated diameters from style alone
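
A usage sketch for eval, assuming the sz model constructed above; the synthetic grayscale images and variable names are illustrative.

>>> import numpy as np
>>> # hypothetical grayscale images; channels=[0, 0] segments grayscale with no nuclear channel
>>> imgs = [np.random.rand(256, 256).astype(np.float32) for _ in range(2)]
>>> # diam: final diameters after rescaling and segmenting; diam_style: from styles alone
>>> diam, diam_style = sz.eval(imgs, channels=[0, 0], normalize=True, tile=True)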

train(train_data, train_labels, test_data=None, test_labels=None, channels=None, normalize=True, learning_rate=0.2, n_epochs=10, l2_regularization=1.0, batch_size=8)[source]#

Train the size model with images train_data to estimate a linear model from styles to diameters.

Parameters
  • train_data (list of arrays (2D or 3D)) -- images for training

  • train_labels (list of arrays (2D or 3D)) -- labels for train_data, where 0=no masks and 1,2,...=mask labels; can include flows as additional images

  • channels (list of ints (default, None)) -- channels to use for training

  • normalize (bool (default, True)) -- normalize data so 0.0=1st percentile and 1.0=99th percentile of image intensities in each channel

  • n_epochs (int (default, 10)) -- how many times to go through whole training set (taking random patches) for styles for diameter estimation

  • l2_regularization (float (default, 1.0)) -- regularize linear model from styles to diameters

  • batch_size (int (optional, default 8)) -- number of 224x224 patches to run simultaneously on the GPU (can make smaller or bigger depending on GPU memory usage)
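
A training sketch under the same assumptions, using the sz model constructed above; the synthetic images and random integer masks are placeholders, and real use requires annotated label masks.

>>> import numpy as np
>>> train_data = [np.random.rand(256, 256).astype(np.float32) for _ in range(4)]
>>> # integer masks where 0 = background and 1, 2, ... label individual objects
>>> train_labels = [np.random.randint(0, 5, (256, 256)) for _ in range(4)]
>>> sz.train(train_data, train_labels, channels=[0, 0], normalize=True,
...          n_epochs=10, l2_regularization=1.0, batch_size=8)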