When To Say ‘Good Enough’

One of the most common questions I’m asked when helping a collaborator with an image analysis project is:

“How do I know when my analysis workflow is doing well enough at finding the objects or measuring the things I care about?”

Unfortunately, it’s also one of the hardest questions to answer!  In an ideal world, we’d be able to achieve perfect recognition and/or segmentation of our biological objects every time, and get out perfect data! Alas, biology is almost never so accommodating, even ignoring the effects of technical artifacts.

Ultimately, “what is the universal truth” and “how close to universal truth must we be for something to still be called true” are philosophical questions.  In analyzing finite samples of data, we are attempting to create a model of what we think is going on in the real world — however, as the statistician George Box said, “all models are wrong”. 

Good enough to be useful?

But Box also said later on: “all models are wrong, but some models are useful”. How right do our image analysis workflows have to be in order to be useful? As much as I wish this question had a simple answer, it’s pretty much always case-dependent. As scientists, we always want to report answers as accurately as we can; this drive, though, can sometimes make it hard to sense when we’re approaching a point of diminishing returns (there are more than a few hours of my life that might have been better spent watching a movie or having coffee with friends than improving a pipeline’s accuracy from 93% to 96%), or when the images we’re trying to analyze are ultimately so unsuitable that we’d spend less time just retaking them.

If you, like me, find it hard to know when to set the keyboard down and walk away, here’s a rule of thumb: for every change you consider making to your analysis workflow (which I’ll refer to here as a “pipeline”, though it can be any way you process images), you should consciously weigh the following factors:

  1. How wrong is my current pipeline output?  
  2. How close to 100% accurate is it possible to get?
  3. How close to accurate do I need to be to assess the % or fold change I expect to see in this experiment?
  4. How important is the accuracy of this segmentation to my overall hypothesis?
  5. What else could I do in the time it will take me to make my pipeline maximally accurate?

Finding “good enough”

Here I’ll talk about two major aspects of image analysis (thresholding and object segmentation), discuss common pitfalls and how we try to get the “least wrong” answer, and discuss how we typically weigh the above factors in our lab. While I’ll largely discuss these things in the context of CellProfiler, it’s worth noting that these principles apply to all (classical) image analysis, in any software! While my examples below are of nuclei, these principles are generally applicable — from an organelle to an organism.

There are two major steps to segment objects. First, you determine the threshold of “signal” that distinguishes foreground from background — “signal” often refers to the amount of a fluorescent dye present, but a probability map from ilastik or FIJI’s WEKA plugin can also be a great input! 

In the example below, DAPI intensity has been thresholded at an algorithmically determined value (center), 0.5X that value (left), or 2X that value (right) — too low a value includes too much background, too high a value excludes parts of nuclei, so ideally we want to hit a “Goldilocks” value somewhere in the center.

fruit fly nuclei thresholded at intensity values of 0.03, 0.06, or 0.12.
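To make that “Goldilocks” comparison concrete, here is a minimal sketch using scikit-image; the file name and the 0.5x/2x scaling factors are illustrative assumptions, not values from a real pipeline.

```python
# Compare an algorithmically determined (Otsu) threshold to scaled variants.
from skimage import io, filters

dapi = io.imread("dapi.tif").astype(float)  # "dapi.tif" is a hypothetical file
dapi /= dapi.max()                          # rescale intensities to [0, 1]

t = filters.threshold_otsu(dapi)            # algorithmically determined value
masks = {
    "0.5x (too low)": dapi > 0.5 * t,       # admits background speckle
    "1.0x": dapi > t,
    "2.0x (too high)": dapi > 2.0 * t,      # erodes dim nuclear edges
}
for name, mask in masks.items():
    print(name, "foreground fraction:", round(mask.mean(), 4))
```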

Second, you need to determine how you will break the areas that have been called “background” and “foreground” into discrete objects — in our terminology, we often refer to this as “declumping”. When what should have been called one object is broken into two or more objects, this is often called “oversegmentation” or “splitting”; when what should have been two or more objects is called only one object, this is often referred to as “undersegmentation” or “merging”.
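One common classical declumping recipe, sketched below with scikit-image, is a seeded watershed on the distance transform; the min_distance parameter is an illustrative assumption that would need tuning per experiment.

```python
# Split touching foreground blobs into individual objects via watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def declump(binary_mask, min_distance=10):
    """Break a thresholded foreground mask into discrete objects."""
    distance = ndi.distance_transform_edt(binary_mask)
    # One seed per local maximum of the distance map (one per presumed nucleus)
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=binary_mask.astype(int))
    seeds = np.zeros(binary_mask.shape, dtype=int)
    seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, seeds, mask=binary_mask)
```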

Recent work from our group suggests that neural networks may be less prone to these sorts of errors than classical methods, but neural networks still make both kinds of segmentation errors.

adapted from “Evaluation of Deep Learning Strategies for Nucleus Segmentation in Fluorescence Images”, Caicedo et al., Cytometry A 2019. https://doi.org/10.1002/cyto.a.23863

Thresholding and declumping parameters are easy to determine for any given object, but can be hard to set globally for a whole image and especially hard for a whole experiment.  Let’s consider our 5 factors, in the context of segmentation:

1. How wrong is my current pipeline output?

If you have manually annotated ground truth, you can answer this quantitatively (in CellProfiler, this is the MeasureObjectOverlap module). While it’s usually not that hard to make manually annotated ground truth, it can take a VERY long time, so most people don’t bother for most experiments.  You can hand label a small test set, as an intermediate measure, but in most cases when we’re prototyping we assess this qualitatively.
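Before turning to the qualitative checks, note that if you do make even a small ground-truth set, the comparison itself is cheap. Below is a minimal sketch of an image-level Jaccard index (intersection-over-union) on binary masks; CellProfiler’s MeasureObjectOverlap computes related per-object measures.

```python
# Image-level Jaccard index between a hand-drawn mask and a pipeline's output.
import numpy as np

def jaccard(ground_truth, prediction):
    """Intersection-over-union of two binary masks (1.0 = perfect agreement)."""
    gt, pred = ground_truth.astype(bool), prediction.astype(bool)
    union = np.logical_or(gt, pred).sum()
    return np.logical_and(gt, pred).sum() / union if union else 1.0
```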

In order to do this qualitative assessment, in our lab we typically try to look at the following:

  • Do I generally agree with most of the object segmentations from my analysis workflow? If not, the rest of the questions below likely don’t matter too much.
  • Overall across my experiment, do I have an approximately equal number of regions/images where the threshold chosen by the algorithm is a bit too low vs. where it is a bit too high?
  • Overall across my experiment, do I have an approximately equal number of oversegmentations/splits and undersegmentations/merges?
  • Very important: Do both the second and third bullet points hold true for both my negative control images and my positive control (or most extreme expected phenotype(s) sample) images?

2. How close to 100% accurate is it possible to get?

To some degree, this depends on knowing your objects and/or your field a bit: if this kind of object has been studied by microscopy a lot, there are hopefully pretty good pipelines already out there; if not, there may not be. If your objects are pretty “standardized” in their appearance, you’re more likely to achieve high accuracy than if they’re really variable. Good images are also critical here — garbage in, garbage out.

3. How close to accurate do I need to be to assess the % or fold change I expect to see in this experiment?

Do you expect the phenotypes you care about to be 20% different from negative control? 2000% different? How much variability do you expect, and how many samples will you have? Ultimately, this question can be answered with a power analysis and a few reasonable guesses.
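As a back-of-the-envelope version of that power analysis, here is a sketch using statsmodels; the 20% expected effect, 30% coefficient of variation, and alpha/power values are made-up examples, not recommendations.

```python
# Hypothetical numbers: a 20% shift from control with ~30% sample-to-sample
# variability (CV), tested at alpha = 0.05 with 80% power.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.20 / 0.30  # Cohen's d: mean difference divided by the SD
n = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(f"~{n:.0f} samples per group to detect a 20% shift at 30% CV")
```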

4. How important is the accuracy of this segmentation to my overall hypothesis?

This is hard to put a number on, but qualitatively:

  • If you’re trying to test whether overexpressing GeneA causes cells to stop dividing, cell size (and therefore accurate cell borders) is probably really important!
  • If you’re trying to tell if overexpressing GeneA causes GFP-GeneB to be overexpressed (and GFP-GeneB is diffuse in the cytoplasm), a rough cell outline is probably sufficient since you really care more about the mean intensity of GFP-GeneB.  
  • If you’re trying to test if Drug123 causes mCherry-GeneC to translocate into the nucleus, the exact outlines of the nucleus are very important!
  • If you’re trying to test if Drug123 causes mCherry-GeneC to translocate into the mitochondria, the exact outlines of your nucleus are probably not that critical (but in that case, mitochondrial segmentation will be pretty important!). 

5. What else could I do in the time it will take me to make my pipeline maximally accurate?

You may want to spend a lot of time optimizing your pipeline if any or all of the following conditions are met:

  • You have a small number of samples
  • You have a pipeline that’s currently really wrong 
  • Your pipeline is really wrong in a way that might obscure the features most important to testing your overall hypothesis (e.g. because it treats your negative and positive controls quite differently, or because the most important structures aren’t identified accurately)

In the case that all three are true, it might even be worth annotating some data by hand so that you can quantitatively track the ability of your pipeline to measure your most important object segmentations. 

Try to set yourself benchmarks ahead of time though — it might be worth spending 6 more hours or even 6 more days working on this, but will it really be worth 6 more weeks?

If your ultimate goal isn’t super dependent on precise segmentation, and your pipeline works pretty well on most cases (and equally well across your most phenotypically different cases), stop working on this, and go do other cool science (or non-science things)! 

A CellProfiler Approach to Analyzing Tissue Data

Imaging tissue slices provides a wealth of data about the spatial composition and number of the various cell types that make up a tissue. Interactions among cells within a tissue are crucial to understanding the role of the inflammation that is triggered by the invasion of cancerous cells. The strength of the inflammatory response has been linked to the prognosis of certain cancers such as lymphoma.

Quantifying the spatial relationship among cells in the crowded environment of a tissue requires reliable segmentation of several cell types. In lymph node sections, the cells include representatives of the immune system, epithelial tissue, connective tissue, and the cancer itself. Quantifying the cell locations makes it possible to gauge the degree to which the cancer has invaded a tissue and how the immune system is interacting with the leading edge of a tumor.

The ability to precisely measure this relationship will give a deeper understanding of the progression of cancer and might yield new insight into when and how the immune system is involved. Ultimately the aim is to define various configurations of this interaction that are predictive of patient outcome or the likelihood of success for a given treatment, such as immunotherapy.

CellProfiler and Tissue Data

In collaboration with the Margaret Shipp and Scott Rodig labs, we developed a pipeline in CellProfiler that addresses unique challenges presented by imaging tissue slices. Consider the image of a representative tissue slice (below), which reveals a field of view with a high cell density. The nuclei of all cells are stained (blue). Two cell types of particular interest in Hodgkin’s lymphoma have also been stained: Hodgkin and Reed-Sternberg cells (HRS, green) and tumor-associated macrophages (TAM, red).

CellProfiler tissue analysis: Tissue slice with nuclei stained blue, Hodgkin and Reed-Sternberg cells (HRS, green), tumor-associated macrophages (TAM, red)

The greatest challenge in quantifying the spatial relationships among these cells, and the others surrounding them, is the identification of individual cell boundaries, a process known as segmentation. The density of the cells complicates segmentation because there is extensive overlap between cell types, much greater than that seen even in dense monolayers of cultured cells. This overlap stems from the fact that a tissue slice reveals a single plane through a 3D volume of cells from an excised portion of tissue.

Many cells are not centered in this plane, i.e. their nuclei are not entirely captured within the slice. This increases the variety of nucleus size and intensity, as some nuclei are only partially captured. Cytoplasmic regions of cells whose nuclei were not captured in the slice can also reside above or below fully captured nuclei, which increases the chance of misclassifying cells. These positioning artifacts complicate the analysis of a tissue image because most image analysis pipelines rely upon a clear nucleus signal to seed the segmentation of cytoplasmic regions.

In addition to the variety created by the mechanics of acquiring a tissue slice, tissue also contains more natural variety than cell lines. For example, HRS cells are physically much larger than any of the neighboring cells; this size difference is a defining characteristic of the cell type. Furthermore, other cells within a tissue have their own unique characteristics that add to the heterogeneity of size and shape. Both sources of variety complicate the segmentation and quantification of the cells in a tissue slice.

We’ve developed a pipeline that addresses the challenges outlined above that are specific to tissue slices. The key innovation, as compared to pipelines that work well for monolayer cells, is prioritizing cell types based upon the quality of their marker and their size, and identifying them sequentially. Below is an overview of the method and pipeline:

1. Identify the nuclei of all cells:

The nuclei can be identified from a nucleus stain such as DAPI. The segmentation of the nuclei will create a pool of “seeds”, or starting points, for the segmentation and classification of the various cell types within a tissue. Many of the nuclei will overlap, because the sectioning of a tissue captures cells through a volume. When a volume is projected into a 2D image, cells that are separated in Z will overlap. This can challenge segmentation of the nuclei. To improve results, the DAPI image is enhanced with a filter that strengthens the signal of round objects of a typical diameter using the EnhanceOrSuppressFeatures module.

CellProfiler tissue analysis: DAPI image enhanced with a filter EnhanceOrSuppressFeatures module.
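The enhancement step can be approximated outside CellProfiler as well. Below is a sketch using scikit-image’s white top-hat, which keeps bright, roughly round spots up to a chosen size and suppresses larger background structure; the file name and the 15-pixel radius are illustrative assumptions to be tuned to your typical nucleus size.

```python
# A rough stand-in for enhancing round, nucleus-sized objects in a DAPI image.
# "dapi.tif" and the disk radius are hypothetical values.
from skimage import io, morphology

dapi = io.imread("dapi.tif")
enhanced = morphology.white_tophat(dapi, footprint=morphology.disk(15))
io.imsave("dapi_enhanced.tif", enhanced)
```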

2. Classify cells in the order from most certain to least certain:

Prior knowledge of the tissue and the cells within it is imparted to the pipeline through the ordering of the segmentation steps. Stains with good signal, and markers that strongly highlight a particular cell type, can be segmented with higher certainty than stains that are noisy or less specific. Beyond staining quality, features unique to a cell type also increase the certainty of segmentation, and segmented objects of low certainty are removed using the FilterObjects module. By ordering the modules accordingly, the cells with the highest confidence are segmented before cells with lower confidence, and the higher-confidence segmentations help guide the segmentation of the lower-confidence cells. In this example, we identify the HRS cells first because they are the largest and the stain for HRS cells also gives the strongest signal.

CellProfiler tissue analysis: HRS cells identified
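To illustrate the confidence-based filtering, here is a sketch of the FilterObjects idea in Python: keep only labeled objects that pass a confidence proxy, in this case a minimum area and a minimum mean marker intensity. Both thresholds are illustrative assumptions.

```python
# Keep labeled objects that are large enough and bright enough in the marker.
import numpy as np
from skimage import measure

def filter_objects(labels, intensity, min_area=500, min_mean=0.2):
    """Drop low-confidence objects from a label image."""
    kept = np.zeros_like(labels)
    for region in measure.regionprops(labels, intensity_image=intensity):
        if region.area >= min_area and region.mean_intensity >= min_mean:
            kept[labels == region.label] = region.label
    return kept
```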

HRS cells’ nuclei are often fragmented, so using the nuclei as seeds leads to a fragmented segmentation of any given HRS cell. This is expected, and these fragmented regions are then “glued” together: any two fragments that touch are assumed to come from the same cell. This strategy works well for this cell type because the spacing between HRS cells is generally large.

CellProfiler tissue analysis: HRS fragmented regions "glued" together
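The “gluing” step can be as simple as re-running connected-components labeling on the union of the fragment masks, so that touching fragments collapse into one object. A sketch, assuming the sparse HRS spacing described above:

```python
# Touching fragments merge into a single label when we re-label the
# binary union of all fragments.
from skimage import measure

def glue_fragments(fragment_labels):
    return measure.label(fragment_labels > 0)
```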

The same process is then used for TAM cells, which are also larger than average and have a strong staining signal.

CellProfiler tissue analysis: TAM cells "glued" together

3. Additive Masking:

The remaining cells, not already accounted for by the regions covered by the larger HRS and TAM cell types, are then classified based upon the strength of their staining. To prevent double-counting the same cell as two different cell types (where that isn’t appropriate), a mask is built up step by step that prevents the next cell type on the list from being identified in space already occupied by previously identified, more confident cell types. First, candidate cells for a particular cell type are found by expanding the region defined as the nuclei to capture regions that include staining for the respective cell type.

CellProfiler tissue analysis: candidate cells found by expanding the region defined as the nuclei to capture regions that include staining for the respective cell type.

Then a mask, consisting of the sum of the areas occupied by upstream cell types, is applied to the candidate cells to remove those that have already been classified. What is left are the cells that have gone unclaimed.

CellProfiling tissue analysis: mask applied to the candidate cells

The mask grows after each round of segmentation, incorporating the cell types found before. After the final cell type has been segmented and classified, the remaining cells are segmented and classified as “unknown”.
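Putting the additive-masking loop together, here is a sketch in Python: cell types are processed from most to least confident, and each round’s candidates are vetoed wherever the cumulative mask is already claimed. The 50% overlap cutoff is an illustrative assumption.

```python
# Sequential assignment of candidate label images, most confident type first.
import numpy as np
from skimage import measure

def assign_sequentially(candidate_labels_per_type):
    claimed = None
    assigned = []
    for labels in candidate_labels_per_type:  # ordered: most confident first
        if claimed is None:
            claimed = np.zeros(labels.shape, dtype=bool)
        kept = np.zeros_like(labels)
        for region in measure.regionprops(labels):
            rows, cols = region.coords[:, 0], region.coords[:, 1]
            # Drop candidates mostly covered by earlier, more confident cells
            if claimed[rows, cols].mean() < 0.5:
                kept[rows, cols] = region.label
        claimed |= kept > 0  # grow the mask for the next round
        assigned.append(kept)
    return assigned
```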

Finally, the (x,y) location of each cell is exported to a spreadsheet. This table of locations can be analyzed to describe the spatial relationship among cells using downstream software applications such as R or Matlab.
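As a small example of that downstream analysis (sketched here in Python rather than R or Matlab), nearest-neighbor distances between exported cell centers take only a few lines. The column names follow CellProfiler’s usual export convention, but the file name is made up.

```python
# Nearest-neighbor distance between exported cell centroids.
import pandas as pd
from scipy.spatial import cKDTree

cells = pd.read_csv("cell_locations.csv")  # hypothetical exported spreadsheet
xy = cells[["Location_Center_X", "Location_Center_Y"]].to_numpy()
dist, _ = cKDTree(xy).query(xy, k=2)   # k=2: the first hit is the point itself
cells["nn_distance"] = dist[:, 1]      # distance to the nearest neighboring cell
print(cells["nn_distance"].describe())
```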

Announcing CellProfiler 3.1.9

Hello all! It’s been a crazy last few months for the CellProfiler team, as we’ve been hiring some new team members and working hard on the transition to Python 3, which will bump us into CellProfiler version 4. Keep your eyes peeled for exciting content in the future!

We did want to bring you one last release in the 3.X series, though, especially for OSX 10.14 users who have been left without a usable build. This release also includes some minor bugfixes in RelateObjects and IdentifySecondaryObjects.

As usual, you will find this new release (and links to all our old releases) on our releases page.

Thanks very much to Allen Goodman, Matthew Bowden, Anne Carpenter, Jan Eglinger, and GitHub user “cloudsforest” for their contributions on this release.

Browser-based Apps for Data Visualization

Have you ever stumbled across some amazing data visualization tools that run entirely in a web browser (such as this and many others), and wished you could plug in your own data and visualize it? Or, as a biologist, you may know of a good analytic tool, but it either costs too much, requires programming expertise, or requires bundled installations of many other dependencies that might not be compatible with your system… and then you spend more time fixing the tool than using it. In this blog post, we share how to use browser-based applications to perform tasks in multivariate data analysis and image processing and visualize data (like the one below!), with much less hassle.


Announcing CellProfiler 3.1.8

Happy holidays to everyone! We here at the CellProfiler team got you a little end-of-year treat in the form of CellProfiler 3.1.8. This is primarily a bugfix release, getting rid of some bugs in MeasureObjectIntensity, MeasureColocalization, ExportToSpreadsheet, CorrectIlluminationCalculate, and Smooth. We’ve also updated how we package for Windows, so those of you who had JAVA_HOME issues with 3.1.5 (feel free to unset that environment variable if it’s set to your 3.1.5 install!) should now experience much smoother sailing. As usual, you will find this new release (and links to all our old releases) on our releases page.

Thanks very much to Allen Goodman, Matthew Bowden, Vito Zanotelli, and Christian Clauss for their contributions on this release.

On behalf of the whole CellProfiler team, may the season treat you well, and we wish you a happy end of 2018 and beginning of 2019!

ScienceSnippets: Building communication skills and sharing what you love

Clearly communicating the impact of your research is one of the most important skills you need to develop as a scientist, and yet typically it is only taught by doing (and if you are lucky, feedback – especially critical feedback). Clear communication is important to get funding and resources for your work, to publish it, to entice collaborators, to impress colleagues and supervisors, … and to not be boring at parties when asked “So what do you do?”


Tricks for maintaining your CV/resume with Google Docs: easy to edit, immediately published

You’ve earned degrees, authored papers, mentored supervisees, and traveled far and wide to speak about your work… And ideally it’s nicely showcased in your resume or curriculum vitae (CV), all updated and ready to go. But, if you’re like most academics, your CV is a sorely outdated PDF and upon its request, you always find yourself scrambling to dig up recent accomplishments to prove you’ve not just been lounging around for the last 6 months (or years). And updating it requires locating an elusive latest version of a Word doc, editing HTML on your lab website, or compiling and PDF-ifying your LaTeX file.

Announcing CellProfiler 3.1

I’m excited to announce the release of CellProfiler 3.1.

Our focus for CellProfiler 3.1 was polishing features and squashing bugs introduced in CellProfiler 3.0. We also started laying the foundation for our next release, CellProfiler 4.0, which will transition CellProfiler from Python 2 to Python 3, improve multiprocessing, and overhaul the interface.

There are a few noteworthy changes that some users might enjoy, like UTF-8 pipeline encoding, a simpler application bundle (that won’t require installing Java), and a variety of documentation improvements.

You can download CellProfiler 3.1 from the cellprofiler.org website. If you have feedback or questions, please let us know on the CellProfiler Forum or message one of us on Twitter.

Of course this would not have been possible without the hard work of our software engineers and all our contributors: Allen Goodman, Claire McQuin, Matthew Bowden, Vasiliy Chernyshev, Kyle Karhohs, Jane Hung, Chris Allan, Vito Zanotelli, Carla Iriberri, and Christoph Moehl, take a bow!

Annotating Images with CellProfiler and GIMP

Annotated image data is valuable for assessing the performance of an image processing pipeline and as training data for machine learning methods such as deep learning. When assessing the performance of a CellProfiler pipeline, for example a pipeline that segments nuclei, the annotated image data are used as the ground truth. The performance of the pipeline can be quantified by comparing the segmentation output to the ground truth and calculating a comparison metric, such as the Jaccard Index or F1 Score. Annotated images are also essential for deep learning applications as training data, for example see the 2018 Data Science Bowl; an in-depth discussion on how the Data Science Bowl images were annotated can be found on the Kaggle forum. Continue reading