Conversation
Are any of these tests for granularity? I'm not seeing any passing tests, so if there is a granularity test it either isn't running or it's failing.
MattsonCam
left a comment
LGTM @MikeLippincott, good job! I left a few comments.
```python
def compute() -> dict[str, list[float]]:
    """Placeholder for granularity computation implementation."""
    raise ZedProfilerError("granularity.compute is not implemented yet")
```

```python
from zedprofiler.IO.loading_classes import ObjectLoader
```
I'm not seeing a src/zedprofiler/IO/ package, so this might raise a ModuleNotFoundError
```
Returns
-------
Dict[str, list]
    Dictionary with keys 'object_id', 'feature', 'value'.
    Image-level measurements use object_id=0.
```
From what I understand, gs is the image-level granularity, but only the per-object granularities (values / gss), feature, and object ids are returned in this dictionary.
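To match the docstring, the image-level value could be emitted as an `object_id=0` row alongside the per-object rows. A minimal sketch, assuming the `gs`/`gss` names from this discussion (the `assemble_results` helper is hypothetical):

```python
# Hedged sketch: include the image-level granularity (gs) with object_id=0,
# followed by one row per object (gss), as the docstring describes.
# Function and parameter names are illustrative, not from the PR.
def assemble_results(gs: float, gss: list[float], scale: int) -> dict[str, list]:
    result = {"object_id": [], "feature": [], "value": []}
    # image-level row first, flagged with object_id 0
    result["object_id"].append(0)
    result["feature"].append(scale)
    result["value"].append(gs)
    # then one row per labelled object
    for obj_id, value in enumerate(gss, start=1):
        result["object_id"].append(obj_id)
        result["feature"].append(scale)
        result["value"].append(value)
    return result
```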
```python
if nobjects > 0:
    label_range = numpy.arange(1, nobjects + 1)

    # CellProfiler: self.labels[~im.mask] = 0
    masked_labels = original_labels.copy()
    masked_labels[~original_mask] = 0

    per_object_current_mean = _fix_scipy_ndimage_result(
        scipy.ndimage.mean(original_pixels, masked_labels, label_range)
    )
```
It looks like label_range is used to construct object ids. Will these ids always match correctly? For example, will every value in label_range always correspond to a label that is actually present in masked_labels when it's passed to this scipy function?
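The mismatch is easy to provoke: masking can erase a label entirely, leaving an id in `label_range` with no surviving pixels. A small self-contained sketch (the array values are made up for illustration):

```python
import numpy

# Hedged sketch: label 2 has been fully masked out, so label_range still
# contains an id with no pixels in masked_labels. Checking the present ids
# makes the mismatch explicit before calling scipy.ndimage.mean, which
# would otherwise produce NaN for the absent label.
masked_labels = numpy.array([[1, 1, 0],
                             [0, 0, 0],   # label 2 was removed by the mask
                             [3, 3, 3]])
nobjects = 3
label_range = numpy.arange(1, nobjects + 1)
present = numpy.unique(masked_labels[masked_labels > 0])
missing = numpy.setdiff1d(label_range, present)  # ids with no pixels left
```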
Consider splitting this into helpers such as “subsample”, “background removal”, and “per-object measurement” to make review and future debugging easier
```python
assert result["object_id"] == [1, 1, 1]
assert result["feature"] == [1, 2, 3]
assert len(result["value"]) == EXPECTED_THREE_SCALES
```
It looks like the tests validate shape/length and other integer values. Consider adding a test for numerical correctness of granularity if possible (e.g. the granularity calculation produces an expected value on a known input).
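One way to do this is with inputs whose spectra are known analytically. The sketch below assumes a classic erosion-based granularity spectrum (percent of signal lost per erosion step); the PR's compute() may differ (e.g. grayscale reconstruction, subsampling), so the exact values are illustrative, not the module's contract:

```python
import numpy
import scipy.ndimage

# Hedged sketch of a numerical-correctness test for an erosion-based
# granularity spectrum; the helper name and formula are assumptions.
def granularity_spectrum(pixels: numpy.ndarray, n_scales: int) -> list[float]:
    startmean = max(pixels.mean(), numpy.finfo(float).eps)
    ero = pixels.copy()
    currentmean = startmean
    spectrum = []
    footprint = numpy.ones((3, 3), bool)
    for _ in range(n_scales):
        ero = scipy.ndimage.grey_erosion(ero, footprint=footprint)
        newmean = ero.mean()
        # percent of the original signal removed at this scale
        spectrum.append(100.0 * (currentmean - newmean) / startmean)
        currentmean = newmean
    return spectrum

# A flat image loses nothing to erosion: the spectrum is all zeros.
flat = numpy.full((8, 8), 0.5)
assert granularity_spectrum(flat, 3) == [0.0, 0.0, 0.0]

# A single bright pixel vanishes after one erosion: all signal at scale 1.
spot = numpy.zeros((8, 8))
spot[4, 4] = 1.0
assert granularity_spectrum(spot, 3) == [100.0, 0.0, 0.0]
```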
```python
back_mask = (
    scipy.ndimage.map_coordinates(mask.astype(float), (k, i, j))
    > mask_threshold
)
```
It looks like the interpolation order is not specified here, so it will default to cubic (order=3). Recommend setting it explicitly like the others, e.g. order=0 or order=1.
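This matters for a binary mask because cubic splines overshoot: interpolated values can land outside [0, 1] near edges, which makes the `> mask_threshold` comparison less predictable. A small sketch of the difference (arrays are made up for illustration):

```python
import numpy
import scipy.ndimage

# Hedged sketch: map_coordinates defaults to order=3 (cubic spline), whose
# negative side lobes push interpolated mask values below 0 near a sharp
# feature; order=1 keeps the result bounded in [0, 1].
mask = numpy.zeros((5, 5))
mask[2, 2] = 1.0  # one "masked-in" pixel
ii, jj = numpy.meshgrid(numpy.linspace(0, 4, 9),
                        numpy.linspace(0, 4, 9), indexing="ij")
cubic = scipy.ndimage.map_coordinates(mask, (ii, jj))            # order=3 default
linear = scipy.ndimage.map_coordinates(mask, (ii, jj), order=1)  # explicit, bounded
```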
Description
This PR adds the granularity modules. Note that tests may fail because a key piece of image-loading code lives in another PR.
What kind of change(s) are included?
Checklist
Please ensure that all boxes are checked before indicating that this pull request is ready for review.