Contact information: Please contact Birgitte Nielsen for more information.
Adaptive Low Dimensional Feature Vectors
Many statistical texture methods generate information in the form of matrices. A number of pre-defined, non-adaptive features are then extracted from each probability matrix. This feature extraction may be repeated for several settings of some free parameters (e.g. the number of gray levels in the image, the inter-pixel distance, the window size), resulting in a high-dimensional feature space. Through several studies we have proposed a unified approach that extracts only two adaptive features from each texture probability matrix. The adaptive feature extraction, which is based on a Mahalanobis class distance matrix and a class difference matrix, extracts texture features from the parts of the matrices that actually contain class distance information. In a comparative study, we found that the adaptive features outperformed the classical pre-defined features when applied to the most difficult set of 45 Brodatz texture pairs (often used for the evaluation and comparison of texture methods). In several studies, we have found that class distance and class difference matrices clearly illustrate the difference in texture between cell nucleus images from different prognostic (or diagnostic) classes. For each texture method, a single adaptive feature contains most of the discriminatory power of the method.
Ref: Nielsen B et al., Statistical nuclear texture analysis in cancer research: a review of methods and applications. Crit Rev Oncog 2008;14(2-3):89-164.
Ref: Nielsen B et al., Low dimensional adaptive texture feature vectors from class distance and class difference matrices. IEEE Trans Med Imaging 2004;23(1):73-84.
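The core of the adaptive approach described above can be sketched in code. This is a minimal illustration only, assuming a two-class setup, the gray-level co-occurrence matrix (GLCM) as the texture probability matrix, and a pooled-variance Mahalanobis distance per matrix element; the function names and the exact distance formulation are our own simplifications, not taken from the cited papers:

```python
import numpy as np

def glcm(img, levels=8, d=(0, 1)):
    """Gray-level co-occurrence matrix, normalized to a probability matrix.
    d = (row offset, column offset) is the inter-pixel displacement."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    dr, dc = d
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[q[r, c], q[r + dr, c + dc]] += 1
    return P / P.sum()

def mahalanobis_distance_matrix(mats_a, mats_b, eps=1e-12):
    """Element-wise Mahalanobis class distance between two sets of
    probability matrices (one set per class): large values mark the
    matrix elements that actually carry class distance information."""
    a, b = np.stack(mats_a), np.stack(mats_b)
    var = (a.var(0, ddof=1) + b.var(0, ddof=1)) / 2.0  # pooled variance
    return np.abs(a.mean(0) - b.mean(0)) / np.sqrt(var + eps)

def adaptive_feature(P, dist_matrix, diff_sign):
    """One adaptive feature: a weighted sum of P over the discriminatory
    elements, signed by the class difference so the two classes are
    pushed to opposite ends of the feature axis."""
    return float(np.sum(P * dist_matrix * diff_sign))
```

A sample is then classified by thresholding this single feature value, instead of feeding a long vector of pre-defined matrix features to the classifier.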
Mapping Optimal Texture Features Back Onto Images
It is important to advance our understanding of the interaction between structural and functional changes within cell nuclei. As part of the texture project, we map the relative diagnostic or prognostic importance of textural features back onto the relevant parts of the images, shown as yellow areas in the figure above.
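One simple way to realize such a mapping, under the same GLCM assumption as before: every pixel pair that contributed to co-occurrence element (i, j) is credited with that element's class distance weight, so image regions whose local gray-level transitions carry discriminatory information light up. This is a sketch of the idea only; the procedure actually used in the project may differ:

```python
import numpy as np

def importance_map(img, dist_matrix, levels=8, d=(0, 1)):
    """Project per-element class distance weights back onto the image:
    both pixels of each pair contributing to co-occurrence element
    (i, j) accumulate that element's weight, yielding a heat map of
    diagnostically important regions."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    out = np.zeros(img.shape, dtype=float)
    dr, dc = d
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            w = dist_matrix[q[r, c], q[r + dr, c + dc]]
            out[r, c] += w
            out[r + dr, c + dc] += w
    return out
```

Thresholding or color-coding the resulting map (e.g. overlaying the highest values in yellow) highlights the nuclear regions that drive the adaptive feature.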
This text was last modified: 08.02.2016