Foreign Object Detection

Mammography images often contain objects that were inserted to aid the diagnostic process. These objects might be variously shaped markers, needles, or wires that mark the location of a suspicious lump. Detecting and classifying these objects can help "target" the tumor detection algorithm to regions in which the patient felt the suspicious lump. In our detection approach, we first apply a medical object proposal algorithm that provides a set of about 10-20 regions that are very likely to depict a medical finding. Second, we use a deep neural network's internal filters to transform each region into a set of feature measurements. Finally, we apply a random forest classifier to differentiate the markers from other medical findings and to classify them into their respective types. The method achieved over 90% accuracy in the detection and classification of these markers.
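
A minimal sketch of this two-stage idea, assuming a pretrained VGG16 as the feature extractor (a stand-in, since the report does not name the network) and scikit-learn's random forest as the classifier; the region crops and labels are hypothetical:

```python
# Hedged sketch: classify candidate regions with off-the-shelf CNN features
# and a random forest. VGG16 stands in for the unnamed internal network.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.ensemble import RandomForestClassifier

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def region_features(regions):
    """regions: (N, 224, 224, 3) crops produced by the proposal stage."""
    return extractor.predict(preprocess_input(regions.astype("float32")))

# Hypothetical training data: region crops and marker-type labels.
X_train = region_features(np.random.rand(40, 224, 224, 3) * 255)
y_train = np.random.randint(0, 3, size=40)  # e.g. skin marker / wire / other

clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print(clf.predict(region_features(np.random.rand(5, 224, 224, 3) * 255)))
```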

Method for Detecting and Classifying Calcification Regions in Mammographic Images

In mammography images, calcium deposits may indicate a region with some potential for malignancy. Not all calcium deposits indicate a malignant region, but deposits with a particular morphology and distribution can be problematic. In 2014, we developed a fully automated method for detecting and classifying these regions and for describing their shape, morphology, and distribution features. We continued this development for both micro- and macro-calcifications and extended the size of both the test and training sets. For macro-calcifications, we achieved a detection rate of 95% on a data set of over 3,000 lesions.

A Cross Saliency Approach to Asymmetry Based Tumor Detection

A patient's left and right breasts usually have similar internal tissue structure and appearance in mammograms. Radiologists therefore analyze mammograms by comparing the left and right breasts to find suspicious structures in the breast tissue. Inspired by the radiologists' approach, we developed a new method that detects suspicious regions by analyzing bilateral mammography images. We defined a cross-saliency measure that uses patch statistics to detect such asymmetric regions. The proposed method does not use any image registration and can be applied across a variety of modalities. We validated the results on both breast mammograms and brain MRI. On breast mammograms, we achieved an AUC of 93%, which is better than other methods that use non-rigid registration. On brain MRI, our unsupervised method achieved results comparable to those of supervised methods in the relevant literature. A paper describing this work was presented at MICCAI 2015.
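
A toy sketch of one plausible patch-statistics reading of the idea (not the paper's exact measure): a patch in one breast is scored by its distance to the nearest patch in the mirrored other breast, so regions with no symmetric counterpart light up:

```python
# Hedged sketch of a patch-statistics cross-saliency map: a region in one
# breast is "asymmetric" when no patch in the (mirrored) other breast looks
# like it. This is our reading of the idea, not the authors' exact measure.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import NearestNeighbors

def cross_saliency(left, right, patch=8, n_samples=2000):
    right = np.fliplr(right)                    # mirror for rough alignment
    ref = extract_patches_2d(right, (patch, patch), max_patches=n_samples)
    nn = NearestNeighbors(n_neighbors=1).fit(ref.reshape(len(ref), -1))
    qry = extract_patches_2d(left, (patch, patch))
    dist, _ = nn.kneighbors(qry.reshape(len(qry), -1))
    rows = left.shape[0] - patch + 1            # map of per-patch distances
    return dist.reshape(rows, left.shape[1] - patch + 1)

sal = cross_saliency(np.random.rand(64, 64), np.random.rand(64, 64))
```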

Semantic Description of Medical Image Findings and Automatic Report Generation

Computer-Aided Diagnosis (CADx) systems are designed to assist doctors in medical image interpretation. However, a CADx system is often thought of as a "black box" whose diagnostic decision is not intelligible to a radiologist. Therefore, a system that uses semantic image interpretation, and mimics human image analysis, has clear benefits.

We developed a new method for automatic textual description of medical image findings, such as lesions in medical images. The method performs joint estimation of semantic features of lesions from image measurements. We formalize this problem as learning to map a set of diverse medical image measurements to a set of semantic descriptor values. We use a structured learning framework to model individual semantic descriptors and their relationships. The parameters of the model are efficiently learned using the Structured Support Vector Machine (SSVM).
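
As a simplified, hedged illustration of mapping image measurements to several semantic descriptors at once, the sketch below uses scikit-learn's per-descriptor multi-output classifier in place of the SSVM; unlike the actual model, it therefore ignores the relationships between descriptors, and all data shapes are hypothetical:

```python
# Minimal stand-in sketch: map image measurements to several semantic
# descriptors. A per-descriptor multi-output classifier replaces the SSVM,
# so relationships *between* descriptors are not modeled here.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 30)              # hypothetical image measurements
Y = np.random.randint(0, 2, (200, 4))    # e.g. margin/shape/density flags

model = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(model.predict(X[:3]))              # one value per semantic descriptor
```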

The proposed approach generates radiological lexicon descriptors used to diagnose various diseases. This can help radiologists easily understand a diagnosis recommendation made by an automatic system, such as CADx. We apply the proposed method to publicly available and proprietary breast and brain imaging datasets, and show that our method generates more accurate descriptions than alternative approaches.

DL-based Supervised Object Boundary Detection

Object boundary detection is a vital component of vision systems that perform tasks such as object recognition and image segmentation. Segmenting objects correctly requires the ability to distinguish between semantic object boundaries and other 'uninteresting' edges. In this work, we train a ConvNet-based system to discriminate between these two types of edges. We formulate the task as learning to map local image patch pixel values to probability maps of semantic boundaries.

The proposed deep neural network architecture is depicted in Fig. 1. The input to the first convolutional layer is a 36x36x3 feature map comprising patches taken from the original RGB image. The first three convolutional layers each have 32 filters of 5x5, and the fourth convolutional layer has 64 filters of 5x5. Average-pooling layers operate on 3x3 neighborhoods with a stride of 2 pixels. Both fully-connected layers have 256 neurons, and the output of the second fully-connected layer feeds a sigmoid layer. Finally, the output of the sigmoid layer is reshaped into a 16x16 map of output probabilities.
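
A hedged Keras sketch of this architecture; the text fixes the layer sizes but not the exact pooling placement, so putting the two pooling layers after the first two convolutions is our assumption:

```python
# Hedged Keras sketch of the Fig. 1 network. Pooling placement after the
# first two conv layers is assumed; the text specifies only the sizes.
from tensorflow.keras import layers, models

net = models.Sequential([
    layers.Input((36, 36, 3)),
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    layers.AveragePooling2D(3, strides=2),
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    layers.AveragePooling2D(3, strides=2),
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    layers.Conv2D(64, 5, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="sigmoid"),  # 256 = 16x16 boundary scores
    layers.Reshape((16, 16)),
])
net.compile(optimizer="adam", loss="binary_crossentropy")
```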

Fig. 2 shows examples of the state-of-the-art method and of the proposed method applied to the semantic boundary detection task on the BSDS500 dataset. Our method produces similar performance figures but yields arguably better visual results.

Fig. 1. The regression-output network architecture: the input is an image patch of 36x36 pixels; the output is a 16x16 patch of scores in [0…1], corresponding to the probability of a semantic object boundary being present at each pixel.

Fig. 2. Examples of semantic boundary detection: (a) original image, (b) the state-of-the-art method of P. Dollar, (c) the proposed method.

Tumor Classification Using Supervised DL Methods

We use DL-based trainable patch-wise feature extraction and classification to extract feature vectors that describe tumors globally. We aggregate the patch-wise features and classification scores into a new bag of DL features that describes the tumor as a whole and classifies it into BI-RADS classes or as malignant/benign.
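
A minimal sketch of one way such aggregation could look, assuming the patch-wise stage already produced per-patch feature vectors and scores; the pooling recipe (mean, max, score histogram) is illustrative rather than the report's exact one:

```python
# Hedged sketch of the bag-of-DL-features idea: pool patch-wise CNN feature
# vectors and scores into one fixed-length tumor descriptor, then classify.
import numpy as np
from sklearn.svm import SVC

def bag_of_features(patch_feats, patch_scores):
    """patch_feats: (n_patches, d); patch_scores: (n_patches,)."""
    return np.concatenate([
        patch_feats.mean(axis=0), patch_feats.max(axis=0),    # feature pooling
        np.histogram(patch_scores, bins=8, range=(0, 1))[0],  # score histogram
    ])

# Hypothetical per-tumor patch outputs from the patch-wise DL stage.
tumors = [(np.random.rand(50, 64), np.random.rand(50)) for _ in range(30)]
X = np.stack([bag_of_features(f, s) for f, s in tumors])
y = np.random.randint(0, 2, 30)            # benign / malignant labels
clf = SVC().fit(X, y)
```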

Multilabel Segmentation of Medical Images

Automatic tissue classification from medical images is an important step in pathology detection and diagnosis. Here, we deal with mammography images and present a novel supervised deep-learning-based framework for region classification into semantically coherent tissues.

The proposed method uses a Convolutional Neural Network (CNN) to learn discriminative features automatically. We overcome the difficulty posed by a medium-sized database by training the CNN in an overlapping patch-wise manner. To accelerate and fine-tune the pixel-wise automatic class prediction, we use convolutional layers instead of the classical fully-connected layers. This approach results in significantly faster computation while preserving classification accuracy. The proposed method was tested on annotated mammography images and demonstrates promising image segmentation and tissue classification results.
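
The sketch below illustrates the fully-convolutional trick this paragraph describes, under assumed layer sizes and class count: 1x1 convolutions replace the fully-connected layers, so a patch-trained net can emit a dense per-pixel class map for a whole image in one pass:

```python
# Hedged sketch of the fully-convolutional conversion: a Dense layer trained
# on patches becomes a 1x1 convolution, so the net scores every pixel in one
# pass over the full image instead of looping over overlapping patches.
from tensorflow.keras import layers, models

n_classes = 4  # hypothetical number of tissue classes

patch_net_as_fcn = models.Sequential([
    layers.Input((None, None, 1)),                 # any image size
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    layers.Conv2D(32, 5, padding="same", activation="relu"),
    # 1x1 convolutions play the role of the former fully-connected layers:
    layers.Conv2D(256, 1, activation="relu"),
    layers.Conv2D(n_classes, 1, activation="softmax"),  # per-pixel class map
])
```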

Fast Feature Extraction and Multiple Classifier Options Added to the Pipeline

To make the pipeline run faster, we rewrote the feature extraction modules using parallel computing and LSF. We performed further feature selection and testing and added various options. In addition to the existing methods, we added new LBP-based texture descriptors. We also added the ability to run and test additional types of classifiers, such as logistic regression, softmax, SVMs with various kernels, and random forests.
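
A small sketch of an LBP texture descriptor of the kind mentioned, using scikit-image's uniform local binary patterns; the parameter choices are illustrative:

```python
# Hedged sketch of an LBP texture descriptor: a histogram of uniform local
# binary patterns over a region, usable as a feature vector in the pipeline.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(region, points=8, radius=1):
    lbp = local_binary_pattern(region, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

print(lbp_descriptor(np.random.randint(0, 256, (32, 32), dtype=np.uint8)))
```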

Deep Scale-Space Label Fusion for Lesion Segmentation

We developed a novel scale-space label fusion method based on a multi-stage Convolutional Neural Net (CNN) system. The first stage contains several convolutional and fully connected layers and serves to learn and aggregate features at multiple scales and locations. The second stage performs semantic label fusion. The proposed method improves upon the accuracy of competing methods by 3%-10%, depending on the data set.
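
A hedged structural sketch of the two-stage idea, with per-scale branches feeding a fusion stage; all layer sizes, the two scales, and the two-class output are our assumptions:

```python
# Hedged two-stage sketch: per-scale CNN branches whose outputs a second,
# fusion stage combines into one label decision. Sizes are illustrative.
from tensorflow.keras import layers, models

def branch(scale):
    inp = layers.Input((scale, scale, 1))
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, layers.Dense(32, activation="relu")(x)

(inp_a, feat_a), (inp_b, feat_b) = branch(32), branch(64)  # two scales
merged = layers.Concatenate()([feat_a, feat_b])
fused = layers.Dense(64, activation="relu")(merged)        # fusion stage
out = layers.Dense(2, activation="softmax")(fused)         # lesion / background
model = models.Model([inp_a, inp_b], out)
```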

Multimodal Data Fusion: Visual and Clinical Data Fusion by MKL Methods

To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), a lot of effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited since they rely on correct automatic lesion detection and localization, and on the robustness of features computed based on the detected areas.

In this work, we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features, and construct the textual descriptor of patients by extracting relevant keywords from the patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely, early, late, and intermediate (MKL-based) fusion. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields a significant improvement in classification accuracy, compared to a classifier based only on image features.
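
The sketch below illustrates intermediate (MKL-style) fusion under simplifying assumptions: one kernel per modality, combined as a weighted sum fed to a precomputed-kernel SVM; a fixed weight stands in for the weights MKL would learn, and all data are synthetic:

```python
# Hedged sketch of intermediate (MKL-style) fusion: one kernel per modality,
# combined as a weighted sum and fed to an SVM with a precomputed kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

X_img = np.random.rand(100, 40)       # visual features
X_txt = np.random.rand(100, 200)      # sparse clinical-keyword features
y = np.random.randint(0, 2, 100)      # benign / malignant

w = 0.7                               # modality weight (learned in real MKL)
K = w * rbf_kernel(X_img) + (1 - w) * linear_kernel(X_txt)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K[:5]))             # scoring a few training rows, for shape
```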

Hybrid Unsupervised Lesion Candidate Detection with Supervised Ranking Stage

Automatic lesion detection is believed to be one of the most challenging and still open problems in breast imaging, and specifically in mammography. The main difficulty stems from the fact that the appearance of various objects and tissues in a mammogram is similar to that of a lesion. This results in high rates of both false positives and missed detections.

In this work, we propose a novel approach that combines unsupervised and supervised steps. In the first, unsupervised step, we produce a relatively large number of possible lesion candidates. This stage is based on a semantic threshold binarization approach that finds initial seeds. Given the seeds, we use a dynamic programming framework that analyzes local gradients to expand each seed into a candidate contour. This unsupervised step yields up to 100 candidate contours. In the second, supervised step, we calculate a set of rich features for each candidate contour and train a RankSVM-type classifier to select the best candidate based on these features. We show experimentally that the proposed method produces highly accurate detection results.
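
One common way to realize a RankSVM-type ranker, sketched below under the assumption that training supplies (better, worse) candidate pairs: pairwise feature differences are fit with a linear SVM, and the learned weights then score new candidates; the feature dimensions are hypothetical:

```python
# Hedged sketch of a RankSVM-type ranking stage: turn (better, worse)
# candidate pairs into difference vectors, fit a linear SVM on them, and
# score any candidate contour's features with the learned weights.
import numpy as np
from sklearn.svm import LinearSVC

def fit_ranker(pairs):
    """pairs: list of (features_of_better, features_of_worse)."""
    X = np.array([a - b for a, b in pairs] + [b - a for a, b in pairs])
    y = np.array([1] * len(pairs) + [0] * len(pairs))
    return LinearSVC().fit(X, y)

pairs = [(np.random.rand(12), np.random.rand(12)) for _ in range(50)]
ranker = fit_ranker(pairs)
cands = np.random.rand(100, 12)            # up to 100 candidate contours
best = cands[np.argmax(cands @ ranker.coef_.ravel())]
```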

Contact

Aviad Zlotnick

Breast Skin Abnormalities Analysis

Breast skin abnormalities may indicate the presence of malignancy. To analyze the breast skin, we developed a novel algorithm that separates the background from the foreground in X-ray images. We implemented a precise analysis of the breast contour geometry and developed an algorithm that tracks skin width. With these tools, we are able to identify focal and diffuse skin thickening, as well as skin retraction.

Contact

Aviad Zlotnick

A Weakly Labeled Approach for Breast Tissue Segmentation and Breast Density Estimation in Digital Mammography

Breast tissue segmentation is a fundamental task in digital mammography. This segmentation is generally applied prior to breast density estimation. However, observations show a strong correlation between the segmentation parameters and the breast density, resulting in a chicken-and-egg problem.

This work presents a new method for breast segmentation, based on training with weakly labeled data, namely breast density categories.

To this end, a fuzzy-logic module is employed that computes an adaptive parameter for segmentation. The suggested scheme includes a feedback stage that uses a preliminary segmentation to extract domain-specific features from an early estimate of the tissue regions. Selected features are then fed into the fuzzy-logic module to yield an updated threshold for segmentation.

Our evaluation is based on 50 images with delineated fibroglandular tissue and on breast density classification over a large data set of 1,243 full-field digital mammograms acquired on different devices. The proposed analysis provides an average Jaccard spatial similarity coefficient of 0.4, with an improvement of this measure in 70% of the cases where the suggested module was applied. In breast density classification, the results yielded an average classification accuracy of 75%, significantly improving on the baseline method (67%). The major improvement is obtained at low breast densities, where higher threshold levels reject false-positive regions. These results show promise for the clinical application of this method in breast segmentation, without the need for laborious tissue annotation.
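
A toy sketch of the feedback idea under stated assumptions: a preliminary threshold segmentation yields a density-like feature, triangular fuzzy memberships over that feature vote on an updated threshold, and (per the text) lower densities get a higher threshold; all rules and constants are illustrative, not the paper's:

```python
# Hedged sketch of the fuzzy-logic feedback loop for an adaptive threshold.
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership with peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def adaptive_threshold(image, t0=0.5):
    fg = image > t0                      # preliminary segmentation
    density = fg.mean()                  # early domain-specific feature
    low = tri(density, -0.2, 0.0, 0.4)
    med = tri(density, 0.0, 0.4, 0.8)
    high = tri(density, 0.4, 0.8, 1.2)
    # Per the text, low densities benefit from a *higher* threshold that
    # rejects false-positive regions; the shift constants are illustrative.
    return (low * (t0 + 0.2) + med * (t0 + 0.1) + high * t0) / (low + med + high)

print(adaptive_threshold(np.random.rand(128, 128)))
```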

A paper describing this work was accepted to ISBI 2016.

Contact

Rami Ben-Ari

Automatic Dual-View Mass Detection in Full-Field Digital Mammograms

The purpose of this work was to improve the accuracy of lesion detection in mammography by combining information from two anatomical viewpoints: cranial-caudal (CC) and mediolateral oblique (MLO). Our approach follows the common practice of radiologists who examine both mammography views, assuming that true lesions are more likely than false detections to appear in both views. We developed a dual-view analysis framework for scoring the match between pairs of detected candidate lesions. The framework extracts a multitude of features from the candidate lesions, defines a 'correspondence descriptor' for the pairwise matching, and scores these descriptors using a random-forest classifier. The identification of correct lesion pairs also facilitates estimation of their three-dimensional spatial location.

The performance of the method, and its potential to improve single-view detection, was evaluated on a publicly available full-field digital mammography database (INbreast). Classification of the correspondence descriptors into true and false pairwise matches provided an area under the receiver operating characteristic curve (AUROC) of 0.96, with optimal sensitivity and specificity of 89% and 96%, respectively. A correct one-to-one assignment of true CC-MLO mass candidates was found in 67% of the pairs, and a correct breast quadrant was estimated in 77% of the cases. The combined single-view and dual-view mass detection provided an AUROC of 0.94, with a detection sensitivity of 87% at a specificity of 90%. This significantly improved on the single-view performance (72% sensitivity at 90% specificity, 78% specificity at 87% sensitivity, P<0.05). A paper describing this work was presented at MICCAI 2015.
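
A minimal sketch of the scoring stage, assuming candidate features already exist; the composition of the correspondence descriptor here (concatenation plus absolute difference) is illustrative rather than the paper's definition:

```python
# Hedged sketch of dual-view matching: build a 'correspondence descriptor'
# from a CC candidate's and an MLO candidate's features, then score the
# pair with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def correspondence_descriptor(feat_cc, feat_mlo):
    return np.concatenate([feat_cc, feat_mlo, np.abs(feat_cc - feat_mlo)])

# Hypothetical candidate features from the single-view detector.
pairs = [correspondence_descriptor(np.random.rand(20), np.random.rand(20))
         for _ in range(200)]
labels = np.random.randint(0, 2, 200)      # true vs. false CC-MLO match
scorer = RandomForestClassifier(n_estimators=100).fit(np.array(pairs), labels)
match_prob = scorer.predict_proba(np.array(pairs[:3]))[:, 1]
```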

Breast Density Evaluation

The accuracy of mammographic abnormality detection methods depends heavily on the breast tissue characteristics: a dense breast drastically reduces detection sensitivity. In addition, breast tissue density is widely accepted as an important risk indicator for the development of breast cancer. The BI-RADS breast density (BD) classification uses four levels, and at the highest level the sensitivity of the radiologist drops to 30%. Previous methods often calculate the BD solely from the ratio of fibroglandular (FG) tissue in the breast after segmentation. This approach has two drawbacks: 1) FG misclassification directly impacts the BD estimate, and 2) the new BI-RADS standard for BD introduces concerns, such as the occluding effect, that are not reflected in the previous measure. Our BD estimation tackles these drawbacks by considering a variety of feature types, such as texture, breast FG and fat statistics, shape, and contrast. A further ML-based breast density classification yields robustness to the new BI-RADS measures.

Contact

Rami Ben-Ari

Biomedical Framework

The analytics modules are written in a variety of programming languages, including Matlab, Java, C, and R, and some of them are UIMA annotators. These modules need to be combined into coherent pipelines that execute as a service in the Cloud.

We built a generic Biomedical Runtime Framework for integrating analytics modules written in different programming languages into pipelines and executing them on a variety of scalable infrastructures (e.g., Apache UIMA, Apache Spark, Platform LSF), according to the pipeline characteristics. The framework exposes the pipelines as REST services in the Cloud, e.g., in IBM.Next. It was successfully used to create the various pipelines of the breast analytics service and to evaluate the pipelines' overall execution and accuracy against the ground truth. It was also used to create the MAT service and a DNN service. The Biomedical Framework is the runtime layer of the MedSieve Analytics Framework.
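
A hedged sketch of what calling such a REST-exposed pipeline could look like from a client; the endpoint URL, payload fields, and response shape are hypothetical, not the framework's actual API:

```python
# Hedged sketch of invoking a pipeline exposed as a REST service. The
# endpoint URL, payload fields, and response shape are hypothetical.
import requests

resp = requests.post(
    "https://example-cloud-host/biomedical/pipelines/breast-analytics/run",
    json={"study_id": "S-001", "modality": "mammography"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())        # e.g. detected findings and their scores
```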

Spark for Multi-Modal Analytics

The open-source Apache Spark is today the leading platform for Big Data analytics, with a large and growing ecosystem. It provides a scalable, fault-tolerant, distributed backend for robustly analyzing large datasets on a scale-out cluster. However, as of today, Spark is oriented towards analyzing text, has no built-in support for Matlab or legacy code, and requires learning the framework's architecture and APIs to write programs for that environment. We are adding support for automatic translation from a descriptive pipeline flow into an efficient Spark application that can perform multi-modal analytics and utilize analytics modules written in various programming languages, e.g., Matlab, Java, and C.
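
One plausible building block for such generated applications, sketched with PySpark: records are streamed through an external legacy executable via RDD.pipe; the executable name and record format are hypothetical:

```python
# Hedged sketch of one way a generated Spark application could wrap legacy
# modules: stream records through an external executable with RDD.pipe.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multimodal-pipeline").getOrCreate()
records = spark.sparkContext.parallelize(["case-001", "case-002", "case-003"])

# Each partition's records are piped line-by-line through the legacy binary.
results = records.pipe("./legacy_feature_extractor").collect()
spark.stop()
```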

Modality Analytics Toolkit (MAT)

The MAT service is a collection of generic modality analytics modules and pipelines originating from the Medical Sieve project. They can be incorporated into external workflows or data flows beyond the healthcare domain.

The modules we currently provide are: binarization by tiles, saliency, multi-level non-linear clustering, and thresholding. The service is based on the Biomedical Framework and is included in IBM.Next (http://wpncatalog.stage1.mybluemix.net/assets/assets_medical_sieve_modality_analytics_toolkit_mat_).