Machine Learning

This research was supported by NSF grant DMS-1616340.

Uncertainty-aware fine-tuning of segmentation foundation models

The Segment Anything Model (SAM) is a large-scale foundation model that has revolutionized segmentation methodology. Despite its impressive generalization ability, the segmentation accuracy of SAM on images with intricate structures is often unsatisfactory. Recent works have proposed lightweight fine-tuning on high-quality annotated data to improve accuracy on such images. However, here we provide extensive empirical evidence that this strategy leads to forgetting how to “segment anything”: these models lose the original generalization abilities of SAM, in the sense that they perform worse on segmentation tasks not represented in the annotated fine-tuning set. To improve performance without forgetting, we introduce a novel framework that combines high-quality annotated data with a large unlabeled dataset. The framework relies on two methodological innovations. First, we quantify the uncertainty in the SAM pseudo labels associated with the unlabeled data and leverage it to perform uncertainty-aware fine-tuning. Second, we encode the type of segmentation task associated with each training example using a task prompt to reduce ambiguity. We evaluate the proposed Segmentation with Uncertainty Model (SUM) on a diverse test set consisting of 14 public benchmarks, where it achieves state-of-the-art results.
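To make the first innovation concrete, the sketch below shows one way to implement uncertainty-aware fine-tuning: pseudo-label uncertainty is measured with a simple per-pixel entropy and used to down-weight the loss on unlabeled images. The model interface, batch formats, and weighting scheme are illustrative assumptions, not the exact SUM training procedure.

```python
# Minimal sketch (not the exact SUM training code): fine-tune a segmentation
# model on a mix of annotated images and SAM pseudo labels, down-weighting
# pixels whose pseudo labels are uncertain. `model`, the batch formats, and
# the task prompts are placeholders for illustration.
import torch
import torch.nn.functional as F

def pseudo_label_uncertainty(sam_probs):
    """Per-pixel uncertainty of SAM pseudo labels, here simply the binary
    entropy of the predicted foreground probability, normalized to [0, 1]."""
    eps = 1e-6
    p = sam_probs.clamp(eps, 1 - eps)
    entropy = -(p * p.log() + (1 - p) * (1 - p).log())
    return entropy / torch.log(torch.tensor(2.0))

def uncertainty_aware_loss(logits, pseudo_labels, uncertainty):
    """Binary cross-entropy weighted by pseudo-label confidence."""
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels, reduction="none")
    weights = 1.0 - uncertainty  # confident pixels contribute more
    return (weights * per_pixel).mean()

def fine_tuning_step(model, optimizer, labeled_batch, unlabeled_batch):
    images_l, masks_l, prompts_l = labeled_batch        # high-quality annotations
    images_u, sam_probs_u, prompts_u = unlabeled_batch  # SAM pseudo labels

    # Supervised loss on the annotated data (task prompt reduces ambiguity).
    loss_sup = F.binary_cross_entropy_with_logits(
        model(images_l, prompts_l), masks_l)

    # Uncertainty-aware loss on the pseudo-labeled data.
    uncertainty = pseudo_label_uncertainty(sam_probs_u)
    loss_pseudo = uncertainty_aware_loss(
        model(images_u, prompts_u), (sam_probs_u > 0.5).float(), uncertainty)

    loss = loss_sup + loss_pseudo
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```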
Multiple instance learning

Learning representations for individual instances when only bag-level labels are available is a fundamental challenge in multiple instance learning. Recent works have shown promising results using contrastive self-supervised learning (CSSL), which learns to push apart the representations of two different randomly-selected instances. Unfortunately, real-world applications often exhibit class imbalance, so randomly-selected instances mostly belong to the same majority class, which precludes CSSL from learning inter-class differences. To address this issue, we propose a novel framework that improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels. We evaluate the framework on three medical datasets, showing that it improves the quality of instance-level pseudo labels and representations, increasing both bag- and instance-level accuracy.
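The sketch below illustrates how instance-level pseudo labels can steer contrastive learning: pairs that share a pseudo label are pulled together, so the loss no longer depends on random instances happening to come from different classes. The pseudo-labeling rule, temperature, and loss form are assumptions for illustration rather than the framework's exact objective.

```python
# Illustrative sketch: a supervised-contrastive-style loss in which
# instance-level pseudo labels (e.g. 0 for all instances from negative bags,
# 1 for high-scoring instances from positive bags) decide which pairs are
# pulled together. The pseudo-labeling rule and temperature are assumptions.
import torch
import torch.nn.functional as F

def pseudo_label_contrastive_loss(embeddings, pseudo_labels, temperature=0.1):
    """embeddings: (N, D) instance features; pseudo_labels: (N,) in {0, 1}."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                        # (N, N) similarities
    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    # Positives: other instances that share the same pseudo label.
    pos_mask = (pseudo_labels[:, None] == pseudo_labels[None, :]) & ~self_mask

    # Log-softmax over all other instances for each anchor.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of the positive pairs of each anchor.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()              # skip anchors without positives
```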
Deep probability estimation

Reliable probability estimation is of crucial importance in many real-world applications with inherent uncertainty, such as weather forecasting, medical prognosis, or collision avoidance in autonomous vehicles. Probability-estimation models are trained on observed outcomes (e.g. whether it has rained or not, or whether a patient has died or not), because the ground-truth probabilities of the events of interest are typically unknown. The problem is therefore analogous to binary classification, with the important difference that the objective is to estimate probabilities rather than to predict a specific outcome. The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks, and to propose a new method that modifies the training process to promote output probabilities that are consistent with empirical probabilities computed from the data.
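One way to promote such consistency, shown in the hedged sketch below, is to bin the current predictions, compute the empirical outcome frequency in each bin, and add a loss term that pulls predictions toward those frequencies. The binning scheme and weighting are illustrative assumptions and not necessarily the exact method proposed in this work.

```python
# Hedged illustration (not necessarily this work's exact procedure): bin the
# current predicted probabilities, compute the observed outcome frequency in
# each bin, and add a term that pulls predictions toward those empirical
# frequencies, alongside the usual cross-entropy on the observed outcomes.
import torch
import torch.nn.functional as F

def empirical_probability_targets(pred_probs, outcomes, n_bins=10):
    """Replace each prediction by the empirical outcome frequency of its bin."""
    bins = torch.clamp((pred_probs * n_bins).long(), max=n_bins - 1)
    targets = pred_probs.clone()
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.any():
            targets[in_bin] = outcomes[in_bin].float().mean()
    return targets

def probability_estimation_loss(logits, outcomes, lam=1.0):
    """Cross-entropy on observed binary outcomes plus a consistency term
    computed from binned empirical probabilities (targets are detached)."""
    probs = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, outcomes.float())
    with torch.no_grad():
        emp_targets = empirical_probability_targets(probs, outcomes)
    consistency = F.binary_cross_entropy(probs, emp_targets)
    return ce + lam * consistency
```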
Segmentation from noisy annotations

We study the learning dynamics of deep segmentation networks trained on inaccurately-annotated data. We observe a phenomenon previously reported in classification problems: the networks tend to first fit the clean pixel-level labels during an “early-learning” phase, before eventually memorizing the false annotations. However, in contrast to classification, memorization in segmentation does not arise simultaneously for all semantic categories. Inspired by these findings, we propose a new method for segmentation from noisy annotations with two key elements. First, we detect the beginning of the memorization phase separately for each category during training and adaptively correct the noisy annotations in order to exploit early learning. Second, we incorporate a regularization term that enforces consistency across scales. Our method outperforms standard approaches on a medical-imaging segmentation task where noise is synthesized to mimic human annotation errors, and it is robust to the realistic annotation noise present in weakly-supervised semantic segmentation.
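The sketch below illustrates both elements under stated assumptions: memorization onset is detected per category by monitoring when agreement with the noisy annotations plateaus, flagged categories have their labels replaced by confident model predictions, and a consistency term compares predictions at two scales. The detection criterion, thresholds, and loss form are assumptions rather than the exact algorithm.

```python
# Rough sketch under stated assumptions (not the exact algorithm): track, for
# each semantic category, how well predictions fit the given noisy annotations;
# once that fit stops improving (taken here as the onset of memorization),
# switch that category's targets to the model's own confident predictions.
# A scale-consistency term compares predictions at two resolutions.
import torch
import torch.nn.functional as F

class PerCategoryMemorizationDetector:
    """Flags a category once its agreement with the noisy labels plateaus."""
    def __init__(self, num_classes, patience=3, tol=1e-3):
        self.best = [0.0] * num_classes
        self.stale = [0] * num_classes
        self.memorizing = [False] * num_classes
        self.patience, self.tol = patience, tol

    def update(self, preds, noisy_labels):
        # preds, noisy_labels: (B, H, W) integer category maps.
        for c in range(len(self.best)):
            mask = noisy_labels == c
            if mask.sum() == 0:
                continue
            agreement = (preds[mask] == c).float().mean().item()
            if agreement > self.best[c] + self.tol:
                self.best[c], self.stale[c] = agreement, 0
            else:
                self.stale[c] += 1
                if self.stale[c] >= self.patience:
                    self.memorizing[c] = True

def corrected_targets(noisy_labels, pred_probs, detector, threshold=0.9):
    """Replace noisy labels with confident predictions for flagged categories."""
    conf, pred = pred_probs.max(dim=1)          # (B, H, W) confidence and class
    targets = noisy_labels.clone()
    for c, flagged in enumerate(detector.memorizing):
        if flagged:
            replace = (noisy_labels == c) & (conf > threshold)
            targets[replace] = pred[replace]
    return targets

def scale_consistency(logits, scale=0.5):
    """Penalize disagreement between full- and low-resolution predictions."""
    small = F.interpolate(logits, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    up = F.interpolate(small, size=logits.shape[-2:], mode="bilinear",
                       align_corners=False)
    return F.mse_loss(up.softmax(dim=1), logits.softmax(dim=1))
```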
Classification from noisy labels

We propose a framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an early-learning phase, before eventually memorizing the examples with false labels. We show that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, proving that they occur even in simple linear models. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early-learning phase.
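A simplified sketch of how the early-learning phase can be exploited is given below: a running average of the model's softmax outputs serves as a target that anchors predictions to what was learned early, before memorization sets in. The momentum, regularization weight, and CPU bookkeeping are illustrative choices, not the exact published implementation.

```python
# Simplified sketch of exploiting early learning (momentum, weight, and CPU
# bookkeeping are illustrative choices): keep a running average of the model's
# softmax outputs for every training example and penalize predictions that
# drift away from those early, mostly-clean targets.
import torch
import torch.nn.functional as F

class EarlyLearningRegularizer:
    def __init__(self, num_examples, num_classes, momentum=0.7, lam=3.0):
        self.targets = torch.zeros(num_examples, num_classes)  # kept on CPU
        self.momentum, self.lam = momentum, lam

    def __call__(self, logits, labels, indices):
        """indices: CPU LongTensor identifying each example in the dataset."""
        probs = logits.softmax(dim=1)
        with torch.no_grad():
            new = probs.detach().cpu()
            self.targets[indices] = (self.momentum * self.targets[indices]
                                     + (1 - self.momentum) * new)
        ce = F.cross_entropy(logits, labels)
        # Agreement between current predictions and the averaged early targets;
        # minimizing log(1 - agreement) keeps predictions close to them.
        inner = (self.targets[indices].to(probs.device) * probs).sum(dim=1)
        reg = torch.log(1.0 - inner.clamp(max=1 - 1e-4)).mean()
        return ce + self.lam * reg
```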
Data-driven frequency estimation

Frequency estimation is a fundamental problem in signal processing, with applications in radar imaging, underwater acoustics, seismic imaging, and spectroscopy. The goal is to estimate the frequency of each component in a multisinusoidal signal from a finite number of noisy samples. We propose a learning-based framework for frequency estimation that achieves state-of-the-art results.
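To make the problem setup concrete, the snippet below synthesizes noisy multisinusoidal signals of the kind such a learning-based estimator would be trained on; the signal model, parameter ranges, and noise level are assumptions for illustration.

```python
# Illustrative problem setup (signal model, ranges, and noise level are
# assumptions): synthesize noisy multisinusoidal signals of the kind a
# learning-based frequency estimator would be trained on.
import numpy as np

def sample_signal(num_samples=50, max_components=5, noise_std=0.1,
                  rng=np.random.default_rng()):
    """Return noisy samples of a multisinusoidal signal and its true frequencies."""
    k = rng.integers(1, max_components + 1)    # number of sinusoidal components
    freqs = rng.uniform(0.0, 0.5, size=k)      # normalized frequencies
    amps = rng.uniform(0.5, 1.5, size=k)
    n = np.arange(num_samples)
    clean = sum(a * np.cos(2 * np.pi * f * n) for a, f in zip(amps, freqs))
    noisy = clean + noise_std * rng.standard_normal(num_samples)
    return noisy.astype(np.float32), freqs     # network input and regression target
```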
Robust classification in the presence of extraneous variables

Extraneous variables are variables that are irrelevant for a certain task, but heavily affect the distribution of the data. We show that standard classification models trained with batch normalization learn features that are highly dependent on extraneous variables. In batch normalization, the statistics used to normalize the features are learned from the training set and fixed at test time, which produces a mismatch in the presence of varying extraneous variables. We demonstrate that estimating the feature statistics adaptively during inference, as in instance normalization, addresses this issue, producing normalized features that are more robust.
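A minimal sketch of this idea in PyTorch is shown below: BatchNorm2d layers are swapped for InstanceNorm2d layers that reuse the learned affine parameters, so feature statistics are estimated from each test example. Whether this matches the exact procedure used in the paper is not guaranteed.

```python
# Minimal sketch of adaptive feature statistics at inference: replace the fixed
# BatchNorm statistics with statistics estimated from each input, as in
# instance normalization. Assumes BatchNorm2d layers with affine parameters.
import torch.nn as nn

def use_adaptive_statistics(model):
    """Swap every BatchNorm2d for an InstanceNorm2d that reuses its affine
    parameters, so features are normalized with per-example statistics."""
    for name, module in model.named_children():
        if isinstance(module, nn.BatchNorm2d):
            inorm = nn.InstanceNorm2d(module.num_features, affine=True)
            inorm.weight.data.copy_(module.weight.data)
            inorm.bias.data.copy_(module.bias.data)
            setattr(model, name, inorm)
        else:
            use_adaptive_statistics(module)  # recurse into submodules
    return model
```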