What does "Training-time Calibration" mean?
Training-time calibration refers to techniques that encourage object detectors to produce confidence scores that match how often their predictions are actually correct. When we train these detectors, we want them to learn not just what they see but also how confident they should be in each prediction.
One way to achieve this is to add calibration-aware loss functions during the training process. These loss terms penalize overconfident or underconfident predictions, so the model learns to produce trustworthy confidence scores from the start.
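As a minimal sketch of what such a loss can look like, the PyTorch snippet below adds an illustrative penalty on the gap between a batch's average confidence and its accuracy on top of the usual cross-entropy. The function name, the `lambda_cal` weight, and the specific penalty are assumptions for illustration, not any particular published training-time calibration loss.

```python
import torch
import torch.nn.functional as F

def calibration_aware_loss(logits, targets, lambda_cal=0.1):
    """Cross-entropy plus an illustrative calibration penalty.

    The penalty pushes the batch's mean confidence toward the batch
    accuracy, discouraging systematic overconfidence during training.
    """
    ce = F.cross_entropy(logits, targets)

    probs = F.softmax(logits, dim=-1)
    confidences, predictions = probs.max(dim=-1)
    accuracy = (predictions == targets).float().mean()

    # Penalize the gap between average confidence and accuracy.
    cal_penalty = (confidences.mean() - accuracy).abs()

    return ce + lambda_cal * cal_penalty
```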
Another method, Temperature Scaling, is applied after the detector is already trained: a single scalar temperature divides the logits before the softmax, which smooths or sharpens the predicted probabilities to improve their reliability without changing the underlying model's weights or which class it predicts.
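Here is a sketch of the standard way to fit that temperature: minimize the negative log-likelihood on a held-out validation set, keeping all other parameters frozen. The function and variable names are illustrative; optimizing the log of the temperature is a convenience that keeps it positive.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Fit a single temperature T on held-out logits by minimizing NLL.

    val_logits: (N, C) raw logits from the trained model
    val_labels: (N,) integer class labels
    """
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T > 0
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage: rescale logits at inference time.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = F.softmax(test_logits / T, dim=-1)
```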
However, not every way of evaluating these calibration techniques is sound; some metrics can produce misleading comparisons, for example by rewarding lower calibration error while ignoring a drop in detection accuracy. A reliable evaluation framework should assess the object detector's accuracy and its calibration together.
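To make "assessing both together" concrete, the NumPy sketch below (names illustrative) reports accuracy alongside Expected Calibration Error (ECE), a common calibration metric that bins predictions by confidence and averages, weighted by bin size, the gap between each bin's confidence and its accuracy.

```python
import numpy as np

def accuracy_and_ece(confidences, predictions, labels, n_bins=10):
    """Report accuracy and Expected Calibration Error side by side.

    confidences: (N,) max softmax score per prediction
    predictions: (N,) predicted class ids
    labels:      (N,) ground-truth class ids
    """
    correct = (predictions == labels).astype(float)
    accuracy = correct.mean()

    ece = 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |avg confidence - avg accuracy| in this bin, weighted by bin size
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap

    return accuracy, ece
```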
Simpler calibration techniques applied after training, such as Platt Scaling and Isotonic Regression, can often match or outperform complicated training-time methods. Because they are fitted on a held-out set once the detector is trained, they cost far less compute and time, yet they can still provide high-quality calibration for object detectors.
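To show how cheap these post-hoc methods are to apply, here is a sketch using scikit-learn. The held-out scores and correctness labels are synthetic and purely illustrative; in practice they would come from matching the detector's predictions against ground truth on a validation set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Illustrative held-out set: raw detector confidences and whether each
# detection was actually correct (1) or not (0), from an overconfident model.
val_scores = rng.uniform(0.2, 1.0, size=1000)
val_correct = (rng.uniform(size=1000) < 0.8 * val_scores).astype(int)

# Platt scaling: a logistic regression fitted on the raw scores.
platt = LogisticRegression()
platt.fit(val_scores.reshape(-1, 1), val_correct)

# Isotonic regression: a monotone, non-parametric score -> probability map.
isotonic = IsotonicRegression(out_of_bounds="clip")
isotonic.fit(val_scores, val_correct)

# Calibrate new detection scores.
test_scores = np.array([0.55, 0.90, 0.99])
print(platt.predict_proba(test_scores.reshape(-1, 1))[:, 1])
print(isotonic.predict(test_scores))
```

Both calibrators are fitted in milliseconds on a validation set, which is what makes them attractive baselines before reaching for training-time methods.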