How CLEVER is your neural network? Robustness evaluation against adversarial examples
February 3, 2020
Neural networks are particularly vulnerable to adversarial inputs: carefully designed perturbations can lead a well-trained model to misbehave, raising new concerns for safety-critical and security-critical applications. Pin-Yu Chen offers an overview of CLEVER, a comprehensive robustness measure that can be used to assess the robustness of any neural network classifier.
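To make the threat concrete, here is a minimal, hypothetical sketch (not from the article) of an adversarial perturbation. It uses the classic fast gradient sign method (FGSM) against a toy fixed-weight logistic-regression classifier: a small, bounded change to the input, aligned with the loss gradient, flips the model's prediction.

```python
import numpy as np

# Hypothetical toy classifier: fixed logistic-regression weights,
# chosen only to illustrate the effect of a small perturbation.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    # Class 1 if the logit is positive, else class 0.
    return int(w @ x + b > 0)

def fgsm(x, y, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([0.1, 0.0])
print(predict(x))                  # clean input: predicted class 1
x_adv = fgsm(x, y=1, eps=0.2)
print(predict(x_adv))              # perturbed input: prediction flips to 0
print(np.max(np.abs(x_adv - x)))  # perturbation is bounded by eps = 0.2
```

A robustness measure such as CLEVER asks, roughly, how large this kind of perturbation must be before any attack can succeed, without running a specific attack.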