Description
Autoencoders are an effective analysis tool for model-agnostic searches at the LHC. Unfortunately, it is known that their out-of-distribution (OOD) detection performance is not robust and depends heavily on the compressibility of the signals. Even if a neural network can learn the physical content of the low-level data, the gain in sensitivity to features of interest can be hindered by redundant information that is already explainable in terms of known physics. This poses the problem of constructing a representation space in which known physical symmetries are manifest and discriminating features are retained. I will present ideas in both directions. I will introduce a Machine Learning framework, known as Contrastive Learning, that allows one to define observables invariant under given transformations, and show how to use them for autoencoder-based anomaly detection.
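
To make the two ingredients concrete, below is a minimal sketch (not the speaker's actual implementation) of how they can be combined: a contrastive, SimCLR-style NT-Xent objective that pulls two augmented "views" of the same event together in an embedding space, enforcing invariance to the chosen transformations, and a small autoencoder trained on those embeddings whose reconstruction error serves as the anomaly score. The event features, augmentations, and network sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps low-level event features to a representation in which the
    contrastive loss enforces invariance to the chosen augmentations."""
    def __init__(self, in_dim=64, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings


def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss: embeddings of the two views of the same
    event are pulled together; all other pairs in the batch are pushed apart."""
    z = torch.cat([z1, z2], dim=0)            # (2N, d)
    sim = z @ z.t() / temperature             # cosine similarities (unit vectors)
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))  # drop self-similarity
    # positive of view-1 sample i is view-2 sample i, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


class Autoencoder(nn.Module):
    """Small autoencoder trained on the (frozen) contrastive embeddings;
    its reconstruction error is used as the anomaly score."""
    def __init__(self, emb_dim=16, bottleneck=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(emb_dim, 8), nn.ReLU(), nn.Linear(8, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 8), nn.ReLU(), nn.Linear(8, emb_dim))

    def forward(self, z):
        return self.dec(self.enc(z))


def anomaly_score(ae, z):
    """Per-event reconstruction error on the invariant representation."""
    with torch.no_grad():
        return ((ae(z) - z) ** 2).mean(dim=-1)


if __name__ == "__main__":
    # Toy stand-ins: random "events", with Gaussian smearing playing the role
    # of a physical symmetry transformation used to build the two views.
    encoder, ae = Encoder(), Autoencoder()
    x = torch.randn(256, 64)
    view1 = x + 0.05 * torch.randn_like(x)
    view2 = x + 0.05 * torch.randn_like(x)
    loss_contrastive = nt_xent(encoder(view1), encoder(view2))  # representation step
    z = encoder(x).detach()
    loss_ae = F.mse_loss(ae(z), z)                              # autoencoder step
    scores = anomaly_score(ae, z)                               # OOD score per event
    print(loss_contrastive.item(), loss_ae.item(), scores.shape)
```

In this scheme the invariance is fixed entirely by the augmentations fed to the contrastive loss, so symmetries of known physics can be "projected out" of the representation before the autoencoder ever sees the data, leaving the reconstruction error to focus on residual, potentially anomalous structure.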