CAGING
Causality-driven Generative Models for Privacy-preserving Case-based Explanations
In recent years, deep learning has become the state-of-the-art approach for most computer vision tasks. Even though the best performances reported in the literature are obtained with deep learning algorithms, industry remains reluctant to adopt them into its services, mainly due to their lack of interpretability. Several interpretability methods have recently been proposed, including saliency maps, natural language descriptions, and rule-based and case-based explanations. Among these, case-based explanations are among the most intuitive for human beings, as learning by example is our natural way of reasoning. Nonetheless, case-based explanations are sometimes precluded by privacy concerns: in applications where a person is identifiable in the image, particularly when the images are acquired for sensitive purposes, as is the case with medical images, the use of case-based explanations is effectively inhibited.
CAGING will build on top of our work on content-based medical image retrieval and privacy-preserving case-based explanations, and will move the research towards the generation of causal, privacy-preserving case-based explanations. In this exploratory research project, we intend to promote a causal design for the generation of privacy-preserving case-based explanations, starting from the explicit disentanglement of medical and identity features and moving towards a causal model in which interventions are expressed in terms of high-level semantic features; a minimal sketch of the disentanglement idea is given below.
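The sketch below (PyTorch) is only an illustration of the disentanglement idea described above, not the project's actual model: an encoder splits the latent code of a medical image into an identity part and a medical part, and a decoder reconstructs the image from both. A privacy-preserving explanation could then be obtained by intervening on the identity part (e.g., replacing it with that of a synthetic subject) while keeping the medical part fixed. All module and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class DisentangledAutoencoder(nn.Module):
    """Toy autoencoder with separate identity and medical latent factors."""

    def __init__(self, img_channels=1, id_dim=32, med_dim=32):
        super().__init__()
        # Shared convolutional encoder producing a flat feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 16 * 16  # assumes 64x64 input images
        # Separate heads for identity and medical (disease-related) factors.
        self.id_head = nn.Linear(feat_dim, id_dim)
        self.med_head = nn.Linear(feat_dim, med_dim)
        # Decoder maps the concatenated factors back to an image.
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + med_dim, feat_dim), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.id_head(h), self.med_head(h)

    def decode(self, z_id, z_med):
        return self.decoder(torch.cat([z_id, z_med], dim=1))

    def forward(self, x):
        z_id, z_med = self.encode(x)
        return self.decode(z_id, z_med)


if __name__ == "__main__":
    model = DisentangledAutoencoder()
    case = torch.rand(1, 1, 64, 64)    # retrieved explanatory case
    donor = torch.rand(1, 1, 64, 64)   # synthetic identity donor
    z_id_case, z_med_case = model.encode(case)
    z_id_donor, _ = model.encode(donor)
    # Intervention: keep the medical content of the case, swap its identity.
    anonymized_explanation = model.decode(z_id_donor, z_med_case)
    print(anonymized_explanation.shape)  # torch.Size([1, 1, 64, 64])
```

In practice, such a model would need losses that actually enforce the separation (e.g., an identity classifier on the identity factor and a disease classifier on the medical factor), and the causal version envisioned in CAGING would replace the simple latent swap with interventions on high-level semantic features.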