Explaining Knowledge Distillation by Quantifying the Knowledge

CVPR, 2020

Recommended citation: Xu Cheng, Yilan Chen*, Zhefan Rao*, Quanshi Zhang. Explaining Knowledge Distillation by Quantifying the Knowledge. CVPR 2020.

Download paper here


This paper presents a method to interpret the success of knowledge distillation by quantifying and analyzing the task-relevant and task-irrelevant visual concepts encoded in the intermediate layers of a deep neural network (DNN). More specifically, we propose the following three hypotheses.

1. Knowledge distillation makes the DNN learn more visual concepts than learning from raw data.
2. Knowledge distillation ensures that the DNN is prone to learning various visual concepts simultaneously, whereas a DNN learning from raw data tends to learn visual concepts sequentially.
3. Knowledge distillation yields more stable optimization directions than learning from raw data.

Accordingly, we design three types of mathematical metrics to evaluate the feature representations of a DNN. In experiments, we diagnosed various DNNs and verified the above hypotheses.
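To make the idea of "quantifying visual concepts" concrete, here is a minimal toy sketch, not the paper's exact implementation: it assumes a per-pixel entropy map (in the paper, estimated from an intermediate layer's features) and a foreground mask are already given, and counts a pixel as part of a visual concept when its entropy falls clearly below the average background entropy. The threshold margin `b` and the function name are illustrative assumptions.

```python
import numpy as np

def count_visual_concepts(entropy_map: np.ndarray,
                          fg_mask: np.ndarray,
                          b: float = 0.2):
    """Toy concept-counting metric (illustrative, not the authors' code).

    entropy_map : per-pixel entropy estimated from an intermediate layer
                  (assumed precomputed here).
    fg_mask     : boolean mask marking the object foreground.
    b           : assumed margin below the mean background entropy that
                  qualifies a pixel as encoding a visual concept.
    """
    # Mean entropy over background pixels serves as the reference level.
    h_bg = entropy_map[~fg_mask].mean()

    # A pixel encodes a "concept" if its entropy is clearly below background.
    concept = entropy_map < (h_bg - b)

    n_fg = int(np.sum(concept & fg_mask))    # task-relevant concept pixels
    n_bg = int(np.sum(concept & ~fg_mask))   # task-irrelevant concept pixels
    ratio = n_fg / max(n_fg + n_bg, 1)       # fraction of relevant concepts
    return n_fg, n_bg, ratio

# Usage with random toy data:
rng = np.random.default_rng(0)
H = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
print(count_visual_concepts(H, mask))
```

Under hypothesis 1, a distilled DNN would be expected to yield a larger concept count and a higher task-relevant ratio than the same architecture trained from raw data.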