> [**Hou, Yunzhong, and Liang Zheng. "Visualizing Adapted Knowledge in Domain Transfer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.**](https://openaccess.thecvf.com/content/CVPR2021/html/Hou_Visualizing_Adapted_Knowledge_in_Domain_Transfer_CVPR_2021_paper.html)
> [**code available**](https://github.com/hou-yz/DA_visualization)

_This paper studies the problem of visualizing the adapted knowledge in unsupervised domain adaptation (UDA). Specifically, it proposes a source-free image translation (SFIT) method that generates source-style images from the original target images under the guidance of the source and target models. The translated images fed to the source model yield results similar to those of the target images fed to the target model, indicating that they successfully depict the adapted knowledge. The translated images also exhibit the source style, and the degree of style transfer follows the performance of the UDA method, which further verifies that stronger UDA methods better bridge the distribution discrepancy between domains. The generated images can also be used to fine-tune the target model and may help other tasks such as incremental learning._

___

### _Contributions_

*Relying on images from both domains to indicate the style difference, such works cannot faithfully portray the knowledge difference between source and target models, and are unable to help us understand the adaptation process. In this paper, we propose a source-free image translation (SFIT) approach, where we **translate target images to the source style without using source images**. The exclusion of source images prevents the system from relying on image pairs for style difference indication, and ensures that the system only learns from the two models. Specifically, we **feed translated source-style images to the source model and original target images to the target model, and force similar outputs from these two branches by updating the generator network**. To this end, we use **the traditional knowledge distillation loss and a novel relationship preserving loss, which maintains relative channel-wise relationships between feature maps**. We show that the proposed relationship preserving loss also helps to bridge the domain gap while changing the image style, further explaining the proposed method from a domain adaptation point of view. Some results of our method are shown in Fig. 1. We observe that even under the source-free setting, knowledge from the two models can still power the style transfer from the target style to the source style (SFIT decreases color saturation and whitens the background to mimic the unseen source style).*

___

### _1. Method_

*Following many previous UDA works, we assume that only the feature extractor CNN in the source model is adapted to the target domain. Given a source CNN $f_S(\cdot)$ and a target CNN $f_T(\cdot)$ sharing the same classifier $p(\cdot)$, we train a generator $g(\cdot)$ for the SFIT task. We discuss why we choose this translation direction in Section 4.3. As the training process is source-free, for simplicity, we refer to the target image as $x$ instead of $x_T$ in what follows. As shown in Fig. 2, given a generated image $\tilde{x} = g(x)$, the source model outputs a feature map $f_S(\tilde{x})$ and a probability distribution $p(f_S(\tilde{x}))$ over all $C$ classes. To depict the adapted knowledge in the generated image, in addition to the traditional knowledge distillation loss, we introduce a novel relationship preserving loss, which maintains relative channel-wise relationships between the target-image-target-model feature map $f_T(x)$ and the generated-image-source-model feature map $f_S(\tilde{x})$.*

A minimal sketch of how these two losses could drive the generator update is given at the end of this post.

___

By Lingsgz On November 28, 2023
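The sketch below (PyTorch) shows one plausible way to combine the knowledge distillation loss and the relationship preserving loss described in the Method section. It is not the authors' official implementation from the linked `DA_visualization` repository: the function names, the weight `lam`, the temperature `T`, the global-average-pooling call, and the choice of L2-normalized channel-wise cosine similarity as the "relative channel-wise relationship" are all assumptions made here for illustration.

```python
# Hedged sketch, not the official SFIT code: one way the generator g(.) could be
# trained from the frozen source CNN f_S, frozen target CNN f_T, and shared classifier.
import torch
import torch.nn.functional as F


def knowledge_distillation_loss(logits_src, logits_tgt, T=1.0):
    """KL divergence between the source-branch and target-branch predictions."""
    p_t = F.softmax(logits_tgt.detach() / T, dim=1)   # target model acts as the teacher
    log_p_s = F.log_softmax(logits_src / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T


def channel_relation(feat):
    """Channel-wise self-similarity of a (B, C, H, W) feature map.

    Each channel is flattened and L2-normalized, so the (B, C, C) output holds
    cosine similarities between channels (one reading of the paper's
    "relative channel-wise relationships"; the exact form is an assumption).
    """
    b, c, h, w = feat.shape
    flat = F.normalize(feat.view(b, c, h * w), dim=2)
    return torch.bmm(flat, flat.transpose(1, 2))


def relationship_preserving_loss(feat_src, feat_tgt):
    """Match the channel-relation matrices of the two branches."""
    return F.mse_loss(channel_relation(feat_src), channel_relation(feat_tgt).detach())


def sfit_generator_step(generator, f_S, f_T, classifier, x, optimizer, lam=1.0):
    """One update of the generator; f_S, f_T, and the classifier stay frozen."""
    x_gen = generator(x)                      # translated, source-style image x~
    feat_src = f_S(x_gen)                     # generated image through the source CNN
    with torch.no_grad():
        feat_tgt = f_T(x)                     # original target image through the target CNN
    # Assumes the shared classifier takes globally pooled features.
    logits_src = classifier(feat_src.mean(dim=(2, 3)))
    logits_tgt = classifier(feat_tgt.mean(dim=(2, 3)))
    loss = knowledge_distillation_loss(logits_src, logits_tgt) \
        + lam * relationship_preserving_loss(feat_src, feat_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the target branch keeps the two frozen models fixed, so only the generator receives gradients, matching the source-free setup in which the system learns solely from the two models.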