> [__Islam, Ashraful, et al. "A broad study on the transferability of visual representations with contrastive learning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.__](https://openaccess.thecvf.com/content/ICCV2021/papers/Islam_A_Broad_Study_on_the_Transferability_of_Visual_Representations_With_ICCV_2021_paper.pdf)

> [__code available__](https://github.com/asrafulashiq/transfer_broad)

_The main goal of the paper is to understand the underlying mechanisms behind the better transferability of contrastive learning compared with standard supervised cross-entropy models._

### _Contributions_

1. Benchmark five methods, whose results show a similar trend: **contrastive learning extracts better features for transfer learning**.
2. Combining the supervised loss with a self-supervised contrastive loss improves transfer learning performance.
3. CKA analysis indicates that **contrastive models contain more low-level and mid-level information in the penultimate layers than standard cross-entropy models**.
4. **The contrastive models have higher intra-class variation than the standard cross-entropy models**, even though the networks are not explicitly trained to increase intra-class distance.

___

### _1.Loss Functions_

##### _(1).Supervised Cross-Entropy Loss_

_The standard loss function for multi-class classification._

##### _(2).Self-Supervised and Supervised Contrastive Loss_

_MoCo has two base networks: one is actively trained to extract query features, and the other is a moving average of the query encoder that extracts positive and negative features (commonly known as keys). A minimal sketch of these losses is given at the end of this post._

___

### _2.Experiments and Analysis_
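For concreteness, here is a minimal PyTorch-style sketch of the losses from Section 1 and of contribution 2 (adding a self-supervised contrastive term to the supervised loss). It is an illustrative assumption of the setup, not the paper's actual code: `encoder_q` / `encoder_k` stand for MoCo's query and momentum key encoders, `queue` for the buffer of negative keys, and `lam` for a hypothetical weighting factor.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # MoCo-style key encoder: a moving average of the query encoder.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def info_nce_loss(q, k, queue, tau=0.07):
    # q: (N, D) query features, k: (N, D) positive keys, queue: (K, D) negative
    # keys; all assumed L2-normalized. The positive key is treated as class 0.
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)   # (N, 1)
    l_neg = torch.einsum("nd,kd->nk", q, queue)            # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

def combined_loss(class_logits, targets, q, k, queue, lam=1.0):
    # Supervised cross-entropy plus the self-supervised contrastive term;
    # `lam` is a hypothetical weight, not a value taken from the paper.
    return F.cross_entropy(class_logits, targets) + lam * info_nce_loss(q, k, queue)
```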
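The CKA analysis mentioned in contribution 3 compares representations across layers and models. Below is a minimal sketch of linear CKA, assuming `X` and `Y` are activation matrices of shape (n_samples, n_features) collected from two layers; this follows the standard linear CKA formula and is not code from the paper.

```python
import torch

def linear_cka(X, Y):
    # Linear CKA between two activation matrices of shape (n_samples, n_features).
    X = X - X.mean(dim=0, keepdim=True)   # center each feature dimension
    Y = Y - Y.mean(dim=0, keepdim=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = (Y.t() @ X).norm(p="fro") ** 2
    den = (X.t() @ X).norm(p="fro") * (Y.t() @ Y).norm(p="fro")
    return (num / den).item()
```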