References:
[1]YU Y,FENG L,WANG G G,et al. A semi-supervised few-shot learning model based on pseudo-labels[J]. Acta Electronica Sinica,2019,47(11):2284-2291.
[2]LEE D H. Pseudo-label:The simple and efficient semi-supervised learning method for deep neural networks[C]//Proceedings of the ICML 2013 Workshop on Challenges in Representation Learning. Atlanta,USA:ICML,2013.
[3]FINI E,ASTOLFI P,ALAHARI K,et al. Semi-supervised learning made simple with self-supervised clustering[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). Vancouver,Canada:IEEE,2023.
[4]CHEN B X,JIANG J G,WANG X M,et al. Debiased self-training for semi-supervised learning[J/OL]. arXiv Preprint arXiv:2202.07136,2022.
[5]BAO Z Q,WANG L H. Semi-supervised deep subspace clustering based on pseudo-label correction[J]. Journal of Yantai University(Natural Science and Engineering Edition),2023,36(4):442-450.
[6]YANG H F. Contrastive self-supervised learning as a strong baseline for unsupervised hashing[C]//Proceedings of the 2022 IEEE 24th International Workshop on Multimedia Signal Processing(MMSP). Shanghai,China:IEEE,2022.
[7]DUAN Y,QI L,WANG L,et al. RDA:Reciprocal distribution alignment for robust semi-supervised learning[C]//Proceedings of the 17th European Conference on Computer Vision. Tel Aviv,Israel:ECCV,2022.
[8]LIAO L X,FENG L,LIU X L,et al. Semi-supervised few-shot learning method based on information alignment[J]. Computer Engineering and Design,2023,44(2):582-589.
[9]SOHN K,BERTHELOT D,LI C L,et al. FixMatch:Simplifying semi-supervised learning with consistency and confidence[J]. Advances in Neural Information Processing Systems,2020,33:596-608.
[10]SONG Y,XIAO Y Z,SONG X L. Unsupervised feature selection algorithm based on pseudo-label regression and manifold regularization[J]. Journal of Nanjing University(Natural Science),2023,59(2):263-272.
[11]XIE Q Z,DAI Z H,HOVY E,et al. Unsupervised data augmentation for consistency training[J]. Advances in Neural Information Processing Systems,2020,33:6256-6268.
[12]WEI J,ZOU K. EDA:Easy data augmentation techniques for boosting performance on text classification tasks[J/OL]. arXiv Preprint arXiv:1901.11196,2019.
[13]SUGIYAMA A,YOSHINAGA N. Data augmentation using back-translation for context-aware neural machine translation[C]//Proceedings of the 4th Workshop on Discourse in Machine Translation(DiscoMT 2019). Hong Kong,China:DiscoMT,2019.
[14]TARVAINEN A,VALPOLA H. Mean teachers are better role models:Weight-averaged consistency targets improve semi-supervised deep learning results[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach,USA:NIPS,2017.
[15]REN Z Z,YEH R A,SCHWING A G. Not all unlabeled data are equal:Learning to weight data in semi-supervised learning[J/OL]. arXiv Preprint arXiv:2007.01293,2020.
[16]SUN Z J,FAN C,SUN X F,et al. Neural semi-supervised learning for text classification under large-scale pretraining[J/OL]. arXiv Preprint arXiv:2011.08626,2020.
[17]DEVLIN J,CHANG M W,LEE K,et al. BERT:Pre-training of deep bidirectional transformers for language understanding[J/OL]. arXiv Preprint arXiv:1810.04805,2018.