Jiangsu Branch of China Telecom Co., Ltd., Nanjing 210000, China
[ "谢李沁(1996- ),女,现就职于中国电信股份有限公司江苏分公司,主要研究方向为云网运维与AI技术结合。" ]
[ "林文通(1992- ),男,现就职于中国电信股份有限公司江苏分公司,主要研究方向为云网运维。" ]
[ "张正(1989- ),男,现就职于中国电信股份有限公司江苏分公司,主要研究方向为云网运维与AI技术结合。" ]
[ "丁煜(1994- ),男,现就职于中国电信股份有限公司江苏分公司,主要研究方向为云网运维与AI技术结合。" ]
[ "束栋(1978- ),男,中国电信股份有限公司江苏分公司高级工程师,主要研究方向为云网运维。" ]
[ "朱雯慧(1994- ),女,现就职于中国电信股份有限公司江苏分公司,主要研究方向为云网运维与AI技术结合。" ]
Received: 2025-06-24
Revised: 2025-08-04
Accepted: 2025-08-12
Published in print: 2025-11-20
XIE Liqin, LIN Wentong, ZHANG Zheng, et al. Large language model for cloud-network configuration audit and its application in IP network[J]. Telecommunications Science, 2025, 41(11): 84-95. DOI: 10.11959/j.issn.1000-0801.2025261.
In the field of cloud-network operation and maintenance, network stability and security are of utmost importance. Apart from software and hardware failures of equipment, 70% of cloud-network failures are caused by non-standard configurations, so regularly auditing device configurations is particularly important. However, the traditional auditing method of manually writing rules and checking configuration text line by line is too inefficient to meet actual needs. To address this, a cloud-network configuration audit large language model fine-tuned with a reinforcement learning algorithm was designed and developed. The model can automatically detect and correct non-standard behaviors in network configurations, thereby enhancing the stability and security of cloud-network operation and maintenance. Test results show that the model achieves remarkable results in improving audit efficiency, reducing the occurrence rate of network failures, and cutting operation and maintenance costs. It provides an innovative solution for cloud-network configuration auditing and lays a foundation for subsequent research on model optimization, expanded application scenarios, and integration with emerging network technologies.
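The baseline the abstract contrasts against, hand-written rules checked line by line against a configuration dump, can be illustrated with a minimal sketch. The rule names, regexes, and sample configuration below are hypothetical examples for illustration only, not the authors' audit system or any operator's actual rule set.

```python
import re
from typing import List, Tuple

# Hand-written audit rules: (rule name, regex matching a non-compliant line).
# All rules below are hypothetical examples of "non-standard configuration".
RULES: List[Tuple[str, re.Pattern]] = [
    ("telnet-enabled", re.compile(r"^\s*transport input .*\btelnet\b")),
    ("default-snmp-community", re.compile(r"^\s*snmp-server community (public|private)\b")),
    ("http-server-enabled", re.compile(r"^\s*ip http server\b")),
]

def audit(config_text: str) -> List[Tuple[int, str, str]]:
    """Check every configuration line against every rule; return violations
    as (line number, rule name, offending line)."""
    violations = []
    for lineno, line in enumerate(config_text.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                violations.append((lineno, name, line.strip()))
    return violations

if __name__ == "__main__":
    # Hypothetical device configuration fragment.
    sample = """\
interface GigabitEthernet0/1
 ip address 10.0.0.1 255.255.255.0
ip http server
snmp-server community public RO
line vty 0 4
 transport input telnet
"""
    for lineno, rule, line in audit(sample):
        print(f"line {lineno}: [{rule}] {line}")
```

Every new rule in this scheme must be written, tested, and maintained by hand, which is the efficiency bottleneck the paper's reinforcement-learning fine-tuned model is designed to remove.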