XIE Liqin, LIN Wentong, ZHANG Zheng, et al. Large language model for cloud-network configuration audit and its application in IP network[J]. Telecommunications Science, 2025, 41(11): 84-95. DOI: 10.11959/j.issn.1000-0801.2025261.
Large language model for cloud-network configuration audit and its application in IP network
In the field of cloud-network operation and maintenance, network stability and security are of utmost importance. Apart from software and hardware failures of equipment, 70% of cloud-network failures are caused by non-standard configurations. It is therefore particularly important to audit device configurations regularly. However, the traditional auditing method of manually writing rules and checking configuration text line by line is too inefficient to meet actual needs. To address this, a cloud-network configuration auditing system based on a large language model fine-tuned with reinforcement learning was designed and developed. The model can automatically detect and correct non-standard behaviors in network configurations, thereby enhancing the stability and security of cloud-network operation and maintenance. Test results show that the model achieves remarkable improvements in auditing efficiency, reduces the occurrence rate of network failures, and cuts operation and maintenance costs. It provides an innovative solution for cloud-network configuration auditing and lays a foundation for subsequent research on model optimization, expanded application scenarios, and integration with emerging network technologies.
References
YAN X, HUANG H P, ZOU Z L. Distribution communication configuration audit method based on deep learning technology[C]//Proceedings of the 2024 IEEE 7th International Conference on Information Systems and Computer Aided Education (ICISCAE). Piscataway: IEEE Press, 2024: 989-994.
LI Y L, ZOU Z L, HUANG H P. Multi-protocol distribution network configuration audit and modeling method[C]//Proceedings of the 2024 International Conference on Electronics and Devices, Computational Science (ICEDCS). Piscataway: IEEE Press, 2025: 956-960.
KIM S, LEE S, BAIK D K, et al. Configuration management based configuration file version integrity auditing framework[C]//Proceedings of the Annual Conference of KIPS. Piscataway: IEEE Press, 2012: 1511-1514.
CALDWELL D, LEE S, SEN S, et al. Gold standard auditing for router configurations[C]//Proceedings of the 2010 17th IEEE Workshop on Local & Metropolitan Area Networks (LANMAN). Piscataway: IEEE Press, 2010: 1-6.
HE P J, ZHU J M, ZHENG Z B, et al. Drain: an online log parsing approach with fixed depth tree[C]//Proceedings of the 2017 IEEE International Conference on Web Services (ICWS). Piscataway: IEEE Press, 2017: 33-40.
HU E J, SHEN Y L, WALLIS P, et al. LoRA: low-rank adaptation of large language models[J]. arXiv preprint, 2021: 2106.09685.
TINN R, CHENG H, GU Y, et al. Fine-tuning large neural language models for biomedical natural language processing[J]. Patterns, 2023, 4(4): 100729.
DONG G T, YUAN H Y, LU K M, et al. How abilities in large language models are affected by supervised fine-tuning data composition[J]. arXiv preprint, 2023: 2310.05492.
CHEN T L, LIU S J, CHANG S Y, et al. Adversarial robustness: from self-supervised pre-training to fine-tuning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2020: 696-705.
SU D, XU Y, WINATA G I, et al. Generalizing question answering system with pre-trained language model fine-tuning[C]//Proceedings of the 2nd Workshop on Machine Reading for Question Answering. Piscataway: IEEE Press, 2019: 203-211.
SHAO Z W, YU Z, WANG M, et al. Prompting large language models with answer heuristics for knowledge-based visual question answering[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2023: 14974-14983.
ZHOU Y C, MURESANU A I, HAN Z W, et al. Large language models are human-level prompt engineers[J]. arXiv preprint, 2022: 2211.01910.
ZHANG J Y, HUANG J X, YAO H J, et al. R1-VL: learning to reason with multimodal large language models via step-wise group relative policy optimization[J]. arXiv preprint, 2025: 2503.12937.
DU Y Q, WATKINS O, WANG Z H, et al. Guiding pretraining in reinforcement learning with large language models[J]. arXiv preprint, 2023: 2302.06692.
CARTA T, ROMAC C, WOLF T, et al. Grounding large language models in interactive environments with online reinforcement learning[J]. arXiv preprint, 2023: 2302.02662.
DING N, QIN Y J, YANG G, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models[J]. Nature Machine Intelligence, 2023, 5(3): 220-235.
YU Q Y, ZHANG Z, ZHU R F, et al. DAPO: an open-source LLM reinforcement learning system at scale[J]. arXiv preprint, 2025: 2503.14476.