Prediction of Spent Nuclear Fuel Decay Heat Based on GPR-SVR Co-training
Keywords:
- Decay heat prediction
- Virtual data
- Co-training
- Spent fuel
- Gaussian Process Regression (GPR)
- Support Vector Regression (SVR)
Abstract: The decay heat released by spent fuel assemblies is the main source of residual core heat in PWR nuclear power plants, and its accurate prediction is crucial for the design and safety analysis of reactor cooling systems. However, traditional nuclide decay simulation codes are computationally expensive, and machine learning models may overfit when measured data are scarce. This study establishes a co-training model based on Gaussian Process Regression (GPR) and Support Vector Regression (SVR) to generate high-quality virtual decay heat samples. These virtual samples, combined with measured decay heat data, form a mixed dataset, which is used to train an Extreme Learning Machine (ELM) model for decay heat prediction. The results show that, compared with conventional machine learning models, the co-training approach significantly improves the stability and accuracy of decay heat predictions. After training on the mixed dataset, the prediction stability of the ELM model increased by 39.9%, and the RMSE of the predicted decay heat was 25.7% lower than that of the traditional nuclide decay simulation code. This research offers a new approach to the small-dataset problem in nuclear engineering.
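The co-training step described in the abstract (GPR and SVR each pseudo-labeling unlabeled points, with agreed-upon predictions accepted as virtual samples) can be sketched as below. This is a minimal illustration on synthetic data, not the paper's implementation: the agreement threshold, hyperparameters, and feature values are assumptions; the paper's actual inputs are assembly enrichment, initial uranium mass, burnup, and cooling time.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy labeled set: 4 features -> target (synthetic stand-in for
# enrichment, initial U mass, burnup, cooling time -> decay heat).
X_lab = rng.uniform(size=(30, 4))
y_lab = X_lab @ np.array([1.0, 2.0, 0.5, -1.0]) + 0.01 * rng.standard_normal(30)

# Unlabeled candidate points at which virtual samples may be generated.
X_unl = rng.uniform(size=(100, 4))

# Train the two views of the co-training pair on the measured data.
gpr = GaussianProcessRegressor(random_state=0).fit(X_lab, y_lab)
svr = SVR(C=10.0).fit(X_lab, y_lab)

# Pseudo-label the candidates with both learners and keep only points
# where the two models agree closely (threshold is an assumption).
y_gpr = gpr.predict(X_unl)
y_svr = svr.predict(X_unl)
agree = np.abs(y_gpr - y_svr) < 0.1
X_virtual = X_unl[agree]
y_virtual = 0.5 * (y_gpr[agree] + y_svr[agree])  # averaged pseudo-label

# Mixed dataset = measured samples + accepted virtual samples,
# later used to train the downstream ELM model.
X_mixed = np.vstack([X_lab, X_virtual])
y_mixed = np.concatenate([y_lab, y_virtual])
```

The agreement filter is what distinguishes co-training from naive pseudo-labeling: a virtual sample is admitted only when two structurally different learners independently support it.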
Table 1. Summary of Spent Fuel Decay Heat Data

| Reactor | Assembly lattice | Enrichment/% | Initial U mass/kg | Burnup/[MW·d·t⁻¹(U)] | Cooling time after discharge/d |
| --- | --- | --- | --- | --- | --- |
| Ringhals3 | 17×17 | 2.1~3.103 | 451.743~489.050 | 19699~47308 | 4717~7304 |
| Ringhals2, Turkey Point | 15×15 | 3.095~3.252 | 423.896~455.789 | 33973~50962 | 5804~8468 |
| San Onofre1, Point Beach2 | 14×14 | 3.397~4.005 | 361.72~386.80 | 26540~39384 | 1078~3012 |
Table 2. Summary of GPR and SVR Predictions
| Model | RE/% (Point 1) | RE/% (Point 2) | RE/% (Point 3) | RMSE/% | MAPE/% |
| --- | --- | --- | --- | --- | --- |
| $M_{\mathrm{GPR}0}$ | 72.8 | 2.4 | 9.0 | 72.0 | 5.7 |
| $M_{\mathrm{SVR}0}$ | 10.5 | 32.9 | 1.3 | 76.3 | 3.8 |
| Neural network | - | - | - | 90.8 | 15.4 |
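The error measures used in the tables can be computed as follows. This sketch assumes the common relative-error conventions (RE per point, RMSE over relative errors, MAPE as their mean); the paper's exact normalization is not restated in this excerpt, and the example values are hypothetical.

```python
import numpy as np

def relative_error(y_true, y_pred):
    """Pointwise relative error RE, in percent."""
    return 100.0 * np.abs(y_pred - y_true) / np.abs(y_true)

def rmse_percent(y_true, y_pred):
    """Root-mean-square of the relative errors, in percent."""
    return 100.0 * np.sqrt(np.mean(((y_pred - y_true) / y_true) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# Hypothetical measured vs. predicted decay heat values (W).
y_true = np.array([100.0, 200.0])
y_pred = np.array([110.0, 190.0])
re = relative_error(y_true, y_pred)  # [10., 5.]
```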
Table 3. Summary of Decay Heat Prediction Errors
| Model | Max RE/% | Min RE/% | RMSE/% |
| --- | --- | --- | --- |
| $M_{\mathrm{ELM}}^{\mathrm{co}}$ | 4.54 | 0.01 | 12.05 |
| $M_{\mathrm{ELM}}^{0}$ | 10.39 | 0.35 | 39.42 |
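The ELM model trained on the mixed dataset admits a compact closed-form implementation: the hidden-layer weights are drawn at random and fixed, and only the output weights are solved by least squares via the Moore-Penrose pseudoinverse. A minimal sketch on synthetic data; the network size, activation, and data are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=50, rng=rng):
    """Fit an ELM: random fixed hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic regression task standing in for the decay heat mapping.
X = rng.uniform(size=(200, 4))
y = np.sin(X.sum(axis=1))
W, b, beta = elm_fit(X, y)
y_hat = elm_predict(X, W, b, beta)
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because only `beta` is learned, training reduces to a single pseudoinverse, which is what makes the ELM cheap enough to retrain repeatedly on mixed real-plus-virtual datasets.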