Current Issue

2025, Volume 46, Issue 2

Cover Article
Research and Prospects of Artificial Intelligence Scientific Computing for Nuclear Industry
Liu Dong, Tian Wenxi, Liu Xiaojing, Hao Chen, Peng Hang, Yu Yang, Xiao Cong
2025, 46(2): 1-13. doi: 10.13832/j.jnpe.2024.09.0027
Abstract:
Scientific computation is pivotal throughout the nuclear industry's technical framework, encompassing everything from the creation of nuclear databases to the design, analysis, validation, and operation of nuclear power engineering, as well as the reprocessing of nuclear fuel and the decommissioning of reactors. Historically, the scientific computing paradigm in the industrial field has mainly been based on statistical methods for modeling experimental measurement data, as well as numerical computing methods represented by the solution of differential/integral equations. As artificial intelligence (AI) technology advances, leveraging AI for scientific computation is emerging as a novel paradigm. This paper introduces the basic principles and main features of this emerging technology field, focusing on the characteristics of the nuclear industry. It summarizes current research work and analyzes the advantages and disadvantages of AI scientific computing methods compared with traditional methods. The paper concludes with a prospective look at the future development trends of intelligent computation in the nuclear field, along with potential application scenarios, offering insights to foster the evolution of AI in scientific computation within the nuclear industry.
General Basic Theory and Method of Artificial Intelligence
An Overview of Data Fusion Methods for the Digital Twin of Nuclear Reactor
Song Meiqi, Chen Fukun, Liu Xiaojing
2025, 46(2): 14-37. doi: 10.13832/j.jnpe.2024.11.0148
Abstract:
The development of the digital twin of the nuclear reactor has the potential to enhance the safety and economic efficiency of nuclear power plants by achieving cyber-physical fusion, and the key challenge of cyber-physical fusion is data fusion. This paper therefore focuses on the field of the digital twin of the nuclear reactor, starting from the definition of data fusion, the objects and levels of fusion, fusion methods, and the relationship between digital twins and data fusion. Subsequently, the application and research status of data fusion methods across the entire life cycle of the nuclear reactor digital twin are discussed from eight perspectives: the construction of the digital twin model of the nuclear reactor, optimization issues in the design and construction of the nuclear reactor, the inversion and reconstruction of nuclear reactor operating parameters, the prediction of nuclear reactor operating parameters and remaining service life, the calibration of nuclear reactor operating parameters, the feedback and control of nuclear reactor operation, the fault detection, identification and diagnosis of the nuclear reactor, and data fusion in other aspects of the nuclear reactor digital twin. In conclusion, the challenges in current research have been identified from the aspects of data and fusion methods, providing references for addressing key data fusion issues in the future development of digital twins for nuclear reactors.
Study on the Selection Method of Probability Density Distribution in Nuclear Data Stochastic Sampling
Wang Yizhen, Hao Chen
2025, 46(2): 38-47. doi: 10.13832/j.jnpe.2024.12.0174
Abstract:
For statistical learning algorithms related to various core physics calculations that take nuclear data as the analysis input, providing stochastic disturbance samples consistent with the known statistical moment information and physical constraints of the nuclear data is fundamental. Reasonably perturbed nuclear data samples are one of the important factors ensuring the prediction accuracy of data-driven artificial intelligence models such as core physics response feature extraction and reduced-order modeling. Selecting a probability density distribution that can meet the physical constraints of the nuclear data itself is the key to ensuring the rationality of such stochastic sampling. This work focuses on two types of physical constraints commonly seen in evaluated nuclear data libraries, namely non-negativity constraints (e.g., fission product yields, nuclear reaction cross sections) and normalization constraints (e.g., decay branching ratios), studies the corresponding probability density distribution selection methods, and provides the corresponding sampling algorithms. Combined with the uncertainty information provided in the evaluated nuclear data library, this work compares the stochastic sampling effects of nuclear data with different probability density distributions and gives some suggestions on the selection of probability density distributions.
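As a schematic illustration of the two constraint types discussed above (a sketch under assumed, hypothetical parameter values, not the paper's algorithms), non-negativity can be respected by drawing from a lognormal distribution matched to the evaluated mean and standard deviation, and a normalization constraint by normalizing gamma draws, i.e. Dirichlet sampling:

```python
import math
import random

random.seed(0)

def sample_nonneg(mean, std, n):
    """Lognormal samples matching the given mean/std, guaranteeing positivity
    (e.g. for a cross section); mean and std here are illustrative values."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)   # moment matching
    mu = math.log(mean) - 0.5 * sigma2
    return [random.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]

def sample_normalized(alphas, n):
    """Dirichlet samples (normalized gamma draws) for quantities that must
    sum to one, e.g. decay branching ratios."""
    out = []
    for _ in range(n):
        g = [random.gammavariate(a, 1.0) for a in alphas]
        s = sum(g)
        out.append([v / s for v in g])
    return out

xs = sample_nonneg(mean=2.5, std=0.5, n=1000)    # hypothetical cross section
brs = sample_normalized([8.0, 1.5, 0.5], 200)    # hypothetical branching ratios
```

Every lognormal draw is strictly positive and every Dirichlet draw sums to one by construction, so the physical constraints hold for all samples rather than only on average.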
Reduced Order Modeling for Neutron Transport Equation Based on Operator Inference
Xiao Wei, Liu Xiaojing, Zhang Tengfei, Zu Jianhua, Chai Xiang, He Hui
2025, 46(2): 48-55. doi: 10.13832/j.jnpe.2024.080042
Abstract:
To establish a real-time prediction model for the time-dependent neutron transport equation, affine-parametric operator inference is employed to train a reduced-order model of the neutron transport equation. Operator inference, through singular value decomposition and the solution of an optimization problem, non-intrusively fits the operators of the reduced dynamic equations in the subspace while preserving the physical structure described by the original governing equations. The affine-parametric structure effectively addresses the time-varying parameters commonly encountered in time-dependent neutron transport equations, enabling rapid computation from time-varying core parameters to physical quantities without parameter space interpolation. The numerical results show that the reduced-order model based on high-fidelity data and affine-parametric operator inference has good generalization ability, accurately solving transient problems under different time-varying parameters. Therefore, the reduced-order model proposed in this study can be used for real-time prediction of high-fidelity neutron transport solutions.
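The non-intrusive fitting step of operator inference can be sketched on a toy linear system (an illustrative construction, not the paper's neutron transport model): snapshots are compressed with an SVD-based POD basis, and the reduced operator is recovered by least squares:

```python
import numpy as np

n, r, dt, steps = 50, 3, 1e-3, 400
A = -np.diag(np.linspace(0.5, 5.0, n))        # stable toy full-order operator
x = np.zeros(n)
x[[0, 10, 20]] = [1.0, -2.0, 1.5]             # dynamics confined to 3 modes
X = [x.copy()]
for _ in range(steps):                        # explicit Euler snapshots of dx/dt = A x
    x = x + dt * (A @ x)
    X.append(x.copy())
X = np.array(X).T                             # snapshot matrix, shape (n, steps+1)

U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                                  # POD basis from the SVD
Xr = V.T @ X                                  # reduced trajectories
dXr = (Xr[:, 1:] - Xr[:, :-1]) / dt           # finite-difference time derivatives
# Non-intrusive least-squares fit of the reduced operator (operator inference)
A_hat = np.linalg.lstsq(Xr[:, :-1].T, dXr.T, rcond=None)[0].T
A_ref = V.T @ A @ V                           # intrusive Galerkin reference
err = np.linalg.norm(A_hat - A_ref) / np.linalg.norm(A_ref)
```

Because the fit uses only snapshot data, the same recipe applies when the full-order solver is a black box; the affine-parametric variant in the paper additionally makes the fitted operators depend on time-varying parameters.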
Research on Lattice Boltzmann Solution of Generalized Convection Diffusion Equation Based on Physical Fusion Neural Network
Wang Yahui, Xiao Hao, Ma Yu, Xie Yuchen, Chi Honghang
2025, 46(2): 56-67. doi: 10.13832/j.jnpe.2024.070062
Abstract:
In order to improve the network reusability of deep learning methods and construct a deep network model suitable for different governing equations and different physical parameters, a lattice Boltzmann method based on a physics-fusion neural network (PFNN-LBM) is proposed in this study. A unified discrete-velocity Boltzmann equation for equations with different characteristics is established under the lattice Boltzmann method and solved using a parameterized physics-informed neural network with a single network. The PFNN-LBM can simultaneously solve governing equations of different forms and with different physical parameters within one single training. To test the accuracy and adaptability of PFNN-LBM, four types of macroscopic equations, including the diffusion equation, nonlinear heat conduction equation, Sine-Gordon equation and Burgers-Fisher equation, are selected for prediction analysis. The prediction performance under different physical parameters and on the two-group neutron diffusion equations is also tested. The calculation results show that the proposed PFNN-LBM can solve governing equations of different forms and with different physical parameters with high accuracy after one training. This work provides a novel framework for solving different types of equations efficiently and flexibly, and for engineering applications it may have outstanding advantages in multi-physics coupling calculations.
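For readers unfamiliar with the lattice Boltzmann side of PFNN-LBM, a minimal classical D1Q2 lattice Boltzmann scheme for the 1D diffusion equation (the conventional solver, not the neural network; grid size and relaxation time are illustrative) looks like this:

```python
import numpy as np

# D1Q2 lattice Boltzmann for 1D diffusion on a periodic domain. With c_s^2 = 1
# for D1Q2, the diffusion coefficient is D = c_s^2 * (tau - 1/2) lattice units.
N, tau, T = 100, 1.0, 1000
D = tau - 0.5
k = 2 * np.pi / N
xs = np.arange(N)
rho = 1.0 + 0.5 * np.sin(k * xs)            # initial scalar field: one sine mode
f = np.stack([rho / 2, rho / 2])            # populations for velocities +1, -1
for _ in range(T):
    rho = f.sum(axis=0)
    feq = np.stack([rho / 2, rho / 2])      # equilibrium distribution
    f = f - (f - feq) / tau                 # BGK collision
    f[0] = np.roll(f[0], 1)                 # stream right-moving population
    f[1] = np.roll(f[1], -1)                # stream left-moving population
rho = f.sum(axis=0)
amp = (rho - 1.0).max()                     # surviving sine amplitude
amp_exact = 0.5 * np.exp(-D * k**2 * T)     # analytical decay of the mode
```

The sine mode decays at the analytical diffusion rate, which is the kind of discrete-velocity dynamics the PFNN-LBM network is trained to reproduce for several macroscopic equations at once.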
Using Convolutional Neural Networks to Distinguish Nucleon Effective Mass Splitting
Li Li, Zhang Yingxun, Yang Junping, Cui Ying, Chen Xiang, Wang Xinyu, Zhao Kai
2025, 46(2): 68-75. doi: 10.13832/j.jnpe.2024.09.0028
Abstract:
In order to accurately distinguish the effective mass splitting of protons and neutrons in nuclear matter, a dual-channel-input convolutional neural network (CNN) is proposed for determining the effective mass splitting of nucleons. The main idea of this method is to use a CNN to learn the longitudinal and transverse momentum distributions of proton and neutron yields calculated by a theoretical model. The theoretical model used in this study is the improved quantum molecular dynamics model (ImQMD), with the effective interaction parameters SkM* and SLy4, which correspond to the neutron effective mass being larger and smaller than the proton effective mass, respectively. Through learning from a large amount of model data, a method to distinguish the effective mass splitting of nucleons by CNN is established. Analysis of the three neutron-rich reaction systems—48Ca+208Pb, 48Ca+124Sn, and 124Sn+124Sn—shows that all three systems achieve the highest resolution accuracy at a beam energy of 50 MeV/u, exceeding 99.5%. At a beam energy of 270 MeV/u, the resolution accuracy of all three systems remains above 93%. Using the blocking method, the importance regions of the longitudinal and transverse momentum distributions of proton and neutron yields were investigated. Analysis of the importance maps for the three systems at 50 MeV/u indicates that the two-dimensional energy spectra of nucleons in the low transverse momentum region are more sensitive to the effective mass splitting of nucleons.
Research on Algorithm of Solving Neutron Equation Based on ResNet-PINN
Niu Yixiao, Li Jiafang, Yang Chun, Liu Yang, Lai Qiuyu, Fu Meirui, Jiang Yi
2025, 46(2): 76-80. doi: 10.13832/j.jnpe.2024.080035
Abstract:
As a deep learning method integrating physical knowledge, the Physics-Informed Neural Network (PINN) has certain limitations in solution accuracy. To further enhance the solution accuracy of the PINN model, an improved PINN model based on the Residual Network (ResNet) structure (ResNet-PINN) is proposed. The basic principle and numerical calculation process of ResNet-PINN are elaborated in detail, and it is applied to the solution of neutron diffusion and transport equations in the nuclear field. Experimental validation shows that ResNet-PINN improves the solution accuracy of the reactor core neutron diffusion equation by a factor of 2 to 10 and that of the transport equation by a factor of 3 to 6, effectively overcoming the solution accuracy limitations faced by the PINN model.
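The structural idea behind residual blocks of the kind ResNet-PINN uses can be sketched in a few lines (a generic assumed block of the form x + F(x), not the paper's architecture): with zero correction weights the block reduces to the identity map, which is what eases the training of deeper networks:

```python
import numpy as np

def res_block(x, W1, b1, W2, b2):
    """Residual block: identity shortcut plus a two-layer correction F(x)."""
    h = np.tanh(x @ W1 + b1)          # hidden layer
    return x + (h @ W2 + b2)          # skip connection: out = x + F(x)

rng = np.random.default_rng(0)
d, hdim = 4, 16
x = rng.standard_normal((8, d))
# Zero-initialized correction -> the block is exactly the identity map,
# so stacking many blocks cannot degrade the signal at initialization.
out0 = res_block(x, np.zeros((d, hdim)), np.zeros(hdim),
                 np.zeros((hdim, d)), np.zeros(d))
```

In a PINN the input/output would be spatial coordinates and the predicted flux, with the PDE residual as the loss; the skip connections change only the network body.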
Deep Learning Solution Technology for Sparse Data of Multi-Channel Flow Field of PWR Rod Bundle
Qian Hao, Chen Guangliang, Liu Dong, Yu Yang, Jiang Hongwei, Yin Xinli, Yang Yucheng
2025, 46(2): 81-89. doi: 10.13832/j.jnpe.2024.080039
Abstract:
The Reynolds number can reach 10⁵ under typical reactor core operating conditions, and the coolant flow exhibits significant nonlinearity. An inevitable mismatch between the actual flow boundaries and states and the ideal flow equations can lead to conflicts between the data and the constraints of the governing equations during the solving process, and this mutual restriction can cause convergence difficulties. This paper develops a sparse-data solving method based on deep learning to address this issue. By designing an adaptive mismatch correction scheme, an adaptive adjustment factor is introduced into the governing equations to correct the ideal model dynamically. This approach overcomes the convergence difficulties and accuracy issues caused by the inconsistency between the data and the equations. Based on this technology, the study further explores flow field-solving strategies under small-sample data conditions and designs uniform, velocity-gradient-based, and hybrid point distribution strategies. These strategies aim to optimize the spatial distribution of sample points to improve the overall accuracy of the flow field solutions. The results show that among the three strategies, the uniform point distribution strategy provides the most comprehensive coverage of the overall flow field characteristics and achieves the best optimization effect, with an R² value greater than 0.95 and an MSE on the order of 10⁻⁴ to 10⁻³. Moreover, even with only 60 sample points (7.8% of the original data points), the proposed method can still achieve high-accuracy flow field solutions, providing an efficient and highly applicable approach to solving the multi-channel flow field of PWR core rod bundles under sparse data conditions.
Research on Rapid Reconstruction Technology of Temperature Field in Heat Transfer Tube of Steam Generator Based on POD and Neural Network
Zhang He, Liang Biao, Wang Bo, Tan Sichao, Han Rui, Li Jiangkuan, Tian Ruifeng
2025, 46(2): 90-97. doi: 10.13832/j.jnpe.2024.070047
Abstract:
The secondary flow region of a casing once-through steam generator involves complex two-phase flow. Although numerical simulation methods can achieve precise simulation calculations, they are slow for multi-condition and transient calculations and consume significant computational resources. Model order reduction transforms a complex system into an approximately simplified system, enabling rapid calculations while retaining the main characteristics of the original system. This study employs the Proper Orthogonal Decomposition (POD) method to build a reduced model of the temperature field inside the heat exchange tubes, capturing the modal coefficients by projecting the original complex system onto a limited number of modes. A neural network method is applied to capture the distribution patterns of the short- and long-term time series of modal coefficients. The results indicate that the error in predicting the reconstructed temperature field is within 15%, and the prediction speed is improved by four orders of magnitude compared with numerical simulation methods. Therefore, the prediction method established in this study, which couples model order reduction with neural networks, can be utilized for rapid prediction of the temperature field within the casing, providing support for internal thermal-hydraulic analysis.
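The POD projection/reconstruction step described above can be sketched on a toy rank-2 "temperature field" (an illustrative stand-in for the CFD snapshots; in the paper, a neural network predicts the modal coefficients over time):

```python
import numpy as np

# Toy snapshot matrix: a separable "temperature field" of exact rank 2.
x = np.linspace(0.0, 1.0, 200)                        # spatial grid
t = np.linspace(0.0, 1.0, 80)                         # time instants
T = (np.outer(np.sin(np.pi * x), np.exp(-t))          # decaying mode
     + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(2 * np.pi * t)))

U, s, Vt = np.linalg.svd(T, full_matrices=False)
r = 2
modes = U[:, :r]                   # spatial POD modes
coeffs = modes.T @ T               # modal coefficients over time (what a NN would learn)
T_rec = modes @ coeffs             # reconstruction from r modes only
rel_err = np.linalg.norm(T_rec - T) / np.linalg.norm(T)
```

Because the field is exactly rank 2 here, two modes reconstruct it to machine precision; for real CFD data the singular value decay dictates how many modes (and hence how many coefficients the neural network must predict) are needed.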
Study on Quantification of Parameter Uncertainty in Reflooding Model Based on Random Forest Algorithm
Lei Meng, Li Dong, Zhang Ziyue, Hao Rao
2025, 46(2): 98-106. doi: 10.13832/j.jnpe.2024.070031
Abstract:
In order to assess the uncertainty of the physical models (inputs) of complex accidents, an inverse uncertainty quantification method based on the Random Forest algorithm, combined with a PSO-Kriging surrogate model and KDE-SJ nonparametric statistics, is proposed and applied to model assessment of reflooding in large-break accidents. The probability density distributions of the model parameters were obtained by using the degree of consistency between the calculation results (output) of the system code and the FEBA experimental data as the classification criterion for the Random Forest algorithm. The validation results show that the 95% uncertainty bands obtained by randomly sampling 93 groups of calculations from the probability density distributions can completely envelop the experimental data, but the calibration effect of the model using the mode or mean may not be as good as that of the maximum posterior mean obtained by the Bayesian method.
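The nonparametric density estimation step can be sketched with a plain Gaussian KDE using Silverman's rule-of-thumb bandwidth (a simplified stand-in for the KDE-SJ step, which uses the Sheather-Jones bandwidth; the sample set below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(1.2, 0.3, size=500)   # accepted parameter values (hypothetical)

def gaussian_kde(samples, grid):
    """Gaussian kernel density estimate with Silverman's rule-of-thumb
    bandwidth (the paper uses the Sheather-Jones bandwidth instead)."""
    n = samples.size
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 2.4, 481)
pdf = gaussian_kde(samples, grid)
area = pdf.sum() * (grid[1] - grid[0])     # Riemann check of normalization
```

The estimated density integrates to one and peaks near the sample mean; in the paper's workflow such a density over the accepted parameter values becomes the input distribution for the 93-sample uncertainty propagation.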
POD-RBF Based ROM Method to Calculate Temporal-Spatial Temperature Distribution under DLOFC Accident for VHTR
Ding Yongwang, Zhang Han, Peng Chuzhen, Wu Yingjie, Guo Jiong, Peng Wei, Zhang Ping, Li Fu
2025, 46(2): 107-118. doi: 10.13832/j.jnpe.2024.10.0056
Abstract:
The Very High-Temperature Gas-cooled Reactor (VHTR) has a wide range of applications, such as hydrogen production by nuclear energy. The Depressurized Loss of Forced Cooling (DLOFC) accident is one of the most serious design basis accidents of the VHTR. Analyzing the DLOFC accident over a large number of different input parameters with the Full Order Model (FOM) incurs a large computational cost, so a Reduced Order Model (ROM) that can calculate DLOFC accidents quickly and accurately for different schemes within the design parameter space is of great demand and significance. In this paper, the FOM of the VHTR is established with the code TINTE, and a ROM for fast calculation of the DLOFC accident of the VHTR is realized based on the Proper Orthogonal Decomposition-Radial Basis Function interpolation (POD-RBF) method. Two methods are given to realize the transient calculation of the ROM: Method 1 treats time as an input parameter alongside quantities such as inlet temperature; Method 2 calculates the coefficients of all time steps under the same parameter as a whole. The results show that the maximum relative error of both ROM methods is less than 1%, and the computational efficiency of the ROMs is much higher than that of the FOM; furthermore, the computational efficiency of Method 2 is 40 times that of Method 1. Therefore, the ROM proposed in this paper can provide a fast calculation code for the optimization of VHTR design parameters.
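The RBF interpolation stage of POD-RBF can be sketched as follows (a 1D toy response standing in for the map from input parameters to POD coefficients; the centers, shape parameter, and response function are illustrative):

```python
import numpy as np

# Gaussian RBF interpolation: exact at the training points, smooth in between.
def rbf_fit(xc, yc, eps):
    Phi = np.exp(-(eps * (xc[:, None] - xc[None, :])) ** 2)   # Gram matrix
    return np.linalg.solve(Phi, yc)                           # interpolation weights

def rbf_eval(x, xc, w, eps):
    Phi = np.exp(-(eps * (x[:, None] - xc[None, :])) ** 2)
    return Phi @ w

xc = np.linspace(0.0, 1.0, 9)              # training parameter values
yc = np.sin(2 * np.pi * xc)                # training responses (toy POD coefficient)
w = rbf_fit(xc, yc, eps=5.0)

x_test = np.linspace(0.0, 1.0, 101)
y_pred = rbf_eval(x_test, xc, w, eps=5.0)
err_nodes = np.abs(rbf_eval(xc, xc, w, eps=5.0) - yc).max()
err_test = np.abs(y_pred - np.sin(2 * np.pi * x_test)).max()
```

Once trained, evaluating the interpolant is a small matrix-vector product, which is why a ROM of this kind can sweep design parameters orders of magnitude faster than the full-order solver.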
Research on Efficient Solution of Neutron Physics Equations Using NAS-Optimized PINN
Yu Caiyang, Jiang Yong, Chen Qilong, Liu Dong, Lyu Jiancheng
2025, 46(2): 119-126. doi: 10.13832/j.jnpe.2024.090041
Abstract:
To solve the neutron diffusion and transport equations in the core quickly and accurately, Physics-Informed Neural Networks (PINN) can be utilized to enhance the speed and efficiency of solving partial differential equations. However, the predefined structure of PINNs is relatively inflexible, limiting their width and depth in practical applications. This study proposes an innovative approach for determining the optimal PINN structure (NAS-PINN), which employs a Neural Architecture Search (NAS) strategy to dynamically select the most suitable PINN structure for solving the neutron diffusion and transport equations of nuclear reactors. The PINN model identified through this search is applied to equation solving, and experimental verification compares the true and predicted values. The results show that the NAS-PINN method achieves higher accuracy in solving reactor equations with different geometries, and provides a more accurate and efficient solution for complex neutron equations.
Intelligent Design R&D Technology of Reactors
Application of Artificial Intelligence Algorithms in Thermal-Hydraulic Analysis of Nuclear Reactors
Zhang Jing, Wang Mingjun, Tian Wenxi, Su Guanghui, Qiu Suizheng
2025, 46(2): 127-140. doi: 10.13832/j.jnpe.2024.090039
Abstract:
The advantages of artificial intelligence (AI) algorithms in rapid prediction, self-learning, and strong generalizability have been applied to address the complexities of thermal-hydraulic phenomena and mechanisms in nuclear reactors. These applications include predictions of thermal-hydraulic parameters, optimization of thermal safety analysis codes, and enhancements in computational fluid dynamics (CFD) efficiency. This paper reviews the current state of research on AI algorithms in predicting thermal-hydraulic parameters such as flow regimes, boiling heat transfer, and critical flow. To address challenges such as unknown mechanisms and limited prediction ranges under extreme operating conditions, this study leverages the nonlinear rapid prediction capabilities of AI to expand the scope and accuracy of analyses. For thermal analysis codes constrained by parameter models, the self-learning, adaptive, and highly generalizable features of AI are utilized to improve the identification and prediction of complex phenomenon parameters through model calibration and data assimilation techniques. By employing model reduction and fast prediction methods, AI enhances the computational efficiency and the multidimensional reconstruction of complex thermal-hydraulic physical fields. Furthermore, the study highlights the future prospects of AI algorithms in accurately predicting the full lifecycle performance of key components in large-scale reactor systems, accelerating design iterations for advanced reactors such as liquid-metal fast reactors, and optimizing cross-scale, multiphysics interactions in a more efficient manner.
Offline Parameter Optimization of Steam Generator Liquid Level Control System Based on NSGA-II Algorithm
Sun Zhejun, Wei Xinyu, Zhang Nan, Li Mingqian, Zhang Ruiping, Wang Yulong, Sun Peiwei
2025, 46(2): 141-147. doi: 10.13832/j.jnpe.2024.090057
Abstract:
The steam generator is an important piece of equipment in nuclear power plants. Currently, the liquid level of the steam generator is mainly controlled by a fixed-parameter Proportional-Integral-Derivative (PID) controller, so it is necessary to tune the PID parameters. Traditional parameter tuning methods rely on precise mathematical models, and their effectiveness is poor when accurate model information is unavailable. Therefore, this article proposes a method for tuning the PID parameters of the steam generator liquid level control system that extracts historical data for offline tuning. Firstly, a back propagation (BP) neural network is used to identify the model of the steam generator liquid level control system based on historical data. Then, the PID parameters are optimized offline on the established BP neural network model. The parameter tuning method adopts the Nondominated Sorting Genetic Algorithm (NSGA-II), with the dynamic performance indices of the control system as the objective functions, adjusting the PID parameters to improve the control effect. The proposed algorithm is validated through simulation in MATLAB/Simulink. The results demonstrate that the steam generator water level control system with offline parameter optimization exhibits superior overshoot and settling time under different operating conditions compared with the original system, achieving better control effectiveness.
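The non-dominated sorting at the heart of NSGA-II can be sketched in a few lines (toy objective pairs such as overshoot and settling time, both minimized; the candidate values are hypothetical):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: solutions that no other solution dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (overshoot %, settling time s) for candidate PID parameter sets -- illustrative
candidates = [(5.0, 30.0), (2.0, 45.0), (8.0, 25.0), (6.0, 40.0), (2.0, 50.0)]
front = pareto_front(candidates)
```

NSGA-II repeatedly ranks the population into such fronts and breaks ties with crowding distance, so the tuned PID gains trade off overshoot against settling time rather than optimizing a single weighted score.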
Research on the Solution and Acceleration Algorithm of Source Iteration Method Based on PINN
Jiang Yong, An Ping, Liu Dong, Yu Yang
2025, 46(2): 148-155. doi: 10.13832/j.jnpe.2024.090040
Abstract:
This paper integrates physics-driven artificial intelligence methods with the traditional source iteration method to establish a novel approach for solving the few-group diffusion equations, and employs the Anderson acceleration method to accelerate the iterative source term. The results of numerical examples, such as a two-dimensional multi-material problem and a three-dimensional single-material problem, show that the combination of physics-driven Physics-Informed Neural Networks (PINN) and the traditional source iteration method can calculate the continuous neutron flux density distribution while ensuring calculation accuracy. The Anderson acceleration method reduces the number of iterations, successfully achieving the forward solution of the few-group neutron diffusion equations. This advancement promotes the application of artificial intelligence algorithms in the nuclear field.
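The classical source iteration that the paper accelerates can be sketched as a fixed-point update on a toy system (illustrative matrices, not a real few-group problem; the paper replaces the solve inside each iteration with a PINN and applies Anderson acceleration to the source term):

```python
import numpy as np

# Fixed-point source iteration phi = S phi + q, convergent when the spectral
# radius of S is < 1 (S stands in for the lagged scattering/fission source).
S = np.array([[0.30, 0.10],
              [0.20, 0.40]])
q = np.array([1.0, 0.5])

phi = np.zeros(2)
for it in range(200):
    phi_new = S @ phi + q              # update flux from the lagged source
    if np.abs(phi_new - phi).max() < 1e-12:
        phi = phi_new
        break
    phi = phi_new

phi_direct = np.linalg.solve(np.eye(2) - S, q)   # reference direct solve
```

The iteration converges geometrically at the spectral radius of S; Anderson acceleration mixes previous iterates to shrink exactly this iteration count, which matters when each update involves an expensive PINN solve.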
Development of Prediction Model for Two-phase Flow Regime in Nuclear Reactor Core Based on Artificial Neural Network
Ma Yichao, Kong Dexiang, Tian Wenxi, Zhang Jing, Wu Yingwei, Qiu Suizheng, Su Guanghui
2025, 46(2): 156-163. doi: 10.13832/j.jnpe.2024.090038
Abstract:
To fully leverage the increasing experimental data on flow regimes to expand model applicability and improve prediction accuracy, this study collected experimental data, established a training database, and performed data preprocessing. A two-phase flow regime prediction model was developed based on the artificial neural network (ANN) algorithm. The model's prediction accuracy in various flow directions was analyzed and compared with traditional flow regime prediction models. The results show that the new model achieves an average accuracy of 88.56% on the training set and 87.86% on the test set. The proposed model can be directly applied to various operating conditions without causing misclassification of flow regimes in different directions. Compared to the Ishii model, Mandhane model, and Taitel model, the ANN-based model demonstrates superior prediction accuracy. This study provides a novel method for flow regime prediction, and with the continuous updating of training data, the applicability and accuracy of the model can be further improved.
Research on Optimization of Pressurized Water Reactor Core Loading Pattern Based on Neural Network and Genetic Algorithm
Chen Gang, Zou Jian, Liu Shichang, Cai Yun, Wang Lianjie
2025, 46(2): 164-176. doi: 10.13832/j.jnpe.2024.080038
Abstract:
Core loading pattern (LP) optimization can enhance the safety and economy of reactors. However, the optimization process demands considerable time-consuming computation and extensive manual experience. Aiming at the rapid evaluation of core LP schemes, this study employed a fully connected neural network (FCNN) and a convolutional neural network (CNN) to establish a rapid prediction model for the neutronic parameters of the Daya Bay first-cycle core, enabling rapid assessment of pressurized water reactor core LP schemes. The generalization ability and accuracy of the prediction model were verified against the core calculation code DONJON. Regarding the global search for the core optimization scheme, the non-dominated sorting genetic algorithm (NSGA) was utilized to carry out multi-objective optimization of the LP scheme for the Daya Bay first-cycle core, and the optimization effect was enhanced by adjusting the parameters of the NSGA algorithm. The results suggest that the NSGA family of algorithms can be applied to various types of nuclear design optimization, including core LP optimization, and can compensate for the limited global coverage of manual search. Moreover, parallel optimization of the NSGA algorithm combined with supercomputing can significantly enhance optimization efficiency. For rapid optimization of the core LP scheme, a joint optimization code was developed by coupling the GPU-parallel neural network prediction model with the NSGA algorithm, achieving rapid optimization of the Daya Bay first-cycle core LP scheme. Comparing the results of the joint optimization code with those of "DONJON + NSGA" shows that the neural network-genetic algorithm joint optimization code can obtain core LP schemes with closely similar results while reducing the optimization time by over 99%.
Research on Parameter Prediction for Transient Conditions in Rod Bundle Subchannel Based on POD-ML Method
Xu Yujie, Mo Jinhong, Dong Xiaomeng, Liu Yong, Xu Anqi, Yu Yang
2025, 46(2): 177-185. doi: 10.13832/j.jnpe.2024.080031
Abstract:
The Reduced Order Model (ROM) effectively reduces the complexity of physical models by mapping full-order conservation equations to lower-order subspaces or building data-driven surrogate models. Compared with traditional computational fluid dynamics (CFD) simulation, ROM is more efficient in large-scale simulation. In this paper, a ROM framework is proposed by combining Proper Orthogonal Decomposition (POD) with machine learning (ML) to predict mass flow parameters in rod bundle subchannels. Comparison of prediction methods for different ways of combining POD and ML shows that the LSTM+POD method is more suitable for short-term prediction, while the POD+LSTM method has less error in long-term prediction, which can provide a solution for making predictions of other complex systems in the future.
Study on Coordinated Control Method of Reactor Power Based on Multi-Agent Reinforcement Learning
Niu Zhenfeng, Li Tong, Li Jiangkuan, Liu Yongchao, Lyu Wei, Tan Sichao, Tian Ruifeng
2025, 46(2): 186-192. doi: 10.13832/j.jnpe.2024.080030
Abstract:
To improve the precision of coordinated control between reactor power and steam generator water levels in nuclear power plants, a multi-agent reinforcement learning coordination control framework based on Twin Delayed Deep Deterministic Policy Gradient (TD3) is proposed in this study, in which the various subtasks are assigned to corresponding agents that cooperate with each other to accurately coordinate the reactor power and steam generator water levels. The performance of the framework under different operating conditions was evaluated through a series of simulation experiments. The experimental results demonstrate that the multi-agent control framework significantly improves control speed and stability under various power switching conditions, with both overshoot and control time outperforming those of traditional proportional-integral-derivative (PID) controllers. In addition, the framework shows excellent generalization ability under untrained new conditions, effectively improving the precision and stability of coordinated reactor power control.
Research on Data-driven Intelligent Optimization Design of Mobile Microreactor Shielding
Lei Kaihui, Wu Hongchun, He Qingming, Cao Yi, Li Xiaojing, Liu Guoming
2025, 46(2): 193-201. doi: 10.13832/j.jnpe.2024.080024
Abstract:
In order to quickly obtain a lightweight shielding design scheme for a mobile microreactor that meets engineering preferences, a multi-objective intelligent optimization algorithm coupled with a data-driven surrogate model is employed to optimize the operational shielding of a land-based mobile microreactor under multiple constraints and engineering preferences. We initially construct the dataset by sampling advanced shielding material and geometry parameters in the variable-scale optimization space and train the surrogate model (SN-MscaleDNN), which consists of the multi-frequency-scale neural network MscaleDNN and a GPU-parallel 1-D neutron-photon coupled transport SN solver, to achieve stable, accurate, and efficient dose rate prediction. This model is then coupled with the NSGA-II genetic algorithm, incorporating penalty functions and engineering preference models, to achieve a final shielding optimization that satisfies multiple constraints such as dose rate safety, material and mechanical limitations, and engineering preferences. The results confirm the surrogate model's ability to accurately predict the dose rates of a shielding scheme at the millisecond level, with a generalization error under 10%. Furthermore, the coupled optimization algorithm enables an efficient search for more shielding schemes that meet engineering constraints and preferences. The method established in this study can be used for lightweight shielding optimization design of mobile microreactors in a variable-scale optimization space.
Research on PWR Core Refueling Optimization Method Based on Bayesian Optimization
Zhou Yuancheng, Li Yunzhao, Wu Hongchun
2025, 46(2): 202-208. doi: 10.13832/j.jnpe.2024.09.0003
Abstract:
Refueling optimization for pressurized water reactor (PWR) cores is crucial for the safe, efficient, and cost-effective operation of nuclear power plants; it is a constrained, nonlinear, non-convex integer combinatorial optimization problem. Traditional methods often struggle with low computational efficiency and the risk of getting trapped in local optima. This paper presents a refueling optimization approach based on variational autoencoders, deep metric learning, and Bayesian optimization. The method leverages variational autoencoders to map discrete core layout configurations into a continuous latent space. Deep metric learning is then used to structure the latent space so that samples with similar core physical characteristics are positioned closer together. Multi-objective Bayesian optimization is subsequently applied to efficiently search for optimal solutions in this latent space, and a decoder transforms the optimal latent variables back into corresponding core layouts. Experimental validation using the first-cycle initial loading data of an M310 core demonstrates that this method significantly improves refueling optimization efficiency and solution quality, producing better configurations than traditional methods.
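The Bayesian optimization step can be sketched in one dimension (a toy objective standing in for the latent-space search; the GP kernel, length scale, and objective below are assumptions, and the VAE/metric-learning stages are not modeled):

```python
import math
import numpy as np

def f(x):                                   # hypothetical objective, maximized
    return -(x - 0.3) ** 2

def gp_posterior(Xs, ys, Xq, ell=0.2, jitter=1e-8):
    """Noise-free GP posterior mean/std with a squared-exponential kernel."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    K = k(Xs, Xs) + jitter * np.eye(Xs.size)
    Kq = k(Xq, Xs)
    mu = Kq @ np.linalg.solve(K, ys)
    var = 1.0 - np.einsum('ij,ij->i', Kq, np.linalg.solve(K, Kq.T).T)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (mu - best) * cdf + sd * pdf

Xs = np.array([0.0, 0.5, 1.0])              # initial designs
ys = f(Xs)
grid = np.linspace(0.0, 1.0, 101)
for _ in range(5):                          # BO loop: fit GP, maximize EI, evaluate
    mu, sd = gp_posterior(Xs, ys, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, ys.max()))]
    Xs = np.append(Xs, x_next)
    ys = np.append(ys, f(x_next))
best_x = Xs[np.argmax(ys)]
```

Each iteration spends one (in the paper, expensive core-physics) evaluation where the surrogate's expected improvement is largest, which is what makes the latent-space search sample-efficient compared with random or exhaustive search.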
Research on Wear Life Prediction of CRDM Transmission Pair Based on Machine Learning
Xiao Cong, Liu Chengmin, Luo Ying, Peng Hang, Li Wei, Zhang Zhiqiang, Huang Qingyu
2025, 46(2): 209-216. doi: 10.13832/j.jnpe.2024.080027
Abstract(32) HTML (8) PDF(3)
Abstract:
The Control Rod Drive Mechanism (CRDM) is the only equipment in the reactor whose components operate in relative motion. Because it adjusts reactor reactivity quickly, it is critical to the safe operation of the reactor. Wear is the main factor behind functional failure of the CRDM transmission pair and directly determines its service life. Wear life tests on the CRDM transmission pair show that its three main wear forms are abrasive wear, fatigue wear, and oxidation wear. The tests also show that when the wear volume ratio at the top of the transmission pair reaches 16.46%, rod slippage occurs in the drive mechanism, indicating that the rotating parts have worn out; the wear volume at this moment is therefore taken as the failure threshold of the transmission pair. After obtaining transmission pair wear degradation data and external vibration signals, the relationship between internal wear and external vibration signals is constructed. Life prediction models for the CRDM transmission pair are then built from the external vibration signals using three machine learning algorithms: Support Vector Regression (SVR), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM). Comparative analysis shows that, in terms of prediction accuracy, the LSTM model outperforms the CNN model, which in turn outperforms the SVR model, while in terms of computational efficiency the SVR model surpasses the CNN model, which in turn surpasses the LSTM model.
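The threshold-crossing logic can be illustrated with a deliberately simple stand-in for the paper's SVR/CNN/LSTM models: fit a linear wear trend and extrapolate to the 16.46% failure threshold. The wear measurements below are synthetic.

```python
# Ordinary least-squares fit of wear-ratio (%) against operating cycles.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

def cycles_to_failure(cycles, wear_ratio, threshold=16.46):
    # Extrapolate the fitted trend to the failure threshold from the test data.
    a, b = linear_fit(cycles, wear_ratio)
    return (threshold - a) / b

cycles = [0, 100, 200, 300, 400]
wear = [0.0, 2.1, 4.0, 6.2, 8.1]   # synthetic wear-volume-ratio measurements (%)
eol = cycles_to_failure(cycles, wear)
```

A data-driven model such as LSTM replaces the linear trend with a learned mapping from vibration features to remaining life, but the failure criterion plays the same role.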
Study of a Three-way Coupling Calculation Method Based on Particle Transport-Activation Calculation-Intelligent Optimization
Zheng Zheng, Wang Mengqi, Mei Qiliang, Li Hui
2025, 46(2): 217-221. doi: 10.13832/j.jnpe.2024.090006
Abstract(27) HTML (10) PDF(1)
Abstract:
Existing shielding design optimization techniques usually rely on the experience of the designer, which is inefficient and highly uncertain. To improve the efficiency of shielding design, this paper combines a multi-objective optimization algorithm with radiation shielding calculation and develops a three-way coupling calculation code based on particle transport, activation calculation, and intelligent optimization. The code is validated using the shielding calculation model constructed in this study. The numerical results show that the shielding design optimization method based on the discrete ordinates method and a genetic algorithm can simultaneously optimize multiple objectives, such as shielding volume, weight, activation dose rate after reactor shutdown, and normal operation dose rate.
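The genetic-algorithm side of such a coupling can be sketched as follows. The transport and activation solves are replaced here by hypothetical analytic models (`dose`, `weight`), and the multiple objectives are scalarized with a constraint penalty; this is an illustrative toy, not the paper's code.

```python
import math
import random

def dose(thicknesses):    # fictitious exponential attenuation model
    return 100.0 * math.exp(-0.5 * sum(thicknesses))

def weight(thicknesses):  # fictitious areal-density model
    return 3.0 * sum(thicknesses)

def fitness(ind, dose_limit=1.0):
    # Weighted objective with a penalty on the dose-rate constraint.
    penalty = 1e3 * max(0.0, dose(ind) - dose_limit)
    return weight(ind) + penalty

def ga(pop_size=30, n_gen=60, n_layers=3, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 10.0) for _ in range(n_layers)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # blend crossover + mutation
            child = [(x + y) / 2 + rng.gauss(0, 0.2) for x, y in zip(a, b)]
            children.append([min(10.0, max(0.0, t)) for t in child])
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

A true multi-objective run (as in the paper) would keep a Pareto front instead of a scalarized fitness, but the evaluate-select-recombine loop is the same.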
Intelligent O&M Technology of Reactors
Research on Intelligent Monitoring and Warning Algorithms for Unexpected Reactor Shutdown Events in Nuclear Power Plants
Li Xi, Wang Jiansheng, Yang Senquan, Xue Wei
2025, 46(2): 222-229. doi: 10.13832/j.jnpe.2024.090025
Abstract(33) HTML (13) PDF(0)
Abstract:
The detection of abnormal conditions during the operation of nuclear power plant units mainly relies on threshold alarm information from the Digital Control System (DCS) and lacks trend analysis. This paper investigates the establishment of logical relationships between variables through event logic and, on that basis, employs an Auto-Associative Neural Network (AANN) model for anomaly detection of correlated variables. Finally, it uses the Empirical Mode Decomposition (EMD) trend extraction algorithm and the adaptive sliding-window Holt linear trend (HOLT) model to predict abnormal variables. This approach can provide early warnings for shutdown and reactor trip events, enabling plant operators to detect and resolve issues earlier, thus improving the operational safety of nuclear power plants. Testing experiments were conducted using both simulated data and actual unit anomaly data. The results from real data experiments show a mean squared error (MSE) of 0.1 and a goodness of fit (R²) of 0.99, with at least 1 hour of advance warning before shutdown actions. This confirms the accuracy and early warning capabilities of the proposed AANN-HOLT warning algorithm.
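Holt's linear-trend method at the core of the forecasting step is compact enough to show directly. The sketch below is the fixed-parameter textbook form, not the paper's adaptive-sliding-window variant, and the observations are synthetic.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=5):
    """Holt's linear-trend exponential smoothing: maintain a level and a
    trend estimate, then extrapolate the trend over the forecast horizon."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

obs = [10.0, 10.5, 11.1, 11.4, 12.0, 12.6]   # synthetic drifting variable
pred = holt_forecast(obs, horizon=3)
```

For an anomalous variable that is drifting toward a trip threshold, the extrapolated trend is what buys the advance-warning time reported in the abstract.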
Study on Transient Parameter Prediction and Fault Diagnosis of Nuclear Power Plant Based on LSTM Neural Network
Liu Tao, Xie Jinsen
2025, 46(2): 230-238. doi: 10.13832/j.jnpe.2024.080036
Abstract(48) HTML (16) PDF(6)
Abstract:
To improve the accuracy and real-time performance of parameter prediction and fault diagnosis under transient conditions in nuclear power plants, this study employs a Long Short-Term Memory (LSTM) neural network model for prediction and diagnosis. By generating and randomizing fault scenarios, the model’s dependence on specific patterns is reduced, and its generalization capability in unknown fault situations is enhanced. The study integrates SHAP (SHapley Additive exPlanations) to conduct interpretability analysis on the parameter prediction results, evaluates the impact of different input features on the model’s predictive performance, and verifies its effectiveness under sensor failures and data transmission errors. Furthermore, fault diagnosis is performed on transient parameters with different noise levels to validate the model's robustness. The results demonstrate that the LSTM model achieves high accuracy in both prediction and diagnosis, and it maintains excellent performance even under sensor failures, data transmission errors, and noisy data. The method proposed in this study can improve the safety and stability of nuclear power plant operation and provide effective technical support for safety under accident conditions.
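The feature-attribution step can be illustrated with permutation importance, a lightweight stand-in for the SHAP analysis used in the paper (both measure how much a feature's information contributes to predictions). The "model" below is a fixed toy function, not an LSTM.

```python
import random

def model(x):  # hypothetical predictor: feature 0 dominates feature 1
    return 3.0 * x[0] + 0.1 * x[1]

def permutation_importance(model, X, y, feature, seed=0):
    # MSE increase when one feature column is shuffled: a large increase
    # means the model relies heavily on that feature.
    rng = random.Random(seed)
    base = sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(X)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    Xp = [list(row) for row in X]
    for row, v in zip(Xp, col):
        row[feature] = v
    perm = sum((model(row) - t) ** 2 for row, t in zip(Xp, y)) / len(X)
    return perm - base

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [model(row) for row in X]
imp0 = permutation_importance(model, X, y, 0)
imp1 = permutation_importance(model, X, y, 1)
```

Unlike SHAP, permutation importance gives one global score per feature rather than per-sample attributions, but the interpretability question it answers is the same.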
Research on Prediction Method of Reactor Axial Power Deviation Based on Combined Feature Selection and Temporal Convolutional Network
Chen Jing, Chen Yan, Jiang Hao, Duan Pengbin, Lin Weiqing, Qiu Xinghua, Xu Yong
2025, 46(2): 239-247. doi: 10.13832/j.jnpe.2024.090021
Abstract(38) HTML (10) PDF(0)
Abstract:
The axial power deviation of a reactor reflects the axial power distribution of the core and the operating state of the reactor. To address the difficulty of predicting axial power deviation under variable operating conditions, this paper proposes a prediction method based on combined feature selection and a temporal convolutional network (TCN). Starting from the basic principle of axial power deviation control, the paper analyzes the factors driving changes in axial power deviation, together with the redundancy and correlation among multi-dimensional features. A combined feature selection strategy is used to form the optimal feature subset for axial power deviation prediction, and the resulting key correlated feature data are input into a TCN to capture dynamic causality, thereby achieving prediction of the reactor axial power deviation. Experimental studies show that the proposed method can deeply explore the temporal causal characteristics of the parameters related to axial power deviation, accurately predict its development trend, and overcome the failure of traditional prediction models to predict and track in time under complex operating conditions, providing an auxiliary reference for reactor status monitoring and the safe operation of nuclear power plants.
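The defining operation of a TCN is the causal dilated convolution: the output at time t depends only on the present and past, and dilation widens the receptive field without extra parameters. A minimal single-channel version:

```python
def causal_dilated_conv(x, weights, dilation):
    """One causal dilated convolution, the building block of a TCN:
    out[t] depends only on x[t], x[t-d], x[t-2d], ... (left zero-padded)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation
            acc += w * (x[j] if j >= 0 else 0.0)  # zero-pad before t=0
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = causal_dilated_conv(x, [0.5, 0.5], dilation=2)  # averages x[t] and x[t-2]
```

Stacking such layers with dilations 1, 2, 4, ... gives the exponentially growing history window that lets a TCN track slow trends like axial power deviation; a full TCN adds learned weights, residual connections, and nonlinearities.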
Research on Intelligent Diagnosis and Monitoring Method for Abnormal Operation Events of Reactor Coolant System
Yao Yuantao, Zhe Na, Yong Nuo, Xia Dongqin, Ge Daochuan, Yu Jie
2025, 46(2): 248-254. doi: 10.13832/j.jnpe.2024.080023
Abstract(34) HTML (14) PDF(0)
Abstract:
To solve the problem that traditional deep learning (DL)-based intelligent fault diagnosis models cannot monitor unknown abnormal operation events, this work constructs an intelligent diagnosis framework for the reactor coolant system based on a Variational Inference-based Probabilistic Deep Neural Network (VI-PDNN), enabling the diagnosis of unknown abnormal operation event categories while quantitatively evaluating the uncertainty of the output results. The framework exploits the uncertainty difference between known and unknown operating events to achieve effective monitoring and warning of unknown abnormal operating events. Finally, the proposed methodology is validated with simulation data from an established reactor simulator. The results show that the proposed method not only achieves high diagnostic accuracy for known events but also effectively monitors and warns against unknown abnormal events, providing an effective means for real-time intelligent diagnosis and monitoring of reactor system operation in real environments.
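The uncertainty-thresholding idea can be sketched independently of the network: average class probabilities over stochastic forward passes and flag the input as unknown when the predictive entropy is high. Here the probabilistic network is replaced by pre-supplied sample logits; the 0.8-nat threshold is an illustrative choice, not the paper's.

```python
import math

def predictive_entropy(prob):
    return -sum(p * math.log(p) for p in prob if p > 0)

def diagnose(sample_logits, threshold=0.8):
    # Softmax each stochastic pass, then average into mean class probabilities.
    probs = []
    for logits in sample_logits:
        m = max(logits)
        e = [math.exp(v - m) for v in logits]
        s = sum(e)
        probs.append([v / s for v in e])
    mean = [sum(p[i] for p in probs) / len(probs) for i in range(len(probs[0]))]
    if predictive_entropy(mean) > threshold:
        return "unknown event"            # diffuse/disagreeing predictions
    return f"class {mean.index(max(mean))}"  # confident known-event diagnosis

known = [[5.0, 0.0, 0.0]] * 10                 # confident, consistent passes
unknown = [[1.0, 0.9, 1.1], [0.9, 1.1, 1.0]]   # diffuse, disagreeing passes
```

Known events yield sharp, repeatable probabilities (low entropy); inputs outside the training distribution yield near-uniform averages, which is exactly the signal used to raise an unknown-event warning.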
Research on the Characteristics of Personnel Situation Awareness under the Intelligent Control System of Nuclear Power Plants
Zheng Tengjiao, Duan Pengfei, Hou Jie, Xu Yunlong
2025, 46(2): 255-260. doi: 10.13832/j.jnpe.2024.080010
Abstract(57) HTML (14) PDF(0)
Abstract:
The application of intelligent technology in Nuclear power plants (NPPs) has made the operation and control more centralized and automated, while also bringing new and deeper human factors issues. As situational awareness (SA) is an important factor affecting personnel efficiency in complex human-machine systems, it is necessary to conduct research on the characteristics and improvement methods of personnel SA in intelligent control systems of NPPs. This article takes the automation level of control systems [intelligent control systems VS digital control systems (DCS)] as the independent variable, selects typical scenarios for experiments, uses multimodal physiological measurement techniques and subjective scales to identify the SA characteristics of personnel under intelligent control systems, and proposes preventive measures for SA errors. The experimental results show that, compared with the DCS, there is a significant difference in the skin conductance data and fixation point data of the operator under the application of intelligent control system; The operator's attention level tends to be focused and communication frequency decreases to ensure that automatic actions are executed on time; The number of points that the operator focuses on has significantly increased, and the operator pays more attention to the changes in parameters on the interface. The intelligent control system requires a higher level of SA of personnel in the accident occurrence and handling stages, but the level of attention/emotional arousal gradually increases in the event of an accident, and it may not be possible to quickly reach the highest level during the accident occurrence and handling stage, which may result in a mismatch between needs and reality. This study provides human factors data and theoretical support for the design optimization and implementation of intelligent control systems in the main control room of NPPs.
Prediction of Nuclear Power Plant Operating Parameters Based on Transfer Learning between Simulation and Measurement Data
Pu Ke, Song Houde, Liu Xiaojing, Song Meiqi
2025, 46(2): 261-271. doi: 10.13832/j.jnpe.2024.080004
Abstract(37) HTML (10) PDF(1)
Abstract:
The key to the safe operation of nuclear power plants is accurate prediction of their operating parameters. In recent years, data-driven methods have shown strong predictive capabilities, but insufficient measurement data limits their performance. Based on a transfer learning framework, this study develops a prediction model construction method that is pre-trained on multiple sets of simulation conditions and then fine-tuned on measured data. First, a Gated Recurrent Unit (GRU) neural network is trained with simulation data; the model is then fine-tuned using part of the measurement data to predict the future state of the operating conditions. The feasibility of the method is verified using measurement data from the B3.1 experiment on the PKL Ⅲ thermal-hydraulic test facility and 9 sets of similar RELAP5 simulation data. With this method, the relative errors of steam pressure, steam temperature, downcomer fluid temperature, outlet temperature, inlet temperature, and mass flow rate reach 0.358%, 0.065%, 0.020%, 0.065%, 0.028%, and 1.705%, respectively. Finally, five sets of numerical experiments are used to compare and illustrate the effectiveness of each module of the method.
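The pretrain-then-fine-tune scheme can be demonstrated with a deliberately tiny stand-in: a linear model trained by SGD on plentiful synthetic "simulation" data, then fine-tuned with a smaller learning rate on a handful of synthetic "measurement" points whose trend differs slightly, as plant data would.

```python
import random

def sgd(w, data, lr, epochs, rng):
    # Plain per-sample SGD on a 1-D linear model y = w[0] + w[1] * x.
    for _ in range(epochs):
        rng.shuffle(data)
        for x, t in data:
            err = w[0] + w[1] * x - t
            w[0] -= lr * err
            w[1] -= lr * err * x
    return w

def mse(w, data):
    return sum((w[0] + w[1] * x - t) ** 2 for x, t in data) / len(data)

rng = random.Random(0)
sim = [(i / 100, 2.0 * i / 100 + 0.5) for i in range(100)]           # simulation trend
meas = [(i / 100, 2.2 * i / 100 + 0.4) for i in range(0, 100, 20)]   # sparse measurements

w_pre = sgd([0.0, 0.0], list(sim), lr=0.1, epochs=200, rng=rng)    # pretrain
w_ft = sgd(list(w_pre), list(meas), lr=0.05, epochs=200, rng=rng)  # fine-tune
```

Pretraining supplies a good initialization from cheap simulation data; the low-learning-rate fine-tune then corrects the residual simulation-to-measurement mismatch with far fewer measured points than training from scratch would need. A GRU follows the same recipe with sequence inputs.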
Prediction of Spent Nuclear Fuel Decay Heat Based on GPR-SVR Co-training
Liu Zihao, Liu Tong, Wen Xin, Li Yi, Wang Beiqi
2025, 46(2): 272-281. doi: 10.13832/j.jnpe.2024.070016
Abstract(38) HTML (10) PDF(3)
Abstract:
The decay heat released by spent fuel assemblies is the main source of reactor core waste heat in PWR nuclear power plants. Accurate prediction of decay heat is crucial for the design and safety analysis of the nuclear power plant cooling system. However, traditional nuclide decay simulation codes are computationally expensive, and machine learning models may overfit when data are insufficient. This study establishes a co-training model based on Gaussian Process Regression (GPR) and Support Vector Regression (SVR) to generate high-quality virtual decay heat samples. These virtual samples, combined with measured decay heat data, form a mixed dataset, which is used to train an Extreme Learning Machine (ELM) model for decay heat prediction. The results show that, compared with a conventional machine learning model, the co-training approach significantly enhances the stability and accuracy of decay heat predictions. After training on the mixed dataset, the prediction stability of the ELM model increased by 39.9%, and the RMSE of the predicted decay heat was 25.7% lower than that of the traditional nuclide decay simulation code. This research provides new insights for addressing the small-sample problem in the field of nuclear engineering.
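The co-training idea — accept a virtual sample only when two different learners agree on its label — can be sketched with two simple stand-in regressors (nearest-neighbour and linear) in place of GPR and SVR. The data, tolerance, and regressors are all illustrative.

```python
def nn_predict(train, x):
    # Nearest-neighbour "view" of the data.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def linear_predict(train, x):
    # Least-squares linear "view" of the data.
    n = len(train)
    mx = sum(p[0] for p in train) / n
    my = sum(p[1] for p in train) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in train) / \
        sum((p[0] - mx) ** 2 for p in train)
    return my + b * (x - mx)

def co_train(labeled, unlabeled, tol=0.5):
    # Keep only unlabeled inputs on which the two views agree, labeling them
    # with the average prediction: these become virtual training samples.
    virtual = []
    for x in unlabeled:
        y1, y2 = nn_predict(labeled, x), linear_predict(labeled, x)
        if abs(y1 - y2) < tol:
            virtual.append((x, (y1 + y2) / 2))
    return virtual

labeled = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]  # synthetic data
virtual = co_train(labeled, [0.4, 1.6, 2.2, 10.0])
```

The far-extrapolation point (10.0) is rejected because the two views disagree there, which is how co-training avoids polluting the mixed dataset with unreliable virtual labels before the downstream model (ELM in the paper) is trained.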
Research on Intelligent Accident Diagnosis Model of Nuclear Reactor Coolant System
Yan Jiasheng, Sui Yang, Dai Tao, Liu Jiayi, Jin Yi, Jia Xiaolong
2025, 46(2): 282-292. doi: 10.13832/j.jnpe.2024.060034
Abstract(28) HTML (12) PDF(2)
Abstract:
Although artificial intelligence technology has been extensively employed in accident diagnosis for nuclear power plants, conventional models often suffer from insufficient accuracy and poor generalizability, failing to meet the stringent requirements for accident diagnosis of the nuclear reactor coolant system (NRCS). This study establishes a new intelligent accident diagnosis model for the NRCS. Firstly, to enhance diagnostic accuracy, convolutional neural networks (CNN) and gated recurrent units (GRU) were integrated: the powerful feature extraction capability of the CNN and the efficient time-series classification ability of the GRU were combined to establish the NRCS accident diagnosis model (CNN-GRU). Secondly, to enhance the generalizability of the model, the grey wolf optimizer (GWO) algorithm was used to adaptively optimize the hyperparameters of the CNN-GRU model, thereby establishing the NRCS intelligent accident diagnosis model (GWO-CNN-GRU). Finally, to validate the performance of the proposed model, the NRCS in the personal computer transient analyzer (PCTRAN) was used as the object of study, and the diagnostic process of one normal operating condition and four typical accident conditions was simulated. The results demonstrate that the proposed model achieved an average accident diagnosis accuracy of 99.6% on the NRCS test set for the CPR1000 reactor type, an improvement of 2.1% and 1.5% over the GRU and CNN-GRU models, respectively; similarly, on the NRCS test set for the AP1000 reactor type, it achieved an average accuracy of 99.5%, an increase of 1.7% and 1.3% over the other two models. The proposed model therefore demonstrates superior accuracy and generalizability, providing a valuable reference for intelligent accident diagnosis of the NRCS.
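The grey wolf optimizer itself is simple enough to sketch. Below is a compact, simplified variant (leaders are kept fixed within each iteration, which preserves the best solution); the CNN-GRU validation loss is replaced by a toy sphere function, so this is an illustration of GWO, not the paper's tuning run.

```python
import random

def gwo(f, dim=2, n_wolves=10, n_iter=100, lo=-5.0, hi=5.0, seed=3):
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for it in range(n_iter):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]  # three leaders
        a = 2.0 - 2.0 * it / n_iter        # encircling coefficient decays to 0
        for w in wolves[3:]:
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a     # exploration shrinks as a decays
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))  # average of leader-guided moves
    return min(wolves, key=f)

best = gwo(lambda w: sum(v * v for v in w))  # sphere function, optimum at 0
```

In hyperparameter tuning, each "position" would encode values such as layer width and learning rate, and `f` would be a (discretized) validation loss from a short training run.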
Research on Regression Prediction of Pressurizer Liquid Level under Ocean Conditions Based on SSA-LSTM Neural Network
Li Dongyang, Quan Zixuan, Zhang Biao, Li Jiangkuan, Tan Sichao, Tian Ruifeng
2025, 46(2): 293-299. doi: 10.13832/j.jnpe.2024.050045
Abstract(37) HTML (10) PDF(0)
Abstract:
To ensure the safe operation of nuclear reactor systems in the ocean environment, a computational model is needed to obtain the real-time liquid level in the pressurizer. Using data collected from a purpose-built experimental system, a Long Short-Term Memory (LSTM) neural network is optimized with the sparrow search algorithm (SSA), and a liquid level regression prediction model is established from measured key parameters such as pressure and motion attitude. The results show that the prediction accuracy of the established model is excellent and clearly better than that of other traditional neural networks. The model generalizes well, and its prediction accuracy on fresh samples remains acceptable. By integrating the model into the control system, the liquid level can be output in real time, improving the safety of nuclear power operation under ocean conditions and providing a reference for the intelligent operation and maintenance of nuclear power.
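The sparrow search algorithm can likewise be sketched in a heavily simplified producer/scrounger form; the LSTM validation loss is again replaced by a toy sphere function, and the update rules below are a reduced illustration of SSA, not a faithful reproduction of the published algorithm.

```python
import math
import random

def ssa(f, dim=2, n=20, n_iter=100, lo=-5.0, hi=5.0, pd=0.4, seed=7):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(n_iter):
        X.sort(key=f)
        best = X[0]                       # best sparrow is left untouched
        n_prod = max(2, int(n * pd))
        for i in range(1, n_prod):        # producers: contract their position
            X[i] = [v * math.exp(-i / (rng.random() * n_iter + 1e-9))
                    for v in X[i]]
        for i in range(n_prod, n):        # scroungers: move toward the best
            X[i] = [best[d] + rng.uniform(-1.0, 1.0) * abs(X[i][d] - best[d])
                    for d in range(dim)]
        X = [[min(hi, max(lo, v)) for v in w] for w in X]
    return min(X, key=f)

best = ssa(lambda w: sum(v * v for v in w))  # sphere function, optimum at 0
```

As with GWO above, in hyperparameter tuning each position would encode the LSTM's hyperparameters and `f` would score a short validation run.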
Intelligent Anomaly Detection Method of Pump Set Based on Convolve-gated Self-attention Multi-source Data Fusion
Sun Yuanli, Song Zhihao
2025, 46(2): 300-305. doi: 10.13832/j.jnpe.2024.090028
Abstract(43) HTML (15) PDF(1)
Abstract:
To address the challenge of diagnosing nuclear power pump sets under varying operating conditions using multi-source detection signals, this paper proposes a deep learning-based intelligent anomaly detection method for pump sets that fuses multi-source data. The method employs Convolutional Neural Networks (CNN) to fuse multi-source data, effectively analyzing the relationships among diverse data sources. A self-attention mechanism extracts fused features of the input data with attention weights, enabling the constructed anomaly detection model to adapt autonomously to different types of input data and ensuring high accuracy in detecting abnormal states of nuclear power pump sets under multi-source data scenarios. Additionally, residual blocks are incorporated to enhance model training performance. The reliability and accuracy of the method are validated on a pump set fault simulation test bench. The results demonstrate that the proposed method effectively integrates the informational features of multi-source data, enabling reliable, high-precision diagnosis of pump set faults under variable operating conditions.
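The fusion step can be illustrated on its own: softmax attention weights decide how much each source contributes to the fused feature vector. In the paper the scores come from a learned network; here the source names and score values are fixed illustrative stand-ins.

```python
import math

def attention_fuse(features, scores):
    # Softmax the per-source scores into weights that sum to 1, then take
    # the weighted sum of the source feature vectors.
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    weights = [v / sum(e) for v in e]
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(len(features[0]))]
    return fused, weights

vibration = [0.9, 0.1]   # hypothetical per-source feature vectors
pressure = [0.2, 0.8]
acoustic = [0.5, 0.5]
fused, weights = attention_fuse([vibration, pressure, acoustic],
                                [2.0, 1.0, 0.5])
```

Because the weights are recomputed per input, an informative source (e.g. vibration during a bearing fault) can dominate the fused feature under one operating condition and recede under another, which is the adaptive behaviour the abstract describes.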