Online First

Articles in press have been peer-reviewed and accepted. They have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Prediction of Spent Nuclear Fuel Decay Heat Based on GPR-SVR Co-training
Available online, doi: 10.13832/j.jnpe.2024.070016
Abstract:
The decay heat released by spent fuel assemblies is the main source of reactor core waste heat in PWR nuclear power plants. Accurate prediction of decay heat is crucial for the design and safety analysis of the nuclear power plant cooling system. Traditional methods calculate decay heat using nuclide decay simulation programs such as ORIGEN-S. However, calculating decay heat for a large number of fuel assemblies can incur high computational costs. In recent years, machine learning has been employed to predict decay heat. However, overfitting issues may arise due to insufficient data, leading to low prediction accuracy. This study establishes a co-training model based on Gaussian Process Regression (GPR) and Support Vector Regression (SVR) to generate high-quality virtual decay heat samples. These virtual samples, combined with measured decay heat data, form a mixed dataset used to train an Extreme Learning Machine (ELM) model for decay heat prediction. The results show that the co-training approach significantly enhances the stability and accuracy of decay heat predictions. After training on the mixed dataset, the prediction stability of the ELM model increased by 39.9%, and the RMSE of the predicted decay heat was 25.7% lower than that of the nuclide decay simulation program. This research provides new insights for addressing the small sample problem in the field of nuclear engineering.
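The co-training loop and the downstream ELM described above lend themselves to a compact illustration. The sketch below is a minimal, non-authoritative rendering of the idea, assuming scikit-learn's GaussianProcessRegressor and SVR as the two views and a hand-rolled ELM; the agreement tolerance `tol`, the hidden width, and all data shapes are illustrative assumptions, not the paper's settings.

```python
# Sketch: GPR-SVR co-training to generate virtual samples, then a minimal ELM.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def co_train(X_l, y_l, X_u, rounds=5, tol=0.02):
    """Pseudo-label unlabeled samples X_u only where GPR and SVR agree closely."""
    gpr, svr = GaussianProcessRegressor(), SVR()
    for _ in range(rounds):
        gpr.fit(X_l, y_l); svr.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        p1, p2 = gpr.predict(X_u), svr.predict(X_u)
        agree = np.abs(p1 - p2) <= tol * np.abs(p1).clip(min=1e-9)
        if not agree.any():
            break
        X_l = np.vstack([X_l, X_u[agree]])          # accept virtual samples
        y_l = np.concatenate([y_l, 0.5 * (p1 + p2)[agree]])
        X_u = X_u[~agree]
    return X_l, y_l

class ELM:
    """Minimal extreme learning machine: random hidden layer + ridge readout."""
    def __init__(self, n_hidden=64, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg
    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)     # closed-form readout
        return self
    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Usage: X_mix, y_mix = co_train(X_real, y_real, X_virtual); ELM().fit(X_mix, y_mix)
```

Pseudo-labels are accepted only where the two regressors agree, which is what keeps the virtual samples "high-quality" in the co-training sense.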
Available online, doi: 10.13832/j.jnpe.2024.09.0027
Abstract:
Scientific computation is pivotal throughout the nuclear industry's technical framework, encompassing everything from the creation of nuclear databases to the design, analysis, validation, and operation of nuclear power engineering, as well as the reprocessing of spent nuclear fuel and the decommissioning of reactors. Historically, the scientific computing paradigm in the industrial field has mainly been based on statistical modeling of experimental measurement data and on numerical computing methods represented by the solution of differential/integral equations. As artificial intelligence (AI) technology advances, leveraging AI for scientific computation is emerging as a novel paradigm. This paper introduces the basic principles and main features of this emerging technical field, focusing on the characteristics of the nuclear industry. It summarizes current research work and analyzes the advantages and disadvantages of AI scientific computing methods compared with traditional methods. The paper concludes with a prospective look at the future development trends of intelligent computation in the nuclear field, along with potential application scenarios, offering insights to foster the evolution of AI in scientific computation within the nuclear industry.
Application of Artificial Intelligence Algorithms in Thermal-Hydraulic Analysis of Nuclear Reactors
Available online, doi: 10.13832/j.jnpe.2024.090039
Abstract:
The advantages of artificial intelligence (AI) algorithms in rapid prediction, self-learning, and strong generalization have been applied to address the complex thermal-hydraulic phenomena and mechanisms in nuclear reactors. These applications include the prediction of thermal-hydraulic parameters, optimization of thermal safety analysis programs, and enhancement of computational fluid dynamics (CFD) efficiency. This paper reviews the current research status of AI algorithms in predicting thermal-hydraulic parameters such as flow regimes, boiling heat transfer, and critical flow. It proposes that AI models, such as physics-informed neural networks (PINNs), can overcome the challenge of insufficient extrapolation accuracy due to the lack of experimental data under high-parameter and specific structural conditions within reactors. The adaptive advantages of AI algorithms can address issues such as model singularity and convergence difficulties in safety analysis programs. Additionally, model calibration methods can significantly reduce the time and uncertainty involved in system modeling, while data assimilation techniques can minimize time-accumulated errors, greatly improving the accuracy of time-series data predictions. AI algorithms can also enhance the computational efficiency and accuracy of traditional CFD methods. Through model order reduction, they can effectively predict the three-dimensional thermal-hydraulic performance parameters of key nuclear reactor components.
Research on Wear Life Prediction of CRDM Transmission Pair Based on Machine Learning
Available online, doi: 10.13832/j.jnpe.2024.080027
Abstract:
The control rod drive mechanism (CRDM) is the only component in the reactor with relative motion between parts. Because it adjusts reactor reactivity rapidly, it is critical to the safe operation of the reactor. Wear is the main failure factor of the CRDM transmission pair and directly determines its service life. Wear life tests of the CRDM transmission pair show that its three main wear forms are abrasive wear, fatigue wear, and oxidative wear, and that when the wear volume ratio at the top of the transmission pair reaches 16.46%, rod slippage occurs in the drive mechanism. After obtaining wear degradation data and external vibration signals, the relationship between internal wear and external vibration signals was constructed, and life prediction models for the CRDM were established based on three machine learning algorithms: SVR, CNN, and LSTM. The LSTM model outperforms the CNN model in prediction accuracy, while the SVR model outperforms the CNN model in computational efficiency.
An Overview of Data Fusion Methods for the Digital Twin of Nuclear Reactor
Song Meiqi, Chen Fukun, Liu Xiaojing
Available online, doi: 10.13832/j.jnpe.2024.11.0148
Abstract:
The development of nuclear reactor digital twins has the potential to enhance the safety and economic viability of nuclear energy generation facilities by achieving cyber-physical fusion, and the key challenge of cyber-physical fusion is the data fusion problem. In this study, the concept of data fusion is first introduced, including its definition, fusion objects, fusion levels, fusion methods, and the relationship between digital twins and data fusion. Subsequently, the application and research status of data fusion methods across the entire life cycle of nuclear reactor digital twins are discussed from eight perspectives. In conclusion, the limitations and challenges inherent in current research are identified, particularly concerning data sources and the data fusion methods employed. It is anticipated that this research will provide suggestions for resolving significant challenges associated with data fusion in the development of digital twins for nuclear reactors.
Research on Regression Prediction of Pressurizer Liquid Level under Ocean Conditions Based on SSA-LSTM Neural Network
Available online, doi: 10.13832/j.jnpe.2024.050045
Abstract:
To ensure the safe operation of a nuclear reactor system in the ocean environment, a computational model is needed to obtain the pressurizer liquid level in real time. This paper therefore establishes an experimental system to collect relevant data, optimizes an LSTM neural network with the sparrow search algorithm (SSA), and builds a regression prediction model relating the measured pressure and motion attitude parameters to the liquid level. The results show that the proposed model has excellent prediction accuracy, significantly better than other traditional neural networks, and good generalization ability: prediction accuracy on unseen samples remains acceptable. Integrated into the control system, the model can output the liquid level in real time, improving the safety of nuclear power operation under ocean conditions and providing a reference for the intelligent operation and maintenance of nuclear power plants.
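As a rough sketch of how a sparrow-search-style optimizer can tune LSTM hyperparameters, the code below runs a simplified producer/scrounger population search over (hidden units, learning rate); the `objective` here is a cheap analytic stand-in that would, in the paper's setting, be the validation error of an LSTM trained with the candidate hyperparameters. The bounds, population size, and update rules are simplified assumptions, not the authors' implementation.

```python
# Simplified SSA-style hyperparameter search (illustrative, not the full SSA).
import numpy as np

rng = np.random.default_rng(1)
lo = np.array([16, 1e-4])   # [hidden units, learning rate] lower bounds (assumed)
hi = np.array([256, 1e-1])  # upper bounds (assumed)

def objective(x):
    # Stand-in for: train LSTM(hidden=int(x[0]), lr=x[1]) and return val RMSE.
    return (x[0] - 96) ** 2 / 1e4 + (np.log10(x[1]) + 2.5) ** 2

def sparrow_search(n=20, iters=30, producer_frac=0.2):
    X = lo + rng.random((n, 2)) * (hi - lo)
    for t in range(iters):
        f = np.array([objective(x) for x in X])
        X = X[np.argsort(f)]                      # best candidates first
        n_prod = max(1, int(producer_frac * n))
        # Producers exploit the current best region with shrinking steps.
        X[:n_prod] *= np.exp(-(t + 1) / (rng.random((n_prod, 1)) * iters + 1e-9))
        # Scroungers move toward the best producer.
        X[n_prod:] += rng.random((n - n_prod, 1)) * (X[0] - X[n_prod:])
        X = np.clip(X, lo, hi)
    f = np.array([objective(x) for x in X])
    return X[np.argmin(f)]

best = sparrow_search()
print("best hidden units: %d, best lr: %.4g" % (int(best[0]), best[1]))
```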
Research on an Algorithm for Solving Neutron Equations Based on ResNet-PINN
Available online, doi: 10.13832/j.jnpe.2024.080035
Abstract:
Physics-Informed Neural Networks (PINNs), as a deep learning method incorporating physical knowledge, have shown great potential in recent years for addressing reactor core neutron physics problems in nuclear engineering. However, PINNs still face accuracy limitations when solving these problems. To further improve the accuracy of PINN models, this paper proposes a Residual Network-based Physics-Informed Neural Network model, ResNet-PINN. The fundamental principles and numerical computation process of ResNet-PINN are elaborated in detail, and the model is applied to solve neutron diffusion and transport equations. Experimental validation demonstrates that ResNet-PINN significantly enhances accuracy in solving reactor core neutron equations, effectively addressing the accuracy limitations of standard PINN models, and offers a methodologically innovative approach.
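To make the ResNet-PINN idea concrete, the sketch below trains a small residual-block PINN on a one-group, one-dimensional neutron diffusion problem with zero-flux boundaries; the cross sections, geometry, and loss weighting are illustrative assumptions, and only the skip-connection structure is the point being demonstrated.

```python
# Sketch: residual-block PINN for -D*phi'' + Sigma_a*phi = S, phi(0)=phi(L)=0.
import torch
import torch.nn as nn

D, sig_a, S, L = 1.0, 0.5, 1.0, 10.0   # assumed one-group constants

class ResBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return torch.tanh(x + self.net(x))   # skip connection

class ResNetPINN(nn.Module):
    def __init__(self, width=32, blocks=4):
        super().__init__()
        self.inp = nn.Linear(1, width)
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(blocks)])
        self.out = nn.Linear(width, 1)
    def forward(self, x):
        return self.out(self.blocks(torch.tanh(self.inp(x))))

model = ResNetPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_in = torch.linspace(0, L, 200).reshape(-1, 1).requires_grad_(True)
x_bc = torch.tensor([[0.0], [L]])

for step in range(2000):
    phi = model(x_in)
    dphi = torch.autograd.grad(phi.sum(), x_in, create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi.sum(), x_in, create_graph=True)[0]
    pde = (-D * d2phi + sig_a * phi - S).pow(2).mean()   # PDE residual loss
    bc = model(x_bc).pow(2).mean()                       # zero-flux boundaries
    loss = pde + 10.0 * bc
    opt.zero_grad(); loss.backward(); opt.step()
```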
Research on Intelligent Monitoring and Warning Algorithms for Unexpected Reactor Shutdown Events in Nuclear Power Plants
Available online, doi: 10.13832/j.jnpe.2024.090025
Abstract:
During normal operation of nuclear power plants, unintended protection actions such as reactor trip, shutdown, and load shedding may occur due to equipment failure, instrumentation and control system failure, or human error. At present, the discovery of abnormal conditions during unit operation relies mainly on threshold alarms from the DCS and lacks trend analysis. This study establishes logical relationships between variables through event logic, uses auto-associative neural network (AANN) modeling on this basis to detect anomalies in the associated variables, and finally combines an empirical mode decomposition (EMD) trend-extraction algorithm with an adaptive sliding-window Holt linear trend model to predict the anomalous variables. This enables accurate advance warning of unexpected reactor shutdown events, so that operation and maintenance personnel can find and resolve problems earlier and improve the safety of nuclear power operation. Test experiments on simulation data and real abnormal unit data yield, on the real data, a mean square error of 0.1 and an R² of 0.99, with early warning at least one hour before the shutdown action, verifying the accuracy of the methodology and its early-warning capability.
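The trend-extrapolation step of the warning scheme can be sketched in a few lines. Below, a moving average stands in for the EMD residual trend, a Holt linear-trend model from statsmodels is fitted on a sliding window, and a forecast threshold crossing raises the warning; the synthetic signal, window length, horizon, and threshold are all illustrative assumptions.

```python
# Sketch: trend extraction + Holt linear-trend forecast + threshold warning.
import numpy as np
from statsmodels.tsa.holtwinters import Holt

rng = np.random.default_rng(2)
t = np.arange(600)
signal = 0.002 * t + 0.05 * rng.standard_normal(600)   # slowly drifting variable

window = 120
trend = np.convolve(signal, np.ones(25) / 25, mode="valid")  # EMD-residual stand-in
fit = Holt(trend[-window:]).fit(optimized=True)              # sliding-window fit
forecast = fit.forecast(180)                                 # look ahead

threshold = 1.4                                              # assumed alarm limit
hits = np.nonzero(forecast > threshold)[0]
if hits.size:
    print(f"warning: threshold predicted to be crossed in {hits[0]} steps")
```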
Intelligent Anomaly Detection Method for Pump Groups Based on Convolution-Gated Self-Attention Multi-Source Data Fusion
Available online, doi: 10.13832/j.jnpe.2024.090028
Abstract:
To address the difficulty of using multi-dimensional abnormal signals for pump-group diagnosis, and the inability to fully extract the relationships among multi-dimensional signals under extreme operating conditions, an intelligent anomaly detection method for pump groups using a deep learning network and multi-source sensor data is proposed. The method uses a convolutional neural network to fuse multi-source sensor data and can effectively analyze the relationships among multi-dimensional signals. A self-attention mechanism extracts attention-weighted fusion features from the input signals, so that the constructed anomaly detection model adapts autonomously to different types of input signals, ensuring the accuracy of the proposed method for pump-group abnormal-state detection in multi-source sensor big-data scenarios. The reliability and accuracy of the proposed method were verified on a purpose-built pump-group fault simulation test rig. The results show that the method effectively fuses the information characteristics of multi-source sensors, can on this basis complete the fault diagnosis task of the pump group under extreme operating conditions, and achieves high diagnostic accuracy.
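A minimal PyTorch rendering of the fusion architecture described above is sketched below: a shared 1-D CNN encodes each sensor channel into a token, and multi-head self-attention weighs the cross-sensor relations before a pooled classification head. The number of sensors, signal length, and layer sizes are illustrative assumptions.

```python
# Sketch: CNN encoding per sensor + self-attention fusion + state classifier.
import torch
import torch.nn as nn

class ConvAttentionFusion(nn.Module):
    def __init__(self, n_sensors=6, d_model=64, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(              # shared 1-D CNN per sensor
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                          # x: (batch, sensors, length)
        b, s, l = x.shape
        tokens = self.encoder(x.reshape(b * s, 1, l)).reshape(b, s, -1)
        fused, _ = self.attn(tokens, tokens, tokens)   # cross-sensor attention
        return self.head(fused.mean(dim=1))        # pool tokens, classify state

logits = ConvAttentionFusion()(torch.randn(8, 6, 256))  # (batch, classes)
```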
Enhancing Nuclear Energy Efficiency through NAS-Optimized PINNs for Neutron Physics Equations
Available online, doi: 10.13832/j.jnpe.2024.090041
Abstract:
Nuclear reactors are crucial for the production and provision of nuclear energy, and their core design can be modeled by the neutron diffusion and transport equations. Fast and accurate solution of these two kinds of equations therefore supports effective control of the reactor and ensures its safety and stability. Recent developments in Physics-Informed Neural Networks (PINNs) have significantly enhanced computational speed and efficiency in solving partial differential equations. However, their application is still constrained by the inflexibility of predefined structures, which limits network width and depth in practical applications. This paper proposes an approach utilizing Neural Architecture Search (NAS) to dynamically identify optimal PINN architectures for the neutron diffusion and transport equations in nuclear reactors. Specifically, we first introduce differential transform order theory, which converts the integral terms in the transport equation into higher-order differential terms so that the problem fits the PINN model. Second, we use a genetic algorithm (GA) as the optimization strategy in NAS to find the PINN model best suited to solving the reactor equations. Verification results show that the method achieves higher accuracy in solving reactor equations of different dimensions, providing a more accurate and efficient solution for complex neutron equations.
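The NAS step can be illustrated with a toy genetic algorithm over PINN depth and width. In the sketch below, the `fitness` function is a cheap analytic stand-in so the example runs on its own; in the paper's setting it would build and briefly train a PINN with the candidate architecture and return its PDE residual. Gene ranges and GA settings are assumptions.

```python
# Sketch: GA over (depth, width) genomes as a stand-in for PINN architecture search.
import random

random.seed(3)
DEPTHS, WIDTHS = range(2, 9), range(16, 129, 16)

def fitness(genome):
    depth, width = genome
    # Stand-in for: build PINN(depth, width), train briefly, return residual.
    return abs(depth - 5) * 0.1 + abs(width - 64) / 640

def ga_search(pop_size=20, gens=15, p_mut=0.3):
    pop = [(random.choice(DEPTHS), random.choice(WIDTHS)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]               # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a[0], b[1])                   # one-point crossover
            if random.random() < p_mut:            # mutate the depth gene
                child = (random.choice(DEPTHS), child[1])
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

print("selected (depth, width):", ga_search())
```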
Quantification of Parameter Uncertainty in a Reflooding Model Based on the Random Forest Algorithm
Available online, doi: 10.13832/j.jnpe.2024.070031
Abstract:
To assess the uncertainty of the physical models (inputs) of complex accident phenomena, an inverse uncertainty quantification method based on the random forest algorithm, combined with a PSO-Kriging surrogate model and KDE-SJ nonparametric statistics, is proposed and applied to model assessment of the reflooding phenomenon in large-break accidents. The probability density distributions of the model parameters were obtained by using the degree of consistency between the calculation results (outputs) of the system code and the FEBA experimental data as the classification criterion for the random forest algorithm. The validation results show that the 95% uncertainty bands obtained by randomly sampling 93 groups of calculations from the probability density distributions completely envelop the experimental data, but calibrating the model with the mode or the mean may not be as effective as calibrating with the maximum a posteriori estimate obtained by the Bayesian approach.
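A stripped-down version of this inverse-UQ loop reads as follows: parameter samples are labeled by whether a (stand-in) code output is consistent with the experiment within a tolerance, a random forest learns that consistency criterion, and a kernel density estimate over the accepted samples approximates the parameter distribution. The stand-in output model, tolerance, and sample counts are illustrative assumptions; the paper's PSO-Kriging surrogate and KDE-SJ bandwidth selection are not reproduced here.

```python
# Sketch: consistency-labeled random forest + KDE over accepted parameters.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
theta = rng.uniform(0.5, 1.5, size=(2000, 2))        # two model multipliers

def code_output(p):
    # Stand-in for a system-code run (e.g., peak cladding temperature).
    return 900.0 * p[:, 0] ** 0.5 + 80.0 * p[:, 1]

y_exp, tol = 1000.0, 25.0
consistent = np.abs(code_output(theta) - y_exp) < tol  # classification label

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(theta, consistent)

accepted = theta[rf.predict(theta)]                   # RF-filtered samples
posterior = gaussian_kde(accepted.T)                  # joint parameter density
print("accepted samples:", accepted.shape[0])
```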
Research on Intelligent Accident Diagnosis Model of Nuclear Reactor Coolant System
Available online, doi: 10.13832/j.jnpe.2024.060034
Abstract:
The nuclear reactor coolant system (NRCS) is one of the most critical systems in a nuclear power plant, making effective accident diagnosis highly significant. Although artificial intelligence technology has been extensively employed in accident diagnosis for nuclear power plants, conventional models often suffer from insufficient accuracy and poor generalizability, failing to meet the stringent requirements of NRCS accident diagnosis. To address these issues, this study establishes a new intelligent accident diagnosis model for the NRCS. First, to enhance diagnostic accuracy, a convolutional neural network (CNN) and a gated recurrent unit (GRU) were integrated, combining the powerful feature extraction capability of the CNN with the efficient time-series classification ability of the GRU to establish the NRCS accident diagnosis model (CNN-GRU). Second, to enhance generalizability, the grey wolf optimizer (GWO) algorithm was used to adaptively optimize the hyperparameters of the CNN-GRU model, yielding the NRCS intelligent accident diagnosis model (GWO-CNN-GRU). Finally, to validate the performance of the proposed model, the NRCS in PCTRAN was used as the research subject, simulating the diagnostic process for one normal operating condition and four typical accident conditions. The results demonstrate that the proposed model achieved an average accident diagnosis accuracy of 99.6% on the NRCS test set for the CPR1000 reactor type, an improvement of 2.1% and 1.5% over the GRU and CNN-GRU models, respectively; similarly, on the NRCS test set for the AP1000 reactor type, it achieved an average accuracy of 99.5%, an increase of 1.7% and 1.3% over the other two models. The proposed model therefore demonstrates superior accuracy and generalizability, providing a valuable reference for intelligent accident diagnosis of the NRCS.
Development of a Prediction Model for Two-Phase Flow Regimes in Nuclear Reactor Cores Based on Artificial Neural Networks
Available online, doi: 10.13832/j.jnpe.2024.090038
Abstract:
Most existing flow regime prediction models in system analysis codes are derived from early experimental data, which restricts their applicability to a limited range of conditions. To leverage the expanding volume of experimental flow regime data, enhance model applicability, and improve prediction accuracy, this study compiled a comprehensive experimental dataset to establish a training database, followed by thorough data preprocessing. A two-phase flow regime prediction model was subsequently developed using artificial neural network algorithms and was benchmarked against traditional prediction models. The findings demonstrate that the newly developed model can be directly applied across a diverse array of operating conditions, offering superior prediction accuracy compared to conventional models. This study introduces a novel methodology for flow regime prediction, with the model’s applicability and accuracy poised to improve progressively as the training dataset is further expanded.
Research on Data-driven Intelligent Optimization Design of Nuclear Reactor Shielding
Available online, doi: 10.13832/j.jnpe.2024.080024
Abstract:
In order to expedite the design process of lightweight reactor shielding, a multi-objective intelligent optimization algorithm coupled with a data-driven surrogate model is employed to optimize the operational shielding of a land-based micro-mobile reactor under multiple constraints and engineering preferences. We first construct the dataset by sampling advanced shielding material and geometric parameters in the variable-scale optimization space, and train the surrogate model (SN-MscaleDNN), which consists of the multi-frequency-scale neural network MscaleDNN and a GPU-parallel 1-D neutron-photon coupled transport SN solver, to achieve stable, accurate, and efficient dose rate prediction. This model is then integrated with the NSGA-II genetic algorithm, incorporating penalty functions and engineering preference models, to achieve a final shielding optimization that satisfies multiple constraints such as safety, manufacturing, and mechanical limitations. The results confirm the surrogate model's ability to predict the dose rates of a shielding scheme at the millisecond level with a generalization error under 10%. Furthermore, the coupled optimization algorithm enables an efficient search for shielding schemes that meet engineering constraints and preferences, offering novel insights into the lightweight shielding optimization of micro-mobile reactors in a variable-scale space.
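The surrogate-plus-NSGA-II coupling can be sketched with pymoo (0.6+). Below, an exponential-attenuation formula stands in for the SN-MscaleDNN dose-rate surrogate, the decision variables are three layer thicknesses, and weight and dose rate are traded off under a dose-rate constraint; the densities, attenuation coefficients, bounds, and limit are all illustrative assumptions.

```python
# Sketch: multi-objective shielding optimization with a stand-in dose surrogate.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

RHO = np.array([7.8, 1.0, 11.3])   # layer densities, steel/water/lead (assumed)
MU = np.array([0.06, 0.03, 0.12])  # effective attenuation coefficients (assumed)

class ShieldProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1,
                         xl=np.full(3, 1.0), xu=np.full(3, 50.0))
    def _evaluate(self, x, out, **kwargs):
        weight = float(RHO @ x)                  # objective 1: mass proxy
        dose = 1e4 * np.exp(-float(MU @ x))      # objective 2: surrogate dose rate
        out["F"] = [weight, dose]
        out["G"] = [dose - 10.0]                 # constraint: dose <= limit

res = minimize(ShieldProblem(), NSGA2(pop_size=60), ("n_gen", 80),
               seed=1, verbose=False)
print("Pareto set size:", len(res.X))
```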
Transient Parameter Prediction and Fault Diagnosis of Nuclear Power Plants Based on Long Short-Term Memory Neural Networks
Available online, doi: 10.13832/j.jnpe.2024.080036
Abstract:
To enhance the accuracy and real-time performance of transient condition parameter prediction and fault diagnosis in nuclear power plants, this study employs an LSTM neural network model for prediction and diagnosis. By generating and randomizing fault scenarios, the model's dependency on specific patterns is reduced, improving its generalization ability in previously unseen fault conditions. The study incorporates the SHAP method to provide an interpretive analysis of the model's parameter prediction results, assessing the impact of different input features on the model's predictive performance and validating the model's behavior under sensor faults and data transmission errors. Additionally, fault diagnosis was conducted for transient parameters with varying levels of noise, verifying the model’s robustness. The results demonstrate that the LSTM model achieves high accuracy in both prediction and diagnosis, performing well even in the presence of sensor faults, data transmission errors, and noisy data. The proposed LSTM approach enhances the operational safety and stability of nuclear power plants, providing effective technical support for safety under accident conditions.
POD-RBF-Based ROM Method for Calculating the Temporal-Spatial Temperature Distribution under a DLOFC Accident in a VHTR
Available online, doi: 10.13832/j.jnpe.2024.10.0056
Abstract:
The Very High Temperature gas-cooled Reactor (VHTR) has a wide range of applications, such as nuclear hydrogen production. The Depressurized Loss of Forced Cooling (DLOFC) accident is one of the most severe design basis accidents of the VHTR, and analyzing it over a large number of parameter variations with a Full Order Model (FOM) is computationally expensive. Calculating the DLOFC accident accurately and quickly across the design parameter space is therefore of vital importance for VHTR design. This paper constructs a FOM for the VHTR based on TINTE and a Reduced Order Model (ROM) based on POD-RBF that can calculate the DLOFC accident of the VHTR. Two ROM methods for calculating the DLOFC transient are proposed: the first treats time as an additional perturbation parameter alongside inputs such as temperature, while the second calculates the coefficients for the whole DLOFC time history of one parameter at a time. The results show that the maximum relative error of both methods is less than 1%, and the computational efficiency of the ROM is much higher than that of the FOM; method 2 is, moreover, 40 times more efficient than method 1.
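The POD-RBF construction itself is compact enough to sketch: snapshots from the full-order model are factorized by SVD to obtain a POD basis, and the modal coefficients are interpolated over the parameter space with SciPy's RBFInterpolator. The synthetic snapshot generator and the energy cutoff below are illustrative assumptions standing in for TINTE results.

```python
# Sketch: POD basis from snapshots + RBF interpolation of modal coefficients.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
params = rng.uniform(0.0, 1.0, size=(40, 2))   # e.g., (power level, time) samples
x = np.linspace(0, 1, 300)
snapshots = np.array([np.exp(-(x - 0.5 * p[0]) ** 2 / (0.05 + 0.1 * p[1]))
                      for p in params])        # stand-in FOM temperature fields

U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
basis = U[:, :r]                               # POD modes (assumed energy cutoff)
coeffs = snapshots @ basis                     # modal coefficients per sample

rom = RBFInterpolator(params, coeffs)          # coefficients over parameter space

def predict(p):                                # fast ROM evaluation
    return basis @ rom(np.atleast_2d(p))[0]

field = predict([0.7, 0.3])                    # reconstructed field at new point
```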
Application of a Three-Way Coupled Multi-Objective Optimization Algorithm to Shielding Design
Available online, doi: 10.13832/j.jnpe.2024.090006
Abstract:
Existing shielding design optimization usually relies on the experience of the designer, which is inefficient and carries high uncertainty. Genetic algorithms can help shielding designers quickly search for feasible shielding solutions under given conditions, simultaneously optimizing multiple objectives such as dose rate and the volume and weight of shielding material. To improve the efficiency of shielding design, this paper combines a multi-objective optimization algorithm with radiation shielding calculation, develops a transport-activation-optimization three-way coupled calculation program, and validates it on a shielding calculation model constructed in this paper. The numerical results show that the shielding design optimization method based on the discrete ordinates method and the genetic algorithm can simultaneously optimize multiple objectives, including shielding volume, weight, activation dose rate after reactor shutdown, and normal-operation dose rate.
Research on Source Iteration Method Based on PINN with Acceleration Algorithm
Available online, doi: 10.13832/j.jnpe.2024.090040
Abstract:
This paper proposes a new method that combines physics-informed neural networks (PINN) with the traditional source iteration method to solve few-group neutron diffusion equations, and uses the Anderson acceleration method to speed up the iterative process. Numerical examples, including a two-dimensional multi-material problem and a three-dimensional single-material problem, show that combining PINN with traditional source iteration can compute a continuous neutron flux density distribution while ensuring calculation accuracy. Anderson acceleration reduces the number of iterations and successfully achieves the forward solution of the few-group neutron diffusion equations, promoting the application of artificial intelligence algorithms in the nuclear field.
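Anderson acceleration of a source-iteration-style fixed point can be sketched independently of the PINN machinery. Below, a linear map stands in for "one transport sweep with the scattering source frozen", and a small-memory Anderson update is applied to the fixed-point iteration phi = G(phi); the problem size, spectral radius, and memory depth are illustrative assumptions.

```python
# Sketch: Anderson acceleration of a fixed-point (source-iteration-like) map.
import numpy as np

rng = np.random.default_rng(6)
n = 50
M = rng.random((n, n))
M *= 0.9 / np.abs(np.linalg.eigvals(M)).max()   # spectral radius 0.9 (assumed)
b = rng.random(n)
G = lambda phi: M @ phi + b                     # stand-in for one sweep

def anderson(G, phi, m=5, iters=200, tol=1e-10):
    F, X = [], []                               # residual / iterate histories
    for k in range(iters):
        g = G(phi); f = g - phi
        if np.linalg.norm(f) < tol:
            return phi, k
        F.append(f); X.append(g)
        F, X = F[-m:], X[-m:]                   # bounded memory depth
        if len(F) > 1:
            dF = np.array(F[1:]) - np.array(F[:-1])   # residual differences
            gamma, *_ = np.linalg.lstsq(dF.T, f, rcond=None)
            phi = X[-1] - (np.array(X[1:]) - np.array(X[:-1])).T @ gamma
        else:
            phi = g                             # plain iteration for first step
    return phi, iters

phi, iters = anderson(G, np.zeros(n))
print("converged in", iters, "accelerated iterations")
```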
Prediction of Nuclear Power Plant Operating Parameters Based on Transfer Learning between Simulation and Measurement Data
Available online, doi: 10.13832/j.jnpe.2024.080004
Abstract:
The key to the safe operation of nuclear power plants is accurate prediction of their operating parameters. In recent years, data-driven methods have shown strong predictive capabilities, but insufficient measurement data limits their predictive performance. Based on the transfer learning framework, this study develops a prediction-model construction method that pre-trains on multiple sets of simulated conditions and then fine-tunes on measurement data. First, a GRU neural network is trained with simulation data; the model is then fine-tuned using part of the measurement data to predict the future state of the operating conditions. The feasibility of the method is verified using the measurement data of the B3.1 experiment on the PKL III thermal-hydraulic test facility and 9 sets of similar RELAP5 simulation data. With this method, the relative errors of steam pressure, steam temperature, downcomer fluid temperature, outlet temperature, inlet temperature, and mass flow rate reach 0.358%, 0.065%, 0.020%, 0.065%, 0.028%, and 1.705%, respectively. Finally, five sets of numerical experiments compare and illustrate the effectiveness of each module of the method.
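The pretrain-then-fine-tune recipe maps directly onto a few lines of PyTorch. In the sketch below, random tensors stand in for the RELAP5 simulation sequences and the PKL III measurements, and a GRU regressor is pretrained on the former and fine-tuned on the latter at a reduced learning rate; the shapes, epochs, and learning rates are illustrative assumptions.

```python
# Sketch: GRU pretraining on simulation data, fine-tuning on measurement data.
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    def __init__(self, n_feat=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_feat)
    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])       # predict the next state vector

def train(model, x, y, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

model = GRURegressor()
x_sim, y_sim = torch.randn(512, 30, 6), torch.randn(512, 6)    # simulation stand-in
x_meas, y_meas = torch.randn(32, 30, 6), torch.randn(32, 6)    # measurement stand-in

train(model, x_sim, y_sim, lr=1e-3, epochs=200)   # pretrain on simulation
train(model, x_meas, y_meas, lr=1e-4, epochs=50)  # fine-tune on measurement
```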
Deep Learning Solution Technology for Sparse Data in the Multi-Channel Flow Field of a PWR Rod Bundle
Available online, doi: 10.13832/j.jnpe.2024.080039
Abstract:
The Reynolds number can reach 10^5 in typical reactor core operating conditions, and the coolant flow exhibits significant nonlinearity. An inevitable mismatch between the actual flow boundaries and states and the ideal flow equations can lead to conflicts between the data and the constraints of the governing equations during the solving process. This mutual restriction can cause difficulties in achieving convergence. This paper developed a sparse data-solving method based on deep learning to address this issue. By designing an adaptive mismatch correction scheme, an adaptive adjustment factor is introduced into the governing equations to correct the ideal model dynamically. This approach overcomes the convergence difficulties and accuracy issues caused by the inconsistency between the data and the equations. Based on this technology, the study further explored flow field-solving strategies under small sample data conditions and designed uniform, velocity-gradient-based, and hybrid point distribution strategies. These strategies aim to optimize the spatial distribution of sample points to improve the overall accuracy of the flow field solutions. The results show that the uniform point distribution strategy provides the best optimization effect and significantly enhances the solution accuracy among the three strategies. Even with only 60 small sample points, the proposed method can effectively achieve high-accuracy flow field solutions, providing an efficient and highly applicable solution for solving the flow field of PWR reactor core rod bundles under sparse data conditions.
Research on Lattice Boltzmann Solution of the Generalized Convection-Diffusion Equation Based on Physics Fusion Neural Network
Wang Yahui, Xiao Hao, Ma Yu, Xie Yuchen, Chi Honghang
Available online, doi: 10.13832/j.jnpe.2024.070062
Abstract:
This paper proposes a physics fusion neural network based lattice Boltzmann method (PFNN-LBM) for solving nonlinear convection-diffusion equations. A unified discrete-velocity Boltzmann equation for equations with different characteristics is established under the lattice Boltzmann method and solved using a parameterized physics-informed neural network with a single network. PFNN-LBM can simultaneously solve governing equations of different forms and different physical parameters within a single training. Four types of equations are considered to verify the accuracy and adaptability of the proposed PFNN-LBM, including the diffusion equation, the nonlinear heat conduction equation, the Sine-Gordon equation, and the Burgers-Fisher equation, each with different physical parameters; the two-group neutron diffusion equations are also tested. The calculation results show that PFNN-LBM solves equations of different forms and physical parameters with high accuracy, requiring only one training. This work provides a novel framework for solving different types of equations efficiently and flexibly and, for engineering applications, may have outstanding advantages in multi-physics coupling calculations.
Intelligent Diagnosis and Monitoring for Abnormal Operation Event of Reactor Coolant System
Yao Yuantao, Zhe Na, Yong Nuo, Xia Dongqin, Ge Daochuan, Yu Jie
Available online, doi: 10.13832/j.jnpe.2024.080023
Abstract:
To address the problem that traditional deep learning (DL)-based intelligent fault diagnosis models cannot monitor unknown abnormal operating events, this work constructs an intelligent diagnosis framework based on a variational-inference probabilistic deep neural network (VI-PDNN) model, which diagnoses abnormal-event categories of the reactor coolant system and quantitatively evaluates the uncertainty of the output results. The framework effectively exploits the uncertainty difference between known and unknown operating events to achieve effective monitoring of, and warning against, unknown abnormal operating events. The proposed methodology is validated on simulation data from an established reactor simulator. The results show that the proposed method not only attains high diagnostic accuracy for known events but also effectively monitors and warns against unknown abnormal events, providing an effective means for real-time intelligent state diagnosis and monitoring of reactor system operation in real environments.
Research on Prediction of Transient Parameters in Rod Bundle Subchannels Based on the POD-ML Method
Xu Yujie, Mo Jinhong, Dong Xiaomeng, Liu Yong, Xu Anqi, Yu Yang
Available online, doi: 10.13832/j.jnpe.2024.080031
Abstract:
Reduced-order modeling (ROM) effectively reduces the complexity of physical models by mapping full-order conservation equations to lower-order subspaces or by building data-driven surrogate models. Compared with traditional computational fluid dynamics (CFD) simulation, a reduced-order model is more efficient in large-scale simulation. In this paper, a reduced-order model framework combining POD with machine learning (ML) is proposed to predict mass flow parameters in rod bundle subchannels. A comparison of two different forecasting methods shows that each has advantages and disadvantages in long-term and short-term forecasting, which can inform forecasting schemes for other complex systems in the future.
Reduced Order Modeling for Neutron Transport Equation Based on Operator Inference
Xiao Wei, Liu Xiaojing, Zhang Tengfei, Zu Jianhua, Chai Xiang, He Hui
Available online, doi: 10.13832/j.jnpe.2024.080042
Abstract:
To establish a real-time prediction model for the time-dependent neutron transport equation, affine-parametric operator inference is employed to train a reduced-order model of the neutron transport equation. Operator inference, through singular value decomposition and the solution of an optimization problem, non-intrusively fits the operators of the reduced dynamic equations in the subspace while preserving the physical structure described by the original governing equations. The affine-parametric structure effectively handles the time-varying parameters in time-dependent neutron transport equations, achieving rapid solutions with time-varying parameters without interpolation in the parameter space. The numerical results show that the reduced-order model based on high-fidelity data and affine-parametric operator inference has good generalization ability, accurately solving transient problems under different time-varying parameters. The reduced-order model proposed in this study can therefore be used for real-time prediction of high-fidelity neutron transport solutions.
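The non-intrusive core of operator inference fits in a short script: project snapshots onto a POD basis, estimate reduced-state time derivatives by finite differences, and fit the reduced operator by least squares. The sketch below uses a synthetic stable linear system as a stand-in for high-fidelity transport snapshots and omits the affine-parametric structure; the sizes and time step are illustrative assumptions.

```python
# Sketch: operator inference — POD projection + least-squares operator fit.
import numpy as np

rng = np.random.default_rng(7)
n, nt, dt = 200, 400, 1e-3
A = -np.diag(rng.uniform(1.0, 5.0, n))            # stand-in stable dynamics
X = np.empty((n, nt)); X[:, 0] = rng.random(n)
for k in range(nt - 1):                           # "high-fidelity" snapshots
    X[:, k + 1] = X[:, k] + dt * (A @ X[:, k])

U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10                                            # assumed reduced dimension
V = U[:, :r]                                      # POD basis
Q = V.T @ X                                       # reduced trajectories
dQ = (Q[:, 1:] - Q[:, :-1]) / dt                  # finite-difference d/dt

# Least-squares fit of the reduced operator: dq/dt ~= A_r q
W, *_ = np.linalg.lstsq(Q[:, :-1].T, dQ.T, rcond=None)
A_r = W.T

# The learned A_r can now march the reduced state forward in real time.
q = Q[:, :1].copy()
for k in range(nt - 1):
    q = q + dt * (A_r @ q)
err = np.linalg.norm(V @ q - X[:, -1:]) / np.linalg.norm(X[:, -1:])
print("reduced-model relative error at final time:", err)
```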
Offline Parameter Optimization of Steam Generator Liquid Level Control System Based on NSGA-II Algorithm
Sun Zhejun, Wei Xinyu, Zhang Nan, Li Mingqian, Zhang Ruiping, Wang Yulong, Sun Peiwei
Available online, doi: 10.13832/j.jnpe.2024.090057
Abstract:
The steam generator is an important piece of equipment in nuclear power plants. At present, the steam generator liquid level is mainly controlled by a fixed PID controller, so the PID parameters must be tuned. Traditional tuning methods require precise mathematical models, and when accurate model information is unavailable, the tuning effect is poor. This article therefore proposes a method for tuning the PID parameters of the steam generator liquid level control system that uses historical data for offline tuning. First, a BP neural network is used to identify a model of the steam generator liquid level control system from historical data. Then, the PID parameters are optimized offline on the established BP neural network model. The tuning method adopts the multi-objective genetic algorithm NSGA-II, with the dynamic performance indices of the control system as the objective functions, adjusting the PID parameters to improve control performance. The proposed algorithm was validated through simulation in Matlab/Simulink; the results show that, across different operating conditions, the steam generator level control system with offline-optimized parameters achieved smaller overshoot and shorter settling time than the original control system, delivering better control performance.