Prediction of Critical Parameters for Non-uniform Conditions in Reprocessing Based on an Improved BP Neural Network
Abstract: As key process equipment in a reprocessing plant, extraction columns and storage tanks frequently experience fluctuating solution concentrations (i.e., non-uniform conditions). When performing criticality safety analysis for them, technicians conservatively assume the concentration enlarged by several times; although this satisfies the conservatism requirement, it introduces an excessive criticality margin and limits the processing efficiency and capacity of reprocessing. To solve these problems, this study combined an improved BP neural network method with the large-scale Monte Carlo code MCNP to perform gradient modeling of random concentration distributions for typical equipment structures and sizes, and realized a criticality safety analysis method that predicts the effective multiplication factor (keff) from the concentration distribution. Test results show that the average error of keff calculated under non-uniform conditions with this method is 1.82×10−4, and the converged value of the mean square error (MSE) loss function is 3.34×10−6, far smaller than that of the unimproved model (2.4450×10−4). Compared with the conservative method, the criticality margin introduced by the proposed method is −1.31×10−3, much smaller than that of the traditional method (0.32951). These results demonstrate that, while remaining conservative, the proposed method is more accurate and more efficient, and they provide a methodological reference for criticality safety analysis in reprocessing.
Key words:
- Improved BP neural network
- Non-uniform condition
- Parameter prediction
- Effective multiplication factor (keff)
Table 1. Structure of IBP

Layer            Activation function    Number of neurons
Input layer      –                      10
Hidden layer 1   ReLU                   128
Hidden layer 2   ReLU                   32
Hidden layer 3   ReLU                   20
Output layer     f(x) = x               1
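As a concrete illustration of Table 1, the following is a minimal sketch (not the authors' code) of the IBP layer structure, assuming a Keras/TensorFlow implementation trained with the Adam optimizer and an MSE loss; the function name build_ibp_model and the 10-feature input are illustrative.

```python
import tensorflow as tf

def build_ibp_model(n_inputs: int = 10) -> tf.keras.Model:
    """Network with the layer sizes listed in Table 1: 10 inputs, three ReLU
    hidden layers (128/32/20 neurons) and a single linear output for keff."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
        tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 2
        tf.keras.layers.Dense(20, activation="relu"),    # hidden layer 3
        tf.keras.layers.Dense(1, activation="linear"),   # output layer, f(x) = x
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model
```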
Table 2. Value of Loss with Different Training Times

Training times    Training set loss    Test set loss
100               1.86×10−5            4.15×10−5
60                3.64×10−6            5.99×10−6
50                2.63×10−6            2.36×10−6
40                2.96×10−6            4.62×10−6
30                3.48×10−6            8.30×10−6
20                8.12×10−6            2.65×10−6
10                1.54×10−5            1.95×10−5
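Table 2 compares the converged loss for different numbers of training iterations. A hypothetical sketch of how such a scan could be produced is shown below; it reuses build_ibp_model from the sketch after Table 1, and the random arrays merely stand in for the MCNP-generated training and test samples.

```python
import numpy as np

# Placeholder data standing in for the MCNP-generated samples
# (10 concentration features per sample, one keff target); values are random.
rng = np.random.default_rng(0)
x_train, y_train = rng.random((800, 10)), rng.random((800, 1))
x_test, y_test = rng.random((200, 10)), rng.random((200, 1))

for n_epochs in (10, 20, 30, 40, 50, 60, 100):    # training counts from Table 2
    model = build_ibp_model()                      # sketch following Table 1
    history = model.fit(x_train, y_train, epochs=n_epochs, verbose=0)
    train_loss = history.history["loss"][-1]       # final training-set MSE
    test_loss = model.evaluate(x_test, y_test, verbose=0)  # test-set MSE
    print(f"{n_epochs:4d} epochs: train MSE = {train_loss:.2e}, test MSE = {test_loss:.2e}")
```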
Table 3. Average Error and MSE of Different Experiments

Experiment No.    Average error    MSE
1                 2.18×10−4        4.76×10−6
2                 1.81×10−4        3.28×10−6
3                 1.70×10−4        2.88×10−6
4                 1.77×10−4        3.15×10−6
5                 1.62×10−4        2.64×10−6
Average           1.82×10−4        3.34×10−6
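The error statistics reported in Table 3 can be reproduced from a set of predictions, assuming "average error" denotes the mean absolute deviation between predicted and MCNP reference keff values; the helper below is an illustrative sketch, not the authors' code.

```python
import numpy as np

def keff_error_statistics(k_pred: np.ndarray, k_ref: np.ndarray) -> tuple[float, float]:
    """Average absolute error and mean square error (MSE) of predicted keff
    against MCNP reference values, the two quantities reported in Table 3."""
    avg_error = float(np.mean(np.abs(k_pred - k_ref)))
    mse = float(np.mean((k_pred - k_ref) ** 2))
    return avg_error, mse
```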
Table 4. Comparison of Results of Non-uniform, Conservative and IBP Calculations under Non-uniform Conditions

Parameter                              Non-uniform calculation    Conservative calculation    IBP calculation
Average U concentration/[g(U)·L−1]     43.25                      151.38                      43.25
keff                                   0.33893                    0.66844                     0.33762
σ of MCNP calculation                  0.00081                    0.00183                     –
3σ confidence interval (99%)           [0.33678, 0.34108]         [0.66360, 0.67327]          –
Time consumed/s                        31                         32                          0.15
Reactivity introduced/10−5             0                          +32951                      −131
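For reference, the last row of Table 4 is consistent with taking the reactivity introduced by each method as the difference between its keff and the non-uniform reference result, expressed in units of 10−5; this interpretation is an assumption inferred from the tabulated values:

$ \Delta\rho_{\mathrm{conservative}} = (0.66844 - 0.33893) \times 10^{5} = +32951, \qquad \Delta\rho_{\mathrm{IBP}} = (0.33762 - 0.33893) \times 10^{5} = -131 $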