Neural Algorithms: A Basic Introduction to the BP Algorithm

Neural Algorithms · 2021-07-02

The BP Algorithm and Its Improvements

A major drawback of the traditional BP algorithm and its improved variants is that the error objective function is non-convex in the connection weights to be learned and has local minima. During training, once the weights fall into a local minimum of the weight space, these algorithms can hardly escape it, so they fail to reach the global minimum (i.e., the optimum) and network training fails.

To address these defects, a new error objective function with a penalty term is constructed, based on the properties of convex functions and their conjugates, the Fenchel inequality, and the penalty-function method from constrained optimization theory.

When the feedforward neural network is trained with this new objective function, the hidden-layer outputs are also treated as optimization variables.

The main features of this objective function are: 1. With the hidden-layer outputs fixed, the objective function is convex in the connection weights; with the connection weights fixed, it is convex in the hidden-layer outputs.

Thus, when the connection weights and the hidden-layer outputs are optimized alternately, the objective function faced at each step is convex, so there is no local-minimum problem and the algorithm's sensitivity to the initial weights is reduced. 2. Because the penalty factor is increased gradually, the weight search space becomes relatively large, so even large-scale networks can be trained, which to some extent lowers the chance that training falls into a local minimum.

These properties can largely overcome the major defect of earlier feedforward-network training algorithms, which tend to fall into local minima and cause training to fail, and they also open a new line of research: studying feedforward-network learning algorithms with convex optimization theory.

During network training, the connection weights and the hidden-layer outputs can be optimized alternately, as sketched below.
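The paper's exact Fenchel-based penalty term is not reproduced in this abstract, but the alternating scheme can still be sketched. Below is a minimal conceptual sketch in Python, assuming a single hidden layer, a linear output layer, and a simple quadratic coupling penalty c·||H − XW1||² as a hypothetical stand-in for the paper's penalty term; the function name train_alternating and the schedule parameters c0 and growth are illustrative, not from the paper.

```python
import numpy as np

# Conceptual sketch: alternating optimization of weights and hidden-layer
# outputs with a gradually increasing penalty factor c. The quadratic penalty
# c * ||H - X @ W1||^2 is a hypothetical stand-in for the paper's
# Fenchel-inequality-based term; in the full method the hidden outputs are
# tied to the activations through that penalty, while this sketch keeps H free.

def train_alternating(X, Y, n_hidden, n_iter=50, c0=0.1, growth=1.5):
    n, d = X.shape
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(d, n_hidden))   # input -> hidden weights
    H = np.tanh(X @ W1)                              # hidden outputs, initial guess only
    c = c0
    for _ in range(n_iter):
        # Fix H: the output weights solve a convex least-squares problem.
        W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
        # Fix the weights: minimizing ||H @ W2 - Y||^2 + c * ||H - X @ W1||^2
        # over H is also convex, with the closed-form solution below.
        A = W2 @ W2.T + c * np.eye(n_hidden)
        H = (Y @ W2.T + c * (X @ W1)) @ np.linalg.inv(A)
        # Fix H: refit the input weights by least squares on the penalty term.
        W1, *_ = np.linalg.lstsq(X, H, rcond=None)
        c *= growth                                  # gradually increase the penalty factor
    return W1, W2
```

Each subproblem here is convex, matching the alternating-convexity property described above, and the penalty factor grows from small to large exactly as the text prescribes.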

Applied to feedforward-network training, the new algorithm outperforms traditional training algorithms such as the classical BP algorithm in learning speed, generalization ability, training success rate, and other respects.

Numerical experiments also confirm the effectiveness of the new algorithm.

By comparing the classical BP algorithm with the new algorithm, this paper reaches a preliminary conclusion about the relationship between the two.

It is proved theoretically that as the penalty factor tends to positive infinity, the new algorithm reduces to the BP algorithm, and numerical experiments illustrate the role and significance of the penalty factor in network training.

For a three-layer feedforward network, when the penalty factor is small, the local gradients of the hidden neurons can vary over a wide range, which favors updating the connection weights; when the penalty factor is large, that range is narrow, which hinders weight updating but improves training accuracy.

This explains why the penalty factor is increased from small to large during training, and it also demonstrates the feasibility of the new algorithm, whereas the BP algorithm suffers from the serious defect that the connection weights sometimes cannot be updated at all.

Orebody prediction plays an important role in deposit geology, but because the input sample sets are large, earlier feedforward-network algorithms performed poorly on it.

This paper applies the new feedforward-network algorithm to orebody prediction and obtains good results, as expected.

Finally, the paper summarizes the advantages of the new algorithm and points out aspects that remain to be improved.

Keywords: feedforward neural networks, convex optimization theory, training algorithm, orebody prediction, application

Thesis: "Feedforward Neural Networks Training Algorithm Based on Convex Optimization and Its Application in Deposit Forecasting", JIA Wen-chen (Computer Application), directed by YE Shi-wei.

针对这些缺陷,根据凸函数及其共轭的性质,利用Fenchel不等式,使用约束优化理论中的罚函数方法构造出了带有惩罚项的新误差目标函数。

用新的目标函数对前馈神经网络进行优化训练时,隐层输出也作为被优化变量。

这个目标函数的主要特点有: 1.固定隐层输出,该目标函数对连接权值来说是凸的;固定连接权值,对隐层输出来说是凸的。

这样在对连接权值和隐层输出进行交替优化时,它们所面对的目标函数都是凸函数,不存在局部最小的问题,算法对于初始权值的敏感性降低; 2.由于惩罚因子是逐渐增大的,使得权值的搜索空间变得比较大,从而对于大规模的网络也能够训练,在一定程度上降低了训练过程陷入局部最小的可能性。

这些特性能够在很大程度上有效地克服以往前馈网络的训练算法易于陷入局部最小而使网络训练失败的重大缺陷,也为利用凸优化理论研究前馈神经网络的学习算法开创了一个新思路。

在网络训练时,可以对连接权值和隐层输出进行交替优化。

把这种新算法应用到前馈神经网络训练学习中,在学习速度、泛化能力、网络训练成功率等多方面均优于传统训练算法,如经典的BP算法。

数值试验也表明了这一新算法的有效性。

本文通过典型的BP算法与新算法的比较,得到了二者之间相互关系的初步结论。

从理论上证明了当惩罚因子趋于正无穷大时新算法就是BP算法,并且用数值试验说明了惩罚因子在网络训练算法中的作用和意义。

对于三层前馈神经网络来说,惩罚因子较小时,隐层神经元局部梯度的可变范围大,有利于连接权值的更新;惩罚因子较大时,隐层神经元局部梯度的可变范围小,不利于连接权值的更新,但能提高网络训练精度。

这说明了在网络训练过程中惩罚因子为何从小到大变化的原因,也说明了新算法的可行性而BP算法则时有无法更新连接权值的重大缺陷。

矿体预测在矿床地质中占有重要地位,由于输入样本量大,用以往前馈网络算法进行矿体预测效果不佳。

本文把前馈网络新算法应用到矿体预测中,取得了良好的预期效果。

本文最后指出了新算法的优点,并指出了有待改进的地方。

关键词:前馈神经网络,凸优化理论,训练算法,矿体预测,应用 Feed forward Neural Networks Training Algorithm Based on Convex Optimization and Its Application in Deposit Forcasting JIA Wen-chen (Computer Application) Directed by YE Shi-wei Abstract The paper studies primarily the application of convex optimization theory and algorithm for feed forward works’ training and convergence performance. It reviews the history of feed forward works, points out that the training of feed forward works is essentially a non-linear problem and introduces BP algorithm, its advantages as well as disadvantages and previous improvements for it. One of the big disadvantages of BP algorithm and its improvement algorithms is: because its error target function is non-convex in the weight values between neurons in different layers and exists local minimum point, thus, if the weight values enter local minimum point in weight values space work is trained, it is difficult to skip local minimum point and reach the global minimum point (i.e. the most optimal point).If this happening, the training works will be essful. To e these essential disadvantages, the paper constructs a new error target function including restriction item ording to convex function, Fenchel inequality in the conjugate of convex function and punishment function method in restriction optimization theory. When feed forward works based on the new target function is being trained, hidden layers’ outputs are seen as optimization variables. The main characteristics of the new target function are as follows: 1.With fixed hidden layers’ outputs, the new target function is convex in connecting weight variables; with fixed connecting weight values, the new target function is convex in hidden layers’ outputs. Thus, when connecting weight values and hidden layers’ outputs are optimized alternately, the new target function is convex in them, doesn’t exist local minimum point, and the algorithm’s sensitiveness is reduced for original weight values . 2.Because the punishment factor is increased gradually, weight values ’ searching space gets much bigger, so works can be trained and the possibility of entering local minimum point can be reduced to a certain extent work training process. Using these characteristics can e efficiently in the former feed forward works’ training algorithms the big disadvantage works training enters local minimum point easily. This creats a new idea for feed forward works’ learning algorithms by using convex optimization theory works training, connecting weight variables and hidden layer outputs can be optimized alternately. The new algorithm is much better than traditional algorithms for feed forward works. The numerical experiments show that the new algorithm is essful. paring the new algorithm with the traditional ones, a primary conclusion of their relationship is reached. It is proved theoretically that when the punishment factor nears infinity, the new algorithm is BP algorithm yet. The meaning and function of the punishment factor are also explained by numerical experiments. For three-layer feed forward works, when the punishment factor is smaller, hidden layer outputs’ variable range is bigger and this is in favor to updating of the connecting weights values, when the punishment factor is bigger, hidden layer outputs’ variable range is smaller and this is not in favor to updating of the connecting weights values but it can improve precision works. This explains the reason that the punishment factor should be increased gradually works training process. 
The BP Algorithm and Its Improvements

2.1 Steps of the BP Algorithm

1° Randomly choose initial weights ω0;
2° Input the training sample pairs (Xp, Yp), the learning rate η, and the error tolerance ε;
3° Compute the node outputs of each layer in turn: opi, opj, opk;
4° Correct the weights: ωk+1 = ωk + η·pk, where pk = −∇E(ωk) is the negative-gradient search direction and ωk is the weight vector at the k-th iteration;
5° If the error E < ε, stop; otherwise go to 3°.
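As a minimal sketch of steps 1°–5°, the loop below performs the gradient-descent update with a fixed learning rate. E and grad_E are placeholders for the network's error function and its back-propagated gradient; the toy quadratic in the usage line is only for illustration.

```python
import numpy as np

# Minimal sketch of steps 1-5 for a generic error function E(w) with
# gradient grad_E(w); both stand in for the network's actual error and
# its back-propagated gradient.

def bp_train(E, grad_E, w0, eta=0.1, eps=1e-6, max_iter=10000):
    w = w0.copy()                 # 1: start from the chosen initial weights
    for _ in range(max_iter):     # 2: samples, eta, and eps are given
        p = -grad_E(w)            # 3-4: search direction pk = -grad E(wk)
        w = w + eta * p           # 4: weight correction w_{k+1} = w_k + eta * p_k
        if E(w) < eps:            # 5: stop once the error is small enough
            break
    return w

# Usage on a toy quadratic error (illustration only):
w_star = bp_train(lambda w: float(np.sum(w**2)), lambda w: 2*w, np.array([1.0, -2.0]))
```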

2.2 Determining the Optimal Step Size ηk

In the algorithm above, the learning rate η is essentially a step-size factor along the negative gradient direction. Finding, at each iteration, the optimal step size ηk that decreases the error fastest is a classical one-dimensional search problem: E(ωk + ηk·pk) = min_{η ≥ 0} E(ωk + η·pk).

Let Φ(η) = E(ωk + η·pk); then Φ′(η) = dE(ωk + η·pk)/dη = ∇E(ωk + η·pk)ᵀ·pk.

If ηk is a minimizer of Φ(η), then Φ′(ηk) = 0, i.e., ∇E(ωk + ηk·pk)ᵀ·pk = −pk+1ᵀ·pk = 0, since pk+1 = −∇E(ωk+1) = −∇E(ωk + ηk·pk).

The algorithm for determining ηk is as follows:

1° Set η0 = 0, h = 0.01, ε0 = 0.00001;
2° Compute Φ′(η0); if Φ′(η0) = 0, set ηk = η0 and stop;
3° Set h = 2h and η1 = η0 + h;
4° Compute Φ′(η1); if Φ′(η1) = 0, set ηk = η1 and stop; if Φ′(η1) > 0, set a = η0, b = η1; if Φ′(η1) < 0, set η0 = η1 and go to 3°;
5° Compute Φ′(a); if Φ′(a) = 0, set ηk = a and stop;
6° Compute Φ′(b); if Φ′(b) = 0, set ηk = b and stop;
7° Compute Φ′((a+b)/2); if Φ′((a+b)/2) = 0, set ηk = (a+b)/2 and stop; if Φ′((a+b)/2) < 0, set a = (a+b)/2; if Φ′((a+b)/2) > 0, set b = (a+b)/2;
8° If |a − b| < ε0, set ηk = (a+b)/2 and stop; otherwise go to 7°.
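The sketch below mirrors steps 1°–8°: it expands the trial interval by doubling h until Φ′ changes sign, then bisects. The caller supplies phi_prime, the directional derivative Φ′(η) = ∇E(ωk + η·pk)ᵀ·pk; the function name and the max_bisect safety cap are additions for the sketch, not part of the original procedure.

```python
# Sketch of the eta_k search in steps 1-8: interval expansion followed by
# bisection on the sign of phi'(eta).

def optimal_step(phi_prime, h=0.01, eps0=0.00001, max_bisect=200):
    eta0 = 0.0
    if phi_prime(eta0) == 0.0:           # 2: already at a stationary point
        return eta0
    while True:                          # 3-4: double h until the sign of phi' flips
        h *= 2.0
        eta1 = eta0 + h
        d = phi_prime(eta1)
        if d == 0.0:
            return eta1
        if d > 0.0:                      # bracket found: phi'(a) < 0, phi'(b) > 0
            a, b = eta0, eta1
            break
        eta0 = eta1                      # phi' still negative: keep expanding
    for _ in range(max_bisect):          # 5-8: bisection on [a, b]
        m = (a + b) / 2.0
        dm = phi_prime(m)
        if dm == 0.0 or abs(a - b) < eps0:
            return m
        if dm < 0.0:
            a = m
        else:
            b = m
    return (a + b) / 2.0

# Usage: for phi'(eta) = 2*(eta - 1) the minimizer is eta = 1, and
# optimal_step(lambda eta: 2.0 * (eta - 1.0)) returns approximately 1.0.
```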

2.3 Features of the Improved BP Algorithm

In the improved BP algorithm above, the learning rate η is no longer chosen by the user; instead, the computer automatically finds the optimal step size ηk during each iteration.

In the algorithm for determining ηk, we first set η0 = 0. From the definition Φ(η) = E(ωk + η·pk) we know Φ′(η) = dE(ωk + η·pk)/dη = ∇E(ωk + η·pk)ᵀ·pk, so Φ′(η0) = ∇E(ωk)ᵀ·pk = −pkᵀ·pk ≤ 0.

If Φ′(η0) = 0, the descent direction pk is the zero vector, meaning a local extremum has already been reached; otherwise Φ′(η0) < 0 must hold, and by the properties of the one-dimensional function Φ(η), Φ′(η0) < 0 implies that Φ is decreasing in a neighborhood of η0 = 0.

Hence initializing η0 to 0 at each iteration is reasonable.

Compared with the original BP algorithm, the improved version differs in two places: step 2° no longer requires the user to supply a learning rate η, and the optimal step size ηk is computed before each weight correction, i.e., before step 4°.


A Basic Introduction to the BP Algorithm

Multilayer feedforward networks with hidden layers can greatly improve a neural network's classification ability, but for a long time no effective algorithm existed for adjusting their weights.

In 1986, the research group led by Rumelhart and McClelland gave a thorough analysis of the error back-propagation (BP) algorithm for multilayer feedforward networks with nonlinear continuous transfer functions in the book Parallel Distributed Processing, realizing Minsky's vision for multilayer networks.

The basic idea of the BP algorithm is that learning consists of two processes: forward propagation of signals and backward propagation of errors.

In the forward pass, an input sample enters at the input layer, is processed layer by layer through the hidden layers, and is passed on to the output layer.

If the actual output of the output layer does not match the desired output (the teacher signal), the algorithm switches to the error back-propagation phase.

In error back-propagation, the output error is propagated backward, in some form, through the hidden layers toward the input layer and is apportioned among all the units of each layer. This yields an error signal for each unit, which serves as the basis for correcting that unit's weights.

This layer-by-layer weight adjustment, driven by forward signal propagation and backward error propagation, is repeated over and over.

The process of continually adjusting the weights is exactly the network's learning and training process.

The process continues until the network's output error falls to an acceptable level or a preset number of learning iterations is reached.

[Figure: schematic diagram of the BP network structure]
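As a compact illustration of the two phases just described, the snippet below runs one forward pass and one error back-propagation pass for a tiny network with a single hidden layer of sigmoid units; this minimal network and its squared-error loss are assumed for illustration, not taken from the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, y, W1, W2, eta=0.5):
    # Forward propagation: the input is processed layer by layer.
    h = sigmoid(W1 @ x)                          # hidden-layer outputs
    o = sigmoid(W2 @ h)                          # output-layer outputs
    # Error back-propagation: the output error is passed back through the
    # hidden layer, giving each unit its own error signal.
    delta_o = (o - y) * o * (1.0 - o)            # output-layer error signals
    delta_h = (W2.T @ delta_o) * h * (1.0 - h)   # hidden-layer error signals
    # The error signals are the basis for correcting each unit's weights.
    W2 -= eta * np.outer(delta_o, h)
    W1 -= eta * np.outer(delta_h, x)
    return W1, W2, 0.5 * float(np.sum((o - y) ** 2))
```

Repeating bp_step over the training samples until the returned error is acceptably small is exactly the cyclic adjust-and-propagate process described above.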
