| 1. | A variable-step-size gradient descent optimization algorithm based on chaotic variables |
| 2. | The BP (back-propagation) algorithm is based on the gradient descent rule and is a local optimization algorithm. |
| 3. | A coupled learning algorithm for rule-based feedforward neural networks that combines gradient descent with chaotic optimization search |
| 4. | The network weights are trained with the gradient descent method, and the growth algorithm of BVS as well as the limited-memory recursive formula for network training are derived. |
| 5. | The essence of back-propagation networks is that gradient descent always changes the weights in the direction that reduces the error, until the error reaches a minimum. |
| 6. | A cost function is constructed from the independence of the non-stationary source signals, and an online blind separation algorithm is then derived using gradient descent. |
| 7. | The transition zone quite often shows a decrease in the temperature gradient because water can absorb up to a third more heat than rock. |
| 8. | The parameters of the local model can either be adjusted by gradient descent within the neighborhood together with the SOFM weights, or estimated by least-squares estimation (LSE). |
| 9. | First, the gradient boosting principle of gradient descent on a loss function and the unweighted-sample regression algorithm based on it are described; simulation results for a practical problem are then presented. |
| 10. | ANFIS based on the Takagi-Sugeno fuzzy model has the advantage of being linear in its consequent parameters, so a hybrid learning algorithm combining gradient descent and least squares can be used to tune them, reducing computation and accelerating convergence. |
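The examples above all revolve around the gradient descent update rule: move each parameter a small step against the gradient of the error. As a minimal sketch (not drawn from any of the cited works; the function name, step size, and data are illustrative assumptions), here is gradient descent fitting a single weight by least squares:

```python
def gradient_descent(xs, ys, eta=0.01, steps=1000):
    """Fit y ~ w*x by minimizing squared error E(w) = sum (w*x - y)^2.

    Illustrative sketch: eta (step size) and steps are arbitrary choices.
    """
    w = 0.0
    for _ in range(steps):
        # dE/dw = 2 * sum x * (w*x - y)
        grad = 2.0 * sum(x * (w * x - y) for x, y in zip(xs, ys))
        # Step opposite the gradient so the error always shrinks,
        # which is the rule back-propagation applies layer by layer.
        w -= eta * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated with w = 2
w = gradient_descent(xs, ys)  # converges to w ~ 2.0
```

With a fixed step size the update is w ← 0.72·w + 0.56 on this data, a contraction toward the fixed point w = 2; the variable-step and chaotic variants cited above exist precisely to escape the local minima and slow convergence that this plain rule suffers from on harder error surfaces.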