Derivation of the Backpropagation (BP) Algorithm and Implementation of a Feedforward Neural Network


By 黄昏隐修所 | Published 2019-03-18 04:25

1. Derivation of the BP Algorithm

Consider a feedforward neural network, i.e., a network that contains no feedback connections.



As shown in the figure, w_{ji} denotes the weight of the connection from neuron i in the previous layer to neuron j in the current layer.
Let the training set be D and the set of output neurons be outputs; target values and predicted values are denoted t and o respectively. For a regression problem, the loss function of the whole network is defined as:

E(w) = \frac{1}{2}\sum_{d{\in}D}\sum_{k{\in}outputs}(t_{kd}-o_{kd})^2
Since every sample enters the total loss additively and in the same way, it suffices to consider the loss on a single sample d:

E_d(w) = \frac{1}{2}\sum_{k{\in}outputs}(t_{k}-o_{k})^2
Define the following notation:

net_j=\sum_i w_{ji}x_{ji}: the weighted sum of the inputs to neuron j in the current layer
Downstream(j): the set of neurons in the next layer whose inputs include the output of neuron j

To update each w_{ji}, we need the gradient
{\Delta}w_{ji}=\frac{\partial{E_d}}{\partial{w_{ji}}}
By the chain rule,
\frac{\partial{E_d}}{\partial{w_{ji}}}=\frac{\partial{E_d}}{\partial{net_j}}\frac{\partial{net_j}}{\partial{w_{ji}}}=\frac{\partial{E_d}}{\partial{net_j}}x_{ji}
Define the error term {\delta}_j=\frac{\partial{E_d}}{\partial{net_j}}.
Let the activation function be the sigmoid \sigma, whose derivative satisfies {\sigma}'={\sigma}(1-{\sigma}).
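
As a quick check of the identity used above:

\sigma(x)=\frac{1}{1+e^{-x}},\qquad {\sigma}'(x)=\frac{e^{-x}}{(1+e^{-x})^2}=\frac{1}{1+e^{-x}}\cdot\frac{e^{-x}}{1+e^{-x}}=\sigma(x)\,(1-\sigma(x))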

1.1 Gradient of the Output Layer

For a regression problem, no activation function is applied at the output layer (so o_k = net_k and \frac{\partial o_k}{\partial net_k}=1). We have

{\delta}_k = \frac{\partial{E_d}}{{\partial}o_k}\frac{{\partial}o_k}{\partial{net_k}}=\frac{\partial{E_d}}{{\partial}o_k}=\frac{\partial}{{\partial}o_k}\frac{1}{2}\sum_{j{\in}outputs}(t_{j}-o_{j})^2=-(t_{k}-o_{k})
{\Delta}w_{ki}=\frac{\partial{E_d}}{\partial{w_{ki}}}={\delta}_kx_{ki}
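
In vectorized NumPy form this becomes a one-liner. A minimal sketch, where the array names t, o and x_hidden are illustrative and stand for the targets, the linear outputs and the activations feeding the output layer:

import numpy as np

# illustrative shapes: 4 samples, 3 hidden units, 1 output unit
x_hidden = np.random.rand(4, 3)          # activations x_{ki} feeding the output layer
t = np.random.rand(4, 1)                 # targets t_k
o = x_hidden @ np.random.randn(3, 1)     # linear outputs o_k = net_k

delta_out = o - t                        # error term: delta_k = -(t_k - o_k)
grad_W_out = x_hidden.T @ delta_out      # Delta w_{ki} = delta_k * x_{ki}, summed over the batch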

1.2 Gradient of the Hidden Layers

The error term of a hidden neuron is computed from the error terms of the neurons in the next layer: starting from the output-layer result we compute the last hidden layer, then work backwards layer by layer, hence the name backpropagation.

\delta_h=\frac{\partial{E_d}}{\partial{net_h}}=\sum_{k{\in}Downstream(h)}\frac{\partial{E_d}}{\partial{net_k}}\frac{\partial{net_k}}{\partial{net_h}}
=\sum_{k{\in}Downstream(h)}\frac{\partial{E_d}}{\partial{net_k}}\frac{\partial{net_k}}{\partial{o_h}}\frac{\partial{o_h}}{\partial{net_h}}
=\sum_{k{\in}Downstream(h)}\delta_kw_{kh}\frac{\partial{\sigma(net_h)}}{\partial{net_h}}
=o_h(1-o_h)\sum_{k{\in}Downstream(h)}\delta_kw_{kh}
{\Delta}w_{hi}=\frac{\partial{E_d}}{\partial{w_{hi}}}=\frac{\partial{E_d}}{\partial{net_h}}\frac{\partial{net_h}}{\partial{w_{hi}}}={\delta}_hx_{hi}
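
This recursion is exactly what the implementation in section 2 vectorizes. A sketch with illustrative array names (W_next stores w_{kh} with shape (n_hidden, n_next), matching the (inputs, outputs) layout used below):

import numpy as np

# illustrative shapes: 4 samples, 5 inputs, 3 hidden units, 2 downstream units
x_prev = np.random.rand(4, 5)            # inputs x_{hi} to the hidden layer
o_h = np.random.rand(4, 3)               # hidden-layer outputs
W_next = np.random.randn(3, 2)           # weights w_{kh} into the downstream layer
delta_next = np.random.randn(4, 2)       # downstream error terms delta_k

delta_h = o_h * (1 - o_h) * (delta_next @ W_next.T)  # delta_h = o_h(1-o_h) * sum_k delta_k w_{kh}
grad_W_h = x_prev.T @ delta_h                        # Delta w_{hi} = delta_h * x_{hi}, summed over the batch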

1.3 BP Algorithm Procedure

Initialize the parameters. Then, for each training sample (x, t), perform the following four steps (a per-sample code sketch follows the list):

  1. Run one forward pass and compute the output o_p of every neuron p
  2. For each output-layer neuron k, compute the error term \delta_k:

\delta_k\ {\leftarrow}\ -(t_{k}-o_{k})

  3. For each hidden-layer neuron h, compute the error term \delta_h:

\delta_h\ {\leftarrow}\ o_h(1-o_h)\sum_{k{\in}Downstream(h)}\delta_kw_{kh}

  4. Update each parameter w_{ji}:
    w_{ji}:=w_{ji}-\eta\Delta{w_{ji}}=w_{ji}-\eta\delta_jx_{ji}
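
The four steps above, written out for a single training sample and a single hidden layer (a minimal sketch; the sizes, seed and learning rate are illustrative, and section 2 gives the full mini-batch version):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), np.zeros(3)   # input -> hidden
W2, b2 = rng.standard_normal((3, 1)), np.zeros(1)   # hidden -> linear output
eta = 0.1
x, t = rng.random(5), rng.random(1)                 # one training sample (x, t)

# 1. forward pass
o_h = sigmoid(x @ W1 + b1)
o_k = o_h @ W2 + b2

# 2. output-layer error term
delta_k = -(t - o_k)

# 3. hidden-layer error term
delta_h = o_h * (1 - o_h) * (W2 @ delta_k)

# 4. gradient-descent update
W2 -= eta * np.outer(o_h, delta_k); b2 -= eta * delta_k
W1 -= eta * np.outer(x, delta_h);   b1 -= eta * delta_h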

2. Implementation of a Simple Feedforward Neural Network

The implementation is mini-batch based: each parameter update uses a randomly drawn batch of batch_size samples.

import copy
import sys

import numpy as np


class SimpleFeedForwardNetwork(object):
    """Mini-batch based feedforward neural network for regression."""
    def __init__(self, input_units, hidden_units, output_units=1):
        self._n_input_units = input_units
        self._n_hidden_units = copy.deepcopy(hidden_units)
        self._n_output_units = output_units

        self._weights = list()
        self._bias = list()

        # Initialize the parameters of every layer connection from a standard normal distribution
        input_nums = [self._n_input_units] + self._n_hidden_units
        output_nums = self._n_hidden_units + [self._n_output_units]
        for in_units, out_units in zip(input_nums, output_nums):
            self._weights.append(np.random.randn(in_units, out_units))
            self._bias.append(np.random.randn(out_units))

    def fit(self, X, y, batch_size, epochs, eta):
        input_idx = np.arange(X.shape[0])
        evaluate_res = list()
        for epoch in range(epochs):
            np.random.shuffle(input_idx)
            for i in range(0, input_idx.shape[0], batch_size):
                batch_idx = input_idx[i: i+batch_size]
                # the last batch of an epoch may be smaller than batch_size;
                # use a separate variable so the loop step above is not altered
                cur_batch_size = batch_idx.shape[0]

                delta_weights, delta_bias = self._backward(X[batch_idx], y[batch_idx])
                for idx, (delta_w, delta_b) in enumerate(zip(delta_weights, delta_bias)):
                    self._weights[idx] -= eta / cur_batch_size * delta_w
                    self._bias[idx] -= eta / cur_batch_size * delta_b

            evaluate_res.append(self.evaluate(X, y))
            sys.stdout.write('Epoch {}: {}\n'.format(epoch, evaluate_res[-1]))

    def predict(self, X):
        layer_outs = self._forward(X)
        return layer_outs[-1]

    def evaluate(self, X, y):
        y_pred = self.predict(X)
        return np.square(y - y_pred).sum()

    def _activation(self, z):
        return 1 / (1 + np.exp(-z))

    # Forward pass: compute and collect the outputs of every layer
    def _forward(self, inputs):
        layer_outs = [inputs]
        for w, b in zip(self._weights[:-1], self._bias[:-1]):
            net = np.dot(layer_outs[-1], w) + b
            layer_outs.append(self._activation(net))

        layer_outs.append(np.dot(layer_outs[-1], self._weights[-1]) + self._bias[-1])
        return layer_outs

    # Backward pass: compute the gradients of the loss w.r.t. every weight and bias
    def _backward(self, inputs, outputs):
        # 1. Forward pass to obtain the predictions
        layer_outs = self._forward(inputs)

        delta_weights = [np.zeros(w.shape) for w in self._weights]
        delta_bias = [np.zeros(b.shape) for b in self._bias]

        # 2. Error term and gradients at the output layer
        delta = layer_outs[-1] - outputs
        delta_weights[-1] = np.dot(layer_outs[-2].T, delta)
        delta_bias[-1] = delta.sum(axis=0)

        # 3. Error terms and gradients at the hidden layers, propagating backwards
        for h in range(2, len(self._weights)+1):
            delta = np.dot(delta, self._weights[-h + 1].T) * layer_outs[-h] * (1 - layer_outs[-h])
            delta_weights[-h] = np.dot(layer_outs[-h - 1].T, delta)
            delta_bias[-h] = delta.sum(axis=0)

        return delta_weights, delta_bias
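
A minimal usage sketch on synthetic regression data (the data, layer sizes and hyperparameters below are illustrative, not from the original post):

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = X.sum(axis=1, keepdims=True) + 0.05 * rng.standard_normal((200, 1))

    net = SimpleFeedForwardNetwork(input_units=4, hidden_units=[8, 8], output_units=1)
    net.fit(X, y, batch_size=16, epochs=50, eta=0.1)
    print(net.predict(X[:5]))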
