Several Difficulties of RNNs (Recurrent Neural Networks)
1. Vanishing gradients
The gradient of the RNN's error at time step t with respect to the recurrent weight W is:
\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\) (Equation 1),
其中\(zhòng)(h\)是hidden node的輸出,\(y_t\)是網(wǎng)絡(luò)在t時(shí)刻的output,\(W\)是hidden nodes 到hidden nodes的weight,而\(\frac{\partial h_t}{\partial h_k}\)是導(dǎo)數(shù)在時(shí)間段[k,t]上的鏈?zhǔn)秸归_,這段時(shí)間可能很長(zhǎng),會(huì)造成vanish或者explosion gradiant。將\(\frac{\partial h_t}{\partial h_k}\)沿時(shí)間展開:\(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag [\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}]\)。上式中的diag矩陣是個(gè)什么鬼?我來(lái)舉個(gè)例子,你就明白了。假設(shè)現(xiàn)在要求解\(\frac{\partial h_5}{\partial h_4}\),回憶向前傳播時(shí)\(h_5\)是怎么得到的:\(h_5=W\sigma(h_4)+W^{hx}x_4\),則\(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\),注意到\(\sigma(h_4)\)和\(h_4\)都是向量(維度為D),所以\(\frac{\partial \sigma(h_4)}{\partial h_4}\)是Jacobian矩陣也即:\(\frac{\partial \sigma(h_4)}{\partial h_4}=\) \(\begin{bmatrix} \frac{\partial\sigma_1(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_1(h_{41})}{\partial h_{4D}}?\\ \vdots&\cdots&\vdots?\\ \frac{\partial\sigma_D(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_D(h_{4D})}{\partial h_{4D}}\end{bmatrix}\),明顯的,非對(duì)角線上的值都是0。這是因?yàn)閟igmoid logistic function \(\sigma\)是element-wise的操作。
The rest of the vanishing/exploding gradient derivation is straightforward, so I will not repeat it here; see equation (14) and onwards in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
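Before moving on, here is a small sketch of the vanishing side of the mechanism (my own, purely illustrative; the hidden size, weight scale, and number of steps are arbitrary choices, and the input term is omitted): accumulate the product \(\prod_{j=k+1}^{t}W^T diag[\sigma'(h_{j-1})]\) over a growing time span and watch its norm shrink.

```python
# My own illustrative sketch: the norm of prod_{j=k+1}^{t} W^T diag[sigma'(h_{j-1})]
# decays roughly geometrically as the gap t - k grows when the recurrent weights are
# of modest scale (sigma'(.) <= 0.25 helps shrink each factor).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(0)
D = 10
W = np.random.randn(D, D) / np.sqrt(D)    # recurrent weights (modest scale, illustrative)
h = np.random.randn(D)                    # h_k, the starting hidden state

prod = np.eye(D)
for gap in range(1, 51):                  # gap = t - k
    jac = W.T @ np.diag(sigmoid(h) * (1 - sigmoid(h)))   # dh_j / dh_{j-1}
    prod = jac @ prod
    h = W @ sigmoid(h)                    # forward step h_j = W sigma(h_{j-1}) (input term omitted)
    if gap % 10 == 0:
        print(gap, np.linalg.norm(prod))  # the norm shrinks toward 0 as the gap grows
```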
2. When weights are shared (tied), the gradient of the tied weight = the sum of the gradients of the individual weights
An example makes this clear. Suppose the forward pass is \(y=F[W_1f(W_2x)]\), the weights \(W_1\) and \(W_2\) are tied, and we want the gradient \(\frac{\partial y}{\partial W}\).
Method 1:
First compute the gradient \(\frac{\partial y}{\partial W_1} = F'[]f() \)
Then compute the gradient \(\frac{\partial y}{\partial W_2} = F'[](W_1f'()x) \)
Adding the two gives \(F'[]f()+F'[](W_1f'()x)=F'[](f()+W_1f'()x)\)
Since the weights \(W_1\) and \(W_2\) are tied (both equal \(W\)), this becomes \(F'[](f()+Wf'()x) = \frac{\partial y}{\partial W} \)
Method 2:
Now take a different approach: assume from the start that \(W_1\) and \(W_2\) are tied (both equal \(W\)) and differentiate directly:
\(\frac{\partial y}{\partial W} = F'[]\frac{\partial (Wf())}{\partial W} = F'[](f() + W\frac{\partial f()}{\partial W}) = F'[](f()+Wf'()x) \)
As you can see, the two methods give the same result. So when a weight is shared, the gradient with respect to the shared weight equals the sum of the gradients of the individual (untied) weights.
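A scalar numerical check of this rule (my own sketch; the concrete choices of F = tanh, f = sigmoid, W = 0.7, and x = 1.3 are illustrative, not from the original post):

```python
# My own scalar sketch of the tied-weight rule: summing the per-weight gradients of
# y = F[W1 f(W2 x)] at W1 = W2 = W matches differentiating y = F[W f(W x)] directly.
import numpy as np

F  = np.tanh                                 # outer nonlinearity F[.]
dF = lambda a: 1.0 - np.tanh(a) ** 2
f  = lambda a: 1.0 / (1.0 + np.exp(-a))      # inner nonlinearity f(.)
df = lambda a: f(a) * (1.0 - f(a))

W, x = 0.7, 1.3                              # tied weight and input (scalars for simplicity)

# Method 1: treat W1, W2 as separate, differentiate each, then sum (with W1 = W2 = W)
inner = W * x                                # W2 x
pre   = W * f(inner)                         # W1 f(W2 x)
g_W1  = dF(pre) * f(inner)                   # dy/dW1 = F'[] f()
g_W2  = dF(pre) * W * df(inner) * x          # dy/dW2 = F'[] W1 f'() x
summed = g_W1 + g_W2

# Method 2: tie the weight from the start and use a central finite difference
y   = lambda w: F(w * f(w * x))
eps = 1e-6
direct = (y(W + eps) - y(W - eps)) / (2 * eps)

print(summed, direct)                        # the two agree up to finite-difference error
```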
3. How do LSTMs & Gated Recurrent Units avoid vanishing gradients?
To understand this, you will have to go through some math. The most accessible article on recurrent gradient problems, in my opinion, is Pascanu's ICML 2013 paper [1].
A summary: vanishing/exploding gradients come from the repeated application of the recurrent weight matrix [2]. A spectral radius of the recurrent weight matrix greater than 1 makes exploding gradients possible (it is a necessary condition), while a spectral radius smaller than 1 makes gradients vanish (a sufficient condition).
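A tiny numerical illustration of the spectral-radius point (my own sketch; it looks only at repeated powers of a fixed matrix and ignores the diag[σ'] factors from the expansion of Equation 1):

```python
# My own sketch: repeated application of a fixed matrix shrinks toward 0 or blows up
# depending on whether its spectral radius is below or above 1.
import numpy as np

np.random.seed(0)
D = 8
for rho in (0.9, 1.1):
    A = np.random.randn(D, D)
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))       # rescale to spectral radius rho
    print(rho, np.linalg.norm(np.linalg.matrix_power(A, 100)))
    # rho = 0.9 -> norm near 0; rho = 1.1 -> norm is huge
```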
Now, if gradients vanish, that does not mean that all gradients vanish; only some of them do, and gradient information that is local in time will still be present. That means you might still have a non-zero gradient, but it will not contain long-term information. That's because some gradient g + 0 is still g. (In Equation 1 above, the terms are summed, so some terms being 0 does not drive the whole sum to 0.)
If gradients explode, all of them do, because some gradient g + infinity is infinity. (In Equation 1 above, the terms are summed, so a single infinite term makes the whole sum infinite.)
That is the reason why the LSTM does not protect you from exploding gradients: the LSTM also uses a recurrent weight matrix (the hidden output h(t) = o(t) ⊙ tanh(c(t)) is fed back into the gates), not only the internal state-to-state connection (c(t) = f(t) ⊙ c(t-1) + i(t) ⊙ c̃(t)). Successful LSTM applications typically use gradient clipping.
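As an aside, here is a minimal sketch of what norm-based gradient clipping looks like (my own illustration in the spirit of Pascanu et al. [1]; the threshold of 5.0 is an arbitrary choice):

```python
# My own minimal sketch of norm-based gradient clipping.
import numpy as np

def clip_by_norm(grad, threshold=5.0):
    """Rescale grad so that its L2 norm never exceeds threshold."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = 100.0 * np.random.randn(1000)            # a pretend exploding gradient
print(np.linalg.norm(clip_by_norm(g)))       # <= 5.0
```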
The LSTM does, however, overcome the vanishing gradient problem. If you look at the derivative of the internal state at time T with respect to the internal state at time T-1, there is no repeated weight application; the derivative is actually the value of the forget gate. And to avoid this becoming zero, we need to initialise the forget gate properly in the beginning.
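A simplified sketch of that point (my own; it treats the gate values at each step as given constants, i.e. it ignores their dependence on h(t-1), and the gate values themselves are made up):

```python
# My own simplified sketch: along the cell-state path c(t) = f(t) * c(t-1) + i(t) * c~(t),
# and treating the gate values as given constants, dc(t)/dc(t-1) is just diag(f(t)) --
# no recurrent weight matrix is applied repeatedly, so the gradient over T steps is the
# element-wise product of the forget gates.
import numpy as np

np.random.seed(0)
D, T = 5, 100
forget_gates = 0.98 + 0.02 * np.random.rand(T, D)   # gates kept close to 1 (illustrative)

grad_path = np.ones(D)                # dc(T)/dc(0) along the cell-state path, element-wise
for t in range(T):
    grad_path *= forget_gates[t]      # multiply by diag(f(t)); no W involved

print(grad_path)                      # still around 0.3-0.4 after 100 steps
print(0.25 ** 100)                    # vs. ~6e-61 if each factor were a sigmoid slope of 0.25
```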
This makes it clear why the cell states can act as "a wormhole through time": they can bridge long time lags and then (when the time is right) "re-inject" the stored information into the rest of the network by opening the output gate.
[1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." arXiv preprint arXiv:1211.5063 (2012).
[2] Gradients might also "vanish" due to saturating nonlinearities, but that is something that can also happen in shallow nets and can be overcome with more careful weight initialisation.
ref: Recursive Deep Learning for Natural Language Processing and Computer Vision.pdf
     CS224D-3-note bp.pdf
To be continued...
Reposted from: https://www.cnblogs.com/congliu/p/4546634.html