Convolution and Pooling in MATLAB — UFLDL New Tutorial and Programming Exercises (7): Convolution and Pooling
UFLDL is an early deep-learning introduction written by Andrew Ng's team. Its rhythm of theory followed by exercises is excellent: every time I want to finish the theory quickly and get hands-on with the exercise, because the whole code framework is already built for you, with detailed comments, so you only need to write a small amount of core code. Very quick to get started!
I couldn't find a Chinese translation for this part of the new version, so I'd better write it up early, otherwise I'll lose the feel for it!
Section 7 is: Convolution and Pooling.
Convolution
The earlier multi-layer networks were fully connected networks, whereas convolutional neural networks are locally connected networks. With CNNs so popular now, a figure like the following probably comes to mind whenever convolution is mentioned:
[Figure: typical illustration of a convolution operation (original image failed to upload)]
In fact, for discrete variables the two-dimensional convolution from mathematics is defined as

s(i, j) = (I * W)(i, j) = sum_m sum_n I(m, n) * W(i - m, j - n)

In MATLAB we can use the conv2 function to carry out this 2D convolution conveniently (note that the filter W must first be rotated by 180°, because conv2 implements true convolution rather than cross-correlation). By sliding a small filterDim x filterDim convolution kernel over a large imageDim x imageDim image, we obtain a feature map of size (imageDim - filterDim + 1) x (imageDim - filterDim + 1). Below is my cnnConvolve.m code; it also contains a GPU-accelerated section, which I have commented out.
function convolvedFeatures = cnnConvolve(filterDim, numFilters, images, W, b)
% convolvedFeatures = cnnConvolve(filterDim, numFilters, convImages, W, b);
% in cnnExercise.m: filterDim = 8, numFilters = 100,
% images is 28*28*8, W is 8*8*100, b is 100*1
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  filterDim - filter (feature) dimension
%  numFilters - number of feature maps
%  images - large images to convolve with, matrix in the form
%           images(r, c, image number)
%  W, b - W, b for features from the sparse autoencoder
%         W is of shape (filterDim, filterDim, numFilters)
%         b is of shape (numFilters, 1)
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)

numImages = size(images, 3);
imageDim = size(images, 1);  % images are square
convDim = imageDim - filterDim + 1;  % 28 - 8 + 1 = 21
convolvedFeatures = zeros(convDim, convDim, numFilters, numImages);
% Instructions:
% Convolve every filter with every image here to produce the
% (imageDim - filterDim + 1) x (imageDim - filterDim + 1) x numFeatures x numImages
% matrix convolvedFeatures, such that
% convolvedFeatures(imageRow, imageCol, featureNum, imageNum) is the
% value of the convolved featureNum feature for the imageNum image over
% the region (imageRow, imageCol) to (imageRow + filterDim - 1, imageCol + filterDim - 1)
%
% Expected running times:
% Convolving with 100 images should take less than 30 seconds
% Convolving with 5000 images should take around 2 minutes
% (So to save time when testing, you should convolve with fewer images, as
% described earlier)
for imageNum = 1:numImages
    for filterNum = 1:numFilters
        % convolution of image with feature matrix
        convolvedImage = zeros(convDim, convDim);

        % Obtain the feature (filterDim x filterDim) needed during the convolution
        %%% YOUR CODE HERE %%%
        filter = squeeze(W(:,:,filterNum));

        % Flip the feature matrix because of the definition of convolution
        % (squeeze removes singleton dimensions; a 2-D array is unaffected)
        filter = rot90(squeeze(filter), 2);

        % Obtain the image
        im = squeeze(images(:, :, imageNum));

        % Convolve "filter" with "im", adding the result to convolvedImage;
        % be sure to do a 'valid' convolution
        %%% YOUR CODE HERE %%%
        convolvedImage = conv2(im, filter, 'valid');  % 21*21

        % Add the bias unit, then apply the sigmoid function
        % to get the hidden activation
        %%% YOUR CODE HERE %%%
        convolvedImage = convolvedImage + b(filterNum);
        convolvedImage = sigmoid(convolvedImage);

        convolvedFeatures(:, :, filterNum, imageNum) = convolvedImage;
    end
end
%%%%%%%%%%%%%%%%%%% GPU version (can stay commented out) %%%%%%%%%%%%%
% for imageNum = 1:numImages
%     for filterNum = 1:numFilters
%         % convolution of image with feature matrix
%         convolvedImage = zeros(convDim, convDim);
%         gpu_convolvedImage = gpuArray(convolvedImage);
%
%         % Obtain the feature (filterDim x filterDim) needed during the convolution
%         filter = squeeze(W(:,:,filterNum));
%         % Flip the feature matrix because of the definition of convolution
%         filter = rot90(squeeze(filter), 2);
%
%         % Obtain the image
%         im = squeeze(images(:, :, imageNum));
%
%         % Convolve "filter" with "im" on the GPU ('valid' convolution)
%         gpu_filter = gpuArray(filter);
%         gpu_im = gpuArray(im);
%         gpu_convolvedImage = conv2(gpu_im, gpu_filter, 'valid');
%
%         % Add the bias unit, apply the sigmoid, then gather back to the CPU
%         convolvedImage = gpu_convolvedImage + b(filterNum);
%         convolvedImage = sigmoid(convolvedImage);
%
%         convolvedFeatures(:, :, filterNum, imageNum) = gather(convolvedImage);
%     end
% end
end
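As a cross-check on the rotate-then-conv2 trick used above, the same per-image, per-filter computation can be sketched in Python with NumPy and SciPy (an assumption on my part, since the exercise itself is pure MATLAB): convolving with a 180°-rotated filter is exactly the same as cross-correlating with the original filter.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
im = rng.standard_normal((28, 28))  # one 28x28 image
W = rng.standard_normal((8, 8))     # one 8x8 filter
b = 0.1                             # bias for this filter

# MATLAB's conv2(im, rot90(W, 2), 'valid') corresponds to a 'valid'
# convolution with the 180-degree-rotated filter ...
conv_result = convolve2d(im, np.rot90(W, 2), mode='valid')
# ... which is identical to a 'valid' cross-correlation with W itself
corr_result = correlate2d(im, W, mode='valid')
assert np.allclose(conv_result, corr_result)

# Add the bias and apply the sigmoid, as in cnnConvolve.m
feature_map = sigmoid(conv_result + b)
print(feature_map.shape)  # (21, 21), i.e. (28 - 8 + 1, 28 - 8 + 1)
```

This is exactly why the filter is flipped first: conv2 implements true convolution, while the feature extraction we actually want is a cross-correlation with the learned filter.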
Pooling
The following animation illustrates the pooling operation nicely:
[Figure: animation of the pooling operation (original image failed to upload)]
Pooling reduces the feature dimensionality and therefore the computational cost. Below is my cnnPool.m code; pooling can be implemented either with the mean function or with conv2, and I have commented out one of the two methods:
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
% in cnnExercise.m: poolDim = 3, convolvedFeatures is 21*21*100*8
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(poolRow, poolCol, featureNum, imageNum)
%

numImages = size(convolvedFeatures, 4);
numFilters = size(convolvedFeatures, 3);
convolvedDim = size(convolvedFeatures, 1);
pooledFeatures = zeros(convolvedDim / poolDim, ...
    convolvedDim / poolDim, numFilters, numImages);  % 7*7*100*8
% Instructions:
% Now pool the convolved features in regions of poolDim x poolDim,
% to obtain the
% (convolvedDim/poolDim) x (convolvedDim/poolDim) x numFeatures x numImages
% matrix pooledFeatures, such that
% pooledFeatures(poolRow, poolCol, featureNum, imageNum) is the
% value of the featureNum feature for the imageNum image pooled over the
% corresponding (poolRow, poolCol) pooling region.
%
% Use mean pooling here.
%%% YOUR CODE HERE %%%
%% METHOD 1: pooling with mean
% for imageNum = 1:numImages
%     for filterNum = 1:numFilters
%         pooledImage = zeros(convolvedDim / poolDim, convolvedDim / poolDim);
%         im = convolvedFeatures(:, :, filterNum, imageNum);
%         for i = 1:(convolvedDim / poolDim)
%             for j = 1:(convolvedDim / poolDim)
%                 pooledImage(i, j) = mean(mean(im((i-1)*poolDim+1:i*poolDim, (j-1)*poolDim+1:j*poolDim)));
%             end
%         end
%         pooledFeatures(:, :, filterNum, imageNum) = pooledImage;
%     end
% end
%%======================================================================
%% METHOD 2: pooling with conv2
% (if numImages is large this method may be better; "gpuArray"/conv2 can speed it up!)
pool_filter = 1/(poolDim*poolDim) * ones(poolDim, poolDim);
for imageNum = 1:numImages
    for filterNum = 1:numFilters
        im = convolvedFeatures(:, :, filterNum, imageNum);
        % Average over every poolDim x poolDim window with one conv2 call,
        % then keep every poolDim-th entry so the pooling regions do not overlap
        temp = conv2(im, pool_filter, 'valid');
        pooledFeatures(:, :, filterNum, imageNum) = temp(1:poolDim:end, 1:poolDim:end);
    end
end
end
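The two pooling methods in cnnPool.m must agree exactly. Here is a small Python/NumPy sketch (again an assumption, since the exercise uses MATLAB) comparing direct block averaging against the convolve-then-subsample trick:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
poolDim = 3
im = rng.standard_normal((21, 21))  # one 21x21 convolved feature map
n = im.shape[0] // poolDim          # 7 pooling regions per side

# Method 1: average each non-overlapping poolDim x poolDim block directly
pooled_mean = im.reshape(n, poolDim, n, poolDim).mean(axis=(1, 3))

# Method 2: 'valid'-convolve with a uniform averaging kernel, then keep
# every poolDim-th entry so the averaging windows do not overlap
pool_filter = np.ones((poolDim, poolDim)) / (poolDim * poolDim)
temp = convolve2d(im, pool_filter, mode='valid')
pooled_conv = temp[::poolDim, ::poolDim]

assert np.allclose(pooled_mean, pooled_conv)
print(pooled_mean.shape)  # (7, 7)
```

The uniform kernel is symmetric, so the 180° flip that conv2-style convolution performs makes no difference here; that is why mean pooling can be written as a convolution at all.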
Running results (this exercise is on the easy side; it is just a quick test that lays the groundwork for the convolutional neural network exercise that follows):
[Figure: convolution and pooling test results]
If I have misunderstood anything, please point it out; if you have better ideas, feel free to discuss in the comments below!