Fine-tuning Inception-V3 pre-trained weights for cat vs. dog classification (with Keras)
This post fine-tunes the pre-trained Inception-V3 weights to classify cats and dogs. The weight file can be downloaded from the resources section of my blog: https://download.csdn.net/download/fanzonghao/10566634
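If you would rather not download the .h5 file by hand, a minimal alternative (my own addition, not part of the original code) is to let Keras fetch the ImageNet "notop" weights itself by passing weights='imagenet'; this assumes the machine has internet access:

from keras.applications.inception_v3 import InceptionV3

# Assumption: network access is available; Keras downloads the notop ImageNet
# weights into ~/.keras/models the first time this line runs.
pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                include_top=False,   # drop the fully connected top
                                weights='imagenet')  # instead of load_weights() on a local .h5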
Approach 1: leave the pre-trained weights unchanged and use the mixed7 layer directly for feature extraction (the printed layer names below show where mixed7 sits), then flatten its output and attach two fully connected layers.
def define_model():
    InceptionV3_weight_path = './model_weight/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
    pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                    include_top=False,  # exclude the fully connected top layers
                                    weights=None)
    pre_trained_model.load_weights(InceptionV3_weight_path)
    # Choose one of the two options below.
    # Option 1: use the network only as a feature extractor, no weight updates
    for layer in pre_trained_model.layers:
        print(layer.name)
        layer.trainable = False
    # Option 2: fine-tune the weights
    # unfreeze = False
    # for layer in pre_trained_model.layers:
    #     if unfreeze:
    #         layer.trainable = True
    #     if layer.name == 'mixed6':
    #         unfreeze = True
    last_layer = pre_trained_model.get_layer('mixed7')
    print(last_layer.output_shape)
    last_output = last_layer.output
    # The layers below are added on top of the pre-trained model
    x = layers.Flatten()(last_output)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(1, activation='sigmoid')(x)
    model = Model(inputs=pre_trained_model.input, outputs=x)
    return model

The complete code for approach 1, which uses the Inception-V3 pre-trained weights as-is:
import os
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.applications.inception_v3 import InceptionV3
from keras import layers
from keras.models import Model
from keras.optimizers import RMSprop
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
import data_read

"""
Build the image generators, with data augmentation
"""
def data_deal_overfit():
    # get the data paths
    train_dir, validation_dir, next_cat_pix, next_dog_pix = data_read.read_data()
    # data augmentation
    train_datagen = ImageDataGenerator(rescale=1./255,
                                       rotation_range=40,
                                       width_shift_range=0.2,
                                       height_shift_range=0.2,
                                       shear_range=0.2,
                                       zoom_range=0.2,
                                       horizontal_flip=True,
                                       fill_mode='nearest')
    test_datagen = ImageDataGenerator(rescale=1./255)
    # read the images from the directories
    train_generator = train_datagen.flow_from_directory(train_dir,
                                                        target_size=(150, 150),
                                                        batch_size=20,
                                                        class_mode='binary')
    test_generator = test_datagen.flow_from_directory(validation_dir,
                                                      target_size=(150, 150),
                                                      batch_size=20,
                                                      class_mode='binary')
    return train_generator, test_generator

"""
Define the model, with dropout added
"""
def define_model():
    InceptionV3_weight_path = './model_weight/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
    pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                    include_top=False,  # exclude the fully connected top layers
                                    weights=None)
    pre_trained_model.load_weights(InceptionV3_weight_path)
    # Choose one of the two options below.
    # Option 1: use the network only as a feature extractor, no weight updates
    for layer in pre_trained_model.layers:
        print(layer.name)
        layer.trainable = False
    # Option 2: fine-tune the weights
    # unfreeze = False
    # for layer in pre_trained_model.layers:
    #     if unfreeze:
    #         layer.trainable = True
    #     if layer.name == 'mixed6':
    #         unfreeze = True
    last_layer = pre_trained_model.get_layer('mixed7')
    print(last_layer.output_shape)
    last_output = last_layer.output
    # The layers below are added on top of the pre-trained model
    x = layers.Flatten()(last_output)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(1, activation='sigmoid')(x)
    model = Model(inputs=pre_trained_model.input, outputs=x)
    return model

"""
Train the model
"""
def train_model():
    model = define_model()
    model.compile(optimizer=RMSprop(lr=0.001),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    train_generator, test_generator = data_deal_overfit()
    # verbose: 0 = silent, 1 = progress bar, 2 = one line per epoch
    # train the model; history records the accuracy and loss curves
    history = model.fit_generator(train_generator,
                                  steps_per_epoch=100,  # 2000 images = batch_size * steps
                                  epochs=50,
                                  validation_data=test_generator,
                                  validation_steps=50,  # 1000 = 20 * 50
                                  verbose=2)
    # accuracy
    acc = history.history['acc']
    val_acc = history.history['val_acc']
    # loss
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    # number of epochs
    epochs = range(len(acc))
    plt.plot(epochs, acc)
    plt.plot(epochs, val_acc)
    plt.title('training and validation accuracy')
    plt.figure()
    plt.plot(epochs, loss)
    plt.plot(epochs, val_loss)
    plt.title('training and validation loss')
    plt.show()

if __name__ == '__main__':
    train_model()

Printed output: these are the names of each layer. The mixed7 layer's shape is (None, 7, 7, 768); approach 1 simply takes the features from this layer (using the weights of this layer and everything before it), flattens them, and adds two dense layers on top for classification.
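To double-check that the base network really is frozen and that mixed7 has the expected (None, 7, 7, 768) shape, a quick sketch like the one below can be run after define_model(); this snippet is my own addition for illustration:

model = define_model()

# Count trainable vs. frozen weight tensors; with option 1 only the newly added
# Flatten/Dense/Dropout head should contribute trainable weights.
print('trainable weight tensors:', len(model.trainable_weights))
print('frozen weight tensors:', len(model.non_trainable_weights))

# Confirm the shape of the feature map that gets flattened.
print(model.get_layer('mixed7').output_shape)  # expected: (None, 7, 7, 768)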
Approach 2: fine-tuning. There is no need to re-learn all of the weights, because the earlier layers capture simple, generic features; only the deeper layers become task-specific. So only the convolutional weights of the mixed7 block (the layers after mixed6) are retrained. Code:
unfreeze = False
for layer in pre_trained_model.layers:
    if unfreeze:
        layer.trainable = True
    if layer.name == 'mixed6':
        unfreeze = True

That is, just swap this in for the commented-out block in the complete code above.
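One detail worth adding: when layers are unfrozen for fine-tuning, it is common practice to recompile the model with a much smaller learning rate so that the pre-trained weights are not destroyed in the first few updates. The original code keeps lr=0.001; the sketch below uses 1e-5, which is my assumption rather than a value from the post:

from keras.optimizers import RMSprop

# Changes to layer.trainable only take effect after the model is (re)compiled;
# use a small learning rate for the unfrozen Inception layers.
model.compile(optimizer=RMSprop(lr=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])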
創(chuàng)作挑戰(zhàn)賽新人創(chuàng)作獎(jiǎng)勵(lì)來(lái)咯,堅(jiān)持創(chuàng)作打卡瓜分現(xiàn)金大獎(jiǎng)總結(jié)
以上是生活随笔為你收集整理的利用Inception-V3训练的权重微调,实现猫狗分类(基于keras)的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問(wèn)題。
- 上一篇: haar级联分类器--人脸检测和匹配
- 下一篇: Android之Tab类总结