Introduction
This code defines a deep convolutional neural network for image classification: DenseNet.
Importing the required libraries
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Dense, Dropout, Activation, Reshape, Permute,
                                     Flatten, Conv2D, Conv2DTranspose, ZeroPadding2D,
                                     AveragePooling2D, GlobalAveragePooling2D,
                                     concatenate, BatchNormalization, TimeDistributed)
from tensorflow.keras.regularizers import l2
Here we import the layers and utility functions needed to build the model. Note that all of them come from tensorflow.keras; mixing layers from the standalone keras package with tensorflow.keras in a single model can lead to errors.
Defining the convolution block
def conv_block(input, growth_rate, dropout_rate=None, weight_decay=1e-4):
    # weight_decay is accepted for signature symmetry but not applied in this block
    x = BatchNormalization(axis=-1, epsilon=1.1e-5)(input)  # pre-activation: BN first
    x = Activation('relu')(x)
    x = Conv2D(growth_rate, (3, 3), kernel_initializer='he_normal', padding='same')(x)
    if dropout_rate:
        x = Dropout(dropout_rate)(x)  # optional Dropout against overfitting
    return x
The conv_block function defines a convolutional block consisting of batch normalization (BatchNormalization), a ReLU activation, and a 3x3 convolutional layer. If dropout_rate is specified, a Dropout layer is appended to guard against overfitting.
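As a quick sanity check (a minimal sketch; the 32x280x64 input shape is an arbitrary example, not from the original), the block maps any number of input channels to exactly growth_rate output channels while preserving height and width:

inp = Input(shape=(32, 280, 64))      # hypothetical feature map
out = conv_block(inp, growth_rate=8)  # BN -> ReLU -> 3x3 conv with 8 filters
print(out.shape)                      # (None, 32, 280, 8)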
Defining the dense block
def dense_block(x, nb_layers, nb_filter, growth_rate, dropout_rate=0.2, weight_decay=1e-4):
    # Stack nb_layers conv blocks; each one sees the concatenation of all previous outputs
    for i in range(nb_layers):
        cb = conv_block(x, growth_rate, dropout_rate, weight_decay)
        x = concatenate([x, cb], axis=-1)  # channel-wise feature reuse
        nb_filter += growth_rate           # channel count grows by growth_rate per layer
    return x, nb_filter
The dense_block function defines a dense block. It applies conv_block repeatedly and concatenates each block's output with its input along the channel axis, enabling feature reuse; after nb_layers iterations the channel count has grown to nb_filter + nb_layers * growth_rate.
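A small sketch of the channel arithmetic (the 64-channel, 16x140 input is an illustrative assumption): starting from 64 channels, 8 layers with growth_rate=8 end at 64 + 8*8 = 128 channels, with the spatial size unchanged:

inp = Input(shape=(16, 140, 64))
x, nb_filter = dense_block(inp, nb_layers=8, nb_filter=64, growth_rate=8, dropout_rate=None)
print(x.shape, nb_filter)  # (None, 16, 140, 128) 128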
Defining the transition block
def transition_block(input, nb_filter, dropout_rate=None, pooltype=1, weight_decay=1e-4):
    # BN -> ReLU -> 1x1 conv to compress the channel count back to nb_filter
    x = BatchNormalization(axis=-1, epsilon=1.1e-5)(input)
    x = Activation('relu')(x)
    x = Conv2D(nb_filter, (1, 1), kernel_initializer='he_normal', padding='same',
               use_bias=False, kernel_regularizer=l2(weight_decay))(x)
    if dropout_rate:
        x = Dropout(dropout_rate)(x)
    # Downsample; the pooling stride depends on pooltype
    if pooltype == 2:
        x = AveragePooling2D((2, 2), strides=(2, 2))(x)  # halve height and width
    elif pooltype == 1:
        x = ZeroPadding2D(padding=(0, 1))(x)             # pad width so it survives pooling
        x = AveragePooling2D((2, 2), strides=(2, 1))(x)  # halve height, width grows by 1
    elif pooltype == 3:
        x = AveragePooling2D((2, 2), strides=(2, 1))(x)  # halve height, width shrinks by 1
    return x, nb_filter
The transition_block function defines the transition between dense blocks. It consists of batch normalization, a ReLU activation, a 1x1 convolution that compresses the channel count to nb_filter, and an average pooling layer; the pooltype argument selects the pooling strides (2 halves both dimensions, while 1 and 3 halve only the height).
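A minimal sketch comparing the three pooltype settings on an assumed 16x140, 128-channel feature map:

inp = Input(shape=(16, 140, 128))
for pooltype in (1, 2, 3):
    x, _ = transition_block(inp, nb_filter=128, pooltype=pooltype)
    print(pooltype, x.shape)
# 1 (None, 8, 141, 128)  height halved; width padded by 1 on each side, then reduced by 1
# 2 (None, 8, 70, 128)   height and width both halved
# 3 (None, 8, 139, 128)  height halved; width reduced by 1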
Defining the full DenseNet model
def dense_cnn(input, nclass):
    _dropout_rate = 0.2
    _weight_decay = 1e-4
    _nb_filter = 64
    # Initial 5x5 convolution with stride 2: 64 filters, halves height and width
    x = Conv2D(_nb_filter, (5, 5), strides=(2, 2), kernel_initializer='he_normal',
               padding='same', use_bias=False, kernel_regularizer=l2(_weight_decay))(input)
    # Three dense blocks (8 layers, growth_rate=8 each) with transition blocks in between
    x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)      # 64 -> 128 channels
    x, _nb_filter = transition_block(x, 128, _dropout_rate, 2, _weight_decay)  # compress to 128, downsample
    x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)      # 128 -> 192 channels
    x, _nb_filter = transition_block(x, 128, _dropout_rate, 2, _weight_decay)  # compress to 128, downsample
    x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)      # 128 -> 192 channels
    x = BatchNormalization(axis=-1, epsilon=1.1e-5)(x)
    x = Activation('relu')(x)
    # Treat each feature-map column as a timestep: (H, W, C) -> (W, H, C) -> (W, H*C)
    x = Permute((2, 1, 3), name='permute')(x)
    x = TimeDistributed(Flatten(), name='flatten')(x)
    y_pred = Dense(nclass, name='out', activation='softmax')(x)
    basemodel = Model(inputs=input, outputs=y_pred)
    basemodel.summary()
    return y_pred  # returns the output tensor; callers typically rebuild a Model around it
The dense_cnn function defines the full DenseNet model structure:
Initial convolution layer:
x = Conv2D(_nb_filter, (5, 5), strides=(2, 2), kernel_initializer='he_normal', padding='same', use_bias=False, kernel_regularizer=l2(_weight_decay))(input)
First dense block and transition block:
x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)
x, _nb_filter = transition_block(x, 128, _dropout_rate, 2, _weight_decay)
Second dense block and transition block:
x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)
x, _nb_filter = transition_block(x, 128, _dropout_rate, 2, _weight_decay)
Third dense block:
x, _nb_filter = dense_block(x, 8, _nb_filter, 8, None, _weight_decay)
Final batch normalization and activation:
x = BatchNormalization(axis=-1, epsilon=1.1e-5)(x)
x = Activation('relu')(x)
Permuting the feature map and flattening each timestep:
x = Permute((2, 1, 3), name='permute')(x)
x = TimeDistributed(Flatten(), name='flatten')(x)
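To make this step concrete, here is a shape trace under an assumed 32x280x1 grayscale input (the input size is an illustrative assumption, not fixed by the code):

# (32, 280, 1)    assumed input
# (16, 140, 64)   after the stride-2 5x5 convolution
# (16, 140, 128)  after dense block 1 (64 + 8*8 channels)
# (8, 70, 128)    after transition 1 (1x1 conv to 128, pooltype=2)
# (8, 70, 192)    after dense block 2 (128 + 64 channels)
# (4, 35, 128)    after transition 2
# (4, 35, 192)    after dense block 3, then BN + ReLU
# (35, 4, 192)    after Permute((2, 1, 3)): width becomes the time axis
# (35, 768)       after TimeDistributed(Flatten()): one 768-dim vector per column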
Output layer:
y_pred = Dense(nclass, name='out', activation='softmax')(x)
Building the model and printing the summary:
basemodel = Model(inputs=input, outputs=y_pred)
basemodel.summary()
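A minimal usage sketch (the input size and class count below are illustrative assumptions): since dense_cnn returns the output tensor rather than the model, the caller wraps it in a Model before compiling:

input_tensor = Input(shape=(32, 280, 1), name='the_input')  # assumed grayscale input
y_pred = dense_cnn(input_tensor, nclass=5000)               # assumed number of classes
model = Model(inputs=input_tensor, outputs=y_pred)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])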
This code defines a DenseNet-based convolutional neural network for image classification tasks, built from convolution blocks, dense blocks, transition blocks, and a final output layer; note that because of the Permute/TimeDistributed tail, the network emits one softmax prediction per feature-map column rather than a single vector per image. By combining these blocks, the model can extract and reuse image features efficiently. In the next article we will train this network.