How can TensorFlow and a pre-trained model be used for fine-tuning?


TensorFlow and a pre-trained model can be used for fine-tuning by setting the 'trainable' attribute of the 'base_model' to True. Before fine-tuning begins, the layers of the base model are frozen.
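As a minimal sketch of that initial frozen state, assuming MobileNetV2 from tf.keras.applications as the pre-trained model and the (160, 160, 3) input shape used in the TensorFlow tutorial credited below:

import tensorflow as tf

# Load a pre-trained model without its top classification layer
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights='imagenet')

# Freeze the entire base model for the initial feature-extraction phase
base_model.trainable = False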


A neural network that contains at least one convolutional layer is known as a convolutional neural network. We can use a Convolutional Neural Network to build the learning model.

We will understand how to classify images of cats and dogs with the help of transfer learning from a pre-trained network. The intuition behind transfer learning for image classification is that a model trained on a large and general dataset can effectively serve as a generic model of the visual world. It will already have learned useful feature maps, which means the user won't have to start from scratch by training a large model on a large dataset.
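Continuing the sketch above, a small classification head can be stacked on top of the frozen base model for the binary cat/dog task; the layer choices and learning rate here follow the linked tutorial and are illustrative:

import tensorflow as tf

# Stack a classification head on top of the frozen base model
model = tf.keras.Sequential([
    base_model,                                # frozen MobileNetV2 from the earlier sketch
    tf.keras.layers.GlobalAveragePooling2D(),  # pool feature maps into a single vector
    tf.keras.layers.Dense(1)                   # single logit: cat vs. dog
])

# Compile for the initial feature-extraction training phase
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'])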


We are using Google Colaboratory to run the code below. Google Colab or Colaboratory helps run Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphics Processing Units). Colaboratory is built on top of Jupyter Notebook.

Example

base_model.trainable = True
print("Number of layers in the base model are: ", len(base_model.layers))
print("Fine tuning begins")
fine_tune_at = 100
print("Layers are frozen before 'fine_tune_at' layer")
# Re-freeze every layer below the `fine_tune_at` boundary so that
# only the top layers are fine-tuned
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

Code credit − https://www.tensorflow.org/tutorials/images/transfer_learning

Output

Number of layers in the base model are: 154
Fine tuning begins
Layers are frozen before 'fine_tune_at' layer

Explanation

  • One way to increase performance is to fine-tune (train) the weights of the top layers of the pre-trained model alongside the training of the classifier that was added.

  • The training process forces the weights to be tuned from generic feature maps to features associated with the particular dataset.

  • Only a small number of top layers should be fine-tuned, rather than the whole MobileNet model.

  • The goal of fine-tuning is to adapt these specialized features to the new dataset, rather than overwrite the generic learning.

  • The base_model needs to be unfrozen, and its bottom layers have to be set to non-trainable.

  • The model is then recompiled with a lower learning rate and training is resumed, as sketched below.
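As a sketch of those last two steps, following the linked tutorial (the dataset objects train_dataset and validation_dataset, and the epoch counts, are assumed from earlier steps of that tutorial rather than defined here):

import tensorflow as tf

# Recompile after changing `trainable`, using a lower learning rate so the
# unfrozen layers are adjusted gently rather than overwritten
base_learning_rate = 0.0001
model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate / 10),
    metrics=['accuracy'])

# Resume training from where the feature-extraction phase stopped
history_fine = model.fit(train_dataset,
    epochs=20,            # total epochs (initial + fine-tuning)
    initial_epoch=10,     # pick up after the initial feature-extraction epochs
    validation_data=validation_dataset)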
