Can a Keras model be treated as just a layer and invoked using Python? If yes, demonstrate it


TensorFlow is an open-source machine learning framework provided by Google. It is used in conjunction with Python to implement algorithms, deep learning applications and much more, and it is employed both in research and in production.

Keras was developed as part of the research effort for the project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System). It is a high-level deep learning API written in Python that runs on top of the TensorFlow framework. It offers a productive interface for solving machine learning problems, was built to enable quick experimentation, and provides the essential abstractions and building blocks needed to develop and encapsulate machine learning solutions.

Keras is highly scalable and comes with cross-platform abilities: it can run on a TPU or on clusters of GPUs, and Keras models can also be exported to run in a web browser or on a mobile phone.

Keras is already present within the TensorFlow package. It can be accessed using the lines of code below.

import tensorflow
from tensorflow import keras
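As a quick check (assuming a TensorFlow 2.x installation), the bundled Keras can be confirmed by printing the package versions −

import tensorflow
from tensorflow import keras

print(tensorflow.__version__)   # TensorFlow version, e.g. 2.4.1
print(keras.__version__)        # version of the bundled tf.keras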

Yes, a Keras model can be treated as just a layer and invoked using Python. The Keras functional API helps create models that are more flexible than those created with the sequential API: it can handle models with non-linear topology, shared layers, and multiple inputs and outputs. A deep learning model is usually a directed acyclic graph (DAG) of layers, and the functional API builds exactly this graph of layers.
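As a minimal sketch of the idea (the layer sizes and names below are arbitrary and chosen only for illustration), a small functional model can itself be called on a Keras tensor exactly like a layer −

from tensorflow import keras
from tensorflow.keras import layers

# Build a tiny functional model: Input -> Dense
inputs = keras.Input(shape=(8,))
outputs = layers.Dense(4, activation="relu")(inputs)
small_model = keras.Model(inputs, outputs, name="small_model")

# Treat the model as a layer by calling it on a new Input tensor
new_inputs = keras.Input(shape=(8,))
features = small_model(new_inputs)      # the model is invoked like a layer
predictions = layers.Dense(1)(features)
bigger_model = keras.Model(new_inputs, predictions, name="bigger_model")
bigger_model.summary()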

We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, runs Python code in the browser, requires zero configuration, and gives free access to GPUs (Graphical Processing Units). It is built on top of Jupyter Notebook. Following is the code snippet that treats a Keras model as a layer and invokes it using Python −

Example

from tensorflow import keras
from tensorflow.keras import layers

encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
print("Adding layers to the model")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
print("Performing golbal max pooling")
encoder_output = layers.GlobalMaxPooling2D()(x)

print("Creating a model using the layers")
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
print("More information about the model")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
print("Reshaping the layers in the model")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
print("Creating a model using the layers")
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
print("More information about the model")
decoder.summary()

autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
print("More information about the model")
autoencoder.summary()

Code credit − https://www.tensorflow.org/guide/keras/functional

Output

Adding layers to the model
Performing global max pooling
Creating a model using the layers
More information about the model
Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
original_img (InputLayer)    [(None, 28, 28, 1)]       0
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 26, 26, 16)        160
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 24, 24, 32)        4640
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 8, 8, 32)          0
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 6, 6, 32)          9248
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 4, 4, 16)          4624
_________________________________________________________________
global_max_pooling2d_3 (Glob (None, 16)                0
=================================================================
Total params: 18,672
Trainable params: 18,672
Non-trainable params: 0
_________________________________________________________________
Reshaping the layers in the model
Creating a model using the layers
More information about the model
Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
encoded_img (InputLayer)     [(None, 16)]              0
_________________________________________________________________
reshape_1 (Reshape)          (None, 4, 4, 1)           0
_________________________________________________________________
conv2d_transpose_4 (Conv2DTr (None, 6, 6, 16)          160
_________________________________________________________________
conv2d_transpose_5 (Conv2DTr (None, 8, 8, 32)          4640
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 24, 24, 32)        0
_________________________________________________________________
conv2d_transpose_6 (Conv2DTr (None, 26, 26, 16)        4624
_________________________________________________________________
conv2d_transpose_7 (Conv2DTr (None, 28, 28, 1)         145
=================================================================
Total params: 9,569
Trainable params: 9,569
Non-trainable params: 0
_________________________________________________________________
More information about the model
Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
img (InputLayer)             [(None, 28, 28, 1)]       0
_________________________________________________________________
encoder (Functional)         (None, 16)                18672
_________________________________________________________________
decoder (Functional)         (None, 28, 28, 1)         9569
=================================================================
Total params: 28,241
Trainable params: 28,241
Non-trainable params: 0
_________________________________________________________________

Explanation

  • Any model can be treated as a layer by invoking it on an Input or on the output of another layer.

  • When the model is called this way, its architecture is reused.

  • In addition, its weights are reused as well.

  • The autoencoder model is created from an encoder model and a decoder model.

  • These two models are chained together in two calls to obtain the autoencoder model, as the short sketch below illustrates.
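As a minimal sketch of invoking the composed model (the random batch below is purely illustrative and not part of the original example), the chained autoencoder can be called directly on data, and its trainable weights are the same objects held by the encoder and decoder −

import numpy as np

# A dummy batch of four 28x28 grayscale images (illustrative only)
images = np.random.rand(4, 28, 28, 1).astype("float32")

# Invoke the chained model just like any callable layer
reconstructions = autoencoder(images)
print(reconstructions.shape)   # (4, 28, 28, 1)

# The autoencoder reuses the weights of both sub-models
print(len(autoencoder.trainable_weights) ==
      len(encoder.trainable_weights) + len(decoder.trainable_weights))   # True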
