How can Keras be used in the training, evaluation and inference of the model?
TensorFlow is an open-source machine learning framework provided by Google. It works with Python to implement algorithms, deep learning applications, and complex mathematical operations efficiently for both research and production purposes.
The tensorflow package can be installed on Windows using the following command:
pip install tensorflow
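Once installed, a quick sanity check is to print the installed TensorFlow version (this assumes Python is on the PATH):

```shell
# Print the installed TensorFlow version to confirm the install worked
python -c "import tensorflow as tf; print(tf.__version__)"
```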
Keras is a high-level deep learning API written in Python, originally developed for the ONEIROS project. It provides a productive interface for solving machine learning problems and is highly scalable with cross-platform capabilities, running on TPUs, GPU clusters, web browsers, and mobile devices.
Keras comes built-in with TensorFlow and can be accessed as follows:
import tensorflow as tf
from tensorflow import keras
Model Training with Keras
Here's a complete example demonstrating how to use Keras for training, evaluation, and inference using the MNIST dataset:
import tensorflow as tf
from tensorflow import keras

print("Load the MNIST data")
print("Split data into training and test data")
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

print("Reshape the data for better training")
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255

# Create the model
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])

print("Compile the model")
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.RMSprop(),
    metrics=["accuracy"],
)

print("Fit the data to the model")
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)

# Evaluate the model
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("The loss associated with model:", test_scores[0])
print("The accuracy of the model:", test_scores[1])
Load the MNIST data
Split data into training and test data
Reshape the data for better training
Compile the model
Fit the data to the model
Epoch 1/2
750/750 [==============================] - 3s 3ms/step - loss: 0.5768 - accuracy: 0.8394 - val_loss: 0.2015 - val_accuracy: 0.9405
Epoch 2/2
750/750 [==============================] - 2s 3ms/step - loss: 0.1720 - accuracy: 0.9495 - val_loss: 0.1462 - val_accuracy: 0.9580
313/313 - 0s - loss: 0.1433 - accuracy: 0.9584
The loss associated with model: 0.14328785240650177
The accuracy of the model: 0.9584000110626221
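A trained model can also be persisted to disk and reloaded later. Below is a minimal sketch using Keras's model.save() and keras.models.load_model(); the tiny random-data model and the filename model.keras are our own choices for illustration (the .keras format is available in recent TensorFlow versions):

```python
import numpy as np
from tensorflow import keras

# Build and train a tiny stand-in model on random data (illustration only)
x = np.random.rand(32, 784).astype("float32")
y = np.random.randint(0, 10, size=(32,))

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(10)
])
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="rmsprop",
)
model.fit(x, y, epochs=1, verbose=0)

# Save the whole model (architecture + weights + optimizer state)
model.save("model.keras")

# Reload it and confirm the restored model produces the same predictions
restored = keras.models.load_model("model.keras")
print(np.allclose(model.predict(x, verbose=0),
                  restored.predict(x, verbose=0), atol=1e-5))
```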
Model Inference
After training, you can use the model for making predictions on new data:
# Make predictions on test data
predictions = model.predict(x_test[:5])
print("Raw predictions shape:", predictions.shape)
print("First prediction:", predictions[0])
# Convert logits to probabilities
probabilities = tf.nn.softmax(predictions)
predicted_classes = tf.argmax(probabilities, axis=1)
print("Predicted classes:", predicted_classes.numpy())
print("Actual classes:", y_test[:5])
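The logits-to-probabilities step above can also be computed by hand, which makes it clear what tf.nn.softmax is doing. A small NumPy sketch (the helper name softmax is ours):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
print(probs.sum())               # each row sums to 1
print(np.argmax(probs, axis=1))  # the predicted class: [0]
```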
Key Steps in Keras Workflow
| Step | Purpose | Keras Method |
|---|---|---|
| Data Loading | Load dataset | keras.datasets.mnist.load_data() |
| Data Preprocessing | Normalize and reshape | .reshape(), .astype() |
| Model Creation | Define architecture | keras.Sequential() |
| Compilation | Set optimizer, loss, metrics | model.compile() |
| Training | Fit model to data | model.fit() |
| Evaluation | Test performance | model.evaluate() |
| Inference | Make predictions | model.predict() |
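The steps in the table can be exercised end to end on synthetic data. The following is a minimal sketch; the random arrays stand in for a real dataset, so the reported accuracy is meaningless and only the workflow itself matters:

```python
import numpy as np
from tensorflow import keras

# Data loading / preprocessing: random stand-in data, already float32 in [0, 1]
x_train = np.random.rand(64, 20).astype("float32")
y_train = np.random.randint(0, 3, size=(64,))

# Model creation
model = keras.Sequential([
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(3)
])

# Compilation
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="rmsprop",
    metrics=["accuracy"],
)

# Training
model.fit(x_train, y_train, epochs=2, verbose=0)

# Evaluation
loss, acc = model.evaluate(x_train, y_train, verbose=0)

# Inference: one logit per class for each input row
preds = model.predict(x_train[:4], verbose=0)
print(preds.shape)  # (4, 3)
```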
Conclusion
Keras provides a streamlined workflow for machine learning: load data, preprocess, create model, compile, train with fit(), evaluate with evaluate(), and make predictions with predict(). This high-level API makes deep learning accessible while maintaining flexibility for complex architectures.
