How can TensorFlow be used to verify the predictions for Fashion MNIST in Python?
TensorFlow is an open-source machine learning framework provided by Google. It is used in conjunction with Python to implement machine learning algorithms, deep learning applications, and much more, in both research and production.
The Fashion MNIST dataset contains 70,000 low-resolution (28×28 pixel) grayscale images of clothing items across 10 categories. After training a model on this dataset, we need to verify how well its predictions match the actual labels.
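Before training, the example below scales the raw pixel values (0–255) into the [0, 1] range. A minimal standalone sketch of that normalization step, using a synthetic image in place of a real Fashion MNIST sample (so no download is needed):

```python
import numpy as np

# Synthetic stand-in for a single 28x28 grayscale image with uint8 pixel
# values in 0-255, mirroring the shape of one Fashion MNIST sample.
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Dividing by 255.0 scales every pixel into [0, 1], the range the model expects.
normalized = image / 255.0

print(normalized.shape)                                     # (28, 28)
print(normalized.min() >= 0.0 and normalized.max() <= 1.0)  # True
```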
Installing TensorFlow
The 'tensorflow' package can be installed on Windows using the following command −
pip install tensorflow
Complete Example: Verifying Fashion MNIST Predictions
Here's a complete example showing how to train a model and verify predictions −
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Load Fashion MNIST dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Class names for Fashion MNIST
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# Normalize pixel values
train_images = train_images / 255.0
test_images = test_images / 255.0
# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Train the model (simplified for demo)
model.fit(train_images[:1000], train_labels[:1000], epochs=5, verbose=0)
# Make predictions
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images[:15])
print("Model training completed. Making predictions...")
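Because a `Softmax` layer is appended to the trained model, each row of `predictions` is a probability distribution over the 10 classes. A small standalone sketch (with hypothetical logit values) shows why the entries sum to 1 and why `argmax` picks out the predicted class:

```python
import numpy as np

# Hypothetical raw logits for one image, as the final Dense(10) layer might emit.
logits = np.array([1.2, -0.3, 0.5, 2.1, 0.0, -1.0, 0.8, 3.4, -0.5, 0.1])

# Softmax maps logits to probabilities; subtracting the max keeps it numerically stable.
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

print(round(float(probs.sum()), 6))  # 1.0
print(int(np.argmax(probs)))         # 7 -- same index as the largest logit
```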
Visualization Functions
These helper functions display images with their predicted and true labels −
def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]),
               color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
Verifying Single Predictions
Let's examine individual predictions to see how well our model performs −
# Verify prediction for first image
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()

# Verify prediction for 13th image
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
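The same check can also be done without plotting: compare `np.argmax` of a prediction row against the true label. A minimal sketch using a hypothetical prediction row in place of real model output:

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Hypothetical probabilities for one test image, plus its true label.
prediction = np.array([0.02, 0.01, 0.03, 0.04, 0.02,
                       0.05, 0.03, 0.02, 0.03, 0.75])
true_label = 9  # 'Ankle boot'

predicted_label = int(np.argmax(prediction))
verdict = "correct" if predicted_label == true_label else "incorrect"
print("Predicted {} ({:.0f}%), actual {} -> {}".format(
    class_names[predicted_label], 100 * prediction[predicted_label],
    class_names[true_label], verdict))
# Predicted Ankle boot (75%), actual Ankle boot -> correct
```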
Verifying Multiple Predictions
To get a broader view of model performance, we can visualize multiple predictions at once −
num_rows = 5
num_cols = 3
num_images = num_rows * num_cols
print("The test images, predicted labels and the true labels are plotted")
print("The correct predictions are in blue and the incorrect predictions are in red")
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
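Visual inspection scales poorly beyond a handful of images, so the fraction of correct predictions can also be computed directly by taking `argmax` along each row. A sketch using a small synthetic batch (hypothetical probabilities and labels, not real model output):

```python
import numpy as np

# Four hypothetical prediction rows (each summing to 1) and their true labels.
predictions = np.array([
    [0.80, 0.05, 0.05, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01],  # argmax -> 0
    [0.01, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.01, 0.01],  # argmax -> 1
    [0.05, 0.05, 0.10, 0.60, 0.05, 0.05, 0.02, 0.02, 0.03, 0.03],  # argmax -> 3
    [0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.82],  # argmax -> 9
])
true_labels = np.array([0, 1, 2, 9])  # the third image is misclassified

predicted = predictions.argmax(axis=1)
accuracy = float((predicted == true_labels).mean())
print(accuracy)  # 0.75
```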
Understanding the Visualization
The prediction verification shows several key elements −
- Image Display − The original grayscale clothing image from the test set
- Prediction Confidence − The percentage indicates how confident the model is about its prediction
- Color Coding − Blue labels indicate correct predictions, red labels show incorrect ones
- Probability Bars − Show the confidence distribution across all 10 clothing categories
Conclusion
Verifying TensorFlow predictions on Fashion MNIST helps evaluate model performance through visual inspection. The color-coded system makes it easy to identify correct (blue) and incorrect (red) predictions at a glance.
