TensorFlow Serving in Action

This project installs TensorFlow Serving and uses it to serve predictions from a pretrained image classifier via a REST API, demonstrating how a trained model moves into a production-style deployment. The project is available at https://github.com/Yossranour1996/Deployment-in-TF-serving.

Key Steps:

Installation of TensorFlow Serving:

Install TensorFlow Serving (version 2.8.0) to serve the pretrained model.
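
On Debian/Ubuntu, the install step might look like the following sketch, using the APT package route from the TensorFlow Serving documentation (the pinned version comes from this project; run the commands as root or via sudo):

```shell
# Add the TensorFlow Serving APT repository.
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" \
  | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list

# Import the repository's signing key.
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg \
  | sudo apt-key add -

# Install the pinned version used in this project.
sudo apt-get update && sudo apt-get install tensorflow-model-server=2.8.0
```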

Loading a Pretrained Model:

Download a pretrained model for classifying dogs, cats, and birds.

Load the model into memory using TensorFlow.

Saving the Model:

Save the loaded model in the SavedModel format for TensorFlow Serving.

Explore the contents of the saved model: the assets/ and variables/ directories and the saved_model.pb protobuf file.

Data Preparation for Inference:

Download test images for making predictions.

Preprocess the test images using ImageDataGenerator.
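
The preprocessing step could be sketched as follows; the directory name, target size, and batch size are assumptions, not values from the repository:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator


def make_test_generator(data_dir, target_size=(150, 150), batch_size=32):
    """Build a generator that rescales pixel values to [0, 1].
    Assumes one subdirectory per class under data_dir; target_size
    must match the model's expected input size."""
    datagen = ImageDataGenerator(rescale=1.0 / 255)
    return datagen.flow_from_directory(
        data_dir,
        target_size=target_size,
        batch_size=batch_size,
        class_mode="categorical",
        shuffle=False,  # keep order so predictions align with labels
    )


# Example (path is an assumption):
# test_generator = make_test_generator("test_images")
# images, labels = next(test_generator)
```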

Serve the Model with TensorFlow Serving:

Start TensorFlow Serving with specified parameters, including the REST API port and model name.
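
The serve command could look like this sketch; the flags are from the `tensorflow_model_server` CLI, but the model name `animals` and the base path are assumptions, not names from the repository:

```shell
# Start the server; the model base path must contain a numeric
# version subdirectory (e.g. /models/animals/1/ holding
# saved_model.pb) for TensorFlow Serving to pick up the model.
tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=animals \
  --model_base_path=/models/animals
```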

Make REST requests to the server, passing batches of test images.

Retrieve predictions from the server's response.
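
A REST request against the predict endpoint might be sketched as below, using only the standard library plus NumPy; the model name `animals` and port 8501 are assumptions, while the URL pattern and JSON body follow the TensorFlow Serving REST API:

```python
import json
import urllib.request

import numpy as np


def build_predict_request(batch):
    """Serialize a batch of preprocessed images into the JSON body
    expected by the TensorFlow Serving REST predict API."""
    return json.dumps({
        "signature_name": "serving_default",
        "instances": np.asarray(batch).tolist(),
    })


def predict(batch, url="http://localhost:8501/v1/models/animals:predict"):
    """POST a batch to the server and return its predictions.
    The model name and port in the default URL are assumptions."""
    request = urllib.request.Request(
        url,
        data=build_predict_request(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["predictions"]
```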

Evaluate Model Predictions:

Visualize the true and predicted labels for the first 10 images in the batch.
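
Turning probability vectors back into labels for that comparison might look like this; the class order is an assumption based on ImageDataGenerator's alphabetical index assignment:

```python
import numpy as np

# Class order is an assumption: ImageDataGenerator assigns indices
# alphabetically, which here would give birds=0, cats=1, dogs=2.
CLASS_NAMES = ["birds", "cats", "dogs"]


def decode_predictions(predictions):
    """Map each probability (or one-hot) vector to its most likely
    class name."""
    return [CLASS_NAMES[i] for i in np.argmax(predictions, axis=1)]


# Example comparison for the first 10 images of a batch, assuming
# `labels` (one-hot) and `preds` (server output) are available:
# for true, pred in zip(decode_predictions(labels[:10]),
#                       decode_predictions(preds[:10])):
#     print(true, pred)
```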

Skills:

#TensorFlowServing #Docker #MLOps #Deployment