In the previous article, I introduced KServe as a scalable, cloud-native, open source model server. This tutorial walks you through all the steps required to install and configure KServe on a Google Kubernetes Engine (GKE) cluster powered by Nvidia T4 GPUs. We will then deploy a TensorFlow model to perform inference.

Step 1 – Launching a GKE Cluster with T4 GPU Nodes

Assuming you have access to Google Cloud Platform, run a command like the following to launch a three-node cluster with one Nvidia T4 GPU attached to each node. Replace the project, zone, and other values appropriately to reflect your environment.
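A minimal sketch of the launch command; the cluster name, zone, and machine type below are placeholder values, not prescriptions:

```bash
# Create a three-node GKE cluster with one Nvidia T4 GPU per node.
# Project, zone, cluster name, and machine type are placeholders.
gcloud container clusters create kserve-cluster \
  --project <your-project-id> \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-4 \
  --accelerator type=nvidia-tesla-t4,count=1
```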

Next, add a cluster-admin role binding for your GCP user so you can install cluster-wide components.
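Something along these lines binds your active gcloud account to the cluster-admin role; the binding name is arbitrary:

```bash
# Grant cluster-admin to the currently authenticated GCP account.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)
```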

Install the Nvidia device plugin and validate that the GPU is accessible.
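On GKE this is typically done by applying Google's driver installer DaemonSet; verify the URL against the current GKE documentation before applying:

```bash
# Install Nvidia drivers on the (COS-based) GPU nodes.
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml

# The GPU should now appear as an allocatable node resource.
kubectl describe nodes | grep -i "nvidia.com/gpu"
```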

Create a pod based on the Nvidia CUDA image to test GPU access.
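A throwaway pod like the following will do; the pod name and CUDA image tag are illustrative:

```bash
# Request one GPU so the scheduler places the pod on a T4 node.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0.3-base-ubuntu20.04
    command: ["sleep", "3600"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```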

Run the nvidia-smi command inside the pod to verify GPU access.
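Assuming the test pod above is named gpu-test:

```bash
# The output should list the Tesla T4 attached to the node.
kubectl exec -it gpu-test -- nvidia-smi
```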

With the infrastructure in place, let’s proceed with KServe installation.


Step 2 – Installing Istio

Istio is an essential prerequisite for KServe. Knative Serving relies on Istio ingress to expose KServe API endpoints. For version compatibility, check the documentation.

Download the Istio binary to your local workstation and run the CLI to perform the installation.
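The standard download script works here; the directory name depends on the version it fetches, so pick a release the KServe docs list as compatible:

```bash
# Download Istio and install it with the default profile.
curl -L https://istio.io/downloadIstio | sh -
cd istio-*/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=default -y
```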

Verify that all pods in the istio-system namespace are in the Running state.
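```bash
kubectl get pods -n istio-system
```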

Step 3 – Installing Knative Serving

Install Knative CRDs and core services.
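A sketch using the upstream release manifests; substitute a Knative version that matches your KServe release (v1.9.0 below is only an example):

```bash
# Apply the Knative Serving CRDs, then the core components.
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.0/serving-core.yaml
```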

To integrate Knative with the Istio ingress, run the commands below.
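Assuming the same Knative release as above:

```bash
# Install the net-istio controller that wires Knative to Istio ingress.
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.9.0/net-istio.yaml
```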

Finally, configure DNS for Knative to point to the sslip.io domain.
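```bash
# This one-shot job configures Knative's magic DNS via sslip.io.
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.0/serving-default-domain.yaml
```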

Make sure that Knative Serving is successfully running.
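```bash
kubectl get pods -n knative-serving
```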

Step 4 – Installing Certificate Manager

Install cert-manager with the following command:
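For example, using the upstream manifest (the version below is only an example; use the release the KServe docs recommend):

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
```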

Step 5 – Installing the KServe Model Server

We are now ready to install the KServe model server on the GKE Cluster.
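A sketch using the upstream manifest; v0.10.0 is a placeholder version:

```bash
kubectl apply -f https://github.com/kserve/kserve/releases/download/v0.10.0/kserve.yaml
```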

KServe also installs a couple of custom resource definitions. Check them out with the command below:
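```bash
# InferenceService is the resource we will create in the next step.
kubectl get crd | grep serving.kserve.io
```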

Step 6 – Configuring a Google Cloud Storage Bucket and Uploading a TensorFlow Model

KServe can pull models from a Google Cloud Storage (GCS) bucket and serve them for inference. Let’s create the bucket and upload the model.


For this scenario, we will use the model from one of my previous tutorials, which trained a CNN to classify dogs and cats. You can download the pre-trained TensorFlow model from here. Unzip the file and run the commands below to create the GCS bucket and upload the model artifacts.
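The bucket and directory names below are placeholders; pick a globally unique bucket name and point at the unzipped model directory:

```bash
# Create the bucket and copy the SavedModel artifacts into it.
gsutil mb gs://<your-model-bucket>
gsutil cp -r ./dogs-vs-cats gs://<your-model-bucket>/
```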

For simplicity, we enable public access to the bucket. In production, you may want to keep it private and add the service account key as a secret so KServe can access the private bucket.
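Public read access can be granted like this (for a demo only; avoid this with sensitive models):

```bash
gsutil iam ch allUsers:objectViewer gs://<your-model-bucket>
```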

Step 7 – Creating and Deploying the TensorFlow Inference Service

Let’s go ahead and create an inference service pointing to the model uploaded to the GCS bucket. Notice that we use a node selector to ensure that the service utilizes the GPU for acceleration.
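A sketch of the manifest, assuming the bucket layout from the previous step; the service name and storageUri are placeholders, and the node selector uses GKE’s accelerator label:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: dogs-vs-cats
spec:
  predictor:
    # Schedule the predictor on a node backed by a T4 GPU.
    nodeSelector:
      cloud.google.com/gke-accelerator: nvidia-tesla-t4
    model:
      modelFormat:
        name: tensorflow
      storageUri: gs://<your-model-bucket>/dogs-vs-cats
      resources:
        limits:
          nvidia.com/gpu: 1
EOF
```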

Wait for KServe to generate the endpoint for the inference service.
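```bash
# READY should turn True and the URL column should show the endpoint.
kubectl get inferenceservice dogs-vs-cats
```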

Step 8 – Performing Inference with KServe and TensorFlow

Install the required Python modules in a virtual environment:
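The exact module list depends on the client script; a typical image-classification client needs roughly the following:

```bash
python -m venv venv && source venv/bin/activate
pip install numpy pillow requests
```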

Execute the client code with sample images of dogs and cats to see the inference in action.
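If you want to sanity-check the endpoint without the Python client, a raw call against KServe’s v1 REST protocol looks roughly like this; the model name and payload file are assumptions:

```bash
# Resolve the external URL generated by KServe/Knative.
SERVICE_URL=$(kubectl get inferenceservice dogs-vs-cats \
  -o jsonpath='{.status.url}')

# POST a JSON body of the form {"instances": [...]} to the predict API.
curl -s -X POST "${SERVICE_URL}/v1/models/dogs-vs-cats:predict" \
  -H "Content-Type: application/json" \
  -d @./input.json
```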


This concludes the end-to-end KServe tutorial, which covered everything you need to start exploring this popular model server.

Feature Image by Rudy and Peter Skitterians from Pixabay.