Take Amazon SageMaker Studio Lab for a Spin

Introduced as a preview at the Amazon Web Services' re:Invent 2021 conference, SageMaker Studio Lab is a free, stand-alone machine learning development environment based on the popular JupyterLab IDE. Except for the branding, the service has almost nothing to do with SageMaker. For a detailed overview of the service, read my previous article.

In this tutorial, I will walk you through the steps of training an end-to-end deep learning model for image classification in Amazon SageMaker Studio Lab. We will build a model that distinguishes between cats and dogs. (Be sure to check back all this week for additional SageMaker Studio Lab tutorials.)

Step 1: Request Access and Sign In

Visit https://studiolab.sagemaker.aws/ to request a free Amazon SageMaker Studio Lab account.


It may take anywhere from a few hours to a couple of days to get access to the environment. Wait for the email confirmation.


Once approved, sign in to your account with your credentials.


Select the GPU compute type and click the Start runtime button.


When the runtime is ready, click on Open project.


The JupyterLab environment is ready for experimentation.



Step 2: Prepare the Environment

From the launcher, click on the terminal icon to start a new terminal session. Clone the Git repository that has the Conda environment configuration and the notebooks.

git clone https://github.com/janakiramm/dogs-vs-cats

Navigate to the dogs-vs-cats folder and right-click on the env_tf2.yaml file to create a new Conda environment. This file lists all the modules needed to train a TensorFlow/Keras model.


Refresh the browser to see a new kernel named tf2:Python.


Before we can start training the model, we need to download the dataset. For this, log in to Kaggle and download the file train.zip from the Dogs vs. Cats competition.


Upload the file train.zip into the dataset folder of the repo that we cloned in the previous step. Launch a terminal session and unzip the file in the same folder. You should now have a new folder: dogs-vs-cats/dataset/train/.
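If you prefer to do this from a notebook cell instead of the terminal, a minimal sketch using Python's standard zipfile module would look like the following (the paths are assumptions based on the repository layout described above):

import zipfile

# Extract train.zip into the dataset folder of the cloned repository.
# The path assumes the repo was cloned into the home directory; adjust it if needed.
with zipfile.ZipFile("dogs-vs-cats/dataset/train.zip") as archive:
    archive.extractall("dogs-vs-cats/dataset/")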

We now have the environment fully configured to kick off the training job within Amazon SageMaker Studio Lab.

Step 3: Train the Computer Vision Model to Classify Images

Navigate to the train folder of the repository and launch the dogs-vs-cats.ipynb notebook.


If prompted for the kernel, choose tf2:Python.


This notebook loads the dataset we downloaded and trains the image classification model. Run all the cells to complete the training, which may take up to 15 minutes.

In my experiment, the model reached an accuracy of 87.5%, which may be improved by increasing the number of epochs.
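The notebook contains the complete training code. As a rough, minimal sketch of what a Keras setup for this binary classification task could look like (this is not the notebook's exact code; the 128x128 input size, batch size, epoch count, and one-subfolder-per-class directory layout are assumptions, and the Rescaling layer requires TensorFlow 2.6 or later):

import tensorflow as tf

# Load images from the extracted dataset; this helper expects one subfolder per class,
# which is an assumption about how the notebook organizes the cat and dog images.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "dogs-vs-cats/dataset/train",
    image_size=(128, 128),
    batch_size=32,
    label_mode="binary",
)

# A small convolutional network with a sigmoid output for the two classes.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)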


When the model is ready, it is exported to the model/export/Servo/1 directory in the TensorFlow Serving format.
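Saving a Keras model as a TensorFlow SavedModel, which is what TensorFlow Serving consumes, is a single call. Assuming the directory layout mentioned above (the exact relative path may differ depending on the notebook's working directory), it would look roughly like this:

# Export the trained model in SavedModel format.
# TensorFlow Serving expects a numbered version directory, hence the trailing /1.
model.save("model/export/Servo/1")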


Step 4: Perform Inference on the Trained Model

Navigate to the infer folder to open the inference notebook. We load the saved model from model/export/Servo/1/ and use it for inference.

model = tensorflow.keras.models.load_model("../model/export/Servo/1/")

When an image is appropriately resized and preprocessed, it can be sent to the model for prediction. Below are screenshots of the model predicting the correct classes.
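As a minimal sketch of that resize-and-preprocess step (the file name and the 128x128 input size are assumptions; use whatever input size the training notebook actually used):

import numpy as np
import tensorflow

# Load a test image and resize it to the input size the model was trained on.
# "sample_dog.jpg" is a placeholder file name.
image = tensorflow.keras.preprocessing.image.load_img("sample_dog.jpg", target_size=(128, 128))
batch = np.expand_dims(tensorflow.keras.preprocessing.image.img_to_array(image), axis=0)

# If pixel scaling was not handled inside the model during training, divide batch by 255 here.
prediction = model.predict(batch)
print(prediction)  # A sigmoid output near 1.0 indicates one class, near 0.0 the other.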



You can easily upload the model to Amazon S3 using the Python Boto3 module to deploy it in Amazon SageMaker.
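A minimal sketch of that upload with Boto3 might look like the following; the bucket name and object key are placeholders, and SageMaker typically expects the model packaged as a model.tar.gz archive:

import tarfile
import boto3

# Package the SavedModel directory as the model.tar.gz archive SageMaker expects.
with tarfile.open("model.tar.gz", "w:gz") as archive:
    archive.add("model/export/Servo/1", arcname="1")

# Upload the archive to S3; the bucket and key names are placeholders.
s3 = boto3.client("s3")
s3.upload_file("model.tar.gz", "my-example-bucket", "dogs-vs-cats/model.tar.gz")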

In the next part of this series — which will run all this week — we will utilize the image classification model to create a serverless inference endpoint in Amazon SageMaker. Stay tuned.


