Image Classifier — Zalando Clothing Store using Monk Library

Learn with Vidya · Oct 18, 2020 · 13 min read

Computer Vision


This tutorial is about image classification on the Zalando Clothing Store dataset using the Monk library. The dataset contains high-resolution, color, and diverse images of clothing and models.

Tutorial available on GitHub.

About Monk

  • With Monk, you can write less code and create end-to-end applications.
  • Learn only one syntax and create applications using any deep learning library — PyTorch, MXNet, Keras, TensorFlow, etc.
  • Manage your entire project easily with multiple experiments.

This makes it a great tool for competitions held on platforms like Kaggle, CodaLab, HackerEarth, AIcrowd, etc.
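As a quick illustration (a minimal sketch assuming the standard monk_v1 entry points, not code from this tutorial), the same prototype workflow runs on any backend; only the import changes:

# Minimal sketch (assumed monk_v1 entry points): one syntax, many backends.
from monk.keras_prototype import prototype      # Keras/TensorFlow backend
# from monk.pytorch_prototype import prototype  # PyTorch backend
# from monk.gluon_prototype import prototype    # MXNet/Gluon backend

ktf = prototype(verbose=1)
ktf.Prototype("My-Project", "experiment-1")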

Table of contents

  1. Install Monk
  2. Demo of Zalando Clothing Store Classifier
  3. Download Dataset
  4. Background Work
  5. Training from scratch: vgg16
  6. Summary of Hyperparameter Tuning Experiment
  7. Expert Mode
  8. Validation
  9. Inference
  10. Training from scratch: mobilenet_v2
  11. Comparing vgg16 and mobilenet_v2
  12. Conclusion

Install Monk

Step 1:

# Use this command if you're working on Colab.
!pip install -U monk-colab

For other ways to install, visit Monk Library.

Step 2: Add to system path (Required for every terminal or kernel run)

import sys
sys.path.append("monk_v1/")

Demo of Zalando Clothing Store Classifier

This section gives a quick demo of the classifier before we get into further details.

Let’s first download the weights.

! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1SO7GUcZlo8jRtLnGa6cBn2MGESyb1mUm' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1SO7GUcZlo8jRtLnGa6cBn2MGESyb1mUm" -O cls_zalando_trained.zip && rm -rf /tmp/cookies.txt

Unzip the folder.

! unzip -qq cls_zalando_trained.zip
! ls workspace/

Output: comparison/ Project-Zalando/ test/

There are three folders for you to explore:

  1. comparison — To check all the comparisons between different models and hyperparameters.
  2. Project-Zalando — Final weights and logs.
  3. test — Images to test the model.

! ls workspace/Project-Zalando/

Output: expert_mode_vgg16/

expert_mode_vgg16 is our final model.

# Using Keras backend
from monk.keras_prototype import prototype

Infer

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_vgg16", eval_infer=True)

Give the image’s location.

img_name = "/content/workspace/test/0DB22O007-A11@8.jpg"
predictions = ktf.Infer(img_name=img_name)</span><span id="bd83" class="ew iq ir dp ks b kt lb lc ld le lf kv s kw">_#Display_ 
**from** **IPython.display** **import** Image
Image(filename=img_name)</span>

img_name = "/content/workspace/test/1FI21J00A-A11@10.jpg"
predictions = ktf.Infer(img_name=img_name)</span><span id="a745" class="ew iq ir dp ks b kt lb lc ld le lf kv s kw">_#Display_ 
**from** **IPython.display** **import** Image
Image(filename=img_name)</span>


Download Dataset

Let’s download the dataset using Kaggle’s API commands. Before doing that, follow the steps below:

Go to your Kaggle Profile > My Account > Scroll down to find API section > Click on Expire API Token > Now, click on Create new API Token > Save kaggle.json on your local system.

# Upload your kaggle.json file here.
from google.colab import files
files.upload()

! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json

Time to download your dataset. Go to the dataset you want on Kaggle and copy the API command that Kaggle provides. It should look like the one below.
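The exact command isn't preserved in this post; for this dataset it should presumably be the following (the slug is inferred from the zip filename below, so treat it as an assumption):

# Assumed Kaggle API command; the dataset slug is inferred from the
# zip filename below and may differ.
! kaggle datasets download -d dqmonn/zalando-store-crawl

Then unzip the archive: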

! unzip -qq zalando-store-crawl.zip -d zalando

After unzipping the file, store the dataset in your own Google Drive for future experiments.
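If your Drive isn't mounted in Colab yet, mount it first using Colab's standard Drive API (the mount point below is the usual default):

# Mount Google Drive in Colab so files can be copied to it.
from google.colab import drive
drive.mount('/content/drive')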

%cp -av /content/zalando/zalando zalando

Background Work

1. Question: Which backend should I select to train my classifier?

Answer: Follow the Monk tutorial notebook "Feature - Compare experiments - compare experiments across backends.ipynb" to compare experiments across backends.

2. Question: Which model should I select to train my classifier after selecting the backend?

Answer: In the current Experiment, I’ve selected the Keras backend to train my classifier. So, use the following code to list all the models available under Keras.

# Using Keras backend
from keras_prototype import prototype

ktf = prototype(verbose=1)
ktf.List_Models()

Output:

Models List:
    1. mobilenet
    2. densenet121
    3. densenet169
    4. densenet201
    5. inception_v3
    6. inception_resnet_v3
    7. mobilenet_v2
    8. nasnet_mobile
    9. nasnet_large
    10. resnet50
    11. resnet101
    12. resnet152
    13. resnet50_v2
    14. resnet101_v2
    15. resnet152_v2
    16. vgg16
    17. vgg19
    18. xception

Now, you can select any 3–5 models to start your experiments. Follow the Monk tutorial notebook "Feature - Compare experiments - compare experiments within same backend.ipynb" to compare experiments within the same backend.

Import Monk

from monk.keras_prototype import prototype

You can create multiple experiments under one project. Here, my project is named Project-Zalando, and my first Experiment is named vgg16_exp1.

ktf.Prototype("Project-Zalando", "vgg16_exp1")</span>

Output:

Keras Version: 2.3.1
Tensorflow Version: 2.2.0

Experiment Details
    Project: Project-Zalando
    Experiment: vgg16_exp1
    Dir: /content/drive/My Drive/Monk_v1/workspace/Project-Zalando/vgg16_exp1/

You can use the following code to train your model with default parameters. However, our goal is to increase accuracy, so we will jump to tuning our hyperparameters instead. Use this link to learn how to do hyperparameter tuning for your classifier.
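The quick-mode cell itself isn't preserved in this post. With Monk's Default() API, a default-parameter run would look roughly like this; the dataset path is assumed from the validation step later in the post:

# Read the summary generated once you run this cell.
# Sketch of Monk's quick mode; dataset path assumed from later sections.
ktf.Default(dataset_path="/content/drive/My Drive/Data/zalando", model_name="vgg16", num_epochs=5)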

Output: the experiment summary (dataset, model, and training settings) is printed here.

ktf.Train()

a. Analyze Learning Rates

# Analysis project name
analysis_name = "analyse_learning_rates_vgg16"

# Learning rates to explore
lrs = [0.1, 0.05, 0.01, 0.005, 0.0001]

# Number of epochs for each sub-experiment to run
epochs = 5

# Percentage of the original dataset to use for experimentation
# We're taking 5% of our original dataset.
percent_data = 5

# Make sure all the available processors are being used
ktf.update_num_processors(2)

# Very important to reload after updating
ktf.Reload()

# "keep_all"  - Keep all the sub-experiments created
# "keep_none" - Delete all sub-experiments created
analysis = ktf.Analyse_Learning_Rates(analysis_name, lrs, percent_data, num_epochs=epochs, state="keep_none")

Output:

(A comparison table of the learning-rate sub-experiments is displayed here.)

Result

From the above table, it is clear that Learning_Rate_0.0001 has the least validation loss. We will update our learning rate with this.

ktf.update_learning_rate(0.0001)

# Very important to reload after updates
ktf.Reload()

b. Analyze Batch Sizes

# Analysis project name
analysis_name = "analyse_batch_sizes_vgg16"

# Batch sizes to explore
batch_sizes = [2, 4, 8, 12]

# Note: We're using the same percent_data and num_epochs.
# "keep_all"  - Keep all the sub-experiments created
# "keep_none" - Delete all sub-experiments created
analysis_batches = ktf.Analyse_Batch_Sizes(analysis_name, batch_sizes, percent_data, num_epochs=epochs, state="keep_none")

(A comparison table of the batch-size sub-experiments is displayed here.)

Result

From the above table, it is clear that Batch_Size_12 has the least validation loss. We will update the model with this.

# Update the batch size (assumed call, mirroring update_learning_rate above)
ktf.update_batch_size(12)

# Very important to reload after updates
ktf.Reload()

c. Analyze Optimizers

# Analysis project name
analysis_name = "analyse_optimizers_vgg16"

# Optimizers to explore
optimizers = ["sgd", "adam", "adagrad"]

# "keep_all"  - Keep all the sub-experiments created
# "keep_none" - Delete all sub-experiments created
analysis_optimizers = ktf.Analyse_Optimizers(analysis_name, optimizers, percent_data, num_epochs=epochs, state="keep_none")

(A comparison table of the optimizer sub-experiments is displayed here.)

Result

From the above table, it is clear that we should go for Optimizer_adagrad since it has the least validation loss.
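As with the learning rate and batch size, the winning optimizer can be applied right away. This mirrors the call used later in the Expert Mode section:

# Apply the winning optimizer with the tuned learning rate
# (same call as in the Expert Mode section below).
ktf.optimizer_adagrad(0.0001)

# Very important to reload after updates
ktf.Reload()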

Summary of Hyperparameter Tuning Experiment

Here ends our Experiment, and now it’s time to switch on the Expert Mode to train our classifier using the above Hyperparameters.

Summary:

  • Learning Rate — 0.0001
  • Batch size — 12
  • Optimizer — adagrad

Training from scratch: vgg16

Expert Mode

Let’s create another Experiment named expert_mode_vgg16 and train our classifier from scratch.

# Experiment setup (batch size 12 comes from the tuning summary above)
ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_vgg16")
ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando", split=0.8, input_size=224, batch_size=12, shuffle_data=True, num_processors=2)

# Load the dataset
ktf.Dataset()

ktf.Model_Params(model_name="vgg16", freeze_base_network=True, use_gpu=True, use_pretrained=True)
ktf.Model()

ktf.Training_Params(num_epochs=5, display_progress=True, display_progress_realtime=True, save_intermediate_models=True, intermediate_model_prefix="intermediate_model_", save_training_logs=True)

# Update optimizer and learning rate
ktf.optimizer_adagrad(0.0001)
ktf.loss_crossentropy()

# Training
ktf.Train()

Output:

(Training progress and epoch-wise metrics are displayed here.)

Validation

# Just for example purposes, validating on the training set itself
ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando")
ktf.Dataset()

accuracy, class_based_accuracy = ktf.Evaluate()

(Overall accuracy and class-based accuracy are displayed here.)

Inference

Let’s see the Prediction on sample images.

ktf.Prototype("Project-Zalando", "expert_mode_vgg16", eval_infer=True)

The model is now loaded.

img_name = "/content/1FI21J00A-A11@10.jpg"
predictions = ktf.Infer(img_name=img_name)</span><span id="0ad4" class="ew iq ir dp ks b kt lb lc ld le lf kv s kw">_#Display_ 
**from** **IPython.display** **import** Image
Image(filename=img_name)</span>


We’ve successfully completed training our classifier. Check the logs and models folder under this Experiment to see the model weights and other insights.
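For instance, the saved models can be listed like this (the path assumes Monk's usual workspace layout and may differ on your setup):

# Assumed Monk workspace layout; adjust if yours differs.
! ls workspace/Project-Zalando/expert_mode_vgg16/output/models/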

Training from scratch: mobilenet_v2

I trained mobilenet_v2 from scratch in the same way, so that we can compare vgg16 and mobilenet_v2.

After the Experiment, the best hyperparameters for mobilenet_v2 are:

  • Learning Rate — 0.0001
  • Batch Size — 8
  • Optimizer — adam

Expert Mode

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_mobilenet_v2")

ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando", split=0.8, input_size=224, batch_size=8, shuffle_data=True, num_processors=2)
ktf.Dataset()

ktf.Model_Params(model_name="mobilenet_v2", freeze_base_network=True, use_gpu=True, use_pretrained=True)
ktf.Model()

ktf.Training_Params(num_epochs=5, display_progress=True, display_progress_realtime=True, save_intermediate_models=True, intermediate_model_prefix="intermediate_model_", save_training_logs=True)

ktf.optimizer_adam(0.0001)
ktf.loss_crossentropy()

ktf.Train()

(Training progress and epoch-wise metrics are displayed here.)

Validation

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_mobilenet_v2", eval_infer=True)

# Just for example purposes, validating on the training set itself
ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando")
ktf.Dataset()

accuracy, class_based_accuracy = ktf.Evaluate()

(Overall accuracy and class-based accuracy are displayed here.)

Inference

ktf.Prototype("Project-Zalando", "expert_mode_mobilenet_v2", eval_infer=True)

The model is now loaded.

img_name = "/content/0DB22O007-A11@8.jpg"
predictions = ktf.Infer(img_name=img_name)</span><span id="0ec9" class="ew iq ir dp ks b kt lb lc ld le lf kv s kw">#Display
from IPython.display import Image
Image(filename=img_name)</span>

img_name = "/content/TOB22O01W-A11@8.jpg"
predictions = ktf.Infer(img_name=img_name)</span><span id="3cc7" class="ew iq ir dp ks b kt lb lc ld le lf kv s kw">#Display
from IPython.display import Image
Image(filename=img_name)</span>


Comparing vgg16 and mobilenet_v2

I’ve used the same tutorial notebook mentioned previously, "Feature - Compare experiments - compare experiments across backends.ipynb", to compare these two experiments.

# Invoke the comparison class
# import monk_v1
from compare_prototype import compare

# Create a project
ctf = compare(verbose=1)
ctf.Comparison("vgg-mobilenet-Comparison")

ctf.Add_Experiment("Project-Zalando", "expert_mode_vgg16")
ctf.Add_Experiment("Project-Zalando", "expert_mode_mobilenet_v2")

ctf.Generate_Statistics()

After the statistics are generated, we can display the comparison plots:

Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/train_accuracy.png")
Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/train_loss.png")
Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/val_accuracy.png")
Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/val_loss.png")
Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/stats_training_time.png")
Image(filename="/content/drive/My Drive/Monk_v1/workspace/comparison/vgg-mobilenet-Comparison/stats_best_val_acc.png")

(The train/validation accuracy and loss curves, training-time comparison, and best-validation-accuracy plots are displayed here.)

Conclusion

From the above comparisons, it is clear that vgg16 performed better in every respect.

There is a lot of room for improving the model’s accuracy by further tuning the hyperparameters. Please refer to Image Classification Zoo for more tutorials.
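For example, you could reuse the Analyse_Learning_Rates API from earlier with a finer grid around the winning value (the grid below is a suggestion, not from the original experiments):

# A finer sweep around the winning learning rate (suggested values).
analysis_name = "analyse_learning_rates_vgg16_fine"
lrs = [0.0005, 0.0002, 0.0001, 0.00005]
analysis = ktf.Analyse_Learning_Rates(analysis_name, lrs, 5, num_epochs=5, state="keep_none")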

Tutorial available on GitHub. Please Clap or Share this article if it helped you learn something!