

IceVision meets W&B

IceVision + W&B = Agnostic Object Detection Framework with Outstanding Experiments Tracking

For more information, check the following report

IceVision fully supports W&B by providing a one-liner API that enables users to track their trained models and display both the predicted and ground truth bounding boxes.

W&B makes visualizing and tracking the performance of different models a highly enjoyable task. Indeed, we are able to monitor the performance of several EfficientDet backbones by changing a few lines of code, obtaining intuitive, easy-to-interpret figures that highlight both the similarities and differences between the backbones.
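Comparing backbones boils down to launching one W&B run per backbone with a consistent naming and config scheme. As a minimal sketch (all names here, such as `run_config`, are hypothetical; only the backbone identifiers and the `icevision-fridge` project name come from this tutorial), the bookkeeping might look like:

```python
# Hypothetical sketch: one W&B run per EfficientDet backbone.
backbones = ["tf_efficientdet_d0", "tf_efficientdet_d1", "tf_efficientdet_d2"]

def run_config(backbone, img_size=384, lr=1e-2):
    """Build the run name and config dict that would be logged to W&B."""
    return {
        "run_name": backbone.replace("tf_", "") + "-1",
        "config": {"backbone": backbone, "img_size": img_size, "lr": lr},
    }

configs = [run_config(b) for b in backbones]
# Each entry would then be passed to wandb.init(project="icevision-fridge",
# name=cfg["run_name"], config=cfg["config"]) before training that backbone.
```

Logging the backbone name in the run config is what lets the W&B dashboard group and overlay the resulting curves per backbone.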

In this example, we use the fastai training loop, which offers a slick integration with W&B through the WandbCallback() callback.


In this tutorial, we walk you through the different steps of training a model on the fridge dataset. Thanks to W&B, we can easily track the performance of the EfficientDet model with different backbones.

Installing IceVision and IceData

!pip install icevision[all] icedata


from icevision.all import *
from fastai.callback.wandb import *
from fastai.callback.tracker import SaveModelCallback

Datasets: Fridge Objects dataset

The Fridge Objects dataset is a tiny dataset that contains 134 images of 4 classes: can, carton, milk bottle, and water bottle.

IceVision provides very handy methods for loading a dataset, parsing annotations, and more.

# Loading Data
url = ""
dest_dir = "fridge"
data_dir = icedata.load_data(url, dest_dir, force_download=True)
# Parser
class_map = ClassMap(["milk_bottle", "carton", "can", "water_bottle"])
parser = parsers.voc(annotations_dir=data_dir / "odFridgeObjects/annotations",
                     images_dir=data_dir / "odFridgeObjects/images",
                     class_map=class_map)
# Records
train_records, valid_records = parser.parse()


Showing a batch of images with their corresponding boxes and labels

show_records(train_records[:3], ncols=3, class_map=class_map)


Train and Validation Dataset Transforms

# Transforms
train_tfms = tfms.A.Adapter([*tfms.A.aug_tfms(size=384, presize=512), tfms.A.Normalize()])
valid_tfms = tfms.A.Adapter([*tfms.A.resize_and_pad(384), tfms.A.Normalize()])
# Datasets
train_ds = Dataset(train_records, train_tfms)
valid_ds = Dataset(valid_records, valid_tfms)

Displaying the same image with different transforms


Transforms are applied lazily, meaning they are only applied when we grab (get) an item. This means that, if you have augmentation (random) transforms, each time you get the same item from the dataset you will get a slightly different version of it.

samples = [train_ds[0] for _ in range(3)]
show_samples(samples, ncols=3, class_map=class_map)
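The laziness described above can be mimicked in plain Python, independent of IceVision (a toy sketch; `LazyDataset` and the integer "records" are purely illustrative, standing in for IceVision's `Dataset` and image records):

```python
import random

class LazyDataset:
    """Toy dataset: the transform runs only when an item is fetched."""
    def __init__(self, records, tfm):
        self.records = records
        self.tfm = tfm  # called on every access, so random tfms differ each time

    def __getitem__(self, i):
        return self.tfm(self.records[i])

# A "random augmentation" stand-in: shift the value by a random offset.
ds = LazyDataset([10, 20, 30], lambda x: x + random.randint(0, 5))
views = [ds[0] for _ in range(3)]  # three (possibly different) versions of item 0
```

Because nothing is precomputed in `__init__`, fetching item 0 three times yields three independently augmented versions, which is exactly why the three figures above can differ.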



# DataLoaders
train_dl = efficientdet.train_dl(train_ds, batch_size=16, num_workers=4, shuffle=True)
valid_dl = efficientdet.valid_dl(valid_ds, batch_size=16, num_workers=4, shuffle=False)
batch, samples = first(train_dl)
show_samples(samples[:6], class_map=class_map, ncols=3)



# EfficientDet D2 
model = efficientdet.model('tf_efficientdet_d2', num_classes=len(class_map), img_size=384) 


metrics = [COCOMetric(metric_type=COCOMetricType.bbox)]


IceVision is an agnostic framework, meaning it can be plugged into other DL frameworks such as fastai and PyTorch Lightning.

You could also plug it into other DL frameworks using your own custom code.
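Conceptually, "plugging into" any framework means the framework (or your custom code) supplies the loop while IceVision supplies the dataloaders, model, and loss. A bare-bones sketch of that contract, with all names hypothetical and a dummy step standing in for the real EfficientDet forward/backward pass:

```python
# Conceptual sketch (names hypothetical): a framework-agnostic training loop
# only needs batches and a step function that returns a loss.
def train_one_epoch(model_step, train_batches, log=print):
    """model_step(batch) -> loss; the framework-agnostic contract."""
    total = 0.0
    for batch in train_batches:
        total += model_step(batch)
    avg = total / max(len(train_batches), 1)
    log(f"avg loss: {avg:.4f}")
    return avg

# With IceVision, model_step would run the model's forward pass, compute the
# loss, and backprop; here a dummy step stands in so the sketch is runnable.
avg = train_one_epoch(lambda b: float(sum(b)) * 0.1,
                      [[1, 2], [3, 4]],
                      log=lambda s: None)
```

The fastai and PyTorch Lightning integrations shipped with IceVision are essentially polished versions of this pattern, with callbacks (like WandbCallback below) hooking into the loop.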

Training using fastai

wandb.init(project="icevision-fridge", name="efficientdet_d2-1", reinit=True)


learn = efficientdet.fastai.learner(dls=[train_dl, valid_dl], model=model, metrics=metrics, cbs=[WandbCallback(), SaveModelCallback()])
learn.fine_tune(50, 1e-2, freeze_epochs=5)

Show results

efficientdet.show_results(model, valid_ds, class_map=class_map)



infer_dl = efficientdet.infer_dl(valid_ds, batch_size=8)
samples, preds = efficientdet.predict_dl(model=model, infer_dl=infer_dl)
from icevision.visualize.wandb_img import *
wandb_images = wandb_img_preds(samples, preds, class_map, add_ground_truth=True)
wandb.log({"Predicted images": wandb_images})
wandb.finish()  # optional: mark the run as completed

Happy Learning!

If you need any assistance, feel free to join our forum.