Model zoo
In our experiments, we re-trained a set of models with the harmonization loss proposed in the paper. You can download the weights of each model from the references below (a sketch of one way to fetch them programmatically follows the list):
- ViT B16 Harmonized: serrelab/prj_harmonization/vit_b16_harmonized
- VGG16 Harmonized: serrelab/prj_harmonization/vgg16_harmonized
- ResNet50V2 Harmonized: serrelab/prj_harmonization/resnet50v2_harmonized
- EfficientNet B0: serrelab/prj_harmonization/efficientnet_b0
- LeViT: serrelab/prj_harmonization/levit
- ConvNeXT: serrelab/prj_harmonization/convnext
- MaxViT: serrelab/prj_harmonization/maxvit
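The identifiers above follow the entity/project/artifact pattern used by Weights & Biases artifacts. Assuming that is how the weights are hosted (an assumption, not confirmed here), a minimal sketch for downloading one set of weights could look like this:

import wandb

# Hypothetical sketch: assumes the weights are published as W&B artifacts
# under the identifiers listed above, with a ":latest" alias available.
api = wandb.Api()
artifact = api.artifact("serrelab/prj_harmonization/vit_b16_harmonized:latest")
weights_dir = artifact.download()  # local directory containing the downloaded files
print(f"Weights downloaded to {weights_dir}")

If the weights are hosted elsewhere, the loaders shown below already take care of fetching them for you.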
To load them easily, we have set up utilities in the GitHub repository. For example, to load the harmonized models:
# import the model loaders and the shared preprocessing function
from harmonization.models import (load_ViT_B16, load_ResNet50,
                                  load_VGG16, load_EfficientNetB0,
                                  load_tiny_ConvNeXT, load_tiny_MaxViT,
                                  load_LeViT_small,
                                  preprocess_input)

# each loader returns the corresponding harmonized model with its pretrained weights
vit_harmonized = load_ViT_B16()
vgg_harmonized = load_VGG16()
resnet_harmonized = load_ResNet50()
efficient_harmonized = load_EfficientNetB0()
convnext_harmonized = load_tiny_ConvNeXT()
maxvit_harmonized = load_tiny_MaxViT()
levit_harmonized = load_LeViT_small()
# load images (in [0, 255])
# ...
images = preprocess_input(images)
predictions = vit_harmonized(images)
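Assuming the loaders return standard Keras ImageNet classifiers (the exact output format is not specified here), the predictions can be mapped to human-readable labels with Keras' decode_predictions utility. The sketch below also applies a softmax in case the model returns raw logits rather than probabilities:

import tensorflow as tf
from tensorflow.keras.applications.imagenet_utils import decode_predictions

# assumption: the harmonized models output one score per ImageNet class (1000 classes);
# a softmax turns raw logits into probabilities (and leaves probabilities nearly unchanged)
probabilities = tf.nn.softmax(predictions, axis=-1).numpy()

# print the top-5 (class_id, class_name, score) triples for each image in the batch
for top5 in decode_predictions(probabilities, top=5):
    print(top5)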