
facebook-dinov2-base-imagenet1k-1-layer

Overview

Vision Transformer (base-sized model) trained using DINOv2

A Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper DINOv2: Learning Robust Visual Features without Supervision by Oquab et al. and first released in the DINOv2 GitHub repository.

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.

Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.

Note that the DINOv2 backbone itself does not include any fine-tuned heads; this checkpoint adds a single linear classification layer on top, trained on ImageNet-1k (hence the -imagenet1k-1-layer suffix).

Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: given a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. The linear layer is typically placed on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
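As a rough illustration of this linear-probe setup, the sketch below extracts the [CLS] token with the Hugging Face transformers library and places a fresh linear layer on top of the backbone. The backbone checkpoint name, the image path, and the number of classes are assumptions made for the example, not something prescribed by this asset.

```python
# A minimal sketch, assuming the `transformers`, `torch`, and `Pillow` packages
# and the public facebook/dinov2-base backbone checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
backbone = AutoModel.from_pretrained("facebook/dinov2-base")

# Hypothetical downstream task with 10 classes.
num_classes = 10
classifier = torch.nn.Linear(backbone.config.hidden_size, num_classes)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = backbone(**inputs)

cls_token = outputs.last_hidden_state[:, 0]  # [CLS] token: (batch, hidden_size)
logits = classifier(cls_token)               # (batch, num_classes)
```

In practice the classifier weights would be trained on the labeled dataset while the backbone is kept frozen or lightly fine-tuned.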

For more details on DINOv2, see the original paper and the model's GitHub repo.

The model takes an image as input and returns a class token and patch tokens, and optionally 4 register tokens.

The embedding dimension is:

  • 384 for ViT-S
  • 768 for ViT-B
  • 1024 for ViT-L
  • 1536 for ViT-g

The models follow a Transformer architecture with a patch size of 14. In the case of registers, 4 register tokens, learned during training, are added to the input sequence after the patch embedding.

For a 224x224 image, this results in 1 class token + 256 patch tokens, and optionally 4 register tokens.

The models can accept larger images provided the image shapes are multiples of the patch size (14). If this condition is not met, the model crops the input to the closest smaller multiple of the patch size.
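As a quick sanity check on the numbers above, the following sketch computes the sequence length the encoder sees for a given input size. The helper function is illustrative only, not part of the model API.

```python
# A minimal sketch of the token arithmetic described above.
PATCH_SIZE = 14

def token_count(height: int, width: int, num_register_tokens: int = 0) -> int:
    """Tokens the encoder sees: 1 [CLS] + patch tokens (+ optional registers)."""
    # Crop to the closest smaller multiple of the patch size, as the model does.
    height = (height // PATCH_SIZE) * PATCH_SIZE
    width = (width // PATCH_SIZE) * PATCH_SIZE
    patch_tokens = (height // PATCH_SIZE) * (width // PATCH_SIZE)
    return 1 + num_register_tokens + patch_tokens

print(token_count(224, 224))                         # 1 + 256 = 257
print(token_count(224, 224, num_register_tokens=4))  # 1 + 4 + 256 = 261
```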

Training Details

Training Data

The DINOv2 model is pre-trained and fine-tuned on ImageNet 2012, consisting of 1 million images and 1,000 classes, at a resolution of 224x224.

License

apache-2.0

Inference Samples

Inference type | Python sample (Notebook) | CLI with YAML
--- | --- | ---
Real time | image-classification-online-endpoint.ipynb | image-classification-online-endpoint.sh
Batch | image-classification-batch-endpoint.ipynb | image-classification-batch-endpoint.sh
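The notebooks and scripts above cover deployment end to end. For orientation only, the sketch below shows how a deployed real-time endpoint can be called with the Azure ML Python SDK v2 (azure-ai-ml); the workspace details, endpoint and deployment names, and request file are placeholders, not values defined by this asset.

```python
# A minimal sketch of invoking a real-time endpoint with azure-ai-ml.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# sample_request.json should follow the "Sample input" schema shown below.
response = ml_client.online_endpoints.invoke(
    endpoint_name="<ENDPOINT_NAME>",
    deployment_name="<DEPLOYMENT_NAME>",
    request_file="sample_request.json",
)
print(response)
```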

Finetuning Samples

Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML
--- | --- | --- | --- | ---
Image Multi-class classification | Image Multi-class classification | fridgeObjects | fridgeobjects-multiclass-classification.ipynb | fridgeobjects-multiclass-classification.sh
Image Multi-label classification | Image Multi-label classification | multilabel fridgeObjects | fridgeobjects-multilabel-classification.ipynb | fridgeobjects-multilabel-classification.sh

Evaluation Samples

Task | Use case | Dataset | Python sample (Notebook)
--- | --- | --- | ---
Image Multi-class classification | Image Multi-class classification | fridgeObjects | image-multiclass-classification.ipynb
Image Multi-label classification | Image Multi-label classification | multilabel fridgeObjects | image-multilabel-classification.ipynb

Sample input and output

Sample input

{
  "input_data": ["image1", "image2"]
}

Note: "image1" and "image2" should be base64-encoded image strings or publicly accessible image URLs.

Sample output

[
    {
        "probs": [0.91, 0.09],
        "labels": ["can", "carton"]
    },
    {
        "probs": [0.1, 0.9],
        "labels": ["can", "carton"]
    }
]
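For completeness, a minimal sketch of reading the top prediction from a response in the format shown above; it assumes the JSON string returned by the endpoint invocation has been stored in a variable named `response`.

```python
# A minimal sketch of parsing the sample output above.
import json

predictions = json.loads(response)  # `response` as returned by the invoke sketch above

for item in predictions:
    best_prob, best_label = max(zip(item["probs"], item["labels"]))
    print(f"predicted label: {best_label} (probability {best_prob:.2f})")
```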

Visualization of inference result for a sample image

(Image: multi-class classification visualization)

Version: 2

Tags

  • huggingface_model_id: facebook/dinov2-base-imagenet1k-1-layer
  • training_dataset: imagenet-1k
  • SharedComputeCapacityEnabled
  • author: Meta
  • license: apache-2.0
  • model_specific_defaults: ordereddict({'apply_deepspeed': 'true', 'apply_ort': 'true'})
  • task: image-classification
  • inference_compute_allow_list: ['Standard_DS2_v2', 'Standard_D2a_v4', 'Standard_D2as_v4', 'Standard_DS3_v2', 'Standard_D4a_v4', 'Standard_D4as_v4', 'Standard_DS4_v2', 'Standard_D8a_v4', 'Standard_D8as_v4', 'Standard_DS5_v2', 'Standard_D16a_v4', 'Standard_D16as_v4', 'Standard_D32a_v4', 'Standard_D32as_v4', 'Standard_D48a_v4', 'Standard_D48as_v4', 'Standard_D64a_v4', 'Standard_D64as_v4', 'Standard_D96a_v4', 'Standard_D96as_v4', 'Standard_F4s_v2', 'Standard_FX4mds', 'Standard_F8s_v2', 'Standard_FX12mds', 'Standard_F16s_v2', 'Standard_F32s_v2', 'Standard_F48s_v2', 'Standard_F64s_v2', 'Standard_F72s_v2', 'Standard_FX24mds', 'Standard_FX36mds', 'Standard_FX48mds', 'Standard_E2s_v3', 'Standard_E4s_v3', 'Standard_E8s_v3', 'Standard_E16s_v3', 'Standard_E48s_v3', 'Standard_E32s_v3', 'Standard_NC4as_T4_v3', 'Standard_E64s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC48ads_A100_v4', 'Standard_NC24ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_NC96ads_A100_v4', 'Standard_ND40rs_v2', 'Standard_ND96amsr_A100_v4']
  • evaluation_compute_allow_list: ['Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2']
  • finetune_compute_allow_list: ['Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2']

View in Studio: https://ml.azure.com/registries/azureml/models/facebook-dinov2-base-imagenet1k-1-layer/version/2

License: apache-2.0

Properties

SharedComputeCapacityEnabled: True

SHA: 3ec10e6c76362191b61260300fe1d6173a8dd7e1

finetuning-tasks: image-classification

finetune-min-sku-spec: 4|1|28|176

finetune-recommended-sku: Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2

evaluation-min-sku-spec: 4|1|28|176

evaluation-recommended-sku: Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2

inference-min-sku-spec: 2|0|7|14

inference-recommended-sku: Standard_DS2_v2, Standard_D2a_v4, Standard_D2as_v4, Standard_DS3_v2, Standard_D4a_v4, Standard_D4as_v4, Standard_DS4_v2, Standard_D8a_v4, Standard_D8as_v4, Standard_DS5_v2, Standard_D16a_v4, Standard_D16as_v4, Standard_D32a_v4, Standard_D32as_v4, Standard_D48a_v4, Standard_D48as_v4, Standard_D64a_v4, Standard_D64as_v4, Standard_D96a_v4, Standard_D96as_v4, Standard_F4s_v2, Standard_FX4mds, Standard_F8s_v2, Standard_FX12mds, Standard_F16s_v2, Standard_F32s_v2, Standard_F48s_v2, Standard_F64s_v2, Standard_F72s_v2, Standard_FX24mds, Standard_FX36mds, Standard_FX48mds, Standard_E2s_v3, Standard_E4s_v3, Standard_E8s_v3, Standard_E16s_v3, Standard_E32s_v3, Standard_E48s_v3, Standard_E64s_v3, Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC24ads_A100_v4, Standard_NC48ads_A100_v4, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2
