Update README: Add model card metadata, ImageNet-1k metrics, and LiteRT usage example
#1
by akashvverma1995 - opened
README.md CHANGED

---
library_name: litert
pipeline_tag: image-classification
tags:
- vision
- image-classification
- google
- computer-vision
datasets:
- imagenet-1k
model-index:
- name: litert-community/efficientnet_v2_m
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: ImageNet-1k
      type: imagenet-1k
      config: default
      split: validation
    metrics:
    - name: Top 1 Accuracy (Full Precision)
      type: accuracy
      value: 0.8510
    - name: Top 5 Accuracy (Full Precision)
      type: accuracy
      value: 0.9715
    - name: Top 1 Accuracy (Dynamic Quantized wi8 afp32)
      type: accuracy
      value: 0.8504
    - name: Top 5 Accuracy (Dynamic Quantized wi8 afp32)
      type: accuracy
      value: 0.9715
---

# EfficientNet V2 M

EfficientNetV2-M is a high-capacity image-classification model pre-trained on ImageNet-1k, introduced by Mingxing Tan and Quoc V. Le in the 2021 paper [**EfficientNetV2: Smaller Models and Faster Training**](https://arxiv.org/abs/2104.00298). The architecture refines the original compound-scaling recipe by incorporating Fused-MBConv blocks and progressive learning, a training method that gradually increases image size and adaptively adjusts regularization during training.
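
The Fused-MBConv change can be pictured in a few lines: the 1x1 expansion convolution and depthwise 3x3 convolution of a standard MBConv block are collapsed into a single regular 3x3 convolution, which runs faster on accelerators in the early stages of the network. A minimal illustrative sketch in PyTorch (not the exact torchvision implementation; the function names are ours):

```python
import torch.nn as nn

def mbconv_core(c_in: int, c_exp: int) -> nn.Sequential:
    # Standard MBConv core: 1x1 expansion followed by a depthwise 3x3.
    return nn.Sequential(
        nn.Conv2d(c_in, c_exp, kernel_size=1, bias=False),
        nn.Conv2d(c_exp, c_exp, kernel_size=3, padding=1, groups=c_exp, bias=False),
    )

def fused_mbconv_core(c_in: int, c_exp: int) -> nn.Sequential:
    # Fused-MBConv core: expansion and depthwise convs become one regular 3x3.
    return nn.Sequential(
        nn.Conv2d(c_in, c_exp, kernel_size=3, padding=1, bias=False),
    )
```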

## Model description

The model was converted from a checkpoint from PyTorch Vision.

The original model has:
- acc@1 (on ImageNet-1K): 85.112%
- acc@5 (on ImageNet-1K): 97.156%
- num_params: 54,139,356
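
These figures can be cross-checked against the source checkpoint, which is available through torchvision. A minimal sketch using torchvision's published weights enum (assumes torchvision >= 0.13 is installed; this is for verification only, not part of the LiteRT workflow):

```python
import torchvision.models as models

# Load the pretrained EfficientNetV2-M checkpoint this model was converted from.
weights = models.EfficientNet_V2_M_Weights.IMAGENET1K_V1
model = models.efficientnet_v2_m(weights=weights).eval()

# The parameter count should match the figure quoted above (54,139,356).
print(sum(p.numel() for p in model.parameters()))
```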

## Intended uses & limitations

The model files were converted from pretrained PyTorch Vision weights. The models may carry licenses or terms and conditions derived from PyTorch Vision and from the dataset used for training; it is your responsibility to determine whether you have permission to use the models for your use case.

## How to use

**1. Install dependencies.** Ensure your Python environment has the required libraries by running the following command in your terminal:

```bash
pip install numpy Pillow huggingface_hub ai-edge-litert
```
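
Optionally, confirm the packages import cleanly before moving on (this one-liner is just a convenience check, not part of the model's tooling):

```bash
python -c "import numpy, PIL, huggingface_hub, ai_edge_litert; print('dependencies OK')"
```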

**2. Prepare your image.** The script expects an image file to classify. Place an image (e.g., `cat.jpg` or `car.png`) in the same working directory as the script.

**3. Save the script.** Create a file named `classify.py` and paste the script below into it:

```python
#!/usr/bin/env python3
import argparse
import json

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from ai_edge_litert.compiled_model import CompiledModel


def preprocess(img: Image.Image) -> np.ndarray:
    """Replicate torchvision's EfficientNetV2-M eval transform:
    resize the short side to 480, center-crop 480x480, normalize,
    and return a CHW float32 array."""
    img = img.convert("RGB")
    w, h = img.size
    s = 480
    if w < h:
        img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
    else:
        img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)
    left = (img.size[0] - 480) // 2
    top = (img.size[1] - 480) // 2
    img = img.crop((left, top, left + 480, top + 480))

    # Scale to [0, 1], then normalize with the ImageNet mean and std.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
        [0.229, 0.224, 0.225], dtype=np.float32
    )
    # HWC -> CHW, matching the PyTorch-derived input layout.
    return np.transpose(x, (2, 0, 1))


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--image", required=True)
    args = ap.parse_args()

    # Fetch the .tflite model and the ImageNet-1k label map from the Hub.
    model_path = hf_hub_download("litert-community/efficientnet_v2_m", "efficientnet_v2_m.tflite")
    labels_path = hf_hub_download(
        "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
    )
    with open(labels_path, "r", encoding="utf-8") as f:
        id2label = {int(k): v for k, v in json.load(f).items()}

    img = Image.open(args.image)
    x = preprocess(img)

    # Compile the model and create buffers for its first signature (index 0).
    model = CompiledModel.from_file(model_path)
    inp = model.create_input_buffers(0)
    out = model.create_output_buffers(0)

    # Write the preprocessed tensor, run inference, then read the logits back.
    inp[0].write(x)
    model.run_by_index(0, inp, out)

    req = model.get_output_buffer_requirements(0, 0)
    y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)

    pred = int(np.argmax(y))
    label = id2label.get(pred, f"class_{pred}")

    print(f"Top-1 class index: {pred}")
    print(f"Top-1 label: {label}")


if __name__ == "__main__":
    main()
```
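
The script prints only the single best class. For ranked predictions, the raw logits can be softmaxed and sorted; a small sketch of a helper that could be called as `top_k(y, id2label)` right after inference (the function name is ours, not part of any API):

```python
import numpy as np

def top_k(logits: np.ndarray, id2label: dict, k: int = 5) -> list:
    """Return the k highest-probability (label, probability) pairs."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = np.argsort(probs)[::-1][:k]
    return [(id2label.get(int(i), f"class_{int(i)}"), float(probs[i])) for i in idx]
```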

**4. Run the script.** Execute the following command:

```bash
python classify.py --image cat.jpg
```
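
On first run the model file is downloaded from the Hub; the script then prints the predicted class index and label. Illustrative output for a cat photo (the exact index and label depend on your image):

```
Top-1 class index: 281
Top-1 label: tabby, tabby cat
```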

### BibTeX entry and citation info

```bibtex
@inproceedings{tan2021efficientnetv2,
  title={{EfficientNetV2}: Smaller Models and Faster Training},
  author={Tan, Mingxing and Le, Quoc V.},
  booktitle={International Conference on Machine Learning},
  pages={10096--10106},
  year={2021},
  organization={PMLR}
}
```