Pixel Sentence Representation Learning
Paper: arXiv:2402.08183
Each example has the following schema (the image columns are not reproducible here; only the score column survived extraction):

| Column | Type | Details |
|---|---|---|
| sentence1 | image | width 448 px |
| sentence2 | image | width 448 px |
| score | float64 | gold similarity, 0–5 |
This dataset renders the STS-17 sentence pairs into images. We see a need to assess vision encoders' ability to understand text, and a natural way to do so is with the standard STS protocol, with the sentences rendered as images.
Examples of Use
Load the Arabic–Arabic subset:

```python
from datasets import load_dataset

dataset = load_dataset("Pixel-Linguist/rendered-sts17", name="ar-ar", split="test")
```

Load the French–English subset:

```python
from datasets import load_dataset

dataset = load_dataset("Pixel-Linguist/rendered-sts17", name="fr-en", split="test")
```
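Evaluation under the STS protocol reports the Spearman rank correlation between a model's predicted similarities and the gold `score` column. As a minimal, dependency-free sketch (the `pred` values below are hypothetical placeholders, not real encoder outputs):

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(vals):
        # Assign 1-based ranks, averaging ranks over tied values.
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(vals):
            j = i
            while j + 1 < len(vals) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank over the tie block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Gold scores (0-5 scale) vs. hypothetical cosine similarities from an encoder.
gold = [0.8, 1.0, 2.6, 2.2, 5.0]
pred = [0.1, 0.2, 0.5, 0.4, 0.9]
print(round(spearman(gold, pred), 3))  # perfectly monotone -> 1.0
```

In practice, `pred` would come from cosine similarities between the encoder's embeddings of the `sentence1` and `sentence2` images.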
Available subsets: ar-ar, en-ar, en-de, en-en, en-tr, es-en, es-es, fr-en, it-en, ko-ko, nl-en
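To evaluate across every language pair, the subsets above can be loaded in a loop. A small sketch (subset names copied from the list above; the `datasets` import is deferred into the function so the list itself is usable without the library installed):

```python
# All language-pair subsets of Pixel-Linguist/rendered-sts17.
SUBSETS = ["ar-ar", "en-ar", "en-de", "en-en", "en-tr", "es-en",
           "es-es", "fr-en", "it-en", "ko-ko", "nl-en"]

def load_all(repo="Pixel-Linguist/rendered-sts17", split="test"):
    """Yield (subset_name, dataset) pairs, downloading each subset on first use."""
    from datasets import load_dataset  # deferred: only needed when actually loading
    for name in SUBSETS:
        yield name, load_dataset(repo, name=name, split=split)

print(len(SUBSETS))  # 11 language pairs
```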
Citation:

```bibtex
@article{xiao2024pixel,
  title={Pixel Sentence Representation Learning},
  author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
  journal={arXiv preprint arXiv:2402.08183},
  year={2024}
}
```