Commit a7d4fac

[GH Actions] Automatically add papers from authors (#65)
Co-authored-by: RSTZZZ <RSTZZZ@users.noreply.github.com>
1 parent 92b09eb commit a7d4fac

File tree

3 files changed: 46 additions & 0 deletions
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+---
+title: 'Online Influence Campaigns: Strategies and Vulnerabilities'
+venue: arXiv.org
+names: Andreea Musulan, Veronica Xia, Ethan Kosak-Hine, Tom Gibbs, Vidya Sujaya, Reihaneh
+  Rabbany, J. Godbout, Kellin Pelrine
+tags:
+- arXiv.org
+link: https://doi.org/10.48550/arXiv.2501.10387
+author: Andreea Musulan
+categories: Publications
+
+---
+
+*{{ page.names }}*
+
+**{{ page.venue }}**
+
+{% include display-publication-links.html pub=page %}
+
+## Abstract
+
+None
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+---
+title: 'PairBench: A Systematic Framework for Selecting Reliable Judge VLMs'
+venue: ''
+names: Aarash Feizi, Sai Rajeswar, Adriana Romero-Soriano, Reihaneh Rabbany, Spandana
+  Gella, Valentina Zantedeschi, Joao Monteiro
+tags:
+- ''
+link: https://arxiv.org/abs/2502.15210
+author: Aarash Feizi
+categories: Publications
+
+---
+
+*{{ page.names }}*
+
+**{{ page.venue }}**
+
+{% include display-publication-links.html pub=page %}
+
+## Abstract
+
+As large vision language models (VLMs) are increasingly used as automated evaluators, understanding their ability to effectively compare data pairs as instructed in the prompt becomes essential. To address this, we present PairBench, a low-cost framework that systematically evaluates VLMs as customizable similarity tools across various modalities and scenarios. Through PairBench, we introduce four metrics that represent key desiderata of similarity scores: alignment with human annotations, consistency for data pairs irrespective of their order, smoothness of similarity distributions, and controllability through prompting. Our analysis demonstrates that no model, whether closed- or open-source, is superior on all metrics; the optimal choice depends on an auto evaluator's desired behavior (e.g., a smooth vs. a sharp judge), highlighting risks of widespread adoption of VLMs as evaluators without thorough assessment. For instance, the majority of VLMs struggle with maintaining symmetric similarity scores regardless of order. Additionally, our results show that the performance of VLMs on the metrics in PairBench closely correlates with popular benchmarks, showcasing its predictive power in ranking models.
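The consistency desideratum in the abstract (symmetric scores irrespective of pair order) can be illustrated with a minimal sketch. This is not PairBench's implementation: `order_consistency` and `toy_judge` are illustrative stand-ins, and any scorer mapping a pair to a value in [0, 1] could be plugged in for the judge.

```python
# Minimal sketch of an order-consistency check for a pairwise judge.
# `judge` is a hypothetical stand-in for a VLM similarity scorer.

def order_consistency(judge, pairs):
    """Mean absolute gap between judge(a, b) and judge(b, a).

    0.0 means perfectly symmetric scoring; larger values mean the
    score depends on the order in which the pair is presented.
    """
    gaps = [abs(judge(a, b) - judge(b, a)) for a, b in pairs]
    return sum(gaps) / len(gaps)

def toy_judge(a, b):
    # Toy scorer, symmetric by construction: Jaccard overlap of
    # character sets, standing in for a model's similarity score.
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

pairs = [("kitten", "sitting"), ("apple", "papel")]
print(order_consistency(toy_judge, pairs))  # 0.0 for a symmetric judge
```

A real VLM judge would rarely score 0.0 here, which is exactly the failure mode the abstract flags for most models.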

records/semantic_paper_ids_ignored.json

Lines changed: 2 additions & 0 deletions
@@ -100,6 +100,7 @@
 "493b9ad05b6baba4298ac8533268273aa187039d",
 "49d2ca68962595e53283b049ca2e11a81fd681f4",
 "49ebaefd64b48d071bfb0b0c5b1ec4df306f1a35",
+"4a25ac06c7536aac81c009a35641fdf410d594df",
 "4aed16d2d4266ceaa9d7d7e1e2ce4636e40f82f9",
 "4c6f53097829872734aa11de5ba6788fd992ce50",
 "4dc005ea288c50d57222122903edf87f21689781",
@@ -260,6 +261,7 @@
 "bcb40efaf8296033fc9a45e80f556226cf6b6a11",
 "bcb651d73447d96be58db5fac6fb13324842b351",
 "be249c78272e69f4f4d90ac5392fa0f2ce1b8621",
+"be839b75205330562737f3337b77cdaf35969222",
 "be9baccab3b2625e728bf8cf7cc9f717cae7103e",
 "c25245af4128a115a1056f7aa82d1cd0f883652f",
 "c2fe18041e08ba360f21240e17a15f7b140660e9",
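The diff above adds Semantic Scholar paper ids to `records/semantic_paper_ids_ignored.json`, the list the automation consults to avoid re-adding papers. The file path comes from the diff; the function names and the fetch/filter flow below are a hedged sketch of how such a job might use it, not the repository's actual workflow code.

```python
# Hedged sketch: skipping ids listed in the ignore file before
# generating new publication pages. Only the JSON path is taken
# from the commit; everything else is an illustrative assumption.
import json

def load_ignored(path="records/semantic_paper_ids_ignored.json"):
    """Read the ignore list (a JSON array of ids) into a set."""
    with open(path) as f:
        return set(json.load(f))

def filter_new_papers(candidate_ids, ignored):
    # Keep only ids not already marked as ignored, preserving order.
    return [pid for pid in candidate_ids if pid not in ignored]

# Demo with a literal set so the sketch runs without the file.
ignored = {"4a25ac06c7536aac81c009a35641fdf410d594df"}
candidates = ["abc123", "4a25ac06c7536aac81c009a35641fdf410d594df"]
print(filter_new_papers(candidates, ignored))  # ['abc123']
```

Using a set for membership keeps each lookup O(1), which matters little at a few hundred ids but costs nothing to get right.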

0 commit comments
