# Generalized Referring Expressions for Multi‑Target Vision‑Language Tasks
**Authors:** Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Yu-Gang Jiang
**Source:** [arXiv](https://arxiv.org/abs/2601.05244)
**Published:** 2026-01-08
## Summary
- Introduces GREx, a unified benchmark that expands traditional referring expression tasks (RES, REC, REG) to support single‑target, multi‑target, and no‑target expressions, enabling more realistic and flexible language‑vision interactions.
- Releases gRefCOCO, the first large‑scale dataset containing annotated images with all three expression types, while remaining backward‑compatible with existing RES/REC datasets for fair comparison.
- Proposes ReLA, a region‑level attention framework that partitions images into sub‑instance regions and jointly models region‑region and region‑language dependencies, achieving state‑of‑the‑art performance on both generalized segmentation (GRES) and comprehension (GREC).
- Demonstrates a substantial performance gap when standard RES/REC models are evaluated on GREx tasks, highlighting the need for explicit relationship reasoning and handling of ambiguous or absent targets.
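The core idea behind ReLA's region-level attention can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the region count, feature dimension, and single-head scaled dot-product attention are all simplifying assumptions, but the two stages mirror the paper's description of modeling region-region dependencies (self-attention over region features) and region-language dependencies (cross-attention from regions to language tokens).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Single-head scaled dot-product attention (no masking)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
P, T, D = 16, 8, 64                  # regions, language tokens, feature dim (illustrative)
regions = rng.normal(size=(P, D))    # per-region visual features (sub-instance regions)
words = rng.normal(size=(T, D))      # language token embeddings

# Region-region dependencies: self-attention over the region features.
regions = regions + attend(regions, regions, regions)
# Region-language dependencies: cross-attention from regions to words.
regions = regions + attend(regions, words, words)

# Per-region target scores; for a no-target expression every region
# should ideally score low, for a multi-target expression several high.
scores = regions @ rng.normal(size=(D,))
print(scores.shape)  # (16,)
```

In the actual model these attention maps would be learned jointly with the region partitioning; the point of the sketch is only the two-stage dependency modeling over regions rather than whole instances.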
## Abstract
Referring Expression Segmentation (RES) and Comprehension (REC) respectively segment and detect the object described by an expression, while Referring Expression Generation (REG) generates an expression for the selected object. Existing datasets and methods commonly support single-target expressions only, i.e., one expression refers to one object, and do not consider multi-target and no-target expressions. This greatly limits the real-world applications of REx (RES/REC/REG). This paper introduces three new benchmarks called Generalized Referring Expression Segmentation (GRES), Comprehension (GREC), and Generation (GREG), collectively denoted as GREx, which extend the classic REx to allow expressions to identify an arbitrary number of objects. We construct the first large-scale GREx dataset gRefCOCO that contains multi-target, no-target, and single-target expressions and their corresponding images with labeled targets. GREx and gRefCOCO are designed to be backward-compatible with REx, facilitating extensive experiments to study the performance gap of the existing REx methods on GREx tasks. One of the challenges of GRES/GREC is complex relationship modeling, for which we propose a baseline ReLA that adaptively divides the image into regions with sub-instance clues and explicitly models the region-region and region-language dependencies. The proposed ReLA achieves state-of-the-art results on both the GRES and GREC tasks. The proposed gRefCOCO dataset and method are available at https://henghuiding.github.io/GREx.
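A concrete consequence of supporting no-target expressions is that evaluation must credit a model for predicting an *empty* mask when the expression refers to nothing in the image. The sketch below shows one such convention: plain IoU, extended so that an empty prediction against an empty ground truth scores 1.0. The function name and the exact convention are illustrative assumptions, not necessarily the benchmark's official metric definition.

```python
import numpy as np

def no_target_aware_iou(pred_mask, gt_mask):
    """IoU extended to no-target cases (illustrative convention):
    if both prediction and ground truth are empty, the model has
    correctly recognized a no-target expression and scores 1.0."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:  # no-target expression, correctly predicted empty
        return 1.0
    return inter / union

empty = np.zeros((4, 4), dtype=bool)
full = np.ones((4, 4), dtype=bool)
print(no_target_aware_iou(empty, empty))  # 1.0: no-target handled correctly
print(no_target_aware_iou(full, empty))   # 0.0: false positive on a no-target expression
```

Under plain IoU both cases would be undefined or zero, which is why standard RES metrics and models transfer poorly to the generalized setting.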
---
*Topics: multimodal, computer-vision, nlp*
*Difficulty: advanced*