Column summary (field type and observed range or length):

| Column | Type | Range |
| --- | --- | --- |
| paper_id | uint32 | 0 – 3.26k |
| title | stringlengths | 15 – 150 |
| paper_url | stringlengths | 42 – 42 |
| authors | listlengths | 1 – 21 |
| type | stringclasses | 3 values |
| abstract | stringlengths | 393 – 2.58k |
| keywords | stringlengths | 5 – 409 |
| TL;DR | stringlengths | 7 – 250 |
| submission_number | int64 | 1 – 16.4k |
| arxiv_id | stringlengths | 10 – 10 |
| embedding | listlengths | 768 – 768 |
| github | stringlengths | 26 – 123 |
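
The twelve columns above repeat in order for every record below. As a reading aid, here is a minimal Python sketch of that record shape; the `PaperRecord` class and the `tldr` field name are illustrative assumptions, not part of the dataset:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PaperRecord:
    paper_id: int              # uint32, observed 0 .. 3.26k
    title: str                 # 15-150 characters
    paper_url: str             # always 42 characters (an OpenReview forum link)
    authors: List[str]         # 1-21 names
    type: str                  # one of 3 classes (e.g., "Spotlight")
    abstract: str              # 393-2.58k characters (truncated in this preview)
    keywords: str              # 5-409 characters
    tldr: Optional[str]        # the "TL;DR" column, 7-250 characters; may be null
    submission_number: int     # int64, observed 1 .. 16.4k
    arxiv_id: Optional[str]    # always 10 characters when present
    embedding: List[float]     # exactly 768 floats
    github: Optional[str]      # 26-123 characters when present
```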
paper_id: 3,200
title: CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities
paper_url: https://openreview.net/forum?id=3pk0p4NGmQ
authors: Yuxuan Zhu, Antony Kellermann, Dylan Bowman, Philip Li, Akul Gupta, Adarsh Danda, Richard Fang, Conner Jensen, Eric Ihli, Jason Benn, Jet Geronimo, Avi Dhir, Sudhit Rao, Kaicheng Yu, Twm Stone, Daniel Kang
type: Spotlight
abstract: Large language model (LLM) agents are increasingly capable of autonomously conducting cyberattacks, posing significant threats to existing applications. This growing risk highlights the urgent need for a real-world benchmark to evaluate the ability of LLM agents to exploit web application vulnerabilities. However, exis...
keywords: benchmark, cybersecurity, llm, agent
TL;DR: We introduce a cybersecurity benchmark for evaluating the capability of AI agents in exploiting real-world vulnerabilities of web applications.
submission_number: 5,058
arxiv_id: null
embedding: [ -0.002541055902838707, -0.0122139323502779, -0.030819380655884743, … ] (768-dim vector, truncated)
github: https://github.com/uiuc-kang-lab/cve-bench

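Only the OpenReview URL is stored verbatim; an arXiv link can be derived from `arxiv_id` using the standard `https://arxiv.org/abs/<id>` scheme. A small helper sketch, assuming the hypothetical `PaperRecord` above:

```python
def links(rec: PaperRecord) -> dict:
    """Collect all resolvable URLs for one record."""
    out = {"openreview": rec.paper_url}
    if rec.arxiv_id:                                    # null for some records
        out["arxiv"] = f"https://arxiv.org/abs/{rec.arxiv_id}"
    if rec.github:                                      # null for some records
        out["github"] = rec.github
    return out
```
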
paper_id: 3,201
title: LOCATE 3D: Real-World Object Localization via Self-Supervised Learning in 3D
paper_url: https://openreview.net/forum?id=FKi6yjXwCN
authors: Paul McVay, Sergio Arnaud, Ada Martin, Arjun Majumdar, Krishna Murthy Jatavallabhula, Phillip Thomas, Ruslan Partsey, Daniel Dugas, Abha Gejji, Alexander Sax, Vincent-Pierre Berges, Mikael Henaff, Ayush Jain, Ang Cao, Ishita Prasad, Mrinal Kalakrishnan, ...
type: Spotlight
abstract: We present LOCATE 3D, a model for localizing objects in 3D scenes from referring expressions like "the small coffee table between the sofa and the lamp." LOCATE 3D sets a new state-of-the-art on standard referential grounding benchmarks and showcases robust generalization capabilities. Notably, LOCATE 3D operates direc...
keywords: self-supervised learning, object localization, referring expressions, 3D language grounding
TL;DR: A model that can localize objects in 3D from textual referring expressions.
submission_number: 5,047
arxiv_id: 2504.14151
embedding: [ 0.019932273775339127, 0.02143774926662445, 0.0023502211552113295, … ] (768-dim vector, truncated)
github: https://github.com/facebookresearch/locate-3d

paper_id: 3,202
title: Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator
paper_url: https://openreview.net/forum?id=OJ6WE7F8tK
authors: Kaiwen Zheng, Yongxin Chen, Huayu Chen, Guande He, Ming-Yu Liu, Jun Zhu, Qinsheng Zhang
type: Spotlight
abstract: While likelihood-based generative models, particularly diffusion and autoregressive models, have achieved remarkable fidelity in visual generation, the maximum likelihood estimation (MLE) objective, which minimizes the forward KL divergence, inherently suffers from a mode-covering tendency that limits the generation qu...
keywords: Diffusion Models, Visual Autoregressive Models, GAN, Generation Quality
TL;DR: an efficient and effective finetuning method for enhancing diffusion models and visual autoregressive models
submission_number: 5,007
arxiv_id: 2503.01103
embedding: [ 0.011238894425332546, -0.007511966396123171, 0.014129060320556164, … ] (768-dim vector, truncated)
github: https://github.com/NVlabs/DDO

paper_id: 3,203
title: Geometric Representation Condition Improves Equivariant Molecule Generation
paper_url: https://openreview.net/forum?id=79O2XccGXZ
authors: Zian Li, Cai Zhou, Xiyuan Wang, Xingang Peng, Muhan Zhang
type: Spotlight
abstract: Recent advances in molecular generative models have demonstrated great promise for accelerating scientific discovery, particularly in drug design. However, these models often struggle to generate high-quality molecules, especially in conditional scenarios where specific molecular properties must be satisfied. In this w...
keywords: molecule generation, equivariant generative models, representation, geometric deep learning, diffusion models
TL;DR: We propose a two-stage, model-agnostic generative approach that effectively leverages molecule representations to improve the generation quality of molecule generative models.
submission_number: 4,856
arxiv_id: 2410.03655
embedding: [ -0.017943700775504112, -0.005527233239263296, 0.015630286186933517, … ] (768-dim vector, truncated)
github: null

paper_id: 3,204
title: Graph Adaptive Autoregressive Moving Average Models
paper_url: https://openreview.net/forum?id=UFlyLkvyAE
authors: Moshe Eliasof, Alessio Gravina, Andrea Ceni, Claudio Gallicchio, Davide Bacciu, Carola-Bibiane Schönlieb
type: Spotlight
abstract: Graph State Space Models (SSMs) have recently been introduced to enhance Graph Neural Networks (GNNs) in modeling long-range interactions. Despite their success, existing methods either compromise on permutation equivariance or limit their focus to pairwise interactions rather than sequences. Building on the connection...
keywords: Graph Neural Networks, Auto-regressive Moving Average
TL;DR: We introduce GRAMA, an ARMA-based framework that preserves permutation equivariance and adapts coefficients via selective attention for long-range propagation. Experimental results on 22 datasets demonstrate its effectiveness.
submission_number: 4,819
arxiv_id: 2501.12732
embedding: [ 0.01281682588160038, -0.024542972445487976, 0.020520076155662537, … ] (768-dim vector, truncated)
github: null

paper_id: 3,205
title: am-ELO: A Stable Framework for Arena-based LLM Evaluation
paper_url: https://openreview.net/forum?id=EUH4VUCXay
authors: Zirui Liu, Jiatong Li, Yan Zhuang, Qi Liu, Shuanghong Shen, Jie Ouyang, Mingyue Cheng, Shijin Wang
type: Spotlight
abstract: Arena-based evaluation is a fundamental and significant evaluation paradigm for modern AI models, especially large language models (LLMs). Existing frameworks based on the ELO rating system suffer from an inevitable instability problem due to ranking inconsistency and a lack of attention to the varying abilities of anno...
keywords: Large Language Models, Evaluation, Chatbot Arena, ELO Rating System
TL;DR: null
submission_number: 4,626
arxiv_id: null
embedding: [ -0.01372409425675869, -0.003515037940815091, -0.003377171466127038, … ] (768-dim vector, truncated)
github: null

paper_id: 3,206
title: Nonparametric Teaching for Graph Property Learners
paper_url: https://openreview.net/forum?id=wbvshlfyB0
authors: Chen Zhang, Weixin Bu, Zeyi Ren, Zhengwu Liu, Yik Chung WU, Ngai Wong
type: Spotlight
abstract: Inferring properties of graph-structured data, *e.g.*, the solubility of molecules, essentially involves learning the implicit mapping from graphs to their properties. This learning process is often costly for graph property learners like Graph Convolutional Networks (GCNs). To address this, we propose a paradigm calle...
keywords: Nonparametric Teaching, Graph Property Learning, Functional Gradient Descent
TL;DR: null
submission_number: 4,554
arxiv_id: 2505.14170
embedding: [ -0.01776294782757759, -0.040959835052490234, 0.006755721289664507, … ] (768-dim vector, truncated)
github: https://github.com/chen2hang/GraNT_NonparametricTeaching

paper_id: 3,207
title: Discovering a Zero (Zero-Vector Class of Machine Learning)
paper_url: https://openreview.net/forum?id=u3n5wuRGTa
authors: Harikrishna Metta, Venkatesh Babu Radhakrishnan
type: Spotlight
abstract: In machine learning, separating data into classes is a fundamental problem. A mathematical framework around the classes is presented in this work to deepen the understanding of classes. The classes are defined as vectors in a Vector Space, where addition corresponds to the union of classes, and scalar multiplicati...
keywords: Metta, Metta-Class, Metta Class, Machine Learning, ICML, Class Vector, Class Tensor Equation, Class Integration, Repository of Classes, Continual Learning, Class Addition, Class Subtraction, Class Invert, Zero Vector Class, Set operations on Classes, Boolean operation on Classes, Unary classification, Manifold learning...
TL;DR: The classes are defined as vectors in a Vector Space, where addition corresponds to the union of classes, and scalar multiplication resembles the set complement of classes. The Zero-Vector in that vector space has many useful applications.
submission_number: 4,514
arxiv_id: null
embedding: [ -0.002324153436347842, -0.042519230395555496, -0.008044605143368244, … ] (768-dim vector, truncated)
github: https://github.com/hm-4/Metta-Class

paper_id: 3,208
title: Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models
paper_url: https://openreview.net/forum?id=V61nluxFlR
authors: Yinhong Liu, Zhijiang Guo, Tianya Liang, Ehsan Shareghi, Ivan Vulić, Nigel Collier
type: Spotlight
abstract: Large Language Models (LLMs) are expected to be predictable and trustworthy to support reliable decision-making systems. Yet current LLMs often show inconsistencies in their judgments. In this work, we examine \textit{logical preference consistency} as a foundational requirement for building more dependable LLM systems...
keywords: LLMs, logical consistency, order consistency, transitivity
TL;DR: We quantify, evaluate and improve the logical preference consistency of LLMs' judgements.
submission_number: 4,461
arxiv_id: 2410.02205
embedding: [ -0.015852726995944977, 0.00496247224509716, -0.029216201975941658, … ] (768-dim vector, truncated)
github: null

paper_id: 3,209
title: Better to Teach than to Give: Domain Generalized Semantic Segmentation via Agent Queries with Diffusion Model Guidance
paper_url: https://openreview.net/forum?id=jvP1wbD0xh
authors: Fan Li, Xuan Wang, Min Qi, Zhaoxiang Zhang, yuelei xu
type: Spotlight
abstract: Domain Generalized Semantic Segmentation (DGSS) trains a model on a labeled source domain to generalize to unseen target domains with consistent contextual distribution and varying visual appearance. Most existing methods rely on domain randomization or data generation but struggle to capture the underlying scene distr...
keywords: semantic segmentation, domain generalization, diffusion model
TL;DR: null
submission_number: 4,280
arxiv_id: null
embedding: [ -0.0030459414701908827, -0.028395619243383408, 0.018368618562817574, … ] (768-dim vector, truncated)
github: null

paper_id: 3,210
title: P(all-atom) Is Unlocking New Path For Protein Design
paper_url: https://openreview.net/forum?id=yXRixu0ONY
authors: Wei Qu, Jiawei Guan, Rui Ma, kezhai, Weikun.Wu, Haobo Wang
type: Spotlight
abstract: We introduce Pallatom, an innovative protein generation model capable of producing protein structures with all-atom coordinates. Pallatom directly learns and models the joint distribution $P(\textit{structure}, \textit{seq})$ by focusing on $P(\textit{all-atom})$, effectively addressing the interdependence between sequ...
keywords: Proteins, Generative models, Co-design, All-atom
TL;DR: A state-of-the-art all-atom protein generative model.
submission_number: 4,192
arxiv_id: null
embedding: [ -0.02760215289890766, 0.0013200256507843733, -0.020307086408138275, … ] (768-dim vector, truncated)
github: https://github.com/levinthal/Pallatom

paper_id: 3,211
title: The Number of Trials Matters in Infinite-Horizon General-Utility Markov Decision Processes
paper_url: https://openreview.net/forum?id=I4jNAbqHnM
authors: Pedro Pinto Santos, Alberto Sardinha, Francisco S. Melo
type: Spotlight
abstract: The general-utility Markov decision processes (GUMDPs) framework generalizes the MDPs framework by considering objective functions that depend on the frequency of visitation of state-action pairs induced by a given policy. In this work, we contribute the first analysis of the impact of the number of trials, i.e., ...
keywords: Planning, sequential decision-making, general-utility markov decision processes, convex markov decision processes
TL;DR: null
submission_number: 4,103
arxiv_id: 2409.15128
embedding: [ -0.06633598357439041, -0.02905285358428955, -0.01800655573606491, … ] (768-dim vector, truncated)
github: https://github.com/PPSantos/gumdps-number-of-trials

paper_id: 3,212
title: On the Benefits of Active Data Collection in Operator Learning
paper_url: https://openreview.net/forum?id=hYHczNrKoX
authors: Unique Subedi, Ambuj Tewari
type: Spotlight
abstract: We study active data collection strategies for operator learning when the target operator is linear and the input functions are drawn from a mean-zero stochastic process with continuous covariance kernels. With an active data collection strategy, we establish an error convergence rate in terms of the decay rate of the ...
keywords: Operator Learning, Active Learning, PDEs
TL;DR: We study active data collection strategies for operator learning and establish their provable advantage over passive sampling approaches.
submission_number: 4,078
arxiv_id: 2410.19725
embedding: [ -0.02579122595489025, -0.012611799873411655, 0.029481716454029083, … ] (768-dim vector, truncated)
github: https://github.com/unique-subedi/active-operator-learning

paper_id: 3,213
title: Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger
paper_url: https://openreview.net/forum?id=DJcEoC9JpQ
authors: Qi Yang, Chenghao Zhang, Lubin Fan, Kun Ding, Jieping Ye, Shiming Xiang
type: Spotlight
abstract: Recent advancements in Large Vision Language Models (LVLMs) have significantly improved performance in Visual Question Answering (VQA) tasks through multimodal Retrieval-Augmented Generation (RAG). However, existing methods still face challenges, such as the scarcity of knowledge with reasoning examples and erratic res...
keywords: Large Vision Language Model, Multimodal Retrieval-Augmented Generation, In-context Learning, Monte Carlo Tree Search
TL;DR: We propose RCTS, a multimodal RAG framework that enhances LVLMs for VQA tasks by integrating a reasoning-context-enriched knowledge base and tree-search re-ranking, achieving state-of-the-art performance.
submission_number: 3,925
arxiv_id: 2506.07785
embedding: [ 0.010824047029018402, 0.0032405881211161613, 0.017740381881594658, … ] (768-dim vector, truncated)
github: https://github.com/yannqi/RCTS-RAG

paper_id: 3,214
title: Neural Collapse Beyond the Unconstrained Features Model: Landscape, Dynamics, and Generalization in the Mean-Field Regime
paper_url: https://openreview.net/forum?id=ZrhGq664om
authors: Diyuan Wu, Marco Mondelli
type: Spotlight
abstract: Neural Collapse is a phenomenon where the last-layer representations of a well-trained neural network converge to a highly structured geometry. In this paper, we focus on its first (and most basic) property, known as NC1: the within-class variability vanishes. While prior theoretical studies establish the occurrence o...
keywords: neural collapse, mean-field analysis, gradient flow, generalization error, loss landscape
TL;DR: We prove that NC1 (vanishing within-class variability) holds when training a class of 3-layer networks via gradient flow, due to loss landscape properties; we further show co-occurrence of NC1 and small test error for certain data distributions.
submission_number: 3,886
arxiv_id: 2501.19104
embedding: [ -0.03595232963562012, -0.0015204385854303837, 0.02304792031645775, … ] (768-dim vector, truncated)
github: https://github.com/DiyuanWu/icml25_expr

paper_id: 3,215
title: Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency
paper_url: https://openreview.net/forum?id=mruyFvKDKq
authors: Zexu Sun, Qiyu Han, Hao Yang, Anpeng Wu, Minqin Zhu, Dugang Liu, Chen Ma, Yunpeng Weng, Xing Tang, xiuqiang He
type: Spotlight
abstract: In online platforms, incentives (\textit{e.g.}, discounts, coupons) are used to boost user engagement and revenue. Uplift modeling methods are developed to estimate user responses from observational data, often incorporating distribution balancing to address selection bias. However, these methods are limited by in-dist...
keywords: Uplift modeling, Invariant learning, Incentives assignment, Online marketing
TL;DR: This paper proposes an invariant learning based uplift modeling method, which aims to solve the out-of-distribution problem in online marketing.
submission_number: 3,824
arxiv_id: null
embedding: [ -0.023100968450307846, -0.026913702487945557, 0.01887114718556404, … ] (768-dim vector, truncated)
github: null

paper_id: 3,216
title: G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks
paper_url: https://openreview.net/forum?id=LpE54NUnmO
authors: Guibin Zhang, Yanwei Yue, Xiangguo Sun, Guancheng Wan, Miao Yu, Junfeng Fang, Kun Wang, Tianlong Chen, Dawei Cheng
type: Spotlight
abstract: Recent advancements in large language model (LLM)-based agents have demonstrated that collective intelligence can significantly surpass the capabilities of individual agents, primarily due to well-crafted inter-agent communication topologies. Despite the diverse and high-performing designs available, practitioners ofte...
keywords: Multi-agent communication, Graph machine learning, LLM-based agent
TL;DR: null
submission_number: 3,779
arxiv_id: null
embedding: [ -0.0032377028837800026, -0.03241393342614174, 0.006839130539447069, … ] (768-dim vector, truncated)
github: https://github.com/yanweiyue/GDesigner

paper_id: 3,217
title: SAFE: Finding Sparse and Flat Minima to Improve Pruning
paper_url: https://openreview.net/forum?id=10l1pGeOcK
authors: Dongyeop Lee, Kwanhee Lee, Jinseok Chung, Namhoon Lee
type: Spotlight
abstract: Sparsifying neural networks often suffers from seemingly inevitable performance degradation, and it remains challenging to restore the original performance despite much recent progress. Motivated by recent studies in robust optimization, we aim to tackle this problem by finding subnetworks that are both sparse and flat...
keywords: Pruning, Constrained optimization, Sharpness minimization
TL;DR: We propose SAFE, an optimization-based pruning method that improves generalization of sparse models by inducing flatness.
submission_number: 3,619
arxiv_id: 2506.06866
embedding: [ -0.014754695817828178, -0.013190017081797123, 0.006733798887580633, … ] (768-dim vector, truncated)
github: https://github.com/LOG-postech/safe-torch, https://github.com/LOG-postech/safe-jax

paper_id: 3,218
title: On the Guidance of Flow Matching
paper_url: https://openreview.net/forum?id=pKaNgFzJBy
authors: Ruiqi Feng, Chenglei Yu, Wenhao Deng, Peiyan Hu, Tailin Wu
type: Spotlight
abstract: Flow matching has shown state-of-the-art performance in various generative tasks, ranging from image generation to decision-making, where generation under energy guidance (abbreviated as guidance in the following) is pivotal. However, the guidance of flow matching is more general than and thus substantially different f...
keywords: flow matching, guided generation, generative modeling
TL;DR: We introduce the first framework for general flow matching guidance, from which new guidance methods are derived and many classical guidance methods are covered as special cases.
submission_number: 3,557
arxiv_id: 2502.02150
embedding: [ -0.006284666247665882, -0.024979323148727417, 0.004139236640185118, … ] (768-dim vector, truncated)
github: https://github.com/AI4Science-WestlakeU/flow_guidance

paper_id: 3,219
title: TLLC: Transfer Learning-based Label Completion for Crowdsourcing
paper_url: https://openreview.net/forum?id=BkdAnSKNoX
authors: Wenjun Zhang, Liangxiao Jiang, Chaoqun Li
type: Spotlight
abstract: Label completion serves as a preprocessing approach to handling the sparse crowdsourced label matrix problem, significantly boosting the effectiveness of the downstream label aggregation. In recent advances, worker modeling has proven to be a powerful strategy to further improve the performance of label completion...
keywords: Crowdsourcing learning, Label Completion, Worker modeling, Transfer Learning
TL;DR: This paper proposes a novel transfer learning-based label completion (TLLC) algorithm.
submission_number: 3,322
arxiv_id: null
embedding: [ 0.03834415227174759, -0.043017901480197906, -0.02665979042649269, … ] (768-dim vector, truncated)
github: https://github.com/jiangliangxiao/TLLC

paper_id: 3,220
title: Return of the Latent Space COWBOYS: Re-thinking the use of VAEs for Bayesian Optimisation of Structured Spaces
paper_url: https://openreview.net/forum?id=U354tbTjav
authors: Henry Moss, Sebastian W. Ober, Tom Diethe
type: Spotlight
abstract: Bayesian optimisation in the latent space of a VAE is a powerful framework for optimisation tasks over complex structured domains, such as the space of valid molecules. However, existing approaches tightly couple the surrogate and generative models, which can lead to suboptimal performance when the latent space is not ...
keywords: Bayesian Optimisation
TL;DR: Don't do Bayesian optimisation in the latent space of a VAE...
submission_number: 3,247
arxiv_id: 2507.03910
embedding: [ 0.019535740837454796, 0.03609020262956619, 0.0018943378236144781, … ] (768-dim vector, truncated)
github: null

paper_id: 3,221
title: New Bounds for Sparse Variational Gaussian Processes
paper_url: https://openreview.net/forum?id=Ppcf30NGL0
authors: Michalis Titsias
type: Spotlight
abstract: Sparse variational Gaussian processes (GPs) construct tractable posterior approximations to GP models. At the core of these methods is the assumption that the true posterior distribution over training function values ${\bf f}$ and inducing variables ${\bf u}$ is approximated by a variational distribution that incorpor...
keywords: Sparse variational Gaussian process, new collapsed bound
TL;DR: It presents new collapsed and uncollapsed bounds for sparse variational Gaussian processes using inducing points.
submission_number: 3,219
arxiv_id: 2502.08730
embedding: [ -0.011453141458332539, 0.003778166137635708, 0.009129744954407215, … ] (768-dim vector, truncated)
github: null

paper_id: 3,222
title: An Error Analysis of Flow Matching for Deep Generative Modeling
paper_url: https://openreview.net/forum?id=vES22INUKm
authors: Zhengyu Zhou, Weiwei Liu
type: Spotlight
abstract: Continuous Normalizing Flows (CNFs) have proven to be a highly efficient technique for generative modeling of complex data since the introduction of Flow Matching (FM). The core of FM is to learn the constructed velocity fields of CNFs through deep least squares regression. Despite its empirical effectiveness, theoreti...
keywords: Statistical Learning Theory
TL;DR: null
submission_number: 3,098
arxiv_id: null
embedding: [ 0.007854930125176907, -0.03435168042778969, 0.007699082139879465, … ] (768-dim vector, truncated)
github: null

paper_id: 3,223
title: Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance
paper_url: https://openreview.net/forum?id=PUzNwYmb3l
authors: Lisha Chen, Quan Xiao, Ellen Hidemi Fukuda, Xinyi Chen, Kun Yuan, Tianyi Chen
type: Spotlight
abstract: Multi-objective learning under user-specified preference is common in real-world problems such as multi-lingual speech recognition under fairness. In this work, we frame such a problem as a semivectorial bilevel optimization problem, whose goal is to optimize a pre-defined preference function, subject to the constraint...
keywords: multi-objective optimization, optimization on the Pareto set, semivectorial bilevel optimization
TL;DR: We cast the preference-guided multi-objective learning problem as optimization on the Pareto set, and propose a first-order penalty approach to solve it.
submission_number: 3,045
arxiv_id: 2504.02854
embedding: [ -0.048186879605054855, -0.0010768035426735878, 0.014028235338628292, … ] (768-dim vector, truncated)
github: null

paper_id: 3,224
title: Automatically Identify and Rectify: Robust Deep Contrastive Multi-view Clustering in Noisy Scenarios
paper_url: https://openreview.net/forum?id=iFOXz5H2gB
authors: Xihong Yang, Siwei Wang, Fangdi Wang, Jiaqi Jin, Suyuan Liu, Yue Liu, En Zhu, Xinwang Liu, Yueming Jin
type: Spotlight
abstract: Leveraging the powerful representation learning capabilities, deep multi-view clustering methods have demonstrated reliable performance by effectively integrating multi-source information from diverse views in recent years. Most existing methods rely on the assumption of clean views. However, noise is pervasive in real...
keywords: Multi-view Clustering; Contrastive Learning; Noisy Scenarios
TL;DR: null
submission_number: 2,848
arxiv_id: 2505.21387
embedding: [ 0.023199167102575302, -0.013454469852149487, 0.009282118640840054, … ] (768-dim vector, truncated)
github: null

paper_id: 3,225
title: Improving Zero-Shot Adversarial Robustness in Vision-Language Models by Closed-form Alignment of Adversarial Path Simplices
paper_url: https://openreview.net/forum?id=WR0ahlhOoy
authors: Junhao Dong, Piotr Koniusz, Yifei Zhang, Hao Zhu, Weiming Liu, Xinghua Qu, Yew-Soon Ong
type: Spotlight
abstract: Vision-Language Models (VLMs) such as CLIP excel at zero-shot classification due to large-scale pre-training but are vulnerable to adversarial examples. Adversarial fine-tuning robustifies zero-shot models by aligning prediction scores of individual adversaries with their clean counterparts, which typically overlooks i...
keywords: Vision-Language Models, Adversarial Examples, Zero-Shot Classification
TL;DR: We robustify VLMs by aligning entire adversarial simplices rather than individual adversarial samples with classifier scores of clean samples.
submission_number: 2,781
arxiv_id: null
embedding: [ 0.004434118513017893, 0.00840896274894476, 0.005266317166388035, … ] (768-dim vector, truncated)
github: null

paper_id: 3,226
title: Graph Diffusion for Robust Multi-Agent Coordination
paper_url: https://openreview.net/forum?id=T5IZ32ImAB
authors: Xianghua Zeng, Hang Su, Zhengyi Wang, Zhiyuan LIN
type: Spotlight
abstract: Offline multi-agent reinforcement learning (MARL) struggles to estimate out-of-distribution states and actions due to the absence of real-time environmental feedback. While diffusion models show promise in addressing these challenges, their application primarily focuses on independently diffusing the historical traject...
keywords: multi-agent coordination, offline reinforcement learning, diffusion models
TL;DR: null
submission_number: 2,772
arxiv_id: null
embedding: [ -0.028301335871219635, -0.021914267912507057, 0.013510216027498245, … ] (768-dim vector, truncated)
github: null

paper_id: 3,227
title: Weakly-Supervised Contrastive Learning for Imprecise Class Labels
paper_url: https://openreview.net/forum?id=Y19ngWhN0b
authors: Zi-Hao Zhou, Jun-Jie Wang, Tong Wei, Min-Ling Zhang
type: Spotlight
abstract: Contrastive learning has achieved remarkable success in learning effective representations, with supervised contrastive learning often outperforming self-supervised approaches. However, in real-world scenarios, data annotations are often ambiguous or inaccurate, meaning that class labels may not reliably indicate wheth...
keywords: Weakly-supervised learning, Contrastive learning, Noisy label learning, Partial label learning
TL;DR: null
submission_number: 2,740
arxiv_id: 2505.22028
embedding: [ 0.0017569293268024921, -0.03479832410812378, -0.020671634003520012, … ] (768-dim vector, truncated)
github: https://github.com/Speechless-10308/WSC

paper_id: 3,228
title: Robust Automatic Modulation Classification with Fuzzy Regularization
paper_url: https://openreview.net/forum?id=DDIGCk25BO
authors: Xinyan Liang, Ruijie Sang, Yuhua Qian, Qian Guo, Feijiang Li, Liang Du
type: Spotlight
abstract: Automatic Modulation Classification (AMC) serves as a foundational pillar for cognitive radio systems, enabling critical functionalities including dynamic spectrum allocation, non-cooperative signal surveillance, and adaptive waveform optimization. However, practical deployment of AMC faces a fundamental challenge: pre...
keywords: Robustness, Fuzzy Regularization, Automatic Modulation Classification
TL;DR: null
submission_number: 2,696
arxiv_id: null
embedding: [ 0.008686595596373081, -0.023216215893626213, 0.027503149583935738, … ] (768-dim vector, truncated)
github: https://github.com/ruijiesang/FR-AMC

paper_id: 3,229
title: Language Models May Verbatim Complete Text They Were Not Explicitly Trained On
paper_url: https://openreview.net/forum?id=bLcXkIasck
authors: Ken Liu, Christopher A. Choquette-Choo, Matthew Jagielski, Peter Kairouz, Sanmi Koyejo, Percy Liang, Nicolas Papernot
type: Spotlight
abstract: An important question today is whether a given text was used to train a large language model (LLM). A completion test is often employed: check if the LLM completes a sufficiently complex text. This, however, requires a ground-truth definition of membership; most commonly, it is defined as a member based on the n-gram o...
keywords: Training data membership, data completion, data reconstruction, membership inference, unlearning, privacy, training set inclusion, copyright
TL;DR: Under $n$-gram definitions of train-set inclusion, LLMs can complete “unseen” texts—both after data deletion and adding “gibberish” data. Our results impact unlearning, membership inference & data transparency.
submission_number: 2,670
arxiv_id: 2503.17514
embedding: [ -0.02830583043396473, -0.03272257000207901, -0.021856753155589104, … ] (768-dim vector, truncated)
github: null

paper_id: 3,230
title: Taming Knowledge Conflicts in Language Models
paper_url: https://openreview.net/forum?id=0cEZyhHEks
authors: Gaotang Li, Yuzhong Chen, Hanghang Tong
type: Spotlight
abstract: Language Models (LMs) often encounter knowledge conflicts when parametric memory contradicts contextual knowledge. Previous works attribute this conflict to the interplay between "memory heads" and "context heads", attention heads assumed to promote either memory or context exclusively. In this study, we go beyond thi...
keywords: Knowledge Conflict, Mechanistic Interpretability, Science of Large Language Models
TL;DR: null
submission_number: 2,610
arxiv_id: 2503.10996
embedding: [ -0.029397742822766304, 0.021811673417687416, -0.05485447123646736, … ] (768-dim vector, truncated)
github: https://github.com/GaotangLi/JUICE

paper_id: 3,231
title: UniDB: A Unified Diffusion Bridge Framework via Stochastic Optimal Control
paper_url: https://openreview.net/forum?id=uqCfoVXb67
authors: Kaizhen Zhu, Mokai Pan, Yuexin Ma, Yanwei Fu, Jingyi Yu, Jingya Wang, Ye Shi
type: Spotlight
abstract: Recent advances in diffusion bridge models leverage Doob’s $h$-transform to establish fixed endpoints between distributions, demonstrating promising results in image translation and restoration tasks. However, these approaches frequently produce blurred or excessively smoothed image details and lack a comprehensive the...
keywords: Diffusion bridge, Doob's h-transform, Stochastic optimal control
TL;DR: We present UniDB, a unified diffusion bridge framework using stochastic optimal control, significantly improving detail preservation and image quality in generative tasks with minimal code modifications.
submission_number: 2,519
arxiv_id: 2502.05749
embedding: [ -0.03779662027955055, -0.018858831375837326, -0.002516046166419983, … ] (768-dim vector, truncated)
github: https://github.com/UniDB-SOC/UniDB

paper_id: 3,232
title: K2VAE: A Koopman-Kalman Enhanced Variational AutoEncoder for Probabilistic Time Series Forecasting
paper_url: https://openreview.net/forum?id=71Mm8GDGYd
authors: Xingjian Wu, Xiangfei Qiu, Hongfan Gao, Jilin Hu, Bin Yang, Chenjuan Guo
type: Spotlight
abstract: Probabilistic Time Series Forecasting (PTSF) plays a crucial role in decision-making across various fields, including economics, energy, and transportation. Most existing methods excel at short-term forecasting, while overlooking the hurdles of Long-term Probabilistic Time Series Forecasting (LPTSF). As the forecast h...
keywords: Time Series Probabilistic Forecasting
TL;DR: null
submission_number: 2,349
arxiv_id: 2505.23017
embedding: [ 0.0008425947744399309, -0.008714583702385426, 0.00022199496743269265, … ] (768-dim vector, truncated)
github: https://github.com/decisionintelligence/K2VAE

paper_id: 3,233
title: Doubly Robust Conformalized Survival Analysis with Right-Censored Data
paper_url: https://openreview.net/forum?id=2PWn1LtCwP
authors: Matteo Sesia, Vladimir Svetnik
type: Spotlight
abstract: We present a conformal inference method for constructing lower prediction bounds for survival times from right-censored data, extending recent approaches designed for more restrictive type-I censoring scenarios. The proposed method imputes unobserved censoring times using a machine learning model, and then analyzes the...
keywords: Conformal inference, Survival analysis, Uncertainty Estimation
TL;DR: This paper presents a conformal inference method for constructing lower prediction bounds for survival times from right-censored data.
submission_number: 2,249
arxiv_id: 2412.09729
embedding: [ 0.01253545843064785, -0.01663072220981121, -0.005065207835286856, … ] (768-dim vector, truncated)
github: https://github.com/msesia/conformal_survival

paper_id: 3,234
title: HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation
paper_url: https://openreview.net/forum?id=WbP2OwMULq
authors: Tianwei Lin, Wenqiao Zhang, SIJING LI, Yuqian Yuan, Binhe Yu, Haoyuan Li, Wanggui He, Hao Jiang, Mengze Li, Song xiaohui, Siliang Tang, Jun Xiao, Hui Lin, Yueting Zhuang, Beng Chin Ooi
type: Spotlight
abstract: We present **HealthGPT**, a powerful Medical Large Vision-Language Model (Med-LVLM) that integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm. Our bootstrapping philosophy is to progressively adapt heterogeneous comprehension and generation knowledge to pre-traine...
keywords: Medical Large Vision-Language Models; Multi-Modal Comprehension and Generation
TL;DR: null
submission_number: 2,225
arxiv_id: 2502.09838
embedding: [ 0.007873033173382282, 0.00216039945371449, 0.019933901727199554, … ] (768-dim vector, truncated)
github: https://github.com/DCDmllm/HealthGPT

paper_id: 3,235
title: TimeBase: The Power of Minimalism in Efficient Long-term Time Series Forecasting
paper_url: https://openreview.net/forum?id=GhTdNOMfOD
authors: Qihe Huang, Zhengyang Zhou, Kuo Yang, Zhongchao Yi, Xu Wang, Yang Wang
type: Spotlight
abstract: Long-term time series forecasting (LTSF) has traditionally relied on large parameters to capture extended temporal dependencies, resulting in substantial computational costs and inefficiencies in both memory usage and processing time. However, time series data, unlike high-dimensional images or text, often exhibit te...
keywords: Time series forecasting
TL;DR: null
submission_number: 2,176
arxiv_id: null
embedding: [ -0.012136811390519142, -0.040806844830513, 0.02272750996053219, … ] (768-dim vector, truncated)
github: null

paper_id: 3,236
title: Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models
paper_url: https://openreview.net/forum?id=IYOksPHJKT
authors: Yiyang Fang, Jian Liang, Wenke Huang, He Li, Kehua Su, Mang Ye
type: Spotlight
abstract: Multimodal large language models (MLLMs) have achieved impressive progress in tasks such as visual question answering and visual understanding, but they still face significant challenges in emotional reasoning. Current methods to enhance emotional understanding typically rely on fine-tuning or manual annotations, which...
keywords: Multimodal Large Language Models, Emotion Recognition, Training-Free
TL;DR: We propose SEPM to enhance MLLMs' emotion recognition by refining classification through a two-stage inference and reducing visual redundancy, offering a scalable, resource-efficient solution.
submission_number: 2,164
arxiv_id: null
embedding: [ -0.007483028341084719, -0.021467432379722595, 0.013698453083634377, … ] (768-dim vector, truncated)
github: https://github.com/fuyyyyy/SEPM

paper_id: 3,237
title: Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models
paper_url: https://openreview.net/forum?id=F1ff8zcjPp
authors: Saketh Bachu, Erfan Shayegani, Rohit Lal, Trishna Chakraborty, Arindam Dutta, Chengyu Song, Yue Dong, Nael B. Abu-Ghazaleh, Amit Roy-Chowdhury
type: Spotlight
abstract: Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes their safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of lay...
keywords: Vision Language Models, Safety Alignment, Reinforcement Learning from Human Feedback (RLHF)
TL;DR: We reveal an image encoder early exit based vulnerability in VLMs and propose layer-wise RLHF to alleviate it.
submission_number: 2,098
arxiv_id: 2411.04291
embedding: [ 0.011005133390426636, -0.005376123823225498, -0.0028881102334707975, … ] (768-dim vector, truncated)
github: null

paper_id: 3,238
title: Training Dynamics of In-Context Learning in Linear Attention
paper_url: https://openreview.net/forum?id=aFNq67ilos
authors: Yedi Zhang, Aaditya K Singh, Peter E. Latham, Andrew M Saxe
type: Spotlight
abstract: While attention-based models have demonstrated the remarkable ability of in-context learning (ICL), the theoretical understanding of how these models acquire this ability through gradient descent training is still preliminary. Towards answering this question, we study the gradient descent dynamics of multi-head linear...
keywords: learning dynamics, in-context learning, linear attention
TL;DR: We theoretically characterize how in-context learning abilities evolve during gradient descent training of linear attention, revealing abrupt acquisition or progressive improvements depending on how the key and query are parametrized.
submission_number: 1,939
arxiv_id: 2501.16265
embedding: [ -0.019731061533093452, 0.019311748445034027, 0.00182370375841856, … ] (768-dim vector, truncated)
github: https://github.com/yedizhang/linattn-icl

paper_id: 3,239
title: When and How Does CLIP Enable Domain and Compositional Generalization?
paper_url: https://openreview.net/forum?id=Lktwi30g63
authors: Elias Kempf, Simon Schrodi, Max Argus, Thomas Brox
type: Spotlight
abstract: The remarkable generalization performance of contrastive vision-language models like CLIP is often attributed to the diversity of their training distributions. However, key questions remain unanswered: Can CLIP generalize to an entirely unseen domain when trained on a diverse mixture of domains (domain generalization)...
keywords: CLIP, Compositional Generalization, Domain Generalization, Out-of-Distribution Robustness, OOD generalization
TL;DR: We studied CLIP's domain and compositional generalization via systematic data-centric experiments and mechanistic analyses, revealing that domain diversity, sufficiently shared intermediate features and circuitry are crucial for generalization.
submission_number: 1,549
arxiv_id: 2502.09507
embedding: [ 0.02163892798125744, 0.005316858179867268, 0.000855784397572279, … ] (768-dim vector, truncated)
github: https://github.com/lmb-freiburg/understanding-clip-ood

paper_id: 3,240
title: Diffusion-based Adversarial Purification from the Perspective of the Frequency Domain
paper_url: https://openreview.net/forum?id=Bm706VlAtU
authors: Gaozheng Pei, Ke Ma, Yingfei Sun, Qianqian Xu, Qingming Huang
type: Spotlight
abstract: The diffusion-based adversarial purification methods attempt to drown adversarial perturbations into a part of isotropic noise through the forward process, and then recover the clean images through the reverse process. Due to the lack of distribution information about adversarial perturbations in the pixel domain, it i...
keywords: Adversarial Purification
TL;DR: Adversarial purification from the perspective of the frequency domain.
submission_number: 1,502
arxiv_id: 2505.01267
embedding: [ -0.0038476360496133566, -0.003091406309977174, 0.014221088960766792, … ] (768-dim vector, truncated)
github: https://github.com/GaozhengPei/FreqPure

paper_id: 3,241
title: Leveraging Diffusion Model as Pseudo-Anomalous Graph Generator for Graph-Level Anomaly Detection
paper_url: https://openreview.net/forum?id=Zm2M92TZyO
authors: Jinyu Cai, Yunhe Zhang, Fusheng Liu, See-Kiong Ng
type: Spotlight
abstract: A fundamental challenge in graph-level anomaly detection (GLAD) is the scarcity of anomalous graph data, as the training dataset typically contains only normal graphs or very few anomalies. This imbalance hinders the development of robust detection models. In this paper, we propose **A**nomalous **G**raph **Diff**usion...
keywords: Anomaly Detection, Diffusion Model, Graph Neural Network
TL;DR: null
submission_number: 1,490
arxiv_id: null
embedding: [ -0.002747983206063509, -0.035903200507164, 0.012242932803928852, … ] (768-dim vector, truncated)
github: null

paper_id: 3,242
title: An Analysis for Reasoning Bias of Language Models with Small Initialization
paper_url: https://openreview.net/forum?id=4HQaMUYWAT
authors: Junjie Yao, Zhongwang Zhang, Zhi-Qin John Xu
type: Spotlight
abstract: Transformer-based Large Language Models (LLMs) have revolutionized Natural Language Processing by demonstrating exceptional performance across diverse tasks. This study investigates the impact of the parameter initialization scale on the training behavior and task preferences of LLMs. We discover that smaller initializ...
keywords: initialization scale, reasoning bias, language model, embedding space, training dynamics
TL;DR: null
submission_number: 1,338
arxiv_id: 2502.04375
embedding: [ -0.03985975310206413, -0.01581624522805214, -0.018162542954087257, … ] (768-dim vector, truncated)
github: null

paper_id: 3,243
title: Instance Correlation Graph-based Naive Bayes
paper_url: https://openreview.net/forum?id=hwTKGdM4TK
authors: Chengyuan Li, Liangxiao Jiang, Wenjun Zhang, Liangjun Yu, Huan Zhang
type: Spotlight
abstract: Due to its simplicity, effectiveness and robustness, naive Bayes (NB) has continued to be one of the top 10 data mining algorithms. To improve its performance, a large number of improved algorithms have been proposed in the last few decades. However, in addition to Gaussian naive Bayes (GNB), there is little work on nu...
keywords: Naive Bayes, Numerical attribute, Instance correlation graph, Variational graph auto-encoder
TL;DR: A novel instance correlation graph-based naive Bayes (ICGNB) algorithm is proposed.
submission_number: 1,002
arxiv_id: null
embedding: [ 0.015169091522693634, -0.0006669443682767451, 0.010687909089028835, … ] (768-dim vector, truncated)
github: https://github.com/jiangliangxiao/ICGNB

paper_id: 3,244
title: Trusted Multi-View Classification with Expert Knowledge Constraints
paper_url: https://openreview.net/forum?id=U64wEbM7NB
authors: Xinyan Liang, Shijie Wang, Yuhua Qian, Qian Guo, Liang Du, Bingbing Jiang, Tingjin Luo, Feijiang Li
type: Spotlight
abstract: Multi-view classification (MVC) based on the Dempster-Shafer theory has gained significant recognition for its reliability in safety-critical applications. However, existing methods predominantly focus on providing confidence levels for decision outcomes without explaining the reasoning behind these decisions. Moreover...
keywords: multi-view classification, trusted multi-view classification, trusted fusion, distribution-aware subjective opinion
TL;DR: null
submission_number: 850
arxiv_id: null
embedding: [ 0.011308585293591022, -0.006757786031812429, -0.0346435084939003, … ] (768-dim vector, truncated)
github: https://github.com/jie019/TMCEK_ICML2025

paper_id: 3,245
title: Discrepancy Minimization in Input-Sparsity Time
paper_url: https://openreview.net/forum?id=TmJvacopmV
authors: Yichuan Deng, Xiaoyu Li, Zhao Song, OMRI WEINSTEIN
type: Spotlight
abstract: A recent work by [Larsen, SODA 2023] introduced a faster combinatorial alternative to Bansal's SDP algorithm for finding a coloring $x \in \{-1, 1\}^n$ that approximately minimizes the discrepancy $\mathrm{disc}(A, x) := \| A x \|_{\infty}$ of a real-valued $m \times n$ matrix $A$. Larsen's algorithm runs in $\wide...
keywords: combinatorial optimization, algorithmic discrepancy theory, sketching, input-sparsity time
TL;DR: We give the algorithm for discrepancy minimization which runs in input-sparsity time.
submission_number: 816
arxiv_id: 2210.12468
embedding: [ -0.002692044945433736, -0.028368311002850533, -0.014945989474654198, … ] (768-dim vector, truncated)
github: null

paper_id: 3,246
title: Sharp Generalization for Nonparametric Regression by Over-Parameterized Neural Networks: A Distribution-Free Analysis in Spherical Covariate
paper_url: https://openreview.net/forum?id=fPOkujQBVb
authors: Yingzhen Yang
type: Spotlight
abstract: Sharp generalization bounds for neural networks trained by gradient descent (GD) are of central interest in statistical learning theory and deep learning. In this paper, we consider nonparametric regression by an over-parameterized two-layer NN trained by GD. We show that, if the neural network is trained by GD with earl...
keywords: Nonparametric Regression, Over-Parameterized Neural Network, Gradient Descent, Minimax Optimal Rate
TL;DR: We show that an over-parameterized two-layer neural network trained by gradient descent (GD) exhibits minimax optimal convergence rates for nonparametric regression, and our results are distribution-free in spherical covariate.
submission_number: 572
arxiv_id: null
embedding: [ -0.05287288874387741, -0.013835140503942966, 0.016210369765758514, … ] (768-dim vector, truncated)
github: null

paper_id: 3,247
title: Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss
paper_url: https://openreview.net/forum?id=S2K5MyRjrL
authors: Bo-Han Lai, Pin-Han Huang, Bo-Han Kung, Shang-Tse Chen
type: Spotlight
abstract: Lipschitz neural networks are well-known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoreticall...
keywords: Certified robustness, Adversarial
TL;DR: We propose a new orthogonal convolution and a novel loss function to enhance certified robustness.
submission_number: 559
arxiv_id: 2505.15174
embedding: [ -0.017427440732717514, -0.039643295109272, 0.016591809689998627, … ] (768-dim vector, truncated)
github: https://github.com/ntuaislab/BRONet

paper_id: 3,248
title: Relational Invariant Learning for Robust Solvation Free Energy Prediction
paper_url: https://openreview.net/forum?id=xVBfdltHST
authors: yeyunchen
type: Spotlight
abstract: Predicting the solvation free energy of molecules using graph neural networks holds significant potential for advancing drug discovery and the design of novel materials. While previous methods have demonstrated success on independent and identically distributed (IID) datasets, their performance in out-of-distribution (...
keywords: Molecule relational learning, graph neural network, out of distribution generalization
TL;DR: null
submission_number: 496
arxiv_id: null
embedding: [ -0.012538155540823936, 0.021067582070827484, 0.013200987130403519, … ] (768-dim vector, truncated)
github: null

paper_id: 3,249
title: Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection
paper_url: https://openreview.net/forum?id=GoGuB1yFko
authors: Xiang Fang, Arvind Easwaran, Blaise Genest
type: Spotlight
abstract: Out-of-distribution (OOD) detection attempts to distinguish outlier samples to prevent models trained on the in-distribution (ID) dataset from producing unavailable outputs. Most OOD detection methods require many ID samples for training, which seriously limits their real-world applications. To this end, we target a ch...
keywords: Adaptive Multi-prompt Contrastive Network
TL;DR: null
submission_number: 484
arxiv_id: 2506.17633
embedding: [ -0.009356077760457993, -0.01978178881108761, 0.005262717604637146, … ] (768-dim vector, truncated)
github: null

paper_id: 3,250
title: FlashTP: Fused, Sparsity-Aware Tensor Product for Machine Learning Interatomic Potentials
paper_url: https://openreview.net/forum?id=wiQe95BPaB
authors: Seung Yul Lee, Hojoon Kim, Yutack Park, Dawoon Jeong, Seungwu Han, Yeonhong Park, Jae W. Lee
type: Spotlight
abstract: Machine Learning Interatomic Potentials (MLIPs) enable efficient molecular dynamics (MD) simulations with high accuracy. While equivariant MLIPs achieve state-of-the-art accuracy, they face significant computational bottlenecks centered around their Tensor-Product layer, which account for up to 75\% of training time an...
keywords: Equivariant neural networks, Tensor Product, Software libraries, Efficiency, Machine-learned interatomic potential (MLIP), Machine Learning Force Fields (MLFF)
TL;DR: FlashTP accelerates equivariant MLIPs by optimizing Tensor-Product operations, achieving up to speedup and significantly reducing memory footprint.
submission_number: 450
arxiv_id: null
embedding: [ -0.01583295874297619, -0.018414873629808426, -0.011332233436405659, … ] (768-dim vector, truncated)
github: https://github.com/SNU-ARC/flashTP

paper_id: 3,251
title: Which Agent Causes Task Failures and When? On Automated Failure Attribution of LLM Multi-Agent Systems
paper_url: https://openreview.net/forum?id=GazlTYxZss
authors: Shaokun Zhang, Ming Yin, Jieyu Zhang, Jiale Liu, Zhiguang Han, Jingyang Zhang, Beibin Li, Chi Wang, Huazheng Wang, Yiran Chen, Qingyun Wu
type: Spotlight
abstract: Failure attribution in LLM multi-agent systems—identifying the agent and step responsible for task failures—provides crucial clues for systems debugging but remains underexplored and labor-intensive. In this paper, we propose and formulate a new research area: automated failure attribution for LLM multi-agent systems....
keywords: failure attribution, multi-agent systems
TL;DR: null
submission_number: 425
arxiv_id: 2505.00212
embedding: [ 0.015264291316270828, -0.0015311246970668435, -0.012256093323230743, … ] (768-dim vector, truncated)
github: https://github.com/mingyin1/Agents_Failure_Attribution

paper_id: 3,252
title: A Closer Look at Multimodal Representation Collapse
paper_url: https://openreview.net/forum?id=Vf9f7eNX6T
authors: Abhra Chaudhuri, Anjan Dutta, Tu Bui, Serban Georgescu
type: Spotlight
abstract: We aim to develop a fundamental understanding of modality collapse, a recently observed empirical phenomenon wherein models trained for multimodal fusion tend to rely only on a subset of the modalities, ignoring the rest. We show that modality collapse happens when noisy features from one modality are entangled, via a ...
keywords: Multimodal learning, modality collapse
TL;DR: Modality collapse happens as a result of cross-modal polysemantic entanglements arising out of rank bottlenecks in deep multimodal models, and can thus be remedied by freeing up such bottlenecks.
submission_number: 421
arxiv_id: 2505.22483
embedding: [ -0.00044555444037541747, 0.01217851135879755, 0.011638610623776913, … ] (768-dim vector, truncated)
github: null

paper_id: 3,253
title: Geometric Hyena Networks for Large-scale Equivariant Learning
paper_url: https://openreview.net/forum?id=jJRkkPr474
authors: Artem Moskalev, Mangal Prakash, Junjie Xu, Tianyu Cui, Rui Liao, Tommaso Mansi
type: Spotlight
abstract: Processing global geometric context while preserving equivariance is crucial when modeling biological, chemical, and physical systems. Yet, this is challenging due to the computational demands of equivariance and global context at scale. Standard methods such as equivariant self-attention suffer from quadratic complexi...
keywords: equivariance, global context, long convolution, scalability, mechanistic interpretability, architecture
TL;DR: Geometric Hyena Networks is the first equivariant long-convolutional model that efficiently captures global geometric context at sub-quadratic complexity.
submission_number: 339
arxiv_id: 2505.22560
embedding: [ 0.011044753715395927, 0.01563769206404686, -0.00637254910543561, … ] (768-dim vector, truncated)
github: null

paper_id: 3,254
title: Covered Forest: Fine-grained generalization analysis of graph neural networks
paper_url: https://openreview.net/forum?id=xvLVYrYQ8a
authors: Antonis Vasileiou, Ben Finkelshtein, Floris Geerts, Ron Levie, Christopher Morris
type: Spotlight
abstract: The expressive power of message-passing graph neural networks (MPNNs) is reasonably well understood, primarily through combinatorial techniques from graph isomorphism testing. However, MPNNs' generalization abilities---making meaningful predictions beyond the training set---remain less explored. Current generalization ...
keywords: MPNNs, generalization, bounds, theory, Weisfeiler, Leman, Lehman
TL;DR: We provide tighter generalization bounds for MPNNs by considering the pseudometric geometry of MPNNs' feature space.
submission_number: 252
arxiv_id: 2412.07106
embedding: [ -0.021143797785043716, -0.0393991656601429, 0.02182580903172493, … ] (768-dim vector, truncated)
github: https://github.com/benfinkelshtein/CoveredForests

paper_id: 3,255
title: Revisiting Continuity of Image Tokens for Cross-domain Few-shot Learning
paper_url: https://openreview.net/forum?id=OpineZj5bj
authors: Shuai Yi, Yixiong Zou, Yuhua Li, Ruixuan Li
type: Spotlight
abstract: Vision Transformer (ViT) has achieved remarkable success due to its large-scale pretraining on general domains, but it still faces challenges when applied to downstream distant domains that have only scarce training data, which gives rise to the Cross-Domain Few-Shot Learning (CDFSL) task. Inspired by Self-Attentio...
keywords: Cross-Domain Few-Shot Learning
TL;DR: We find that disrupting the continuity of image patches (e.g., shuffling patches) affects source and target domains differently. We delve into this phenomenon for an interpretation and propose a method based on it for CDFSL.
submission_number: 247
arxiv_id: 2506.03110
embedding: [ 0.013432389125227928, -0.026244306936860085, 0.00008663689368404448, … ] (768-dim vector, truncated)
github: https://github.com/shuaiyi308/ReCIT

paper_id: 3,256
title: On the Tension between Byzantine Robustness and No-Attack Accuracy in Distributed Learning
paper_url: https://openreview.net/forum?id=zU4VCPHYRC
authors: Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li
type: Spotlight
abstract: Byzantine-robust distributed learning (BRDL), which refers to distributed learning that can work with potentially faulty or malicious workers (also known as Byzantine workers), has recently attracted much research attention. Robust aggregators are widely used in existing BRDL methods to obtain robustness against Byzantin...
keywords: distributed machine learning, Byzantine robustness, robust aggregation
TL;DR: null
submission_number: 121
arxiv_id: null
embedding: [ -0.02111426554620266, -0.0030951371882110834, -0.03014589287340641, … ] (768-dim vector, truncated)
github: null

paper_id: 3,257
title: Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective
paper_url: https://openreview.net/forum?id=otNB7BzsiR
authors: Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Fei Chao, Rongrong Ji
type: Spotlight
abstract: In this paper, we address the challenge of determining the layer-wise sparsity rates of large language models (LLMs) through a theoretical perspective. Specifically, we identify a critical issue of **"reconstruction error explosion"** in existing LLM sparsification methods. This refers to the cumulative effect of reco...
keywords: Large language models, Network Sparsity, Layerwise sparsity
TL;DR: We derive the layer-wise sparsity rate of LLMs through a theoretical perspective, which significantly enhances the performance of sparse LLMs.
submission_number: 58
arxiv_id: 2502.14770
embedding: [ -0.013511696830391884, -0.01570139452815056, 0.006803370080888271, … ] (768-dim vector, truncated)
github: https://github.com/wzhuang-xmu/ATP

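
Every record carries a fixed 768-dimensional `embedding`, which makes nearest-neighbor search over papers the natural use case. Below is a minimal cosine-similarity sketch, assuming the hypothetical `PaperRecord` above; the preview does not state how the vectors were produced, so treat the scores as illustrative:

```python
import numpy as np

def most_similar(records, query_idx: int, k: int = 5):
    """Return the k records whose embeddings are closest to the query's."""
    X = np.asarray([r.embedding for r in records], dtype=np.float32)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize rows
    sims = X @ X[query_idx]                         # cosine similarity to query
    order = np.argsort(-sims)
    order = order[order != query_idx][:k]           # drop the query itself
    return [(records[i].title, float(sims[i])) for i in order]
```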