LEGOBench: leaderboard generation benchmark for scientific models

dc.contributor.author Singh, Shruti
dc.contributor.author Alam, Shoaib
dc.contributor.author Singh, Mayank
dc.coverage.spatial United States of America
dc.date.accessioned 2024-01-25T07:18:09Z
dc.date.available 2024-01-25T07:18:09Z
dc.date.issued 2024-01
dc.identifier.citation Singh, Shruti; Alam, Shoaib and Singh, Mayank, "LEGOBench: leaderboard generation benchmark for scientific models", arXiv, Cornell University Library, DOI: 10.48550/arXiv.2401.06233, Jan. 2024.
dc.identifier.issn 2331-8422
dc.identifier.uri https://doi.org/10.48550/arXiv.2401.06233
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/9699
dc.description.abstract The ever-increasing volume of paper submissions makes it difficult to stay informed about the latest state-of-the-art research. To address this challenge, we introduce LEGOBench, a benchmark for evaluating systems that generate leaderboards. LEGOBench is curated from 22 years of preprint submissions on arXiv and more than 11,000 machine learning leaderboards on the PapersWithCode portal. We evaluate four traditional graph-based ranking variants and three recently proposed large language models. Our preliminary results show significant performance gaps in automatic leaderboard generation. The code is available at this https URL and the dataset is hosted at this https URL.
dc.description.statementofresponsibility by Shruti Singh, Shoaib Alam and Mayank Singh
dc.language.iso en_US
dc.publisher Cornell University Library
dc.title LEGOBench: leaderboard generation benchmark for scientific models
dc.type Article
dc.relation.journal arXiv

