Model hubs and beyond: analyzing model popularity, performance, and documentation

dc.contributor.author Kadasi, Pritam
dc.contributor.author Reddy, Sriman
dc.contributor.author Chaturvedula, Srivathsa Vamsi
dc.contributor.author Sen, Rudranshu
dc.contributor.author Saha, Agnish
dc.contributor.author Sikdar, Soumavo
dc.contributor.author Sarkar, Sayani
dc.contributor.author Mittal, Suhani
dc.contributor.author Jindal, Rohit
dc.contributor.author Singh, Mayank
dc.coverage.spatial United States of America
dc.date.accessioned 2025-03-28T15:38:35Z
dc.date.available 2025-03-28T15:38:35Z
dc.date.issued 2025-03
dc.identifier.citation Kadasi, Pritam; Reddy, Sriman; Chaturvedula, Srivathsa Vamsi; Sen, Rudranshu; Saha, Agnish; Sikdar, Soumavo; Sarkar, Sayani; Mittal, Suhani; Jindal, Rohit and Singh, Mayank, "Model hubs and beyond: analyzing model popularity, performance, and documentation", arXiv, Cornell University Library, DOI: arXiv:2503.15222, Mar. 2025.
dc.identifier.uri http://arxiv.org/abs/2503.15222
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/11137
dc.description.abstract With the massive surge in ML models on platforms like Hugging Face, users often lose track and struggle to choose the best model for their downstream tasks, frequently relying on model popularity indicated by download counts, likes, or recency. We investigate whether this popularity aligns with actual model performance and how the comprehensiveness of model documentation correlates with both popularity and performance. In our study, we evaluated a comprehensive set of 500 Sentiment Analysis models on Hugging Face. This evaluation involved massive annotation efforts, with human annotators completing nearly 80,000 annotations, alongside extensive model training and evaluation. Our findings reveal that model popularity does not necessarily correlate with performance. Additionally, we identify critical inconsistencies in model card reporting: approximately 80% of the models analyzed lack detailed information about the model, training, and evaluation processes. Furthermore, about 88% of model authors overstate their models' performance in the model cards. Based on our findings, we provide a checklist of guidelines for users to choose good models for downstream tasks.
dc.description.statementofresponsibility by Pritam Kadasi, Sriman Reddy, Srivathsa Vamsi Chaturvedula, Rudranshu Sen, Agnish Saha, Soumavo Sikdar, Sayani Sarkar, Suhani Mittal, Rohit Jindal and Mayank Singh
dc.language.iso en_US
dc.publisher Cornell University Library
dc.title Model hubs and beyond: analyzing model popularity, performance, and documentation
dc.type Article
dc.relation.journal arXiv

