Remember this event that year? Assessing temporal information and reasoning in large language models

dc.contributor.author Beniwal, Himanshu
dc.contributor.author Patel, Dishant
dc.contributor.author Nandagopan D., Kowsik
dc.contributor.author Ladia, Hritik
dc.contributor.author Yadav, Ankit
dc.contributor.author Singh, Mayank
dc.contributor.other Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
dc.coverage.spatial United States of America
dc.date.accessioned 2024-11-20T13:29:59Z
dc.date.available 2024-11-20T13:29:59Z
dc.date.issued 2024-11-12
dc.identifier.citation Beniwal, Himanshu; Patel, Dishant; Nandagopan D., Kowsik; Ladia, Hritik; Yadav, Ankit and Singh, Mayank, "Remember this event that year? Assessing temporal information and reasoning in large language models", in the Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), Miami, US, Nov. 12-16, 2024.
dc.identifier.uri https://aclanthology.org/2024.findings-emnlp.953
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10788
dc.description.abstract Large Language Models (LLMs) are increasingly ubiquitous, yet their ability to retain and reason about temporal information remains limited, hindering their application in real-world scenarios where understanding the sequential nature of events is crucial. Our study experiments with 12 state-of-the-art models (ranging from 2B to 70B+ parameters) on a novel numerical-temporal dataset, TempUN, spanning from 10,000 BCE to 2100 CE, to uncover significant temporal retention and comprehension limitations. We propose six metrics to assess three learning paradigms to enhance temporal knowledge acquisition. Our findings reveal that open-source models exhibit knowledge gaps more frequently, suggesting a trade-off between limited knowledge and incorrect responses. Additionally, various fine-tuning approaches significantly improved performance, reducing incorrect outputs and impacting the identification of ‘information not available’ in the generations. The associated dataset and code are available at: https://anonymous.4open.science/r/TempUN-ARR/
dc.description.statementofresponsibility by Himanshu Beniwal, Dishant Patel, Kowsik Nandagopan D., Hritik Ladia, Ankit Yadav and Mayank Singh
dc.language.iso en_US
dc.publisher Association for Computational Linguistics
dc.title Remember this event that year? Assessing temporal information and reasoning in large language models
dc.type Conference Paper

