PythonSaga: redefining the benchmark to evaluate code generating LLMs

dc.contributor.advisor Singh, Mayank
dc.contributor.author Yadav, Ankit
dc.date.accessioned 2025-09-11T15:52:53Z
dc.date.available 2025-09-11T15:52:53Z
dc.date.issued 2025
dc.identifier.citation Yadav, Ankit. (2025). PythonSaga: redefining the benchmark to evaluate code generating LLMs. Gandhinagar: Indian Institute of Technology Gandhinagar, 91p. (Acc. No.: T01418)
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/11970
dc.description.statementofresponsibility by Ankit Yadav
dc.format.extent xiii, 91p.: hbk.: 30 cm
dc.language.iso en_US
dc.publisher Indian Institute of Technology Gandhinagar
dc.subject 22270001
dc.subject M. Tech
dc.subject Computer Science and Engineering
dc.subject Large Language Model (LLM)
dc.subject Natural Language Model (NLM)
dc.subject Code Generation
dc.subject Programming Concepts
dc.subject Annotation Study
dc.subject PythonSaga
dc.title PythonSaga: redefining the benchmark to evaluate code generating LLMs
dc.type Thesis
dc.contributor.department Computer Science and Engineering
dc.description.degree M.Tech.

