LLM-Assisted Test Generation and Reporting: A New Paradigm in Automated Software Quality Assurance

Dr. Ritesh Kumar Sharma, Rakhi Gupta, Sonia Jain

Abstract


The increasing complexity of modern software systems has raised the demand for intelligent, automated testing frameworks. Traditional testing methods, though reliable, struggle to keep pace with dynamic requirements, rapid iteration cycles, and large-scale codebases. The recent rise of Large Language Models (LLMs), such as GPT-based architectures, offers new opportunities to automate and enhance test generation, execution, and reporting. This paper explores how LLM-assisted test generation and reporting can transform software quality assurance by reducing manual effort, improving coverage, and enabling intelligent defect analysis. It also analyzes existing approaches, identifies open challenges, discusses potential solutions, and outlines future research directions for integrating LLMs into software testing ecosystems.

KEYWORDS: Large Language Models, Software Testing, Automated Test Generation, Test Reporting, Quality Assurance, AI-Assisted Development, Software Engineering, NLP in Testing


Pages: 137-147
