Abstract
Automated text summarization (ATS) is crucial for distilling specialized, domain-specific information. Zero-shot learning (ZSL) enables large language models (LLMs) to respond to prompts about information not included in their training data, making it central to this process. This study evaluates how effectively LLMs generate accurate summaries under ZSL conditions and explores whether retrieval-augmented generation (RAG) and prompt engineering can enhance factual accuracy and comprehension. We combined LLMs with summarization modeling, prompt engineering, and RAG, and evaluated the resulting summaries using the METEOR metric and keyword frequencies visualized as word clouds. Results indicate that LLMs are generally well suited to ATS tasks and, when paired with RAG, can handle specialized information under ZSL conditions. However, challenges remain: goal misgeneralization can distort summaries, and web scraping limitations prevent a single generalized retrieval mechanism. Future research should focus on addressing these issues.