Kabakus, Abdullah Talha (2025)
ISSN: 1309-1751
DOI: https://doi.org/10.17780/ksujes.1649970
TR Dizin: https://search.trdizin.gov.tr/tr/yayin/detay/1315044
Handle: https://hdl.handle.net/20.500.12684/20825

EXPLORING THE EFFECTIVENESS OF PRE-TRAINED TRANSFORMER MODELS FOR TURKISH QUESTION ANSWERING

Abstract: Recent advancements in Natural Language Processing (NLP) and Artificial Intelligence (AI) have been propelled by the emergence of Transformer-based Large Language Models (LLMs), which have demonstrated outstanding performance across various tasks, including Question Answering (QA). However, the adoption and performance of these models in low-resource, morphologically rich languages such as Turkish remain underexplored. This study addresses that gap by systematically evaluating several state-of-the-art Transformer-based LLMs on a curated, gold-standard Turkish QA dataset. The evaluated models are BERTurk, XLM-RoBERTa, ELECTRA-Turkish, DistilBERT, and T5-Small, with a focus on their ability to handle the unique linguistic challenges posed by Turkish. The experimental results indicate that BERTurk outperforms the other models, achieving an F1-score of 0.8144, an Exact Match of 0.6351, and a BLEU score of 0.4035. The study highlights the importance of language-specific pre-training and the need for further research to improve the performance of LLMs in low-resource languages. The findings provide valuable insights for future efforts to enhance Turkish NLP resources and advance QA systems in underrepresented linguistic contexts.

Keywords: Artificial intelligence; natural language processing; question answering; transformer; large language model
Language: English
Access: info:eu-repo/semantics/openAccess
Type: Article, 28(2), 975-993
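For readers who want to run a comparable experiment, the sketch below (not the paper's own code) shows how an extractive QA model for Turkish can be queried through the Hugging Face transformers pipeline. The checkpoint identifier is an assumed, publicly available Turkish QA model used purely for illustration; the study itself fine-tunes and evaluates BERTurk, XLM-RoBERTa, ELECTRA-Turkish, DistilBERT, and T5-Small on its own gold-standard dataset.

```python
# Minimal illustrative sketch of extractive QA inference for Turkish.
# The model name below is an assumption (a public Turkish QA checkpoint),
# not the checkpoint used in the paper.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="savasy/bert-base-turkish-squad",  # assumed illustrative checkpoint
)

result = qa(
    question="Türkiye'nin başkenti neresidir?",  # "What is the capital of Turkey?"
    context="Türkiye'nin başkenti Ankara'dır ve en kalabalık şehri İstanbul'dur.",
)

# The pipeline returns the extracted answer span and its confidence score.
print(result["answer"], result["score"])
```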
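The reported Exact Match and F1 figures follow the standard span-extraction metrics. A minimal, illustrative implementation (not the authors' evaluation code) of SQuAD-style Exact Match and token-level F1 is sketched below; the BLEU score would typically be computed with an existing library such as sacrebleu and is omitted here.

```python
# Illustrative sketch of SQuAD-style Exact Match and token-level F1.
from collections import Counter


def normalize(text: str) -> str:
    # Lowercase and collapse whitespace; Turkish-specific casing and
    # punctuation handling are omitted for brevity.
    return " ".join(text.lower().split())


def exact_match(prediction: str, reference: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))


def token_f1(prediction: str, reference: str) -> float:
    # Harmonic mean of token-level precision and recall over the answer span.
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    print(exact_match("Mustafa Kemal Atatürk", "Mustafa Kemal Atatürk"))  # 1.0
    print(token_f1("Kemal Atatürk", "Mustafa Kemal Atatürk"))             # 0.8
```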