EXPLORING THE EFFECTIVENESS OF PRE-TRAINED TRANSFORMER MODELS FOR TURKISH QUESTION ANSWERING

dc.contributor.author: Kabakus, Abdullah Talha
dc.date.accessioned: 2025-10-11T20:38:01Z
dc.date.available: 2025-10-11T20:38:01Z
dc.date.issued: 2025
dc.department: Düzce Üniversitesi
dc.description.abstract: Recent advancements in Natural Language Processing (NLP) and Artificial Intelligence (AI) have been propelled by the emergence of Transformer-based Large Language Models (LLMs), which have demonstrated outstanding performance across various tasks, including Question Answering (QA). However, the adoption and performance of these models in low-resource and morphologically rich languages like Turkish remain underexplored. This study addresses this gap by systematically evaluating several state-of-the-art Transformer-based LLMs on a curated, gold-standard Turkish QA dataset. The models evaluated include BERTurk, XLM-RoBERTa, ELECTRA-Turkish, DistilBERT, and T5-Small, with a focus on their ability to handle the unique linguistic challenges posed by Turkish. The experimental results indicate that the BERTurk model outperforms other models, achieving an F1-score of 0.8144, an Exact Match of 0.6351, and a BLEU score of 0.4035. The study highlights the importance of language-specific pre-training and the need for further research to improve the performance of LLMs in low-resource languages. The findings provide valuable insights for future efforts in enhancing Turkish NLP resources and advancing QA systems in underrepresented linguistic contexts.
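The F1-score and Exact Match figures quoted in the abstract are the standard extractive-QA metrics. A minimal sketch of how they are typically computed (SQuAD-style answer normalization, simplified; this is an illustration, not the evaluation code used in the study):

```python
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace.

    Simplified SQuAD-style normalization; the official English script also
    removes articles ("a", "an", "the"), which has no Turkish equivalent.
    """
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized answer strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    """Token-level F1 between a predicted and a reference answer span."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Both metrics are averaged over all question-answer pairs in the test set; Exact Match is the stricter of the two, which is why it is lower (0.6351 vs. 0.8144 for BERTurk).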
dc.identifier.doi: 10.17780/ksujes.1649970
dc.identifier.endpage: 993
dc.identifier.issn: 1309-1751
dc.identifier.issue: 2
dc.identifier.startpage: 975
dc.identifier.trdizinid: 1315044
dc.identifier.uri: https://doi.org/10.17780/ksujes.1649970
dc.identifier.uri: https://search.trdizin.gov.tr/tr/yayin/detay/1315044
dc.identifier.uri: https://hdl.handle.net/20.500.12684/20825
dc.identifier.volume: 28
dc.indekslendigikaynak: TR-Dizin
dc.institutionauthor: Kabakus, Abdullah Talha
dc.language.iso: en
dc.relation.ispartof: KSÜ Mühendislik Bilimleri Dergisi
dc.relation.publicationcategory: Article - National Refereed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.snmz: KA_TR_20250911
dc.subject: Artificial intelligence
dc.subject: natural language processing
dc.subject: question answering
dc.subject: transformer
dc.subject: large language model
dc.title: EXPLORING THE EFFECTIVENESS OF PRE-TRAINED TRANSFORMER MODELS FOR TURKISH QUESTION ANSWERING
dc.type: Article

Files