EXPLORING THE EFFECTIVENESS OF PRE-TRAINED TRANSFORMER MODELS FOR TURKISH QUESTION ANSWERING
dc.contributor.author | Kabakus, Abdullah Talha | |
dc.date.accessioned | 2025-10-11T20:38:01Z | |
dc.date.available | 2025-10-11T20:38:01Z | |
dc.date.issued | 2025 | |
dc.department | Düzce Üniversitesi | en_US |
dc.description.abstract | Recent advancements in Natural Language Processing (NLP) and Artificial Intelligence (AI) have been propelled by the emergence of Transformer-based Large Language Models (LLMs), which have demonstrated outstanding performance across various tasks, including Question Answering (QA). However, the adoption and performance of these models in low-resource and morphologically rich languages like Turkish remain underexplored. This study addresses this gap by systematically evaluating several state-of-the-art Transformer-based LLMs on a curated, gold-standard Turkish QA dataset. The models evaluated include BERTurk, XLM-RoBERTa, ELECTRA-Turkish, DistilBERT, and T5-Small, with a focus on their ability to handle the unique linguistic challenges posed by Turkish. The experimental results indicate that the BERTurk model outperforms the other models, achieving an F1-score of 0.8144, an Exact Match score of 0.6351, and a BLEU score of 0.4035. The study highlights the importance of language-specific pre-training and the need for further research to improve the performance of LLMs in low-resource languages. The findings provide valuable insights for future efforts in enhancing Turkish NLP resources and advancing QA systems in underrepresented linguistic contexts. | en_US
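The abstract reports SQuAD-style Exact Match (EM) and token-level F1 for extractive Turkish QA. As a minimal Python sketch of how such an evaluation could look, assuming the Hugging Face transformers library and the community checkpoint savasy/bert-base-turkish-squad (an illustrative BERTurk fine-tune, not necessarily the one used in the paper):

from collections import Counter

from transformers import pipeline

def exact_match(prediction: str, reference: str) -> float:
    # SQuAD-style EM: 1.0 when the normalized answer strings are identical.
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    # Token-overlap F1 between the predicted and reference answer spans.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Extractive QA pipeline; the checkpoint is an assumed example, not the paper's setup.
qa = pipeline("question-answering", model="savasy/bert-base-turkish-squad")

context = "Türkiye'nin başkenti Ankara'dır."
question = "Türkiye'nin başkenti neresidir?"
reference = "Ankara"

prediction = qa(question=question, context=context)["answer"]
print(f"prediction={prediction!r} EM={exact_match(prediction, reference)} F1={token_f1(prediction, reference):.4f}")

Corpus-level scores would average EM and F1 over all question-answer pairs in the dataset; BLEU, the third metric named in the abstract, is typically computed with an existing library such as sacrebleu rather than reimplemented.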
dc.identifier.doi | 10.17780/ksujes.1649970 | |
dc.identifier.endpage | 993 | en_US |
dc.identifier.issn | 1309-1751 | |
dc.identifier.issue | 2 | en_US |
dc.identifier.startpage | 975 | en_US |
dc.identifier.trdizinid | 1315044 | en_US |
dc.identifier.uri | https://doi.org/10.17780/ksujes.1649970 | |
dc.identifier.uri | https://search.trdizin.gov.tr/tr/yayin/detay/1315044 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12684/20825 | |
dc.identifier.volume | 28 | en_US |
dc.indekslendigikaynak | TR-Dizin | en_US |
dc.institutionauthor | Kabakus, Abdullah Talha | |
dc.language.iso | en | en_US |
dc.relation.ispartof | KSÜ Mühendislik Bilimleri Dergisi | en_US |
dc.relation.publicationcategory | Article - National Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.snmz | KA_TR_20250911 | |
dc.subject | Artificial intelligence | en_US |
dc.subject | Natural language processing | en_US |
dc.subject | Question answering | en_US |
dc.subject | Transformer | en_US |
dc.subject | Large language model | en_US |
dc.title | EXPLORING THE EFFECTIVENESS OF PRE-TRAINED TRANSFORMER MODELS FOR TURKISH QUESTION ANSWERING | en_US |
dc.type | Article | en_US |