Authors: Toklu, Sinan; Kabakus, Abdullah Talha
Date available: 2025-10-11; Year: 2025
ISSN: 1300-1884; e-ISSN: 1304-4915
DOI: https://doi.org/10.17341/gazimmfd.1543854
Handle: https://hdl.handle.net/20.500.12684/21677
Title: A novel bidirectional long short-term memory model with multi-head attention for accurate language detection
Type: Article
Language: tr
Access: Open Access (info:eu-repo/semantics/openAccess)
Keywords: Language detection; language classification; translation; deep learning; long short-term memory
Identifiers: 4032; Scopus: 2-s2.0-105013632417; Web of Science: WOS:001569394800039; Quartiles: Q2, Q3

Abstract: Language detection, one of the most important tasks in natural language processing, is used extensively in applications such as machine translation, sentiment analysis, and information retrieval. It makes communication possible between people in many different countries, and human-animal interaction can also be supported in this area. In this paper, a novel Bidirectional Long Short-Term Memory model with a Multi-Head Attention mechanism is proposed to accurately classify text into 17 languages, namely Arabic, Danish, Dutch, English, French, German, Greek, Hindi, Italian, Kannada, Malayalam, Portuguese, Russian, Spanish, Swedish, Tamil, and Turkish. A publicly available dataset of 10,337 texts written in these languages is used to train and evaluate the proposed model. The model achieved an accuracy, precision, recall, and F1-score of 99.9%, outperforming the state-of-the-art baseline models. In particular, it demonstrated perfect precision (100%) for 15 languages: Arabic, Dutch, English, French, German, Greek, Hindi, Italian, Kannada, Malayalam, Portuguese, Russian, Swedish, Tamil, and Turkish. This research highlights the effectiveness of deep learning techniques in language detection and points to promising avenues for further advances in multilingual text processing.
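The paper's implementation is not included in this record. As a rough illustration of the attention component named in the title, the sketch below shows scaled dot-product multi-head attention in plain NumPy, as it might sit on top of BiLSTM outputs. All dimensions, names, and the randomly initialized (untrained) projection weights are hypothetical stand-ins, not the authors' configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Self-attention over a sequence of feature vectors.

    x: (seq_len, d_model) array, e.g. per-timestep BiLSTM outputs.
    Returns (output, attention_weights) where output has x's shape
    and attention_weights is (num_heads, seq_len, seq_len).
    Weights are random here; in the real model they are learned.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    # Hypothetical projection matrices standing in for learned parameters.
    Wq, Wk, Wv, Wo = (
        rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        for _ in range(4)
    )

    def split_heads(m):
        # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ Wq), split_heads(x @ Wk), split_heads(x @ Wv)

    # Scaled dot-product attention, computed per head.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head), axis=-1)
    context = scores @ v  # (num_heads, seq_len, d_head)

    # Merge heads back and apply the output projection.
    merged = context.transpose(1, 0, 2).reshape(seq_len, d_model)
    return merged @ Wo, scores
```

In a classifier like the one described, the attended sequence would typically be pooled into a single vector and passed to a 17-way softmax layer over the language labels.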