Authors: Celik, Emre; Houssein, Essam H.; Abdel-Salam, Mahmoud; Oliva, Diego; Tejani, Ghanshyam G.; Ozturk, Nihat; Sharma, Sunil Kumar
Date accessioned: 2025-10-11
Date available: 2025-10-11
Date of issue: 2025
ISSN: 2215-0986
DOI: https://doi.org/10.1016/j.jestch.2025.102053
Handle: https://hdl.handle.net/20.500.12684/21962

Abstract: A large portion of metaheuristic algorithms is guided by the fittest solution obtained so far. Searching around the fittest solution speeds up convergence, but it also promotes stagnation in local minima and premature convergence. To resolve these issues, a novel distance-fitness learning (DFL) scheme that provides better searchability and greater diversity is proposed. The scheme allows search agents in the population to actively learn from the fittest solution, the worst solution, and an optimum distance-fitness (ODF) candidate: agents approach both the fittest solution and the ODF candidate while moving away from the worst solution. The effectiveness of our proposal is evaluated by integrating it with the reptile search algorithm (RSA), an algorithm that is simple to code but suffers from stagnation in local minima, premature convergence, and insufficient global searchability. Empirical results on 23 standard benchmark functions, 10 Congress on Evolutionary Computation (CEC) 2020 test functions, and 2 real-world engineering problems reveal that DFL significantly boosts the capability of RSA. Furthermore, the comparison of DFL-RSA with popular algorithms demonstrates the potential and superiority of the method on most of the problems in terms of solution precision.

Language: en
Access rights: info:eu-repo/semantics/openAccess
Keywords: Metaheuristic; Reptile search algorithm; Distance-fitness learning; Global optimization
Title: Novel distance-fitness learning scheme for ameliorating metaheuristic optimization
Type: Article
Volume: 65
Scopus ID: 2-s2.0-105001303443
WoS ID: WOS:001459999700001
Scopus quartile: N/A
WoS quartile: Q1
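
Note: the record above describes the DFL idea only in words; the paper's actual update equations are not included here. Purely as an illustrative sketch of one possible reading of that description, the NumPy snippet below moves each agent toward the current best solution and a hypothetical ODF candidate while moving it away from the worst solution. The objective function, the distance-fitness score used to pick the ODF candidate, and the random weighting are all assumptions for illustration, not the method from the paper.

    # Illustrative sketch only: assumed DFL-style update, not the paper's equations.
    import numpy as np

    def sphere(x):
        # Simple test objective (minimization): sum of squares.
        return float(np.sum(x ** 2))

    def odf_candidate(pop, fit, best):
        # Hypothetical "optimum distance-fitness" pick: favour agents that are
        # reasonably fit yet far from the current best, to preserve diversity.
        dist = np.linalg.norm(pop - best, axis=1)
        fit_n = (fit - fit.min()) / (fit.max() - fit.min() + 1e-12)
        dist_n = dist / (dist.max() + 1e-12)
        return pop[np.argmin(fit_n - dist_n)]

    def dfl_step(pop, fit, rng):
        # Assumed update form: approach the best and the ODF candidate,
        # move away from the worst solution.
        best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
        odf = odf_candidate(pop, fit, best)
        r1, r2, r3 = rng.random((3, *pop.shape))
        return pop + r1 * (best - pop) + r2 * (odf - pop) - r3 * (worst - pop)

    rng = np.random.default_rng(0)
    pop = rng.uniform(-5.0, 5.0, size=(30, 10))
    for _ in range(200):
        fit = np.array([sphere(x) for x in pop])
        pop = np.clip(dfl_step(pop, fit, rng), -5.0, 5.0)
    print("best fitness found:", min(sphere(x) for x in pop))

In DFL-RSA itself, such a learning rule would presumably be embedded within RSA's own search phases rather than applied stand-alone as in this toy loop.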