Listing items by author "Sharma, Sunil Kumar"
Now showing 1 - 2 of 2
Item
Application of the 2-archive multi-objective cuckoo search algorithm for structure optimization
(Nature Portfolio, 2024) Tejani, Ghanshyam G.; Mashru, Nikunj; Patel, Pinank; Sharma, Sunil Kumar; Celik, Emre

This study proposes an improved multi-objective optimization method, the 2-Archive Multi-Objective Cuckoo Search (MOCS2arc), and applies it to eight classical truss structures and six ZDT test functions. The optimization minimizes mass and compliance simultaneously. MOCS2arc extends the traditional Multi-Objective Cuckoo Search (MOCS) algorithm with a dual-archive strategy that significantly improves solution diversity and optimization performance. To evaluate its effectiveness, we conducted extensive comparisons with several established multi-objective optimization algorithms: MOSCA, MODA, MOWHO, MOMFO, MOMPA, NSGA-II, DEMO, and MOCS. The comparison is benchmarked with several performance metrics that comprehensively assess each algorithm's ability to generate diverse and optimal solutions. The statistical results demonstrate the superior performance of MOCS2arc, evidenced by enhanced diversity and solution quality, and Friedman's and Wilcoxon's tests corroborate that MOCS2arc consistently delivers better optimization results than the other algorithms. The results show that MOCS2arc is a highly effective algorithm for multi-objective truss structure optimization, offering significant and promising improvements over existing methods.

Item
Novel distance-fitness learning scheme for ameliorating metaheuristic optimization
(Elsevier - Division Reed Elsevier India Pvt Ltd, 2025) Celik, Emre; Houssein, Essam H.; Abdel-Salam, Mahmoud; Oliva, Diego; Tejani, Ghanshyam G.; Ozturk, Nihat; Sharma, Sunil Kumar

A large portion of metaheuristic algorithms is guided by the fittest solution found so far. Searching around the fittest solution speeds up convergence, but it also invites stagnation in local minima and premature convergence. To resolve these issues, a novel distance-fitness learning (DFL) scheme that provides better searchability and greater diversity is proposed. The scheme lets search agents in the population actively learn from the fittest solution, the worst solution, and an optimum distance-fitness (ODF) candidate: each agent approaches both the fittest solution and the ODF candidate while moving away from the worst solution. The effectiveness of the proposal is evaluated by integrating it with the reptile search algorithm (RSA), an algorithm that is simple to code but suffers from local-minima stagnation, premature convergence, and insufficient global searchability. Empirical results on 23 standard benchmark functions, 10 Congress on Evolutionary Computation (CEC) 2020 test functions, and 2 real-world engineering problems reveal that DFL significantly boosts the capability of RSA. Furthermore, comparison of DFL-RSA with popular algorithms clearly demonstrates the potential and superiority of the method on most problems in terms of solution precision.
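The DFL update described in the second abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the weights `w1`-`w3`, the random coefficients, the clipping bounds, and in particular the rule used to pick the ODF candidate (here, an assumed fitness-per-distance trade-off relative to the current best) are all assumptions, and `sphere` is a standard test function used only for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective f(x) = sum(x_i^2); global minimum at the origin."""
    return float(np.sum(x**2))

def dfl_step(pop, fitness, w1=1.0, w2=1.0, w3=1.0):
    """One hypothetical distance-fitness learning (DFL) update.

    Each agent moves toward the fittest solution and an assumed ODF
    candidate, and away from the worst solution. The ODF selection rule
    and the weights are illustrative assumptions, not the paper's method.
    """
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    # Assumed ODF candidate: agent with the best fitness-per-distance
    # trade-off relative to the current best (illustrative choice only).
    dists = np.linalg.norm(pop - best, axis=1) + 1e-12
    odf = pop[np.argmin(fitness / dists)]
    r1, r2, r3 = rng.random((3, *pop.shape))
    return (pop
            + w1 * r1 * (best - pop)     # approach the fittest solution
            + w2 * r2 * (odf - pop)      # approach the ODF candidate
            - w3 * r3 * (worst - pop))   # move away from the worst solution

# Tiny demo on the sphere function (minimization)
pop = rng.uniform(-5.0, 5.0, size=(20, 4))
for _ in range(50):
    fit = np.array([sphere(x) for x in pop])
    pop = np.clip(dfl_step(pop, fit), -5.0, 5.0)
best_val = min(sphere(x) for x in pop)
```

The three terms mirror the abstract's description directly: attraction to the fittest solution and the ODF candidate, repulsion from the worst. In the paper the scheme is embedded inside RSA rather than run as a standalone optimizer as done here.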