Statistics, Optimization & Information Computing <p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancements in statistics, optimization, and their applications in the information sciences. Topics of interest include (but are not limited to):</p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial statistics, Survival analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in the life sciences, including the biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision theory, Time series analysis, High-dimensional multivariate integrals, Statistical analysis in markets, business, finance, insurance, economics and the social sciences, etc.</li> </ul> <p>Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li 
class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul> International Academic Press en-US Statistics, Optimization & Information Computing 2311-004X <span>Authors who publish with this journal agree to the following terms:</span><br /><br /><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="" target="_new">The Effect of Open Access</a>).</li></ol> Moments of Generalized Order Statistics from Doubly Truncated Power Linear Hazard Rate Distribution <p>This paper is 
concerned with some recurrence relations for single and product moments of the doubly truncated power linear hazard rate distribution via generalized order statistics. Some deductions and related results are also considered. A characterization result is provided at the end.</p> M. I. Khan Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-03-13 2024-03-13 12 4 841 850 10.19139/soic-2310-5070-1304 Modified Bagdonavicius-Nikulin Goodness-of-fit Test Statistic for the Compound Topp Leone Burr XII Model with Various Censored Applications <p>The Poisson Topp Leone Burr XII distribution is extensively studied due to its broad relevance in analyzing censored real datasets from engineering, economics, and medicine. In this research, the distribution's versatility is highlighted through the analysis of four specific real datasets. The study compares the Poisson Topp Leone Burr XII distribution with nine extensions of the Burr type XII distribution to determine which offers the best fit for these datasets. To evaluate the goodness-of-fit of the Poisson Topp Leone Burr XII distribution under right censoring, a modified Bagdonavičius-Nikulin goodness-of-fit test statistic is introduced and applied. This test statistic is used to validate the distributional fit of the Poisson Topp Leone Burr XII distribution for each of the four right-censored datasets. Additionally, to support the evaluation of the modified goodness-of-fit test statistic, the Barzilai-Borwein algorithm is utilized. 
This algorithm is employed within a simulation study to further assess the effectiveness and reliability of the modified Bagdonavičius-Nikulin test statistic, thereby ensuring robust validation of the Poisson Topp Leone Burr XII distribution against the observed real datasets.</p> Mohamed G. Khalil Khaoula Aidi M. Masoom Ali Nadeem S. Butt Mohamed Ibrahim Haitham M. Yousof Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-13 2024-04-13 12 4 851 868 10.19139/soic-2310-5070-1447 Empirical likelihood ratio-based goodness-of-fit test for the Lindley distribution <p>The Lindley distribution may serve as a useful reliability model. In this article, we propose a goodness-of-fit test for the Lindley distribution based on the empirical likelihood (EL) ratio. The properties of the proposed test are stated and the critical values are obtained by Monte Carlo simulation. Power comparisons of the proposed test with some known competing tests are carried out via simulations. Finally, an illustrative example is presented and analyzed.</p> Hadi Alizadeh Noughabi Mohammad Shafaei Noughabi Copyright (c) 2023 Statistics, Optimization & Information Computing 2023-12-19 2023-12-19 12 4 869 881 10.19139/soic-2310-5070-1481 The Marshall-Olkin-Topp-Leone-Gompertz-G Family of Distributions with Applications <p>A new family of distributions called the Marshall-Olkin-Topp-Leone-Gompertz-G (MO-TL-Gom-G) family is developed and studied in detail. Some mathematical and statistical properties of the new family of distributions are explored. The statistical properties considered include the quantile function, moments and generating function, probability-weighted moments, the distribution of the order statistics and Rényi entropy. The maximum likelihood technique is used for estimating the model parameters and Monte Carlo simulation is conducted to examine the performance of the model. 
Finally, we give examples of real-life data applications to show the usefulness of the above-mentioned Topp-Leone-Gompertz generalization.</p> Broderick Oluyede Morongwa Gabanakgosi Gayan Warahena-Liyanage Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-13 2024-04-13 12 4 882 906 10.19139/soic-2310-5070-1509 Systematic Literature Review on Named Entity Recognition: Approach, Method, and Application <p>Named entity recognition (NER) is one of the preprocessing stages in natural language processing (NLP); it serves to detect and classify entities in a corpus. NER results are used in various NLP applications, including sentiment analysis, text summarization, chatbots, machine translation, and question answering. Several previous reviews discussed NER only partially, for instance, NER in specific domains, NER classification, and deep learning for NER. This paper provides a comprehensive and systematic review of NER studies published from 2011 to 2020. Its main contribution is a systematic literature review covering NER preprocessing techniques, datasets, application domains, feature extraction techniques, approaches, methods, and evaluation techniques. The review concludes that the deep learning approach, and in particular the bidirectional long short-term memory with a conditional random field (Bi-LSTM-CRF) method, attracts the most interest among NER researchers. At the same time, medicine and health are the most popular domains among NER researchers. These developments have also led to an increasing number of public datasets in the medical and health fields. 
At the end of this review, we recommend some opportunities and challenges for future NER research.</p> Warto Supriadi Rustad Guruh Fajar Shidik Edi Noersasongko Purwanto Muljono De Rosal Ignatius Moses Setiadi Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-02-28 2024-02-28 12 4 907 942 10.19139/soic-2310-5070-1631 On Truncated Versions of Xgamma Distribution: Various Estimation Methods and Statistical modelling <p><span class="fontstyle0">In this article, we introduce the truncated versions (lower, upper and double) of the xgamma distribution (Sen et al. 2016). In particular, different structural and distributional properties, such as moments, popular entropy measures, order statistics and survival characteristics, of the upper truncated xgamma distribution are discussed in detail. We briefly describe different estimation methods, namely maximum likelihood, ordinary least squares, weighted least squares and L-moments. Monte Carlo simulation experiments are performed to compare the performances of the proposed estimation methods for both small and large samples under the lower, upper and double truncated versions. Two applications are provided, the first comparing estimation methods and the second illustrating the applicability of the new model.</span></p> Subhradev Sen Morad Alizadeh Mohamed Aboraya M. Masoom Ali Haitham M. Yousof Mohamed Ibrahim Copyright (c) 2023 Statistics, Optimization & Information Computing 2023-12-19 2023-12-19 12 4 943 961 10.19139/soic-2310-5070-1660 E-Bayesian estimation and the corresponding E-MSE under progressive type-II censored data for some characteristics of Weibull distribution <p>Estimating the parameters (or characteristics) of a distribution from censored samples has been one of the most important topics in statistical inference over the past decades. 
This study is concerned with the E-Bayesian estimation method for computing estimates of the parameter, the hazard rate function and the reliability function of the Weibull distribution when progressive type-II censored samples are available. The estimates are obtained under the squared error loss function (as a symmetric loss) and the general entropy and LINEX loss functions (as asymmetric losses). In addition, the asymptotic behaviour of the derived E-Bayesian estimators is discussed. Moreover, the E-Bayesian estimators under the different loss functions are compared through Monte Carlo simulation studies by calculating the E-MSE of the resulting estimators, a new measure for comparing E-Bayesian estimators. As an application, we analyze two real data sets that follow the Weibull distribution.</p> Omid Shojaee Hassan Zarei Fatemeh Naruei Copyright (c) 2023 Statistics, Optimization & Information Computing 2023-12-19 2023-12-19 12 4 962 981 10.19139/soic-2310-5070-1709 Liu-Type Estimator for the Poisson-Inverse Gaussian Regression Model: Simulation and Practical Applications <p>The Poisson-Inverse Gaussian regression model (PIGRM) is commonly used to analyze count datasets with over-dispersion. While the maximum likelihood estimator (MLE) is a standard choice for estimating PIGRM parameters, its performance may be suboptimal in the presence of correlated explanatory variables. To overcome this limitation, we introduce a novel Liu-type estimator for PIGRM. Our analysis includes an examination of the matrix mean square error (MMSE) and scalar mean square error (SMSE) properties of the proposed estimator, comparing them with those of the MLE, ridge, and Liu estimators. We also present several parameters of the Liu-type estimator for PIGRM. We evaluate the performance of the proposed estimator through a simulation study and application to real-life data, using SMSE as the primary evaluation criterion. 
Our results demonstrate that the proposed estimators outperform the MLE, ridge, and Liu estimators in both simulated and real-world scenarios.</p> Hleil Alrweili Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-05-11 2024-05-11 12 4 982 1003 10.19139/soic-2310-5070-1991 Categorization of Dehydrated Food through Hybrid Deep Transfer Learning Techniques <p>Categorizing dry foods plays a crucial role in maintaining quality control and ensuring food safety for human consumption. The effectiveness and precision of classification methods are vital for enhanced evaluation of food quality and streamlined logistics. To this end, we gathered a dataset of 11,500 samples from Mendeley and employed various transfer learning models, including VGG16 and ResNet50. Additionally, we introduce a novel hybrid model, VGG16-ResNet50, which combines the strengths of both architectures. Transfer learning involves utilizing knowledge acquired from one task or domain to enhance learning and performance in another. By fusing multiple deep learning techniques and transfer learning strategies, such as VGG16-ResNet50, we developed a robust model capable of accurately classifying a wide array of dry foods. The integration of deep learning (DL) and transfer learning techniques in the context of dry food classification signifies a drive towards automation and increased efficiency within the food industry. Notably, our approach achieved remarkable results, with a classification accuracy of 99.78% on various dry food images, even with limited training data for VGG16-ResNet50.</p> Sm Nuruzzaman Nobel Md. 
Anwar Hussen Wadud Anichur Rahman Dipanjali Kundu Airin Afroj Aishi Sadia Sazzad Muaz Rahman Md Asif Imran Omar Faruque Sifat Mohammad Sayduzzaman T M Amir Ul Haque Bhuiyan Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-02-28 2024-02-28 12 4 1004 1018 10.19139/soic-2310-5070-1896 Generalization of Power Lindley Distribution: Properties and Applications <p>This article introduces the generalized Kumaraswamy power Lindley (GKPL) distribution, a novel probabilistic model derived by combining the generalized Kumaraswamy (GK-G) family with the power Lindley (PL) distribution. The GKPL distribution encompasses a wide range of distributions, including Kumaraswamy power Lindley, Kumaraswamy Lindley, generalized power Lindley, generalized Lindley, power Lindley, and the well-known Lindley, as special cases. Fundamental properties are derived, such as the hazard rate function, survival function, quantile function, reverse hazard function, moments, mean residual life function, entropy, and order statistics. Four estimation methods, namely maximum likelihood, least squares, Cramér-von Mises, and Anderson-Darling, are used to estimate the parameters of the GKPL distribution. The effectiveness of the estimation techniques is assessed by employing Monte Carlo simulations. 
The adaptability and validity of the proposed GKPL distribution are demonstrated by comparing it with alternative models, including the Kumaraswamy power Lindley (KPL), Extended Kumaraswamy power Lindley (EKPL), type II generalized Topp Leone-power Lindley (TIIGTLPL), exponentiated generalized power Lindley (EGPL), generalized Kumaraswamy Weibull (GKW), generalized Kumaraswamy log-logistic (GKLLo), and generalized Kumaraswamy generalized power Gompertz (GKGPGo) distributions, through analyses of three real datasets.</p> Fatehi Yahya Eissa Chhaya Dhanraj Sonar Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-17 2024-04-17 12 4 1019 1041 10.19139/soic-2310-5070-1987 Enhancing Volatility Prediction: Comparison Study Between Persistent and Anti-persistent Financial Series. <p>Predicting financial volatility is crucial for managing risks and making investment decisions. This research introduces a novel method for creating a prediction model that effectively handles the intricate dynamics of financial time series data. By utilizing the advantages of both time series models and recurrent neural networks, we present two hybrid models: Vanilla-RGARCH and LSTM-RGARCH. These models are designed to overcome the shortcomings of Realized GARCH (RGARCH) and HAR models in representing various stylized facts of financial data. While RGARCH models are proficient in capturing asymmetry, they fail to address long-term memory. Conversely, HAR models are adept at capturing long-term memory. The hybrid models combine forecasted values from the RGARCH model with components of the HAR model, including daily, weekly, and monthly realized volatility, within a neural network framework. This combination helps to bypass the complexities involved in directly merging the HAR model with RGARCH. 
Through this method, our hybrid models provide a thorough depiction of the characteristics of financial data.</p> <p>The proposed approach is evaluated on two distinct types of financial series: persistent and anti-persistent, to demonstrate its robustness and capacity to generalize in different contexts. The performance of the hybrid models is compared to that of conventional RGARCH and HAR models, demonstrating their superiority in precise prediction of financial volatility and their ability to capture complex trends observed in real data. In addition, a principal component analysis (PCA) is used to visualize the results and facilitate their interpretation.</p> Youssra Bakkali Mhamed EL Merzguioui Abdelhadi Akharif Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-05-11 2024-05-11 12 4 1042 1060 10.19139/soic-2310-5070-2021 Randomized density matrix renormalization group algorithm for low-rank tensor train decomposition <p>Tensor train decomposition is a powerful tool for processing high-dimensional data. Density matrix renormalization group (DMRG) is an alternating scheme for low-rank tensor train decomposition of large tensors. However, it may suffer from the curse of dimensionality due to the large scale of subproblems. In this paper, we propose a novel randomized proximal DMRG algorithm for low-rank tensor train decomposition, using TensorSketch to alleviate the curse of dimensionality. Numerical experiments on synthetic and real-world data demonstrate the effectiveness and efficiency of the proposed algorithm.</p> Huilin Jiang Zhongming Chen Gaohang Yu Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-05-14 2024-05-14 12 4 1061 1075 10.19139/soic-2310-5070-2030 New efficient descent direction of a primal-dual path-following algorithm for linear programming <p>We introduce a new primal-dual interior-point algorithm with a full-Newton step for solving linear optimization problems. 
The newly proposed approach is based on applying a new function to a simple equivalent form of the centering equation of the system, which defines the central path. Thus, we obtain a new efficient search direction for the considered algorithm. Moreover, we prove that the method solves the studied problems in polynomial time and that the resulting algorithm has the best known complexity bound for linear optimization. Finally, a comparative numerical study is reported to show the efficiency of the proposed algorithm.</p> Billel Zaoui Djamel Benterki Samia Khelladi Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-02-25 2024-02-25 12 4 1076 1090 10.19139/soic-2310-5070-1748 Geoadditive Semiparametric Regression For Modeling Property Price In Surabaya, Indonesia Using Marketplace Data <p>Property price growth in Surabaya is the highest among cities in East Java, yet demand in the residential sub-sector persists. The value of a property is reflected in its price. Property price is one of the important factors considered in making an investment decision. The market value is determined by a property's physical and micro-neighborhood factors, the latter consisting of location and environmental factors. Mass appraisal is an efficient and cost-effective way to value property fairly, transparently, and consistently, as properties with the same attributes will receive equal value. The existence of a property price model is vital in the context of <em>mass appraisal</em>. The objective of mass appraisal is to value a group of properties using data, valuation methods, and statistical tests. Mass appraisal is invaluable for the government to formulate taxes based on the market value. In this research, the Geoadditive model is used to model property price based on its physical and location factors. 
The results show that the physical (number of bedrooms, number of bathrooms, land area, and building area) and location (longitude and latitude coordinates) factors significantly influence the property prices in Surabaya. The building area has a greater impact on the property price than the land area. The combined effect plot also shows that properties located in the eastern part of Surabaya have relatively higher prices than those in the western part.</p> Wahyu Wibowo Mohamad Khoiri Sri Pingit Wulandari Fausania Hibatullah Mochammad Reza Habibi Harun Al Azies Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-28 2024-04-28 12 4 1091 1102 10.19139/soic-2310-5070-1764 Optimality conditions for (h, φ)-subdifferentiable multiobjective programming problems with G-type I functions <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>In this paper, using generalized algebraic operations introduced by Ben-Tal [7], we introduce new classes of (h,φ)-subdifferentiable functions, called (h,φ)-G-type I functions and generalized (h,φ)-G-type I functions. Then, we consider a class of nonconvex (h, φ)-subdifferentiable multiobjective programming problems with locally Lipschitz functions in which the functions involved belong to the aforesaid classes of (h, φ)-subdifferentiable nonconvex functions. For such (h, φ)-subdifferentiable vector optimization problems, we prove the sufficient optimality conditions for a feasible solution to be its (weak) Pareto solution. 
Further, we define a vector dual problem in the sense of Mond-Weir for the considered (h, φ)-subdifferentiable multiobjective programming problem, and we prove several duality theorems for the aforesaid (h, φ)-subdifferentiable vector optimization problems, also under (h, φ)-G-type I hypotheses.</p> </div> </div> </div> Tadeusz Antczak Vinay Singh Solomon Lalmalsawma Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-13 2024-04-13 12 4 1103 1122 10.19139/soic-2310-5070-1930 Simulation Structure for Selecting an Optimal Error Distribution Through the GAS Model <p>In econometrics and finance, volatility modelling has long been a specialised field for addressing a variety of issues that pertain to the risks and uncertainties of an asset. However, volatility modelling for risk management is highly dependent on the underlying error distribution. Hence, this study presents a Monte Carlo simulation (MCS) structure for selecting an optimal or the most adequate error distribution that is relevant for modelling the persistence of volatility through the Generalized Autoregressive Score (GAS) model. The structure describes an organised approach to the MCS experiment that includes “background of the study (optional), defining the aim of the study, specifying the research questions, method of implementation, and summarised conclusion”. The method of implementation is a process that consists of writing the simulation code, setting the seed, setting the true parameter a priori, data generation, and performance evaluation through meta-statistics. Among the findings, the study used both fat-tails and √N-consistency experiments to show that the GAS model with a lower unconditional shape parameter value (ν̂* = 4.1) can generate a dataset that adequately reflects the behaviour of financial time series data, relevant for volatility modelling. 
This dynamic structure is intended to guide interested users in MCS experiments utilising the GAS model for reliable volatility persistence calculations in finance and other areas.</p> Richard T. A. Samuel Charles Chimedza Caston Sigauke Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-03-23 2024-03-23 12 4 1123 1148 10.19139/soic-2310-5070-1998 A new SVM solver applied to Skin Lesion Classification <p>We present a unified framework for solving nonlinear Support Vector Machine (SVM) training problems. The framework is based on an objective function approximation so that the problem becomes separable, allowing the resulting subproblems to be solved by low-cost root-finding methods. Because of the diagonalization of the objective function in the first stage of the framework, we named the new SVM solver DiagSVM. To test the performance of DiagSVM, we report preliminary numerical experiments with benchmark datasets. From the results, we chose the best combination within the framework to solve the Skin Lesion Classification (SLC) problem. Since melanoma is the most dangerous and deadliest form of skin cancer, DiagSVM can be integrated into several Computer-Aided Diagnosis (CAD) systems to help them detect skin cancer and significantly reduce both the morbidity and the mortality associated with this disease. <br><br>Machine learning (ML) and deep learning (DL) based approaches have been widely used to develop robust skin lesion classification systems. For the SLC problem, three pre-trained convolutional neural networks (CNNs), Xception, InceptionResNetV2 and DenseNet201, were employed as feature extractors, and their feature dimension was reduced using Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA) and Independent Component Analysis (ICA). Finally, the samples were fed into two SVM solvers: DiagSVM and Libsvm. 
The experiments show that with PCA, KPCA, or ICA, the SVM performs better than without feature reduction. The classification performance of the proposed methodology is analyzed on the ISIC2017 and PH2 datasets. The benchmark and SLC results indicate a promising proposal in terms of accuracy, specificity and sensitivity metrics.</p> Jonatas Silva Atécio Alves Paulo Santos Luiz Matioli Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-04-28 2024-04-28 12 4 1149 1172 10.19139/soic-2310-5070-2005 Temporal regularity of stochastic differential equations driven by G-Brownian motion <p>This paper is devoted to studying the temporal regularity of the solution of stochastic differential equations driven by G-Brownian motion (G-SDEs) under the global Lipschitz and linear growth conditions. In addition, a numerical simulation of a particular G-SDE is provided.</p> Amel Redjil Zineb Arab Hanane Ben Gherbal Zakaria Boumezbeur Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-03-12 2024-03-12 12 4 1173 1183 10.19139/soic-2310-5070-1898 Bayesian and Non-Bayesian Estimation for The Parameter of Inverted Topp-Leone Distribution Based on Progressive Type I Censoring <p>In this paper, Bayesian and non-Bayesian estimations of the shape parameter of the Inverted Topp-Leone distribution are studied under a progressive Type I censoring scheme. The maximum likelihood estimator (MLE) and Bayes estimator (BE) of the unknown parameter under the squared error loss (SEL) function are obtained. Three types of confidence intervals are discussed for the unknown parameter. A simulation study is performed to compare the performances of the proposed methods, and two numerical examples are analyzed for illustrative purposes.</p> Hiba Z. 
Muhammed Essam Muhammed Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-04 2024-06-04 12 4 1184 1209 10.19139/soic-2310-5070-1768 In-depth Analysis of von Mises Distribution Models: Understanding Theory, Applications, and Future Directions <p>Multimodal and asymmetric circular data manifest in diverse disciplines, underscoring the significance of fitting suitable distributions for the analysis of such data. This study undertakes a comprehensive comparative assessment, encompassing diverse extensions of the von Mises distribution and the associated statistical methodologies, spanning from Richard von Mises' seminal work in 1918 to contemporary applications, with a particular focus on the field of wind energy. The primary objective is to discern the strengths and limitations inherent in each method. To illustrate the practical implications, three authentic datasets and a simulation study are incorporated to showcase the performance of the proposed models. Furthermore, this paper provides an exhaustive list of references pertinent to von Mises distribution models.</p> Said Benlakhdar Mohammed Rziza Rachid Oulad Haj Thami Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-06 2024-06-06 12 4 1210 1230 10.19139/soic-2310-5070-1919