By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – New study results on artificial intelligence have been published. According to news reporting originating from Poznan, Poland, by NewsRx correspondents, research stated, "Recently, explainability in machine and deep learning has become an important area of research and interest, both due to the increasing use of artificial intelligence (AI) methods and the need to understand the decisions made by models. The rise of explainable artificial intelligence (XAI) reflects growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms."

Our news correspondents obtained a quote from the research from Poznan University of Life Sciences: "Moreover, XAI will allow the decisions made by models to be more transparent as well as effective. In this study, models from the 'glass box' group (Decision Tree, among others) and the 'black box' group (Random Forest, among others) were proposed to understand the identification of selected types of currant powders. The learning process of these models was carried out to determine performance indicators such as accuracy, precision, recall, and F1-score, and was visualized using Local Interpretable Model-Agnostic Explanations (LIME) to predict the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models in the framework of currant powder interpretability. The measures of classifier performance in terms of accuracy, precision, recall, and F1-score for Bagging_100 each reached values of approximately 0.979."
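For readers unfamiliar with the workflow the researchers describe, the sketch below shows, in broad strokes, how such a pipeline is commonly assembled in Python: glass-box and black-box classifiers trained on the five texture descriptors, scored on accuracy, precision, recall, and F1, and a single prediction explained with the standard `lime` tabular explainer. This is not the authors' code: the dataset here is a synthetic stand-in for the unavailable currant powder data, and the hyperparameters (100 bagging estimators, the Gini criterion) are assumptions inferred from the model labels Bagging_100 and RF7_gini quoted in the article.

```python
# Minimal sketch of the reported pipeline (assumptions noted in comments):
# train glass-box and black-box classifiers on texture descriptors, score
# them, and explain one prediction with LIME. Synthetic data stands in for
# the study's currant powder dataset; settings are inferred from the model
# labels in the article (Bagging_100 -> 100 estimators, RF7_gini -> Gini).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)
from lime.lime_tabular import LimeTabularExplainer

FEATURES = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]

# Synthetic stand-in: 4 hypothetical powder classes, 5 texture descriptors.
X, y = make_classification(n_samples=600, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "DT0": DecisionTreeClassifier(random_state=0),          # glass box
    "RF7_gini": RandomForestClassifier(criterion="gini",    # black box
                                       random_state=7),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    # Macro averaging treats each powder class equally in the metrics.
    print(f"{name}: acc={accuracy_score(y_test, pred):.3f} "
          f"prec={precision_score(y_test, pred, average='macro'):.3f} "
          f"rec={recall_score(y_test, pred, average='macro'):.3f} "
          f"F1={f1_score(y_test, pred, average='macro'):.3f}")

# Explain a single test-set prediction of the bagging model with LIME.
explainer = LimeTabularExplainer(
    X_train, feature_names=FEATURES,
    class_names=[f"powder_{i}" for i in range(4)],
    mode="classification")
explanation = explainer.explain_instance(
    X_test[0], models["Bagging_100"].predict_proba, num_features=5)
print(explanation.as_list())  # per-descriptor contributions to the prediction
```

The `as_list()` output ranks the texture descriptors by their local contribution to the chosen prediction, which is the kind of per-feature transparency the quoted researchers attribute to LIME.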