Researcher at Catholic University of the Sacred Heart Zeroes in on Machine Learning (The Challenges of Machine Learning: A Critical Review)
Investigators discuss new findings in artificial intelligence. According to news reporting out of Brescia, Italy, by NewsRx editors, research stated, “The concept of learning has multiple interpretations, ranging from acquiring knowledge or skills to constructing meaning and social development. Machine Learning (ML) is considered a branch of Artificial Intelligence (AI) and develops algorithms that can learn from data and generalize their judgment to new observations by exploiting primarily statistical methods.” The news reporters obtained a quote from the research from Catholic University of the Sacred Heart: “The new millennium has seen the proliferation of Artificial Neural Networks (ANNs), a formalism able to reach extraordinary achievements in complex problems such as computer vision and natural language recognition. In particular, designers claim that this formalism has a strong resemblance to the way the biological neurons operate. This work argues that although ML has a mathematical/statistical foundation, it cannot be strictly regarded as a science, at least from a methodological perspective. The main reason is that ML algorithms have notable prediction power although they cannot necessarily provide a causal explanation about the achieved predictions. For example, an ANN could be trained on a large dataset of consumer financial information to predict creditworthiness. The model takes into account various factors like income, credit history, debt, spending patterns, and more. It then outputs a credit score or a decision on credit approval. However, the complex and multi-layered nature of the neural network makes it almost impossible to understand which specific factors or combinations of factors the model is using to arrive at its decision. This lack of transparency can be problematic, especially if the model denies credit and the applicant wants to know the specific reasons for the denial. 
The model’s “black box” nature means it cannot provide a clear explanation or breakdown of how it weighed the various factors in its decision-making process.”
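The contrast the researchers draw can be illustrated with a minimal sketch (hypothetical, hand-picked weights, not a trained model): in a linear scorer each feature has one fixed coefficient, so its contribution to the score is directly readable; in even a tiny two-layer network, hidden units mix all features nonlinearly, so the same change to one feature shifts the score by different amounts for different applicants, and no single per-feature weight explains a decision.

```python
import math

# Hypothetical normalized applicant features: (income, credit_history, debt).
# Linear model: each coefficient is the fixed, interpretable contribution
# of one feature to the score.
def linear_score(x, w=(0.5, 0.3, -0.4)):
    return sum(wi * xi for wi, xi in zip(w, x))

# Tiny two-layer network with illustrative (untrained) weights: each hidden
# unit mixes all three features through a nonlinearity.
W1 = [[0.8, -0.5, 0.3],
      [-0.2, 0.9, -0.7]]
W2 = [1.1, -0.6]

def mlp_score(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(v * h for v, h in zip(W2, hidden))

# Marginal effect on the score of raising income by 0.1.
def income_effect(score_fn, x, delta=0.1):
    bumped = (x[0] + delta, x[1], x[2])
    return score_fn(bumped) - score_fn(x)

a = (0.2, 0.5, 0.1)   # applicant A
b = (0.9, -0.3, 0.8)  # applicant B

# Linear model: the income bump has the same effect for both applicants,
# so "income weight = 0.5" is a complete explanation.
# Network: the same bump moves the two scores by different amounts, so a
# denied applicant cannot be given a single fixed reason per feature.
```

The point is not that small networks are inscrutable in principle, but that feature contributions become context-dependent as soon as features interact through nonlinear layers; at the scale of real credit models this interaction structure is what makes a factor-by-factor breakdown of a denial practically unavailable.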