Journal information
Annals of mathematics and artificial intelligence
Kluwer Academic Publishers
Frequency: Irregular
ISSN: 1012-2443
Indexed in: ISTP, SCI, AHCI
Status: Officially published

    35 years of math and AI: Editorial from the Founder and outgoing Editor-in-Chief

    Martin Charles Golumbic
    pp. 1-3
    Abstract: In 1990, we began a new journey by founding this journal, the Annals of Mathematics and Artificial Intelligence. The idea began a few years earlier as the brainchild of Peter L. Hammer in discussions with colleagues who were active with his own journals in discrete mathematics and operations research. I was chosen to initiate and establish the journal together with the late Robert Jaroslow, who sadly passed away just a few months after those meetings. The first volume of AMAI was dedicated to his memory.

    The future starts now

    Juergen Dix, Michael Fisher
    pp. 5-6
    Abstract: When I was asked last year whether I would be interested in becoming Editor-in-Chief of one of the prestigious Springer legacy journals in Mathematics (exactly which journal was not mentioned), I was quite surprised. I replied that while I studied Mathematics and Logic long ago, I have been working for almost four decades in Computer Science and Symbolic AI and might not be the right person for a hardcore mathematical journal. It just did not enter my mind that the publisher was talking about the Annals of Mathematics and AI (AMAI), the journal I have been an editor of for more than 20 years. Nor could I imagine that Marty would ever retire.

    Deep data density estimation through Donsker-Varadhan representation

    Seonho Park, Panos M. Pardalos
    pp. 7-17
    Abstract: Estimating the data density is one of the challenging problems in the deep learning community. In this paper, we present a simple yet effective methodology for estimating the data density using the Donsker-Varadhan variational lower bound on the KL divergence together with deep neural network-based modeling. We demonstrate that the optimal critic function associated with the Donsker-Varadhan representation of the KL divergence between the data and the uniform distribution can estimate the data density. We also present the deep neural network-based model and its stochastic learning procedure. The experimental results demonstrate that the proposed method is competitive with previous methods for data density estimation and opens up many possibilities for various applications.
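The mechanism this abstract describes can be illustrated without a neural network: with a uniform reference q, the optimal Donsker-Varadhan critic is T*(x) = log(p(x)/q(x)), so the bound attains the true KL divergence and exp(T*) recovers the density. A minimal NumPy sketch on a toy density p(x) = 2x on [0, 1], with the closed-form critic standing in for the paper's learned network:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Data distribution p(x) = 2x on [0, 1]; reference q = Uniform(0, 1).
x_p = np.sqrt(rng.random(n))   # inverse-CDF sampling: F(x) = x^2
x_q = rng.random(n)            # samples from the uniform reference

def critic(x):
    # Optimal DV critic T*(x) = log(p(x)/q(x)) = log(2x)
    return np.log(2.0 * x)

# Donsker-Varadhan lower bound: E_p[T] - log E_q[exp(T)]
dv_bound = critic(x_p).mean() - np.log(np.exp(critic(x_q)).mean())

true_kl = np.log(2.0) - 0.5    # closed-form KL(p || q)
print(dv_bound, true_kl)       # bound is tight at the optimal critic

# Density recovery: with q uniform, p(x) = exp(T*(x))
xs = np.array([0.25, 0.5, 0.75])
print(np.exp(critic(xs)))      # matches p(xs) = 2 * xs
```

In the paper's setting the critic is a deep network trained by maximizing the same bound; here the closed form makes the tightness of the bound and the density-recovery property directly checkable.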

    Guest editorial: Revised selected papers from the LION 16 conference

    Ilias S. Kotsireas, Panos M. Pardalos
    pp. 19-20
    Abstract: The sixteenth installment of the conference series "Learning and Intelligent Optimization" (LION) was held June 5-10, 2022 on Milos Island, Cyclades, Greece. One of the iconic landmarks of Milos is the "Kleftiko", featuring sea caves, unique rock formations, turquoise waters, and volcanic landscapes. This special issue of the Annals of Mathematics and Artificial Intelligence (AMAI) consists of selected, thoroughly revised and extended journal papers originating from LION 16. We would like to thank the authors for contributing their work, and the reviewers whose tireless efforts kept the quality of the contributions at the highest standards.

    An improved multi-task least squares twin support vector machine

    Hossein Moosaei, Fatemeh Bazikar, Panos M. Pardalos
    pp. 21-41
    Abstract: In recent years, multi-task learning (MTL) has become a popular field in machine learning and plays a key role in various domains. Sharing knowledge across tasks in MTL can improve the performance of learning algorithms and enhance their generalization capability. A new approach called the multi-task least squares twin support vector machine (MTLS-TSVM) was recently proposed as a least squares variant of the direct multi-task twin support vector machine (DMTSVM). Unlike DMTSVM, which solves two quadratic programming problems, MTLS-TSVM solves two linear systems of equations, resulting in reduced computational time. In this paper, we propose an enhanced version of MTLS-TSVM called the improved multi-task least squares twin support vector machine (IMTLS-TSVM). IMTLS-TSVM offers a significant advantage over MTLS-TSVM by operating on the structural risk minimization principle, which allows for better generalization performance. The model achieves this by including regularization terms in its objective function, which help control the model's complexity and prevent overfitting. We demonstrate the effectiveness of IMTLS-TSVM by comparing it to several single-task and multi-task learning algorithms on various real-world data sets. Our results highlight the superior performance of IMTLS-TSVM in addressing multi-task learning problems.
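The computational point, two linear systems in place of two quadratic programs, can be sketched in the single-task least-squares twin SVM setting, a simplified stand-in for the paper's multi-task formulation (data, hyperparameters, and the decision rule here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data: class +1 around (0, 0), class -1 around (3, 3)
A = rng.normal(0.0, 0.5, size=(50, 2))   # class +1 points
B = rng.normal(3.0, 0.5, size=(50, 2))   # class -1 points

def lstsvm_planes(A, B, c1=1.0, c2=1.0):
    """Least-squares twin SVM: each nonparallel plane comes from ONE
    linear system (no QP), the efficiency the abstract refers to."""
    H = np.hstack([A, np.ones((len(A), 1))])   # [A  e]
    G = np.hstack([B, np.ones((len(B), 1))])   # [B  e]
    # Plane 1: close to class A, unit distance from class B
    z1 = -np.linalg.solve(H.T @ H / c1 + G.T @ G, G.T @ np.ones(len(B)))
    # Plane 2: close to class B, unit distance from class A
    z2 = np.linalg.solve(G.T @ G / c2 + H.T @ H, H.T @ np.ones(len(A)))
    return z1, z2

def predict(X, z1, z2):
    Xe = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xe @ z1) / np.linalg.norm(z1[:-1])   # distance to plane 1
    d2 = np.abs(Xe @ z2) / np.linalg.norm(z2[:-1])   # distance to plane 2
    return np.where(d1 <= d2, 1, -1)                 # nearer plane wins

z1, z2 = lstsvm_planes(A, B)
y_pred = predict(np.vstack([A, B]), z1, z2)
acc = (y_pred == np.r_[np.ones(50), -np.ones(50)]).mean()
print(acc)
```

The multi-task variants in the paper add task-coupling and regularization terms, but the solve step keeps this same linear-system shape.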

    kNN Classification: a review

    Panos K. Syriopoulos, Nektarios G. Kalampalikis, Sotiris B. Kotsiantis, Michael N. Vrahatis...
    pp. 43-75
    Abstract: The k-nearest neighbors (kNN) algorithm is a simple yet powerful non-parametric classifier that is robust to noisy data and easy to implement. However, with the growing literature on kNN methods, it is increasingly challenging for new researchers and practitioners to navigate the field. This review paper aims to provide a comprehensive overview of the latest developments in the kNN algorithm, including its strengths and weaknesses, applications, benchmarks, and available software, with corresponding publications and citation analysis. The review also discusses the potential of kNN in various data science tasks, such as anomaly detection, dimensionality reduction, and missing value imputation. By offering an in-depth analysis of kNN, this paper serves as a valuable resource for researchers and practitioners to make informed decisions and identify the best kNN implementation for a given application.
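As a reference point for the many variants the review surveys, the baseline algorithm itself fits in a few lines: Euclidean distance, then a majority vote among the k nearest training points (an illustrative sketch, not code from the review):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Plain kNN: majority vote among the k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]              # indices of the k nearest
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array(["a", "a", "a", "b", "b", "b"])
print(knn_predict(X, y, np.array([0.5, 0.5])))   # → "a"
print(knn_predict(X, y, np.array([5.5, 5.5])))   # → "b"
```

Everything the review covers, from approximate neighbor search to distance weighting, is a refinement of these three steps.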

    Bayesian optimization over the probability simplex

    Antonio Candelieri, Andrea Ponti, Francesco Archetti
    pp. 77-91
    Abstract: Gaussian Process based Bayesian Optimization is largely adopted for solving problems whose inputs lie in Euclidean spaces. In this paper we associate the inputs with discrete probability distributions, which are elements of the probability simplex. To search in the new design space, we need a distance between distributions. The optimal transport distance (aka the Wasserstein distance) is chosen due to its mathematical structure and the computational strategies it enables. Both the GP and the acquisition function are generalized, the latter to an acquisition functional over the probability simplex. To optimize this functional, two methods are proposed: one based on automatic differentiation and the other based on the proximal-point algorithm and the gradient flow. Finally, we report a preliminary set of computational results on a class of problems whose dimension ranges from 5 to 100. These results show that embedding the Bayesian optimization process in the probability simplex enables an effective algorithm whose advantage over standard Bayesian optimization grows with problem dimensionality.
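One of the computational conveniences the abstract alludes to: for discrete distributions on a common ordered support, the Wasserstein-1 distance has a closed form as an integrated CDF difference, so no transport plan needs to be solved for. A minimal sketch (the paper's GP and acquisition machinery are not reproduced here):

```python
import numpy as np

def wasserstein1(p, q, support):
    """W1 between two discrete distributions (points on the probability
    simplex) over a common ordered support, via the CDF formula:
    W1(p, q) = sum_i |P(x_i) - Q(x_i)| * (x_{i+1} - x_i)."""
    cdf_gap = np.cumsum(p) - np.cumsum(q)
    return np.sum(np.abs(cdf_gap[:-1]) * np.diff(support))

support = np.array([0.0, 1.0, 2.0])
p = np.array([1.0, 0.0, 0.0])   # point mass at 0
q = np.array([0.0, 0.0, 1.0])   # point mass at 2
print(wasserstein1(p, q, support))   # all mass moves a distance of 2 → 2.0
```

A distance like this can then be plugged into a GP covariance, e.g. k(p, q) = exp(-W1(p, q) / l), which is one way (an assumption here, not necessarily the authors' choice) to make the surrogate model operate on the simplex.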

    Novel SVM-based classification approaches for evaluating pancreatic carcinoma

    Ammon Washburn, Neng Fan, Hao Helen Zhang
    pp. 93-108
    Abstract: In this paper, we develop two SVM-based classifiers, named stable nested one-class support vector machines (SN-1SVMs) and decoupled margin-moment based SVMs (DMMB-SVMs), to predict the specific type of pancreatic carcinoma using quantitative histopathological signatures of images. For each patient, the diagnosis can produce hundreds of images, which can be used to classify the pancreatic tissues into three classes: chronic pancreatitis, intraductal papillary mucinous neoplasms, and pancreatic carcinoma. The two proposed approaches tackle the classification problem from different perspectives: the SN-1SVM treats each image as a classification point in a nested fashion to predict malignancy of the tissues, while the DMMB-SVM treats each patient as a classification point by assembling information across images. One attractive feature of the DMMB-SVM is that, in addition to utilizing the mean information, it also takes into account the covariance of the features extracted from each patient's images. We conduct numerical experiments to evaluate and compare the performance of the two methods. We observe that the SN-1SVM can take advantage of the data structure more effectively, while the DMMB-SVM demonstrates better computational efficiency and classification accuracy. To further improve the interpretability of the final classifier, we also consider the ℓ_1-norm in the DMMB-SVM to handle feature selection.
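The mean-plus-covariance assembly the abstract credits to the DMMB-SVM can be sketched as a feature map from a patient's stack of per-image feature vectors to a single classification point (an illustrative reading, not the authors' exact construction):

```python
import numpy as np

def patient_features(image_feats):
    """Collapse an (n_images, d) stack of per-image feature vectors into
    one per-patient point: the mean concatenated with the upper triangle
    of the covariance, so second-moment information survives pooling."""
    mu = image_feats.mean(axis=0)                 # first moment, length d
    cov = np.cov(image_feats, rowvar=False)       # (d, d) second moment
    iu = np.triu_indices(cov.shape[0])            # keep unique entries only
    return np.concatenate([mu, cov[iu]])          # length d + d*(d+1)/2

# One hypothetical patient: 5 images, 4 features each → one point in R^14
imgs = np.random.default_rng(2).normal(size=(5, 4))
print(patient_features(imgs).shape)
```

Averaging alone would discard how features co-vary across a patient's images; keeping the covariance block is what distinguishes a margin-moment classifier from a plain mean-pooled SVM.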

    Realtime gray-box algorithm configuration using cost-sensitive classification

    Dimitri Weiss, Kevin Tierney
    pp. 109-130
    Abstract: A solver's runtime and the quality of the solutions it generates are strongly influenced by its parameter settings. Finding good parameter configurations is a formidable challenge, even for fixed problem instance distributions. However, when the instance distribution can change over time, a once effective configuration may no longer provide adequate performance. Realtime algorithm configuration (RAC) offers assistance in finding high-quality configurations for such distributions by automatically adjusting the configurations it recommends based on instances seen so far. Existing RAC methods treat the solver as a black box, meaning the solver is given a configuration as input, and it outputs either a solution or runtime as an objective function for the configurator. However, analyzing intermediate output from the solver can enable configurators to avoid wasting time on poorly performing configurations. We propose a gray-box approach that utilizes intermediate output during evaluation and implement it within the RAC method Contextual Preselection with Plackett-Luce (CPPL blue). We apply cost-sensitive machine learning with pairwise comparisons to determine whether ongoing evaluations can be terminated to free resources. We compare our approach to a black-box equivalent in several experimental settings and show that our approach reduces the total solving time in several scenarios and improves solution quality in an additional scenario.
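The cost-sensitive flavor of the termination decision can be sketched as an expected-cost rule: with asymmetric costs for killing an eventual winner versus keeping a loser running, a configuration's evaluation is stopped only when its predicted win probability is low enough. The probabilities and costs below are illustrative stand-ins, not the paper's learned pairwise model:

```python
def terminate_decision(p_win, cost_kill_winner=5.0, cost_keep_loser=1.0):
    """Cost-sensitive early termination: stop an ongoing evaluation only
    when the expected cost of keeping it exceeds that of killing it.
    p_win would come from a classifier fed the solver's intermediate
    output (bounds, incumbent values); here it is just a number."""
    expected_cost_keep = (1.0 - p_win) * cost_keep_loser  # loser keeps a core busy
    expected_cost_kill = p_win * cost_kill_winner         # winner lost forever
    return expected_cost_kill < expected_cost_keep

# Asymmetric costs make the rule conservative: only near-certain losers die.
print(terminate_decision(0.30))  # False: a 30% win chance is worth the core
print(terminate_decision(0.05))  # True: almost surely a loser, free the core
```

Raising `cost_kill_winner` relative to `cost_keep_loser` trades wasted compute for a lower risk of discarding the configuration that would have won, which is exactly the trade-off a gray-box configurator must tune.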

    A novel method for solving universum twin bounded support vector machine in the primal space

    Hossein Moosaei, Saeed Khosravi, Fatemeh Bazikar, Milan Hladik...
    pp. 131-150
    Abstract: In supervised learning, the Universum, a third class that is not a part of either class in the classification task, has proven to be useful. In this study, we propose N(U)TBSVM, a Newton-based approach for solving, in the primal space, the optimization problems of Twin Bounded Support Vector Machines with Universum data ((U)TBSVM). In the N(U)TBSVM, the constrained programming problems of (U)TBSVM are converted into unconstrained optimization problems, and a generalization of Newton's method for solving the unconstrained problems is introduced. Numerical experiments on synthetic, UCI, and NDC data sets show the ability and effectiveness of the proposed N(U)TBSVM. We apply the suggested method to gender detection from face images and compare it with other methods.
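The reformulation step, a constrained QP turned into an unconstrained piecewise-quadratic problem solved by a generalized Newton method, can be sketched on a generic plus-function objective. This is a simplified stand-in for the (U)TBSVM problems, not the authors' code:

```python
import numpy as np

def generalized_newton(A, e, c=1.0, tol=1e-8, max_iter=50):
    """Generalized Newton iteration for the unconstrained problem
        min_z 0.5*||z||^2 + c*||max(e - A z, 0)||^2,
    the piecewise-quadratic form that primal SVM reformulations take.
    The objective is not twice differentiable, so a generalized Hessian
    (diagonal indicator of the active plus-function terms) is used."""
    m, n = A.shape
    z = np.zeros(n)
    for _ in range(max_iter):
        r = np.maximum(e - A @ z, 0.0)               # plus-function residual
        grad = z - 2.0 * c * A.T @ r                 # subgradient of objective
        if np.linalg.norm(grad) < tol:
            break
        active = (e - A @ z > 0).astype(float)       # active constraints
        H = np.eye(n) + 2.0 * c * (A.T * active) @ A # generalized Hessian
        z = z - np.linalg.solve(H, grad)             # full Newton step
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))
e = np.ones(30)
z = generalized_newton(A, e)
# At a minimizer the (sub)gradient of the piecewise-quadratic objective vanishes
g = z - 2.0 * np.maximum(e - A @ z, 0.0) @ A
print(np.linalg.norm(g))
```

The identity term keeps the generalized Hessian positive definite, so each step is a well-posed linear solve; on problems of this form the iteration typically identifies the correct active set and terminates in a handful of steps.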