Journal information
Robotics & Machine Learning Daily News
NewsRx
Officially published

    Studies from China University of Geosciences in the Area of Machine Learning Described (BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis)

    pp. 28-29
    Abstract: Data detailed on Machine Learning have been presented. According to news reporting originating from Wuhan, People’s Republic of China, by NewsRx correspondents, research stated, “Cryptocurrencies have seen dramatically increased adoption in mainstream applications in various fields such as financial and online services; however, a small fraction of cryptocurrency transactions still involve illicit or criminal activities. It is essential to identify and monitor addresses associated with illegal behaviors to ensure the security and stability of the cryptocurrency ecosystem.” Financial support for this research came from the Yunnan Key Laboratory of Blockchain Application Technology. Our news editors obtained a quote from the research from the China University of Geosciences, “In this paper, we propose a framework to build a dataset comprising Bitcoin transactions between 12 July 2019 and 26 May 2021. This dataset (hereafter referred to as BABD-13) contains 13 types of Bitcoin addresses, 5 categories of indicators with 148 features, and 544,462 labeled data points, which to our knowledge is the largest labeled Bitcoin address behavior dataset publicly available. We also propose a novel and efficient subgraph generation algorithm called BTC-SubGen to extract a k-hop subgraph, starting from a specific Bitcoin address node, from the entire Bitcoin transaction graph constructed as a directed heterogeneous multigraph. We then conduct 13-class classification tasks on BABD-13 with five machine learning models, namely the k-nearest neighbors algorithm, decision tree, random forest, multilayer perceptron, and XGBoost; the results show that the accuracy rates are between 93.24% and 97.13%. In addition, we study the relations and importance of the proposed features and analyze how they affect the performance of the machine learning models.”
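
    The k-hop subgraph extraction step lends itself to a short illustration. The following is a minimal sketch of generic k-hop subgraph extraction from a directed multigraph using networkx; it is not the authors' BTC-SubGen algorithm, and the toy graph, node labels, and choice of k are illustrative assumptions.

```python
# Minimal sketch of k-hop subgraph extraction from a directed multigraph,
# in the spirit of the BTC-SubGen step described above. This is NOT the
# authors' algorithm; the graph, node labels, and k are illustrative.
from collections import deque
import networkx as nx

def k_hop_subgraph(g: nx.MultiDiGraph, start, k: int) -> nx.MultiDiGraph:
    """Return the subgraph induced by nodes within k hops of `start`,
    following edges in both directions (predecessors and successors)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in set(g.successors(node)) | set(g.predecessors(node)):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return g.subgraph(seen).copy()

# Toy transaction graph: addresses a..e with a repeated (multi) edge a->b.
g = nx.MultiDiGraph()
g.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("e", "a"), ("a", "b")])
sub = k_hop_subgraph(g, "a", k=2)
print(sorted(sub.nodes()))  # ['a', 'b', 'c', 'e']  (d is 3 hops away)
```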

    New Robotics and Automation Study Findings Have Been Reported from Shanghai Jiao Tong University (Multibeam Forward-Looking Sonar Video Object Tracking Using Truncated L1-L2 Sparsity and Aberrances Repression Regularization)

    pp. 29-30
    Abstract: Investigators discuss new findings in Robotics and Automation. According to news reporting out of Shanghai, People’s Republic of China, by NewsRx editors, research stated, “Multibeam forward-looking sonar (MFLS) video object tracking is a challenging problem due to the negative impacts of weak features and background clutter. In this letter, a novel multibeam forward-looking sonar video object tracking method via a hybrid regularization scheme is proposed.” Financial support for this research came from the National Natural Science Foundation of China (NSFC). Our news journalists obtained a quote from the research from Shanghai Jiao Tong University, “The proposed regularization scheme is a composite method with truncated l1-l2 sparsity regularization and aberrances repression regularization. While the truncated l1-l2 sparsity regularization explores the structural sparsity of the learned filter to address background clutter, the aberrances repression regularization can alleviate the undesired spatial boundary effect. The resulting optimization problem is solved by the alternating direction method of multipliers (ADMM). A proximal operator with a truncated soft-thresholding scheme is proposed for the sub-problem with truncated l1-l2 sparsity regularization.” According to the news editors, the research concluded: “Experiments based on five multibeam forward-looking sonar videos for underwater docking validate the effectiveness of the proposed method, compared to eight other state-of-the-art tracking methods.” This research has been peer-reviewed.
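
    The truncated soft-thresholding step can be sketched concretely. Standard soft-thresholding is the proximal operator of the l1 norm; a common truncated variant, assumed here since the paper's exact operator is not reproduced in this summary, leaves the t largest-magnitude coefficients unshrunk. The values of lam and t below are illustrative.

```python
# Sketch of a proximal step with truncated soft-thresholding, assuming the
# common definition: the t largest-magnitude coefficients are left unshrunk
# and the rest receive ordinary soft-thresholding. The paper's exact
# operator may differ; lam and t below are illustrative.
import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    """Standard proximal operator of lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def truncated_soft_threshold(x: np.ndarray, lam: float, t: int) -> np.ndarray:
    """Shrink all but the t largest-magnitude entries of x."""
    out = soft_threshold(x, lam)
    keep = np.argsort(np.abs(x))[-t:]   # indices of the t largest entries
    out[keep] = x[keep]                 # leave them untouched
    return out

x = np.array([3.0, -0.4, 0.2, -2.5, 0.9])
print(truncated_soft_threshold(x, lam=0.5, t=2))
# 3.0 and -2.5 are kept; the smaller entries are shrunk toward zero.
```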

    University Putra Malaysia Reports Findings in Artificial Intelligence (Review of MR spectroscopy analysis and artificial intelligence applications for the detection of cerebral inflammation and neurotoxicity in Alzheimer's disease)

    pp. 30-31
    Abstract: New research on Artificial Intelligence is the subject of a report. According to news reporting originating in Selangor, Malaysia, by NewsRx journalists, research stated, “Magnetic resonance spectroscopy (MRS) has an emerging role as a neuroimaging tool for the detection of biomarkers of Alzheimer’s disease (AD). To date, MRS has been established as one of the diagnostic tools for various diseases such as breast cancer and fatty liver, as well as brain tumours.” The news reporters obtained a quote from the research from University Putra Malaysia, “However, its utility in neurodegenerative diseases is still in the experimental stages. The potential role of the modality has not been fully explored, as there is diverse information regarding the aberrations in brain metabolites caused by normal ageing versus neurodegenerative disorders. A literature search was carried out to gather eligible studies from widely used electronic databases, namely Scopus, PubMed and Google Scholar, using combinations of the following keywords: AD, MRS, brain metabolites, deep learning (DL), machine learning (ML) and artificial intelligence (AI), with the aim of taking readers through the advancements in the use of MRS analysis and related AI applications for the detection of AD. We elaborate on the MRS data acquisition, processing, analysis, and interpretation techniques. Recommendations are made for MRS parameters that can obtain the best quality spectrum for fingerprinting the brain metabolomics composition in AD. Furthermore, we summarise ML and DL techniques that have been utilised to estimate the uncertainty in machine-predicted metabolite content, as well as to streamline the process of displaying results of the metabolite derangement that occurs as part of ageing.”

    New Machine Learning Data Have Been Reported by Researchers at Bhabha Atomic Research Centre (Dislocation-Grain Boundary Interactions in Ta: Numerical, Molecular Dynamics, and Machine Learning Approaches)

    pp. 31-32
    Abstract: Investigators publish new report on Machine Learning. According to news reporting from Mumbai, India, by NewsRx journalists, research stated, “The motivation of this work was to find the appropriate molecular dynamics (MD) and slip transmission parameters of dislocation-grain boundary (GB) interaction in tantalum that correlate with the stress required for the grain boundary to deform. GBs were modeled using [1̄1̄2], [1̄10], and [111] as rotation axes and rotation angles between 0 degrees and 90 degrees.” Financial support for this research came from the Bhabha Atomic Research Centre. The news correspondents obtained a quote from the research from the Bhabha Atomic Research Centre, “Dislocations on either {110} or {112} slip planes were simulated to interact with various GB configurations. The drop in shear stress, drop in potential energy, critical distance between dislocation and GB, and critical shear stress for dislocation absorption by the GB were the parameters calculated from MD simulations of dislocation-GB interactions. The machine learning model eXtreme Gradient Boosting (XGBoost), together with SHapley Additive exPlanations (SHAP), was used to find the correlation between the various parameters and the yield stress of the GB configurations. The machine learning results showed that the MD parameters (critical distance between the dislocation and GB, and drop in shear stress) and the slip transmission parameter m’ have a stronger correlation with yield stress. The SHAP results identified the prominent slip plane and rotation axis affecting the yield stress.”
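
    The XGBoost-plus-SHAP correlation analysis follows a standard pattern, sketched below on synthetic stand-in data (requires the xgboost and shap packages). The feature names mirror the MD parameters named in the abstract, but the data, model settings, and resulting rankings are illustrative, not the study's.

```python
# Hedged sketch of the XGBoost + SHAP workflow described above, on synthetic
# stand-in data. Feature names mirror the MD parameters mentioned in the
# abstract but the data and model settings are illustrative, not the study's.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "critical_distance": rng.uniform(0.5, 5.0, 200),   # illustrative units
    "shear_stress_drop": rng.uniform(0.1, 2.0, 200),
    "potential_energy_drop": rng.uniform(0.0, 1.0, 200),
    "m_prime": rng.uniform(0.0, 1.0, 200),             # slip transmission factor
})
# Fake target loosely tied to two features, to mimic a correlation study.
y = 3.0 * X["critical_distance"] + 2.0 * X["m_prime"] + rng.normal(0, 0.3, 200)

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

# Mean |SHAP value| ranks how strongly each parameter drives the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```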

    Recent Studies from Nankai University Add New Data to Robotics (Visual Servoing Trajectory Tracking and Depth Identification for Mobile Robots With Velocity Saturation Constraints)

    p. 32
    Abstract: Data detailed on Robotics have been presented. According to news reporting originating in Tianjin, People’s Republic of China, by NewsRx journalists, research stated, “The paper proposes a novel visual servoing trajectory tracking controller satisfying velocity saturation constraints for mobile robots, which can simultaneously identify the unknown image depth. Compared with existing saturation controllers, the boundedness of the velocity commands can be explicitly determined even though the control law is coupled with the unknown depth.” Funders for this research include the National Key Research and Development Project, Tianjin Science Fund for Distinguished Young Scholars, National Natural Science Foundation of China (NSFC), and Fundamental Research Funds for the Central Universities. The news reporters obtained a quote from the research from Nankai University, “In addition, asymptotic stability (where existing methods generally achieve only uniform ultimate boundedness, UUB) is achieved theoretically in the presence of both velocity saturation constraints and the unknown depth parameter. To guarantee velocity commands within the allowed speed limit, a saturation function is introduced into the visual servo control law to reshape the tracking errors. Furthermore, to deal with the unknown depth, an adaptive update law is constructed that identifies the depth under the persistent excitation (PE) condition. Also, to explicitly demonstrate the saturation performance of the designed visual servo controller, the boundedness of the velocity commands is analyzed, from which parameter selection rules are derived. The asymptotic convergence of the tracking errors is proved with Lyapunov techniques.”
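
    The core saturation mechanism can be illustrated briefly. The sketch below bounds hypothetical velocity commands with a smooth tanh-based saturation; the controller structure, gains, and limits are assumptions, not the authors' control law.

```python
# Minimal sketch of bounding velocity commands with a saturation function,
# illustrating the general mechanism described above (not the authors'
# control law). The tanh-based saturation and all gains are assumptions.
import numpy as np

def sat(v: np.ndarray, v_max: np.ndarray) -> np.ndarray:
    """Smooth, element-wise saturation: |sat(v)| < v_max for any input v."""
    return v_max * np.tanh(v / v_max)

# Hypothetical proportional visual-servoing command on image-space errors.
k_gain = np.array([2.0, 1.5])   # controller gains (illustrative)
error = np.array([4.0, -0.3])   # tracking errors (illustrative)
v_max = np.array([0.5, 1.0])    # linear/angular speed limits (m/s, rad/s)

v_cmd = sat(k_gain * error, v_max)
print(v_cmd)  # stays within (-0.5, 0.5) and (-1.0, 1.0) however large the error
```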

    Investigators from Karlsruhe Institute of Technology (KIT) Report New Data on Robotics (MetaGraspNetV2: All-in-One Dataset Enabling Fast and Reliable Robotic Bin Picking via Object Relationship Reasoning and Dexterous Grasping)

    pp. 33-34
    Abstract: Investigators publish new report on Robotics. According to news originating from Karlsruhe, Germany, by NewsRx correspondents, research stated, “Grasping unknown objects in unstructured environments is one of the most challenging and demanding tasks for robotic bin picking systems. Developing a holistic approach is crucial to building such dexterous bin picking systems to meet practical requirements on speed, cost and reliability.” Financial support for this research came from the German Federal Ministry for Economic Affairs and Climate Action (BMWK). Our news journalists obtained a quote from the research from the Karlsruhe Institute of Technology (KIT), “Datasets proposed so far focus only on challenging sub-problems and are therefore limited in their ability to leverage the complementary relationship between individual tasks. In this paper, we tackle this holistic data challenge and design MetaGraspNetV2, an all-in-one bin picking dataset consisting of (i) a photo-realistic dataset with over 296k images, created through physics-based metaverse synthesis; and (ii) a real-world test dataset with 3.2k images featuring task-specific difficulty levels. Both datasets provide full annotations for amodal panoptic segmentation, object relationship detection, occlusion reasoning, 6-DoF pose estimation, and grasp detection for a parallel-jaw as well as a vacuum gripper. Extensive experiments demonstrate that our dataset outperforms state-of-the-art datasets in object detection, instance segmentation, amodal detection, parallel-jaw grasping, and vacuum grasping. Furthermore, leveraging the potential of our data for building holistic perception systems, we propose a single-shot-multi-pick (SSMP) grasping policy that uses scene understanding to accelerate picking in high clutter. SSMP reasons about suitable manipulation orders for blindly picking multiple items given a single image acquisition. Physical robot experiments demonstrate that SSMP effectively speeds up cycle times by reducing image acquisitions by more than 47% while providing better grasp performance compared to state-of-the-art bin picking methods. Note to Practitioners: In robotic bin picking, most proposed methods and datasets focus on solving only one aspect of the grasping task, such as grasp point detection, object detection, or relationship reasoning. They do not address practical aspects such as the widespread use of vacuum grasp technology or the need for short cycle times. In practice, however, efficient bin picking solutions often rely on multiple task-specific methods. Hence, having one dataset for a large variety of vision-related tasks in robotic picking reduces data redundancy and enables the development of holistic methods. While deep learning has proven highly effective for bin picking vision systems, it demands large, high-quality training datasets. Collecting such datasets in the real world, while assuring label quality and consistency, is prohibitively expensive and time-consuming. To overcome these challenges, we set up a photo-realistic metaverse data generation pipeline and create a large-scale synthetic training dataset. Furthermore, we design a comprehensive real-world dataset for testing. Unlike previously proposed datasets, ours provide difficulty levels and annotations in simulation and the real world for a comprehensive list of high-level tasks, including amodal object detection, scene layout reasoning, and grasp detection.
In real-world applications, cycle time is a critical factor affecting the productivity and profitability of a robotic system. We tackle time efficiency through scene understanding and demonstrate the capability of our data for holistic system development by proposing a single-shot-multi-pick (SSMP) policy. Our SSMP algorithm, trained exclusively on our synthetic data, distinguishes between uncovered and occluded items and infers manipulation orders to perform multiple blind picks from a single shot. Physical robot experiments show that SSMP reduces image acquisitions by more than 47% without compromising grasp performance.”
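
    The ordering idea behind single-shot multi-picking can be loosely illustrated: items that nothing rests on are safe to pick first, and a topological order of the occlusion relation gives a safe blind-pick sequence from one image. The sketch below uses Python's standard-library graphlib on a toy occlusion graph; SSMP itself is a learned policy and is not reproduced here.

```python
# Sketch of the ordering idea behind a single-shot-multi-pick policy:
# pick items that nothing rests on first, then the items they uncovered,
# all planned from one image. The occlusion graph is a toy stand-in.
from graphlib import TopologicalSorter

# occludes[x] = set of items lying on top of x (they must be picked first).
occludes = {
    "box":    {"tube", "cap"},
    "tube":   set(),
    "cap":    {"tube"},
    "widget": set(),
}

# A topological order of the "blocked-by" relation is a safe pick sequence.
pick_order = list(TopologicalSorter(occludes).static_order())
print(pick_order)  # e.g. ['tube', 'widget', 'cap', 'box']
```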

    Researcher from Ho Chi Minh City University of Transport Discusses Findings in Machine Learning (Experimental study and machine learning based prediction of the compressive strength of geopolymer concrete)

    pp. 34-35
    Abstract: Research findings on artificial intelligence are discussed in a new report. According to news reporting originating from Ho Chi Minh City, Vietnam, by NewsRx correspondents, research stated, “This study aims to investigate and predict the compressive strength of geopolymer concrete (GPC).” Our news correspondents obtained a quote from the research from Ho Chi Minh City University of Transport: “The effects of curing method, curing time and concrete age on the compressive strength of GPC were evaluated experimentally. Four curing methods, namely room temperature (25°C), mobile dryer (50°C), heating cabinet type 1 (80°C), and heating cabinet type 2 (100°C), were adopted. Additionally, three curing times of 8 h, 16 h and 24 h, as well as three concrete ages of 7 days, 14 days, and 28 days, were considered. To predict the compressive strength of GPC, 679 test results were collected to develop various machine learning models. The test results indicated that increasing the curing temperature, curing time and concrete age all led to improvements in the compressive strength of GPC. The mobile dryer showed promise as a curing method for cast-in-place GPC.” According to the news reporters, the research concluded: “The proposed machine learning models demonstrated good predictive capacity for the compressive strength of GPC, with relatively high accuracy. Through sensitivity analysis, concrete age was identified as the most influential variable affecting the final compressive strength of GPC.”
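
    The prediction-plus-sensitivity workflow can be sketched with standard tools. The example below trains a random forest on synthetic stand-in data and ranks inputs by permutation importance; the features mirror the experimental variables above, but the data, model choice, and resulting ranking are illustrative, and the study's 679 collected results are not reproduced.

```python
# Hedged sketch of the prediction-plus-sensitivity workflow described above,
# on synthetic stand-in data. Features, units, and model choice are
# illustrative; they are not the study's collected test results.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "curing_temp_C": rng.choice([25, 50, 80, 100], 300),
    "curing_time_h": rng.choice([8, 16, 24], 300),
    "age_days":      rng.choice([7, 14, 28], 300),
})
# Fake strength loosely increasing in all three inputs (MPa, illustrative).
y = 0.15 * X["curing_temp_C"] + 0.3 * X["curing_time_h"] \
    + 0.8 * X["age_days"] + rng.normal(0, 2.0, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))

# Permutation importance as a simple sensitivity analysis.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, val in zip(X.columns, imp.importances_mean):
    print(f"{name}: {val:.3f}")
```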

    New Artificial Intelligence Study Findings Have Been Published by Researchers at Kazan State Power Engineering University (Advancing parallel programming integrating artificial intelligence for enhanced efficiency and automation)

    pp. 35-36
    Abstract: A new study on artificial intelligence is now available. According to news reporting from Kazan State Power Engineering University by NewsRx journalists, research stated, “This article delves into the burgeoning integration of Artificial Intelligence (AI) in parallel programming, highlighting its potential to transform the landscape of computational efficiency and developer experience.” Our news correspondents obtained a quote from the research from Kazan State Power Engineering University: “We begin by exploring the fundamental role of parallel programming in modern computing and the inherent challenges it presents, such as task distribution, synchronization, and memory management. The advent of AI, especially in machine learning and deep learning, offers novel solutions to these challenges. We discuss the application of AI in automating the creation of parallel programs, with a focus on automatic code generation, adaptive resource management, and the enhancement of developer experience. The article examines specific AI methods (genetic algorithms, reinforcement learning, and neural networks) and their application in optimizing various aspects of parallel programming. Further, we delve into the prospects of combining these AI methods for a synergistic effect, emphasizing the potential for increased efficiency and accuracy. The importance of integrating AI technologies with existing development tools is also highlighted, with the aim of bringing AI’s benefits to a broader developer audience.”
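
    As a toy illustration of the adaptive resource management theme, the sketch below uses an epsilon-greedy bandit to learn which worker count runs a dummy parallel job fastest. It is a loose illustration of the feedback loop the article discusses, not a method taken from the article; the workload and parameters are assumptions.

```python
# Toy illustration of AI-assisted adaptive resource management: an
# epsilon-greedy bandit learns which worker count runs a simulated
# I/O-bound parallel job fastest. Not a real scheduler; the workload
# and parameters are stand-ins.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def task(_):
    time.sleep(0.01)  # simulated I/O-bound work

def run_job(n_workers: int) -> float:
    """Run 16 dummy tasks on a pool and return wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(task, range(16)))
    return time.perf_counter() - start

arms = [1, 2, 4, 8]                 # candidate worker counts
avg = {a: 0.0 for a in arms}
count = {a: 0 for a in arms}

for step in range(20):
    if step < len(arms):            # try each arm once first
        arm = arms[step]
    elif random.random() < 0.2:     # explore occasionally
        arm = random.choice(arms)
    else:                           # otherwise exploit the fastest so far
        arm = min(arms, key=lambda a: avg[a])
    elapsed = run_job(arm)
    count[arm] += 1
    avg[arm] += (elapsed - avg[arm]) / count[arm]   # running mean

print("learned best worker count:", min(arms, key=lambda a: avg[a]))
```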

    Studies Conducted at Northern University on Computational Intelligence Recently Reported (Improved Genetic Algorithm Approach for Coordinating Decision-making In Technological Disaster Management)

    pp. 36-37
    Abstract: Current study results on Computational Intelligence have been published. According to news originating from Barranquilla, Colombia, by NewsRx correspondents, research stated, “The increasing frequency of technological events has resulted in significant damage to the environment, human health, social stability, and the economy, driving ongoing scientific development and interest in emergency management (EM). Traditional EM approaches are often inadequate because of incomplete and imprecise information during crises, making fast and effective decision-making challenging.” Financial support for this research came from the Sistema General de Regalías de Colombia. Our news journalists obtained a quote from the research from Northern University, “Computational Intelligence (CI) techniques offer decision-supporting capabilities that can effectively address these challenges. However, there is still a need for deeper integration of emerging computational intelligence techniques to support evidence-based decision-making, while also addressing gaps in metrics, standards, and protocols for emergency response and scalability. This study presents a coordinated decision-making system for multiple types of emergency scenarios in technological disaster management based on CI techniques, including an Improved Genetic Algorithm (IGA) and Multi-objective Particle Swarm Optimization (MOPSO). The IGA enhances emergency performance by optimizing the task assignment for the multiple agents involved in emergency response, with coordination mechanisms, resulting in an approximately 15% improvement over other state-of-the-art methods.”
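
    Task assignment by genetic algorithm can be sketched compactly. The example below evolves assignments of emergency-response tasks to agents to minimize the busiest agent's workload; the cost matrix, operators, and parameters are illustrative and do not reproduce the paper's IGA or its coordination mechanisms.

```python
# Minimal genetic-algorithm sketch for assigning emergency-response tasks to
# agents, illustrating the kind of optimization an IGA performs. The cost
# matrix, operators, and parameters are illustrative, not the paper's IGA.
import random

random.seed(42)
N_TASKS, N_AGENTS = 8, 3
# cost[t][a]: time for agent a to handle task t (toy data).
cost = [[random.uniform(1, 10) for _ in range(N_AGENTS)] for _ in range(N_TASKS)]

def fitness(assign):
    """Makespan: the busiest agent's total workload (lower is better)."""
    load = [0.0] * N_AGENTS
    for t, a in enumerate(assign):
        load[a] += cost[t][a]
    return max(load)

def mutate(assign):
    child = assign[:]
    child[random.randrange(N_TASKS)] = random.randrange(N_AGENTS)
    return child

def crossover(p1, p2):
    cut = random.randrange(1, N_TASKS)
    return p1[:cut] + p2[cut:]

pop = [[random.randrange(N_AGENTS) for _ in range(N_TASKS)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]   # keep the best assignments
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]

best = min(pop, key=fitness)
print("best makespan:", round(fitness(best), 2), "assignment:", best)
```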

    Lankenau Medical Center Reports Findings in Atopic Dermatitis (Patient Phenotyping for Atopic Dermatitis With Transformers and Machine Learning: Algorithm Development and Validation Study)

    p. 37
    Abstract: New research on Atopic Dermatitis (Skin Diseases and Conditions) is the subject of a report. According to news reporting from Wynnewood, Pennsylvania, by NewsRx journalists, research stated, “Atopic dermatitis (AD) is a chronic skin condition that millions of people around the world live with each day. Research into identifying the causes of and treatments for this disease has great potential to benefit these individuals.” The news correspondents obtained a quote from the research from Lankenau Medical Center, “However, AD clinical trial recruitment is not a trivial task due to the variance in diagnostic precision and in the phenotypic definitions leveraged by different clinicians, as well as the time clinicians spend finding, recruiting, and enrolling patients as study participants. Thus, there is a need for automatic and effective patient phenotyping for cohort recruitment. This study aims to present an approach for identifying patients whose electronic health records suggest that they may have AD. We created a vectorized representation of each patient and trained various supervised machine learning methods to classify when a patient has AD. Each patient is represented by a vector of either probabilities or binary values, where each value indicates whether they meet a different criterion for AD diagnosis. The most accurate AD classifier achieved a class-balanced accuracy of 0.8036, a precision of 0.8400, and a recall of 0.7500 using XGBoost (Extreme Gradient Boosting).” According to the news reporters, the research concluded: “Creating an automated approach for identifying patient cohorts has the potential to accelerate, standardize, and automate the process of patient recruitment for AD studies, thereby reducing clinician burden and informing the discovery of better treatment options for AD.”
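
    The phenotyping setup maps naturally onto a short sketch: each patient is a binary vector of criteria indicators, classified with XGBoost and scored by class-balanced accuracy. The data and criteria names below are synthetic stand-ins, not the study's EHR-derived features, so the printed scores only illustrate the workflow.

```python
# Hedged sketch of the phenotyping setup described above: each patient is a
# binary vector of AD-criteria indicators, classified with XGBoost and
# scored by class-balanced accuracy. Data and criteria are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
n = 1000
# Columns stand in for criteria such as pruritus, an eczema diagnosis code,
# a topical-steroid prescription, and family history (all hypothetical).
X = rng.integers(0, 2, size=(n, 4))
# Synthetic label: AD more likely when several criteria are met.
y = (X.sum(axis=1) + rng.normal(0, 0.8, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("balanced accuracy:", round(balanced_accuracy_score(y_te, pred), 4))
print("precision:", round(precision_score(y_te, pred), 4))
print("recall:", round(recall_score(y_te, pred), 4))
```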