Abstract: In smart cities, ensuring robust authentication, security, and scalable infrastructure presents significant challenges. Traditional centralized authentication methods expose vulnerabilities and increase energy consumption, particularly in resource-constrained IoT nodes. Moreover, existing blockchain-based authentication systems incur substantial overhead, delays, and complexity, compromising their effectiveness in diverse environments. A critical issue in real-time smart systems is the significant authentication delay caused by the high volume of requests processed by the blockchain. To address these challenges, we propose an innovative blockchain architecture that connects distributed fog servers for seamless IoT node authentication within smart-city networks. Our model integrates trust-based analyses, encompassing behavior and data trust evaluations of IoT nodes at fog servers, to strengthen security by detecting malicious nodes and tampered data early in the process. This streamlines verification by reducing the influx of untrusted data during consensus, ensuring that only the most reliable data advances to blockchain operations and enhancing efficiency and reliability. The architecture guarantees data confidentiality and integrity through lightweight encryption and digital certification. It fosters scalability, seamless communication, and information sharing among smart-city entities, facilitating internetwork node identification across a spectrum of smart systems. Performance assessment of the proposed model revealed notable improvements in computation cost, execution time, and power consumption. Our findings revealed a network-lifetime enhancement of up to 35% compared with centralized and existing blockchain-based schemes. Furthermore, a security assessment confirmed the effectiveness of the model in preventing tampering and various attacks, thereby satisfying the stringent security requirements of smart cities.
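The early trust-based filtering described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the weights and the 0.7 threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: a fog server combines behavior trust and data trust per
# IoT node and drops untrusted readings before they reach consensus.
# Weights and threshold are illustrative assumptions, not the paper's values.

def trust_score(behavior_trust, data_trust, w_behavior=0.5, w_data=0.5):
    """Weighted combination of behavior and data trust, each in [0, 1]."""
    return w_behavior * behavior_trust + w_data * data_trust

def filter_for_consensus(readings, threshold=0.7):
    """Keep only readings from nodes whose combined trust meets the threshold."""
    return [r for r in readings
            if trust_score(r["behavior_trust"], r["data_trust"]) >= threshold]

readings = [
    {"node": "A", "behavior_trust": 0.9, "data_trust": 0.8, "value": 21.5},
    {"node": "B", "behavior_trust": 0.3, "data_trust": 0.4, "value": 99.0},  # likely tampered
    {"node": "C", "behavior_trust": 0.8, "data_trust": 0.7, "value": 22.1},
]
trusted = filter_for_consensus(readings)
```

Only nodes A and C pass the gate here, so the consensus layer never sees node B's suspect reading, which is the load-reduction effect the abstract describes.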
Abstract: Cross-chain data sharing in the Internet of Things (IoT) has become a critical challenge due to the isolation of industry-specific blockchains and the lack of trust mechanisms between heterogeneous networks. This issue is particularly important because IoT data sharing enables cross-industry collaboration, unlocks data value, and fosters innovation in applications such as smart cities and intelligent transportation. Existing solutions, including notary mechanisms, sidechains, and relay chains, often suffer from centralization issues, limited scalability, or inadequate incentives for active participation, making them insufficient for the dynamic and large-scale requirements of IoT ecosystems. To tackle this problem, this paper proposes a reputation-based incentive mechanism for cross-chain data sharing, which integrates a main-subchain architecture with a two-stage notary group election algorithm based on evolutionary game theory. Additionally, a Stackelberg game is employed to model interactions between data producers and consumers, optimizing data pricing strategies and incentivizing trustworthy sharing. The proposed framework is evaluated through extensive simulations on a star-topology blockchain network, testing its scalability, fairness, and effectiveness. Results demonstrate that the mechanism not only mitigates centralization problems but also enhances trust, collaboration, and efficiency across heterogeneous blockchains, providing a robust foundation for IoT data-sharing applications.
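The Stackelberg producer-consumer interaction above can be illustrated with a toy leader-follower pricing game. The utility functions below are our own assumptions for demonstration, not the paper's model: the consumer (follower) maximizes U(q) = a·ln(1+q) − p·q, giving the best response q*(p) = a/p − 1, and the producer (leader) picks the price p maximizing profit (p − c)·q*(p), which analytically peaks at p* = √(a·c).

```python
# Toy Stackelberg pricing sketch (illustrative utilities, not the paper's
# model). Leader: data producer posting unit price p. Follower: data consumer
# best-responding with quantity q*(p) = a/p - 1 for utility a*ln(1+q) - p*q.

def follower_best_response(p, a=4.0):
    """Consumer's optimal purchase quantity given price p (never negative)."""
    return max(a / p - 1.0, 0.0)

def leader_profit(p, a=4.0, c=1.0):
    """Producer profit anticipating the follower's best response."""
    return (p - c) * follower_best_response(p, a)

def optimal_price(a=4.0, c=1.0, steps=4000):
    """Grid search over prices in (c, a]; analytic optimum is sqrt(a*c)."""
    candidates = [c + (a - c) * i / steps for i in range(1, steps + 1)]
    return max(candidates, key=lambda p: leader_profit(p, a, c))

p_star = optimal_price()                  # ~2.0 = sqrt(4.0 * 1.0) here
q_star = follower_best_response(p_star)   # ~1.0
```

Backward induction is the essence of the Stackelberg structure: the leader optimizes over the follower's anticipated reaction rather than over the raw quantity.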
Abstract: Nowadays, one can observe the convergence of the Internet of Things (IoT) and Edge Computing (EC) infrastructures towards establishing a data collection and processing ecosystem in close proximity to end users. The aim is to enhance the performance of the supported applications by reducing the latency in data processing and service delivery. Various services can be employed to facilitate the execution of tasks prompted by end users or external applications. Those services are mainly present at EC nodes, which become the hosts of the data collected by IoT devices, the executors of the desired tasks, and the intermediaries when transferring the discussed data to the Cloud back end. The implementation of an efficient framework for managing services across distributed edge nodes thus becomes imperative, especially considering that nodes are constrained devices and cannot host numerous services. In this paper, we introduce a proactive model designed to allocate the available services to core parts of the EC ecosystem based on the observed demand. This allows us to determine where to place each individual service, putting it in locations (i.e., EC nodes) where increased demand is identified, while saving resources by restricting the number of nodes that become the final hosts (to avoid flooding the network). The paper evaluates the proposed model, offering a comparative analysis with a baseline scheme using real datasets. Through the envisioned experimental validation, the paper demonstrates that the proposed approach enhances the ability of the diverse engaged edge nodes to accurately deduce the appropriate location for service placement.
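The demand-driven placement idea can be sketched in a few lines. This is our simplification, not the paper's model: each service is replicated only on the k edge nodes where its observed demand is highest, targeting hot spots while capping the number of hosts to avoid flooding the network.

```python
# Illustrative sketch of demand-driven service placement (a simplification,
# not the paper's proactive model): rank edge nodes by observed per-service
# demand and host each service only on the top-k nodes.

def place_services(demand, k=2):
    """demand: {service: {node: observed request count}} -> {service: [nodes]}"""
    placement = {}
    for service, per_node in demand.items():
        ranked = sorted(per_node, key=per_node.get, reverse=True)
        placement[service] = ranked[:k]   # cap hosts to save resources
    return placement

demand = {
    "s1": {"n1": 120, "n2": 15, "n3": 90},
    "s2": {"n1": 5, "n2": 60, "n3": 8},
}
placement = place_services(demand, k=2)
```

The cap `k` is the knob trading replication cost against proximity: a larger `k` shortens access paths but consumes scarce capacity on constrained nodes.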
Abstract: This paper investigates the security and reliability performance of hybrid cognitive satellite-terrestrial networks employing a Low Earth Orbit (LEO) satellite as a decode-and-forward (DF) relay. The terrestrial user (TU) operates within an underlay cognitive radio (CR) network, where the primary user (PU) shares its spectrum with the TU while imposing interference power constraints to protect its quality of service. To counteract eavesdropping from a terrestrial adversary, the TU incorporates artificial noise (AN) into its transmission, creating a tradeoff between security and reliability. The TU-to-LEO and TU-to-PU links are modeled using Shadowed Rician and Nakagami-m fading, respectively. Key performance metrics, including the outage probability (OP) and intercept probability (IP), are analyzed under varying system parameters such as the power-splitting factor, channel conditions, and interference thresholds. Analytical results are validated through Monte Carlo simulations, and simplified approximations are presented for practical implementation. Results demonstrate the efficacy of the proposed approach in balancing security and reliability.
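The Monte Carlo validation step described above can be illustrated generically for a Nakagami-m link. The parameter values below are assumptions for the example, not the paper's settings: under Nakagami-m fading the channel power gain g = |h|² is Gamma-distributed with shape m and scale Ω/m, so for integer m the outage probability P(ρ·g < γ_th) equals the regularized lower incomplete gamma function, and simulation should match the closed form.

```python
# Generic Monte Carlo vs. closed-form outage-probability check for a
# Nakagami-m faded link (illustrative parameters, not the paper's system).

import math
import random

def outage_mc(m=2, omega=1.0, rho=10.0, gamma_th=3.0, n=200_000, seed=1):
    """Empirical P(received SNR rho*g < gamma_th) with g ~ Gamma(m, omega/m)."""
    random.seed(seed)
    hits = sum(rho * random.gammavariate(m, omega / m) < gamma_th
               for _ in range(n))
    return hits / n

def outage_analytic(m=2, omega=1.0, rho=10.0, gamma_th=3.0):
    """Closed form for integer m: regularized lower incomplete gamma."""
    x = m * gamma_th / (rho * omega)
    return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(m))

mc, exact = outage_mc(), outage_analytic()
```

With 200,000 trials the empirical estimate sits within a fraction of a percent of the analytical value, which is exactly the agreement a validation plot in such papers is meant to show.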
Enwereonye, Uchenna P.; Shahraki, Ahmad Salehi; Alavizadeh, Hooman; Kayes, A. S. M.; ...
pp. 1.1-1.15
Abstract: The future of smart cities, industrial automation, and connected vehicles is heavily reliant on advanced communication technologies. These technologies, particularly massive Machine-Type Communication (mMTC), are the backbone of the many connected devices required for these applications. Grant-free access in 5G and beyond enhances transmission efficiency by eliminating the need for permission requests, but it also introduces significant security risks, such as unauthorised access, data interception, and interference arising from the absence of centralised control. Physical layer security (PLS) techniques, with their ability to exploit the unique properties of wireless channels to bolster communication security, offer a promising solution. This paper provides a comprehensive review of PLS techniques for securing grant-free mMTC, comparing different approaches and exploring the challenges of their integration. Our findings lay the groundwork for future research and the practical implementation of advanced security solutions in grant-free mMTC, a development that will also enhance the security of advanced 5G and 6G networks.
Abstract: Efficient and fair resource allocation is a critical challenge in vehicular networks, especially under high mobility and unknown channel state information (CSI). Existing works mainly focus on centralized optimization with perfect CSI or decentralized heuristics with partial CSI, which may not be practical or effective in real-world scenarios. In this paper, we propose a novel hierarchical deep reinforcement learning (HDRL) framework to address the joint channel and power allocation problem in vehicular networks with high mobility and unknown CSI. The main contributions of this work are twofold. First, this paper develops a multi-agent reinforcement learning architecture that integrates centralized training with global information and decentralized execution with local observations. The proposed architecture leverages the strengths of deep Q-networks (DQN) for discrete channel selection and deep deterministic policy gradient (DDPG) for continuous power control, while learning robust and adaptive policies under time-varying channel conditions. Second, this paper designs efficient reward functions and training algorithms that encourage cooperation among vehicles and balance the trade-off between system throughput and individual fairness. By incorporating Jain's fairness index into the reward design and adopting a hybrid experience replay strategy, the proposed algorithm achieves a good balance between system efficiency and user equity. Extensive simulations demonstrate the superiority of the proposed HDRL method over state-of-the-art benchmarks, including DQN, DDPG, and fractional programming, in terms of both average throughput and fairness index under various realistic settings. The proposed framework provides a promising solution for intelligent and efficient resource management in future vehicular networks.
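Jain's fairness index mentioned above has a standard closed form, J(x) = (Σx)² / (n·Σx²): it equals 1 for perfectly equal rates and 1/n when one user takes everything. A minimal sketch of a throughput-fairness reward built on it follows; the mixing weight and normalization are our illustrative assumptions, not the paper's tuned design.

```python
# Sketch of a reward mixing normalized system throughput with Jain's fairness
# index J(x) = (sum x)^2 / (n * sum x^2). Weight w_fair is an assumption.

def jain_index(rates):
    """1.0 for perfectly equal rates, 1/n when one user gets everything."""
    n = len(rates)
    total = sum(rates)
    return (total * total) / (n * sum(r * r for r in rates)) if total else 0.0

def reward(rates, w_fair=0.5, max_rate=10.0):
    """Convex mix of normalized throughput and fairness, both in [0, 1]."""
    throughput = sum(rates) / (len(rates) * max_rate)
    return (1 - w_fair) * throughput + w_fair * jain_index(rates)
```

Because both terms live in [0, 1], `w_fair` directly expresses how much throughput an agent is willing to sacrifice for equity, which is the trade-off the reward design in the abstract targets.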
Abstract: A transfer-based adversarial attack implies that the same adversarial example can fool Deep Neural Networks (DNNs) with different architectures. Model-related approaches train a new surrogate model locally to generate adversarial examples. However, because DNNs with different architectures focus on diverse features within the same data, adversarial examples generated by surrogate models frequently exhibit poor transferability when the surrogate and target models have significant architectural differences. In this paper, we propose a Two-Stage Generation Framework (TSGF) based on frequency-domain augmentation and multi-scale feature alignment to address this issue. In the surrogate-model training stage, we enable the surrogate model to capture various features of the data through detail and diversity enhancement. Detail enhancement increases the weight of details in clean examples via a frequency-domain augmentation module. Diversity enhancement incorporates slightly perturbed adversarial examples into the training process to increase the diversity of clean examples. In the adversarial-generation stage, we perturb the distinctive features that different models focus on to improve transferability via a multi-scale feature alignment attack technique. Specifically, we design a loss function using the intermediate multi-layer features of the surrogate model to maximize the difference between the features of clean and adversarial examples. We compare TSGF with a combination of three closely related surrogate-model training schemes and the most relevant adversarial attack methods. Results show that TSGF improves transferability across significantly different architectures. The implementation of TSGF is available at https://github.com/zhanghrswpu/TSGF.
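The multi-scale feature-alignment objective can be sketched abstractly. This is a toy in plain Python rather than a DL framework, and the per-layer weighting is our assumption: the attack ascends a weighted sum of squared distances between clean and adversarial intermediate features, so the perturbation disturbs features at several scales rather than a single layer's output.

```python
# Toy sketch of a multi-scale feature-alignment objective (our illustration,
# not the TSGF implementation): sum weighted squared feature distances across
# several intermediate layers; the attack maximizes this quantity.

def alignment_loss(clean_feats, adv_feats, layer_weights):
    """Each feats argument: per-layer list of feature vectors (lists of floats)."""
    loss = 0.0
    for w, f_clean, f_adv in zip(layer_weights, clean_feats, adv_feats):
        loss += w * sum((a - c) ** 2 for a, c in zip(f_adv, f_clean))
    return loss

# Two layers of features from a clean input and its adversarial counterpart.
clean = [[1.0, 2.0], [0.0]]
adv   = [[1.0, 4.0], [3.0]]
loss = alignment_loss(clean, adv, layer_weights=[1.0, 0.5])
```

In a real attack, gradient ascent on this loss (with respect to the input perturbation, under a norm constraint) drives the adversarial features away from the clean ones at every chosen scale.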
Abstract: The rapid expansion of the Internet of Things (IoT) in wireless and mobile networks demands novel approaches for efficient data transmission and management. Traditional IP-based networking architectures struggle to meet the high-speed, low-latency, and scalability requirements of IoT. Named Data Networking (NDN), a content-centric networking paradigm, provides an alternative by focusing on data retrieval based on content names rather than device addresses. However, while NDN offers significant advantages in reducing latency and improving data dissemination, its integration with edge computing for real-time IoT applications remains suboptimal due to challenges in dynamic resource allocation, routing efficiency, and robustness under uncertain network conditions. This paper proposes a novel adaptive NDN-edge computing framework that dynamically optimizes data retrieval, caching, and computational resource allocation. Unlike prior studies that focus solely on theoretical models or static configurations, our framework introduces a multi-objective optimization model for balancing latency, reliability, and energy efficiency in IoT environments. Additionally, we formulate a robust optimization approach to ensure network resilience against unpredictable traffic surges, topology changes, and edge node failures. Through extensive simulations and real-world case studies, we demonstrate that the proposed integration significantly improves latency (up to 25% reduction), energy efficiency (15% improvement), and cache hit ratio (20% increase) compared to conventional NDN and edge computing approaches. This work contributes to the ongoing research by providing a scalable, adaptive, and resilient NDN-edge computing framework that enhances IoT data processing while addressing critical limitations of existing solutions. Future work will focus on security enhancements and the integration of blockchain for decentralized trust management in IoT ecosystems.
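A minimal sketch of the multi-objective balancing idea follows. The metrics, normalization bounds, and weights are illustrative assumptions, not the paper's model: latency and energy are "lower is better", reliability is "higher is better", so each is normalized to [0, 1] and combined into a single score to maximize when choosing an edge node.

```python
# Illustrative weighted multi-objective score for edge-node selection
# (assumed weights and normalization bounds, not the paper's formulation).

def node_score(latency_ms, reliability, energy_j,
               max_latency=100.0, max_energy=5.0,
               w=(0.5, 0.3, 0.2)):
    """Higher is better; each term is normalized to [0, 1]."""
    w_lat, w_rel, w_en = w
    return (w_lat * (1 - latency_ms / max_latency)   # low latency rewarded
            + w_rel * reliability                     # high reliability rewarded
            + w_en * (1 - energy_j / max_energy))     # low energy rewarded

nodes = {
    "edge-1": node_score(20, 0.99, 1.0),
    "edge-2": node_score(80, 0.80, 0.5),
}
best = max(nodes, key=nodes.get)
```

Scalarizing with fixed weights is the simplest multi-objective treatment; a robust formulation like the one the abstract mentions would additionally optimize against worst-case traffic or failure scenarios rather than a single nominal input.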
Abstract: Due to the rapid growth of IoT and artificial intelligence, deploying neural networks on IoT devices is becoming increasingly crucial for edge intelligence. Federated learning (FL) facilitates the management of edge devices to collaboratively train a shared model while keeping training data local and private. However, a general assumption in FL is that all edge devices train the same machine learning model, which may be impractical considering diverse device capabilities. For instance, less capable devices may slow down the updating process because they struggle to handle large models appropriate for ordinary devices. In this paper, we propose a novel data-free FL method that supports heterogeneous client models by managing features and logits, called Felo, and its extension with a conditional VAE deployed on the server, called Velo. Felo averages the mid-level features and logits from the clients at the server based on their class labels to provide the average features and logits, which are utilized for further training the client models. Unlike Felo, the server in Velo hosts a conditional VAE, which is used for training on mid-level features and generating synthetic features according to the labels. The clients optimize their models based on the synthetic features and the average logits. We conduct experiments on two datasets and show that our methods achieve satisfactory performance compared with state-of-the-art methods. Our code is released on GitHub.
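The server-side aggregation in Felo, as we read the abstract, amounts to class-wise averaging of uploaded features and logits. The sketch below is a simplified illustration of that idea, not the authors' code:

```python
# Simplified sketch of class-wise aggregation (our reading of Felo's
# server step, not the authors' implementation): average uploaded mid-level
# features and logits per class label, then return the per-class averages.

from collections import defaultdict

def classwise_average(uploads):
    """uploads: list of (label, feature_vec, logit_vec) tuples
    -> {label: (avg_feature_vec, avg_logit_vec)}"""
    sums = {}
    counts = defaultdict(int)
    for label, feat, logit in uploads:
        if label not in sums:
            sums[label] = [list(feat), list(logit)]
        else:
            sums[label][0] = [a + b for a, b in zip(sums[label][0], feat)]
            sums[label][1] = [a + b for a, b in zip(sums[label][1], logit)]
        counts[label] += 1
    return {label: ([v / counts[label] for v in feat_sum],
                    [v / counts[label] for v in logit_sum])
            for label, (feat_sum, logit_sum) in sums.items()}

uploads = [
    (0, [1.0, 3.0], [0.2]),   # client A, class 0
    (0, [3.0, 5.0], [0.6]),   # client B, class 0
    (1, [2.0, 2.0], [1.0]),   # client A, class 1
]
averages = classwise_average(uploads)
```

Because only fixed-size feature and logit vectors cross the network, clients with entirely different model architectures can consume the same averages, which is what makes the scheme model-heterogeneous and data-free.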
Abstract: Traffic engineering (TE) is important for improving network performance. Recently, segment routing (SR) has gained increasing attention in the TE field. Many segment routing traffic engineering (SR-TE) methods compute optimal routing policies by solving linear programming (LP) problems, which suffer from high computation time. Therefore, various methods have been proposed for accelerating TE optimization. However, prior methods solve individual TE optimization problems from scratch, overlooking valuable information in existing historical solutions. We argue that these data can reveal the distribution of optimal solutions and thus help solve future TE problems. In this paper, we provide a new perspective on accelerating SR-TE optimization. First, we generated and analyzed historical solutions of a widely used LP model and revealed two key findings: flows are predominantly routed through a small subset of intermediate nodes, and similar routing decisions can be made for groups of flows. Then, inspired by these findings, we propose RS4SR, the first framework, to our knowledge, to leverage historical solutions for SR-TE acceleration. It can significantly reduce the size of the LP model by performing candidate recommendation and flow clustering. Experiments on real-world topologies and various traffic matrices demonstrate that a simple implementation of RS4SR is sufficient to obtain near-optimal solutions within a time limit of two seconds on large-scale networks, utilizing a small number of historical solutions.
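The flow-clustering step can be sketched in its simplest form. This is our simplification for illustration, not the RS4SR algorithm: flows that received the same intermediate-node choices across historical solutions are grouped, so the reduced LP needs one set of routing variables per cluster instead of per flow.

```python
# Illustrative sketch of flow clustering from historical solutions (a
# simplification of the RS4SR idea, not its actual algorithm): flows whose
# past routing decisions coincide are merged into one cluster.

def cluster_flows(history):
    """history: {flow: tuple of intermediate nodes chosen in past solutions}
    -> list of clusters (lists of flows sharing identical decision history)."""
    clusters = {}
    for flow, decisions in history.items():
        clusters.setdefault(decisions, []).append(flow)
    return list(clusters.values())

history = {
    "f1": ("n3", "n3", "n7"),
    "f2": ("n3", "n3", "n7"),   # identical historical decisions to f1
    "f3": ("n5", "n5", "n5"),
}
clusters = cluster_flows(history)
```

Combined with restricting candidate intermediate nodes to those that historically carried traffic, this shrinks both the variable count and the constraint count of the LP, which is where the reported speedup would come from.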