Journal information
网络空间安全科学与技术(英文版) / Journal of Cybersecurity Science and Technology (indexed in CSCD)
Quarterly; officially published

    Optimal monitoring and attack detection of networks modeled by Bayesian attack graphs

    Armita Kazeminajafabadi, Mahdi Imani
    pp. 1-15
    Abstract: Early attack detection is essential to ensure the security of complex networks, especially those in critical infrastructures. This is particularly crucial in networks with multi-stage attacks, where multiple nodes are connected to external sources through which attacks can enter and quickly spread to other network elements. Bayesian attack graphs (BAGs) are powerful models for security risk assessment and mitigation in complex networks, providing a probabilistic model of attacker behavior and attack progression in the network. Most attack detection techniques developed for BAGs rely on the assumption that network compromises will be detected through routine monitoring, which is unrealistic given the ever-growing complexity of threats. This paper derives the optimal minimum mean square error (MMSE) attack detection and monitoring policy for the most general form of BAGs. By exploiting the structure of BAGs and their partial and imperfect monitoring capacity, the proposed detection policy achieves an MMSE optimality that is otherwise attainable only for linear-Gaussian state-space models via Kalman filtering. An adaptive resource monitoring policy is also introduced, which monitors nodes whose expected predictive error exceeds a user-defined threshold. Exact and efficient matrix-form computations of the proposed policies are provided, and their high performance is demonstrated, in terms of attack detection accuracy and efficient use of available resources, on synthetic Bayesian attack graphs with different topologies.
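
To make the filtering idea above concrete, here is a minimal sketch of MMSE attack-state estimation on a toy three-node attack graph with noisy monitoring; the graph, the exploit and monitoring probabilities, and the two observation rounds are illustrative assumptions, not the paper's model or derivation.

# Minimal illustration (not the paper's derivation) of MMSE attack-state estimation on a
# tiny Bayesian attack graph: each node is 0 (safe) or 1 (compromised), compromise spreads
# from parents with assumed probabilities, and noisy binary monitoring is available on a
# subset of nodes. The MMSE estimate is the posterior mean of the binary state vector.
import itertools
import numpy as np

parents = {0: [], 1: [0], 2: [0, 1]}        # toy attack graph (node 0 faces the outside)
p_external = 0.3                             # prob. an attack enters at node 0 per step (assumed)
p_exploit = 0.6                              # success prob. of an exploit along an edge (assumed)
p_detect, p_false = 0.8, 0.1                 # monitor hit / false-alarm rates (assumed)
states = list(itertools.product([0, 1], repeat=3))

def trans_prob(x, x_next):
    """P(x_next | x): compromised nodes stay compromised; others are attacked via parents."""
    p = 1.0
    for i, parents_i in parents.items():
        if x[i] == 1:
            p_comp = 1.0
        else:
            p_miss = 1.0 - p_external if i == 0 else 1.0
            for j in parents_i:
                if x[j] == 1:
                    p_miss *= 1.0 - p_exploit
            p_comp = 1.0 - p_miss
        p *= p_comp if x_next[i] == 1 else 1.0 - p_comp
    return p

def obs_prob(x, y, monitored):
    """P(y | x) for noisy binary alerts on the monitored nodes."""
    p = 1.0
    for i in monitored:
        hit = p_detect if x[i] == 1 else p_false
        p *= hit if y[i] == 1 else 1.0 - hit
    return p

belief = np.array([1.0 if s == (0, 0, 0) else 0.0 for s in states])   # start uncompromised
for y, monitored in [({0: 1}, [0]), ({2: 0}, [2])]:                   # two monitoring rounds
    pred = np.array([sum(belief[k] * trans_prob(states[k], s) for k in range(len(states)))
                     for s in states])
    post = pred * np.array([obs_prob(s, y, monitored) for s in states])
    belief = post / post.sum()
    mmse = sum(belief[k] * np.array(states[k]) for k in range(len(states)))
    print("posterior mean (MMSE) compromise estimate:", np.round(mmse, 3))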

    Detecting compromised email accounts via login behavior characterization

    Jianjun Zhao, Can Yang, Di Wu, Yaqin Cao, et al.
    pp. 16-36
    Abstract: The illegal use of compromised email accounts by adversaries can have severe consequences for enterprises and society. Detecting compromised email accounts is more challenging than in the social network field, because email accounts have only a few types of interaction events (sending and receiving). To address the issue of insufficient features, we propose a novel approach to detecting compromised accounts that combines time zone differences and alternate logins to identify abnormal behavior. Based on this approach, we propose a compromised email account detection framework that relies on widely available and less sensitive login logs and does not require labels. Our framework characterizes login behaviors to identify logins that do not belong to the account owner, and outputs a list of account-subnet pairs ranked by their likelihood of having abnormal login relationships. This reduces the number of account-subnet pairs that need to be investigated and provides a reference for investigation priority. Our evaluation demonstrates that our method can detect most email accounts that have been accessed from disclosed malicious IP addresses and outperforms similar research. Additionally, our framework is able to uncover undisclosed malicious IP addresses.
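
As a rough illustration of the ranking idea described above, the following hypothetical heuristic scores account-subnet pairs from login logs using time-zone deviation and interleaved ("alternate") logins; the log format, weights, and time window are assumptions, and the sketch does not reproduce the authors' framework.

# Simplified, hypothetical scoring heuristic inspired by the abstract: rank account-subnet
# pairs by how far a login's source time zone deviates from the account's usual one, and by
# how often logins from different subnets interleave within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

logins = [  # (account, /24 subnet, UTC offset in hours, timestamp) -- toy data
    ("alice", "10.1.2.0", +8, datetime(2023, 5, 1, 9, 0)),
    ("alice", "10.1.2.0", +8, datetime(2023, 5, 2, 9, 5)),
    ("alice", "203.0.113.0", -5, datetime(2023, 5, 2, 9, 30)),
    ("alice", "10.1.2.0", +8, datetime(2023, 5, 2, 10, 0)),
]

def rank_account_subnet_pairs(logins, window=timedelta(hours=2)):
    usual_tz = defaultdict(list)
    for acct, subnet, tz, ts in logins:
        usual_tz[acct].append(tz)
    scores = defaultdict(float)
    by_acct = defaultdict(list)
    for rec in sorted(logins, key=lambda r: r[3]):
        by_acct[rec[0]].append(rec)
    for acct, recs in by_acct.items():
        home_tz = max(set(usual_tz[acct]), key=usual_tz[acct].count)   # most common offset
        for i, (_, subnet, tz, ts) in enumerate(recs):
            scores[(acct, subnet)] += abs(tz - home_tz)                # time-zone deviation
            for _, other_subnet, _, other_ts in recs[max(0, i - 3):i]:
                if other_subnet != subnet and ts - other_ts <= window:
                    scores[(acct, subnet)] += 1.0                      # alternate-login signal
    return sorted(scores.items(), key=lambda kv: -kv[1])

for pair, score in rank_account_subnet_pairs(logins):
    print(pair, round(score, 2))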

    A convolutional neural network to detect possible hidden data in spatial domain images

    Jean De La Croix Ntivuguruzwa, Tohari Ahmad
    pp. 37-52
    Abstract: Hiding secret data in digital multimedia has been essential for protecting the data. Nevertheless, attackers equipped with steganalysis techniques may break such protection. Existing steganalysis methods achieve good results with conventional machine learning (ML) techniques; however, the introduction of the convolutional neural network (CNN), a deep learning paradigm, has achieved better performance than the previously proposed ML-based techniques. Although the existing CNN-based approaches yield good results, they present performance issues in classification accuracy and stability during the network training phase. This research proposes a new method with a CNN architecture to improve hidden-data detection accuracy and training-phase stability for spatial domain images. The proposed method comprises three phases: pre-processing, feature extraction, and classification. First, in the pre-processing phase, we use spatial rich model filters to enhance the noise within images altered by data hiding; second, in the feature extraction phase, we use two-dimensional depthwise separable convolutions to improve the signal-to-noise ratio and regular convolutions to model local features; and finally, in the classification phase, we use multi-scale average pooling to aggregate local features and enhance representability regardless of input size variation, followed by three fully connected layers that form the final feature maps, which are transformed into class probabilities using the softmax function. The results show an accuracy improvement over a recent scheme ranging between 4.6% and 10.2%, with training time reduced by up to 30.81%.
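
The architecture described above can be sketched roughly as follows; the layer sizes, channel counts, and the single high-pass filter standing in for the spatial rich model filter bank are assumptions made for illustration, not the authors' configuration.

# Rough PyTorch sketch of the pipeline the abstract describes: high-pass "rich model"-style
# pre-processing, a depthwise separable convolution plus a regular convolution, multi-scale
# average pooling, three fully connected layers, and a softmax output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StegoCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Fixed high-pass kernel as a stand-in for spatial rich model (SRM) filters.
        kv = torch.tensor([[-1., 2., -2., 2., -1.],
                           [ 2., -6., 8., -6., 2.],
                           [-2., 8., -12., 8., -2.],
                           [ 2., -6., 8., -6., 2.],
                           [-1., 2., -2., 2., -1.]]) / 12.0
        self.register_buffer("hp_filter", kv.view(1, 1, 5, 5))
        # Depthwise separable convolution: depthwise then pointwise.
        self.depthwise = nn.Conv2d(1, 1, 3, padding=1, groups=1)
        self.pointwise = nn.Conv2d(1, 32, 1)
        self.conv = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Sequential(nn.Linear(64 * (1 + 4 + 16), 256), nn.ReLU(),
                                nn.Linear(256, 128), nn.ReLU(),
                                nn.Linear(128, num_classes))

    def forward(self, x):                                     # x: (N, 1, H, W) grayscale image
        x = F.conv2d(x, self.hp_filter, padding=2)            # noise-residual pre-processing
        x = F.relu(self.pointwise(self.depthwise(x)))         # depthwise separable block
        x = F.relu(self.conv(x))                              # regular convolution block
        pools = [F.adaptive_avg_pool2d(x, s).flatten(1) for s in (1, 2, 4)]  # multi-scale pooling
        x = torch.cat(pools, dim=1)                           # size-independent representation
        return F.softmax(self.fc(x), dim=1)                   # class probabilities

probs = StegoCNN()(torch.randn(2, 1, 256, 256))
print(probs.shape)   # torch.Size([2, 2])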

    Towards the universal defense for query-based audio adversarial attacks on speech recognition system

    Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju, et al.
    pp. 53-70
    Abstract: Recent studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio examples. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. Existing defense methods are either limited in application or defend only on the results rather than on the generation process. In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process. The insight behind this method is the observation that many existing audio AE attacks use query-based methods, which means the adversary must send continuous and similar queries to the target ASR model during the audio AE generation process. Inspired by this observation, we propose a memory mechanism that adopts audio fingerprint technology to analyze the similarity between the current query and a fixed-length memory of past queries. We can thus identify when a sequence of queries appears suspicious of generating audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that on average our defense identifies the adversary's intent with over 90% accuracy. With careful regard for robustness evaluation, we also analyze our proposed defense and its strength to withstand two adaptive attacks. Finally, our scheme is available out-of-the-box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
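
The memory mechanism described above can be illustrated with a toy defense that fingerprints each query with a coarse binarized spectrogram and flags a sender once too many recent queries are near-duplicates; the fingerprint, thresholds, and memory length are assumptions, not the paper's audio-fingerprint implementation.

# Toy illustration (not the authors' exact mechanism) of a query-memory defense: fingerprint
# each incoming audio query, compare it against the last N fingerprints, and flag the sender
# when too many recent queries are near-duplicates -- the signature of query-based AE search.
from collections import deque
import numpy as np

def fingerprint(audio, frame=512, bands=16):
    """Coarse binary fingerprint: per-frame band energies, thresholded at their median."""
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    energy = spec[:, : bands * (spec.shape[1] // bands)]
    energy = energy.reshape(spec.shape[0], bands, -1).sum(axis=2)
    return (energy > np.median(energy)).astype(np.uint8)

def similarity(fp_a, fp_b):
    n = min(len(fp_a), len(fp_b))
    return float((fp_a[:n] == fp_b[:n]).mean())      # fraction of matching bits

class QueryMemoryDefense:
    def __init__(self, memory_len=20, sim_threshold=0.9, max_similar=5):
        self.memory = deque(maxlen=memory_len)
        self.sim_threshold = sim_threshold
        self.max_similar = max_similar

    def check(self, audio):
        """Return True if the query stream looks like an ongoing AE generation attempt."""
        fp = fingerprint(audio)
        similar = sum(similarity(fp, old) >= self.sim_threshold for old in self.memory)
        self.memory.append(fp)
        return similar >= self.max_similar

defense = QueryMemoryDefense()
base = np.random.randn(16000)                         # 1 s of audio at 16 kHz (toy signal)
for step in range(10):
    query = base + 0.001 * np.random.randn(16000)     # small perturbations, as in AE search
    print(step, "suspicious" if defense.check(query) else "ok")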

    Security estimation of LWE via BKW algorithms

    Yu Wei, Lei Bi, Xianhui Lu, Kunpeng Wang, et al.
    pp. 71-87
    Abstract: The Learning With Errors (LWE) problem is widely used in lattice-based cryptography, the most promising direction in post-quantum cryptography. There is a variety of LWE-solving methods, which can be classified into four groups: lattice methods, algebraic methods, combinatorial methods, and exhaustive search. The Blum-Kalai-Wasserman (BKW) algorithm is an important class of combinatorial algorithms; it was first presented for solving the Learning Parity with Noise (LPN) problem and was then extended to solve LWE. In this paper, we give an overview of BKW algorithms for solving LWE. We introduce the framework and key techniques of BKW algorithms and make comparisons between different BKW algorithms, and also with lattice methods, by estimating the concrete security of specific LWE instances. We also briefly discuss the current problems and potential future directions of BKW algorithms.
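
For readers unfamiliar with BKW, the following bare-bones sketch shows its core reduction step on toy LWE samples (bucketing by a block of coordinates and subtracting colliding samples); the parameters are unrealistically small, and the final hypothesis-testing stage of the full algorithm is only noted in a comment.

# Bare-bones sketch of the BKW reduction on LWE samples (a, b = <a, s> + e mod q): bucket
# samples by a block of coordinates of a, then subtract samples in the same bucket so that
# block becomes zero. Each round shortens the effective secret but roughly doubles the noise.
import numpy as np

rng = np.random.default_rng(0)
q, n, block = 97, 6, 2                      # modulus, dimension, coordinates cleared per round
secret = rng.integers(0, q, n)

def lwe_samples(m, sigma=1.0):
    A = rng.integers(0, q, (m, n))
    e = np.rint(rng.normal(0, sigma, m)).astype(int)
    return A, (A @ secret + e) % q

def bkw_round(A, b, lo, hi):
    """Clear coordinates [lo, hi) of every a-vector by subtracting colliding samples."""
    buckets, newA, newb = {}, [], []
    for a_vec, b_val in zip(A, b):
        key = tuple(a_vec[lo:hi])
        if key in buckets:
            a0, b0 = buckets[key]
            newA.append((a_vec - a0) % q)   # block [lo, hi) cancels; noise grows
            newb.append((b_val - b0) % q)
        else:
            buckets[key] = (a_vec, b_val)
    return np.array(newA), np.array(newb)

A, b = lwe_samples(50000)
for lo in range(0, n, block):               # successive BKW reduction rounds
    A, b = bkw_round(A, b, lo, lo + block)
    print(f"after clearing coords [{lo},{lo + block}): {len(A)} samples remain")
# What remains here are samples with all-zero a-vectors, i.e. b is accumulated noise only;
# the full algorithm instead stops before the last block and hypothesis-tests the secret there.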

    MRm-DLDet: a memory-resident malware detection framework based on memory forensics and deep neural network

    Jiaxi Liu, Yun Feng, Xinyu Liu, Jianjun Zhao, et al.
    pp. 88-109
    Abstract: Cyber attackers have constantly updated their attack techniques to evade antivirus detection in recent years. One popular evasion method is to execute malicious code and perform malicious actions only in memory. Malicious programs that use this attack method are called memory-resident malware; they have excellent evasion capability and pose huge threats to cyber security. Traditional static and dynamic methods are not effective in detecting memory-resident malware. In addition, existing memory forensics detection solutions perform unsatisfactorily in detection rate and depend on massive expert knowledge of memory analysis. This paper proposes MRm-DLDet, a memory-resident malware detection framework, to overcome these drawbacks. MRm-DLDet first builds a virtual machine environment and captures memory dumps, then processes the memory dumps into RGB images using a pre-processing technique that combines deduplication and ultra-high-resolution image cropping, followed by our neural network MRmNet, which fully extracts high-dimensional features from the memory dump files and detects them. MRmNet receives the labeled sub-images of the cropped high-resolution RGB images as input to ResNet-18, which extracts the features of the sub-images, and then trains a network of gated recurrent units with an attention mechanism. Finally, it determines whether a program is memory-resident malware based on the detection results of each sub-image through a specially designed voting layer. We created a high-quality dataset consisting of 2,060 benign and memory-resident programs; the dataset contains 1,287,500 labeled sub-images cut from the ultra-high-resolution RGB images produced by MRm-DLDet. We implement MRm-DLDet for Windows 10, and it performs better than the latest methods, with a detection accuracy of up to 98.34%. Moreover, we measured the effects of mimicry and adversarial attacks on MRm-DLDet, and the experimental results demonstrate its robustness.
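
A much-simplified sketch of the dump-to-image and voting stages described above is given below; the image width, tile size, and placeholder per-tile classifier are assumptions, since the paper's MRmNet (ResNet-18 plus a GRU with attention) is not reproduced here.

# Simplified illustration (assumptions throughout, not the authors' exact pipeline): pack raw
# memory-dump bytes into an RGB image, crop it into fixed-size sub-images, score each sub-image
# with a placeholder classifier, and combine the per-tile results with a simple majority vote.
import numpy as np

def dump_to_rgb(dump_bytes, width=1024):
    """Pack bytes into an H x width x 3 RGB array, padding the tail with zeros."""
    data = np.frombuffer(dump_bytes, dtype=np.uint8)
    row_len = width * 3
    padded = np.zeros(int(np.ceil(len(data) / row_len)) * row_len, dtype=np.uint8)
    padded[: len(data)] = data
    return padded.reshape(-1, width, 3)

def crop_sub_images(image, tile=256):
    h, w, _ = image.shape
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

def tile_score(tile):
    """Placeholder for the per-sub-image classifier (ResNet-18 + GRU with attention in the paper)."""
    return float(tile.mean() > 127)          # dummy decision, for illustration only

def classify_dump(dump_bytes):
    tiles = crop_sub_images(dump_to_rgb(dump_bytes))
    votes = [tile_score(t) for t in tiles]
    return "memory-resident malware" if np.mean(votes) > 0.5 else "benign", len(tiles)

dump = np.random.randint(0, 256, 4 * 1024 * 1024, dtype=np.uint8).tobytes()   # fake 4 MiB dump
verdict, n_tiles = classify_dump(dump)
print(verdict, f"({n_tiles} sub-images voted)")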

    FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems

    Kaisheng Fan, Weizhe Zhang, Guangrui Liu, Hui He, et al.
    pp. 110-121
    Abstract: Intrusion detection systems increasingly use machine learning. While machine learning has shown excellent performance in identifying malicious traffic, it may increase the risk of privacy leakage. This paper focuses on implementing a model stealing attack on intrusion detection systems. Existing model stealing attacks are hard to implement in practical network environments, as they need either private data from the victim dataset or frequent access to the victim model. In this paper, we propose a novel solution called Fast Model Stealing Attack (FMSA) to address this problem in the field of model stealing attacks. We also highlight the risks of using ML-NIDS in network security. First, meta-learning frameworks are introduced into the model stealing algorithm to clone the victim model in a black-box setting. Then, the number of accesses to the target model is used as an optimization term, so that model stealing is achieved with minimal queries. Finally, adversarial training is used to simulate the data distribution of the target model and recover private data. In experiments on multiple public datasets, compared with existing state-of-the-art algorithms, FMSA reduces the number of accesses to the target model while improving the accuracy of the clone model on the test dataset to 88.9% and its similarity with the target model to 90.1%. We demonstrate the successful execution of model stealing attacks on an ML-NIDS system even with protective measures in place to limit the number of anomalous queries.
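
The query-budgeted extraction setting described above can be illustrated with a generic black-box cloning loop; the stand-in victim NIDS, the synthetic query distribution, and the budget are assumptions, and the sketch omits FMSA's meta-learning and adversarial-training components.

# Generic black-box model-extraction loop under a query budget, sketched to illustrate the
# setting the abstract describes (it does not implement FMSA itself).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_features, query_budget = 20, 300

# Stand-in victim: an ML-NIDS trained on "private" traffic features the attacker never sees.
X_priv = rng.normal(size=(5000, n_features))
y_priv = (X_priv[:, :3].sum(axis=1) > 0).astype(int)          # toy benign/malicious rule
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_priv, y_priv)

def query_victim(X):
    """Black-box oracle: the attacker only sees predicted labels."""
    return victim.predict(X)

# Attack loop: spend the query budget on synthetic inputs, then fit a clone on the answers.
X_query = rng.normal(size=(query_budget, n_features))
y_query = query_victim(X_query)
clone = LogisticRegression(max_iter=1000).fit(X_query, y_query)

# Agreement ("similarity") between clone and victim on held-out traffic.
X_test = rng.normal(size=(2000, n_features))
agreement = (clone.predict(X_test) == victim.predict(X_test)).mean()
print(f"clone/victim agreement with {query_budget} queries: {agreement:.2%}")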

    DLP: towards active defense against backdoor attacks with decoupled learning process

    Zonghao Ying, Bin Wu
    pp. 122-134
    Abstract: Deep learning models are well known to be susceptible to backdoor attacks, in which the attacker only needs to provide a tampered dataset into which triggers have been injected. Models trained on such a dataset passively implant the backdoor, and triggers on the input can mislead the models during testing. Our study shows that a model exhibits different learning behaviors on the clean and poisoned subsets during training. Based on this observation, we propose a general training pipeline to actively defend against backdoor attacks. Benign models can be trained from the unreliable dataset by decoupling the learning process into three stages, i.e., supervised learning, active unlearning, and active semi-supervised fine-tuning. The effectiveness of our approach is shown in numerous experiments across various backdoor attacks and datasets.
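
The three decoupled stages named above can be laid out as a training skeleton; the low-loss splitting criterion, the gradient-ascent unlearning step, and the pseudo-labeling used during fine-tuning are assumed details for illustration, not the paper's DLP procedure.

# Skeleton of a three-stage "decoupled" training pipeline: (1) supervised learning on the
# possibly poisoned data, (2) unlearning on samples flagged as suspicious, (3) semi-supervised
# fine-tuning that trusts labels only on the remaining (likely clean) samples.
import torch
import torch.nn.functional as F

def dlp_train(model, dataset, epochs=(10, 1, 5), lr=1e-3, suspicious_frac=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)

    # Stage 1: plain supervised learning on the (possibly poisoned) dataset.
    for _ in range(epochs[0]):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

    # Split: treat the lowest-loss fraction as suspicious (an assumed criterion; poisoned
    # samples are often fit unusually easily).
    with torch.no_grad():
        losses = torch.cat([F.cross_entropy(model(x), y, reduction="none")
                            for x, y in torch.utils.data.DataLoader(dataset, batch_size=512)])
    k = int(suspicious_frac * len(losses))
    suspicious_idx = torch.topk(-losses, k).indices.tolist()
    clean_idx = [i for i in range(len(losses)) if i not in set(suspicious_idx)]
    sus_loader = torch.utils.data.DataLoader(torch.utils.data.Subset(dataset, suspicious_idx),
                                             batch_size=128, shuffle=True)
    clean_loader = torch.utils.data.DataLoader(torch.utils.data.Subset(dataset, clean_idx),
                                               batch_size=128, shuffle=True)

    # Stage 2: active unlearning -- gradient *ascent* on the suspicious subset.
    for _ in range(epochs[1]):
        for x, y in sus_loader:
            opt.zero_grad()
            (-F.cross_entropy(model(x), y)).backward()
            opt.step()

    # Stage 3: semi-supervised fine-tuning -- true labels on the clean subset,
    # model pseudo-labels (original labels discarded) on the suspicious subset.
    for _ in range(epochs[2]):
        for x, y in clean_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        for x, _ in sus_loader:
            opt.zero_grad()
            pseudo = model(x).argmax(dim=1).detach()
            F.cross_entropy(model(x), pseudo).backward()
            opt.step()
    return model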