期刊信息/Journal Information

Journal: Journal of Electronic Science and Technology (电子科技学刊; formerly Journal of Electronic Science and Technology of China, JESTC)
Sponsor: University of Electronic Science and Technology of China (电子科技大学)
Editor-in-Chief: Zhou Xiaojia (周小佳)
Frequency: Quarterly
ISSN: 1674-862X
E-mail: journal@intl-jest.com
Telephone: 028-83200028, 83200199
Postal code: 610054
Address: No. 4, Section 2, Jianshe North Road, Chengdu
Indexing: CSCD; Peking University Core Journals (北大核心)
Journal of Electronic Science and Technology (Chinese title: 《电子科技学刊》; abbreviated JEST; formerly 《中国电子科技》/Journal of Electronic Science and Technology of China, JESTC) was founded in December 2003. It is an academic quarterly supervised by the Ministry of Education, sponsored by the University of Electronic Science and Technology of China (UESTC), and edited and published by the Editorial Department of the Journal of UESTC. JEST is an all-English academic journal devoted to electronic science and technology, publishing research results, review articles, and research letters from China and abroad. Its main sections include communication technology, computer science and information technology, information and network security, bioelectronics and biomedicine, neural networks and intelligent systems, and optoelectronics and photonics technology. Drawing on UESTC's leading position among China's electronics disciplines, JEST aims to promote academic exchange in electronic science and technology. It has grown rapidly in recent years into a fully international journal, with high proportions of overseas papers and overseas reviewers. JEST is indexed by INSPEC, Chemical Abstracts (CA, USA), and Wanfang Data (China), among other databases, and is an open-access journal in DOAJ (Europe) and CAOD (Canada).

    Iterative physical optics method based on efficient occlusion judgment with bounding volume hierarchy technology

    Yang Su, Yu-Mao Wu, Jun Hu
    pp. 1-12
    Abstract: This paper builds a binary tree for the target based on the bounding volume hierarchy technology, thereby achieving strict acceleration of the shadow judgment process and reducing the computational complexity from the original O(N³) to O(N² log N). Numerical results show that the proposed method is more efficient than the traditional method. It is verified in multiple examples that the proposed method can complete the convergence of the current. Moreover, the proposed method avoids the error of judging the lit-shadow relationship based on the normal vector, which is beneficial to current iteration and convergence. Compared with the brute force method, the current method can improve the simulation efficiency by 2 orders of magnitude. The proposed method is more suitable for scattering problems in electrically large cavities and complex scenarios.
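
    A minimal illustrative sketch (not the paper's implementation) of the idea described above: facets are organised into a bounding volume hierarchy (BVH) built by median splits, and the lit/shadow state of a facet is decided by tracing the source-to-facet segment through the tree instead of testing every facet pair, which is what accelerates the occlusion judgment. All names here (Facet, BVHNode, is_shadowed) and the leaf size are assumptions made for illustration.

```python
# Illustrative sketch only: BVH-accelerated shadow (occlusion) judgment.
import numpy as np

class Facet:
    def __init__(self, v0, v1, v2):
        self.v = np.array([v0, v1, v2], dtype=float)   # triangle vertices
        self.centre = self.v.mean(axis=0)

class BVHNode:
    def __init__(self, facets, leaf_size=2):
        pts = np.concatenate([f.v for f in facets])
        self.lo, self.hi = pts.min(axis=0), pts.max(axis=0)   # node AABB
        if len(facets) <= leaf_size:                           # leaf node
            self.facets, self.children = facets, []
        else:                                                  # median split on widest axis
            axis = int(np.argmax(self.hi - self.lo))
            facets = sorted(facets, key=lambda f: f.centre[axis])
            mid = len(facets) // 2
            self.facets = []
            self.children = [BVHNode(facets[:mid], leaf_size),
                             BVHNode(facets[mid:], leaf_size)]

def ray_hits_box(node, origin, inv_dir, t_max):
    # Slab test: does the segment (parameter t in [0, t_max]) enter the AABB?
    t0 = (node.lo - origin) * inv_dir
    t1 = (node.hi - origin) * inv_dir
    t_near = np.minimum(t0, t1).max()
    t_far = np.maximum(t0, t1).min()
    return bool(t_near <= t_far and t_far >= 0.0 and t_near <= t_max)

def ray_hits_triangle(origin, direction, tri, t_max, eps=1e-9):
    # Moeller-Trumbore ray/triangle intersection restricted to (eps, t_max).
    e1, e2 = tri.v[1] - tri.v[0], tri.v[2] - tri.v[0]
    h = np.cross(direction, e2)
    det = e1 @ h
    if abs(det) < eps:
        return False
    f = 1.0 / det
    s = origin - tri.v[0]
    u = f * (s @ h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * (direction @ q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * (e2 @ q)
    return eps < t < t_max

def is_shadowed(root, source, facet):
    # True if any other facet blocks the segment from the source to this facet's centre.
    source = np.asarray(source, dtype=float)
    direction = facet.centre - source
    inv_dir = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t_max = 1.0 - 1e-6                  # stop just short of the target facet
    stack = [root]
    while stack:
        node = stack.pop()
        if not ray_hits_box(node, source, inv_dir, t_max):
            continue
        for tri in node.facets:
            if tri is not facet and ray_hits_triangle(source, direction, tri, t_max):
                return True
        stack.extend(node.children)
    return False
```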

    Evaluation of a software positioning tool to support SMEs in adoption of big data analytics

    Matthew Willetts, Anthony S. Atkins
    pp. 13-24
    Abstract: Big data analytics has been widely adopted by large companies to achieve measurable benefits including increased profitability, customer demand forecasting, cheaper development of products, and improved stock control. Small and medium sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics despite the competitive advantage they could achieve. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group composed of experienced practitioners. The results of the evaluation are presented with a discussion, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool is beneficial for SMEs to achieve competitive advantages by increasing the application of business intelligence and big data analytics.

    Big data challenge for monitoring quality in higher education institutions using business intelligence dashboards

    Ali Sorour, Anthony S. Atkins
    pp. 25-41
    Abstract: As big data becomes an apparent challenge to handle when building a business intelligence (BI) system, there is a motivation to handle this challenging issue in higher education institutions (HEIs). Monitoring quality in HEIs encompasses handling huge amounts of data coming from different sources. This paper reviews big data and analyses the cases from the literature regarding quality assurance (QA) in HEIs. It also outlines a framework that can address the big data challenge in HEIs to handle QA monitoring using BI dashboards, and a prototype dashboard is presented in this paper. The dashboard was developed using a utilisation tool to monitor QA in HEIs to provide visual representations of big data. The prototype dashboard enables stakeholders to monitor compliance with QA standards while addressing the big data challenge associated with the substantial volume of data managed by HEIs' QA systems. This paper also outlines how the developed system integrates big data from social media into the monitoring dashboard.

    Fine-grained grid computing model for Wi-Fi indoor localization in complex environments

    Yan Liang, Song Chen, Xin Dong, Tu Liu, et al.
    pp. 42-52
    Abstract: The fingerprinting-based approach using the wireless local area network (WLAN) is widely used for indoor localization. However, the construction of the fingerprint database is quite time-consuming. Especially when the position of the access point (AP) or a wall changes, updating the fingerprint database in real time is difficult. An appropriate indoor localization approach, which has a low implementation cost, excellent real-time performance, and high localization accuracy and fully considers complex indoor environment factors, is preferred in location-based services (LBSs) applications. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated with the attenuation factors, such as the frequency band, three-dimensional propagation distance, and walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, and the efficiency and cost of building the fingerprint database with the FGGC model are superior to those of previous methods. The proposed indoor localization approach, which estimates the position step by step from the approximate grid location to the fine-grained location, can achieve higher real-time performance and localization accuracy simultaneously. The mean error of the proposed model is 0.36 m, far lower than that of previous approaches. Thus, the proposed model is feasible to improve the efficiency and accuracy of Wi-Fi indoor localization. It also shows high-accuracy performance with a fast running speed even under a large-size grid. The results indicate that the proposed method can also be suitable for precise marketing, indoor navigation, and emergency rescue.
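
    The abstract describes computing the RSS value at each reference point from the frequency band, the three-dimensional propagation distance, and the walls crossed. As a rough illustration of how one entry of such a synthetic fingerprint could be generated, the sketch below uses a standard log-distance path-loss model with a fixed per-wall penalty; the model form and every parameter value are assumptions, not the paper's FGGC formulation.

```python
# Illustrative sketch only: synthesising an RSS fingerprint entry for one
# reference point (RP) from a log-distance path-loss model plus wall losses.
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55, with d in metres and f in Hz.
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def predicted_rss_dbm(ap_pos, rp_pos, tx_power_dbm=20.0, freq_hz=2.4e9,
                      path_loss_exponent=3.0, walls_crossed=0, wall_loss_db=5.0):
    # Three-dimensional propagation distance between the AP and the reference point.
    d = max(math.dist(ap_pos, rp_pos), 1.0)        # clamp to the 1 m reference distance
    # Log-distance model: loss at 1 m, plus the distance-dependent term, plus wall penalties.
    loss = (free_space_path_loss_db(1.0, freq_hz)
            + 10 * path_loss_exponent * math.log10(d)
            + walls_crossed * wall_loss_db)
    return tx_power_dbm - loss

# Example: an AP mounted at (0, 0, 2.5) m and an RP one room away behind two walls.
print(predicted_rss_dbm(ap_pos=(0.0, 0.0, 2.5), rp_pos=(8.0, 3.0, 1.2), walls_crossed=2))
```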

    Multi-scale persistent spatiotemporal transformer for long-term urban traffic flow prediction

    Jia-Jun Zhong, Yong Ma, Xin-Zheng Niu, Philippe Fournier-Viger, et al.
    pp. 53-69
    Abstract: Long-term urban traffic flow prediction is an important task in the field of intelligent transportation, as it can help optimize traffic management and improve travel efficiency. To improve prediction accuracy, a crucial issue is how to model spatiotemporal dependency in urban traffic data. In recent years, many studies have adopted spatiotemporal neural networks to extract key information from traffic data. However, most models ignore the semantic spatial similarity between long-distance areas when mining spatial dependency. They also ignore the impact of predicted time steps on the next unpredicted time step for making long-term predictions. Moreover, these models lack a comprehensive data embedding process to represent complex spatiotemporal dependency. This paper proposes a multi-scale persistent spatiotemporal transformer (MSPSTT) model to perform accurate long-term traffic flow prediction in cities. MSPSTT adopts an encoder-decoder structure and incorporates temporal, periodic, and spatial features to fully embed urban traffic data to address these issues. The model consists of a spatiotemporal encoder and a spatiotemporal decoder, which rely on temporal, geospatial, and semantic space multi-head attention modules to dynamically extract temporal, geospatial, and semantic characteristics. The spatiotemporal decoder combines the context information provided by the encoder, integrates the predicted time step information, and is iteratively updated to learn the correlation between different time steps in the broader time range to improve the model's accuracy for long-term prediction. Experiments on four public transportation datasets demonstrate that MSPSTT outperforms the existing models by up to 9.5% on three common metrics.
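
    As a structural illustration of the kind of encoder layer the abstract describes, with attention applied along both the temporal and the spatial axes of a traffic tensor, a minimal PyTorch sketch follows. It is not the authors' MSPSTT code: the semantic-space attention, periodic embeddings, and decoder are omitted, and all layer names and dimensions are assumptions.

```python
# Illustrative sketch only: one spatiotemporal encoder layer with temporal and
# spatial multi-head attention over a tensor shaped (batch, time, nodes, features).
import torch
import torch.nn as nn

class SpatioTemporalEncoderLayer(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x):                                 # x: (B, T, N, D)
        b, t, n, d = x.shape
        # Temporal attention: each node attends over its own time steps.
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        xt = self.norm1(xt + self.temporal_attn(xt, xt, xt)[0])
        x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
        # Spatial attention: at each time step, every node attends over all nodes.
        xs = x.reshape(b * t, n, d)
        xs = self.norm2(xs + self.spatial_attn(xs, xs, xs)[0])
        x = xs.reshape(b, t, n, d)
        # Position-wise feed-forward network with a residual connection.
        return self.norm3(x + self.ffn(x))

# Example: 16 time steps over 50 road segments with 64-dimensional embeddings.
layer = SpatioTemporalEncoderLayer()
out = layer(torch.randn(2, 16, 50, 64))                   # -> (2, 16, 50, 64)
```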

    Benchmarking YOLOv5 models for improved human detection in search and rescue missions

    Namat Bachir, Qurban Ali Memon
    pp. 70-80
    Abstract: Drone or unmanned aerial vehicle (UAV) technology has undergone significant changes. The technology allows UAVs to carry out a wide range of tasks with an increasing level of sophistication, since drones can cover a large area with cameras. Meanwhile, the increasing number of computer vision applications utilizing deep learning provides a unique insight into such applications. The primary target in UAV-based detection applications is humans, yet aerial recordings are not included in the massive datasets used to train object detectors, which makes it necessary to gather the model data from such platforms. You only look once (YOLO) version 4, RetinaNet, faster region-based convolutional neural network (R-CNN), and cascade R-CNN are several well-known detectors that have been studied in the past using a variety of datasets to replicate rescue scenes. Here, we used the search and rescue (SAR) dataset to train the you only look once version 5 (YOLOv5) algorithm to validate its speed, accuracy, and low false detection rate. In comparison to YOLOv4 and R-CNN, the highest mean average accuracy of 96.9% is obtained by YOLOv5. For comparison, experimental findings utilizing the SAR and the human rescue imaging database on land (HERIDAL) datasets are presented. The results show that the YOLOv5-based approach is the most successful human detection model for SAR missions.
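
    For readers unfamiliar with YOLOv5, the sketch below shows a typical way to run an off-the-shelf model through torch.hub on a single aerial frame. The COCO-pretrained "yolov5s" weights and the image path are stand-ins; the SAR-trained weights evaluated in the paper are not reproduced here.

```python
# Illustrative usage sketch: person detection on one aerial image with YOLOv5.
import torch

# Downloads the ultralytics/yolov5 repo and COCO-pretrained weights on first run.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                        # confidence threshold to suppress false detections
model.classes = [0]                     # keep only the COCO "person" class

results = model("aerial_scene.jpg")     # hypothetical drone frame
results.print()                         # summary: detection counts and inference speed
detections = results.pandas().xyxy[0]   # bounding boxes as a pandas DataFrame
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence"]])
```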

    Machine learning model based on non-convex penalized huberized-SVM

    Peng Wang, Ji Guo, Lin-Feng Li
    pp. 81-94
    Abstract: The support vector machine (SVM) is a classical machine learning method. Both the hinge loss and the least absolute shrinkage and selection operator (LASSO) penalty are usually used in traditional SVMs. However, the hinge loss is not differentiable, and the LASSO penalty does not have the Oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that has the advantages of both computational simplicity and the Oracle property, contributing to higher accuracy than traditional SVMs. It is experimentally demonstrated that the two non-convex huberized-SVM methods, smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM method in terms of prediction accuracy and classifier performance. They are also superior in terms of variable selection, especially when there is a high linear correlation between the variables. When applied to the prediction of listed companies, the variables that can affect and predict financial distress are accurately filtered out. Among all the indicators, the per-share indicators have the greatest influence, while the solvency indicators have the weakest influence. Listed companies can assess their financial situation with the indicators screened by our algorithm and obtain an early warning of possible financial distress with higher precision.
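
    A small sketch of the two ingredients named in the abstract, the huberized (smoothed) hinge loss and the minimax concave penalty (MCP), using one common parameterisation; the paper's exact formulation and tuning constants may differ, and the SCAD variant is omitted.

```python
# Illustrative sketch only: huberized hinge loss and MCP penalty.
import numpy as np

def huberized_hinge(margin, delta=2.0):
    # margin = y * f(x). Quadratic near the hinge point and linear far from it,
    # so the loss is differentiable everywhere (unlike the plain hinge loss).
    m = np.asarray(margin, dtype=float)
    return np.where(m > 1.0, 0.0,
           np.where(m >= 1.0 - delta, (1.0 - m) ** 2 / (2.0 * delta),
                    1.0 - m - delta / 2.0))

def mcp_penalty(beta, lam=0.1, gamma=3.0):
    # MCP behaves like the LASSO near zero but levels off for large coefficients,
    # which underlies the (near-)unbiasedness behind the Oracle property.
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(b <= gamma * lam,
                    lam * b - b ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def objective(w, b, X, y, lam=0.1, gamma=3.0, delta=2.0):
    # Penalised empirical risk of a linear classifier f(x) = X @ w + b.
    margins = y * (X @ w + b)
    return huberized_hinge(margins, delta).mean() + mcp_penalty(w, lam, gamma).sum()
```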

    Call for Papers: Special Section on Progress of Analysis Techniques for Domain-Specific Big Data

    Ling Tian, Jian-Hua Tao, Bin Zhou
    Back insert pp. 1-2