Journal information
Computers and Electronics in Agriculture
Elsevier Science Publishers
ISSN: 0168-1699

Indexed in: SCI, EI, ISTP
Officially published
Indexed period

    Cattle face recognition based on a Two-Branch convolutional neural network

    Weng Z., Zhang Y., Gong C., Meng F., ...
    9 pages
    Abstract: © 2022. Due to changes in cattle posture and differences in shooting angle, some features of collected cattle face images are missing, which degrades the accuracy of cattle face recognition. This paper proposes a cattle face recognition model based on a two-branch convolutional neural network (TB-CNN). Two cattle face images captured from different angles are fed into separate convolutional branches for feature extraction, the features of the two branches are fused, and a global average pooling layer combined with a classifier identifies the individual animal. A squeeze-and-excitation (SE) block is embedded in the feature extraction network to improve its feature extraction capability, and the global average pooling layer replaces the fully connected layer, which improves classification performance and reduces the number of network parameters. The experimental results show that the recognition rate of the TB-CNN model is 99.85% on the Simmental beef cattle face image dataset, 99.81% on the Holstein cow face image dataset, and 99.71% on the mixed beef cattle and cow dataset. The proposed model has good robustness and generalization ability; it effectively reduces the influence of face angle changes on the recognition rate and improves the accuracy of cattle face recognition.
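
    A minimal sketch of the two-branch idea described above, assuming PyTorch; the backbone layers, SE reduction ratio, 128x128 input size, and the num_cattle class count are illustrative placeholders, not the authors' exact configuration. Two angle-specific branches with SE blocks feed a fused, globally pooled feature into a linear classifier.

        import torch
        import torch.nn as nn

        class SEBlock(nn.Module):
            """Squeeze-and-excitation: reweight channels by global context."""
            def __init__(self, channels, reduction=16):
                super().__init__()
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid())
            def forward(self, x):
                w = x.mean(dim=(2, 3))                       # squeeze: per-channel global average
                w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
                return x * w                                 # excite: channel-wise rescaling

        def branch(out_ch=128):
            """One illustrative convolutional branch ending in an SE block."""
            return nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                SEBlock(out_ch))

        class TBCNN(nn.Module):
            def __init__(self, num_cattle=100):              # hypothetical number of individuals
                super().__init__()
                self.branch_a, self.branch_b = branch(), branch()
                self.gap = nn.AdaptiveAvgPool2d(1)           # global average pooling instead of FC layers
                self.classifier = nn.Linear(2 * 128, num_cattle)
            def forward(self, img_a, img_b):                 # two views of the same animal
                fa = self.gap(self.branch_a(img_a)).flatten(1)
                fb = self.gap(self.branch_b(img_b)).flatten(1)
                return self.classifier(torch.cat([fa, fb], dim=1))   # fused features -> identity

        logits = TBCNN()(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))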

    Discrimination of onion subjected to drought and normal watering mode based on fluorescence spectroscopic data

    Ropelewska E., Slavova V., Sabanci K., Fatih Aslan M., ...
    8 pages
    Abstract: © 2022 Elsevier B.V. Drought stress can affect the yield and quality of cultivated plants, and a water deficit can trigger physiological and anatomical responses at the organ, tissue and cellular levels of a plant. The objective of this study was to discriminate onion samples using innovative models built on fluorescence spectroscopic data with different classifiers. Onions grown under drought and normal watering conditions were compared. Additionally, five onion samples, comprising three varieties (Konkurent bql, Asenovgradska kaba, Trimoncium) and two lines (white, red), subjected to both the drought mode and the normal watering mode were differentiated. The results were evaluated using confusion matrices, average accuracies, and the values of TP (True Positive) Rate, FP (False Positive) Rate, Precision, F-Measure, ROC (Receiver Operating Characteristic) Area and PRC (Precision-Recall) Area. For the two-class discrimination of drought mode versus normal watering mode, the average accuracy reached 100% for the white onion line with models built using the Naive Bayes, Multilayer Perceptron, JRip and LMT classifiers, and for the red onion line with all of the classifiers used (Naive Bayes, Multilayer Perceptron, IBk, Multi Class Classifier, JRip, LMT); the TP Rate, Precision, F-Measure, ROC Area and PRC Area were all 1.000 and the FP Rate was 0.000. For onion samples subjected to drought, the five classes comprising the Konkurent, Asenovgradska kaba and Trimoncium varieties and the white and red lines were discriminated with an average accuracy of up to 90% by the LMT classifier. The same classes of samples subjected to normal watering were correctly distinguished with an accuracy of 84% by the Naive Bayes classifier.
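
    For illustration only: the study used Weka classifiers (Naive Bayes, Multilayer Perceptron, IBk, Multi Class Classifier, JRip, LMT); the sketch below substitutes a scikit-learn Naive Bayes on a synthetic fluorescence matrix to show how the reported metrics (confusion matrix, TP rate, precision, F-measure, ROC area) can be computed for the two-class drought vs. normal-watering case.

        import numpy as np
        from sklearn.model_selection import cross_val_predict
        from sklearn.naive_bayes import GaussianNB
        from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))      # placeholder fluorescence spectra: 120 samples x 50 bands
        y = rng.integers(0, 2, size=120)    # 0 = normal watering, 1 = drought

        clf = GaussianNB()                  # stand-in for the Weka Naive Bayes classifier
        pred = cross_val_predict(clf, X, y, cv=10)
        proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]

        print(confusion_matrix(y, pred))                    # rows: true class, cols: predicted class
        print(classification_report(y, pred, digits=3))     # precision, recall (TP rate), F-measure
        print("ROC area:", roc_auc_score(y, proba))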

    Novel, technical advance: A new grapevine transpiration prototype for grape berries and whole bunch based on relative humidity sensors

    Morales F., Santesteban H., Luquin J., Irigoyen J.J., ...
    9 pages
    Abstract: © 2022 The Authors. Grape berry transpiration is considered an important process during maturation, but scientific evidence is scarce; the literature contains only one report showing reduced maturation when bunch transpiration is artificially slowed down. Traditionally, grape berry transpiration has been measured by weighing grape berries on a scale over a given time, correctly assuming that the weight reduction is due to water loss. Commercially available instruments suitable for measuring gas exchange in small fruits are not adequate for a whole grape bunch. Here, we present an open differential chamber system, based on Vaisala relative humidity sensors, that can measure the transpiration of either isolated grape berries or a whole bunch. For isolated berries, the system was validated using Tempranillo grape berries collected at different phenological stages. For the whole-bunch prototype, two validations were carried out: first, measurements were made with an increasing number of water-filled Eppendorf tubes placed inside the chamber; second, transpiration was measured in whole Tempranillo bunches sampled at different phenological stages. An important finding is that detaching the bunch from the plant did not change bunch gas exchange rates for at least several hours. In the validations, transpiration values obtained with the prototype were compared with water losses inferred from weighing over the same period, yielding highly significant correlations. We also tested the system by applying an anti-transpirant to the bunch, confirming that the application reduced bunch transpiration and delayed maturity.
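
    The abstract does not give the underlying calculation; the sketch below is an assumption based on the standard open-chamber mass balance (transpiration equals air flow times the inlet-to-outlet difference in water vapour mole fraction, with mole fraction reconstructed from relative humidity via the Tetens equation), not necessarily the authors' exact equations or sensor interface.

        import math

        def saturation_vapour_pressure_kpa(t_celsius):
            """Tetens approximation of saturation vapour pressure over water (kPa)."""
            return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

        def transpiration_mmol_s(rh_in, rh_out, t_air_c, flow_mol_s, pressure_kpa=101.3):
            """Open-chamber estimate: E = flow * (w_out - w_in), with w the water vapour
            mole fraction derived from relative humidity (0-1) and air temperature."""
            es = saturation_vapour_pressure_kpa(t_air_c)
            w_in = rh_in * es / pressure_kpa             # inlet vapour mole fraction
            w_out = rh_out * es / pressure_kpa           # outlet vapour mole fraction
            return flow_mol_s * (w_out - w_in) * 1000.0  # mmol H2O per second

        # Hypothetical reading: the bunch raises RH from 45% to 52% at 25 degC with 0.02 mol air/s flow
        print(transpiration_mmol_s(0.45, 0.52, 25.0, 0.02))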

    Improved Na+ estimation from hyperspectral data of saline vegetation by machine learning

    Chen D., Zhang F., Liu C., Wang W., ...
    13 pages
    Abstract: © 2022 Elsevier B.V. Monitoring the growth state of vegetation using remote sensing is a current trend in agricultural research. This study aims to identify an optimal hyperspectral extraction framework to improve leaf Na+ monitoring in northwestern China based on hyperspectral data of saline vegetation. Partial Least Squares (PLS), Support Vector Machine (SVM) and Random Forest (RF) models were constructed to estimate leaf Na+, while Aggregated Boosted Tree (ABT) and Random Forest (RF) variable importance screening methods were used to optimize the variables used for leaf Na+ extraction; the optimal variable screening method and inversion model were then identified. The results showed that estimating the Na+ content of saline vegetation leaves from constructed spectral indices is feasible, as 33 vegetation indices met the requirements; the RF (R2 = 0.73, RMSE = 0.50) and PLS (R2 = 0.72, RMSE = 0.59) models performed relatively well, followed by the SVM (R2 = 0.68, RMSE = 0.53) model. All three models were improved by the ABT variable importance screening method, with the RF (R2 = 0.81, RMSE = 0.42) model benefiting most; similarly, with the RF importance screening method all three models improved significantly, the most effective being the SVM (R2 = 0.82, RMSE = 0.45) model. This study indicates that ABT-RF and RF-SVM are the most suitable combination frameworks for inverting the Na+ content of saline vegetation leaves, offering insight into pairing variable screening approaches with model building and improving the accuracy of hyperspectral monitoring of vegetation chemical characteristics.
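
    A minimal sketch of the screen-then-model pattern, assuming scikit-learn and synthetic data: random-forest importances stand in for the variable screening step (the study's ABT screening is not reproduced) before an SVM regressor is fitted on the retained indices; the feature counts and hyper-parameters are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(1)
        X = rng.normal(size=(150, 33))                               # 33 candidate vegetation indices
        y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=150)   # synthetic leaf Na+ target

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

        # Step 1: rank variables by random-forest importance and keep the top 10
        rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)
        top = np.argsort(rf.feature_importances_)[::-1][:10]

        # Step 2: fit the final model (an RF-SVM style combination) on the screened variables
        svm = SVR(kernel="rbf", C=10.0).fit(X_tr[:, top], y_tr)
        pred = svm.predict(X_te[:, top])
        rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
        print("R2 =", round(r2_score(y_te, pred), 2), "RMSE =", round(rmse, 2))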

    AGROVOC: The linked data concept hub for food and agriculture

    Subirats-Coll I., Kolshus K., Turbati A., Stellato A., ...
    12 pages
    Abstract: © 2021 The Food and Agriculture Organization of the United Nations. Newly acquired, aggregated and shared data are essential for innovation in food and agriculture and for improving the discoverability of research. Since the early 1980s, the Food and Agriculture Organization of the United Nations (FAO) has coordinated AGROVOC, a valuable tool for classifying data homogeneously, facilitating interoperability and reuse. AGROVOC is a multilingual, controlled vocabulary designed to cover concepts and terminology in FAO's areas of interest. It is the largest Linked Open Data set about agriculture available for public use, and its greatest impact lies in facilitating the access and visibility of data across domains and languages. This chapter describes the current status of one of the most popular thesauri covering FAO's areas of interest and how, through new procedures put in place, it has become the Linked Data concept hub for food and agriculture.
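
    Because AGROVOC is published as SKOS Linked Open Data, its concepts can be queried over SPARQL. A hedged sketch follows; the endpoint URL and the example search term are assumptions to be checked against the current AGROVOC documentation, and SPARQLWrapper is an external package.

        from SPARQLWrapper import SPARQLWrapper, JSON   # pip install sparqlwrapper

        # Endpoint URL assumed; verify against the AGROVOC documentation before use.
        endpoint = SPARQLWrapper("https://agrovoc.fao.org/sparql")
        endpoint.setQuery("""
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?concept ?label WHERE {
          ?concept skos:prefLabel ?label .
          FILTER (lang(?label) = "en" && CONTAINS(LCASE(STR(?label)), "maize"))
        } LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)

        # Print matching concept URIs with their English preferred labels
        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["concept"]["value"], "->", row["label"]["value"])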

    Design and implementation of a smart beehive and its monitoring system using microservices in the context of IoT and open data

    Aydin S., Nafiz Aydin M.
    18 pages
    Abstract: © 2022 Elsevier B.V. Keeping honey bees healthy is essential for a sustainable ecological balance, and one way of doing so is to monitor and control conditions both inside and outside the beehive. Monitoring systems offer domain stakeholders an effective way of accessing, visualizing, sharing, and managing data gathered from agricultural and livestock activities, and such systems have recently been implemented with wireless sensor networks (WSN) and IoT to monitor the activities of honey bees in beehives as well. Scholars have shown considerable interest in IoT- and WSN-based beehive monitoring systems, but much of the research to date lacks an appropriate architecture for open-data-driven beehive monitoring. A robust monitoring system based on a contemporary software architecture such as microservices can help control honey bee activity and, more importantly, keep bees healthy in their hives. This research sets out to design and implement a sustainable WSN-based beehive monitoring platform using a microservice architecture. We show that by adopting microservices one can address long-standing heterogeneity, interoperability, scalability, agility, reliability, and maintainability issues, and in turn achieve sustainable WSN-based beehive monitoring systems.
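
    A minimal sketch of one candidate microservice in such a platform, assuming FastAPI; the service name, payload fields, routes, and in-memory storage are illustrative assumptions, not the architecture described in the paper. The point is that a narrowly scoped service (here, sensor ingestion) can be deployed, scaled, and replaced independently of storage, alerting, or open-data publishing services.

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI(title="hive-telemetry")        # one microservice: sensor ingestion only

        class HiveReading(BaseModel):
            hive_id: str
            temperature_c: float                     # inside-hive temperature
            humidity_pct: float
            weight_kg: float                         # hive weight (nectar flow / swarm indicator)

        READINGS: list[HiveReading] = []             # in-memory store; a real service would publish to a queue or DB

        @app.post("/readings")
        def ingest(reading: HiveReading):
            READINGS.append(reading)
            return {"stored": len(READINGS)}

        @app.get("/readings/{hive_id}")
        def latest(hive_id: str):
            hive = [r for r in READINGS if r.hive_id == hive_id]
            return hive[-1] if hive else {"error": "no data for hive"}

        # Run with: uvicorn hive_service:app --port 8001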

    Carp-DCAE: Deep convolutional autoencoder for carp fish classification

    Banerjee A., Bhattacharjee D., Nasipuri M., Das N., ...
    16 pages
    Abstract: © 2022 Elsevier B.V. The fisheries industry relies heavily on automatic fish species identification for its socio-economic well-being. Due to the similarity in shape and size of the major carps, it can be difficult to recognise them using morphological features. To recognise these species automatically, our proposed autoencoder network models were applied to a fish dataset containing 1500 images of three major carps of India, with the autoencoder's latent representation used as the feature. After the training phase is complete, the decoder is removed and fish species are categorised using several classifiers. Different variations of autoencoders, namely the Simple Autoencoder, the Deep Autoencoder, and the Deep Convolutional Autoencoder, were applied with different hyperparameters. An encouraging maximum accuracy of 97.33% was obtained in 250 epochs with a learning rate of 0.0001 using the Deep Convolutional Autoencoder. Well-known machine learning classifiers, such as Logistic Regression, Naive Bayes, K-Nearest Neighbor, Support Vector Machine, and Random Forest, were also used to evaluate the effectiveness of the latent representation as a feature vector, and the Support Vector Machine applied to the latent representation of the Deep Convolutional Autoencoder significantly outperformed all other approaches. The models' performance was compared with that of handcrafted features (Hu moments, Haralick texture, Weber local descriptor, HOG descriptor, etc.) paired with their best classifiers, as well as with deep learning models such as InceptionV3, InceptionResNetV2, MobileNet, VGG16 and VGG19; the Deep Convolutional Autoencoder outperformed them by 52%, 43.55%, 13.77%, 6.67%, 22.22%, 15.11%, 6.66%, 4.89%, and 9.78%, respectively, demonstrating the efficacy of this systematic study in identifying major carps.
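
    A minimal sketch of the encode-then-classify pipeline, assuming PyTorch and scikit-learn; the layer sizes, 64x64 input, latent dimension, and random placeholder images are illustrative assumptions rather than the Carp-DCAE configuration. After reconstruction training the decoder is discarded and the latent vectors feed a conventional SVM.

        import torch
        import torch.nn as nn
        from sklearn.svm import SVC

        class ConvAutoencoder(nn.Module):
            def __init__(self, latent_dim=64):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                    nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim))
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
                    nn.Unflatten(1, (32, 16, 16)),
                    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                    nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())
            def forward(self, x):
                z = self.encoder(x)
                return self.decoder(z), z

        # After training with a reconstruction loss (e.g. MSE), the decoder is dropped
        # and the latent vector becomes the feature for a conventional classifier.
        model = ConvAutoencoder()
        images = torch.rand(20, 3, 64, 64)            # placeholder carp images
        labels = torch.randint(0, 3, (20,))           # three carp species
        with torch.no_grad():
            latent = model.encoder(images).numpy()
        svm = SVC(kernel="rbf").fit(latent, labels.numpy())
        print(svm.predict(latent[:5]))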

    Comparing satellites and vegetation indices for cover crop biomass estimation

    Swoish M., Reiter M.S., Thomason W.E., Da Cunha Leme Filho J.F., ...
    9 pages
    Abstract: © 2022. Cost-share programs based on measures of participation rather than performance are available to farmers who plant cover crops. However, cover crops only provide significant ecological benefits, such as reduced nutrient loss, when adequate biomass is established. The purpose of this study was to determine whether satellite imagery can effectively estimate cover crop biomass in fields with diverse species composition, and whether increased spatial resolution and imaging frequency improve estimation accuracy. Aboveground biomass samples of 1 m2 were collected at 86 sites within 26 agricultural fields with distinct cover crop species compositions. In-field sensors were used to measure the normalized difference vegetation index (NDVI) and groundcover percentage. Three satellites (Landsat-8 [30 m resolution], Sentinel-2 [10 m resolution], and PlanetScope [3 m resolution]) were used to calculate eight vegetation indices (VIs) for comparison with cover crop biomass. Multiple linear regression, correlation coefficients, and root mean square error (RMSE) were used in a hierarchical clustering to rank the VIs calculated from each satellite by biomass estimation accuracy. The satellites predicted cover crop biomass at the field level very accurately (r2 up to 0.79), demonstrating the potential of large-scale biomass estimation at relatively low cost compared with in-field sampling. All satellite-VI pairs estimated biomass more accurately than the in-field sensors. VI performance varied by satellite, but each satellite had at least one VI that performed very well for both site-level and field-averaged data. With PlanetScope or Landsat-8 imagery, the perpendicular vegetation index provided the most accurate per-site biomass estimates, while the ratio vegetation index performed best with Sentinel-2 imagery. PlanetScope was the only satellite to provide usable imagery for every site, owing to its more frequent revisits; however, its finer spatial resolution did not improve estimation accuracy overall compared with Landsat-8 or Sentinel-2.
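
    A minimal sketch of how two of the indices mentioned above can be computed from band reflectances and scored against measured biomass, assuming scikit-learn; the reflectance values and biomass ground truth are synthetic, and only ordinary least squares with r2/RMSE is shown, not the study's full satellite-by-satellite ranking.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(2)
        red = rng.uniform(0.03, 0.15, size=86)          # red-band reflectance per sampling site
        nir = rng.uniform(0.25, 0.60, size=86)          # near-infrared reflectance
        biomass = 4000 * (nir - red) + rng.normal(scale=150, size=86)   # synthetic kg/ha ground truth

        ndvi = (nir - red) / (nir + red)                # normalized difference vegetation index
        rvi = nir / red                                 # ratio vegetation index

        for name, vi in [("NDVI", ndvi), ("RVI", rvi)]:
            model = LinearRegression().fit(vi.reshape(-1, 1), biomass)
            pred = model.predict(vi.reshape(-1, 1))
            rmse = float(np.sqrt(mean_squared_error(biomass, pred)))
            print(f"{name}: r2 = {r2_score(biomass, pred):.2f}, RMSE = {rmse:.0f} kg/ha")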

    Recognition of sweet peppers and planning the robotic picking sequence in high-density orchards

    Ning Z., Luo L., Ding X., Dong Z., ...
    8 pages
    Abstract: © 2022 Elsevier B.V. To improve operational efficiency and prevent collision damage in the near-neighbor, multi-target picking of sweet peppers by robots in densely planted, complex orchards, this study proposes an algorithm for recognizing sweet peppers and planning the picking sequence, called AYDY. First, the convolutional block attention module is embedded into the You Only Look Once model (YOLO-V4), and this combined model is used to recognize and localize sweet peppers. Then, the clustering algorithm based on the fast search and find of density peaks is improved using the inflection points and gaps of the decision graph, and sweet peppers with multiple near-neighbor targets are automatically partitioned into picking clusters. An anti-collision picking sequence within each cluster is determined from expert experience, and the algorithm combines Gaussian distance weights with a winner-takes-all approach as an optic neural filter. In tests, the F1-score of this method for sweet peppers in a densely planted environment was 91.84%, a 9.14% improvement over YOLO-V4. The average localization accuracy and collision-free harvesting success rate were 89.55% and 90.04%, respectively. The recognition and localization time for a single image was 0.3033 s, and the time to plan a picking sequence for a single image was 0.283 s. When the robotic arm harvested 22 and 24 sweet peppers, the proposed method achieved collision-free picking rates higher than sequential and stochastic planning by 18.18, 18.18, 16.67, and 25 percentage points, respectively. The method accurately detects sweet peppers, reduces collision damage, and improves picking efficiency in high-density orchard environments, and may provide technical support for anti-collision picking of sweet peppers by robots.
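
    A simplified sketch of the sequencing idea only, under stated assumptions: detected fruit centres are ordered by a greedy winner-takes-all rule using Gaussian distance weights from the current gripper position. The improved density-peaks clustering, the expert-derived anti-collision rules, and the CBAM-YOLO detector itself are not reproduced; the coordinates and sigma are placeholder values.

        import numpy as np

        def plan_sequence(centres, start, sigma=80.0):
            """Greedy winner-takes-all sequencing of detected fruit centres (pixel coords).
            At each step the unpicked target with the largest Gaussian distance weight
            w = exp(-d^2 / (2*sigma^2)) relative to the current position wins."""
            remaining = list(range(len(centres)))
            position, order = np.asarray(start, float), []
            while remaining:
                d = np.linalg.norm(centres[remaining] - position, axis=1)
                weights = np.exp(-d ** 2 / (2 * sigma ** 2))
                winner = remaining[int(np.argmax(weights))]     # winner-takes-all choice
                order.append(winner)
                position = centres[winner]
                remaining.remove(winner)
            return order

        # Detected sweet pepper centres from the detector (placeholder values)
        centres = np.array([[120, 340], [135, 360], [400, 220], [415, 240], [640, 500]], float)
        print(plan_sequence(centres, start=(0, 0)))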

    Basic motion behaviour recognition of dairy cows based on skeleton and hybrid convolution algorithms

    Li Z., Song L., Duan Y., Wang Y., ...
    13 pages
    Abstract: © 2022 Elsevier B.V. Accurate and rapid recognition of the basic motion behaviours of dairy cows is key to intelligent perception of their health status. Because these behaviours are spatiotemporal data spanning a long time range, a 3D convolution kernel is well suited to extracting their features; however, a traditional 3D convolutional neural network (CNN) built on image features requires many parameters, has insufficient depth, and is often not robust. To accurately recognize the basic motion behaviours of cows (walking, standing, and lying), this research proposes a recognition method based on cow skeletons and a hybrid convolution algorithm. The depth of the 3D CNN is increased by connecting a deep 2D convolution in series after each 3D convolution, and a parallel 2D convolution is added alongside this series path; the 3D and 2D feature maps are then correlated to share spatial information. Simultaneously, the key-point information of the cow skeleton for the corresponding frame is added as a heat map to the parallel 2D convolution features. While increasing the depth of the 3D convolutional network, this design effectively controls the number of model parameters and maintains robustness. Three hundred cow videos containing the three motion behaviours were selected for testing. After 5-fold cross validation, the final classification accuracy of this method was 91.80%, 3.40% higher than that of a mixed 3D/2D convolutional tube (MiCT). To verify robustness, a gamma transform was applied to adjust image brightness and simulate real brightness changes; under different brightness levels, the accuracy of this method deviated by at most 6.40%, significantly less than that of temporal segment networks (TSNs) and MiCT. Furthermore, the classification accuracy fluctuated only slightly when different degrees of random noise were added to the cow skeleton. These results show that the proposed method is effective for classifying the walking, standing, and lying behaviours of cows and can be used to identify their basic motion behaviours.
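
    A minimal sketch of a mixed 3D/2D block of the kind described above, assuming PyTorch; the channel counts, the 16-frame 112x112 clip, and treating the skeleton heat map as an extra input channel are illustrative assumptions rather than the paper's exact fusion design.

        import torch
        import torch.nn as nn

        class Hybrid3D2DBlock(nn.Module):
            """Mixed 3D/2D convolution block: a 3D convolution over the clip, a 2D
            convolution applied frame-wise in series with it, and a parallel 2D
            convolution on the input frames whose output is added back (feature sharing)."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.conv3d = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
                self.serial2d = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
                self.parallel2d = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
                self.relu = nn.ReLU(inplace=True)

            def _framewise(self, conv2d, x):
                b, c, t, h, w = x.shape                       # fold time into the batch axis
                y = conv2d(x.transpose(1, 2).reshape(b * t, c, h, w))
                return y.reshape(b, t, -1, h, w).transpose(1, 2)

            def forward(self, clip):                          # clip: (B, C, T, H, W)
                y = self.relu(self.conv3d(clip))              # spatiotemporal features
                y = self.relu(self._framewise(self.serial2d, y))      # deepen with a 2D conv in series
                return y + self._framewise(self.parallel2d, clip)     # parallel 2D path shares spatial info

        # A 16-frame RGB clip with a per-frame skeleton heat map stacked as a 4th channel (assumed fusion point)
        clip = torch.rand(1, 4, 16, 112, 112)
        print(Hybrid3D2DBlock(4, 32)(clip).shape)             # torch.Size([1, 32, 16, 112, 112])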