Music Sentiment Classification Method Based on Hybrid F-MFCC Parameters and Multi-Integrated ML Algorithms
To address the problems of insufficient feature extraction and low accuracy in current music emotion classification methods, this study proposes an improved Mel-frequency cepstral coefficient (F-MFCC) to better extract music emotion features, and combines multiple integrated machine learning algorithms to classify music emotions. The results showed that the improved Mel-frequency cepstral coefficient parameters achieved extraction accuracies of 72.5%, 66.9%, 58.2%, and 56.3% for the four emotional features of anger, happiness, relaxation, and sadness, respectively. The overall classification accuracy of the proposed method for the four emotions was higher than that of the comparison algorithms, reaching 90.3%, 89.6%, 91.4%, and 92.5%, respectively. The experimental results show that, by combining the improved Mel-frequency cepstral coefficient parameters with multiple integrated machine learning algorithms, the accuracy of music sentiment classification is significantly improved, providing efficient technical support for intelligent music recommendation and sentiment analysis.
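The paper's specific F-MFCC improvement and ensemble design are not detailed in the abstract. As a minimal, illustrative sketch of the general approach (classifying emotion feature vectors with an ensemble that integrates several machine learning models), the following uses scikit-learn's VotingClassifier on synthetic MFCC-like feature vectors; the data, feature dimensions, and choice of base learners are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for 13-dimensional MFCC-style feature vectors,
# one cluster per emotion class (anger, happiness, relaxation, sadness).
n_per_class, n_mfcc = 100, 13
centers = rng.normal(0, 5, size=(4, n_mfcc))
X = np.vstack([c + rng.normal(0, 1, size=(n_per_class, n_mfcc)) for c in centers])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Soft voting averages the predicted class probabilities of the
# base learners, integrating several ML algorithms into one classifier.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
acc = accuracy_score(y_te, ensemble.predict(X_te))
print(f"ensemble accuracy: {acc:.3f}")
```

On real audio, the feature vectors would instead be MFCC (or improved F-MFCC) statistics extracted from each music clip; the ensemble step itself is unchanged.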
Keywords: F-MFCC; ML; Music emotion classification; Feature extraction; Multi-head attention mechanism