In the past few decades, image recognition technology has undergone rapid development and has been integrated into people's lives, profoundly changing the course of human society. However, recent studies and applications indicate that image recognition systems can exhibit human-like discriminatory bias or make unfair decisions toward certain groups or populations, even degrading their performance on historically underserved populations. Consequently, there is a growing need to guarantee fairness in image recognition systems and prevent discriminatory decisions so that people can fully trust these systems and live in harmony with them. This paper presents a comprehensive overview of cutting-edge research progress toward fairness in image recognition. First, fairness is defined as achieving consistent performance across different groups regardless of peripheral attributes (e.g., color, background, gender, and race), and the reasons for the emergence of bias are illustrated from three aspects. 1) Data imbalance. In existing datasets, some groups are overrepresented and others are underrepresented. Deep models favor optimization for the overrepresented groups to boost overall accuracy, while the underrepresented ones are neglected during training. 2) Spurious correlations. Existing methods continuously capture unintended decision rules from spurious correlations between target variables and peripheral attributes, and they fail to generalize to images in which such correlations do not hold. 3) Group discrepancy. A large discrepancy exists between different groups. Performance on some subjects is sacrificed when deep models cannot trade off the specific requirements of various groups.
Second, the datasets (e.g., Colored Mixed National Institute of Standards and Technology database (MNIST), Corrupted Canadian Institute for Advanced Research-10 database (CIFAR-10), CelebFaces attributes database (CelebA), biased action recognition (BAR), and racial faces in the wild (RFW)) and evaluation metrics (e.g., equal opportunity and equalized odds) used for fairness in image recognition are also introduced. These datasets enable researchers to study the bias of image recognition models in terms of color, background, image quality, gender, race, and age.
Third, the debiasing methods designed for image recognition are divided into seven categories. 1) Sample reweighting (or resampling). This method assigns larger weights (or higher sampling frequencies) to minority groups and smaller weights (or lower sampling frequencies) to majority groups, helping the model focus on the minority groups and reducing the performance difference across groups. 2) Image augmentation. Generative adversarial networks (GANs) are introduced into debiasing methods to translate images of overrepresented groups into images of underrepresented groups. This method modifies the bias attributes of overrepresented samples while maintaining their target attributes. Additional samples are thereby generated for underrepresented groups, addressing the problem of data imbalance. 3) Feature augmentation. Image augmentation suffers from mode collapse during GAN training; thus, some works augment samples at the feature level. This augmentation encourages the recognition model to produce consistent predictions for samples before and after the bias information in their features is perturbed or edited, making it impossible for the model to predict target attributes from bias information and thus improving model fairness. 4) Feature disentanglement. This method, one of the most commonly used for debiasing, removes the spurious correlation between target and bias attributes in the feature space and learns target features that are independent of bias. 5) Metric learning. This method utilizes the power of metric learning (e.g., contrastive learning) to encourage the model to make predictions based on target attributes rather than bias information, pulling samples of the same target class but different bias classes close together and pushing samples of different target classes but similar bias classes apart in the feature space. 6) Model adaptation. Some works adaptively change the network depth or hyperparameters for different groups according to their specific requirements to address group discrepancy, which improves performance on underrepresented groups. 7) Post-processing. This method assumes black-box access to a biased model and aims to modify the final predictions output by the model to mitigate bias. The advantages and limitations of these methods are also discussed, and competitive performances and experimental comparisons on widely used benchmarks are summarized.
Finally, future directions in this field are reviewed and summarized. 1) In existing datasets, bias attributes are limited to color, background, image quality, race, age, and gender. Diverse datasets must be constructed to study the highly complex biases of the real world. 2) Most recent studies on bias mitigation require annotations of the bias source. However, such annotations require expensive labor, and multiple biases may coexist. The mitigation of multiple unknown biases remains to be fully explored. 3) A tradeoff dilemma exists between fairness and algorithm performance. Reducing the effect of bias without hampering overall model performance is challenging. 4) Causal intervention has been introduced into object classification to mitigate bias, while individual fairness has been proposed to encourage models to provide the same predictions for similar individuals in face recognition. 5) Fairness on video data has also recently attracted attention.
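To make the evaluation metrics concrete, the following minimal Python sketch (our own illustration; the function names and data layout are hypothetical) computes the equal opportunity gap (the difference in true positive rates between two groups) and the equalized odds gap (the larger of the true positive rate and false positive rate differences):

```python
def group_rates(y_true, y_pred, group, g):
    """True positive rate and false positive rate for one demographic group g."""
    pos = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
    neg = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 0]
    tpr = sum(pos) / len(pos) if pos else 0.0
    fpr = sum(neg) / len(neg) if neg else 0.0
    return tpr, fpr

def fairness_gaps(y_true, y_pred, group):
    """Equal opportunity gap = |TPR difference| between groups 0 and 1;
    equalized odds also constrains FPR, so its gap is the larger of the two."""
    tpr0, fpr0 = group_rates(y_true, y_pred, group, 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, group, 1)
    eo_gap = abs(tpr0 - tpr1)
    eodds_gap = max(eo_gap, abs(fpr0 - fpr1))
    return eo_gap, eodds_gap
```

A classifier that is perfectly fair under equalized odds drives both gaps to zero; equal opportunity only constrains the true positive rates.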
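The sample reweighting strategy in category 1) can be sketched as simple inverse-frequency weighting (a minimal illustration under our own naming, not the scheme of any specific paper):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to the size of its
    group, so minority-group samples receive larger weights and majority-group
    samples smaller ones. Weights are rescaled to sum to the number of samples,
    keeping the overall loss magnitude comparable to unweighted training."""
    counts = Counter(groups)
    raw = [1.0 / counts[g] for g in groups]
    scale = len(groups) / sum(raw)
    return [w * scale for w in raw]
```

The same weights can equivalently drive resampling by using them as sampling probabilities, which increases the sampling frequency of minority groups instead of reweighting their losses.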
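The pull/push objective described under metric learning (category 5) can be illustrated with a toy pairwise contrastive loss over precomputed features; the pair selection rule follows the text, while the distance, margin, and names are our own choices:

```python
import math

def pairwise_debias_loss(feats, targets, biases, margin=1.0):
    """Toy contrastive objective: pull together pairs that share the target
    label but differ in bias label (squared distance), and push apart pairs
    with different targets but the same bias label (squared hinge on a margin).
    Other pairs carry no bias signal here and are ignored."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    loss, pairs = 0.0, 0
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(feats[i], feats[j])
            if targets[i] == targets[j] and biases[i] != biases[j]:
                loss += d ** 2                      # pull close
                pairs += 1
            elif targets[i] != targets[j] and biases[i] == biases[j]:
                loss += max(0.0, margin - d) ** 2   # push apart
                pairs += 1
    return loss / pairs if pairs else 0.0
```

Minimizing this loss discourages the feature extractor from clustering samples by bias attribute, so the downstream classifier cannot rely on bias information.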