Transformer-based deep learning (DL) methods have increasingly been advocated for remote sensing (RS) image semantic segmentation due to their strong global modeling capability. Nevertheless, Transformer-based DL methods have not yet been sufficiently explored for large-scale hyperspectral image (HSI) semantic segmentation. Current algorithms do not comprehensively consider the impact of positional encoding (PE) interpolation when constructing Transformer-based decoders. Moreover, existing segmentation heads usually concatenate multiscale features directly to produce the segmentation, which ignores the inherent semantic differences among features at different scales. To address these issues, a U-shaped multimixed Transformer network (UM2Former) is proposed for large-scale HSI semantic segmentation. First, a weight encoder consisting of two modules, overlap-down and channel-weight, is built to extract hierarchical discriminative spectral-spatial features and to reduce spectral redundancy. Second, the proposed multimixed Transformer block (MMTB) develops a PE-free module, the spatial-feature-retention attention (SFRA) mechanism, in which "multimixed" refers to modeling the global dependencies of each pixel together with the retained average spatial characteristics of different locations in the input feature maps. Finally, a linear fuse segmentation head (LFSH) is designed to align semantic information among multiscale feature maps and achieve accurate segmentation. Experiments were conducted on individual cities and on the entire large-scale WHU-OHS HSI dataset. The segmentation results indicated that the proposed method achieved higher accuracy than existing semantic segmentation methods, with performance improvements of 17.80% and 4.16% in terms of mean intersection over union (mIoU) and overall accuracy (OA), respectively. The source code will be available at https://github.com/ZhaohuiXue/UM2Former.
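
To make the PE-free idea behind SFRA concrete, the following is a minimal PyTorch sketch of an attention block in that spirit: each pixel's query attends over key/value tokens obtained by average-pooling the input feature map at several coarse grids, so the retained average spatial characteristics replace positional encodings (and with them the PE-interpolation problem at varying input sizes). The module name `SFRASketch`, the pooling sizes `(1, 2, 4)`, and the tensor layout are illustrative assumptions, not the authors' exact design.

```python
# A minimal sketch of PE-free attention over retained average spatial
# characteristics, assuming details (module name, pool sizes, layout)
# not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SFRASketch(nn.Module):
    def __init__(self, dim, num_heads=4, pool_sizes=(1, 2, 4)):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.pool_sizes = pool_sizes          # assumed multiscale average pools
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                     # x: (B, C, H, W) feature map
        B, C, H, W = x.shape
        # One query per pixel; no positional encoding is added anywhere.
        q = self.q(x.flatten(2).transpose(1, 2))            # (B, H*W, C)

        # Retain average spatial characteristics of different locations:
        # pool the map to coarse grids and use the cells as key/value tokens.
        pooled = [F.adaptive_avg_pool2d(x, s).flatten(2).transpose(1, 2)
                  for s in self.pool_sizes]                 # each (B, s*s, C)
        ctx = torch.cat(pooled, dim=1)                      # (B, sum(s*s), C)
        k, v = self.kv(ctx).chunk(2, dim=-1)

        # Standard multi-head attention of pixel queries over pooled context.
        def heads(t):
            return t.reshape(B, -1, self.num_heads,
                             C // self.num_heads).transpose(1, 2)
        q, k, v = heads(q), heads(k), heads(v)
        attn = (q @ k.transpose(-2, -1)) * self.scale       # (B, h, H*W, tokens)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, H * W, C)
        return self.proj(out).transpose(1, 2).reshape(B, C, H, W)

# Usage: input and output shapes match, at any spatial resolution.
block = SFRASketch(dim=64)
y = block(torch.randn(2, 64, 32, 32))        # y: (2, 64, 32, 32)
```

Because the pooled context has a fixed token count regardless of H and W, the block handles arbitrary input resolutions without interpolating a learned positional embedding, which is the failure mode the abstract attributes to existing Transformer-based decoders.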