H.266/Versatile Video Coding (VVC) is the latest international video coding standard, designed to encode ultra-high-definition video efficiently. Its quadtree with nested multi-type tree (QT-MTT) structure provides a wide range of coding tree partition sizes and allows nested binary tree (BT) and ternary tree (TT) splits at each QT leaf node. Furthermore, the H.266/VVC encoder is equipped with numerous advanced coding tools. As a result, the encoding time increases tremendously. Previous research on fast coding algorithms for H.266/VVC seldom considers perceptual redundancy. This paper utilizes the just-noticeable-difference (JND) model of human vision to extract the visually distinguishable pixels that may affect visual perception. We observe that the distributions obtained by the horizontal and vertical projections of the visually distinguishable pixels within a coding unit are related to its MTT splitting modes. Therefore, these distributions, which represent the perceptual information of human vision, are used as input features for machine learning. A fast MTT decision based on random forest models is proposed to quickly select the partition for intra coding. Experimental results demonstrate that the proposed method effectively accelerates the intra coding process while maintaining good bitrate and video quality, exploiting the properties of visual perception. The proposed algorithm outperforms the previous work.
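The projection features described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the binary JND mask input, and the normalization by the total pixel count are assumptions made for clarity.

```python
import numpy as np

def projection_features(jnd_mask):
    """Illustrative sketch: given a binary mask marking the visually
    distinguishable pixels of a coding unit (1 = distinguishable),
    compute normalized horizontal (per-row) and vertical (per-column)
    projection distributions and concatenate them into one feature
    vector, which could then feed a random forest classifier."""
    mask = np.asarray(jnd_mask, dtype=float)
    total = mask.sum()
    if total == 0:
        # No distinguishable pixels: return all-zero distributions.
        horiz = np.zeros(mask.shape[0])
        vert = np.zeros(mask.shape[1])
    else:
        horiz = mask.sum(axis=1) / total  # row-wise projection
        vert = mask.sum(axis=0) / total   # column-wise projection
    return np.concatenate([horiz, vert])

# Toy 4x4 coding unit whose distinguishable pixels lie in the top half:
# the horizontal projection concentrates in the first two rows, hinting
# that a horizontal split would separate the perceptually active region.
cu_mask = [[1, 1, 1, 1],
           [1, 1, 1, 1],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
features = projection_features(cu_mask)
```

In a full system, such feature vectors (extracted per coding unit at each MTT depth) would be paired with the encoder's chosen split modes to train the random forest models that replace the exhaustive rate-distortion search.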
ASJC Scopus subject areas
- Engineering (all)