Point cloud upsampling is critical for 3D reconstruction and 3D data understanding, since hardware limitations often yield only sparse point sets. Recent point cloud upsampling approaches attempt to generate a dense point set in a single upsampling stage. After revisiting the task, we propose a new upsampling module that adopts a multi-branch network strategy to refine the generated point set. In each branch, we upsample points by duplicating the feature space and passing it through MLPs and a self-attention unit. Furthermore, we incorporate an auxiliary network that encodes global features from the input point cloud, preserving structural information from the outset, and aggregate these global features with the generated point features to enhance overall performance. Specifically, our proposed network assembles global features with generated point features using attention fusion, which allows each point to acquire global information from a weighted attention map. Extensive qualitative and quantitative evaluations on different datasets demonstrate that our method outperforms existing approaches.
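To illustrate the attention-fusion idea described above, the following is a minimal NumPy sketch under our own assumptions: the function name `attention_fusion`, the query/key projections, and all shapes are hypothetical stand-ins for the learned module, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(point_feats, global_feat, w_q, w_k):
    """Fuse per-point features with a global feature via a weighted
    attention map (hypothetical sketch, not the paper's exact module).

    point_feats: (N, C) generated point features
    global_feat: (C,)   global feature from the auxiliary encoder
    w_q, w_k:    (C, D) projection matrices, stand-ins for learned weights
    """
    q = point_feats @ w_q                 # (N, D) per-point queries
    k = global_feat @ w_k                 # (D,)   global key
    scores = q @ k / np.sqrt(k.shape[0])  # (N,) similarity to global context
    attn = softmax(scores)                # weighted attention map over points
    # Each point acquires global information, scaled by its attention weight.
    fused = point_feats + attn[:, None] * global_feat[None, :]
    return fused, attn

rng = np.random.default_rng(0)
N, C, D = 16, 8, 4
F = rng.standard_normal((N, C))          # toy generated point features
g = rng.standard_normal(C)               # toy global feature
fused, attn = attention_fusion(
    F, g, rng.standard_normal((C, D)), rng.standard_normal((C, D))
)
print(fused.shape)   # (16, 8): fused features keep the per-point shape
```

The additive residual form (`point_feats + attn * global_feat`) is one plausible choice; a concatenation followed by an MLP would be an equally reasonable fusion design.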