We present a novel learning-based method for single-image super-resolution (SR). Given a single low-resolution (LR) input image (and its image pyramid), we propose to learn context-specific sparse image representations, which model the relationship between low- and high-resolution image patch pairs of different context categories in terms of the learned dictionaries. To predict the SR image, we derive the context-specific sparse representation of each image patch in the LR input under additional locality and group sparsity constraints. While the locality constraint searches for the most similar image patches and uses their corresponding high-resolution outputs for SR, the group sparsity constraint allows us to exploit the information from the most relevant context categories when predicting the final SR output. Experimental results show that the proposed method achieves state-of-the-art performance both quantitatively and qualitatively.
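To make the locality-constrained coding step concrete, the following is a minimal NumPy sketch (not the authors' implementation) of reconstructing a high-resolution patch from a low-resolution one via coupled dictionaries: the `k` dictionary atoms nearest to the input patch are selected (the locality constraint), a code is solved over those atoms, and the code is mapped through the paired HR dictionary. The function name, dictionary shapes, and the use of least squares over the selected atoms are illustrative assumptions.

```python
import numpy as np

def locality_constrained_sr(lr_patch, D_lr, D_hr, k=5):
    """Illustrative sketch: reconstruct an HR patch from an LR patch using a
    locality-constrained code over coupled LR/HR dictionaries.

    lr_patch: (d_lr,) vectorized LR patch
    D_lr:     (d_lr, n) LR dictionary (columns are atoms)
    D_hr:     (d_hr, n) coupled HR dictionary (same atom ordering)
    """
    # Locality constraint: keep only the k atoms closest to the input patch.
    dists = np.linalg.norm(D_lr - lr_patch[:, None], axis=0)
    idx = np.argsort(dists)[:k]
    # Solve least squares over the selected atoms for the local code.
    codes, *_ = np.linalg.lstsq(D_lr[:, idx], lr_patch, rcond=None)
    # Map the code through the coupled HR dictionary to predict the HR patch.
    return D_hr[:, idx] @ codes
```

In a full pipeline, this per-patch prediction would be applied over overlapping patches of the LR input, with the context-specific dictionary pair (`D_lr`, `D_hr`) chosen according to the patch's context category.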