Cross-view action recognition is a challenging problem, since sufficient training data are typically not available at the target view of interest. Motivated by recent advances in domain adaptation, we propose a novel low-rank domain adaptation model that maps labeled data from the original source view to the target view, so that both training and testing can be performed in the target domain. Our model not only provides an effective way to associate image data across different domains; it further enforces structural incoherence between the transformed data of different categories. This introduces additional discriminating ability into our domain adaptation model, so improved recognition can be expected. Experimental results on the IXMAS dataset verify the effectiveness of the proposed method, which is shown to outperform state-of-the-art domain adaptation approaches.
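To make the idea of a low-rank mapping between views concrete, the following is a minimal sketch on hypothetical synthetic features. The names (`X_src`, `X_tgt`), the chosen rank `r`, and the SVD-truncation step are all illustrative assumptions: the model described above uses a learned low-rank transformation (with structural incoherence between categories), for which hard SVD truncation of a least-squares fit is only a crude surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 50 samples of 20-dim features per view.
n, d = 50, 20
X_src = rng.standard_normal((n, d))

# Assume the target view relates to the source view by an unknown
# linear view change plus a small amount of noise.
W_true = rng.standard_normal((d, d))
X_tgt = X_src @ W_true + 0.01 * rng.standard_normal((n, d))

# Least-squares estimate of the source-to-target mapping.
W_ls, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)

# Impose low-rank structure by truncating the SVD of the mapping --
# a simple surrogate for the nuclear-norm style regularization used
# in low-rank domain adaptation models.
r = 10
U, s, Vt = np.linalg.svd(W_ls, full_matrices=False)
W_lr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Source data mapped into the target view with the low-rank transform.
X_mapped = X_src @ W_lr
```

In the full model, the rank constraint and the incoherence term would be optimized jointly rather than applied as a post-hoc truncation; the sketch only shows where the low-rank structure enters.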