We address the problem of multi-label generalized zero-shot learning, in which the task is to predict the labels (usually more than one) of a target image, regardless of whether each label belongs to the seen or the unseen classes. To alleviate the extreme data-imbalance problem, in which no annotated images are available for unseen classes during training, state-of-the-art single-label zero-shot learning methods learn to synthesize class-specific visual features from the seen classes. However, synthesizing multi-label visual features from multi-label images has not been extensively studied. By exploring the relationship between an image and its labels, we address the multi-label generalized zero-shot learning problem via a hybrid framework of generative and adaptive learning. Specifically, we convert each image into a label classifier, which can therefore vary among intra-class samples. This adaptive mechanism allows a single-label feature-generating model to create multi-label features from multi-label images. We show that the proposed method improves upon state-of-the-art ZSL/GZSL methods on two benchmark datasets.
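The idea of converting an image into a label classifier can be illustrated with a minimal sketch. All dimensions, the projection `W` (standing in for a learned generator network), and the thresholding rule below are illustrative assumptions, not the paper's actual architecture; the point is only that the classifier is conditioned on the image, so it varies per sample, and that it can score both seen and unseen label embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only)
d_img, d_attr, n_labels = 16, 8, 5

image_feat = rng.normal(size=d_img)                 # visual feature of one image
label_attrs = rng.normal(size=(n_labels, d_attr))   # semantic embeddings of all labels (seen + unseen)

# Adaptive step: the image itself parameterizes the classifier.
# A learned generator network is stood in for here by a fixed random projection W.
W = rng.normal(size=(d_attr, d_img)) / np.sqrt(d_img)
classifier = W @ image_feat                         # image-conditioned classifier in attribute space

# Score every label (seen or unseen) with the image-conditioned classifier
scores = label_attrs @ classifier                   # one score per candidate label
predicted = scores > 0.0                            # multi-label decision (illustrative threshold)
```

Because `classifier` is a function of `image_feat`, two images of the same class yield different classifiers, which is what lets the mechanism adapt to intra-class variation.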