Frequently asked question (FAQ) retrieval, which seeks to provide the most relevant question, or question-answer (QA) pair, in response to a user's query, has found widespread use. More recently, methods based on bidirectional encoder representations from Transformers (BERT) and its variants, which typically take the word embeddings of a question (at training time) or a query (at test time) as input to predict relevant answers, have shown great promise for FAQ retrieval. However, these BERT-based methods do not pay enough attention to the global information specific to an FAQ task. To address this, we put forward a question-aware graph convolutional network (QGCN) to induce vector embeddings of vocabulary words, thereby encapsulating the global question-question, question-word, and word-word relations, which can be used to augment the embeddings derived from BERT for better FAQ retrieval. Meanwhile, we also investigate leveraging domain-specific knowledge graphs to enrich the question and query embeddings (denoted by K-BERT). Finally, we conduct extensive experiments to evaluate the utility of the proposed approaches on two publicly available FAQ datasets (viz. TaipeiQA and StackFAQ), where the results confirm the promising efficacy of the proposed approach in comparison to some top-of-the-line methods.