In recent years, natural language processing (NLP) technology has made great progress, and transformer-based models have performed well on a wide range of NLP tasks. However, a given task can be carried out by multiple models with slightly different architectures, such as different numbers of layers and attention heads. Beyond quantitative metrics, users selecting a model often also weigh its language understanding ability against the computing resources it requires. Comparing and deeply analyzing two transformer-based models with different numbers of layers and attention heads is not easy, because there is no inherent one-to-one correspondence between their components; comparing models with different architectures is therefore a crucial and challenging task when users train, select, or improve models for their NLP tasks. In this paper, we develop a visual analysis system that helps machine learning experts interpret and compare the pros and cons of asymmetric transformer-based models applied to a user's target NLP task. We propose metrics that evaluate the similarity between layers or between attention heads, helping users identify valuable layer and attention-head combinations to compare. Our visual tool provides an interactive overview-to-detail framework for exploring when and why models behave differently. In the use cases, users apply our tool to explain why a large model does not significantly outperform a small one and to understand the linguistic features captured by individual layers and attention heads. The use cases and user feedback show that our tool helps people gain insight and facilitates model comparison tasks.
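The abstract mentions metrics for the similarity between layers or attention heads without specifying them. As a minimal illustrative sketch (not the paper's actual metric), one common choice is the cosine similarity between the attention weight matrices two heads produce on the same input:

```python
import numpy as np

def head_similarity(attn_a, attn_b):
    """Cosine similarity between two attention maps.

    attn_a, attn_b: (seq_len, seq_len) attention weight matrices
    produced by two heads (possibly from different models) on the
    same input sentence. This is an assumed example metric, not
    necessarily the one proposed in the paper.
    """
    a = attn_a.flatten()
    b = attn_b.flatten()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: attention maps of two heads over a 3-token input.
h1 = np.array([[0.6, 0.2, 0.2],
               [0.1, 0.8, 0.1],
               [0.3, 0.3, 0.4]])
h2 = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.7, 0.1],
               [0.3, 0.4, 0.3]])
print(head_similarity(h1, h2))
```

Scoring every head pair across two models with such a metric yields a similarity matrix, which is the kind of structure an overview-to-detail visualization can present even when the models have different numbers of heads.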
- explainable AI
- natural language processing