[2106.11520] BARTScore: Evaluating Generated Text as Text Generation

Significance

Evaluating conditionally generated text with a pre-trained seq2seq model

Keypoints

  • Propose a method to evaluate generated text from various perspectives with pre-trained BART
  • Demonstrate correlation of proposed method with human judgment

Review

Background

The gold standard for evaluating the quality of generated text is costly human evaluation. Although many recent natural language processing (NLP) methods evaluate the quality of generated text with respect to a reference, as in BERTScore, they still cannot quantitatively evaluate generation quality from diverse perspectives. Such perspectives include informativeness (Info), relevance (Rel), fluency (Flu), coherence (Coh), factuality (Fac), semantic coverage (Cov), and adequacy (Ade). The authors focus on auto-regressive seq2seq tasks, which can be framed as conditional text generation: \begin{align} p(\mathbf{y}|\mathbf{x},\theta) = \prod^{m}_{t=1} p(\mathbf{y}_{t}|\mathbf{y}_{<t},\mathbf{x},\theta) \end{align} where $\mathbf{x}$ is the given condition, $\mathbf{y}$ is the generated text of length $m$, and $\theta$ denotes the parameters of the pre-trained model.
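A minimal sketch of this factorization, assuming the HuggingFace `transformers` library and a CNN/DailyMail fine-tuned BART checkpoint (the helper name `token_log_probs` is ours, not from the paper):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Load a pre-trained BART checkpoint (CNN/DailyMail fine-tuned variant here).
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

def token_log_probs(x: str, y: str) -> torch.Tensor:
    """Per-token log p(y_t | y_<t, x) under the seq2seq factorization."""
    src = tokenizer(x, return_tensors="pt", truncation=True)
    tgt = tokenizer(y, return_tensors="pt", truncation=True)["input_ids"]
    with torch.no_grad():
        # Passing `labels` makes the decoder consume the right-shifted target,
        # so the logits at position t score y_t given y_<t and the source x.
        logits = model(**src, labels=tgt).logits  # (1, m, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)[0]  # shape (m,)

# log p(y | x) is the sum of the per-token terms:
# token_log_probs("the source text", "the generated text").sum().item()
```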

Keypoints

Propose a method to evaluate generated text from various perspectives with pre-trained BART

BART is a pre-trained seq2seq model based on the encoder-decoder Transformer. The authors propose BARTScore, which is simply the weighted sum of the log probabilities of $\mathbf{y}$ given $\mathbf{x}$: \begin{align} \mathrm{BARTScore} = \sum^{m}_{t=1}w_{t} \log p(\mathbf{y}_{t}|\mathbf{y}_{<t}, \mathbf{x}, \theta). \end{align} The weighting term $w_{t}$ can be defined using schemes from prior work, such as Inverse Document Frequency (IDF). BARTScore fully utilizes the pre-trained parameters of BART, enabling evaluation of the generated text from different directions such as faithfulness (source $\rightarrow$ generated), precision (reference $\rightarrow$ generated), recall (generated $\rightarrow$ reference), and $\mathcal{F}$ score (reference $\leftrightarrow$ generated). Extensions of BARTScore using prompting and fine-tuning are suggested to further improve evaluation performance, as sketched below.
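A minimal sketch of the score itself, reusing `token_log_probs` from the snippet above. Uniform weights $w_{t} = 1/m$ are assumed as a default, and the directional variants follow the source/reference pairings listed in the text; writing the $\mathcal{F}$ variant as an arithmetic mean of the two directions is one plausible reading. The IDF weighting path and the prompting/fine-tuning extensions are omitted.

```python
def bart_score(x: str, y: str, weights=None) -> float:
    """BARTScore(x -> y) = sum_t w_t * log p(y_t | y_<t, x).
    Uniform w_t = 1/m by default; an IDF-based weight vector could be
    passed in instead, matching the weighting schemes mentioned above."""
    tok_logp = token_log_probs(x, y)
    w = torch.full_like(tok_logp, 1.0 / tok_logp.numel()) if weights is None else weights
    return (w * tok_logp).sum().item()

# Directional variants described in the review:
def faithfulness(source: str, hypothesis: str) -> float:
    return bart_score(source, hypothesis)          # source -> generated

def f_score(reference: str, hypothesis: str) -> float:
    precision = bart_score(reference, hypothesis)  # reference -> generated
    recall = bart_score(hypothesis, reference)     # generated -> reference
    return 0.5 * (precision + recall)              # mean of the two directions
```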

Demonstrate correlation of proposed method with human judgment

The proposed BARTScore is evaluated on quality-assessment benchmarks for machine translation (WMT19), text summarization, and data-to-text generation. BARTScore enhanced by fine-tuning tasks (CNN+Para) outperforms other unsupervised methods, and the performance is further improved by adding a prompt.

(Figure 210623-1: Performance by correlation with human judgment on the WMT19 dataset, machine translation)

BARTScore also outperforms other methods on the summarization task.

(Figure 210623-2: Performance on text summarization on the REALSumm, SummEval-CNNDM, and NeR18-Newsroom datasets)

Experiments on data-to-text datasets also demonstrate an excellent correlation of the proposed BARTScore with human judgment.

(Figure 210623-3: Performance on data-to-text generation on the BAGEL, SFRES, and SFHOT datasets)

The results suggest that BARTScore can be utilized as a quantitative evaluation metric for text generation tasks. For further analysis of the experiments, the reader is referred to the original paper.
