Evaluating ChatGPT as a Question Answering System: A Comprehensive Analysis and Comparison with Existing Models

Document Type: Research Article

Authors

1 Faculty of Computer Engineering, University of Isfahan, Iran

2 Department of Computer Engineering, University of Isfahan, Iran

3 ADAPT Research Centre, Dublin City University, Ireland

Abstract

In the current era, a multitude of language models have emerged to cater to user inquiries. Notably, the GPT-3.5-Turbo language model has gained substantial attention as the underlying technology for ChatGPT. Leveraging extensive parameters, this model adeptly responds to a wide range of questions. However, because it relies on internal knowledge, the accuracy of its responses is not guaranteed. This article examines ChatGPT as a Question Answering System (QAS), comparing its performance to other existing QASs. The primary focus is on evaluating ChatGPT's efficiency in extracting responses from provided paragraphs, a core QAS capability. Additionally, performance comparisons are made in scenarios without a surrounding passage. Multiple experiments were conducted with ChatGPT, exploring response hallucination and accounting for question complexity. The evaluation employed well-known Question Answering (QA) datasets, including SQuAD, NewsQA, and PersianQuAD, across English and Persian. Metrics such as F1 score, exact match, and accuracy were used in the assessment. The study reveals that, while ChatGPT demonstrates competence as a generative model, it is less effective at question answering than task-specific models. Providing context improves its performance, and prompt engineering further increases precision, particularly for questions lacking explicit answers in the provided paragraphs. ChatGPT handles simpler factual questions better than "how" and "why" question types. The evaluation also highlights occurrences of hallucination, where ChatGPT answers questions whose answers are not available in the provided context.
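The with-context versus closed-book comparison described above can be pictured as two prompt variants sent to the same model. The following is a minimal illustrative sketch using the OpenAI chat completions client; the prompt wording and the `ask` helper are hypothetical choices for illustration, not the authors' exact experimental setup.

```python
# Illustrative sketch: querying gpt-3.5-turbo with and without a context
# passage. The prompt wording here is an assumption, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, context: str | None = None) -> str:
    if context is not None:
        # Extractive setting: the answer should come from the passage,
        # and the model is told to admit when no answer is present.
        prompt = (
            "Answer the question using only the passage below. "
            "If the passage does not contain the answer, say so.\n\n"
            f"Passage: {context}\n\nQuestion: {question}"
        )
    else:
        # Closed-book setting: the model relies on internal knowledge alone.
        prompt = question
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling variance for evaluation
    )
    return response.choices[0].message.content
```

Instructing the model to decline when the passage lacks an answer is one form of the prompt engineering the abstract credits with improving precision on unanswerable questions.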
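The F1 and exact-match scores referenced above are the standard SQuAD-style answer-overlap metrics. A minimal implementation, assuming the usual whitespace-token formulation (the paper's normalization details, such as handling of casing, punctuation, and articles, may differ), is:

```python
# Standard SQuAD-style QA metrics: token-level F1 and exact match (EM).
from collections import Counter


def normalize(text: str) -> list[str]:
    # Simplistic normalization for illustration; full SQuAD evaluation also
    # strips punctuation and English articles before tokenizing.
    return text.lower().split()


def exact_match(prediction: str, gold: str) -> bool:
    # EM: prediction and gold answer are identical after normalization.
    return normalize(prediction) == normalize(gold)


def f1_score(prediction: str, gold: str) -> float:
    # F1: harmonic mean of token precision and recall between the answers.
    pred_tokens, gold_tokens = normalize(prediction), normalize(gold)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```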



Articles in Press, Accepted Manuscript
Available Online from 28 September 2024
  • Receive Date: 12 February 2024
  • Revise Date: 19 September 2024
  • Accept Date: 28 September 2024