Exploring the Capabilities and Limitations of Pre-training Language Models: A Study of ChatGPT

Abstract

Pre-trained language models such as ChatGPT have shown promising results on natural language processing tasks. However, their capabilities and limitations in fields such as customer service, healthcare, education, e-commerce, and academia are still not well understood. This study examines the capabilities and limitations of pre-trained language models like ChatGPT through a comprehensive evaluation of performance on a range of natural language processing tasks. The research includes experiments that assess the model's ability to understand and generate human-like language, its robustness to different types of input, and its ability to generalise to new tasks. The ethical and societal implications of using pre-trained language models, including issues of bias, transparency, and accountability, are also investigated. The research furthers our understanding of the potential applications and impact of these models on these fields and on natural language processing as a whole.

Presenters

Fatma Dogan Akkaya
Lecturer, Communication, Kastamonu University, Istanbul, Turkey

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2023 Special Focus: Whose Intelligence? The Corporeality of Thinking Machines

KEYWORDS

Pre-trained language models, ChatGPT, Human-like communication, Capabilities, Limitations