Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk
The Chinese University of Hong Kong, Shenzhen, China; University of Manchester, Manchester, United Kingdom.
2023 (English). In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) / [ed] Anna Rogers; Jordan Boyd-Graber; Naoaki Okazaki. Stroudsburg, PA: Association for Computational Linguistics, 2023, Vol. 1, p. 7581-7596. Conference paper, Published paper (Refereed)
Abstract [en]

Language is the principal tool for human communication, and humor is one of its most engaging aspects. Producing natural language as humans do with computers, a.k.a. Natural Language Generation (NLG), has been widely applied in dialogue systems, chatbots, and text summarization, as well as in AI-Generated Content (AIGC), e.g., idea generation and scriptwriting. However, the humorous side of natural language remains relatively under-investigated, especially in the age of pre-trained language models. In this work, we conduct a preliminary test of whether NLG can generate humor as humans do. We build the largest dataset of Chinese Comical Crosstalk scripts (C3 for short) for 'Xiangsheng' ('相声'), a popular Chinese performing art dating back to the 1800s. We benchmark various generation approaches, including Seq2seq models trained from scratch, fine-tuned middle-scale PLMs, and large-scale PLMs with and without fine-tuning. We also conduct a human assessment, which shows that 1) large-scale pretraining largely improves crosstalk generation quality, and 2) even the scripts generated by the best PLM fall far short of expectations. We conclude that humor generation could be largely improved by large-scale PLMs, but it is still in its infancy. The data and benchmarking code are publicly available at https://github.com/anonNo2/crosstalk-generation. © 2023 Association for Computational Linguistics.

Place, publisher, year, edition, pages
Stroudsburg, PA: Association for Computational Linguistics, 2023. Vol. 1, p. 7581-7596
National Category
General Language Studies and Linguistics
Identifiers
URN: urn:nbn:se:hh:diva-52061
Scopus ID: 2-s2.0-85174406020
ISBN: 9781959429722 (print)
OAI: oai:DiVA.org:hh-52061
DiVA, id: diva2:1812941
Conference
61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), 9-14 July 2023
Available from: 2023-11-17 Created: 2023-11-17 Last updated: 2023-11-17 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Scopus (full text)

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Tiwari, Prayag; Wang, Benyou
By organisation
School of Information Technology
