Assessing ChatGPT and Human Capabilities in Poetry Translation with Juliane House's 2015 TQA and Reader Response: A Case Study
Abstract
This study investigates ChatGPT's potential to capture the nuances of literary texts by comparing AI-generated and human translations of Kim So-wŏl's Korean poem "Azaleas." A triangulated framework—comprising Juliane House's Translation Quality Assessment (TQA) model, expert evaluation, and reader response analysis—was used to assess each translation's ability to preserve the poem's emotional depth, cultural resonance, and stylistic integrity. Translation A (ChatGPT) and Translation B (human) were evaluated across covert and overt error categories. Findings indicate that while ChatGPT excels at producing emotionally engaging and rhythmically fluent translations, it often diverges from the source text in thematic fidelity and semantic precision. The TQA scores reflect this contrast: Translation A scored 83 and 63 from the two evaluators, while Translation B received 94 and 76, confirming its closer alignment with the original. Expert assessments showed a preference shift—from initially favoring ChatGPT's fluency in a blind review to preferring the human translation after consulting the source text—highlighting the role of contextual access in translation evaluation. Reader responses showed a modest preference for ChatGPT in emotional impact and imagery, suggesting that aesthetic appeal can sometimes outweigh textual fidelity. These findings underscore both the creative promise and the current limitations of AI in literary translation, particularly in poetry. Future research should refine AI models and explore hybrid evaluation frameworks that combine algorithmic fluency with human interpretive insight.
Copyright for articles and reviews rests with the authors. Copyright for translations rests with the translator, subject to the rights of the author of the work translated. The University of Florida Press will register copyright to each journal issue.