The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 21.11.2023 |
Summary | The use of Large Language Models (LLMs) for writing has sparked controversy among both readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI assistance can improve writing as long as writers conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of a writer's interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI transparently to publishers and readers. We therefore propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the text. |
DOI | 10.48550/arXiv.2311.13057 |