Complications Ensue:
The Crafty Screenwriting, TV and Game Writing Blog

Saturday, September 02, 2023

How to Use AI in your Writing
I've been hearing a lot about the use of AI -- large language models, or LLMs, like ChatGPT -- in writing. For example, game writing. Couldn't ChatGPT come up with a lot of barks? And quests? And stories? And backstories?

The most important thing to know about ChatGPT is that when you pronounce it in French, it means, "Cat, I farted." ("Chat, j'ai pété.")

Okay, maybe not the most important thing, but surprisingly relevant.

ChatGPT is not "artificial intelligence" in the sense of "the machine is smart and knows stuff." ChatGPT takes a prompt and then, drawing on the enormous pile of writing it was trained on, figures out the most likely answer. Not the smartest or the best answer, just the most likely one.
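If you like, here's the idea in miniature -- a toy Python sketch I made up, nothing like how a real LLM is actually built, just the "pick the most common continuation" notion:

    from collections import Counter

    # Toy sketch only: count what follows the prompt in a pile of text,
    # then hand back whichever continuation shows up most often.
    corpus = [
        "the groom got cold feet",
        "the groom got cold feet",
        "the groom gave a lovely toast",
    ]

    def most_likely_continuation(prompt, corpus):
        continuations = Counter(
            line[len(prompt):].strip()
            for line in corpus
            if line.startswith(prompt)
        )
        # most_common(1) gives [(text, count)] -- we only want the text
        return continuations.most_common(1)[0][0] if continuations else None

    print(most_likely_continuation("the groom got", corpus))  # -> cold feet

Not smart. Not checking whether cold feet make sense here. Just counting.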

That means if you ask ChatGPT a question, you are getting the average answer. Don't base your medication decisions on ChatGPT results.

Note that this is *not* how Google answers work. Google's algorithm evaluates the value of a site based on the links pointing to it -- and weights those links according to the value of the sites they come from. So it is more likely to answer based on what the Encyclopedia Britannica said than what Joe Rogan said the other day. ChatGPT is more likely to give you Joe Rogan.
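For the curious, here's that link-weighting idea in miniature -- a made-up Python sketch of the PageRank-style logic, with invented site names, not Google's actual algorithm or code:

    # Invented link graph: who links to whom.
    links = {
        "britannica.example": ["podcast.example"],
        "university.example": ["britannica.example"],
        "library.example": ["britannica.example"],
        "podcast.example": [],
    }

    # Start every site with equal weight, then repeatedly pass weight along
    # links: a site pointed to by valuable sites becomes more valuable itself.
    rank = {site: 1.0 for site in links}
    for _ in range(20):
        new_rank = {site: 0.15 for site in links}
        for site, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += 0.85 * rank[site] / len(outgoing)
        rank = new_rank

    for site, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{score:.2f}  {site}")

The encyclopedia everyone links to floats to the top. An LLM isn't doing anything like that weighting; it's just averaging.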

What does this mean for writing? ChatGPT will give you a rehash of what everyone else has already done.

Two problems with this:

a.  It is a hash. A mashup. ChatGPT does not know which bits of story go with which other bits. It has no sense of story logic. It does not know if it is giving you a good answer or a bad answer. It is not trying to give you a good answer. It is giving you an answer based on how much of one kind of thing or another kind of thing shows up in its database. Its database is probably The Internet. 

So, for example, if you asked it what happened at a typical wedding, it might decide that the groom cheated the night before and the couple broke up -- because people generally do not write on the Internet about happy couples getting hitched without a hitch.

b.  It is what everyone else has already done. Good writing comes out of your creativity. Your personality. Your experience of life. It is filtered through who you are. It has your voice. We are not hiring you to give us clams and tired tropes. We are hiring you to come up with something fresh and compelling. Something heartfelt. Something that gets a rise out of you, and therefore might get a rise out of the reader, or the audience, or the player.

If you ask ChatGPT for barks, it will give you the least surprising barks ever. The most average. The most boring.

If you ask it for stories, it will give you stories you've heard before -- unless it gives you stories that make no sense, stitched together from bits and pieces of stories you've heard before.

So, how can you use large language models in your writing?

Simple:  it can tell you what to avoid.

Ask it to give you barks. DO NOT USE THE ONES IT SHOWS YOU. Make up other ones.

Ask it to give you plots. DO NOT USE ITS PLOTS. THEY ARE TIRED. 
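If you want to get mechanical about it, you can even treat the model's output as an avoid-list. Here's a rough Python sketch of that idea -- the barks, the drafts, and the 0.6 similarity threshold are all made-up examples, not a real pipeline:

    from difflib import SequenceMatcher

    # Paste whatever barks ChatGPT handed you into avoid_list, then flag any
    # of your own drafts that read too much like them.
    avoid_list = [
        "I've got a bad feeling about this.",
        "They went that way!",
        "You'll never take me alive!",
    ]

    my_drafts = [
        "They went that way!",
        "Third shift in a row. My feet are drafting a strongly worded letter.",
    ]

    def too_familiar(line, avoid_list, threshold=0.6):
        return any(
            SequenceMatcher(None, line.lower(), avoid.lower()).ratio() >= threshold
            for avoid in avoid_list
        )

    for line in my_drafts:
        verdict = "cut it" if too_familiar(line, avoid_list) else "keep it"
        print(f"{verdict}: {line}")

Anything the checker flags is a line the machine would happily have written for you -- which is exactly why you shouldn't.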

Working with LLMs can teach you to take your writing beyond the expected. The model will give you the expected. If you write away from or around what's expected, your writing will become unexpected. Fresh. Original.

Or, you can take its plots and then twist them -- which is what pro writers do all the time. We're not making up plots from scratch all the time. 90% of the time, what we write is like something else, but twisted or subverted or redoubled in some way. (This appropriation and adaptation of earlier, better writers' work is called "culture.")

Tl;dr: you can legitimately use ChatGPT and other large language models for writing. But not directly. ChatGPT can only give you (a) nonsense and (b) cheese. But it is useful as a warning: if ChatGPT came up with something, it's probably too tired for you to use.

PS  Another reason not to use LLMs: the content they train on is, generally, whatever they can scrape off the Internet. They can't tell whether the content they train on is human-generated or machine-generated. So, the more LLM content there is out there, the more LLMs are training on LLM output. Initially, you may be getting what a computer thinks a human would say next. But after a while, you are getting what a computer thinks a computer would say next. See the problem? 
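If that's hard to picture, here's a toy Python sketch of the feedback loop -- made-up numbers and a cartoonishly simple "model," not a real training run, but it shows how whatever is already most common crowds everything else out:

    from collections import Counter

    # Four "styles" of writing in the training data, and a model that
    # over-produces whatever is already most common. Each generation
    # trains on the previous generation's output.
    pool = Counter({"style a": 40, "style b": 30, "style c": 20, "style d": 10})

    for generation in range(6):
        print(f"gen {generation}: {dict(pool)}")
        total = sum(pool.values())
        # "Most likely answer" bias: each style's share gets squared and
        # renormalized, so common styles gain and rare styles fade.
        shares = {s: (n / total) ** 2 for s, n in pool.items()}
        z = sum(shares.values())
        pool = Counter({s: round(total * sh / z) for s, sh in shares.items()})

Within a few generations the rarer styles are gone, and everything the "model" produces sounds like everything else it produces. That's the diet these models are increasingly eating.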
