ChatGPT and us

30 March 2023

Foreword: I am no AI specialist.
(sorry - that was five, depending on how you count them)

Seriously, while it's very cool and all, I haven't had nearly the time I'd like to play with everything these tools can do.  The wall of text below will likely contain errors introduced in the name of brevity.  I've edited this so many times...  Anyway, on with the show.

OpenAI’s GPT-4 and Google’s Bard - among others - are about to be unleashed on the world in a myriad of applications, both visible and not.  ChatGPT in particular has proven to be a very useful code generation tool, within certain bounds.  Its text and article generation is very effective, and it often summarises large reams of data from disparate sources rather well.  As an inference engine, it has improved by leaps and bounds over previous work in the field.

But it's not without its problems.

There has been a lot of discussion about what Large Language Models (currently masquerading as 'AI' in general conversation) - and ChatGPT in particular - will shortly be doing to benefit the world.  They'll be writing marketing blurbs, collating and condensing information, writing computer code, and doing myriad other things.  Many of these tasks will sit at the interface of the large and complicated network of information humanity has built (otherwise known as The Internet), handling the jobs that people find onerous or overly time-consuming.  This will likely involve a large amount of summarisation and content generation for user consumption.

Microsoft has quite literally bought into the tool in a big way, including stumping up for the bulk purchase of thousands of the ultra-high-end Nvidia GPUs that OpenAI use to power the algorithms behind it all (the 'GPT' in ChatGPT stands for Generative Pre-trained Transformer).  It's already used in the Bing search engine, and will pop up in many more parts of the Microsoft ecosystem as they try to maximise the benefit from their massive investment.

When training these large models, the base data comes from the internet.  In short, a snapshot was taken of the internet as it stood a couple of years ago (ChatGPT’s training data currently ends in late 2021), and that snapshot was used to teach the model sentence structure, grammar, idiomatic speech, and more.  But what exactly was in the training data?  Who curated it, and to what requirements?  Is there any inherent bias in the data, or even in the model design?  This technology will certainly be used to generate news articles in some form, and these are questions we don’t have answers to.  Without those answers, we have no way of knowing whether a generated article is trustworthy, or whether it’s going to feed an echo-chamber effect in the media.
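For a sense of the mechanics, here is a deliberately tiny sketch of the core training idea - next-token prediction - using a toy recurrent network and a one-line 'corpus' in place of a web snapshot.  It illustrates the principle only, not OpenAI's actual pipeline; real GPTs are transformers trained on vastly more data, but they absorb whatever patterns (and biases) that data contains in exactly the same way.

```python
import torch
import torch.nn as nn

# A stand-in "web snapshot": real training corpora are trillions of tokens.
text = "the cat sat on the mat . the dog sat on the rug ."
vocab = sorted(set(text.split()))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in text.split()])

class TinyLM(nn.Module):
    """A toy language model; real GPTs use transformers, not GRUs."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over the vocabulary at each position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# The whole of "training" is this: show the model some text, ask it to
# predict the next token, and nudge the weights when it gets it wrong.
x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)                                   # (batch, length, vocab)
    loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scale that loop up by ten or so orders of magnitude and you have a GPT training run - which is exactly why the question of what goes into the text matters so much.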

And that’s just one of the many questions we don’t have an answer to.  Safety is also proving to be a problem.  These models are being trialled in the public arena, and they’re already being used to write malware.  Until recently, ChatGPT would cheerfully tell you the most efficient way to kill a large number of people.  It would also accept incorrect statements from the user as factually correct, leading it to return false information later in the same conversation.

While these tools are constantly being updated and tweaked, there will always be people who attempt to use them for nefarious purposes.  The job is made harder by the fact that ‘harm’ is often dependent on context.  A member of one of the test teams for OpenAI noted “…for nearly every prompt that is safe and useful, there is an unsafe version. You want the model to write good job ads, but not for some nazi group. Blog posts? Not for terrorists. Chemistry? Not for explosives…”.  With such a massive project - I believe the official Scope of Work just read 'Yes' in bold characters - it's not the easiest problem to solve, or to put guard rails around.

In academia, the tools suggested for detecting AI-written submissions and cheating in exams have so far proven unreliable.  The models now replicate human writing to a remarkable degree, to the point where there is no dependable way to tell the two apart.
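To see why, consider the approach many proposed detectors take - scoring how predictable a passage is to a language model.  (This is my own illustrative sketch, not any particular product.)  AI output tends to be more predictable, but plenty of human prose is too, so any threshold misfires in both directions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; lower = more model-like."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return float(torch.exp(loss))

# Any cutoff between 'human' and 'AI' here is a guess: polished human
# prose can score low, and a lightly edited AI draft can score high.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```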

So, some work needs to be done.  Maybe a lot of work.  A bunch of tech leaders have just put their names to an open letter calling for a pause in the development of AI systems ‘more powerful than GPT-4’.  It states that “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources… Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Goldman Sachs recently estimated that approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and that up to a quarter of all work could be done by AI entirely.  Administrative workers and lawyers are expected to be the most affected.  On the other hand, the same tools could boost the output of the people using them.  The Goldman analysts note that “Although the impact of AI on the labor market is likely to be significant, most jobs are only partially exposed to automation, and are thus likely to be complemented rather than substituted by AI”.  They also note that technological innovation which initially displaces workers has historically created employment growth over time.

I’ve only covered text generation here, but the questions above also apply to Midjourney, DALL-E and other image generation applications.  Things could get even more pointed when companies start consuming AI as a service and training their own models on their corporate datasets.  Are there privacy concerns about your data if, for example, your bank, finance company, or lawyer (see above) decides to build an AI chatbot to take some of the load off its frontline staff?  A sketch of what that might look like is below.
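To make the question concrete, here's a hypothetical sketch using the shape of OpenAI's chat API as it stands in early 2023; the bank, the customer, and the account record are all invented for illustration.

```python
import openai  # openai-python as at early 2023 (pre-1.0 API)

openai.api_key = "YOUR_API_KEY"  # placeholder

# An invented customer record, purely for illustration.
customer_record = "Name: J. Smith; balance: $12,400; missed payments: 2"

# The moment this call is made, the customer's details have left the
# bank's systems and are sitting on a third party's servers.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful bank assistant."},
        {"role": "user",
         "content": f"Summarise this account for the customer: {customer_record}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

It'll be an interesting few years seeing how this all plays out.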

Footnote: The image at the top of this article was generated by DALL-E, with a prompt around Rodin's Thinker.  I got this as a result, and thought it was too good to pass up.  Task failed successfully.

 

GRAEME EVANS
March 2023
