Heard about ChatGPT?

Robert Engels
4 min read · Dec 15, 2022

Now you're wondering what the fuss is about? Here is a little help.

You might have a friend who recently told you about this great tool that helps you write an essay on nearly any topic of your choice, produces limericks at will, can help you with your homework and, on top of that, can deliver computer code for you as a non-programmer?

Image by Pikura, Pixabay 2022.

Your friend was right. OpenAI (a company started by Sam Altman, Elon Musk and others in 2015) has been building systems that can actually assimilate vast amounts of text, process it and bring it back to you in a variety of forms. The model they built was called GPT-3 and became known as a great language model, but it came with some drawbacks.

One of the main drawbacks was that it was not good at understanding query prompts and then generating appropriate outputs. It was also not good at handling follow-up questions.

The company then used another technology, called Reinforcement Learning, together with a crowd-sourced community of humans answering query prompts, to build an additional model of how people do just that: answer queries. Combining these two models produced a tool called InstructGPT, launched in February 2022.

From there it went fast. InstructGPT showed a much better ability to answer your queries, and it also learned how to deal with follow-up questions on those answers. What was lacking was a good user interface. In November 2022, a new interface was launched under the name ChatGPT, and indeed, it imitated a chatbox as you know it. The answers come quickly, look relatively good at first glance for a wide variety of topics, and they even made it type character by character like a human!

And honestly, that was all that was needed to make an already existing technology (Large Language Models, GPT) fit for the masses. Within a month after its launch "everybody" spoke about this new tool, played with it, blogged and made videos about it. The ChatGPT tool went from unknown to being discussed everywhere (including my daughter's philosophy classes at high school) within a few weeks. Kudos!

And now? Is it as intelligent as it seems to be?

Good question. No. It's not intelligent. The system matches extremely complex patterns in huge amounts of text, using statistical methods to figure out which words follow each other in different contexts and cases. That is all. And by doing so, it can indeed provide reasonable answers to the questions you give it.
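To get a feel for what "figuring out which words follow each other" means, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually works (that involves neural networks with billions of parameters trained on vast amounts of text); it only counts which word follows which in a toy sentence and then picks the most frequent continuation, which is the same intuition in miniature.

```python
from collections import Counter, defaultdict

# A toy corpus; real models are trained on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a simple bigram count).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently seen continuation after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (seen twice after 'the' in the toy corpus)
```

Note that such a model has no notion of truth: it only knows what tends to come next, which is exactly why the answers can sound fluent and still be wrong.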

At least at first sight. Many people have tried it and published results that seem amazing when you skim them, but turn out to be complete bogus when you dive into them. Pregnant women are told to eat broken porcelain (it contains a lot of calcium, which is what babies need), medical and health issues are interpreted and discussed in strange ways, biographies of well-known people are incorrect, and all of a sudden a famous book has a different ending.

Not surprising: as said, the system uses statistical methods to arrive at answers, and such models are always approximations of the truth. And even if the system provides correct answers, you will still have to read them carefully, as the system does not know what it doesn't know, so you get no warning when results are uncertain. It will keep delivering answers with confidence even when their quality cannot be guaranteed, which leads to misleading and error-prone answers. Even worse, if that fake information is fed back into the system, the next generation of answers will be built on earlier errors and incorrect statements.

And there is another danger. This technology can be used to generate fake news, fake facts and fake personalities, and to combine them with the image generation we already have. Then abuse is just around the corner: easy to read, looking authoritative, backed up by fake facts that make it all look real. Who is going to tell that these are not real facts provided by real people, when in fact everything is computer generated? In the future we might need a completely different way of identifying the people we interact with, not only to check whether we are communicating with a human at all, but also to check whether the claimed identity is correct! Are you really who you say you are? And you want me to transfer my savings to another account? You can imagine the situations you do not want to end up in.

But I like it, so why so negative?

Another good point. Yes, it seems to work great. The answers look very trustworthy. For many queries reasonable answers are given, albeit a bit shallow in many cases. And let's be honest, having such a tool available for knowledge-intensive tasks might be just fantastic! This is probably where it will go first: providing quick insight into the topic you have to write an essay about. Or the lawyer getting some quick answers on the next case and earlier rulings around it. Or a scientist who gets some initial results and ideas to kick off a new, large vaccine programme. What about the tourist who just wants to learn a bit about the country he is travelling to next week? Or the school kid getting help with maths or history lessons. There are plenty of positive scenarios imaginable. And we should (and presumably will) explore them, learn from them and produce even better tools.

But do not forget: with all the fun and interesting possibilities, with all the good and the benefits of this new technology, comes an obligation. An obligation to make sure we do not (automatically) turn facts into fakes, and that errors, mistakes and deliberately faked information remain detectable.

We need mechanisms to ensure that all the knowledge we have collected to build and improve our society and our technological advances stays attributable and useful. We will need it!


Robert Engels

Broad interest in topics like Semantics, Knowledge Representation, Reasoning and Machine Learning, and in putting them together in intelligible ways.