Everyone is Chatting About AI Chatbots

ChatGPT and Dall-E are just two of the more than 450 generative AI chatbot programs out there, with scads of new AI chatbots coming online so fast that we mere mortals will forever be playing catch-up.

What are chatbots used for?

Since its introduction in November 2022, ChatGPT has been creating articles, poems, term papers, social media posts and every other kind of written content known to humankind, with alarmingly effective results. It is often nearly impossible to detect that these pieces of content were written by a machine. For short-form content, such as emails or video game chats, the output can be indistinguishable from human responses.

Generative AI chatbots can do remarkable, and scary, things. When they write, the words are usually what you’d expect, placed where you’d expect them. However, the machines haven’t taken over, because they cannot think or form intent.

The question is, “How do AI chatbots work?”
An even better question is, “What is generative AI generating, exactly?”

Generative AI does not know anything. And, contrary to popular opinion, it does not have the ability to think. It is not sentient, nor can it evaluate sentiment. It is a neural network loosely modeled on the human brain’s network of neurons, trained as a “large language model” on billions of words of digital text, including books, news articles, online chat logs, Wikipedia content and more.

From this massive data resource, it pinpoints billions of statistical patterns in the way people connect words, letters and symbols so that, when prompted, it can generate its own content. Image models learn the same way: to identify a fire hydrant, a model has to find patterns across thousands of images of fire hydrants before it learns to recognize one.
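
For the curious, here is a minimal sketch of that pattern-matching in action, in Python, using the open-source Hugging Face transformers library and the small, older GPT-2 model (chosen purely for illustration; it is not the model behind ChatGPT). The model simply continues a prompt with words it finds statistically likely.

```python
# A minimal sketch: a small pretrained language model continuing a prompt.
# Assumes the open-source "transformers" library is installed (pip install transformers).
# GPT-2 is used purely for illustration; it is far smaller than ChatGPT's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A fire hydrant is"
result = generator(prompt, max_new_tokens=20, do_sample=True)

# The model does not "know" what a fire hydrant is; it only predicts
# statistically likely next words based on patterns in its training text.
print(result[0]["generated_text"])
```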

Given that the internet is its teacher, generative AI can, and does, regurgitate offensive content or content with blatant factual errors, with absolute confidence. It also cannot distinguish fact from fiction, nor can it recognize legally protected material, such as copyrights, trademarks or sourcing rights, or verify the origins of its source material, whether true or false. This leads to accusations of legal violations and plagiarism. For example, when it writes code, it can introduce subtle flaws or blatant errors that can cost the commissioning company dearly in recalls and damage to its reputation.

Should marketers be worried about AI?

For marketers, the impact of AI chatbot-created content is still unfolding. For example, ChatGPT creates content based on the cues and descriptions in its prompt; the greater the detail, the greater the chance that the content will read as if it were produced by a human. That works well for a quick social media post. However, because AI chatbots cannot think, they cannot express the subtleties and nuances necessary for articulating a brand position or brand messaging.

It’s similar for designers of marketing materials. While art and design chatbots, like Dall-E, can produce images from prompts, the results tend to be impersonal, disconnected from a company’s brand position or product, and without colors used strategically to amplify the brand. Compelling logos result from a human having a conversation with another human to capture the subtleties and nuances of the brand in an image that unequivocally provides a visual representation of the company’s products and brand position.

AI chatbots are a good tool for initial research because their generative AI can delve deeper than a traditional internet search. With carefully curated prompts, a chatbot can cut research time significantly by pulling together only the information relevant to the question at hand. It’s like web surfing powered by generative AI, where select information comes to you. However, the cautions above still apply: check the sources of the chatbot’s information to be sure they are authentic, accurate and verifiable.
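
As an illustration of what a curated research prompt can look like, here is a minimal Python sketch using the OpenAI client library. The model name and prompt wording are assumptions for the example, not recommendations, and it assumes you have an API key configured.

```python
# A minimal sketch of sending a carefully curated research prompt to a chatbot.
# Assumes the "openai" Python library is installed and an API key is set in the
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Summarize the three most common uses of AI chatbots in marketing. "
    "For each use, name the type of source you relied on so the claims "
    "can be checked against the original material."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your account offers
    messages=[{"role": "user", "content": prompt}],
)

# The reply still needs human fact-checking: the model can state errors with confidence.
print(response.choices[0].message.content)
```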

To help determine the source of content, a watermark test for AI chatbots is being launched on Feb. 15 that can reveal text written by AI. The code will be free, but it works best when it is incorporated into a chatbot’s software. The watermark system has its limitations, but it will help teachers and professors distinguish their students’ original work from text a chatbot generated for them.
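
The article does not name the tool, but the general idea behind statistical text watermarks can be sketched in a few lines of Python. In one common approach, the generating chatbot quietly favors a “green list” of words chosen by a hidden rule, and a detector later counts how much of the text lands on that list. The rule and sample below are toy assumptions for illustration only, not the actual system being launched.

```python
# A toy illustration of statistical watermark detection, not the actual tool.
# Idea: a watermarking generator secretly favors "green" words chosen by a
# hashing rule; a detector counts how often the text lands on green words.
import hashlib

def is_green(previous_word: str, word: str) -> bool:
    """Assumed toy rule: hash the word pair and call half of all outcomes 'green'."""
    digest = hashlib.sha256(f"{previous_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Unwatermarked human text should hover near 0.5; watermarked machine text,
# generated to favor green words, would score noticeably higher.
sample = "Chatbots are here to stay and we humans need to know what they can do"
print(f"Green-word fraction: {green_fraction(sample):.2f}")
```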

Chatbots are here to stay

Chatbots are here to stay, and we humans need to be aware of what they can and cannot do. Their development will always run ahead of any tools we can build to expose what a generative AI platform has created. But there are telltale signs in language use and word choice that can reveal the actual author of the content.

Stay vigilant. Stay tuned…

– (This was 100% written by a human)