06.02.2023 By Current Global's North America Healthcare team.

Poking and Prodding ChatGPT: 7 Early Takeaways for the Chatbot’s Potential Role in Health Communications.

[Image: Hands working on a laptop keyboard, overlaid with a graphic outline of a chatbot head.]

Ever fascinated by new technologies and new ways of communicating, the Healthcare practice at Current Global has been testing ChatGPT, the chatbot built on a large language model that has taken the world by storm since late last year. To put it through its paces, we've asked it questions, pressed it on its responses, and fact-checked its work. We've had it write blurbs, blogs, social media copy, essays, and more to figure out what it can and can't do. As others have found, the results have been mixed – it can generate passable responses (in some areas more than others) but has shortcomings that make it a no-go for certain types of work.

Here are the early takeaways of three members of Current Global’s Healthcare team.

Alice G. Walton, PhD – Scientific Communications and Editorial lead.

As a science writer and editor, I've been on the more skeptical end of the spectrum while getting to know ChatGPT. Although some of its capabilities are fascinating or funny (ask it to explain how ibuprofen works in Old English), its penchant for making up facts and its sometimes-monotonous tone leave a lot to be desired. Some of these issues may be fixed in the next iterations, but for now, I'd say exercise cautious curiosity.

It presents serious drawbacks for science writing (and for facts in general).

If you ask ChatGPT to write you a referenced piece on a topic in science, it can generate a clear, if humdrum, response. But look more closely, and you may notice that some of its references aren’t real. (Pressed, ChatGPT will apologize for its inaccuracy and tell you they were only examples.) The model wasn’t trained on references, which is a major drawback for people who must annotate their work, comply with FDA regulations, or just like sticking to facts. Its inability to verify sources and fact-check its own work – at least for now – is a major barrier for science writers.

It doesn’t have a “voice”.

While the chatbot can generate content at various levels of complexity, it doesn't really have a voice. Its copy can be repetitive, falling back on the same transition words, structures, and grammatical errors a student might make. This may change, at least partially, in the next iteration, but for now its sweet spot seems to be short, informational copy that doesn't require tone or nuance.

Most industries will need to put in place guidelines.

In these early days, ChatGPT is already making us think about some hard questions – around authorship, legal responsibility, copyright, disclosure, and more. For those writing about topics in healthcare or mental health, there are additional considerations, like whether it is appropriate to use an AI tool to write materials for the public, patients, or vulnerable populations. Should its use be disclosed, always or only sometimes? There's talk of watermarks in the future, but it's unclear how that would work or what hacks would follow. In any case, some organizations are beginning to develop guidelines around its use, and hopefully more industry associations and governing bodies will follow suit.

Daniel Brackins, Digital Lead.

ChatGPT has a lot of potential. It's a powerful tool that can supercharge the work people are already doing. But it is not a magic solution, and it is only as good as the user makes it.

The quality of ChatGPT's output is typically limited by the end user.

There has been a lot of concern that AI will make certain jobs obsolete. In its current form, I'd argue it will make those jobs more important than ever. For example, a generic prompt will produce a decent but generic response: "write a LinkedIn post that talks about the benefits of AI in the healthcare field." Where the end user comes in is applying creative and strategic thinking to pull out the best that ChatGPT can offer. A more refined prompt would be: "write a LinkedIn post that talks about the benefits of AI in the healthcare field. Focus specifically on billing, insurance, and payers as it relates to patients. Ensure that the post is written using colloquial language and isn't overly scientific in tone; it should be written at the 8th grade level. Also provide three relevant hashtags." There's a science and an art to developing prompts (prompt marketplaces, such as PromptBase, have already sprung up), and prompt writing will become an essential skill for people in every industry.
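To make the idea concrete, here is a minimal sketch, not from the post itself, of how the components of a refined prompt described above (topic, focus, tone, reading level, extras) might be assembled programmatically. The function name and parameters are hypothetical, purely for illustration.

```python
# Hypothetical sketch: assemble a refined prompt from generic and
# specific components, mirroring the layered approach described above.

def build_prompt(topic, focus=None, tone=None, reading_level=None, extras=None):
    """Assemble a prompt, starting generic and adding refinements."""
    parts = [f"Write a LinkedIn post that talks about {topic}."]
    if focus:
        parts.append(f"Focus specifically on {focus}.")
    if tone:
        parts.append(f"Ensure that the post {tone}.")
    if reading_level:
        parts.append(f"It should be written at the {reading_level} level.")
    if extras:
        parts.append(extras)
    return " ".join(parts)

refined = build_prompt(
    "the benefits of AI in the healthcare field",
    focus="billing, insurance, and payers as it relates to patients",
    tone="is written using colloquial language and isn't overly scientific in tone",
    reading_level="8th grade",
    extras="Also provide three relevant hashtags.",
)
print(refined)
```

Calling the function with only a topic reproduces the generic prompt; each extra argument layers on a refinement, which is exactly the skill the paragraph above describes.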

There are limitations on complex tasks.

ChatGPT can help you write code and formulas, but don't expect it to do the heavy lifting. While it can offer suggestions or review your work on complex tasks, it won't hand you a complete solution. It won't effectively give you the entire codebase for a website (yet). However, if you need help with elements of a website you are building, it can give you snippets of code or diagnose issues in code you supply. This does require the end user to have the knowledge to refine and guide ChatGPT.

Melanie Holst, Client Experience Intern.

I've found ChatGPT can be helpful for saving time on simple or repetitive tasks, although it has some limitations.

The program could be used for basic background research.

Recently, I tested it out as I was researching a biomarker in multiple sclerosis. To get some background on the science before I dug into a media audit, I asked ChatGPT to "explain to me like I am a 5-year-old what the X biomarker test is and why it's important." It gave me a full answer in easy-to-read language instantly, condensing a research task from 25-30 minutes down to 5-7 minutes. Unfortunately, ChatGPT's training data only extends through 2021, so it didn't include the most recent scientific findings.

It could be used to automate tedious tasks – but still needs to be fact-checked.

I've also tested ChatGPT to see if it can handle media searches. It was able to write me computer code to automate media searches based on specific keywords and add the results to an Excel document. All I had to do was copy and paste the code into the software program, and it reloaded my media searches automatically. However, in some of the requests I've made, ChatGPT has "hallucinated" responses that were inaccurate: it can fabricate events and come up with examples that are untrue. While I can appreciate some of ChatGPT's features, the technology certainly has room for improvement.
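The automation described above can be sketched roughly as follows. This is a hypothetical example, not the actual code ChatGPT produced: it filters headlines for keywords and writes the matches to a CSV file, which Excel can open directly. The function name, sample headlines, and filename are all made up for illustration.

```python
# Hypothetical sketch of a keyword-based media search: keep only the
# headlines that mention any keyword, then save matches to a CSV file.

import csv

def search_media(headlines, keywords):
    """Return headlines containing any keyword (case-insensitive)."""
    lowered = [k.lower() for k in keywords]
    return [h for h in headlines if any(k in h.lower() for k in lowered)]

headlines = [
    "New biomarker test shows promise in multiple sclerosis",
    "Local sports roundup",
    "Regulators weigh guidance on AI in medical devices",
]
matches = search_media(headlines, ["biomarker", "multiple sclerosis"])

# Write matches to a CSV file that opens cleanly in Excel.
with open("media_search.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["headline"])
    writer.writerows([m] for m in matches)
```

Even a small script like this illustrates the point above: the tool saves time on the tedious part, but a human still has to check that what lands in the spreadsheet is accurate.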

—————

The bottom line? ChatGPT is an impressive tool – just one of many – whose abilities will grow in the future. While it has some drawbacks for certain industries, this does feel like a moment in time that’s opening the door to new dimensions of communication. We’ll be eager to watch – and experiment with – the tools as they evolve.

