The future of AI in medicine has taken a significant step forward with the recent announcement of GPT-4 by OpenAI. This latest linguistic and visual marvel has piqued the interest of healthcare professionals globally, leaving many wondering if GPT-4 will replace them or serve as a digital copilot. In this column, we'll demystify GPT-4, examining its potential applications, limitations, and misconceptions while addressing the burning question: Will GPT-4 don our white coats and stethoscopes?
As we delve into the world of GPT-4, it's essential to recognize that this AI model is the advanced successor of GPT-3/ChatGPT. It boasts impressive abilities to comprehend and generate text, as well as analyze images. The possibilities are intriguing - imagine showing GPT-4 an image of a traffic light and asking, "Okay, what should (or shouldn't) I do here?" GPT-4 might then effortlessly identify the color of the light as red and advise, "I wouldn't recommend crossing the road. The light is red."
While this example might seem trivial, the potential implications for medical applications are profound. Now imagine presenting GPT-4 with an X-ray and asking, "What do you see in the X-ray, and where?" GPT-4 could potentially examine the image, pinpoint a lesion, and offer a precise location and description. Although this feat may be beyond the current version of GPT-4, it's not entirely out of reach in the near term.
"Sorry, can you repeat your question?" Source: AI-generated image using DALL-E with the prompt "AI robot in a white coat lecturing human doctors in a large auditorium, Vincent Van Gogh-style."
Chess, GPS, and GPT-4: A Tricky Assumption
Imagine you're playing chess with a computer that effortlessly outmaneuvers you at every turn. It's tempting to think that if it can excel at a task as complex as chess, surely it can handle something as "simple" as prescribing medications (simple, at least, for our clinical readers). But that assumption is a slippery slope. GPT-4's chat format creates the illusion of a conscious, superior being offering its opinion, leading some to overestimate its capabilities. It's like believing your GPS can give you relationship advice just because it's good at finding the fastest route to the hospital.
It's easy to assume that if certain achievements have been made, more advancements are just around the corner. However, expecting GPT-4 to provide accurate answers to every question, across every field of work, and in the right format is not realistic. If AI could achieve such a level of expertise, it would not only replace providers but also put UN politicians, football coaches, professors, and every other kind of knowledge worker out of a job. Just a friendly reminder that AI does, and should, have its limits; if all clinicians were ever replaced, you would hardly be alone.
GPT and Clinicians: The Art of Asking Questions
In the famous sci-fi novel The Hitchhiker's Guide to the Galaxy, hyper-intelligent beings build a computer named "Deep Thought" to answer the ultimate question of life, the universe, and everything. After millions of years of computation, Deep Thought finally delivers its answer: "Forty-two." The answer is useless, the computer explains, because no one ever actually understood the question being asked. Millions of years of effort were wasted on a poorly posed question. Asking the right question is crucial!
As a clinician, you might use Google search much as your patients do, but your medical expertise enables you to ask more specific, relevant questions and obtain appropriate answers for your patients. For example, you might search for "the optimal approach to managing hypertensive urgency to avoid brain ischemia" rather than "please help me with my seizure." Yet Google search has most likely never made us, as clinicians, fear for our jobs.
The same principle applies to using ChatGPT. While this AI tool can provide valuable information, it is essential to ask the right questions to get the most accurate and helpful answers. The ability to formulate precise, meaningful questions is a unique skill that medical professionals possess, making your role indispensable in patient care. AI tools like ChatGPT can complement and support you, but they cannot replace your expertise in framing the right questions for each patient's unique needs.
"What should I ask?" Source: AI-generated image using DALL-E with the prompt "These people have no clue on what to do with a cosmic superintelligent AI."
The True Potential of GPT: Beyond a Chat
You may wonder if the chat format limits the perception of GPT's full potential. This is where APIs come into play. Some people may be misled by the chat interface, imagining an intellectual being behind the screen, without understanding the true extent of the model's capabilities. In fact, OpenAI is actively working to unlock GPT's potential through APIs, which enable integration with various applications and services, expanding its reach and impact beyond the chat interface.
Wait, What is an API?
An API can be thought of as a vehicle for a product feature, providing the power to access certain functionalities. Consider an EHR that needs to calculate the likelihood of a claim being denied in order to build an automated internal workflow, such as sending the patient a text message about the denial and flagging it for doctors in the EHR. When the EHR company cannot build this on its own, it might use a third-party service: the EHR sends patient information through the service provider's API, which carries the feature and processes the information. In this example, the API lets the EHR access the service provider's specialized features. For the service provider, developing and maintaining APIs is its core expertise, involving advanced technology and significant effort. This makes APIs essential to the provider's product and a primary source of revenue.
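To make that flow concrete, here is a minimal, runnable sketch of the claim-denial workflow described above. Everything in it is hypothetical: the service, the risk score, and the field names are illustrative, and in a real integration the stand-in function would be an authenticated HTTPS call to the third party's API rather than a hard-coded value.

```python
def predict_denial_risk(claim: dict) -> float:
    """Stand-in for the third-party service reached via its API.

    In a real integration, this would be a network request, e.g. an
    HTTPS POST of the claim data to the provider's endpoint with an
    API key. Here it returns a fixed score so the sketch is runnable.
    """
    return 0.82  # pretend the service predicts an 82% chance of denial


def process_claim(claim: dict, threshold: float = 0.5) -> list:
    """The EHR's automated internal workflow built on top of the API."""
    actions = []
    risk = predict_denial_risk(claim)
    if risk >= threshold:
        # The two downstream actions described in the text:
        actions.append(f"text patient {claim['patient_id']}: claim may be denied")
        actions.append("flag claim for physician review in the EHR")
    return actions


actions = process_claim({"patient_id": "P-001", "procedure": "office visit"})
print(actions)
```

The point of the sketch is the division of labor: the EHR owns the workflow logic, while the specialized prediction lives behind someone else's API.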
GPT-4 can offer great potential through third-party applications, which, in simpler terms, means that it can be integrated into various software tools to perform different tasks. It can be used to develop an application that transforms complex medical records into simplified, patient-friendly versions, like a translator for medical jargon. Additionally, GPT-4 can assist in creating applications that automatically extract relevant information from medical records and generate insurance claims, acting as a digital assistant to simplify administrative processes and reduce human error.
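As an illustration of the jargon-translator idea, the sketch below constructs the kind of request such an application might send through a chat-completion API. The system/user messages structure mirrors the format commonly used with GPT-style APIs, but the model name and instruction wording are assumptions, and the actual network call is deliberately omitted.

```python
def build_translation_request(medical_note: str) -> dict:
    """Build a hypothetical chat-completion payload that asks the model
    to translate clinical jargon into patient-friendly language."""
    return {
        "model": "gpt-4",  # illustrative model identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    "Rewrite the following clinical note in plain, "
                    "patient-friendly language. Do not add information "
                    "that is not present in the note."
                ),
            },
            {"role": "user", "content": medical_note},
        ],
    }


payload = build_translation_request(
    "Pt presents with acute exacerbation of COPD; started on nebulized albuterol."
)
print(payload["messages"][0]["role"])
```

An application would send this payload to the API and show the model's reply to the patient; the clinician's contribution is the system instruction, which encodes exactly the kind of question-framing skill discussed earlier.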
Limitations of GPT-4 in Clinical Decision Support
The GPT series, including GPT-4, is known for generating seemingly plausible yet factually incorrect statements, because these models are trained to produce text that appears legitimate. This propensity for inaccurate information, or "hallucinations," means users should approach its responses with caution, particularly in high-stakes fields like medicine. If you use GPT for direct medical guidance, at least with the current version, there should be a process in which experts can review the content.
Another limitation of GPT-4 is steerability: the difficulty of controlling the model's output to follow specific instructions or produce desired outcomes. For example, when using GPT-4 as an assistant for clinicians, the model may generate responses based on generic knowledge rather than the specific medical protocol provided, even when explicitly instructed otherwise.
Additionally, the format and tone of the output can be inconsistent, sometimes resembling communication with a patient rather than a fellow medical professional.
Achieving full steerability remains an ongoing challenge for AI developers. While GPT-4 has shown promise in various applications, it is essential for medical professionals to be aware of these limitations when using the technology as an aid in decision-making processes.
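One way to operationalize the expert-review safeguard mentioned above is a simple review gate: every AI-generated draft is held until a clinician approves, edits, or rejects it, and nothing reaches a patient directly. The sketch below is a minimal illustration under assumed names, not a real product workflow.

```python
from dataclasses import dataclass, field


@dataclass
class Draft:
    """An AI-generated draft awaiting human sign-off."""
    text: str
    status: str = "pending_review"  # pending_review -> approved / rejected


@dataclass
class ReviewQueue:
    drafts: list = field(default_factory=list)

    def submit(self, ai_text: str) -> Draft:
        """Model output enters the queue; it is never released as-is."""
        draft = Draft(ai_text)
        self.drafts.append(draft)
        return draft

    def review(self, draft: Draft, approved: bool, edits: str = "") -> None:
        """A clinician approves, edits, or rejects each draft."""
        if edits:
            draft.text = edits
        draft.status = "approved" if approved else "rejected"


queue = ReviewQueue()
d = queue.submit("Your X-ray shows a small lesion in the left lung.")
queue.review(
    d,
    approved=True,
    edits="Your chest X-ray shows a small spot on the left lung; "
          "we will discuss the next steps at your visit.",
)
print(d.status)
```

The gate does not fix hallucinations or steerability; it simply ensures that a qualified human, not the model, has the final word on what patients see.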
"We can do this together." Source: AI-generated image using DALL-E with the prompt "Finally, the AI robot and the doctor are together. Wishing you all of the love and happiness!"
Embracing Innovation: A New Era of Collaboration Between AI and Healthcare Professionals
In conclusion, the advent of GPT-4, in tandem with other rising stars like no-code platforms, provides an unprecedented opportunity for medical professionals to become innovators and change-makers in their field. By leveraging the power of GPT-4, healthcare professionals without technical expertise can now contribute to the development of applications and tools tailored to their unique needs and knowledge. This new level of accessibility allows doctors and other medical professionals to actively shape the future of healthcare, paving the way for more efficient, patient-centered care, and innovative solutions that ultimately benefit both practitioners and patients.
In the near future, there will be thousands of tools built on GPT-like large language models, such as Microsoft's Copilot. These tools are not chatbots themselves; they are built on GPT-driven APIs, which enable complex features that understand and process natural language. This is the perfect time for you, as a clinician, to contribute to innovation rather than fear job loss. By combining your domain knowledge with these powerful APIs, you can "design" features tailored to your specific needs, ultimately enhancing your practice and patient care. The features you sketch out with GPT itself might not be something you can deploy to the world directly, but no-code platforms can turn these abstracted blocks of features into something real and tangible, even full apps. Embrace this new era of technology and use it to your advantage to streamline workflows, improve patient outcomes, and enhance decision-making processes.
If you can't beat them, join them.