What are Large Language Models (LLMs) such as ChatGPT?
LLMs are machine learning models trained on vast amounts of text data, capable of predicting the next word in a sequence and generating human-like text. They are powerful tools for processing data, streamlining workflows, and simplifying tedious tasks.
How does Synthpop apply LLM technology to healthcare applications?
This technology can enable a wide range of use cases. Below are a few examples of Synthpop’s technology supporting healthcare teams.
Automating patient intake
LLMs can automate the intake process through intelligent chatbots that interact with patients, filling out intake forms efficiently to save time and enhance the patient experience. Our system can also be trained to perform rule-based tasks within your system that are typically tedious and time-consuming to accomplish manually for each new patient. AI processing and feedback let you interact with referring clinics in real time if any documents are missing.
Creating encounter notes
Our AI-assisted charting allows more time for meaningful patient-provider interaction. AI and LLMs can transcribe doctor-patient interactions and convert them into coherent, structured SOAP notes and After-Visit Summaries (AVS).
By processing the notes, patient medical history, and insurance information, our AI assistants can also recommend next steps, including reminders for necessary documentation.
Verifying insurance and managing denials
LLMs can understand insurance policy details, automate the verification process, and generate clear summaries, cutting down verification times. The speed and accuracy reduce errors and the potential for denial. When denials do occur, our AI agents can quickly generate context-rich appeal letters, and they can be trained to navigate phone trees and long wait times before human interaction is needed.
How does AI work with the clinical staff?
Synthpop aims to reduce clinical burnout by using trained AI “Assistants” to enhance your existing workflow, expanding the capabilities of your clinical and administrative staff. By quickly reviewing and summarizing countless data sets, tailored to the outputs you need, our assistants cut hours out of administrative work. Contact us to discuss how you can use Synthpop’s AI assistants to make healthcare more human.
How does pricing work?
Synthpop offers various pricing models based on your needs. There are no hidden fees and no long-term contracts. Let’s connect to get started and stay in touch about promotions and packages.
I’ve seen multiple articles that say ChatGPT is not suitable for healthcare because it can “hallucinate” and provide information that isn’t accurate. How does Synthpop address these accuracy challenges?
Synthpop addresses these challenges by using large datasets that are fine-tuned for specific healthcare applications. For example, our AI agents ingest thousands of pages of insurance policies and stay up to date as those policies change. We also train the AI models, while protecting health information (see the privacy questions below), using massive amounts of patient data.
Our models have been shown to generate summaries that are accepted by clinicians as-is over 90% of the time. These steps, combined with human expert involvement and rigorous testing, produce a purpose-built solution that is optimized for the unique tasks it was designed to perform.
Multiple publications have advised healthcare providers to avoid using ChatGPT because confidential patient data can’t be protected. How does Synthpop address such privacy concerns?
Security and privacy are incredibly important to Synthpop. We’ve developed a unique way to replicate the statistical properties of real-world data without containing personally identifiable information. The data we process satisfies the Department of Health and Human Services (HHS) criteria for “de-identified” data, and as such it is no longer considered Protected Health Information (PHI). In other words, HIPAA no longer applies. It doesn’t get any more secure than that!
How is Synthpop more secure than most other AI products and services on the market?
We’ve designed products that meet the highest standards of security. Our entire software infrastructure, including our AI agents, is HIPAA compliant. We use Microsoft Azure, Amazon Web Services, and Google Cloud, all of which are SOC 2 Type II compliant and HITRUST Certified.
Several vendors are willing to sign a Business Associate Agreement (BAA) to address patient privacy concerns. Isn’t a BAA enough? Why does Synthpop go further?
Cloud services such as AWS provide all the protections necessary to satisfy HIPAA regulations, and they will even sign Business Associate Agreements with healthcare organizations. But does this make them HIPAA compliant? That’s not completely clear.
With software and cloud services, the extent to which patient data is protected depends on how the technology is configured. Unfortunately, many healthcare organizations, particularly smaller ones, lack the expertise to configure these services properly. Amazon, for example, published a 26-page document describing how organizations can use AWS in a HIPAA-compliant manner; given that complexity, configuration mistakes are common.
The Synthpop solution is fully secured without requiring any technical expertise on the part of the customer.
Patient Concerns and Consent
How do I explain the use of AI note-taking to patients?
From our experience, transparency about the use of AI note-taking is the best strategy. Explaining to the patient why you chose to use AI in your practice and its benefits to you and the patient are a good place to start. For example:
“It's essential for us to maintain transparency and ensure you're comfortable with the technological tools we use during your visits. One such tool is Synthpop’s AI-powered transcription service. Here's what you need to know about it: In order to provide you with the most attentive care, I utilize a service that transcribes our conversation, ensuring every detail is documented. If at any point you'd prefer not to use this service, please inform me, and I will disable it.”
Frequently Asked Patient Questions
What is the purpose of using AI / Synthpop?
By transcribing our conversations, Synthpop allows me to focus entirely on our discussion without the distraction of manual note-taking. This ensures that I can be more present and attentive during our consultations.
How secure is the encryption used to protect my recorded conversations?
Synthpop adheres to HIPAA-compliant data storage and processing protocols to ensure the security of the transcribed conversation.
Will any third parties have access to my conversations or the AI-generated summaries?
No third parties will have access to your conversations or the summaries generated by the AI. Your personal health information will never be sold.
Can I access or review the transcriptions and AI-generated summaries?
You have the right to request access to both the audio recordings and the AI-generated summaries of our discussions.
How accurate is the AI-generated summary of our conversation?
Synthpop's AI summaries are designed to capture the essence of our conversation accurately. For added assurance, I review each summary to confirm its accuracy.
Can I request my information be deleted?
If you have specific concerns, we can facilitate a request to delete your information.
How do I revoke consent to use Synthpop in the future if I change my mind?
If you ever decide you'd prefer not to use Synthpop, kindly inform me, and we will immediately cease its use during your appointments.
What is the impact on my care if I choose not to use Synthpop during consultations?
Opting out of Synthpop will not compromise the quality of your care in any way. However, its use does enhance our consultations by allowing me to concentrate more on our discussion, and it helps ensure your insurance company receives complete information to support coverage.