What Role Does AI Play in Thyroid Health?

This transcript has been edited for clarity.

Kaniksha Desai, MD: Welcome to the Thyroid Stimulating Podcast. I’m your host, Dr Kaniksha Desai. This podcast was created in partnership with the American Thyroid Association (ATA) to discuss up-to-date diagnosis and management of a wide array of thyroid diseases.

Today, for the first episode of 2025, we are diving into an exciting and transformative topic: the role of artificial intelligence (AI) in the field of thyroidology. From improving diagnostic accuracy to personalizing treatment plans, AI is revolutionizing how we understand and manage thyroid disorders.

Whether you’re a clinician, a researcher, or simply curious about the future of thyroid care, this episode is packed with insights that you don’t want to miss. Let’s uncover how AI is shaping the future of thyroid health.

Our guest today is Dr Johnson Thomas, who is affiliated with Mercy Hospital, Springfield, and Missouri State University. Dr Thomas’s interests include the application of AI in thyroidology, particularly in enhancing the risk stratification of thyroid nodules. His research also includes the development of AI models aimed at improving the accuracy of thyroid nodule diagnosis and thereby reducing subjectivity and enhancing malignancy prediction.

Additionally, he has contributed to studies on the efficacy of ultrasound-guided laser ablation for thyroid nodules. Lastly, he is the recipient of the Kenneth Simcic Award for academic excellence in endocrinology and the Researcher of the Year award, both from Mercy ministry. Thank you so much for joining us today.

Johnson Thomas, MD: Thank you, Dr Desai. It’s a pleasure to be here.

Desai: Tell me what got you interested in AI.

Thomas: When I started my career as an endocrinologist a few years ago, I was doing about 300 biopsies per year for the first few years. To be precise, 87% of these nodules came back as benign, so I thought that surely there should be a better way to do this.

I had been programming since sixth grade, so I thought maybe I could use AI to solve this problem. Back in 2015 or 2016, I started collecting data, mainly about thyroid nodules: whether they were hypoechoic, had irregular margins, and so on.

I then created a model to predict thyroid cancer from that. Think of it like the FRAX score for osteoporosis. You put some stuff in and it’ll spit out the probability of cancer. We presented this at an ATA meeting back in 2017. As you can imagine, there is a large amount of subjectivity in that someone has to look at these images and see whether the nodule is hypoechoic or very hypoechoic.
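
For a concrete picture of this kind of FRAX-style calculator, here is a minimal sketch: a logistic regression over coded ultrasound features that outputs a malignancy probability. The features, training data, and output are invented for illustration and are not Dr Thomas's published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: each row codes one nodule's ultrasound features as
# [hypoechoic, very_hypoechoic, irregular_margins, microcalcifications,
#  taller_than_wide], with 1 meaning the feature is present.
X_train = np.array([
    [1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
])
y_train = np.array([1, 0, 1, 0, 0, 0])  # 1 = malignant on final pathology

model = LogisticRegression().fit(X_train, y_train)

# A new nodule: hypoechoic with irregular margins, nothing else.
new_nodule = np.array([[1, 0, 1, 0, 0]])
probability = model.predict_proba(new_nodule)[0, 1]
print(f"Estimated malignancy probability: {probability:.0%}")
```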

Then we decided to remove the human interpretation step entirely and collect ultrasound images to create a convolutional neural network, which would take images of nodules directly and give a prediction. The next problem was, how do you trust the predictions from these AI models?
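
A convolutional neural network of the kind he describes can be sketched as below: it takes a grayscale ultrasound crop of a nodule and returns a malignancy probability. The architecture, image size, and layer widths are arbitrary illustrative choices, not the actual model.

```python
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two small convolution/pooling stages extract image features
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
        )
        # A single linear layer maps the features to one malignancy logit
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One fake 128x128 grayscale nodule crop (batch size 1); the network is
# untrained, so the output is meaningless, but the pipeline runs.
crop = torch.randn(1, 1, 128, 128)
prob = torch.sigmoid(NoduleCNN()(crop))
print(f"Predicted malignancy probability: {prob.item():.2f}")
```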

If you ask any thyroid expert, they look at a thyroid nodule ultrasound and they say, yeah, that looks benign or that looks cancerous. They don’t usually go through the Thyroid Imaging Reporting and Data System (TI-RADS) or ATA guidelines; they just have it in their mind.

We asked, what if we could show similar images and their diagnosis? We developed a model for that and presented at a 2019 ATA meeting. After presenting that, I got many more opportunities to work with experts in the field and to collaborate internationally.
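
The "similar images with their diagnoses" idea amounts to a nearest-neighbor search over image embeddings. In a real system the embeddings would come from a trained network; in this sketch they are random vectors, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 archived nodules: embedding vectors plus their known diagnoses.
library = rng.normal(size=(500, 64))
library /= np.linalg.norm(library, axis=1, keepdims=True)
diagnoses = rng.choice(["benign", "malignant"], size=500)

# Embedding of the new nodule whose prediction we want to explain.
query = rng.normal(size=64)
query /= np.linalg.norm(query)

# Cosine similarity against the library; show the 5 closest matches.
scores = library @ query
top5 = np.argsort(scores)[-5:][::-1]
for i in top5:
    print(f"nodule {i}: similarity {scores[i]:.2f}, diagnosis {diagnoses[i]}")
```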

My interest in AI increased and I started doing more projects. It was really my interest in thyroid nodules, and in making their workup more efficient, that got me into AI.

Desai: I’m happy to hear that thyroid is the reason you went into AI. What do you think is the biggest area where AI has made strides in the field of thyroidology? Is it just the ultrasound imaging?

Thomas: When I talk to people about using AI in thyroid care, they say, oh, that’s coming, we’re thinking about it. But most people are already using AI in their practice, knowingly or unknowingly, if they’re using certain molecular markers. Most molecular marker predictions are based on AI, and we have been using them for quite some time. As endocrinologists, we also rely on insulin pumps every day, and those are driven by AI or similar algorithms.

Molecular markers have already shown that AI can help us reduce unnecessary surgeries. Now, we are seeing more radiology applications, like using AI for risk-stratifying thyroid nodules from ultrasound images. There are already US Food and Drug Administration (FDA)-approved algorithms. I believe there are six FDA-cleared algorithms that you can use for this right now.

The first application got FDA clearance back in 2013. Not everyone is using it, but in the future, we’ll see more applications in these fields.

Desai: Can AI tools reliably classify thyroid nodules using systems like TI-RADS or the ATA classification? How do they help avoid unnecessary biopsies?

Thomas: First, I look at AI as a second set of eyes. It definitely reduces subjectivity, it standardizes reports, and hopefully it reduces biopsies. Can it reliably classify thyroid nodules? Well, there are several nuances.

There are studies showing that AI can be as good as or better than a radiologist in stratifying thyroid nodules. A study by Buda and colleagues in 2019 showed that it’s as good as expert radiologists.

I do have concerns, and there are some nuances. What I have seen is that AI is really good at finding papillary thyroid cancers because they have those classic features, such as microcalcifications and irregular borders. AI is not so good at identifying follicular or follicular-patterned lesions; they look kind of benign. If you’re going for malignancy prediction, AI may not work well on those kinds of nodules.

For TI-RADS, there are studies showing that they’re actually pretty good. With most of these AI platforms, you can also edit the output. For example, if it says “this nodule has irregular margins” and you think the margin is more ill defined, you can change it. Not all of the detected features may be correct, but you have the option to edit them before you finalize the report.
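
To show how such editable features turn into a score, here is a sketch of an ACR TI-RADS point calculator. The point values and level cutoffs follow the published ACR TI-RADS lexicon; the feature spellings and function are illustrative choices, and for simplicity only one echogenic focus type is counted per nodule (the real lexicon sums all foci present).

```python
# Point values per the published ACR TI-RADS lexicon.
POINTS = {
    "composition":    {"cystic": 0, "spongiform": 0,
                       "mixed_cystic_solid": 1, "solid": 2},
    "echogenicity":   {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                       "hypoechoic": 2, "very_hypoechoic": 3},
    "shape":          {"wider_than_tall": 0, "taller_than_wide": 3},
    "margin":         {"smooth": 0, "ill_defined": 0,
                       "lobulated_or_irregular": 2,
                       "extrathyroidal_extension": 3},
    "echogenic_foci": {"none_or_comet_tail": 0, "macrocalcifications": 1,
                       "peripheral_rim": 2, "punctate": 3},
}

def tirads_level(features: dict) -> str:
    """Map summed points to a TR level (the ACR chart bins totals as
    0 -> TR1, 2 -> TR2, 3 -> TR3, 4-6 -> TR4, 7+ -> TR5)."""
    total = sum(POINTS[name][value] for name, value in features.items())
    if total <= 1:
        return f"TR1, benign ({total} points)"
    if total == 2:
        return f"TR2, not suspicious ({total} points)"
    if total == 3:
        return f"TR3, mildly suspicious ({total} points)"
    if total <= 6:
        return f"TR4, moderately suspicious ({total} points)"
    return f"TR5, highly suspicious ({total} points)"

# A solid, hypoechoic nodule with irregular margins and punctate foci:
# 2 + 2 + 0 + 2 + 3 = 9 points, so TR5.
nodule = {"composition": "solid", "echogenicity": "hypoechoic",
          "shape": "wider_than_tall", "margin": "lobulated_or_irregular",
          "echogenic_foci": "punctate"}
print(tirads_level(nodule))
```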

The other thing is there are not many studies on identifying medullary thyroid cancer or anaplastic thyroid cancer. We haven’t seen anything on anaplastic because that could look like a normal, big thyroid with irregular stuff in it. We have to be cognizant about the potential pitfalls of using AI from ultrasound images.

They are also not really good at describing mixed cystic nodules. If you look at a mixed cystic nodule, one part is solid and one part is cystic. If you do a full sweep of the thyroid, at one end, it might be mostly solid, and the other end might be cystic — and the middle, let’s say 50/50. Those are the ones that sometimes AI struggles to classify.

The other thing is that these systems are trained on certain ultrasound machines. They are trained on the most common ones, but if you’re using an ultrasound machine that is not very compatible with the training data, you may not get quality output.

Another thing is that when a drug is approved, most of the time we have head-to-head comparisons, such as how much weight you lost on one medicine vs the other, but there are no head-to-head comparisons or prospective analyses by independent researchers for these AI algorithms. That’s something I would like to see in the future: how good these algorithms are in the real world, and a comparison between the different algorithms.

Desai: Thank you for sharing some of the drawbacks. It appears that the AI algorithms are pretty good, but it’s still great to have an endocrinologist or radiologist on the back end finalizing the report. Can you tell me a bit about how healthcare providers can practically integrate AI-powered ultrasound interpretations into their workflow?

Thomas: There are multiple ways of using it, but the most efficient way is if you have an AI system integrated in your ultrasound. While you’re scanning a patient, you freeze the image of the nodule, take two orthogonal views, and just press a button; you’ll get the TI-RADS scoring and you’ll know whether to biopsy or not.

You can instantaneously show the patient that “this is what we got from the AI system but this is what I think.”

This is currently available in the United States with certain machines. You can buy it as an add-on. The other option is that you do the scanning, you get the images through your PACS, you view the nodule you want to analyze, pick a slice you like, and send it to the AI. It will send you the report back, and you can copy and paste it into your radiology report.

There was a study published in China where a robot would scan your thyroid and give you a report.

Desai: Without any humans?

Thomas: The order is put into the workflow for a thyroid ultrasound. The patient goes to a room, the robot comes in, scans the neck, and then you get a report. We are not there yet, but that’s in the research phase.

Desai: We talked about nodules, but how about thyroid cancer? Can AI predict lymph node metastases or distant metastases?

Thomas: Absolutely. Multiple studies show that we can look at the thyroid ultrasound images or incorporate other features, like whether the patient had a BRAF mutation or received radiation, and then predict the chance of lymph node metastasis.

The problem with that is that in papillary thyroid cancer, if you take out all the lymph nodes in the neck, up to 70% of patients might have micrometastases. Whether that is relevant for clinical management remains to be seen. Predicting clinically relevant lymph nodes is very helpful; that can direct the surgeon to look, and maybe to get further imaging, to make those decisions.

If you send a sample for molecular markers and it comes back as suspicious, some of them actually give you the probability of metastasis, whether the risk for lymph node metastasis is low or high, and the chance of recurrence.

Those are all based on AI algorithms. We are using those reports now in the clinic. Also, once you perform surgery and stage the patient, what’s the chance of recurrence? We have guidelines from the ATA where we go through the algorithm to figure out the probability of this coming back again.

Dr Grani and his group in Italy came up with an AI algorithm to predict the chance of recurrence. That can help us decide whether to be more aggressive initially, for example in looking for lymph nodes. These things are clinically important because, right now, we go with a set of rules, and those rules don’t incorporate all the features that might be relevant. In Italy, they have a large database that many thyroid centers participate in; I wish we had something similar in the United States. They are creating these models to help manage these patients more efficiently.

We don’t want to overtreat, but also we don’t want to undertreat. As you know, sometimes a persistent thyroglobulin or a tumor marker can affect your sleep. If we can make that go away in the first round and give peace of mind to the patient, I think that’s really important.

Desai: The other application that I can see is that we have so many thyroid cancer survivors and a limited number of endocrinologists. If we can risk-stratify which thyroid cancer survivors are going to do great without a recurrence, that’s one group that can be considered cured and discharged from clinic, vs somebody who’s at high risk for recurrence, whom we’re going to hang onto and monitor more frequently. At the same time, if we tell a patient that their risk for recurrence based on this algorithm is very low, it gives them peace of mind too, right?

I wanted to kind of switch over. We talked about the nodules and the cancer for the providers providing the care, but let’s talk about patient education a little bit and how we can use AI for that.

How do conversational tools like ChatGPT help patients understand their thyroid disease, the diagnosis, and the treatment options? Is the information any good?

Thomas: When Google came out, the worry was, well, people are going to look up their symptoms and come up with differential diagnoses, and how are we going to deal with all of that? We still see this with patients, but I think an empowered patient who knows more about their disease is very important.

I have patients coming in with differential diagnoses from ChatGPT. We did a few studies on how ChatGPT, Gemini, and other chatbots respond to thyroid cancer-related questions, and we had a few sets of physicians evaluate the responses. It’s currently under review, so hopefully that will come out soon.

I can tell you that, because these chatbots are trained on the whole corpus of the internet, things can come out in the chats that are not medically accurate, or fringe theories that most physicians would not agree with.

We have to educate patients, just as we tell them that Google is a great resource for research but that we need to make sure what they find is relevant to their healthcare. Let’s look at your data and your medical records to make sure this is applicable to you. That’s how I usually handle it.

People are going to use these chatbots more and more, and they’re getting better. The other thing is that this is the worst they will ever be. Some of the frontier models, which are not available to everyone, are actually pretty good at solving medical questions.

I asked a very tough endocrine question, one that would take even an endocrinologist some time to answer. Most of the models didn’t give the right answer. One model gave a really great answer, which was the right one, along with the reasoning behind it. These models are not currently available publicly, but in the future they will be available to patients. Sometimes it’s also helpful for us physicians to generate a differential diagnosis, but these tools are not approved for diagnosis or management.

Desai: I know you briefly touched on it, but what do you think some of the misinformation risks are? Is it just anxiety related, or do you think there could be real issues with misinformation? How can you mitigate that?

Thomas: I’ll give you a real-life example. I had a patient with a thyroid ultrasound. Nowadays, patients sometimes get the results before we even see the report. One of the nodules came back as TI-RADS 5, and the patient went to one of the chatbots and typed in, “I have this TI-RADS 5 nodule; what is it? What’s next?”

Of course, it said it’s a highly suspicious nodule that needs to be biopsied. When I reviewed that ultrasound with the patient, it was not a TI-RADS 5 nodule. It was actually a benign nodule, because there is subjectivity in this interpretation.

The patient then got really anxious and started calling our office, saying, “It’s cancer and you’re booked out; we need to do a biopsy soon.” That creates unnecessary anxiety, but the same is true with googling. It’s not any different.

The other issue is misinformation and hallucination. Sometimes the AI models will hallucinate things that are not relevant to your disease and include them. If you read that and believe it, you are believing misinformation; some of what comes out is simply wrong. These models are being retrained to reduce that risk. Recently, Google said that, yes, you can use these chatbots in high-risk areas provided there is human expert supervision.

That’s how we should approach this: Patients can use these tools, but they should bring the results back to us. Also, we need to do more to educate patients that these answers might seem very convincing. If you enter five symptoms, it’ll give you a beautiful diagnosis. Sometimes that’s just too good to be true.

Desai: You need to work with the patient to determine whether what the chatbot is saying is actually accurate. Let’s talk a little bit about AI for chart summaries and note taking. This is of interest to many people, even outside of endocrinology or the thyroid world. How can AI tools help thyroidologists summarize complex patient charts and histories more efficiently?

Thomas: I think that will definitely help with burnout. Usually, I see about 18 to 20 patients a day. The night before, I look at the charts to figure out why I’m seeing each patient and what their recent labs show. If there are thyroid nodules, I go through all the images to see which nodule needs to be biopsied.

As you can imagine, this is a very time-consuming process that might take about 2 to 2.5 hours a day. What if AI can summarize that for you? Instead of taking 2 to 2.5 hours, maybe you can do this in 30 or 40 minutes, or just before you see the patient.

Our institution is piloting a program that can do that summarization. If you’re seeing a patient for hypothyroidism, it will summarize recent labs, what medication the patient is taking, and relevant information from the chart. It has not received rave reviews from the people who are using it and have given me feedback, but it’s getting better.

The same is true with transcribing. Right now, when I see a patient, I have an AI tool on my phone. After getting permission from the patient, I turn it on in front of both of us, and then we start talking about their medical problems. Many times, patients like this because I’m not staring at a screen and typing while I’m talking to them.

This makes it much easier, and patients have told us in our reviews that they like this better because the doctor is actually looking at the patient and listening to them. We are not worried about finishing charting by the time we are finished seeing the patient. At the end of the visit, I’m able to get a summary and I can generate an after-visit summary and a plan.

The other thing that I find interesting is that, for example, if I’m seeing a patient for hypothyroidism, they might also talk about weight loss, hair loss, and so on. Many times we don’t document those things, but because the AI is listening to all of it, the program will convert that into a good assessment and plan that includes everything we discussed.

You will have to edit a little bit, but most of the time it makes note creation an easy process. I hope that in the future this gets better and will reduce our burnout.

Desai: That sounds wonderful to leave a patient encounter and have your note done for you, and all you have to do is sign it, right?

Do we have anything that does that for new patients, something that can summarize charts for a new patient referral, including outside records, and streamline that kind of documentation process?

Thomas: Not for a new patient from an outside center with paper charts, but if they are in the same system, yes, it will do the same thing.

The same thing is happening in the ER; these tools will summarize the current medical history. You can also use them for signoffs, and there were a couple of studies showing that AI-generated signoffs from the ER to inpatient physicians were more comprehensive than regular signoffs.

It can summarize, even for new patients, if the records are available in your electronic medical record. Right now, if it’s a paper-chart referral, I don’t know of any system that can do that for you. Hopefully, in the future, we will be able to digitize those and summarize them.

A major caveat here is that AI can still hallucinate. It might tell you that your patient is on a blood thinner even though they may not be. I usually tell my colleagues to trust but verify, and make sure this is accurate before you act on this, especially for any major decision. Make sure the data that you have are accurate.

Desai: It’s so important to verify. Let’s talk about the in-basket a little bit. How do you see AI answering patient questions that might come in? Can AI do that?

Thomas: As you know, many healthcare systems in the United States are currently using this. If you see 20 patients a day and you’ve been practicing for quite some time, I would say the number of in-basket messages is around 40 to 50 a day.

Each reply should take 30 seconds to a minute, or maybe up to 2 minutes in certain cases, and that adds up. We have systems that autogenerate a reply for the patient, and then you can make modifications and send it. That will definitely save time. Again, you have to verify the answer that the AI generated. I have physician colleagues who have been using it and they really like it.

One of the studies also showed that AI is more compassionate when responding to patients. Sometimes we are busy seeing patients and we just type a short reply.

AI might explain things further, saying, “I looked at the trend; your TSH is actually within normal limits. We looked at other things and your dose is correct, so you can continue it. If you have any questions, let me know,” or “Let’s do the blood test before the next visit.” It’s a more comprehensive answer than just, “Yes, everything is good. See you next time.”

When the patient’s questions raised concerns, the AI also tried to soothe the patient in its reply. Those are good things, but always make sure the answer is clinically correct, based on your judgment.

As I said, I think we are just beginning. It’s going to get better. Hopefully, it will be like an autopilot setting, where 90% of the time, you just have to click “send” and maybe 10% you have to edit.

Desai: Hopefully this will help curb those patient messages or get back to the patients faster so they get a response and they’re happy.

Do you use AI for writing medical appeals and medical necessity letters for getting images for thyroid cancer, getting Synthroid approved, or anything like that?

Thomas: I have done that. I started 2 years ago when it came out. It was reasonably good, but recently it has gotten really good.

It will pull up the guidelines and pick the correct ones. Let’s say you’re requesting a GLP-1 agonist. You just have to give it a little information, such as that the patient has diabetes, cardiovascular disease, or other comorbidities, and it will come up with a pretty good letter with a solid rationale, data, and, mainly, guidelines backing up why that medication should be used.

Just like physicians are using this, the other end is also using AI these days. We joke around that it’s going to be the battle of the bots in the future. Your AI will come up with a letter, their AI will come up with a different letter, and they fight each other.

Desai: Are you still getting all your appeals going through, or are they getting denied using other AI?

Thomas: Right now it’s working. The other thing is, if you’re doing a peer-to-peer review, you can also pull from the guidelines, saying, this is the reason I want to do this, and tell the person you’re talking to that it is based on the guideline: the patient has X, Y, and Z, and this is what the guideline says.

“If you have a reference that says otherwise, please share it with me. Otherwise, this is what we have.” You have better points to substantiate your claim, or you might find out that you cannot use, let’s say, Zometa [zoledronic acid] when the calcium is 10.6 and the upper limit is 10.4.

Desai: Let’s take a step back and talk about research. I know this is somewhat more controversial, but how does AI work in literature reviews and medical research?

Thomas: I think that’s an exciting field. For one, it saves a large amount of time. I use Perplexity frequently to find relevant references. Sometimes when you click on a reference, though, it goes to a nonexistent site; there is no such study. You always have to make sure that the studies the AI is citing are real, because it might make things up.

The other thing is that Google released Deep Research. If you input a specific question, let’s say the application of molecular markers in thyroid nodules, it will generate a research plan and say, I’m going to research these things: how it reduces biopsies, how it reduces the cost of surgery, and so on.

You read through that, you accept it, and then the AI works for, let’s say, 2 minutes to complete all the steps. It then generates a 2000-word article for you, tailored specifically to your questions, with graphs, charts, and references.

If you are doing a literature review, it’s a good starting point. The AI has already generated all of this for you, with references, and you can then tailor it more toward your research question. I think it’s really a good tool for doing literature reviews and figuring out what has been done recently. Most of the chatbots, if you allow internet access, can find recent research and summarize it for you.

Another thing people are using AI chatbots for is data analysis. Be very careful in doing this because AI is very convincing, and it might come up with a statistical analysis that looks really good on paper. If you talk to a statistician, they might say, well, that’s not the right statistical test to use because of X, Y, and Z. You could use that to get a preliminary idea about what your data are showing, and you could refine your research question based on that.

I’m not a native English speaker, so the way I write English compared with another person might be very different. It could help with copyediting and it can also format your article for the specific journal where you’re planning to submit.

Keep in mind that most journals have a policy about using AI in papers. Make sure you adhere to it. Most journals know that people use AI, but they want you to disclose when you use it; that’s always good practice. If you say you wrote everything yourself and there are hallucinations, that’s not good.

There was a study showing that about 10% to 15% of the articles published last year actually contain the AI chatbot’s reply verbatim, without editing. That’s really bad practice. Please don’t do that; edit your article before you submit it.

Desai: Are you worried at all that, if you do your data analysis using AI, your data are secure?

Thomas: That’s a really important question. The same concern applies when you input any patient’s medical information: They might store it, and you might be violating your institution’s guidelines or other regulations. There is no HIPAA privacy, and if you look at the AI chatbots’ fair-use statements, they actually tell you not to use them for medical diagnosis. If there is an error, they can point out that it is clearly stated these tools are not for unsupervised medical use.

The data that you input could be stored in memory and used for training. Certain companies have specifically said that your data will not be used, but then again, you have to make sure you adhere to the standards.

The same thing is true for using this for data analysis. This might be stored. Most journals also recommend not using these technologies when you’re reviewing papers, because if you upload a paper to this chatbot, it might store it before publication. Even though it is very tempting to get help from AI, be careful about what you input and what will be stored.

Be very mindful about data privacy and protection.

Desai: That is so important for our listeners. Make sure your data are HIPAA compliant and secure. AI does store data. Let’s talk about where this is going to go in the future. How do you see AI advancing in its ability to detect and analyze thyroid nodules, as well as thyroid cancer, in the future?

Thomas: I think in the future we are going to use more and more AI, for better or worse. It will definitely reduce the time needed to read the whole thyroid ultrasound. Probably it will be more like an automated analysis.

You get an ultrasound and AI will probably spit out the whole report for you. You’ll make sure this is accurate, you’ll compare the images, and you sign off on it. That’s one definite possibility we can anticipate in the near future.

If we extrapolate that, just as I described, you’d order an ultrasound, a robot comes in and does the ultrasound for you, and you get the full report afterward. You decide whether to biopsy or not based on those images, and the AI analyzes the patient’s medical history and labs and gives you a recommendation.

You do a biopsy and let’s say it came back as AUS [atypia of undetermined significance]. The AI might be able to look at the slides and tell you, “This looks like cytologic atypia rather than nuclear atypia. With all the stuff we have and looking at the ultrasound history, this is very likely benign. I would recommend repeating an ultrasound in a year.”

The AI could also say, based on all these things, “I’m seeing some concerning features in the cytology slides. You should probably think about getting molecular markers to see if it has any aggressive mutations and go from there. You could also tell the surgeon to get more imaging before doing surgery.”

Once that’s done, it will also tell you, “This is the chance of recurrence, consider radioactive iodine treatment (or not).” I think we will see AI involved in all parts of diagnosis and treatment of thyroid nodules and thyroid cancer, but that is in the distant future. In the not-so-distant future, we will be able to use these in our clinics and hospitals for risk-stratifying thyroid nodules.

Desai: Do you see it also being used in hypo- and hyperthyroidism, or abnormal thyroid function tests that are difficult to interpret?

Thomas: There are some studies that looked at smartwatch data to see whether we can predict hypothyroidism based on heart rate variability. One study showed that it is possible. Can we detect patients switching from hyperthyroidism to hypothyroidism while on methimazole treatment? That would be great, right? You only see the patient, let’s say, every 6 months, and you do a blood test at 3 months in between.

What if you start the patient on methimazole, let’s say 20 mg, and 1 month out the patient is going into hypothyroidism? She has some symptoms, but it’s not too bad. By the time you check at 3 months, it might be severe hypothyroidism.

I’m just giving you a hypothetical example, but using this kind of algorithm, the AI would detect that the patient is going from hyperthyroidism to hypothyroidism and would probably recommend getting labs before the 3-month mark so we can downtitrate the methimazole dose.
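
To make the hypothetical concrete, a crude version of such a monitoring rule might look like the sketch below. The data and the threshold are invented, and resting heart rate stands in for the richer heart rate variability features a real model would use.

```python
# Invented example: daily resting heart rate (bpm) after starting
# methimazole, with 96 bpm as the pre-treatment hyperthyroid baseline.
baseline_hr = 96
daily_resting_hr = [95, 93, 90, 84, 78, 72, 68, 66, 64, 63]

# Average the most recent week to smooth day-to-day noise.
recent_week = daily_resting_hr[-7:]
avg_recent = sum(recent_week) / len(recent_week)

# Hypothetical threshold: a sustained drop of more than 25% below the
# hyperthyroid baseline prompts labs before the scheduled 3-month check.
if avg_recent < 0.75 * baseline_hr:
    print("Sustained heart rate drop: consider early TSH/free T4 "
          "and a methimazole dose review")
```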

I think AI is often overkill for interpreting thyroid values; most of the time it’s straightforward. There was another study that looked at whether AI can predict what kind of hyperthyroidism a patient has, whether it’s from Graves disease, thyroid inflammation, or a toxic nodule, in an acute care setting where they don’t have many resources and it’s going to take some time to get the Graves disease antibody results.

In that study, with the ultrasound information and the lab information, the AI was able to tell with high certainty which of these the patient had. If it was thyroiditis, they let the patient go; if it was one of the other two, they could start treatment.

Again, if you have immediate follow-up, I don’t think it will make a big difference. Those are the areas where AI is being used. To me, most of this is straightforward: Get your labs, talk to your patients, and it’s easy to interpret. But it could be helpful for providers who are less experienced or who need a second opinion, with a rationale provided as to why the AI decided what it did. In that case, I can understand it; but most of the time, it’s straightforward.

Desai: We’ve talked about a large amount of great information today. What are three takeaway points you want our listeners to know about AI use in the field of thyroid diseases?

Thomas: Most studies have shown that AI can reduce subjectivity in thyroid nodule risk stratification and reduce unnecessary biopsies, by up to 50% in many cases. Keep in mind that by avoiding these unnecessary biopsies, we reduce the physical and mental burden on the patient, and the economic burden on the healthcare system, too.

The other thing is that, even though I am a technology optimist, there is a large amount of hype in AI — not only in thyroid AI, but also generally in medical AI. Try it out yourself, if you can, before you decide to buy or to invest in AI for your practice. Sometimes incorporating AI into your workflow might increase the time for what you’re doing right now, and it may not make a big difference in return on investment. Make sure you assess it yourself rather than going for the hype.

AI in thyroid care is advancing quickly, with new tools and research emerging regularly. I would say staying updated on these developments is essential, as they hold the potential to further enhance diagnosis, treatment, and patient outcomes.

Desai: I really enjoyed talking about what AI has been able to do in the past couple of years and where it’s going to be hopefully in the next 5 to 10 years. I think it’s going to change the way we practice medicine. It already has.

Thank you again for joining us today. To all our listeners, please stay tuned for our next episode, where we’re going to talk about using supplements to support your thyroid health — whether that’s a good idea or a bad idea. Thank you.

Thomas: Thank you, Dr Desai. It was a pleasure talking to you.
