
Issue 1

MESSAGE FROM THE ECC CO-CHAIRS: 

Dear SABI Family,

Welcome to the very first edition of our Early Career Committee newsletter! As co-chairs of the Society of Advanced Body Imaging Early Career Committee (SABI-ECC), we are both honored and excited to share this new chapter with you. Our goal is to create a vibrant platform that not only keeps you informed but also fosters a supportive and collaborative environment for all early career professionals in our field.

This newsletter is more than just an update—it's a celebration of the incredible achievements and milestones of our community. We’re excited to share insightful articles tailored specifically for trainees by the JCAT editor, inspiring success stories from our mentors, and details about upcoming events. You’ll also find valuable resources, editorials, and educational cases. We hope to provide you with the tools and inspiration to thrive in your early career journey.

We want to extend a heartfelt invitation to all trainees and early career professionals to join us in shaping the future of SABI-ECC. Your unique perspectives and contributions are what make our community so special. Whether you are looking for mentorship, eager to share your own experiences, or simply seeking to connect with like-minded peers, there is a place for you here.

Thank you for being part of this exciting journey. We look forward to growing together and celebrating each of our successes along the way.

Warm regards,
Anu and Melina

Anugayathri Jawahar, MD
Co-Chair, Early Career Committee
SABI-ECC

Melina Hosseiny, MD
Co-Chair, Early Career Committee
SABI-ECC

INTRODUCTION TO ECC MEMBERS:

Co-Chairs

Anugayathri Jawahar, MD
Northwestern University

Melina Hosseiny, MD
UC San Diego

Vice Chairs

Robert Rasmussen, MD
MGH

Soheil Kooraki, MD
UCLA

Innovation Liaisons

Harpreet Singh Grewal, MD
Florida State University

Rita Maria Lahoud, MD
Tufts

Pouria Rouzrokh, MD, MPH
Mayo Clinic

Communication Liaisons


Fereshteh Yazdanpanah, MD, MBA
UPenn

Kamyar Ghabili, MD
Penn State

Research Liaisons

Surbhi Raichandani, MD
Emory University

Bahar Ataeinia, MD, MPH
UPenn

Hafsa Babar, MD
Beth Israel Deaconess

Education Liaisons

Ashleesha Udare, MD
Thomas Jefferson

Amir Imanzadeh, MD
UC Irvine

Shiva Singh, MBBS
UAMS

Medical Student Liaison

Bradly Kasper, MS2
VCOM

ECC MEMBER SPOTLIGHT

Surbhi Raichandani, MD
Academic title: Assistant Professor, Emory University

How long have you been a member of SABI?

I’ve been a member of SABI since 2019, back when it was known as SCBT-MR. I first joined after getting an opportunity to present an exciting talk in Denver, Colorado, as a first-year radiology resident, on simplifying the MR diagnosis of cardiomyopathy through an algorithmic approach. It’s been an incredible journey since then!

Introduction:

I completed my fellowship in body MRI at Stanford University and am now thrilled to continue my academic career as an Assistant Professor in the Abdominal Division of the Department of Radiology at Emory University. My clinical and research focus includes prostate and pelvic imaging, hepatobiliary and liver imaging, with a budding interest in rectal cancer. I also have a keen interest in informatics and education.

What is your favorite part of being a member of SABI?

My favorite part of being a member of SABI is the networking and how down-to-earth everyone is. I’ve connected with amazing researchers and clinicians here, and I continue to learn a lot from their many publications and strong academic reputation. I also met fantastic folks in the ECC who I’m lucky to be working with. It's a very close-knit, great group.

What (or who) motivated you to join SABI?

My upper-level resident motivated me to join SABI.

What activities of SABI do you like?

I enjoy being part of the ECC, contributing to the many initiatives of SABI, and working with amazing people. It's incredibly fulfilling to play a role, no matter how small, in shaping and supporting this impactful community.

What are your hobbies or favorite activities when you have time?

I love photography, traveling, and exploring exciting new places. I carry my trusty Fuji X-T30 with a pancake TT Artisan 25mm lens everywhere I go; it's practically attached to my hip and brings me joy in every shot.

What’s the next place on your travel bucket list?

Iceland. I want to see the beautiful horses and jump in the Blue Lagoon. I'm also excited about exploring the stunning landscapes, hiking the glaciers, and witnessing the splendors of the Northern Lights (without the need for a major coronal mass ejection this time!).

What’s one item you can’t live without?

My Kindle. It's perfect for keeping me company with a good book during long flights and quiet moments between adventures.

What’s something about you (a fun fact) that not many people know?

I can tie a shoelace in under 5 seconds, and I am learning to play the kalimba, which is my new favorite musical instrument!

JCAT'S RECENT ISSUE HIGHLIGHTS

It is great to be a part of this inaugural ECC newsletter. JCAT, founded in 1977, is the official journal of SABI. It covers the full spectrum of advanced imaging, from MR to CT, from PET/MR to ultrasound. It is also a general radiology journal, covering the entire range of specialties, including neuroradiology, and serves as the bridge where physics meets clinical practice. We are the “how to” journal, the technical journal, where advances across the spectrum of radiology are published. We encourage submissions from SABI members, and each year, at the annual meeting, we award the best article published in the past year that included SABI authorship.

Below are the three articles voted by our editors as the best of the Jul/Aug 2024 issue, which also has a remarkable guest section on neuroendocrine tumors.

Case of the Issue


Contributed by Soheil Kooraki, MD
Radiologist, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles (UCLA), CA, USA

A 38-year-old woman with a prior history of pneumothorax presented with shortness of breath.

Non-contrast chest CT scan: several thin-walled parenchymal cysts of variable size in the bilateral lower lobes.

Contrast-enhanced abdominal CT scan: a round, well-circumscribed, enhancing exophytic mass in the left kidney with a central low-attenuation component (pathologically confirmed as a low-grade oncocytic tumor).

Reveal the Diagnosis Here 


Editorial: 

Understanding Hallucinations in Large Language Models: An Introduction for Radiologists

Pouria Rouzrokh, MD, MPH, MHPE
Research Associate, Mayo Clinic Artificial Intelligence Laboratory, Mayo Clinic, MN, USA

Introduction

Since the advent of ChatGPT, which stunned the world with its ability to mimic human conversation, the term "large language models" (LLMs) has become a focal point in various fields, including medicine (1). Radiologists, along with other medical experts, have begun exploring the potential of these models, recognizing their capacity to transform workflows and enhance decision-making processes. While ChatGPT remains the most prominent example, the landscape is now populated with numerous other LLMs, both flagship and smaller ones, each bringing its own set of capabilities. However, despite their impressive abilities and the excitement they generate, these models share a critical vulnerability: the tendency to produce outputs that are not grounded in reality. This phenomenon, known as "hallucination," is particularly concerning in fields like radiology, where inaccurate information can have serious, even life-threatening consequences (2). Whether interpreting imaging findings described for a scan or seeking evidence-based answers through LLM-driven chatbots, radiologists may encounter responses that, while confident and polished, may not be factually correct. This underscores the urgent need for radiologists to understand why hallucinations occur in LLMs, how they can potentially disrupt the clinical workflow, and what strategies can be employed to mitigate these risks.

The Basics of Large Language Models (LLMs)

To understand what makes LLMs like ChatGPT unique, it is essential to break down the term "GPT"—Generative Pre-trained Transformer. Each part of this acronym reveals a crucial aspect of how these models function and why they are particularly powerful.

Generative: This distinguishes LLMs from earlier artificial intelligence (AI) types known as discriminative models. While discriminative models excel at tasks like classifying images, such as identifying pneumonia in a chest X-ray, or segmenting regions in a scan, like isolating a tumor, generative models take a different approach. They are capable of creating new data points, whether those are words, images, or other forms of content. This generative capacity is what allows LLMs to produce coherent and contextually relevant text based on the prompts they receive.

Pre-trained: This refers to the extensive training process that these models undergo. Unlike earlier models that required specific, narrow datasets, LLMs are trained on vast amounts of data sourced from the internet, encompassing a wide array of topics, languages, and styles. This pre-training allows the models to build a comprehensive understanding of language patterns and relationships. However, the sheer volume and diversity of this data can also introduce challenges, as not all information is accurate or reliable. Even with efforts to curate and clean the data, the possibility of learning from imperfect or biased information remains, setting the stage for potential errors in the model's outputs.

Transformer: The "Transformer" architecture is what enables these models to process and generate text with remarkable fluency (3). Although the technical details of transformers are beyond the scope of this article, the key concept for radiologists to grasp is that these models are essentially next-token predictors. A token, in the context of LLMs, can be thought of as a word or a sub-word that the model uses to construct its responses. In simpler terms, these models generate text one token at a time, continually evaluating what has already been said to determine the most likely next token. This process happens rapidly and repeatedly, allowing the model to construct sentences and paragraphs that flow naturally. However, because each prediction is based on probabilities and influenced by the pre-existing data in the model's memory, there is always a risk that the model might generate a token that, while statistically plausible, does not align with reality—leading to what are called hallucinations.
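For readers who prefer to see this loop spelled out, the sketch below is a toy illustration (not how any real transformer is implemented): the "model" is just a hand-written table of next-token probabilities, and the loop repeatedly appends the most probable token to the growing context.

# Toy illustration of autoregressive next-token prediction.
# The "model" is a hand-written lookup table, not a real transformer;
# the token probabilities are invented for demonstration only.
toy_model = {
    "Lipomas are": {"benign": 0.9, "malignant": 0.1},
    "Lipomas are benign": {"fatty": 0.8, "cystic": 0.2},
    "Lipomas are benign fatty": {"tumors.": 0.95, "lesions.": 0.05},
}

context = "Lipomas are"
while context in toy_model:
    next_token_probs = toy_model[context]
    # Greedy decoding: always pick the single most probable next token.
    next_token = max(next_token_probs, key=next_token_probs.get)
    context = f"{context} {next_token}"

print(context)  # "Lipomas are benign fatty tumors."

Real models choose among tens of thousands of tokens at each step and often sample rather than pick greedily, which is where the randomness discussed below enters.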

Example: Hallucinations in Practice

Consider a radiology trainee who wants to understand the difference between a well-differentiated liposarcoma (often referred to as an atypical lipomatous tumor or ALT) and a benign lipoma. Instead of consulting traditional resources like textbooks or peer-reviewed articles, the trainee opts for a more expedient approach—asking a chatbot based on an LLM. The trainee types in the query, and the chatbot begins to generate its response.

As the chatbot constructs its answer, it draws upon vast amounts of pre-trained data. The process it uses to generate the response involves predicting one token, or word, at a time, based on the context provided by the previous tokens. At each step, the model evaluates numerous potential next tokens, each with a certain probability of being selected. Sometimes, multiple tokens may have nearly identical probabilities, introducing a degree of randomness into which token the model ultimately selects.

Imagine the chatbot generates the phrase, "Atypical lipomatous tumors, also known as well-differentiated liposarcomas, are generally locally…" At this point, the model needs to predict the next token. The correct word might be "aggressive," leading to the accurate statement "locally aggressive." However, due to the probabilities involved, the model might instead predict "less," leading to "locally less aggressive." While this phrase is not as obviously incorrect as labeling the tumor benign, it still significantly alters the meaning and could mislead the radiologist.

This deviation occurs not only because of the randomness in token selection but also due to the model's pre-training on vast and varied datasets. These datasets include both medical and non-medical sources, and the frequency with which certain word pairs—like "locally" and "less"—appear together in the training data might influence the model to choose a less accurate term. As a result, the model might generate an answer that, while statistically plausible, does not align with the precise medical understanding needed in radiology. This highlights the importance of recognizing how these models operate and why they might occasionally produce misleading or incorrect information—what is referred to as hallucinations.
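The snippet below makes the randomness concrete. The two candidate tokens and their probabilities are invented purely to illustrate the scenario above: when "aggressive" and "less" are assigned nearly equal probability, sampling-based decoding will sometimes emit the misleading one.

import random
from collections import Counter

# Hypothetical next-token distribution after the prefix
# "Atypical lipomatous tumors ... are generally locally".
# The numbers are invented for illustration only.
candidates = ["aggressive", "less"]
probabilities = [0.55, 0.45]

# Sample the next token many times to see how often each is chosen.
counts = Counter(
    random.choices(candidates, weights=probabilities, k=1)[0]
    for _ in range(1000)
)
print(counts)  # e.g. Counter({'aggressive': 556, 'less': 444})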

While verifying every piece of information generated by LLMs—by cross-checking each statement, link, or reference—might be the most accurate way to mitigate hallucinations, this approach is also incredibly tedious and time-consuming. Given the demands of radiology, it is worth exploring whether there are less labor-intensive methods that can still effectively reduce the risk of encountering misleading information. Fortunately, there are strategies that can help streamline this process. These approaches can be broadly categorized into two groups: prompt-based techniques, which focus on how questions and queries are framed, and more technical, code-based techniques, which involve adjustments at the algorithmic level.

Prompt-Based Techniques for Mitigating Hallucinations

Assigning a Persona

One effective strategy to reduce hallucinations in LLMs is to assign the chatbot a specific persona before posing the question. For instance, instead of directly asking, "What is the difference between a lipoma and an ALT?" the user might first tell the chatbot, "You are a radiologist," and then ask the question. This approach leverages the model's inherent process of next-token prediction, where it considers not only the query provided but also the context established by previous inputs. By assigning a role, such as that of a radiologist, the LLM is more likely to draw from its pre-trained medical knowledge rather than general or irrelevant information.

The significance of this technique lies in how the model retrieves and selects information. When the chatbot is told it is a radiologist, it considers this information in predicting the next tokens, subtly biasing the model to prioritize medical data related to radiology over other general knowledge. This can be particularly useful when dealing with nuanced or specialized topics. Moreover, the more specific and detailed the persona, the more likely the model will align its output with the relevant domain-specific information. For example, specifying that the chatbot is an "expert body radiologist with years of experience in interpreting lipomas and ALTs at a world-renowned institute" further narrows the scope, making it more probable that the model will generate a response rooted in expert-level radiology knowledge.

By enriching the chatbot's persona with detailed, domain-specific roles and contexts, it is possible to steer it to consult the most relevant segments of its knowledge base, thereby reducing the likelihood of hallucinations and increasing the accuracy of its responses. However, one should also note that assigning a persona to LLMs is a double-edged sword; recent research has shown that persona-assigning could increase implicit biases in the LLMs that are hard to detect and avoid (4).
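As a concrete illustration of persona assignment, the sketch below builds a chat-style prompt in the widely used system/user message layout. The persona wording is only an example, and the final print statement stands in for whatever model or API call is actually being used, which varies by service.

# Sketch of persona assignment using the common system/user message format.
# In practice, this message list would be passed to the chat model in use.
persona = (
    "You are an expert body radiologist with years of experience "
    "interpreting lipomas and atypical lipomatous tumors (ALTs) "
    "at a world-renowned institute."
)
question = "What is the difference between a lipoma and an ALT?"

messages = [
    {"role": "system", "content": persona},   # establishes the persona
    {"role": "user", "content": question},    # the actual query
]

for message in messages:
    print(f"{message['role'].upper()}: {message['content']}")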

Encouraging Reasoning Before Answering

Another effective technique for reducing hallucinations in LLMs is to encourage the model to reason before providing an answer. Instead of simply asking a question and expecting an immediate response, the user can guide the model to think through its reasoning process first. Techniques like "Chain of Thought" or "Tree of Thought" exemplify this approach (5,6), where the model is prompted to work through its reasoning step-by-step before arriving at a final answer. The underlying principle is to promote deliberate, sequential reasoning, which helps the model sift through its vast memory of pre-trained data more effectively and align its final answer with accurate and relevant knowledge.

It is essential to ask the model to output its reasoning first and then generate an answer based on that reasoning. If the reverse is done, there is a risk that it might generate an incorrect answer and then attempt to justify it rather than arrive at the correct conclusion through careful consideration. This issue is similar to the phenomenon observed when LLMs answer multiple-choice questions differently depending on the order of the choices presented (7). The tokens the model encounters first can heavily influence its final output, leading to variability in responses. By encouraging pre-answer reasoning, the likelihood that the model will select the most accurate tokens and deliver a more reliable response is increased.
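A minimal sketch of this idea is shown below: the prompt explicitly asks the model to write out its reasoning before committing to an answer. The wording is just one of many possible phrasings, and the print statement stands in for an actual call to a chat model.

# Sketch of a "reason first, answer second" prompt (chain-of-thought style).
# The instruction ordering matters: reasoning is requested *before* the answer.
question = "What is the difference between a lipoma and an ALT?"

prompt = (
    f"Question: {question}\n\n"
    "First, think through the relevant imaging and pathology features "
    "step by step under a heading 'Reasoning:'.\n"
    "Only after finishing the reasoning, state your conclusion under a "
    "heading 'Answer:'."
)

print(prompt)  # in practice this string would be sent to the chat model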

Providing a Reference Text

The final prompt-based strategy to mitigate hallucinations in LLMs is to provide the model with a specific reference text that might contain the answer to the queried question. For example, if a radiology trainee needs to differentiate between a lipoma and an ALT, and they know the answer could be buried within a complex reference, they can add this text to the LLM prompt. By instructing the model to generate an answer only if the information is found within the provided reference, the risk of hallucinations is significantly reduced. This is because the model focuses on validated, evidence-based content rather than its broader, potentially less reliable pre-training data.

While this method is effective, it requires finding a reliable reference for each query, which can be time-consuming and may not always be feasible in fast-paced environments. Additionally, some models may have limited capacity for prompting, making it challenging to include lengthy references. Furthermore, even when a reference containing the correct response is added to the prompt, the model may still provide an incorrect answer due to the complexity of the text, difficulties in extracting nuanced information, or the inherent randomness in token prediction. Nevertheless, providing a reference that is likely to contain the correct answer significantly reduces the likelihood of hallucinations.
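The sketch below shows one way such a prompt might be framed: the reference text is pasted directly into the prompt, and the model is instructed to answer only from that text. The reference string here is a placeholder for a real excerpt.

# Sketch of grounding a query in a supplied reference text.
# reference_text is a placeholder for a trusted excerpt (e.g., from a
# peer-reviewed review article) that the user pastes in.
reference_text = "<paste trusted excerpt on lipomas and ALTs here>"
question = "What is the difference between a lipoma and an ALT?"

prompt = (
    "Answer the question using ONLY the reference text below. "
    "If the reference does not contain the answer, say so explicitly "
    "instead of guessing.\n\n"
    f"Reference:\n{reference_text}\n\n"
    f"Question: {question}"
)

print(prompt)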

Algorithmic Solutions for Mitigating Hallucinations

In addition to the prompting techniques, there is a second group of techniques that can mitigate LLM hallucinations, but they often require a certain level of coding knowledge and technical expertise, which may be more challenging for radiologists without a background in these areas. However, it is crucial for radiologists to be aware of these techniques, as understanding them allows for informed advocacy within their institutions. By being knowledgeable about these solutions, radiologists can encourage their organizations to implement them, either through collaboration with technically skilled colleagues or by leveraging institutional resources. This proactive approach can lead to greater accuracy and efficiency when integrating LLMs into radiological workflows.

Retrieval-Augmented Generation (RAG)

In the last prompt-based approach discussed, the challenge of manually finding and inserting the right reference document into the prompt—a task that can be tedious and time-consuming—was highlighted. Retrieval-Augmented Generation (RAG) automates this process, addressing the main downside by enabling LLMs to find and use relevant documents from a pre-constructed knowledge base (8). For instance, in a system designed to answer radiology questions, a comprehensive collection of radiology documents or articles would serve as the knowledge base. When a user poses a question, the RAG algorithm identifies the most relevant documents within this knowledge base, incorporates them into the prompt, and allows the LLM to generate a response based on these documents.

The automation of RAG hinges on converting both the documents in the knowledge base and the user’s query into numerical representations known as vectors—a process called embedding. By leveraging mathematical techniques to compare the vector representation of the query with those of the documents, RAG can identify the most content-relevant resources to include in the prompt. This greatly enhances the model’s ability to provide accurate, well-informed answers.
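The toy sketch below illustrates only the retrieval step, using deliberately crude bag-of-words "embeddings" and cosine similarity; real RAG systems rely on learned embedding models and vector databases, so this is a conceptual illustration rather than a working pipeline.

import math
from collections import Counter

# Toy knowledge base; in practice these would be full documents or articles.
documents = [
    "Atypical lipomatous tumors are locally aggressive fatty neoplasms.",
    "MRCP is useful for evaluating the biliary tree noninvasively.",
    "LI-RADS standardizes reporting of liver lesions in at-risk patients.",
]

def embed(text):
    # Crude bag-of-words "embedding"; real systems use learned embedding models.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = "How aggressive is an atypical lipomatous tumor?"
query_vec = embed(query)

# Retrieve the document most similar to the query...
best_doc = max(documents, key=lambda d: cosine(query_vec, embed(d)))

# ...and build the augmented prompt that would be sent to the LLM.
prompt = f"Reference:\n{best_doc}\n\nQuestion: {query}\nAnswer using only the reference."
print(prompt)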

However, RAG is not without its challenges. One significant issue is that, regardless of how extensive a knowledge base might be, it is possible that the documents containing the answer to a specific query simply do not exist within the knowledge base. In such cases, the model might still fail to provide an accurate answer. Additionally, sometimes documents that are retrieved based on their relevance to the query—due to shared concepts or terminology—may not actually contain the specific answer needed. Including such documents in the prompt could lead to confusion or incorrect answers from the model.

Despite these challenges, RAG remains a well-known technique for automating the retrieval of relevant reference documents, significantly reducing the likelihood of hallucinations compared to models that answer questions without any supporting references. Ongoing research in AI aims to address the limitations of RAG, further improving its effectiveness and reliability.

Fine-Tuning

Fine-tuning (or instruction-tuning) is a technique used to improve the performance of an LLM on specific tasks by training it on a custom dataset tailored to those tasks (9). Unlike the initial pre-training phase, which involves massive amounts of data and extensive computational resources, fine-tuning is a lighter and more targeted process. It involves training the model on a smaller, carefully curated dataset composed of questions and their corresponding answers relevant to the desired application—such as radiology. This allows the model to become more adept at answering the types of questions it will encounter in practice, thereby reducing the likelihood of hallucinations.
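To make the idea of a curated fine-tuning dataset concrete, the sketch below writes a few question-and-answer pairs to a JSONL file in a common chat-message layout. The exact schema and the training step itself depend on the specific model and toolkit, so this is only an illustrative starting point, and real datasets must be far larger and expert-reviewed.

import json

# A tiny, illustrative fine-tuning dataset of radiology Q&A pairs.
examples = [
    {
        "messages": [
            {"role": "user", "content": "Are atypical lipomatous tumors benign?"},
            {"role": "assistant", "content": "No. ALTs (well-differentiated liposarcomas) are locally aggressive."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "What MRI feature favors ALT over simple lipoma?"},
            {"role": "assistant", "content": "Thickened or enhancing septa and non-adipose components favor ALT."},
        ]
    },
]

# Write one JSON object per line (JSONL), a layout many fine-tuning
# toolkits accept; the required schema varies by provider.
with open("radiology_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")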

However, fine-tuning comes with its own set of challenges. First, while it is less resource-intensive than full pre-training, fine-tuning still requires significant hardware and computational resources, making it a costly endeavor. Additionally, creating a high-quality fine-tuning dataset is crucial. This dataset must be sufficiently large and meticulously organized, with accurate and relevant queries and answers. If the dataset is not clean and well-curated, the model’s performance may suffer, leading to incorrect or misleading answers.

Another limitation of fine-tuning is that it effectively locks the model’s knowledge to the data it has been fine-tuned on. This means that if a question that was not covered in the fine-tuning dataset is asked, the model may struggle to provide an accurate answer. Moreover, there is a risk that fine-tuning a model for a specific task could cause its performance to decline in other areas where it previously performed well. This trade-off occurs because fine-tuning makes the model more specialized, potentially at the expense of its general capabilities.

Despite these drawbacks, when fine-tuning is executed correctly, and the fine-tuning dataset accurately represents the questions likely to be asked, the model’s reliability in those areas improves significantly. This focused training helps minimize hallucinations, making the model’s responses more accurate and dependable for the specific tasks it has been fine-tuned to address.

Multi-Agent Frameworks

Up to now, the discussion has focused on how a single LLM can address a user’s query. However, there is another approach that involves creating a network of different LLMs working together to solve a problem—a method known as a multi-agent framework. In this setup, each LLM, or "agent," handles a specific aspect of the query. By breaking down a complex user query into smaller, more manageable tasks, each agent can focus on its area of expertise. The inputs and outputs of these agents are interconnected, allowing the system to work collaboratively toward answering the user’s query (10).

For example, consider a radiologist who is interpreting an imaging study and wants to determine the possible diagnoses based on the imaging findings of a given computed tomography (CT) scan, along with the patient’s clinical history and lab results. Instead of relying on a single model to process all this information—which could lead to compounded errors due to the next-token prediction nature of LLMs—different agents could be assigned to each domain. One agent could analyze the CT scan imaging findings, another could evaluate the patient’s clinical history and disease progression, and a third could interpret the lab data.

The agent responsible for the imaging findings could employ RAG to search through a curated database of CT scan findings and generate a list of potential diagnoses. Meanwhile, the clinical history agent might analyze the patient’s background and symptoms, identifying relevant patterns and conditions. The lab data agent would similarly assess the lab results, considering how they correlate with possible diagnoses. These agents would then share their findings, and a final agent could synthesize all the information, providing the radiologist with a comprehensive report that includes the most likely diagnosis and other possible considerations.
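The sketch below mirrors that division of labor in schematic form: each "agent" is a stub function standing in for a separate LLM call (imaging, history, labs), and a final function synthesizes their outputs. Everything here, from the function names to the returned strings and the clinical example, is hypothetical scaffolding intended only to show the shape of such a pipeline.

# Schematic multi-agent pipeline; each function stands in for a separate
# LLM call (possibly to different models), here returning canned strings.

def imaging_agent(ct_findings: str) -> str:
    # Would use an LLM (optionally with RAG over a CT-findings database).
    return f"Imaging differential for: {ct_findings}"

def history_agent(clinical_history: str) -> str:
    # Would use an LLM to summarize relevant history and symptom patterns.
    return f"Relevant history: {clinical_history}"

def lab_agent(lab_results: str) -> str:
    # Would use an LLM to interpret laboratory data.
    return f"Lab interpretation: {lab_results}"

def synthesis_agent(imaging: str, history: str, labs: str) -> str:
    # Would use a final LLM to merge the agents' outputs into one report.
    return "\n".join(["Integrated report:", imaging, history, labs])

report = synthesis_agent(
    imaging_agent("hypoenhancing pancreatic head mass with ductal dilation"),
    history_agent("weight loss and new-onset diabetes"),
    lab_agent("elevated CA 19-9 and bilirubin"),
)
print(report)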

This multi-agent framework is more robust to hallucinations because it can leverage the strengths of different models, each specialized for a specific task. It also opens the door to innovation by allowing users to mix and match LLMs from different sources, each with its own strengths and weaknesses. This flexibility enables the creation of tailored workflows that minimize hallucinations and maximize accuracy.

However, this approach has its downsides. Relying on multiple LLMs can be costly, both in terms of resources and time. Each agent requires computational power, and the process of integrating and synthesizing their outputs might take longer than using a single model. Additionally, while this method reduces the risk of hallucinations, it is more complicated to implement and might face unforeseen difficulties in real-time performance. Despite these challenges, the multi-agent framework offers a powerful way to harness the strengths of different models, providing a more reliable and comprehensive approach to complex queries in radiology and beyond.

Final Thoughts

While LLMs hold tremendous potential for advancing radiology practice, their current limitations necessitate careful consideration and thoughtful implementation. In this report, the fundamentals of LLM hallucinations were explored, and several strategies to mitigate these hallucinations were discussed, starting with prompt-based techniques and moving on to more advanced technical solutions. While these techniques can help to reduce the likelihood of hallucinations significantly, it is essential to acknowledge that none of them can completely eliminate this issue, and each comes with its own set of pros and cons. The field of LLMs and AI is advancing rapidly, and it is also likely that new techniques will emerge or existing ones will be integrated into the models themselves, making them more user-friendly and less reliant on manual intervention.

It is also worth noting that while this discussion has focused on LLMs, many of the principles and strategies mentioned are equally relevant to large multimodal models (LMMs)—advanced versions of these models that can process and analyze not just text but also images, video, and other data types. These multimodal models offer powerful tools for integrating different forms of data but also come with their own unique challenges, including the potential for hallucinations across multiple data types.

Finally, it is essential to remember that when using LLMs for clinical queries, the data entered into their prompts is shared with the model's host organization, particularly for commercial models like ChatGPT. This data could potentially be used for training future generations of models, so it is imperative to avoid including any HIPAA-sensitive or other confidential information in these prompts. As AI continues to be integrated into radiological workflows, maintaining data privacy and security must remain a top priority.

References

1. Bhayana R. Chatbots and Large Language Models in Radiology: A Practical Primer for Clinical and Research Applications. Radiology. 2024;310(1):e232756. doi: 10.1148/radiol.232756.

2. Ahmad MA, Yaramis I, Roy TD. Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI. arXiv [cs.CL]. 2023. http://arxiv.org/abs/2311.01463.

3. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Adv Neural Inf Process Syst. 2017;30. https://proceedings.neurips.cc/paper/7181-attention-is-all.

4. Gupta S, Shrivastava V, Deshpande A, et al. Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs. arXiv [cs.CL]. 2023. http://arxiv.org/abs/2311.04892.

5. Wei J, Wang X, Schuurmans D, et al. Chain of thought prompting elicits reasoning in large language models. Adv Neural Inf Process Syst. 2022;abs/2201.11903. https://proceedings.neurips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html.

6. Yao S, Yu D, Zhao J, et al. Tree of thoughts: Deliberate problem solving with large language models. Adv Neural Inf Process Syst. 2023;abs/2305.10601. doi: 10.48550/arXiv.2305.10601.

7. Pezeshkpour P, Hruschka E. Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions. arXiv [cs.CL]. 2023. http://arxiv.org/abs/2308.11483.

8. Gao Y, Xiong Y, Gao X, et al. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv [cs.CL]. 2023. http://arxiv.org/abs/2312.10997.

9. Zhang S, Dong L, Li X, et al. Instruction Tuning for Large Language Models: A Survey. arXiv [cs.CL]. 2023. http://arxiv.org/abs/2308.10792.

10. Guo T, Chen X, Wang Y, et al. Large Language Model based Multi-Agents: A Survey of Progress and Challenges. arXiv [cs.CL]. 2024. http://arxiv.org/abs/2402.01680.


INTERVIEW: Q&A with Dr. Aoife Kilcoyne

Interviewed by: Rita Maria Lahoud, MD, Radiology Resident, Tufts Medical Center, MA, USA

Dr. Kilcoyne is currently an Assistant Professor at Harvard Medical School and Program Director of the Body Imaging Fellowship in the Division of Abdominal Imaging at Massachusetts General Hospital. She has authored multiple publications and has been involved in professional societies both in leadership and committee roles.

What inspired you to pursue an academic career in Body Imaging?

I started my Radiology training at the National Liver Transplant Center at St. Vincent’s University Hospital in Dublin, Ireland. There, I was very fortunate to have some incredible mentors and peers with expertise in Hepatic and Pancreaticobiliary Imaging. Weekly multidisciplinary team meetings were full of complex and varied cases. I enjoyed seeing the essential part that Diagnostic and Interventional Radiology Attendings played in the evaluation and management of some of the most unwell patients in the hospital, many of whom had come from centers across the country to be treated. The Academic Radiology Research Trust in St Vincent’s Radiology Group had begun an exciting collaboration with Massachusetts General Hospital, where I came on the MacErlaine scholarship to complete a clinical and research fellowship.

What are some of the most rewarding aspects of working in an academic setting?

Fantastic colleagues! I feel lucky every day to be surrounded by such knowledgeable, experienced and dedicated physicians. The working environment is extremely collaborative. At our daily teaching conference, residents, fellows and attendings share the most interesting cases of the day. It is an ideal forum for learning and for teaching and a privilege to be able to pass on some of the knowledge acquired to future generations of Radiologists.

Each Radiologist in our group has a subspecialty area of interest within Abdominal Imaging. Our twice-weekly lecture program is targeted toward fellows but provides an opportunity for all staff members to keep up to date with new and evolving imaging techniques.

How has being involved in professional societies impacted your career?

I have been fortunate to have committee and leadership roles in the SAR (Society of Abdominal Radiology), the ACR (American College of Radiology), and the RSNA. In these roles, I have had the opportunity to interact with and learn from leaders in Radiology practices at other academic centers of excellence across the country. This has allowed me to learn how other centers have employed newer imaging techniques, as well as about practice workflow and optimization. Creating this network as an early career attending has been a wonderful experience, and many of these friendships have proven fruitful and long-lasting.

How do you envision the future of Body Imaging education and training?

While the COVID pandemic presented a number of challenges to trainee education, it also provided immense opportunities in international education and collaboration. Grand Rounds speakers can now present to sites across the country and the world from their own centers.

Online, case-based learning modules, such as those provided by SABI, offer a more interactive forum for trainees to learn from than traditional didactic lectures. Hands-on workshops, such as those run at SAR, have proven very popular, allowing trainees to interact with experts and learn from peer participants.

As Oncologic treatments in particular have become more complex, Radiologists are now at the center of the multidisciplinary team, not simply interpreting images but also assisting in treatment and directing patient management.

There has been, and will continue to be, huge growth in prostate cancer imaging as well as in metabolic liver imaging, including fat, iron, and fibrosis quantification.

What is your perspective on the future impact of artificial intelligence on Body Imaging?

There is immense potential for AI in body imaging, ranging from workflow optimization, where AI can aid in selecting appropriate imaging protocols and in triaging and interpreting cases, to image quality, where deep learning reconstruction algorithms can improve image quality and reduce noise in CT and MRI. AI-assisted image analysis using deep learning and convolutional neural networks also has the potential to improve diagnostic accuracy.

Challenges exist, however, related to the complexity of abdominal imaging and, in particular, the frequent presence of multiple pathologies. In addition, medicolegal concerns about the respective responsibilities of the specific AI algorithm/vendor versus the interpreting radiologist have delayed progress in this area. It is an exciting area, and one in which there will likely be huge growth in the coming years.

What advice would you give to those considering a fellowship in Body Imaging?

Do it! Body imaging is an interesting and challenging subspecialty that is constantly evolving. The broad array of organ systems and imaging techniques ensures that every working day is varied and rewarding.

Geography is obviously important from a family and cost of living perspective. For a single year of fellowship it’s definitely worth stepping outside your comfort zone, exploring a new city and making some new connections.

Reaching out to past graduates (recent and older) of the program to gain their perspective is essential. No fellowship program is perfect but there are definitely pros and cons to individual programs that will appeal to different interests and overall career goals. Good Luck!

ANNOUNCEMENTS


SABI and AIRE Partner to Offer Free AI Literacy Course

The Society of Advanced Body Imaging (SABI) is excited to announce a partnership with Artificial Intelligence in Radiology Education (AIRE) to present the 2024 AI Literacy Course, a groundbreaking educational program designed to equip radiologists with essential AI knowledge. Directed by Dr. Jordan D. Perchik, this free one-week course will cover key AI applications across various radiology subspecialties, including Pediatric, Musculoskeletal, and Cardiac Radiology, as well as Quality Assurance and Ethical Considerations. The course, which has already empowered hundreds of radiology trainees worldwide, will begin on September 30 and conclude with a hybrid virtual and live hands-on session at the SABI Annual Meeting on October 5 in Washington, DC. Don't miss this opportunity to enhance your understanding of AI and its transformative impact on radiology practice. For more information and to register, visit this link.


SABI interesting cases for trainees #8 - The Sneaky Small Bowel Follow Through

When: September 18th, 2024
How long: 50 minutes, with 10 minutes for Q&A
Where: Zoom
Time: 7:00-8:00 PM EST
Audience: Residents and Abdominal Imaging Fellows

Join Here 

Copyright 2024 by SABI