Integrating AI into Your Clinical Practice
Artificial intelligence, particularly in the form of Large Language Models (LLMs), is rapidly becoming a practical tool in professional settings. An LLM is a sophisticated AI trained on vast amounts of text data to understand context, generate human-like writing, and answer complex questions. At the University of Florida, NaviGator provides a secure, private environment for clinicians, faculty, and staff to leverage the power of these models for a wide range of language-based tasks.
For a busy clinician, the applications are immediate and impactful. Imagine being able to summarize a dense new research article in minutes before rounds, or brainstorm different ways to explain a complex diagnosis to a patient and their family. From drafting administrative emails to generating initial talking points for a presentation, LLMs can serve as a powerful assistant, helping you manage information and streamline communication. The key is learning how to interact with them effectively to ensure the results are accurate, relevant, and useful for your specific needs.
The resources on this page are designed to provide a foundational understanding of NaviGator and demonstrate how you can begin using it to enhance your daily workflow.
Resources
The following resources will help you understand the core concepts behind NaviGator and teach you how to interact with it effectively. Starting with a basic definition of the technology, we then move into the practical skills you need to get reliable, high-quality results for your clinical work.
Introduction to Large Language Models
This video defines LLMs and their function in modern AI tools. It introduces NaviGator as the University of Florida’s secure platform for interacting with LLMs and provides practical examples of how a clinician might use this technology—from summarizing research articles to brainstorming patient talking points—to improve efficiency and manage complex information.
Basics of Prompt Engineering: The RTF Formula
Learn the essential skill of prompt engineering to ensure your interactions with NaviGator are efficient and accurate. This video breaks down the RTF (Role, Task, Format) framework, a simple method for structuring your requests. By clearly assigning a role, defining a specific task, and specifying the desired output format, you can transform NaviGator from a general tool into a precise clinical assistant. The video also touches on advanced techniques like chain-of-thought prompting to improve the reliability of complex queries.
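To make the RTF structure concrete, the short sketch below assembles the three components into a single prompt string. It is purely illustrative: the helper function, the clinical wording, and the example prompt are assumptions for demonstration, not taken from the video or from NaviGator itself.

```python
def build_rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Combine the Role, Task, and Format components into one prompt string."""
    return f"Role: {role}\nTask: {task}\nFormat: {fmt}"

# Hypothetical clinical example (wording is illustrative only).
prompt = build_rtf_prompt(
    role="You are an experienced hospitalist preparing for morning rounds.",
    task="Summarize the key findings of the attached sepsis-management article.",
    fmt="A bulleted list of no more than five points, in plain language.",
)
print(prompt)
```

Written out this way, the habit is easy to carry into the chat interface: state who the assistant should be, what exactly it should do, and how the answer should look.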
Prompt Engineering: The CREATE Formula
For more nuanced clinical questions, the CREATE formula offers even greater control over the AI’s output. This video details the six components—Character, Request, Examples, Adjustments, Type of output, and Evaluation—that allow you to build highly specific prompts. You’ll learn how to add critical context, set constraints (like patient allergies or local resistance patterns), and provide examples to guide the model, making NaviGator a more sophisticated partner for complex, multi-step clinical reasoning.
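The six CREATE components can be sketched the same way. The dataclass, field wording, and clinical scenario below are hypothetical illustrations (not drawn from the video); the sketch only shows how the components stack into one structured prompt, including constraints such as an allergy.

```python
from dataclasses import dataclass, fields

@dataclass
class CreatePrompt:
    """The six CREATE components, rendered in order into one prompt."""
    character: str
    request: str
    examples: str
    adjustments: str
    type_of_output: str
    evaluation: str

    def render(self) -> str:
        labels = ["Character", "Request", "Examples", "Adjustments",
                  "Type of output", "Evaluation"]
        values = [getattr(self, f.name) for f in fields(self)]
        return "\n".join(f"{label}: {value}" for label, value in zip(labels, values))

# Hypothetical clinical example (wording is illustrative only).
prompt = CreatePrompt(
    character="You are a clinical pharmacist advising a primary care team.",
    request="Suggest empiric antibiotic options for community-acquired pneumonia.",
    examples="Format each option as: drug, dose, duration, one-line rationale.",
    adjustments="The patient has a documented penicillin allergy.",
    type_of_output="A numbered list of no more than three options.",
    evaluation="Flag any option that conflicts with the stated allergy.",
).render()
print(prompt)
```

Compared with RTF, the extra fields are where the clinical nuance lives: Adjustments carries constraints like allergies or local resistance patterns, and Evaluation asks the model to check its own output against them.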
HIPAA Compliance with NaviGator
This essential video reviews your legal and ethical obligations under the Health Insurance Portability and Accountability Act (HIPAA). It explains that you must not input Protected Health Information (PHI) unless using a model specifically approved for restricted data. The video also details the university’s data classification system (Open, Sensitive, and Restricted) and clarifies your responsibility to match your data type to an approved LLM on the NaviGator platform before proceeding.
Choosing the Right Tool: A Guide to NaviGator Models
The AI models available in NaviGator are updated frequently. For the most up-to-date information on available models and their approved data classifications, visit the NaviGator AI overview page.