When Google Maps or Waze redirects us away from a traffic accident, or Netflix suggests a movie we might like based on our past viewing, we don’t think twice about it. This is AI helping us make decisions in our daily lives.
But when the words “artificial intelligence” are paired with words like “health care” or “patient outcomes,” some people see red flags.
At SHRS, faculty and researchers are stepping forward to ensure that AI is used in the most responsible way to improve the efficiency and effectiveness of health care delivery, and most importantly, to improve patient outcomes.

“It is important to acknowledge that AI is applied within a collection of tools, and like any other tools, they can be used well or misapplied,” states Elizabeth Skidmore, SHRS associate dean for Research and professor, Department of Occupational Therapy (OT).
“How these tools are applied matters, because misapplication can reinforce biases and contribute to inequities,” she continues. “The field is evolving quickly and best practices are still emerging. It is important that the application of AI involves methods that ensure transparency and reproducibility.”
Ethical Use of AI: The GREAT PLEA
In 2023, Yanshan Wang, vice chair of Research and assistant professor, Department of Health Information Management, published an article in the journal npj Digital Medicine addressing the ethical use of AI. His paper, “The GREAT PLEA,” proposed a framework for the responsible use of generative AI (GenAI) in health care.
GenAI learns patterns from existing data, such as evidence-based medicine, and uses this knowledge to generate new outputs or predictions.

According to Wang, “The GREAT PLEA” sets forth nine principles that scientists, programmers and stakeholders should consider before applying AI in practice. “It is an acronym for the principles of Governability, Reliability, Equity, Accountability, Traceability—and Privacy, Lawfulness, Empathy and Autonomy,” says Wang.
He notes that the focus is often placed on the performance of AI rather than on its ethical implications. “The use of AI in health care is unique in that health care is a type of public service,” says Wang. “Any use of AI by the health care community should adhere to the same ethical principles that otherwise guide our work.”
“If you break down each of the principles in ‘The GREAT PLEA,’ you will see that we are advocating for putting patients at the center of all AI work,” he continues.
By adhering to these guidelines, Wang believes clinicians will be better able to gain—and maintain—the trust of their patients as they incorporate AI into their diagnoses, evaluations and treatment plans.

But Wang cautions that evaluation methods must improve to ensure that generative AI remains both accurate and ethical.
He calls for experts such as physicians and other scientists to continually review AI-generated content across many dimensions, including accuracy and misinformation. “There are critical limitations that experts can put on generative AI to maintain the quality of the data,” he continues. “If these measures are followed, GenAI can be a powerful tool in achieving better health outcomes.”
Ensuring Fidelity
Over the past 15 years, Skidmore and her interprofessional team of clinicians and researchers have been working to develop and implement an intervention that helps people with cognitive impairments actively engage in their rehabilitation. “When they are actively engaged, we find these patients derive more benefit and have significantly less disability six to 12 months later,” notes Skidmore. “But we need to be able to assess the fidelity of our intervention and make sure what we intend to deliver in rehabilitation hospitals is actually happening.”

In the past, Skidmore hired well-trained, well-paid therapists known as “raters” to watch videos of rehabilitation therapists delivering occupational therapy, physical therapy and speech-language pathology interventions in hospital settings. The raters used a standardized checklist developed by Skidmore and her team to determine whether or not the interventions met the criteria for fidelity.
Because she hoped to scale this intervention out to dozens of rehabilitation hospitals across the country, she needed to find a more cost-effective method. Skidmore tapped into the knowledge of Wang and Health Informatics Associate Professor Leming Zhou.
“Through our collaborations, we were able to use machine learning to solve a real-world problem,” says Skidmore.
Using Skidmore’s pre-established protocol, the AI tool analyzed thousands of hours of video to determine whether selected intervention sessions had good fidelity. She and her colleagues then set up and tested models to determine whether the AI could meet or exceed the gold standard previously achieved by their highly trained raters.

The answer was yes. “We saw that there were patterns in what our raters were seeing that could be identified and reliably reproduced with the machine learning approach,” states Skidmore. “But it could be done much more quickly, and in a more cost-effective manner.”
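In spirit, the approach is a standard supervised learning setup: the human raters’ fidelity decisions serve as labels, and a model learns to reproduce them on new sessions. The sketch below illustrates that idea in Python with synthetic stand-in data; the features, labels and model choice are hypothetical assumptions for illustration, not the actual SHRS pipeline, which worked from video.

```python
# A minimal sketch of rater replication, assuming (hypothetically) that each
# therapy session has been reduced to a numeric feature vector (e.g., counts
# of checklist-relevant behaviors) and labeled by human raters as meeting
# fidelity criteria (1) or not (0). All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 200 sessions, 5 checklist-derived features each.
X = rng.normal(size=(200, 5))
# Stand-in rater labels, loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Cross-validated accuracy estimates how often the model would agree with
# the human raters on held-out sessions; in a real study, the benchmark
# would be the raters' own gold-standard agreement.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"Mean agreement with rater labels: {scores.mean():.2f}")
```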
Moving forward, she hopes to use this approach in an expanded study in which therapists upload their videos and receive a score on the fidelity of their interventions, providing feedback and helping optimize rehabilitation for patients.
“Drs. Wang and Zhou helped us think carefully about how we design our projects and how we design our data collection,” she continues. “They also helped us develop assessable models and to understand what we can and cannot conclude from the data.”
Improving the Standard of Care
In the Augmentative and Alternative Communication (AAC) and Brain Computer Interface (BCI) iNNOVATION Laboratory (iLAB) in the Department of Communication Science and Disorders (CSD), Professor Katya Hill is using AI and machine learning to improve communication and the quality of life for people with severe physical impairments.
With funding from The Beckwith Institute’s Clinical Transformation Program, she is in the early stages of a clinical trial exploring new methods for communicating with patients who have varying levels of consciousness. The study utilizes mindBEAGLE, an innovative BCI system, to gain insights into the cognitive and communication capabilities of patients who are unable to respond through conventional means.

“Evaluating patients in a coma presents significant diagnostic challenges, as traditional communication methods are unavailable due to their inability to provide verbal or physical responses,” explains Hill’s graduate student researcher Amber Lieto.
“mindBEAGLE uses artificial intelligence and supervised machine learning, which means it is trained to recognize patterns in brain waves using a classification system,” continues Lieto. “The technology detects unique brain responses when a person focuses on specific ‘target’ stimuli rather than other non-target stimuli. Based on brain wave patterns, the system can learn to identify if the user is communicating ‘yes’ or ‘no’ when answering questions.”
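What Lieto describes is, at its core, a supervised classification problem over stimulus-locked EEG segments, or “epochs.” The sketch below shows that structure in Python with synthetic data; the epoch dimensions, the classifier (linear discriminant analysis, a common choice for this kind of task) and all numbers are illustrative assumptions, not mindBEAGLE’s actual implementation.

```python
# A minimal sketch of target vs. non-target EEG classification, assuming
# (hypothetically) that recordings are already cut into fixed-length epochs
# around each stimulus and labeled 1 (target, attended) or 0 (non-target).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_epochs, n_features = 300, 40  # e.g., 8 channels x 5 time windows, flattened

labels = rng.integers(0, 2, size=n_epochs)       # 1 = target, 0 = non-target
epochs = rng.normal(size=(n_epochs, n_features))
# Targets carry a small added deflection -- a stand-in for the
# attention-related brain response the system is trained to recognize.
epochs[labels == 1, :10] += 0.8

# Train on the first 200 epochs, test on the remaining 100.
clf = LinearDiscriminantAnalysis().fit(epochs[:200], labels[:200])
print("Held-out accuracy:", clf.score(epochs[200:], labels[200:]))
```

Once such a classifier can tell target from non-target responses, answering a yes/no question reduces to checking which of two presented stimuli evokes the target pattern.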

Lieto says mindBEAGLE may allow health care providers to investigate brain function and conduct cognitive assessments that provide more comprehensive information about a patient’s ability to follow commands. The implications of this research could potentially transform how medical professionals interact with and care for patients with impaired consciousness.
Graduate student researcher Michael S. O’Leary is also working with Hill to develop a commercial product that uses BCI to give individuals with severely limited physical abilities continued access to the capabilities of an AAC device, even after they can no longer touch a switch or use eye gaze to access their language software.
“We’re using machine learning to recognize patterns in brain waves as seen on an EEG that enable patients to make selections on a communication device,” says O’Leary.

“Even though everyone’s brain waves are different, our newest model is trained to recognize what we call the ‘aha!’ moment—that point in time when the computer knows the patient is paying attention to a certain image, such as a letter, on the screen. This allows the patient to elicit a response whenever they want, without the need to pause the system—something that they would otherwise not be able to do with current systems.”
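One common way to support this kind of user-initiated, always-listening (“asynchronous”) selection is to commit only when the classifier’s attention score stays high across several consecutive windows. The sketch below is a hypothetical illustration of that logic in Python; the function, threshold and window count are assumptions, not O’Leary’s actual model.

```python
# A minimal sketch of asynchronous "aha!" detection: the system listens
# continuously and fires a selection only when a (hypothetical) per-window
# attention score stays above threshold for several windows in a row.
from collections import deque

def detect_selection(scores, threshold=0.9, consecutive=3):
    """Return the window index at which a selection triggers, else None."""
    recent = deque(maxlen=consecutive)
    for i, score in enumerate(scores):
        recent.append(score >= threshold)
        if len(recent) == consecutive and all(recent):
            return i  # the user sustained attention: commit the selection
    return None  # no selection yet: keep listening, no need to pause

# Example: scores rise while the user fixates a letter on the screen.
stream = [0.2, 0.4, 0.3, 0.92, 0.95, 0.97, 0.5]
print(detect_selection(stream))  # -> 5
```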
The commercial product consists of a specially designed headset with dry electrodes that plugs into an AAC device.
Looking to the Future
Wang reports that AI models already use electronic health record (EHR) data to predict which patients are at high risk of developing certain diseases, identify optimal treatment plans based on patient characteristics, and monitor patients for adverse events or treatment effectiveness.
In addition, he says that EHRs can be integrated with other health care data sources, such as medical imaging, genomics and wearable devices, to provide a more comprehensive view of a patient’s health status. “This integrated data can be used to develop more sophisticated AI models that can predict health outcomes with greater accuracy and provide personalized care recommendations based on individual patient characteristics,” explains Wang.
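Schematically, such risk models follow a familiar pattern: patient records are flattened into per-patient features, a model is trained on known outcomes, and its probability output flags high-risk patients for clinician review. The sketch below shows that pattern in Python with synthetic data; the features, outcome and model are illustrative assumptions, not a clinical system.

```python
# A minimal sketch of EHR-style risk prediction, assuming (hypothetically)
# tabular per-patient features (labs, vitals, demographics) and a binary
# outcome label. All data here is synthetic, not clinical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_patients = 500
X = rng.normal(size=(n_patients, 8))   # e.g., 8 EHR-derived features
risk = X[:, 0] + 0.7 * X[:, 3]         # synthetic underlying risk
y = (risk + rng.normal(scale=0.8, size=n_patients) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# The probability output can flag patients for clinician review --
# the model assists the care team; it does not replace their judgment.
p = model.predict_proba(X_te[:1])[0, 1]
print(f"Predicted risk for first test patient: {p:.2f}")
```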

But in the future, clinicians will need more sophisticated training to fully understand the risks and benefits of this constantly evolving technology.
“Although artificial intelligence has been on our radar for years, it’s really only been in the past couple of years that it has been infiltrating more of our conversations in health care,” reflects Skidmore.
“I had on-the-job training in the use of AI, thanks to my colleagues, but it’s imperative that we as faculty train students in the responsible use of AI,” she continues.
“The curricula for our programs, particularly our doctoral programs, contain voluminous amounts of material for students to learn. It will be necessary for us to see how AI training can be incorporated into accreditation standards in the future.”
Skidmore goes on to say that students are both excited and overwhelmed at the prospect of using AI. “By training them in how to interface with experts who have access to evidence-based approaches and the knowledge to apply AI and machine learning technologies, they will understand the importance of us all working together to achieve a common goal—better outcomes for our patients.”