Channel: Healthcare IT News - Quality and Safety

HIMSS24 Nursing Informatics Forum: How can IT help healthcare's key workforce?

The HIMSS24 Nursing Informatics Forum is celebrating more than 25 years in Orlando next week, with an agenda looking back at the past quarter century – and forward to a future where the nature of technology-enabled nursing promises no shortage of change.

Fittingly titled "Legacy and Innovation," the preconference Nursing Informatics Forum – cohosted by Sean Michaels, DNP, of Orlando Health and Whende Carroll, RN, of HIMSS – takes place on Monday, March 11.

Across opening and closing keynote addresses and six forum sessions in between, industry leaders will explore the dynamic field of nursing informatics amidst rapid technological advancements and significant industry changes.

With a primary focus on addressing the global nursing shortage and associated burnout, the discussions emphasize creative and empowering solutions, with the opening keynote to be delivered by Kenrick Cato, professor of informatics at University of Pennsylvania Medicine.

In his presentation, Cato will discuss the evolution of healthcare technology innovation over the past five, 10 and 15 years, highlighting its impact on nursing capabilities and emphasizing the pivotal role of nursing informatics in ensuring nurses' involvement in strategic decision-making related to technology.

Sessions throughout the forum cover various themes, including the role of nurses in shaping an innovative healthcare future with AI, ethical considerations in AI integration, insights into technology adoption and integration in nursing practice, and the influential role of nurse informaticists in inspiring change.

Following Cato's opening remarks, three sessions in a row will explore the impact of AI on nursing.

The first lecture, delivered by Tom Lawry, managing director at Second Century Tech, will offer an overview of key AI concepts, including AI's potential to drive innovation and equitable healthcare solutions, and how the intersection of nursing, informatics and AI is shaping the future of healthcare.

The second, delivered by Robbie Freeman, digital experience and chief nursing informatics officer at Mount Sinai Health System, will both focus on the central role nurses will play in driving AI and digital innovation in healthcare and examine the ethical considerations of AI.

A third discussion, focused on the ethical integration of AI in nursing, will be led by Olga Yakusheva, professor of nursing and public health at the University of Michigan, and Tracee Coleman, clinical informatics consultant at Optum Health. They'll explain the strategy developed in collaboration with the Nursing Knowledge Big Data Science Initiative to leverage AI's potential while addressing ethical concerns.

Crucial integration strategies, key considerations and pitfalls, and the benefits and risks of AI in nursing will all be discussed, with a special focus on the importance of structure.

The forum culminates in a keynote closing speech from Connie Delaney, dean and professor at the University of Minnesota School of Nursing, who will focus on the collective call to action at the nexus of nursing and technology, and assess the challenges and barriers that nurses and nursing informaticists face upon taking leadership roles in healthcare technology initiatives.

How should health systems put ethical AI into practice?

By now, health systems seeking to capitalize on the enormous potential of artificial intelligence are well aware – or should be, at least – of the inherent risks, even dangers, of algorithms and models that are suboptimally designed or trained on the wrong data.

But understanding the hazards of algorithmic bias or murky modeling techniques isn't the same as knowing how to protect against them.

How can healthcare providers know how to spot biased black box systems? How should they mitigate the risks of training algorithms on the wrong datasets? How can they build an ethical and equitable AI culture that prioritizes transparency and trustworthiness?

At the HIMSS24 AI in Healthcare Forum on March 11, an afternoon panel discussion will tackle those questions and more.

The session, The Quest for Responsible AI, is set to be moderated by HIMSS director of clinical research Anne Snowdon and features two leading thinkers about artificial intelligence in healthcare: Michael J. Pencina, chief data scientist at Duke, director of Duke AI Health and professor of bioinformatics at the Duke School of Medicine; and Brian Anderson, the former chief digital health physician at MITRE, who was just announced as the new CEO of the Coalition for Health AI, which he cofounded.

In advance of HIMSS24, we spoke with Anderson about the imperatives of AI transparency, accountability and data privacy, and how healthcare organizations can prioritize them and act on them as they integrate AI more tightly into their care delivery.

Q. What are some of the biggest ethics or responsibility challenges around AI's role in healthcare, as you see them?

A. Part of the challenge starts at a very high level. All of us are patients or caregivers at one point in our life. Healthcare is a highly consequential space. Artificial intelligence is essentially tools and programs that are trained on our histories. And the data that we have to train these programs [can be] fundamentally biased in some pretty noticeable ways. The most accessible kinds of data, the most robust data, oftentimes comes from big academic medical centers that are really well staffed, that have the ability to create these robust, complete datasets.

And it's often on urban, highly educated, more often white than not, kinds of people. The challenge, then, is how do we build fair models that do not have an unjustified bias to them. And there are no easy answers to that. I think it takes a coordinated approach across the digital health ecosystem in terms of how we invest and think about intentionally partnering with communities that haven't been able to tell their story from a digital perspective to create the datasets that can be used for training purposes.

And it opens up some other challenges around how we think about privacy and security, how we think about ensuring that all this data that we're looking to connect together is actually going to be used to help the communities that it comes from.

And yet, on the flip side of this, we have this great promise of AI: That it's going to enable people that traditionally don't have easy access to healthcare to be able to have access to patient navigator tools. To be able to have an advocate that, as an example, might be able to go around helping you navigate and interact with providers, advocating for your priorities, your health, your needs.

So I think there's a lot of exciting opportunities in the AI space. Obviously. But there are some real challenges in front of us that we need to, I think, be very real about. And it starts with those three issues: All of us are going to be patients or caregivers at one point in our life. All these algorithms are programs that are trained on our histories, and we have a real big data problem in terms of the biases that are inherent in the data that is, for the most part, the most robust and accessible for training purposes.

Q. How then should health systems approach the challenge of building transparency and accountability from the ground up?

A. With the Coalition for Health AI, the approach that we've taken is looking at a model's lifecycle. A model is developed initially, it's deployed and it's monitored or maintained. In each one of those phases, there are certain considerations that you need to really focus on and address. So we've talked about having data that is engineered and available to appropriately train these models in the development phase.

If I'm a doctor at a health system, how do I know if a model that is configured in my EHR is the appropriate model? If it's fit for purpose for the patient I have in front of me? There are so many things that go into being able to answer those questions completely.

One is, does the doctor even understand some of the responsible AI best practices? Does the doctor understand what it means to look critically at the AI's model card? What do I look for in the training data? What do I look for in the approach to training? In the testing data? Were there specific indications that were tested? Are there any indications or limitations that are called out, like, don't use it on this kind of patient?

Those are really important things. When we think about the workflow and the clinical integration of these tools, simply having pop-up alerts is an [insufficient] way of thinking about it.

And, particularly in some of these consequential spaces where AI is becoming more and more used, we really need to upskill our providers. And so having intentional efforts at health systems that train providers on how to think critically about when and when not to use these tools for the patients they have in front of them is going to be a really important step.

You bring up another good point, which is, "OK, I'm a health system. I have a model deployed. Now what?"

So you've upskilled your doctors, but AI, as you know, is dynamic. It changes. There's performance degradation, there's model drift, data drift.

I would say one of the more unanswered questions is the one you're bringing up, which is: Health systems, the majority of them are in the red. And so to go to them and say, "OK, you've just bought this multimillion-dollar AI tool. Now you have to stand up a governance committee that's going to monitor that and have another suite of digital tools that are going to be your dashboards for monitoring that model." If I were a health system, I would run for the hills.

So we don't have yet a scalable plan as a nation in terms of how we're going to support critical access hospitals or FQHCs or health systems that are less resourced, that don't have the ability to stand up these governance committees or these very fancy dashboards that are going to be monitoring for model drift and performance.

And the concern I have is that, because of that, we're going to go down the same path that we've gone down with many of the other kinds of advances we've had in health, particularly in digital health, which is just a reinforcing of the digital divide in health systems: Those that can afford to put those things in place do it, and those that don't, they would be irresponsible if they were to try to purchase one of these models and not be able to govern it or monitor it appropriately.

And so some of the things that we're trying to do in CHAI are identify what are the easily deployable tools and toolkits – SMART on FHIR apps, as an example – and who are the partners in the platform space, a Microsoft, a Google Cloud or an AWS, that can build the kind of tools that can be more scalable and more easily deployed, so that health systems on any one of these cloud providers are able to use them more easily, in perhaps a remote way.

Or how can we link assurance labs that are willing to partner with lesser-resourced health systems to do remote assurance, remote monitoring of locally deployed models?

And so it's this balance, I think, of enabling health systems to do it locally, while also enabling external partners – be it platform vendors or other assurance lab experts – to be able to help, in this cloud-interoperable world that we live in, in perhaps a more remote setting.

Q. Congratulations, by the way, on your new CEO position at the Coalition for Health AI. What has been front and center for CHAI recently, and what are you expecting to be talking about with other HIMSS24 attendees as you walk around the convention center next week?

A. I would say – and this goes for MITRE, too – the thing that has been front and center at MITRE and at CHAI is this amazing new set of emerging capabilities coming out in generative AI. And the challenge has been coming to agreement on how you measure performance in these models.

What does accuracy look like in a large language model's output? What does reliability look like in a large language model's output, where the same prompt can yield two different responses? What does measuring bias look like in measuring the output of one of these large language models? How do you do that in a scalable way? We don't have consensus perspectives on these important fundamental things.
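To make that measurement gap concrete, here is a minimal, purely illustrative sketch of one way a team might begin to quantify output consistency for a single prompt: sample the model repeatedly and score how much the responses agree. The `generate` function is a hypothetical placeholder for whatever LLM client an organization actually uses, and token overlap is only a crude proxy for reliability – this is not a CHAI-endorsed framework.

```python
# Illustrative sketch only: one crude way to quantify output "reliability"
# (consistency) for a generative model, given that no consensus framework
# exists yet. `generate` is a hypothetical stand-in for a real LLM client.
from itertools import combinations

def generate(prompt: str) -> str:
    """Hypothetical model call; replace with an actual LLM API client."""
    raise NotImplementedError

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of token sets: 0 = disjoint answers, 1 = identical."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

def consistency_score(prompt: str, n_samples: int = 5) -> float:
    """Mean pairwise overlap across repeated responses to the same prompt."""
    responses = [generate(prompt) for _ in range(n_samples)]
    pairs = list(combinations(responses, 2))
    return sum(token_overlap(a, b) for a, b in pairs) / len(pairs)
```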

You can't manage what you can't measure. And if we don't have agreement on how to measure, a pretty consequential space that people are beginning to explore with generative AI is going unanswered. We urgently need to come to an understanding about what those testing and evaluation frameworks are for generative AI, because that then informs a lot of the regulatory work that's going on in this space.

That's perhaps the more urgent thing that we're looking at. I would say it's something that MITRE has been focused on for quite some time. When we look at the non-health-related spaces, a lot of the expertise that our team, the MITRE team, brought to CHAI was informed by a lot of the work going on in different sectors.

And so I know that in healthcare, we're used to other sectors telling us, "I can't believe that you haven't done X or Y or Z yet." Or, like, "You're still using faxes? How backward are you in healthcare?"

I would say similarly in this space, we have a lot to learn from other sectors that have already explored these areas – how to think about computer vision algorithms and generative AI capabilities in domains beyond health – and that will help us get to some of these answers more quickly.

Q. What else are you hoping to learn next week in Orlando?

A. I think one of the things that I'm really excited about – again, it's something that I learned at MITRE – is the power of public-private partnerships. And I would never want to speak for the U.S. government or the FDA, and I won't here, but I would say, one of the things I'm really excited about – and I don't know how this is going to play out – is seeing how the U.S. government is going to be participating in some of these working groups that we're going to be launching on our webinar next week.

You're going to get leading technical experts in the field from the private sector, working alongside folks from the FDA, ONC, Office for Civil Rights, CDC. And what comes out of that, I hope, is something beautiful and amazing, and it's something that we as society can use.

But I don't know what it's going to look like. Because we haven't done it yet. We're going to start doing that work now. And so I'm really excited about it. I don't know exactly what it's going to be, but I'm pretty excited to see kind of where the government goes, where the private sector teams go when they start working together, elbow-to-elbow.

The session, "The Quest for Responsible AI: Navigating Key Ethical Considerations," is scheduled for the preconference AI in Healthcare Forum, on Monday, March 11, 1-1:45 p.m. in Hall F (WF3) at HIMSS24 in Orlando. Learn more and register.

EHR developers adopt FHIR-based oncology standardization

In a potentially major advancement for oncology treatment and information sharing, several leading electronic health record vendors this week made a voluntary commitment to adopt the United States Core Data for Interoperability Plus Cancer, or USCDI+ Cancer, a recommended minimum set of key cancer-related data elements to be included in a patient's EHR.

They've also pledged to support the necessary data elements for a new cancer care payment model developed by the Centers for Medicare and Medicaid Services. 

WHY IT MATTERS

The Cancer Moonshot Initiative, which was first launched in 2016 and then resurrected in 2022, is a multipronged effort that aims to lower costs and improve care and outcomes for cancer patients, and it requires EHRs to embrace interoperability and new data standards.

According to a White House Office of Science and Technology Policy blog post published Tuesday, the group of EHR developers – in coordination with the Department of Health and Human Services Office of the National Coordinator for Health Information Technology, the National Institutes of Health, CMS and the Cancer Moonshot – voluntarily committed to adopting data elements that cover vital information about a person's treatment history, test results and disease status, improving data sharing by healthcare providers.

The Administration said that adoption by EHR developers Epic, Oracle Health, Meditech, athenahealth, Flatiron, Ontada, ThymeCare and CVS Health will improve care coordination for people facing cancer nationwide, especially in rural and underserved areas. Standardizing data across EHRs also opens new possibilities for faster research results and more effective public health interventions, OSTP noted.

Because the data elements for CMS' Enhancing Oncology Model, or EOM, also form the core of USCDI+ Cancer, the Administration said it is calling on the entire healthcare ecosystem to support national health information exchange.

THE LARGER TREND

Health data and research have for too long been trapped in silos, former President Barack Obama noted in January 2016, when he announced the Cancer Moonshot in his final State of the Union Address – and appointed then-Vice President Joe Biden to lead it.

At the time, he said only 5% of cancer patients in the U.S. ended up in a clinical trial.

"Most aren’t given access to their own data," Obama said. "At the same time, community oncologists – who treat more than 75% of cancer patients – have limited access to cutting-edge research and advances."

That number has gone up, according to research published in the Journal of Clinical Oncology in 2021. 

By 2020, "at least 25.4% of adult cancer patients were estimated to participate in one or more cancer clinical research studies," the researchers said, concluding that based on enrollment data from the Commission on Cancer, "enrollment to cancer treatment trials was 6.3%, higher than historical estimates of <5%."

Now, FHIR-based oncology data exchange through the EOM could improve oncology delivery for years to come, said the Digital Medicine Society's Jennifer Goldsack. 

She told Healthcare IT News in January that harnessing the power of digital innovation to achieve the goal of reducing cancer deaths by 50% becomes possible when data is flowing. 

"Data doesn't live in a manilla folder in a file cabinet with either the only the practice manager or the principal investigator of a clinical trial being the one with access."

With the right interoperability specifications and privacy and security, we can fundamentally change how we deliver healthcare, Goldsack said.

"We've been talking about learning health systems and precision medicine for decades, but now, we can deliver."

ON THE RECORD

"These commitments are not to us, but to the people who rely on these electronic health record systems, including providers and patients," Dr. Danielle Carnival, deputy assistant to the President for the Cancer Moonshot, said in the blog. "We commend this voluntary action from leaders in the electronic health record developer community, as it will help clinicians provide better treatment for people living with cancer."

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.

Artificial intelligence can seem like magic – but it's crucial to see past the mirage

ORLANDO – Dr. Jonathan Chen began his thought-provoking performance at the HIMSS24 AI in Healthcare Forum on Monday by invoking a famous quote from science fiction titan Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic."

In a 21st century where technologies are advancing faster than ever – especially artificial intelligence, in all its forms – it can indeed feel like we're living in a world of the wizardry of illusion, said Chen, assistant professor at the Stanford Center for Biomedical Informatics Research.

"It's becoming really hard to tell what is real and what is not nowadays," he said.

To illustrate the point, Chen peppered his audience-participation-heavy demonstrations with some pretty impressive magic tricks involving mystery rope, card guessing and a trick copy of Pocket Medicine, the indispensable reference book for residents doing their rounds.

The sleight of hand was fun, but Chen had a very serious point to make: For all the value it offers, AI – especially generative AI – is fraught with risk if not developed transparently and used with a clear-eyed understanding of its potential harms.

"As a physician, my job is to restore patients back to health all the time. But I'm also an educator. So rather than try to trick you today, I thought it might be more interesting to show you step-by-step how such an illusion is created," said Chen.

"It's invisible forces at play," he said, echoing the black-box concept of machine learning algorithms whose inner workings can't be gleaned. "Nowadays. In the age of generative AI, what can we believe anymore?"

Indeed. Chen showed a video of someone speaking who was the spitting image of himself. In an ever-so-slightly stilted voice, this person said:

Before we dive in, allow me to introduce myself. Although that phrase may take on a surreal meaning today. I'm not the real speaker. Nor did the real speaker write this introduction. The voice you're hearing, the image you're seeing on the screen, and even these introductory words, were all generated by AI systems.

We are actively amidst the arrival of a set of disruptive technologies that are changing the way all of us do our work and live our lives. These profound capabilities and potential applications could reshape healthcare, offering both new opportunities and ethical challenges.

To make sure we're still anchored in reality. However, let's welcome the real-life version of our speaker. Take it away, Dr. Jonathan Chen, before they start thinking I'm the one who went to medical school.

"Whoa," said the real Dr. Chen. "That was weird."

No question, hospitals and health systems large and small are finding real and concrete success stories with a wide array of healthcare-focused use cases, from automating administrative tasks to turbocharging patient engagement offerings.

"I certainly hope that one day, hopefully soon, AI systems can manage the overwhelming flood of emails and basket messages I'm being bombarded with," said Chen.

In the meantime, whether they're "actual practical uses that can save us right now" or dangerous applications that can do harm with misinformation, "the Pandora's box has been opened, good or bad," he said. "People are using this for every possible application you can imagine – and many you wouldn't imagine."

He recalled a recent conversation with some medical trainees.

"One of them stopped me, said, 'Wait a minute, we are totally using ChatGPT on ICU rounds right now. Are you saying we should not be using this as a medical reference?'

"I said, 'No! We should not use this as a medical reference!' That doesn't mean you can't use it at all. But you just have to understand what it is and what it is not."

But what it is is evolving by the day. If generative LLMs are essentially just autocomplete on steroids, the models "are now demonstrating emergent properties which surprise many in the field, including myself," said Chen. "Question answering, summarization, translation, generation of ideas, reasoning with a theory of mind – which is really bizarre.

"Although maybe it's not that bizarre. Because what is all of your intellectual and emotional thought that you prize so deeply? How do you express and communicate that, but through the language and medium of words. So perhaps it's not that strange that if you have a computer that's so fast on manipulating words, it can create a very convincing illusion of intelligence."

It's crucial, he said, for clinicians to keep an eagle eye out for what he calls confabulation.

"The more popular term is hallucination, but I really don't like that. It's not actually a really medically accurate term here, because hallucination implies somebody who believes something that is not true. But these things, they don't believe anything, right? They don't think. They don't know. They don't understand. What they do is they string together words in a very believable sequence, even if there's no underlying meaning. That is the perfect description of confabulation.

"Imagine if you were working with a medical student who is super book-smart, but who also just made up facts as you went on rounds. How dangerous would that be for patient care?"

Still, it's becoming apparent that "we are converging upon a point in history where, human versus computer generated content, real versus fabricated information, you can't tell the difference anymore."

What's more, the technology may actually be getting more empathetic – or, of course, getting a lot better at making it appear that it is. Chen cited a recent study by some of his colleagues at Stanford that got a lot of attention this past year.

"They took a bunch of medical questions on Reddit where real doctors answered these questions, and then they fed those same questions through chatbots. And then they had a separate set of doctors grade those answers in terms of their quality on different levels, and found that the chatbot-generated answers scored higher, both in terms of quality and in empathy. Like, the robot was nicer to people than real doctors were!"

That and other examples "tell us that I don't think we as humans have as much of a monopoly on empathy and therapeutic relationship as we might like to believe," said Chen, who has written extensively on the topic.

"And for better and for worse, I fully expect that in the, not just in future, more people are going to receive therapy and counseling from automated robots than from actual human beings. Not because the robots are so good and humans are not good enough – but because there's an overwhelming imbalance in the supply and demand between our patients and people who need these types of support and a human-driven healthcare workforce can never keep up with that total demand."

Still, there will always, always be a central need for humans in the healthcare equation.

Chen closed with another quote, from healthcare IT and informatics pioneer Warner Slack: "Any doctor that could be replaced by computer should be replaced by computer."

"A good human, a good doctor, you cannot replace them no matter how good a computer ever gets," said Chen. "Am I worried about a doctor replacing my job? I'm totally not."

What concerns him is a generation of physicians "burned out by becoming data entry clerks" and by the "overwhelming need of tens of millions of patients" in the U.S. alone.

"I hope computers and AI systems will help take over some work so we can get some joy back in our work," he said. "While AI is not going to replace doctors, those who learn how to use AI may very well replace those who do not."

Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

John Halamka on the risks and benefits of clinical LLMs

ORLANDO – At HIMSS24 on Tuesday, Dr. John Halamka, president of Mayo Clinic Platform, offered a frank discussion about the substantial potential benefits – and very real potential for harm – in both predictive and generative artificial intelligence used in clinical settings.

Healthcare AI has a credibility problem, he said. Mostly because the models so often lack transparency and accountability.

"Do you have any idea what training data was used on the algorithm, predictive or generative, you're using now?" Halamka asked. "Is the result of that predictive algorithm consistent and reliable? Has it been tested in a clinical trial?"

The goal, he said, is to figure out some strategies so "the AI future we all want is as safe as we all need."

It starts with good data, of course. And that's easier discussed than achieved.

"All algorithms are trained on data," said Halamka. "And the data that we use must be curated, normalized. We must understand who gathered it and for what purpose – that part is actually pretty tough."

For instance, "I don't know if any of you have actually studied the data integrity of your electronic health record systems, and your databases and your institutions, but you will actually find things like social determinants of health are poorly gathered, poorly representative," he explained. "They're sparse data, and they may not actually reflect reality. So if you use social determinants of health for any of these algorithms, you're very likely to get a highly biased result."

More questions to be answered: "Who is presenting that data to you? Your providers? Your patients? Is it coming from telemetry? Is it coming from automated systems that extract metadata from images?"

Once those questions are answered satisfactorily – once you've made sure the data has been gathered in a comprehensive enough fashion to develop the algorithm you want – then it's just a question of identifying potential biases and mitigating them. Easy enough, right?

"In the dataset that you have, what are the multimodal data elements? Just patient registration is probably not sufficient to create an AI model. Do you have such things as text, the notes, the history and physical [exam], the operative note, the diagnostic information? Do you have images? Do you have telemetry? Do you have genomics? Digital pathology? That is going to give you a sense of data depth – multiple different kinds of data, which are probably going to be used increasingly as we develop different algorithms that look beyond just structured and unstructured data."

Then it's time to think about data breadth. "How many patients do you have? I talked to several colleagues internationally that say, well, we have a registry of 5,000 patients, and we're going to develop AI on that registry. Well, 5,000 is probably not breadth enough to give you a highly resilient model."

And what about "heterogeneity or spread?" Halamka asked. "Mayo has 11.2 million patients in Arizona, Florida, Minnesota and internationally. But does it offer representative data for France, or for a Nordic population?"

As he sees it, "any dataset from any one institution is probably going to lack the spread to create algorithms that can be globally applied."

In fact, you could probably argue there is no one who can create an unbiased algorithm developed in one geography that will work in another geography seamlessly.

What that implies, he said, is that "you need a global network of federated participants that will help with model creation and model testing and local tuning if we're going to deliver the AI result we want on a global basis."

On that front, one of the biggest challenges is that "not every country on the planet has fully digitized records," said Halamka, who was recently in Davos, Switzerland for the World Economic Forum.

"Why haven't we created an amazing AI model in Switzerland?" he asked. "Well, Switzerland has extremely good chocolate – and extremely bad electronic health records. And about 90% of the data of Switzerland is on paper."

But even with good digitized data, and even after accounting for that data's depth, breadth and spread, there are still other questions to consider. For instance, what data should be included in the model?

"If you want a fair, appropriate, valid, effective and safe algorithm, should you use race ethnicity as an input to your AI model? The answer is to be really careful with doing that, because it may very well bias the model in ways you don't want," said Halamka.

"If there was some sort of biological reason to have race ethnicity as a data element, OK, maybe it's helpful. But if it's really not related to a disease state or an outcome you're predicting, you're going to find – and I'm sure you've all read the literature about overtreatment, undertreatment, overdiagnosis – these kinds of problems. So you have to be very careful when you decide to build the model, what data to include."

Even more steps: "Then, once you have the model, you need to test it on data that's not the development set, and that may be a segregated data set in your organization, or maybe another organization in your region or around the world. And the question I would ask you all is, what do you measure? How do you evaluate a model to make sure that it is fair? What does it mean to be fair?"

Halamka has been working for some time with the Coalition for Health AI, which was founded with the idea that, "if we're going to define what it means to be fair, or effective, or safe, that we're going to have to do it as a community."

CHAI started with just six organizations. Today, it's got 1,500 members from around the world, including all the big tech organizations, academic medical centers, regional healthcare systems, payers, pharma and government.

"You now have a public private organization capable of working as a community to define what it means to be fair, how you should measure what is a testing and evaluation framework, so we can create data cards, what data went into the system and model cards, how do they perform?"

It's a fact that every algorithm will have some sort of inherent bias, said Halamka.

That's why "Mayo has an assurance lab, and we test commercial algorithms and self-developed algorithms," he said. "And what you do is you identify the bias and then you mitigate it. It can be mitigated by returning the algorithm to different kinds of data, or just an understanding that the algorithm can't be completely fair for all patients. You just have to be exceedingly careful where and how you use it.

"For example, Mayo has a wonderful cardiology algorithm that will predict cardiac mortality, and it has incredible predictive, positive predictive value for a body mass index that is low and a really not good performance for a body mass index that is high. So is it ethical to use that algorithm? Well, yes, on people whose body mass index is low, and you just need to understand that bias and use it appropriately."

Halamka noted that the Coalition for Health AI has created an extensive series of metrics and artifacts and processes – available at CoalitionforHealthAI.org. "They're all for free. They're international. They're for download."

Over the next few months, CHAI "will be turning its attention to a lot of generative AI topics," he said. "Because generative AI evaluation is harder."

With predictive models, "I can understand what data went in, what data comes out, how it performs against ground truth. Did you have the diagnosis or not? Was the recommendation used or helpful?"

With generative AI, "it may be a completely well-developed technology, but based on the prompt you give it, the answer could either be accurate or kill the patient."

Halamka offered a real example.

"We took a New England Journal of MedicineCPC case and gave it to a commercial narrative AI product. The case said the following: The patient is a 59-year-old with crushing, substantial chest pain, shortness of breath – and left leg radiation.

"Now, for the clinicians in the room, you know that left leg radiation is kind of odd. But remember, our generative AI systems are trained to look at language. And, yeah, they've seen that radiation thing on chest pain cases a thousand times.

"So ask the following question on ChatGPT or Anthropic or whatever it is you're using: What is the diagnosis? The diagnosis came back: 'This patient is having myocardial infarction. Anticoagulate them immediately.'

"But then ask a different question: 'What diagnosis shouldn't I miss?'"

To that query, the AI responded: "'Oh, don't miss dissecting aortic aneurysm and, of course, left leg pain,'" said Halamka. "In this case, this was an aortic aneurysm – for which anticoagulation would have instantly killed the patient.

"So there you go. If you have a product, depending on the question you ask, it either gives you a wonderful bit of guidance or kills the patient. That is not what I would call a highly reliable product. So you have to be exceedingly careful."

At the Mayo Clinic, "we've done a lot of derisking," he said. "We've figured out how to de-identify data and how to keep it safe, the generation of models, how to build an international coalition of organizations, how to do validation, how to do deployment."

Not every health system is as advanced and well-resourced as Mayo, of course.

"But my hope is, as all of you are on your AI journey – predictive and generative – that you can take some of the lessons that we've learned, take some of the artifacts freely available from the Coalition for Health AI, and build a virtuous life cycle in your own organization, so that we'll get the benefits of all this AI we need while doing no patient harm," he said.

Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

Microsoft and 16 health systems debut network for responsible AI

HIMSS24 this week saw the debut of the Trustworthy & Responsible AI Network, or TRAIN, which aims to operationalize responsible AI principles to improve the quality, safety and trustworthiness of AI in healthcare.

A new consortium of healthcare organizations is behind TRAIN. Members include AdventHealth, Advocate Health, Boston Children's Hospital, Cleveland Clinic, Duke Health, Johns Hopkins Medicine, Mass General Brigham, MedStar Health, Mercy, Mount Sinai Health System, Northwestern Medicine, Providence, Sharp HealthCare, University of Texas Southwestern Medical Center, University of Wisconsin School of Medicine and Public Health, Vanderbilt University Medical Center, and Microsoft as the technology-enabling partner.

The network is collaborating with OCHIN, which serves a national network of community health organizations with expertise, clinical insights and tailored technologies, and TruBridge, a conduit to community healthcare, to help ensure every organization, regardless of resources, has access to TRAIN's benefits.

New AI systems have the potential to transform healthcare by enabling better care outcomes, improving efficiency and productivity, and reducing costs. From helping screen patients, to developing new treatments and drugs, to automating administrative tasks and enhancing public health, AI is creating new possibilities and opportunities for healthcare organizations and practitioners.

As new uses of AI in healthcare continue to unfold and grow, the need for rigorous development and evaluation standards becomes even more important to ensure effective and responsible applications of AI, TRAIN said.

Through collaboration, TRAIN members can potentially help improve the quality and trustworthiness of AI by sharing best practices related to the use of AI in healthcare settings; enabling registration of AI used for clinical care or clinical operations through a secure online portal; providing tools to enable measurement of outcomes associated with the implementation of AI; and facilitating the development of a federated national AI outcomes registry for organizations to share among themselves. The registry will capture real-world outcomes related to the efficacy, safety and optimization of AI algorithms.

"Even the best healthcare today still suffers from many challenges that AI-driven solutions can substantially improve," said Dr. Peter J. Embí, professor and chair of the department of biomedical informatics and senior vice president for research and innovation at Vanderbilt University Medical Center.

"However, just as we wouldn't think of treating patients with a new drug or device without ensuring and monitoring their efficacy and safety, we must test and monitor AI-derived models and algorithms before and after they are deployed across diverse healthcare settings and populations, to help minimize and prevent unintended harms," he continued.

"It is imperative that we work together and share tools and capabilities that enable systematic AI evaluation, surveillance and algorithmvigilance for the safe, effective and equitable use of AI in healthcare," he added. "TRAIN is a major step toward that goal."

When it comes to AI's tremendous capabilities, there is no doubt the technology has the potential to transform healthcare, said Dr. David Rhew, global chief medical officer and vice president of healthcare at Microsoft.

"However, the processes for implementing the technology responsibly are just as vital," he concluded. "By working together, TRAIN members aim to establish best practices for operationalizing responsible AI, helping improve patient outcomes and safety while fostering trust in healthcare AI."

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

How real-time surveillance software can improve hospital safety


One of Epic's emeritus CIO advisors on helping others optimize their EHRs

Robert Slepin has a very interesting job, as emeritus CIO advisor at electronic health record giant Epic. There are only a dozen or so of these contractors, who are available on demand to Epic provider clients that need strategic advice or interim executive assistance from someone who's deeply experienced with planning, implementing and maintaining the Epic EHR system.

Slepin – who has served as chief information officer or in other top IT roles at health systems such as Johns Hopkins Medicine International, Sutter Health, John C. Lincoln Health Network in Arizona, University Health Network in Toronto and AxisPoint Health in Colorado – enjoys helping other healthcare leaders manage the challenges of EHR operation and optimization.

We spoke with him for his perspective on the world of Epic and its technology – discussing the kinds of hurdles hospitals and health systems call him in to advise on, how he works with teams onsite at provider organizations, his methodology for problem-solving and troubleshooting, and more.

Q. You're in a unique position with the biggest EHR player on the block. You also have long experience as a healthcare CIO. What are common challenges that other CIOs and IT leaders at hospitals and health systems call you in for?

A. CIOs and health system leaders call me in to be a strategic advisor, and sometimes also as a project director, for clinical transformation, electronic health record and other large-scale or innovative IT projects that improve the patient experience and outcomes, clinician wellbeing, cost efficiency, and health equity.

Common challenges exist at each stage of the project life cycle, from assessing feasibility to positioning a project for board approval to successfully implementing a project to realizing benefits longer term.

My services include help with building a business case; assessing readiness and risks; determining project scope, approach, phasing, schedule and budget; designing the governance and project structure and process; advising key stakeholders on project status, risks and opportunities; project planning; staffing the team; managing the human side of change; directing the project; transitioning the project to operations; and advising, coaching and mentoring CIOs and IT directors on EHR optimization, digital strategy, portfolio roadmaps and operations.

Risk management is a common challenge. In all aspects of planning, implementing and operating EHRs and other enterprise systems, it is essential to identify, evaluate and mitigate risks to ensure the responsible use of resources and delivery of value. While there are numerous potential risks to watch out for in an EHR project, the highest priority always is mitigating risk to patient safety.

Because transitioning from one system to another in a clinical setting is inherently high-risk, it is of the utmost importance to keep eyes wide open and ensure every patient remains safe when systems are designed, built and transitioned into production. CIOs and health system leaders seek my advice and support at each stage of a project to help them in identifying, evaluating and mitigating patient safety and other risks.

Cost is another common challenge. EHR and other projects require a significant capital investment and often an increase in operating expenditure to sustain the system over the longer term. CIOs are challenged to develop business cases that are credible and affordable from both the capital and operating spend perspectives.

The total cost of ownership of the system is typically forecast over a ten-year period, and benchmarks with peer hospitals are used to inform budget-setting and assess reasonableness. While boards, CEOs and CFOs usually want to see some amount of hard-dollar return on investment, the purpose of EHR projects is not to save money but rather to save lives, improve patient care and help people optimize their health.

I support budget development and cost optimization by assisting with high-level and line-item cost estimation and benchmarking and by providing an independent, objective perspective informed by experience at numerous other sites.

Benefits identification and realization is another typical challenge. There is a growing body of evidence from numerous EHR and clinical information system implementations across the world over the past few decades. The available evidence is a starting point for identifying benefits that may apply to the context of any given organization. Business cases usually include not only costs and risks but also financial, clinical and other benefits to be achieved as a result of the project.

While it is easy to find public evidence of others' outcomes, it can be hard to translate the information to one's local context since every organization is unique and has a different baseline.

It is even more difficult to get buy-in from clinical, operations and medical executives to commit to meeting specific targets, for example, taking costs out of their operating budgets from increased efficiencies, given the complexity, uncertainty, volatility and ambiguity of a large, complex EHR implementation; lapse of time between starting a project, going live and realizing benefits; and internal political risks and external environmental uncertainties.

To meet the challenge of strengthening benefits planning, CIOs have sought my assistance with benchmarking with other sites; identifying opportunities to translate benefit concepts to their organizations; engaging with key stakeholders to seek sponsorship and participation in benefits planning; aligning benefits plans with the EHR implementation plan; and designing and assisting with the transition of the benefits plan into an operational governance structure and process post-live.

Q. How exactly do you go about working with the team onsite at the provider organization? What is your role? Who do you interact with?

A. My approach in working with the onsite team at the provider organization depends on the role I am playing. I might be serving as a board advisor, executive advisor or project director – or a blend of these roles in some cases. It also depends on other factors, for example, the specific problem the organization needs to solve, and the objectives, scope and schedule for the engagement.

Leadership preferences and organizational culture can also influence how my role is defined and whom I interact with. When my engagement is with a provider organization under the umbrella of Epic's Emeritus program, which is my preference because it is so easy for providers to engage me in this way using their existing agreement, I coordinate closely with Epic's senior leaders responsible for the implementation at the site.

When I am engaged as a board advisor, the first thing I typically do is meet with the CEO to discuss the requirements for the engagement, ensure clarity and alignment, and write up a formal letter that both of us sign. The letter specifies the objectives, deliverables, approach, schedule, working relationships, status reporting, compensation and other elements.

Before or after finalizing the letter with the CEO, I would meet with the board of directors or board committee chair and the CEO together to review, discuss and ensure alignment.

I would also be introduced to the other key stakeholders, for example, the CIO, CFO and CMO, and kick off the engagement at the executive level, followed by participation in various organizational meetings, document review and one-on-one interviews with executives, business sponsors, project directors, IT directors and other key individuals identified by the CEO as being appropriate to meet with.

Over the course of approximately 30 days, I would draft and revise an initial report of observations and recommendations, review the draft with selected key stakeholders to receive their input and ultimately deliver a final written report and oral presentations to management and the board. Depending on the engagement, my advisory services could continue into a subsequent phase of the project – and last as long as several months to a year post-go-live of the new EHR.

As an executive advisor, my approach is similar to the board advisor role, but there would be little, if any, interaction at the board level. I advise and coach CXOs in all aspects of clinical transformation, social and technical, at all phases of the program life cycle.

What this looks like includes one-on-one or small group meetings and sometimes occurs during a meal together, which is an opportunity to get away from the office, be in a more comfortable, relaxed setting, and go deeper into areas of interest.

In any advisory role, I work closely with colleagues at Epic (and other IT vendors depending on the project) to get their perspectives, consult with them on their and my ideas for action, and keep them informed. Leaders at Epic I typically work with include VPs, implementation executives, implementation directors, technical coordinators and customer happiness executives.

They bring valuable information, insights and recommendations, which I carefully consider and integrate into my assessment and recommendations.

As a contract project director leading an EHR implementation, my job is to complete any remaining pre-implementation planning, finalize the budget, fully staff and train the project team, operationalize governance and project management office controls, execute all aspects of the project, safely go live on time and within budget, stabilize the new system, and transition the system to the operations team.

In this role, I work with the same stakeholders as I would as an advisor – plus many more roles and people across the hospital or health system, for example, most departments within IT; medical staff, clinical and corporate leadership; providers, nurses and health professionals; safety, quality, risk, legal and compliance; and finance, human resources, communications, facilities, purchasing and other corporate functions.

As a project director, I am joined at the hip with Epic's implementation director and work closely also with Epic's implementation executive – as well as peer IT leaders and other sponsors across the healthcare organization.

The critical need for excellence in partnering, communication, coordinating and aligning with Epic and health system leaders cannot be overstated. These projects are a huge lift, and it takes a village of people working together very, very well to get the job done in an optimal way.

Q. In this role, you obviously are what could be called a problem solver. What is your methodology for going about solving problems with EHRs?

A. Problem solving is a critical process during an EHR project – but it's best, of course, to prevent problems in the first place. Problem prevention requires following best practices and adopting sound guiding principles such as putting patients first in all decisions and actions; co-creating a transformation with clinical leadership, enabled by IT and Epic; and building quality into the EHR design, implementation and operations.

Problems inevitably occur, of course, and my approach to solving them is my own recipe, drawing on multiple frameworks. It blends Epic's proven EHR implementation methodology with acclaimed frameworks for problem-solving, leading organizational change, governing IT, managing programs and projects, developing and delivering software, caring safely, operating IT, and continuously learning and improving.

To achieve program objectives and transformational outcomes, build in quality and solve problems when they occur, my approach includes the following aspects:

  • Always put patients first

  • Respect everyone

  • Instill a clear, inspiring vision and actionable mission

  • Create a culture of safety, candor, curiosity and learning

  • Iteratively plan, execute, check and adjust

  • Engage stakeholders

  • Implement good governance

  • Recruit and develop exceptional talent

  • Integrate human and technical aspects of change

  • Adapt best and leading practices

  • Apply a scientific approach to continuous improvement

  • Leverage frameworks such as Lean, Agile, Cynefin, Toyota Kata, Project Management Body of Knowledge, IT Infrastructure Library, COBIT and others

  • Learn with the global community of Epic users

  • Partner with Epic and other vendors

When problems arise, it is important for work to be visible, for changes (which sometimes contribute to problems) to be documented and apparent, for people to feel safe in speaking up, and for leaders to be visible, accessible, respectful and supportive. These conditions enable everyone to see problems for what they are, draw attention to problems and work together to solve them.

My favorite way to solve problems is not solving them myself. I prefer to coach and support other team members in problem-solving, whether on their own or in smaller groups with colleagues. It would of course be easier in some cases for me to take command, figure it out and solve the problem. And sometimes I do need to own the problem.

But I view my role on projects being as much about building people's skills and abilities as about building systems, infrastructure, devices, software, data, interfaces, reports and workflows. For people to learn to be better problem-solvers, they need experience, methods/tools and coaching.

One of my greatest sources of satisfaction in a project engagement is the opportunity to serve as a role model and coach for the team, especially the next generation of leaders in hospitals and healthcare organizations.

When I engage or lead others in problem-solving, I use numerous problem-solving techniques and teach and coach others in using these methods. None is my own invention. I have studied, practiced and applied ways of solving problems (and improving processes).

There are tools from high-reliability organizations that many hospitals have on hand, and I like to adopt or adapt these local methods and tools because they are familiar, accessible and easily repurposable for the project. Epic also has excellent problem-solving approaches based on extensive experience implementing and operating their software, which can be leveraged.

SBAR (Situation-Background-Assessment-Recommendation) is a template I frequently use for problems that are simple or complicated, but not too complex.

I also use techniques from the worlds of Toyota, Lean, Systems Thinking and Complexity Science, such as Toyota's Practical Problem Solving and A4 methodologies, 5 Whys Analysis, Root Cause Analysis, Theory of Constraints evaporating cloud technique, Toyota Kata and more.

The Cynefin framework helps me figure out what domain the problem exists in: chaotic, complex, complicated or clear. Depending on the domain, I might choose different techniques.

Q. Please talk about one example from your body of work at Epic. What was the challenge? How did you solve it? What were the outcomes?

A. One example is from 2019-2020, when I was honored to play several roles on Alberta Health Services' (AHS) Connect Care clinical information system (CIS) project, a more than $1 billion, multiyear program that included the rollout of Epic.

I advised the board, CEO and executive sponsors, and I served as a project director for the implementation work-stream, partnering with the clinical program officer and working closely with the CIO, CMIO, PMO, applications executive, Epic leaders, AHS clinical leaders and many team members.

One challenge during this engagement, which was under the auspices of the Epic Emeritus program, was providing the board with my view of the project's overall status and risk posture. The goal was to give board members clear, concise, relevant information that would help them fulfill their fiduciary responsibilities for oversight and increase their comfort and confidence in the direction of the project, given the many risks inherent in such a large, complex endeavor.

I met this challenge in much the same way I described earlier in how I operate as a board advisor. The board received an independent, objective, expert perspective on the project – and the outcomes included additional information, which the board found valuable and which enabled members to feel a higher level of comfort with their understanding of the project's direction.

Another challenge was advising management on risks, reviewing mitigation actions and providing recommendations to ensure readiness for deploying Epic across the Edmonton Zone in the first of nine waves, with an initial go-live in November 2019.

As is typical of these projects, objectives included staying on schedule, sticking to the budget, and meeting scope and quality requirements with the highest priority on patient safety through the transition. Through ongoing risk identification, evaluation, monitoring and discussions, I contributed to the Connect Care PMO's risk management program.

One of the outcomes was a positive report from the Alberta provincial auditor, with no recommendations for improvement. Most important, the transition to Epic occurred with eyes wide open and a sustained focus on patient safety, which helped keep patients safe throughout.

My other responsibility was codirecting the implementation work stream, which included data abstraction, protocol conversion, appointment and case conversion, cutover and command center. In this role, I regularly reported to the engagement, adoption and implementation committee and participated in the overall project steering committee, PMO and other groups.

This work stream was accountable to create and implement an overall implementation strategy to drive consistency and alignment across all waves; develop a structured implementation framework to support and guide zone and local ability to execute implementation tasks; provide support and direction to sponsors and key stakeholders to ensure effective and aligned implementation activities; ensure alignment with organization change management, communication, technical, CMIO and other strategies; remove barriers; escalate issues; support execution; and drive continuous improvement.

AHS's Wave One was a success. In its published review of the initial Epic launch in November 2019, Alberta's Auditor General reported: "AHS experienced a few technical issues, some service delays, and some initial frustrations immediately following the launch. However, overall AHS felt the launch and immediate sustainment of Wave One was a success and, most importantly, was safe for patients undergoing care at that time."

The successful outcomes from Wave One were not because of me; the project was a huge team sport. I did my small part to help. There were thousands of AHS team members involved in the design of the system and numerous people focused on Wave One. Working together, the team developed and implemented the processes, controls and plans for executing the work and managing the risks, enabling an on-time, safe and effective launch of the first wave of Connect Care.

According to the auditor's report, one of the key success drivers was that "the program was not led or managed as an 'IT project.' Representatives of operations, clinical staff and physicians and AHS IT co-led the program. The involvement of operational and clinical staff in the program was pervasive."

Co-leadership and co-creation of a program like Connect Care by clinical, operations and medical leaders is a critical success factor worth calling out. This co-leadership model was important to AHS's results and is also commonly seen as a key enabler of other hospitals' and healthcare organizations' clinical transformation projects.

Co-designing the change, broad participation, and the active, visible presence of leaders from clinical, operations and IT – this kind of approach to leading change is vital.

AHS' board, senior executives, CIO, CMIO, clinical program officer, and many other leaders and team members – they were amazing in their extraordinary commitment, talent, teamwork and execution. While I joined the team for a relatively short period during their multiyear journey and can comment only on the outcomes I saw while I was there, the true value and return from these kinds of investments are seen after the project is done, which is when the clinical transformation actually starts.

The EHR implementation plants the seeds in the soil; go-live and transition to operations adds water and sunlight; adoption of the new system and optimization of its capabilities happens during ongoing operations, as people learn and adjust to the new system, do things in new and better ways, and are constantly improving and innovating – this is where value is created for patients and those who care for them.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Enterprise Taxonomy: 

Enhancing hospital IT's speed-to-market capability

0
0

As healthcare providers worldwide grapple with worsening staff shortages and rising service demand, low-code technologies have proven indispensable in addressing these twin challenges while promoting positive outcomes for both patients and clinicians.

Take the case of Mount Alvernia Hospital, a 300-bed private not-for-profit medical institution in Singapore. Where application development once took the usual three months, the hospital now puts out applications in a couple of weeks by leveraging a high-performance low-code platform.

Need for speed

As with the rest of the industry during the pandemic, Mount Alvernia was also thrown into a situation where it had to up the ante in its digital transformation. This involved the rapid development and deployment of critical solutions like health tracking and reporting. 

"We initially relied on the waterfall development model on .net and Java platforms, but the need for speed and agility led us to partner with OutSystems," Bruce Leong, the hospital's director of Technology and Strategy, told Healthcare IT News.

The platform provided by OutSystems has enabled the hospital's 13-person IT development team to build solutions from pre-built templates and plugins "in record time."

Their pivot to low-code technology was met with no resistance from the IT team; it even boosted morale, as developers could do away with the often tedious and time-consuming work of traditional coding. This made the team better able to meet constant demand from staff and clinicians to shorten the development time of new solutions to as little as one to two weeks.

Since the pandemic, the Mount Alvernia IT team has created 13 applications, including the Staff Health System (which was initially a contact tracing solution), Doctors Directory, Medical Records Tracking, and Electronic Meal Ordering. 

While improving the team's speed-to-market capability, OutSystems also ensured the cybersecurity of its platform. Leonard Tan, OutSystems Singapore and Greater China region director, explained:

"Our platform provides 500 validations from design to runtime, ensuring that every aspect built with it prioritises security. This includes automatic fixes for DDOS attacks, newly identified code vulnerabilities, and mobile threats, among others. Furthermore, enterprise IT teams can enhance security assurance by applying enterprise SAST solutions."

"These solutions offer a robust governance model tailored for enterprise software factories and come with compliance designations like SOC2, HIPAA, and more. With these measures in place, our applications can seamlessly scale from department-level usage to handling millions of simultaneous users without compromising security, speed, or performance. Our AI-driven performance tool continuously monitors code, ensuring consistent and peak-level performance and scalability."

Fostering innovation

"[Low-coding] not only sped up development but also allowed our team to focus on innovation rather than reinventing the wheel."

Bruce Leong, Director of Technology and Strategy, Mount Alvernia Hospital

Innovation is part of the hospital's DNA and aligns with its digital transformation strategy to stay agile. Currently, the Mount Alvernia IT team is developing a new patient application called Alvernia Connect.

Sharing more details about the project, Leong said: "Alvernia Connect represents a leap forward in our digital engagement with doctors and patients. This app is designed to streamline and digitalise processes that were previously manual and time-consuming.

"We’re anticipating a positive response, particularly for features like digital admission forms and appointment scheduling, which significantly reduce wait times and paperwork. Our goal with Alvernia Connect is not just to introduce convenience but to offer the patient a seamless and stress-free experience."

The first module of Alvernia Connect is targeted for release within the first quarter of the year.

Assisting AI development

As much as 70% of new in-house applications will be developed through low- or no-code technologies by 2025, noted Tan.

"In APAC, we continue to see more emerging use cases [of low- or no-code technologies] across various verticals, with the healthcare sector being one which we see going through significant digital transformation. With the shift towards patient-centricity, digitalisation of healthcare delivery services in APAC will continue to accelerate."

Leonard Tan, Regional Director, OutSystems Singapore and Greater China

Low-code can be instrumental to the development of such applications as patient portals and mobile apps, hospital management systems, medication management systems, and monitoring and triaging systems.

In the ongoing AI revolution, low-code can also assist with developing AI-powered tools for monitoring patients' conditions, managing medications, and providing convenient interactions through chatbots, among other use cases. "With the rise of AI, low-code can help healthcare players build AI-powered applications efficiently and enhance customer experiences, without going into the complicated coding work," Tan said. 

Staying agile

Today, people over the age of 60 make up 14% of the region's population; that share is expected to rise to 18% by the end of the decade and to a quarter of the population by 2050. Alongside this rapid ageing of the population, demand for healthcare services that are more convenient and personalised is also projected to rise.

"In the years ahead, we are anticipating an increased demand for healthcare propelled by the ageing population in Asia, with Hong Kong, South Korea, and Japan leading the pack. As a result, healthcare providers will find it increasingly challenging for nursing manpower to keep pace with the demand growth."

"Additionally, rising consumer demands for omnichannel experiences are pushing the healthcare sector to increasingly adopt new innovations such as digital tools and AI."

"With more healthcare players developing new applications for both internal use and patients, there's also an increasing demand for shorter development cycles, with IT teams expected to deliver applications within one to two weeks."

Leong advises the healthcare industry to "remain flexible and agile enough to tap into new solutions to meet these demands."

"To combat the healthcare manpower shortage, it becomes even more pressing to increase productivity and efficiency, and this is where technology and automation come in as a key solution."

Cleveland Clinic's advice for AI success: democratizing innovation, upskilling talent and more

0
0

One of the big questions today in healthcare is how hospitals and health systems are going to make best use of the power of artificial intelligence across the enterprise.

In just a short time, AI has arguably become the most powerful technology to ever impact healthcare. It brings with it many benefits and many challenges. So harnessing it in just the right way to study and use health data is critical.

According to Albert Marinez, chief analytics officer at Cleveland Clinic, provider organizations must:

  • Build a strong data foundation to leverage AI at scale

  • Create a data and AI innovation ecosystem to engage with industry innovators, validate opportunities and accelerate outcomes

  • Activate the organization by democratizing innovation and getting out of the way

  • Upskill, reskill and bring in new talent for a generational change in how organizations deliver services

We interviewed Marinez to discuss this guidance and the power of health data and AI.

Q. Please describe the landscape as it exists today for data and AI in healthcare. What challenges and opportunities are provider organizations faced with?

A. The enthusiasm for data analytics and AI in revolutionizing healthcare is undeniable, promising better patient care and efficiency. As the sector evolves, with both new and established players integrating AI into their solutions, it's clear that while AI offers substantial benefits, it's not a cure-all. Recognizing the fragmented landscape, we must proceed with caution, acknowledging AI's potential without overestimating its immediate impact.

As a leading provider organization, our focus is on providing world-class care to as many patients as possible. We want to create better ease of access to our system, ensure our clinicians are equipped with the best technology that enables them to spend more time with patients, and create experiences for our patients that support them on their health journey.

As we engage on these opportunities, our key data management priorities include safeguarding patient privacy and data security amidst increasing reliance on technology and external threats. It's vital to handle sensitive health information with care, ensuring robust security measures. AI's reliance on cloud and novel approaches to handling data means we need to be quite vigilant in how we manage and deploy new solutions.

Ethical considerations are also paramount, as AI applications must be equitable, transparent and accountable. Our commitment at Cleveland Clinic is reflected in our AI Taskforce, which evaluates algorithms for quality, ethics and bias, aiming to mitigate health disparities and ensure responsible AI use.

Q. To leverage the power of data and AI, you say the Cleveland Clinic first is building the foundation – a strong data foundation that is a requirement to leverage AI at scale. Please elaborate on this focus and on what you are doing.

A. To fully unleash the potential of artificial intelligence in transforming healthcare, a foundational step must be establishing a robust data platform. AI's strength lies in its capacity to sift through vast amounts of data, synthesizing and uncovering patterns, correlations and insights that would otherwise remain obscured.

Our organization is privileged to possess extensive datasets and domain-specific data marts, which form the bedrock for our AI-driven initiatives. However, the value of these datasets is contingent upon their quality. A rigorous data quality program is not just beneficial but essential to ensure that our AI algorithms can generate reliable and actionable insights. Without high-quality data, even the most sophisticated AI models are rendered ineffective.
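
As a purely illustrative aside – not something Marinez describes – the kind of automated gate a data quality program might run before a dataset feeds a model can be sketched in a few lines. The column names and thresholds below are hypothetical.

```python
# Illustrative sketch of a minimal pre-model data quality gate.
# Column names and thresholds are hypothetical assumptions.
import pandas as pd

def basic_quality_report(df: pd.DataFrame, required_columns: list[str],
                         max_null_fraction: float = 0.05) -> dict:
    """Flag missing columns, excessive nulls and duplicate records."""
    report = {
        "missing_columns": [c for c in required_columns if c not in df.columns],
        "high_null_columns": [
            c for c in df.columns if df[c].isna().mean() > max_null_fraction
        ],
        "duplicate_rows": int(df.duplicated().sum()),
    }
    report["passed"] = (not report["missing_columns"]
                        and not report["high_null_columns"]
                        and report["duplicate_rows"] == 0)
    return report

if __name__ == "__main__":
    # Toy example with a hypothetical lab-value column
    toy = pd.DataFrame({"patient_id": [1, 2, 2], "a1c": [6.1, None, 7.4]})
    print(basic_quality_report(toy, required_columns=["patient_id", "a1c"]))
```

A production program would cover far more (lineage, timeliness, clinical plausibility), but the structure is the same: check, report, and block low-quality data before it reaches a model.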

The scalability and flexibility required to meet our goals necessitate a shift toward cloud-based platforms. While our existing on-premise platform has served us well, it falls short in accommodating our growing needs and aspirations.

The cloud offers a scalable, flexible environment that can support our global analytics requirements, providing the foundation necessary for deploying AI at scale. This transition is about more than just infrastructure; it's about adopting a platform that empowers us to rapidly adapt to new technologies and innovations as they emerge, ensuring our architecture remains future-proof.

In focusing our efforts, three pivotal areas stand out. First, the identification and adoption of a cloud platform that aligns with our global analytics vision. This platform must not only meet our current demands but also have the capacity to grow with us, accommodating complex datasets and sophisticated AI applications.

Second, the development of a future-proof architecture. This involves creating a flexible, scalable framework that allows for seamless integration of new technologies and methodologies, ensuring our systems evolve in tandem with advancements in AI and data science.

Lastly, the emphasis on agility across our teams is crucial. By fostering an agile culture, we can enhance collaboration, speed up innovation cycles, and adapt more quickly to the changing landscape of healthcare data and AI.

Q. Next you say the clinic is creating a data and AI innovation ecosystem to engage with industry innovators, validate opportunities and accelerate outcomes. Please discuss your work here to help educate your peers who may be getting ready to dive in.

A. In today's rapidly evolving healthcare landscape, the pace at which new technologies emerge far outstrips our capacity to adopt and effectively utilize them independently to build new solutions. Recognizing this, our strategy emphasizes the creation of a data and AI innovation ecosystem, designed to foster engagement with industry pioneers, streamline the validation of emerging opportunities and expedite the realization of tangible outcomes.

Our engagement with external innovators is guided by a set of critical considerations. We seek solutions that address concrete challenges, are ethically grounded and free from bias, and have demonstrated efficacy within other healthcare settings. We prioritize solutions that can be deployed swiftly and possess the inherent flexibility to adapt alongside technological advancements.

Our team is dedicated to constructing a robust foundation that underpins our innovation ecosystem. By making these foundational elements accessible, we aim to empower not just our own innovators but also our partners and collaborators across the industry, enabling them to swiftly navigate from ideation to implementation.

This strategic focus on building and nurturing an innovation ecosystem represents our commitment to staying at the forefront of healthcare technology. It's about more than just keeping pace with change; it's about leading the charge, breaking new ground, and shaping the future of healthcare through continuous innovation and collaboration.

Q. After that, you say there is a need to activate the organization. Please describe what you mean here.

A. Long term, we want every caregiver at Cleveland Clinic to engage with AI. Integrating artificial intelligence into our organization means embarking on a journey to bake it into our cultural DNA. We have a multipronged approach to this:

  1. First, we prioritize education and awareness to demystify AI for our caregivers, highlighting the innovative solutions we're implementing and their potential to enhance patient care and operational efficiency. This foundational step is critical to sparking new ideas.

  2. Second, we dedicate efforts to identifying high-value use cases where AI can have a significant impact, whether by improving patient outcomes, increasing diagnostic accuracy or enhancing operational efficiency. This focus allows us to channel our resources and energies into areas with the most substantial potential benefits.

  3. Lastly, we strive to activate the innovation potential within our team of world-class clinicians. By creating an environment that encourages exploration and provides the necessary tools and support, we enable our clinicians to experiment with and apply AI in meaningful ways. This effort to inspire internal innovation is a testament to our commitment to fostering a culture of continuous learning, collaboration and innovation.

Together, these efforts ensure Cleveland Clinic not only remains at the cutting edge of healthcare innovation but also sets a benchmark in adopting and applying AI to improve both patient care and operational excellence.

Q. You say another focus must be on talent – upskilling, reskilling and bringing in new talent for a generational change in how you deliver healthcare services. How are you going about this challenge?

A. To navigate the generational shift, a comprehensive talent strategy focusing on upskilling current staff, reskilling those in evolving roles, and attracting new talent is essential. Initiatives like continuous education programs and digital learning platforms offer current employees the opportunity to enhance their skills in emerging technologies such as AI and data analytics.

Simultaneously, we're working on clear career pathways and cross-functional training programs for caregivers transitioning to new roles, ensuring the workforce remains versatile and adaptable.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Enterprise Taxonomy: 



