In this episode, Jason Nadal, BESLER’s Information Security Officer, and Eric Englebretson, BESLER’s Director of Information Technology, join us to discuss current trends in healthcare and AI.
Learn how to listen to The Hospital Finance Podcast® on your mobile device.

Highlights of this episode include:
- AI and current events and what this means for healthcare finance, vendors, and hospitals
- Broader categories of AI
- Risks of using AI and how we should manage that risk
- How we can train end users on what is and what is not safe behavior with AI
- What we should know about copyright/theft of intellectual property
Kelly Wisness: Hi, this is Kelly Wisness. Welcome back to the award-winning Hospital Finance Podcast. We’re pleased to welcome back BESLER IT leaders Jason Nadal and Eric Englebretson. In this episode, we’re discussing current trends in healthcare and AI. Welcome and thanks for joining us again, Jason and Eric.
Eric Englebretson: Thank you again for having us.
Jason Nadal: It’s good to be on.
Kelly: All right. Well, let’s go ahead and jump in today. So, there’s a lot of talk about AI and current events today. What does this mean for healthcare finance, vendors, and hospitals?
Jason: I think first we need to talk a bit about what AI even is and the kinds of things it can be used for. AI stands for artificial intelligence, but that’s really a blanket term for any assistive technology that uses computers. At BESLER, we’re using it for a great many things, and it’s helped assist in a lot of the roles that we have, whether those roles are extremely specialized or more broad. We use it internally for assistance. For example, writing code for programmers or helping to automate security reviews is a huge time saver for me. We like to talk about force multipliers here, meaning tools and technologies that help multiply what can be produced, both in quantity and in quality. So AI can be used to more quickly search for answers. You can create frameworks and example slides, notes for presentations, marketing material, images, but there’s a whole lot more you can do with it in terms of development work. Like all tools, AI can be used for great improvements or to help create representations of new ideas, but it can also be used by bad actors. Just as we’ve found there are ways to improve efficiency for our employees, bad actors can reap those same benefits, unfortunately.
Eric: And I really, really love the term force multiplier because when AI is used well, that’s the benefit that AI is supposed to bring to us. It doesn’t replace people. It should make them more effective. And it can do that by helping to automate some of the tedium out of more mundane tasks. It can spot patterns and anomalies that we ourselves might not catch. And at least for me on the IT side of the house, it’s great for doing things like analyzing error messages where other avenues have turned up short. We’re already seeing new tools for hospitals and healthcare overall to improve patient outcomes. It’s helping a lot with diagnostic accuracy, and it’s improving back office and administrative workflows.
Kelly: Yeah. I mean, I can say that it’s helped a lot on the marketing side too. We’re starting to use it for different things too. So yeah. It’s definitely helpful in so many different areas. So, AI is a term that means a lot of different things to different people. So can you share some of the broader categories of AI?
Eric: Sure. As Jason mentioned, artificial intelligence encompasses a really wide range of technologies designed to simulate human intelligence in machines. I’m really just going to focus on one here, which is sometimes referred to as artificial narrow intelligence, or ANI. We could probably turn the more specialized forms of AI into a whole podcast in and of itself, but artificial narrow intelligence is the type of AI most people are going to run into on a daily basis. It’s the most prevalent form, and it handles specialized, single-domain tasks. So, think of something like Siri or Alexa. Those are the things this is going to be perfect for, along with the website chatbots that we have also become used to interacting with. What I think are really exciting are the possibilities for healthcare, like I touched on a minute ago. You can feed data on a patient into an AI model trained on healthcare data to help doctors and staff reach a diagnosis. And because of the computational abilities that AI brings and the ability to feed it so much data, an AI designed for diagnosis can spot patterns in healthcare data that humans would miss. That’s a limitation that is natural to us; it’s not the way we process data. But it’s perfect for AI, and joining those two systems is where we really see a lot of benefit.
I’m just going to cover briefly how some AI gets trained, and I’m going to oversimplify it just a bit because, again, we could turn it into a multi-hour podcast arc, but I’m just going to cover two key ways to train AI for different purposes. One is called supervised learning. That’s where the algorithm is trained on labeled data and learns to make predictions based on input and output pairs, with the goal being to predict outcomes for new data. You’ll see this for things like spam detection. So you can say, hey, this piece of data is a good piece, this piece of data is a bad piece. That’s where those labels come in. You see it for things like weather forecasting or economic or pricing predictions, just to name a few. It’s conceptually simple, but it requires labeled data, which requires a lot of human effort upfront, because somebody with the expertise has got to say, “Yes, this is good. Yes, this is bad.” And again, I’m oversimplifying a little bit. So, it takes a lot of upfront effort. And then you’ve got what we call unsupervised learning. That’s where you take your data, throw it all at an algorithm, and it starts to identify patterns in that unstructured, unlabeled data. You’re not giving any specific instructions. You’re saying, “Hey, look at this. What do you see?”
And the goal here is to really glean insights from large volumes. You’re going to see this used for anomaly detection and recommendation engines. So, think of the things you’ll see on maybe Amazon: “People that bought this widget that you just bought also bought XYZ,” or, “Based on the movies you’ve watched, you’d like this other movie.” And you can apply that to things like medical imaging: “Oh, we’ve seen this type of thing before, and that’s often coupled in patients that have this.” So unsupervised learning is computationally very expensive, but it doesn’t take as much effort upfront, and there are some great benefits that can potentially be gleaned. I’m really super excited about AI being applied to things like diagnostic imaging, like I mentioned, where it can pick up little details that humans might miss, just because we can’t be everywhere at all times, we don’t have that much computational power, and possibly our eyes simply can’t see something on such a tiny level. But it gets pointed out to a radiologist and a care team, and they can then follow up on something like that.
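To make the unsupervised idea concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, not anything actually discussed in the episode) that flags unusual rows in unlabeled, synthetic billing-style data and leaves the follow-up to a human reviewer:

```python
# Minimal sketch of unsupervised learning: anomaly detection on unlabeled data.
# All data here is synthetic; a real healthcare use would require far more rigor.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Unlabeled "claims" rows with two numeric features (e.g., charge amount, length of stay).
normal = rng.normal(loc=[5000, 4], scale=[1500, 1.5], size=(500, 2))
outliers = rng.normal(loc=[40000, 20], scale=[5000, 3], size=(5, 2))
claims = np.vstack([normal, outliers])

# No labels are provided; the model learns what "typical" looks like on its own.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(claims)  # -1 = anomaly, 1 = normal

print("Rows flagged for human review:", np.where(flags == -1)[0])
```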
Jason: Yeah. That’s a great point. As Eric discussed, there’s a lot of excitement about AI taking these huge amounts of data and whittling them down to a basic conclusion. It’s really difficult for the human mind to hold a lot of different data points simultaneously and determine a risk factor or an outcome. So, take a medical record. Think about all the data points involved in coming to a determination of whether a patient will be readmitted to a hospital, for example. There would be a large financial impact if key factors are involved in a patient being readmitted, and currently it’s pretty difficult in some cases to see this in advance. AI can give you the tools to see similarities that you might not realize are there. What I find really fascinating is that there are very large-scale models trained on a huge portion of the internet, things like ChatGPT, for example, but there are also more specialized models, sometimes called mini-models. I suspect we’ll see more specialized models focused on healthcare in the future that don’t carry that huge computational cost, or the huge amount of unrelated training data that won’t really help you determine the outcomes for your particular situation.
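For the supervised side, here is a comparable hypothetical sketch of the labeled-data workflow described above, applied to something like readmission risk; the features, labels, and data are invented for illustration, not a clinical model:

```python
# Minimal sketch of supervised learning: predicting readmission from labeled history.
# Features and labels are synthetic stand-ins, not real patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [age, prior_admissions, length_of_stay, num_chronic_conditions]
X = rng.normal(loc=[65, 1, 5, 2], scale=[12, 1, 3, 1], size=(1000, 4))
# Labels (0 = not readmitted, 1 = readmitted) would come from historical records.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 1, 1000) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
# In practice, the model's scores would flag patients for a human care team to review.
```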
Kelly: Yeah. I agree. That’s definitely coming. So, you mentioned some risks for AI. Given that there are risks of using AI, can you share some of those with us and how we should manage that risk?
Eric: So, I think the first and most obvious is simply the risk of leaking PHI into another company’s model. Like Jason just mentioned, the AI gets trained on these large swaths of internet data, but it’s also going to get trained on the questions asked of it. And so, if you’re putting PHI into those questions, well, that gets retained by the ChatGPTs, the Claudes, the Copilots. People need to really understand the seriousness and think about the questions they put into these different types of chat or AI agents. These are third-party models hosted by the companies behind ChatGPT, Claude, and Copilot, so we’ve got Anthropic, we’ve got OpenAI, we’ve got Microsoft. Like I said, they absorb the questions that are put into their models and add them to their training data. Of course, what they want to do is continue to improve over time. But again, that puts the data at risk because that data gets into these models. I think the safest method, like Jason mentioned, is going to be working with local models or the mini-models that are hosted directly by a hospital system or your company, and only pasting in data that’s been deidentified. And even then, it gives me the heebie-jeebies a little bit, because there are some types of data that are hard or outright impossible to deidentify. And again, once that data goes in, there’s no way to get it back out. It’s a really serious, serious thing, and I’m definitely going to punt this over to Jason for the risk management piece.
Jason: Sure. So, let’s talk about what the actual risks are with AI. Eric mentioned a really critical piece in terms of PHI getting in. We’re not at the point yet in healthcare where we’re seeing trends of AI actually running the show, making decisions in patient care, or controlling critical systems. At this point, AI in the patient care world might do something like scan an MRI or other diagnostic data to determine which patients should get a second look. There might be an indicator associated with a specific outcome, such as cancer, or indicators that there’s a billing or transcription error. The risks here in terms of your standard use case are low; there’s a human doing a second check and making the actual assessment. What you should be concerned about is where the data is and what the risks are of that data leaking. At BESLER, we’ve taken an approach of categorizing the use case of a component, product, or tool using AI. I think of our approach as a set of rings, like an onion. The outermost ring would be the least sensitive in terms of what data is being used with the AI in question. Think of marketing or publicly shared information about your facility or company; the AI might create a PowerPoint deck that highlights key strengths. It’s very unlikely that anything in that type of document would contain PHI or sensitive information, so this use case would likely be safe even if that data were to leak out or be used to train other AI.
On the other end of the spectrum, the very inner part of the onion, you may have patient data that you would never want out on the public internet, potentially used to train a model, with the potential of that model leaking the training data back out. So, while it might be great to ask AI about that readmission case for John Doe, born in 1968, it’s a pretty bad idea to approve asking a cloud chat interface about it. On the development side, we have something in the security world called CVEs, common vulnerabilities and exposures. For AI, we typically evaluate published stories about questionable uses of AI and the ability to break through security-gated behaviors. We evaluate where the training data comes from, if that’s something we can research, and how it’s used. We pay a great deal of attention to the privacy statements these different models publish about themselves and how a model is trained. What a company is allowed to do with your data is super important.
So, for example, if you’re using an AI tool to improve your headshots for marketing purposes or to spruce up your LinkedIn profile, you don’t want to inadvertently give permission for others to use your likeness for deepfakes. And that’s just the people using these tools ethically; simply having your data out there with some of these tools puts it at risk if people do unethical things with it as well. So, we take this broad look at known vulnerabilities. We look at country of origin. We look at privacy statements, and especially whether or not we can have a BAA with the company that’s hosting, developing, or otherwise providing access to the AI. In general, at BESLER, we have BAAs set up with our vendors, but it is especially, especially important with AI. For the most sensitive data, you can still use AI, but you likely want it hosted and locked down within your environment. For this purpose, as Eric mentioned before, local models, which are models that you host yourself and that cannot call out to the internet, are a great tool.
So, circling back to the training: if the training data for your use case is PHI or confidential information, you need to assess how that data is protected. Does your legal department deem it possible to deidentify the data? That may be simpler for text-based data, but it’s really challenging for imaging data, which is typically stored as DICOM, or Digital Imaging and Communications in Medicine, files. These images can contain patient identifiers as text on the image itself; think of ultrasounds, for example, where the patient’s name is up in the corner. It can also be tricky to determine whether the image itself has enough detail to identify a specific individual in the future, and that risks not meeting the expert determination or safe harbor methods of deidentification under HIPAA. So, assuming the data has not been sufficiently deidentified, there are three steps you can take to protect it. First, as we talked about, use a local LLM. That means your data doesn’t enter a cloud-based LLM dataset where other entities may use it; either it’s hosted on your organization’s servers, or it’s walled off in such a way that only your organization can use it. Second, and this is more of a legal protection against liability than an actual technical protection: some companies that host LLMs will operate under a BAA, so make sure you get one and use those whenever possible. At the time of this podcast, at least OpenAI and Microsoft offer these. Both have restrictions, and you really need to thoroughly assess those against your organization’s specific needs.
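As a rough illustration of that deidentification step for text, here is a toy Python sketch; the patterns and placeholder tokens are invented, and a pass like this alone would not satisfy HIPAA’s safe harbor or expert determination standards without legal and expert review:

```python
# Illustrative only: a toy redaction pass run before any text leaves your environment.
# Real deidentification requires far more than regexes and must be validated against
# HIPAA's safe harbor or expert determination standards.
import re

PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before prompting a model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Patient DOB 03/14/1968, MRN: 00123456, callback 555-867-5309, readmitted twice."
print(redact(note))
# Only after a pass like this, reviewed by legal and privacy experts, would the text
# go to a model, ideally one hosted inside your own environment.
```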
Third, protect against exfiltration. This is really, really important and needs to be regularly reassessed, because trends in anything AI are moving so fast. That includes things like privacy statements and how vendors are using their data; you really need to stay on top of it. You don’t want your training data to become part of the model in a way that risks users accessing data where it’s not appropriate. Unless a user has appropriate access to see Jane Doe’s records, it’s not appropriate for the model to surface Jane Doe’s data, whether that data was used to train the LLM or to help discover whether Jane Doe has signs of cancer; you don’t want that information getting out. LLMs also have a concept of hallucinations, which show up as confident but incorrect, nonsensical, or off-topic responses when the model makes a mistake. And when choosing an LLM, look for cases where security researchers or bad actors have found exploits in the model you’re assessing for use. So, there’s a lot in there, but you really have to stay on top of constantly reassessing the AI that’s out there.
Kelly: Definitely. You gave us a lot to think about, both of you. Those were some really great tips and information regarding risk. So, what about different use cases? Can we make a one-size-fits-all process for approving AI technology?
Jason: Well, I think you can. The problem is that I don’t think you should. So, these use cases should really be evaluated separately. So, we’ve discussed these rings of use before. Those determine what type of AI would be best used for each use case. So, you can approve a tool like ChatGPT, for example, for helping to prepare a presentation, but you shouldn’t approve that same tool for an internal review of PHI or business logic regarding PHI, for example. It’s helpful to maintain a white list of approved products and models, but that white list should really be specific to each use case. So that’s where categorizing those use cases in rings really helps. So, take, for example, ChatGPT again. You can approve it for a general question-and-answer case but deny it for questions about sensitive company information and salaries perhaps. So, this should parallel your existing data management policy and in fact should be added as part of that policy because it’s very closely related. I would strongly caution against saying, “AI product X is just approved for use,” as a blanket statement without analyzing all of the data that it can touch.
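One hypothetical way to encode that kind of per-use-case white list is a simple mapping from rings to approved tools; the ring names and tool choices below are illustrative examples, not BESLER’s actual policy:

```python
# Hypothetical sketch of a per-use-case approval list, mirroring the "rings" idea.
# Ring names and tools are examples only, not an actual approved-product list.
APPROVED_AI = {
    "ring_3_public_marketing": {"ChatGPT", "Copilot", "Claude"},
    "ring_2_internal_business": {"Copilot"},      # e.g., under a BAA / enterprise terms
    "ring_1_phi_workloads": {"local-llm"},        # self-hosted, no internet egress
}

def is_approved(tool: str, use_case: str) -> bool:
    """A tool is only approved for the specific ring it was reviewed for."""
    return tool in APPROVED_AI.get(use_case, set())

print(is_approved("ChatGPT", "ring_3_public_marketing"))  # True
print(is_approved("ChatGPT", "ring_1_phi_workloads"))     # False: same tool, riskier data
```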
Eric: Yeah. Completely agree. Right now it’s really just too, I’m going to say, Wild West of an industry in AI to have a one-size-fits-all approach unless that approach is potentially, “Don’t use AI at all.” But honestly, I think that’s pretty risky too. But like it or not, AI is here to stay with us. And I think we can really harness what it’s able to do in the healthcare space to do something incredibly beneficial for patients, their outcomes, and others in the industry. Once things calm down and AI becomes more of a commodity, it might be possible to have a one-size-fits-all approach, but right now my magic eight ball says, “Cannot predict now.” It’s really best to work with your security teams and legal counsel to evaluate each AI product and its use case as necessary, like Jason mentioned.
Kelly: Yeah. That does make a lot of sense. I can’t imagine a one-size-fits-all right now for sure. So how can we train end users on what is and what is not safe behavior with AI?
Jason: It’s a tricky question. Eric explained a lot about how training an AI model works, and it’s very important to get that information across to your end users as well so they have a basic knowledge of what goes into an AI model. The deeper an understanding your employees have of how information gets into the models and how it’s used, the more likely they’ll be to understand how to safeguard your data. It takes some of the magic out of AI, because at times the outcomes do seem impressive enough to look like magic. We have an approach with cybersecurity where we train in monthly snippets, paying extra attention to attacks in the news. Some of that training involves how to recognize things like deepfakes. We’re finding that the world of AI is moving way faster than standard cyber attacks, so our training is being supplemented to keep up with that as well.
Eric: Yeah. It’s the speed of change here that really concerns me the most. We’ve got companies throwing AI into their products regardless of whether or not they need it. You’ve got AI toothbrushes and AI-enhanced printing experiences. And though my tone might sound like it, I’m not really knocking that; I’m just pointing out an example of how prevalent the technology has become in really only a few years. And because AI is the new trendy thing, you have a lot more potential holes in the dam for data to leak out than we have fingers to plug those holes. Jason is spot on here. If users understand how data makes it into AI models, then there’s really more opportunity for them to pause and ping the security team with a, “Hey, my HP printer is trying to enhance my printing experience. What does that mean? What is it doing with my data?” And when they’re willing to ask those questions, then all your employees are working in the same direction, and honestly, we’re all better off, not only in security, but as a whole. On the security side, again, the reason we’re having people ask, “Hey, is this AI approved for this purpose?” is not that we’re trying to ruin anybody’s favorite productivity tool. At the end of the day, we want everybody to turn in their best work, but data security has got to be paramount. And that starts with educating our team members to ask questions while we supplement with the training that Jason mentioned.
Kelly: Oh, definitely. Y’all do a great job with security training here. And I know that we’ve even come to you with a lot of requests about AI, but I know it’s really all about the safety. So, we see a lot of cases in the news, and this kind of goes with– we’re going to go towards a different kind of path here. So, regarding copyright or theft of intellectual property as a way to make AI models better, what should we know about this?
Jason: So, this is really a hot topic in the news right now. It takes a huge amount of data, as we said, to create what are known as frontier models, kind of the multi-tool of AI models. Most of these companies are extremely private about what data specifically is used to create a model. That’s understandable, because for most products, the business logic is the most proprietary part of what makes a product unique. For AI models, the voice or tone and knowledge of the model is the key differentiator between a good model and an average one. A lot of the public internet, especially social media and public forums like Reddit, has been used, but publishers and authors have alleged that copyrighted material has been used to train models as well: books, magazines, material that isn’t freely available on the internet. Recent allegations against Meta, for example, claim that key copyright markers were explicitly removed so as not to reveal that copyrighted source material was used in the model.
So, at BESLER, we’ve taken a stance of not knowingly using models based on stolen material. However, for most models, you’re not going to be able to truly determine the source of the data. What isn’t known is what the downstream effects would be for companies that are using such models; this hasn’t been tried in court yet. I would suspect that plaintiffs will focus on financial damages and the originality of the works. Defendants could try tactics of, “Hey, this is fair use,” or, “It’s parody. It’s a derivative work. Enough has been changed that it’s not the same as the original.” Given the substantial amount of AI being used in companies across the board in every sector, with many companies using the output of such models, it seems like there’s a strong likelihood of some sort of large-scale settlement.
Eric: And really, this is another tough one. We could probably turn the legal and ethical concerns about AI data sourcing into its own multi-hour podcast episode. And because AI models are proprietary, like Jason mentioned, and giving away a model’s data sources could potentially give a competitor a leg up, we are likely never to know the entire corpus of training data that went into some of these models. Again, we’ll keep looking at the models, but we have to assume innocence here until guilt is proven. The safest path overall is going to be using local models with your own data, but for a general-purpose utility like Grammarly, ChatGPT, Copilot, or whatnot, that’s just not possible. You’re safest right now, I think, sticking with the major players rather than Joe’s excellent AI service, but I’m not a lawyer, nor do I play one on TV. So again, for these types of questions, listening to people on a podcast is great, but as always, you’re going to be best off consulting with your in-house legal counsel where possible.
Kelly: Yeah, definitely. Thank you for sharing all that with us. That was very interesting and things to kind of keep in mind there. So, what about AI use and products we already use? Are there any concerns there?
Jason: Well, there are a lot of products and companies eager to include AI in their products and services, as Eric mentioned before. Most of the time, vendors are eager to share what they’re including as a competitive advantage, but it’s important to stay on top of what is being added to sites and products you’re already using. It’s even more crucial to do this with products that contain your sensitive data, and as always, perform regular security reviews of how that data is being used and work that into your data management policy.
Eric: And I’ll say that for us, one of the first ones we noticed that was an immediate, huge concern was Adobe Acrobat. They tossed in some AI features almost unannounced; you load up Acrobat after it updates, and it’s like, “Hey, we’ve got AI in our product now.” So much PHI comes into us via PDFs, and we for sure don’t want that data shoveled into Adobe’s systems. I’m sure for a lot of people, and I’ve used this myself, summarizing non-PHI PDFs is incredibly helpful. But for us to have that just running across the board was way too much of a risk. So, we worked with Adobe to shut that down for our employees. But like Jason said, when you notice new AI features in a product, loop in the security team. Better safe than sorry here.
Kelly: Most definitely, it’s good advice. Well, thank you both for sharing your insights with us on this very important and ever-changing topic of AI. And if a listener wants to learn more or to contact you to discuss this topic further, how best can they do that?
Jason: We have a wonderful contact form on our website. Hit up www.besler.com. There’s a contact form there. And we will be more than happy to engage with you.
Kelly: Awesome. Thank you for sharing that. And thank you all for joining us for this episode of The Hospital Finance Podcast. Until next time…
[music] This concludes today’s episode of The Hospital Finance Podcast. For show notes and additional resources to help you protect and enhance revenue at your hospital, visit besler.com/podcasts. The Hospital Finance Podcast is a production of BESLER | SMART ABOUT REVENUE, TENACIOUS ABOUT RESULTS.
If you have a topic that you’d like us to discuss on the Hospital Finance podcast or if you’d like to be a guest, drop us a line at update@besler.com.
