In this episode, Aaron Maguregui, Partner at Foley & Lardner LLP, shares his expertise in shaping policy and standards at the intersection of AI and healthcare, with a focus on HIPAA compliance and AI solutions.
Highlights of this episode include:
- Key legal frameworks governing the use of AI scribes in healthcare
- How CFOs should evaluate vendor contracts with AI scribe providers
- Legal risks arising from data use and model training
- How AI scribes impact fraud and abuse compliance
- Litigation and liability risks of AI scribe adoption
- What regulatory developments CFOs should track
- How CFOs can partner with legal teams to de-risk AI scribe investments
Kelly Wisness: Hi, this is Kelly Wisness. Welcome back to the award-winning Hospital Finance Podcast. We’re pleased to welcome Aaron Maguregui. Aaron is a partner at Foley & Lardner LLP, where he leads work in health technology and digital health. He focuses on artificial intelligence governance, patient engagement, privacy compliance under HIPAA, CCPA, and CPRA, as well as telehealth and major technology platform transitions. With over a decade of experience, he has advised startups, Fortune 500 companies, payers, pharmacies, and hospitals on AI compliance, data risk, and digital health integrations. He also chairs the American Telemedicine Association’s Artificial Intelligence Committee, helping shape policy and standards for AI in healthcare. In this episode, we’re discussing HIPAA and AI Solutions. Welcome, and thank you for joining us, Aaron.
Aaron Maguregui: Hey, Kelly. Thanks for having me on.
Kelly: All right. Well, let’s go ahead and jump in. So, what are some of the key legal frameworks governing the use of AI scribes in healthcare?
Aaron: Yeah, great question. The major framework that everyone immediately goes to is HIPAA, right? With respect to HIPAA, everyone is fairly aware of the Privacy Rule and the Security Rule and how they apply in the AI context. That part is straightforward because it means encryption, it means audit controls, business associate oversight. Those are the table stakes. What companies and vendors and developers are really starting to realize is that HIPAA is just the starting point. State privacy laws like the CCPA or Texas’ new privacy law are adding stricter obligations, especially around de-identified data. The FTC is also in the hunt and not to be forgotten. And we’re starting to see, and I say “starting to see,” but it’s been going on for quite some time now, a proliferation of state consumer privacy laws that sweep in medical record data and health and wellness data alongside HIPAA. So when you think about AI and the power it has to put different data sets together, you’re really pulling in a whole host of laws, which makes compliance quite perplexing. It’s definitely an evolving landscape. Folks thinking about this should sit down with a privacy lawyer or their compliance team and think through the various regimes in play when you’re taking a bunch of data sets and using them to train a product like an AI scribe.
Kelly: Wow. Well, thank you for sharing those legal frameworks with us. So how should CFOs evaluate vendor contracts with AI scribe providers?
Aaron: The contract is really the first place to protect your organization. You shouldn’t start any type of work or engagement without getting a contract into place; that goes without saying. But think about the components of the agreement. In the healthcare space, you’re typically going to find yourself with a Business Associate Agreement in play, and what that means is that HIPAA is going to apply. So we want to take that Business Associate Agreement and make sure it covers any of the data that’s going to be at play here. That really means compliance with the Privacy Rule and the Security Rule, and spelling out what happens if there’s a breach or an OCR investigation. The next thing we think about when you’re contracting with an AI vendor is indemnification, liability caps, and cyber insurance coverage. Stepping back, it sounds negative to begin a relationship thinking about the worst things that could happen, but it ends up being important: understanding who’s on the hook for what, and what has to happen if some kind of claim or liability arises.
The last thing I’ll point out here, and it’s something that’s becoming really prevalent, is the data rights involved in AI, and this is happening a lot on the vendor side. Vendors really need that de-identified data in order to build and train their models. On the hospital or health system side, they look at that data as their intellectual property; beyond the protected health information and personally identifiable information it contains, it’s really an asset for these data owners. So there’s a little bit of friction when a vendor that wants to use data to train its own model goes to a health system or health plan and asks, “Is it okay if we use de-identified data to train our model?” That’s a contract-by-contract approach, but these discussions have to happen upfront in order to understand the dynamics of the relationship. There are a lot of reasons to allow the use of de-identified information to train models, and there are a lot of reasons health systems and health plans may want to avoid it. So really, it comes down to understanding your contract rights. In short, your contract should speak to and allocate as much of that legal and financial risk as possible, so there are no uncertainties.
Kelly: Yeah. That makes a lot of sense. Thanks for explaining that for us. So, what legal risks arise from data use and model training?
Aaron: The secondary use of data is really the biggest risk area. I think everyone, both on the vendor side and the buyer side, is in lockstep with respect to the use of data for purposes of the services. But it’s that secondary use of data, that secondary use case, that’s always of critical importance. Piggybacking off the last discussion: on the vendor side, they want to use the de-identified data to improve their model, to showcase their performance, to take the analysis and the analytics derived from that information and build a better product. And on the buyer side, the buyers see that as their intellectual property. So the risk comes in with these unauthorized uses of data, or of derivatives of data. On the vendor side, we say, “Look, you have to explain to your health plan clients or your hospital system clients exactly what it is you’re looking to do with the data, and also understand where they’re coming from.” They don’t want a vendor running next door to their competitor and saying, “Hey, guess what? We have a whole model trained on our experience with XYZ hospital system.”
So getting out in front of that, and having conversations so that everybody’s on the same page as to what the data risks are, really minimizes those risks. On top of that secondary use case risk, there’s this concept of AI-generated documentation in the billing area. There’s added exposure anytime you’re working with a government-sponsored program or even a commercial insurer, and it’s everybody’s favorite topic: the False Claims Act. Given just how much data we’re able to ingest, use, and access, we find ourselves in this new world with so much data and so much analysis at our fingertips that I think you’re going to start to see more False Claims Act exposure there. For the folks listening, I think that means that, yes, there are apparent efficiencies and cost savings that AI brings, but you also have to think about how you implement them and the regulatory risks associated with them.
Kelly: Yeah, definitely. Definitely keep that in mind. So how do AI scribes impact fraud and abuse compliance?
Aaron: AI scribes can help by producing more complete records. They can also create risk by unintentionally upcoding encounters, and it really is unintentional, right? We’re looking to find out as much about patients and the patient experience as possible, and in doing so, we’re pulling from all sorts of data sets and encounter information. What we get is this robust data record, and that gives us more insights. I think what regulators are watching for is documentation inflation. The OIG has already made it clear that organizations are responsible for coding integrity, regardless of whether AI is in the loop. So I think what we should be planning for here is routine auditing and compliance monitoring as part of the cost model. And I can see health plans not only wanting to understand the outcomes that the coding and audit reviews have produced, as has been occurring recently, but also how the models behind those coding and audit decisions were trained and weighted for purposes of producing their services. Red flags would obviously be sudden jumps in revenue, but it’s certainly something to think through, so you can put safeguards in place and avoid the unwanted scrutiny of a health plan audit or a state attorney general’s audit.
Kelly: Sure. Nobody likes the word audit, right? So definitely want to prevent that. So, what are some of the litigation and liability risks of AI scribe adoption?
Aaron: Yeah. So, kind of working backwards here, we always come back to that concept we’ve been hearing more and more about, which is human-in-the-loop. The liability scenario is obviously AI producing a flawed note that leads to some sort of clinical error. That raises a lot of different questions that I don’t think we have answers to yet, but that human-in-the-loop component really informs us of what the litigation and liability risk is. The unfettered, or maybe the unchecked, trust in the AI scribe that writes a medical note, because we’re short on time or trying to move our day along, is really where that liability creeps in. So that human-in-the-loop component is the best mitigator we have, especially when we’re thinking about administrative tools. And I alluded to this a bit earlier, but the question of who is liable is not settled, right? If history tells us anything, I think the answer is that everybody could be liable, anybody that has deep pockets: the hospitals, the health systems, and the vendors that are developing these tools.
The liability landscape is vast, and there are different arguments for the different parties involved. When I think about that, it brings to mind the malpractice risk and the consumer protection risk. Obviously, the vendors are going to say that the physicians signed off, and that when they did, they were certifying that the medical record was complete, true, and accurate. The physicians will say that they were really looking at the ultimate outcome in the medical note and missed that, say, a blood pressure reading was off, or that something was ingested and displayed incorrectly. So I think the financial reality is that adopting AI scribes right now carries a bit of an unknown risk from a liability allocation perspective, but having tight guardrails, not only from a policy perspective but from a risk-shifting perspective, and by that I mean really thinking through your insurance coverage, goes a long way.
Kelly: Yeah. Thank you for explaining those risks for us. So, what regulatory developments should CFOs track?
Aaron: Yeah. On the state side, every other week we find out about a new law being proposed or rolled out, so you have to think through how the states are treating AI. I said a while back, in an article I was writing, that AI could become the new privacy, but with new clothes. What I meant was that privacy is one of the harder topics to tackle for a multi-state organization; if you operate in just one state, you only have one state privacy regime to deal with, but we now have, I think, somewhere in the vicinity of 15 to 17 comprehensive privacy laws, in addition to the more basic state privacy laws already in existence. I think you’re going to start to see the same thing with AI, and you already have. Additionally, the FDA is looking closely at whether certain documentation tools cross into clinical decision support, which would trigger the device regulations or the software-as-a-medical-device regulations.
Having a strong risk management framework and understanding what your product does and how it uses and discloses data will go a long way toward understanding your regulatory requirements. From my perspective, tracking the states is a really solid way to understand where the regulatory environment is at the moment. We don’t have a federal AI law in place. That may happen in the next few years, and it’s certainly something to hope for, so that some of these bigger multi-state, cross-country organizations get a little bit of clarity in terms of how to scale a 50-state model.
Kelly: No, definitely. There’s a lot to keep in mind there. So how can CFOs partner with legal teams to de-risk AI scribe investments?
Aaron: It’s a shameless plug for the attorney, right?
Kelly: Okay. [laughter] Yeah.
Aaron: Get the attorney in, and get the attorney in early. I work with a lot of early-stage startups, and they want to know what success looks like in the early contracting days of an agreement with their first partnership, their first client, and I tell them, “It’s really transparency at the SOW stage.” And the same is true on the hospital system side, or even at the vendor. The CFO should understand the compliance costs, the audit costs, and the training and monitoring costs, what those entail and what they look like. And finally, CFOs should be part of the governance process. They’re a great asset in the negotiating room because they understand ROI and what really moves the needle from a cost perspective. ROI is one of those fun terms that go hand in hand with AI, but defining ROI and evidencing ROI are not always the easiest of activities. So getting the CFO in there to leverage his or her skill set in understanding where the costs, and really where the true savings, of AI can be realized is a great tool for any company to use to monitor its success with AI.
Kelly: Makes a lot of sense, Aaron. Thanks. And thanks for sharing your insights with us on HIPAA and AI solutions. If a listener wants to learn more or contact you to discuss this topic further, how best can they do that?
Aaron: So, I’m always happy to answer any emails I receive. And my last name’s a little bit longer, so I’ll spell it for you. But it’s amaguregui@foley.com, amaguregui@foley.com. I’m also really active on LinkedIn and so happy to connect and always happy to talk shop.
Kelly: Great. Thank you so much for joining us. And thank you all for joining us for this episode of The Hospital Finance Podcast. Until next time…
[music] This concludes today’s episode of The Hospital Finance Podcast. For show notes and additional resources to help you protect and enhance revenue at your hospital, visit besler.com/podcasts. The Hospital Finance Podcast is a production of BESLER | SMART ABOUT REVENUE, TENACIOUS ABOUT RESULTS.
If you have a topic that you’d like us to discuss on the Hospital Finance podcast or if you’d like to be a guest, drop us a line at update@besler.com.
