Governing the Unbound: SCAI's Role in the Future of Artificial Intelligence (2025)
Video Transcription
All right, Laura, thanks so much. This evening, Tuesday, March 18th, welcome to a webinar on a paper published today in JSCAI. The paper is titled "Governing the Unbound: SCAI's Role in the Future of Artificial Intelligence," and it is part of a JSCAI artificial intelligence issue. Good afternoon and good evening. The authors with us this evening are Drs. Afnan Tariq, Kusum Lada, Dipali Tukai, and Raghava Velaghaledi. I'm Curtis Rooney, your moderator and SCAI's Vice President for Government Affairs. The paper is the product of SCAI's Advocacy Committee. Let me ask you all to introduce yourselves, starting with Dr. Tariq: tell the audience where you practice, what kind of setting it is, and any other details you'd like to share.

Thanks for having us, Curtis, and thanks for putting this webinar together for the SCAI membership. I think it's an important and timely topic, one that's just reaching its peak. My name is Afnan Tariq. I'm an interventional and structural cardiologist in Southern California, specifically Orange County. I have a faculty appointment at the University of California, Irvine, and a strong interest in interventions, in general cardiology, and in the appropriate utilization of technology broadly, AI being one of those technology tools. I'll hand it off now.

Dr. Lada? Hello, good afternoon and good evening. Thank you, Curtis, and thanks for arranging this webinar; thank you to SCAI. My name is Kusum Lada. I'm an interventional cardiologist with Sutter Health in the Bay Area, California, and a member of SCAI's Board of Trustees. I think this is a really exciting time as these technological tools get integrated into interventional cardiology.
However, it is really important for all of us to participate in this integration so that we can provide the best care for our patients and support our physicians. Thank you.

Dr. Tukai? Hi, good afternoon and good evening, wherever you are. Thanks, Curtis, for putting this together; it's great to be here in these times. I'm Dipali Tukai, an interventional cardiologist practicing in Southeast Houston, though I also do a fair amount of general cardiology. I have been with the SCAI Advocacy Committee for about nine years now, and I have enjoyed working on various aspects of regulation and contributing as a group toward structuring things. AI is such a new kid on the block, with so much potential, that harnessing that potential in the right direction is very important for the people who know the field best, and that happens to be interventional cardiologists. With SCAI being the lead society for interventional cardiology, it seemed timely and appropriate for us to write this paper and get the conversation started. Excellent, thank you.

Dr. Velaghaledi? Hi, everybody. Thanks, Curtis and SCAI, for putting this together. My name is Raghava Velaghaledi. For the past few years I've been an interventional cardiologist based in Boston, and I'm also an investigator with the Framingham Heart Study. Like Afnan, I have an interest in technology. One of the things that spurred me to join the Government Relations Committee at SCAI is that I have served as a mentor or judge for a variety of biomedicine hackathons organized by MIT, and it always struck me how much discordance there is between what the technology folks think should happen in medicine and what the doctors think should happen in medicine.
That was a big impetus for me to say: we should be taking charge of our destiny and of what influences our work life, and that should apply to AI-based technologies too. So here I am.

Thank you very much; that's really helpful. My first question was going to be what drew you to this, and I think you've described it. Dr. Tariq, did you have a similar experience?

Certainly a similar experience. I think we all get into cardiology and interventional cardiology because we like technology. We like tools, whether that's the newest, fanciest stents or valvular intervention, and we like understanding how to use them appropriately. But as Raghava mentioned, we're usually consulted only as end users; very rarely are we involved before the engineers bring the product to us and ask how we can use it. In my background, I had a unique opportunity to briefly go into building new medical devices through a CRF innovation fellowship, and that was really needs-driven innovation. Understanding that framework, how to identify unmet needs, is really helpful with technology. We have so many needs in interventional cardiology, and we have emerging tools; understanding how to apply them effectively, safely, and with maximum impact is the goal of what we should be trying to achieve.

Excellent. Dr. Lada, what's your experience? What drew you to writing on this subject?

I think some of the image interpretation: overlays on your imaging, like a coronary angiogram or intravascular imaging, or analysis of coronary flow during procedures. But in the clinic too, it can make you more efficient through voice recognition, getting the direct, live experience with the patient into your chart, which you might otherwise forget.
Eventually it can be helpful for your patients' care as well as for the physician. Non-clinical work is time-consuming and can be a big obstacle to providing the best care and enough time to patients.

Dr. Tukai, why did you volunteer to write this paper?

Primarily, as has already been mentioned, this is something new, and the people developing it, the engineers and programmers, truly come from the stance that it can solve everything, that it's the key that opens every locked room. As Raghava mentioned, we know as end users of that technology that this is not necessarily true. If there's A and there's B, C is not necessarily your answer, which is what algorithms usually assume. There's a lot of thought process involved, and pointing that out matters: once you have this information, that thought process needs to go with it, and over and above that, a conversation with the patient. You can't just have a program telling somebody, this is what you have, this is what you need to take, and you're done. That doesn't go well with patients. They want to know: why this, why not that, how did this happen? That's something algorithms can't deliver. So how do we use AI technology in conjunction with, rather than as a replacement for, the clinical thought process? That's what drew me: to contribute and help shape the process as it develops, rather than come in ten years later and say, oh, this doesn't work, we should have done it this way.

Staying with you, do you have experience with AI, either personally or in your practice?

My experience with AI is using the currently available technologies for voice recognition and drafting your notes.
And you already see, when electronic medical records are marketed to you, they basically say this is going to be a perfect note for you. Then once you go through it, you realize: wait a minute, this can actually generate medico-legal liability for you. You have to go through it again and correct it. I think that's a very simple example of why this whole process needs to be guided by the people who use it and understand it, rather than by people who believe they can fix a problem they see in a field that is not their primary field.

Excellent. Dr. Lada, what's your experience?

Similar. When patients send you a message, what we call in Epic a MyChart message, the AI will generate a draft reply. You definitely need to go through those drafts; they can sometimes be striking, and sometimes bury you in information. If a patient asks about certain medications, it will say, okay, this can cause bleeding, and so on, but below that it may add unrelated non-cardiac medication information. So you have to use it wisely, with your own eyes on it. One thing we all know, and as Dipali said: the machinery cannot replace the human being. This is just a new tool, and we need to use it to improve patient care. But we have to be very cautious in how we use it, because in the blink of an eye, especially in a complicated and intricate field like interventional cardiology, we can harm the patient. A machine is still a machine. It doesn't take any responsibility, and it has no empathy; it's not going to cry if your patient dies. So it's important that we use these tools to provide the best and most efficacious care to our patients.
But we have to be very careful in implementation. That was also one of the reasons we wanted to be ahead of the game and let the leaders of this group, this elite group at SCAI and in the interventional community, guide how we implement AI in our practice to provide the best care to our patients.

Dr. Tariq, I saw you smile when Dr. Tukai was talking about liability, and I know you're a lawyer. I wonder what your experience is.

I'm the only one who smiles when you use the word liability. It's funny, the incentives we have that drive us to deviate from what you might think is optimal care for the patient. You might just want to have an open, honest conversation, but the perceived threat of medico-legal liability, through too much or too little information, is always going to be there in the background. Coming back to first principles: medicine is one of the original professions. There's a reason physicians are allowed to self-regulate; it's because you develop the ability to regulate your peers through professional development over time. That's why we're allowed to have medical boards, why we have these special exemptions, why medicine is in a very unique place that I think off-the-shelf AI really can't approach, nor at this point should it. That may change, or evolve, let's say, the traditional provision of medicine. But that's where a professional society like SCAI comes in, taking leadership, saying we are here to be the voice of our members and of how we deliver optimal care. You and I probably both know the case law on this; that was a long time ago, but those were the days when you could say this is a profession that is a calling, a lifelong calling to which you aspire.
It's not about filling out charts or just serving as documentation or a license; it's a calling to provide care for patients, and that's what the profession was built upon. The more we can return to that, the better off we are. That's more of a philosophical case, but in particular, when we use technology, whether that's stents, balloons, a TAVR valve, or AI, it's about understanding the patients in whom you're using that technology and how to use it appropriately in the service of better patient care.

I agree. Dr. Velaghaledi, you mentioned hackathons. Did AI come up in that experience, or was it completely different?

No, those were more general-purpose hackathons, where a variety of things were looked at in terms of healthcare technology: anything from something as simple as making a better nasal swab, acceptable for children being tested for influenza, all the way to a startup I once reviewed that proposed to go to gyms, take blood samples, run a panel of 50 biomarkers, and tell people how to live their lives better. You see that entire spectrum at startups and hackathons. My original entry into the AI space actually came from the research side of my life. I had used the precursors of current-generation AI, machine learning and natural language processing, for some of my research projects. Researchers in the audience may appreciate that, traditionally in medicine, some questions are better answered by experimental evidence.
So you do clinical trials. Some are answered by observational evidence, through studies like the Framingham Heart Study. But many questions require us to harness clinical records, and therein lies the problem, because the chart is a very difficult source of research information. To give you an example, I used natural language processing to extract ejection fractions from charts so that I could do a project relating ejection fraction to PCI outcomes. That's how I got started in the AI space about 10 or 12 years ago, and it slowly grew to involve other things. Now, with generative AI, there are models where, if you feed in your research papers, they'll write your thesis for you; that's been out for the past few months. These things are moving really fast. What I want the audience to understand is something Afnan already said: whether we like it or not, these technologies are going to hit us. From SCAI's perspective, what we want to do is advocate that whatever comes on board should bear two core principles in mind: one, the best possible patient care, and two, preserving the physician's autonomy to provide that best possible patient care. If we as a society, with the help of all our members, can put forth guidance based on those two principles, that will be a victory for all of us, clinicians and researchers.

Just to highlight what you're talking about: when we began writing the paper, we said the FDA had approved 100 AI applications. I looked it up for this webinar, and it's 1,000 by now. It's moving incredibly fast. With that in mind, I'm wondering what the upsides of AI are. We mentioned scheduling and some of the paperwork. Dr.
Tukai, if you don't mind, talk a little bit about what you think the upside is, and then we'll talk about the downside.

The upside, as everybody has mentioned, is taking a lot of non-clinical work off of you. In the office that could be your scheduling, your paperwork, your notes; potentially, at some point, you just talk and it picks up everything, enters your CPT codes, and even sends the medications out for you to sign, if you've stated them correctly. If done correctly, I also see a big future role in benefit verification and prior authorization; since the insurers already have everything on a virtual system, this can run from the physician's side straight into it. So I see a lot of upside in the office. In terms of what we do in the cath lab, I see a big benefit from better lesion assessment and better device assessment, be it a valve or a stent, in terms of dimensions and how you want to deploy it: guidance that can make the procedure more precise and less stressful, and hopefully shorten the time spent and the contrast used. So there's a lot of upside to this technology across the board, from the cath lab to the exam room, for an interventional cardiologist.

Dr. Lada, upsides?

I think it's very useful pre-procedure. It can give you robust, comprehensive clinical information on very complex patients, patients who have been in the system for 20 years. Going through thousands and thousands of chart pages invites human error, and records are scattered across different states.
AI has real potential in pulling together a cardiac review of a patient's information over a 20-year period, so you can make a good clinical judgment in that patient's care. In the cath lab, we're already beyond plain coronary angiography; we now do intracoronary imaging, but we want more than that. Now we're into plaque: what kind of plaque is it, what is its histopathology, and can we tailor stenting to that histopathology to prevent further acute MIs? And of course, valve area assessment and very precise stent placement. For post-discharge patients, we can also use these tools to deliver the best heart failure medications, integrating AI so that patients get timely, real-time medication guidance and we get the information back. Of course, we have to work out the human role in providing and regulating that, and how it gets to the physicians. But we can deliver a lot of best-practice care: making sure the patient doesn't miss dual antiplatelet therapy or heart failure medications, gets discharged appropriately, gets cardiac rehab, and so on. AI can explain to patients in real time why these are important and should not be missed.

Thanks. Dr. Velaghaledi, you mentioned the Framingham study. Do you see an upside for AI with respect to research like that?

Oh yes, absolutely. Any data analysis, data gathering, or data distribution enterprise would benefit from some amount of automation, and maybe a lot of automation as time goes by. So there is definitely a role for artificial intelligence tools on the research side. Where I'm not quite sure there will be value is elsewhere.
I tend to think of research as a two-step process. First, you have an idea and you conceptualize it: how are you going to set up an analysis to address the question? That, to me, requires an experienced investigator; I don't think there's any substitute for that. But once you have decided how you're going to execute it, the execution piece can be made dramatically simpler and more efficient, with the analysis performed with much more finesse than human beings writing their own code. There is a chance AI will very quickly get into that space and help speed up these analyses, which will be all to the good, because there's always a gap between when research is done and when it gets published for doctors to read and patients to benefit from. Even if you save a few months, that's still good. So there is value in data analysis for sure.

Dr. Tariq?

There's tremendous upside. We're so early in the experience; or rather, let me amend that: we're early in the experience with generative AI. We're not early in the experience with AI and machine learning. That field has been developing for well over a decade, and it's worth understanding the distinction. You have to get fairly deep into the semantics to appreciate those differences, but we're very early in the generative AI journey, and not all AI is generative, nor should it be. Raghava was talking about machine learning applications that can be used in specific cases, and when you look at very large datasets, some form of machine learning or optimization over those datasets can be very beneficial. But again, a model can only train on what's there. If you're in experimental territory, setting new frontiers, that currently requires a human mind.
So I think there's tremendous upside. And since I don't know whether we're going back around in a snake-draft format, I'll go straight to the downsides: it can be used in a number of ways that impact care negatively, or just not appropriately. The issue with the ability to generate so much is that you can generate a lot of noise, and noise is not helpful. I'm veering away from theoretical actual harm to patients; I know that will probably come up as a separate topic. But there is so much out there that distinguishing true signal from noise, which is what AI should be able to do, is going to be a difficult area to moderate, at least in this emerging period. And that, again, is where experience and expertise through training and lived experience as clinicians affords us a unique opportunity to be involved in the adjudication and moderation of these platforms and of what the signal is, because there are a lot of promises out there. I have four kids; they make a lot of promises. Not all of them are going to come true. That's the way I look at it now.

I get it. Well, let's be Debbie Downers. Who wants to go first?

If we're talking about the downsides of AI, there are a lot, and we have to be careful. The first thing: if you enter a patient's risk factors and everything, it may recommend 20 different labs, and the patient comes in with those labs. So don't assume this will minimize cost; it can increase cost. And I don't know how to act on some of those labs. I have no idea what to do with an elevated CRP and ESR in a patient who is already well optimized; I don't know what else can be done to prevent the risk of MI and ACS.
So those kinds of things can be harmful. Then there's the individualized approach to medicine, which is very important in interventional cardiology: we provide gender-specific, sex-specific care, and we have only just started down that road; AI is far from it. One other thing worries me: who takes responsibility if AI misdiagnoses? We still have not figured that out. Is it the company, your organization, the society, or federal law? If federal law is to assign responsibility, we have to define it, and that can only be defined if we form very good regulatory oversight, built by the society in conjunction with the federal government or whichever organization should bear responsibility. Because we are moving fast; as Curtis said, there are hundreds of approved products. And I'm not sure, and maybe you can answer me, Curtis, how they are being approved, because I thought the approval process was comprehensive and challenging, and yet it doesn't spell out who, over time, is responsible for any kind of event that could cost a patient's life.

That's a very interesting question. In the liability world, you use the reasonable-person standard, and all of that will have to be adjudicated, and I suspect it might be adjudicated by physicians on one side of it. So, all right, that's a downside. Let's talk about...

Yes, and even at the organization level, a physician might say: we are using AI, but I never consented that AI could make a mistake on my behalf with me taking the responsibility. So I'm out of that game; I am not responsible.

So, the way things stand right now, Curtis, and I'm sorry to jump in a little bit here.
The way things stand right now, everywhere in the policymaking arena where governmental agencies are regulating AI in some form, or forming committees to regulate it, when you read through their documents, the responsibility ultimately lands on the physician. That's where we stand. It's not any company's, it's not any agency's; it's the physician's responsibility. And I think one of the downsides to AI, and it's not a downside of AI itself but of how a technology gets used: whoever invents something invents it with the best intent, and then somebody goes and bombs Hiroshima and Nagasaki. You can always end up with that. That's where we have to be very cautious, because a lot of the people developing these technologies are developing them on the premise that this is going to replace the clinician. That is where the danger is, because then they market it, and people these days believe anything on social media or wherever it's advertised. Where do we go from there? How is this going to be regulated? Right now there are no policies and no guidelines on how this can be promoted, advertised, or utilized. And there is a lot of potential in healthcare right now, given the quote-unquote profit margin; a lot of people are getting into healthcare because it's a fantastic business that never goes down. That's not a downside of AI, but a downside of human thought processes, and we need to figure out how to regulate it. That's going to be one of the biggest challenges of introducing AI as a tool in healthcare: keeping it an adjunct to the physician and clinician rather than a substitute, which is how a lot of people are looking at it right now.
Yes, and I'll just comment that in the paper we talked a lot about the guidelines that had been put in process: what various government agencies' roles were with AI during the Biden administration. And then we had to insert a paragraph, or a sentence, basically saying: whatever we said before, the Trump administration has reversed it. So it's not only the technology; it's the policy that is moving too, and it's going to take a little while to work through.

So Curtis, in the current era, how much of a voice do we have at the level of Congress to speak up about these healthcare decisions, and how much involvement do physicians, especially in a specialized field, have in making them?

We have actually met with a couple of the more luminary members in this area, on both sides of the aisle. One actually has a degree in AI; the other is self-taught but very knowledgeable because of his district. So we are involved with those folks and having a conversation, but they're not much further along than we are, given the nature of the product; they're essentially chasing headlines. So it's up to us to set up a system to inform them. I think that's really the important thing.

I think this is a very nice moment to say to our membership: all this lobbying and getting policymakers to listen to us costs money, and this would be a good time to donate to the SCAI PAC, even if it's $5.

And that's why I love you. Dipali is a hardcore SCAI PAC supporter, so she wasn't going to miss that point, and it's a good one. It really is very important, especially at this time; you've got to get in on the ground floor, and that's how you do it. Take us home, Dr. Velaghaledi.
What are the other downsides?

I think Afnan, Kusum, and Dipali did a good job giving individual examples of scenarios where we may see downsides, so I want to take a step back and give a more big-picture answer, which follows up on what Dipali said and, I think, also addresses one of the questions somebody posed in the chat box. When I think about technology, AI or technology in general, I tend to put it in one of three possible buckets. The first is technology as a non-clinical substitute. If you were to ask the majority of physicians where they need help, I suspect that's what they would want, because nobody went to medical school saying, one day I'm going to check all the boxes that get the AUC right so I can bill perfectly for my patient. That's nobody's dream, ever. You dream: I'm going to grow up, become a doctor, and take care of patients. So technology as a non-clinical substitute is what most physicians want; it would help with burnout, with workload, and so on. Technology as a physician substitute is the dream of the techno-utopians: the Silicon Valley types, the people at the forefront of generative AI and all of this. They think that if you can create an artificial intelligence system that knows all the answers, you wouldn't have to wait for a doctor's appointment or go see a physician. You could just sit at home, ask a question, get an answer, and have the drug shipped to you from an online pharmacy. And that, I think, is the fundamental concern with this whole thing, leaving aside the individual clinical scenarios where AI or technology may or may not have downsides. As a physician community, that's what we have to face.
There is a misplaced, or misaligned, emphasis on what technology ought to be. We want it as a non-clinical substitute; the techno-utopians want it as a clinical substitute. So what can we as a society do? In America we are a capitalist society; we cannot tell private enterprise what to do. But any healthcare technology operates under the umbrella of a healthcare legal and policy regime, and that is where we can exert influence. If the federal government, the National Institutes of Health, or the National Science Foundation are going to put money into developing new technologies, or into aiding their development, we can advocate to direct it toward things that prevent or alleviate physician burnout, or that act as a physician's sidekick as opposed to a replacement. If Silicon Valley chooses to focus on the replacement, there's not much we can do about it. But what we can do is talk to public authorities and say there should be a policy regime in place, back to the two points I made earlier: the best quality and safety of patient care, and preserving the physician's autonomy to provide that best quality of patient care. That, I think, is where our heart and soul should be, because tech startups are going to do whatever they can to make a physician substitute, and there's no point in us getting all worried about it. That's what I think.

Interesting. We do have a chat, and we do have a question, from Dr. Alison Dupont: for busy clinicians, one of the most promising benefits of AI is reducing physician burnout. Should we as a society, as SCAI, prioritize which aspects of AI to focus on first? Whoever wants to take the question first.
So that's what I was trying to get at in my answer: yes, it would be great if we could prioritize, no question. But we will not be able to set the agenda for private enterprise, right? We can influence public agendas by saying that we desperately need technologies that are non-clinical substitutes, that alleviate physician burnout, take the workload away, and help doctors focus on what they're good at and what they've always dreamed about, which is the doctor-patient interaction. That I think we can do. And we can also advocate for a policy structure and a legal regime that would necessitate that any privately developed technology also keep patient safety, patient quality, and physician autonomy in mind. So, anybody else want to take that question? I mean, I completely agree, but I think there is an inevitability here, and it's not just incentive misalignment. Going back to the father's point, it would be great if we could have AI solve for incentive misalignment in healthcare rather than capitalize on the misalignment. You give a consultant a hammer and they'll make $10 million out of it. But anyway, besides that, the other thing we just have to be careful about is being appropriately focused on patient safety and quality; but you don't get to safety and quality without access. And if AI can enable patient access by freeing up physician time, facilitating physician access to care, then I think you start the funnel where you can get there. I do think, intentional or otherwise, there are some institutional safeguards here, at least from large technology corporations. The liability risks for a publicly traded corporation are almost inherently too high to operate in the true clinical-care-provision replacement space, at least as it's currently structured, right?
If it's not your core thing, then it's probably not going to be worth the risk, because there's no standard there; you can't have the corporate practice of medicine, and there are a number of laws around that. So I think this is an area that we had identified in the paper as an unmet need. And this is where we need clinicians to be able to bring the practice, the art of medicine, back to the bedside. So the way I think about the opportunities here is: yes, we can do the back-office automation. Good God, if we could all move away from fax machines and have something that just processes the fax and puts it in a structured note, we all want that. If I hear that, and this will age me, I'm thinking of an AOL signal coming over and a fax connecting, the beeping in the background. And that's still happening in 2025. So AI should absolutely be on that, right? There's no question about that. Physicians should be leading how we intentionally use it when it comes to patient care. And that's where I think it should be augmentative instead of autonomous, right? The FDA does have certain categories for AI as a medical device, and I'm not saying that all things are like that, but there's autonomous versus augmentative. And I think if we can augment our clinical interactions, rather than creating a clinical substitute, if we can augment our analysis, it allows us to do other things. This, again, is really incumbent upon us, if we can enable better patient access there. But I think we all agree that physician autonomy is the goal here, not only to maintain individual clinical expertise and the profession, but also because there's an overriding principle that's been around since 2000, which is move fast and break things. Engineers like to move fast and break things. That is not a good idea in medicine. That is not a principle that we hold to in medical care.
And that's where I think physician leadership can at least move us to a state where: let's move fast, let's move responsibly, let's solve for the unmet needs, and let's come to solutions that will have beneficial impact on the back office and on provider autonomy, but really also impact patient access and then patient quality and care. I think solving from the outside in allows us to really get there. Anybody else? I think, to just directly answer Alison's question: this is, again, just like a patient-tailored approach, we do that, which patient requires what. We have different sorts of practices, right? Some people are doing structural, some people are doing interventional, some people are doing general cardiology, or clinic, or hospital, and everybody's demands are different, right? Somebody wants better note writing and deciphering, all this integration of images and labs in the notes, replying directly to the patient, and patient access. However, if somebody is in the cath lab four days a week, they will ask for more optimization of coronary imaging, more evaluation of intracoronary plaques, how to do precise stenting; likewise if somebody is doing valves. So there are different needs that will help those different groups of physicians. So naming one particular thing we should prioritize would be very difficult for us as a society, whether it's the notes or the valve or something else. But overall, I think the first thing is we have to get ahead of this implementation, because what really bothers me, over many, many years, even in other areas like PVD, limb salvage and everything, and Curtis and Afnan can help me understand this, is why our voices are not heard. And we are the last ones they will hear on amputation prevention and several other things. So the first thing is, they will go to Congress, who have no idea.
If you remember, in Atlanta, we had one of the Congresswomen, and she had the idea that one STEMI physician gets $60,000. That was out of proportion. Do you remember that? I do. That's what she thought. I wish that were the case. So think of that out-of-proportion analogy, and those are our lawmakers, one. Second, then it comes to the insurance, and then it comes to the rest, the devices and everything. And we are at the bottom of the food chain, when we are the ones providing the care and taking the responsibility. So it is really concerning that this new thing is now coming. And as Raghava said, most people want to get into healthcare because the holy grail is saving somebody's life. Nothing can substitute for that. And everybody wants to be involved in this, whether they are in software, in hardware, in business, because nobody gets that gratification other than the clinician. So we are still figuring out how we can be heard together as a society, so that we can help them implement this and we can get the benefit of it. And then the second question, we will get to the implementation and all sorts of stuff. Yeah, I agree. We do talk a lot about this in the paper. It's in the title: what is SCAI's role with AI, and what should we be doing? And the categories are big: education, best practices, ethical oversight, and then, of course, advocacy. Do you guys want to talk about any specific ideas you have for next steps? I'll just throw it open to what the question asks about SCAI's role. I think I'll just talk about the first thing, which is advocacy. And it's really sad, and I will keep this anonymous, but just two days ago, in a group of 200 to 250 interventional cardiologists, I was discussing how SCAI is important and what SCAI does for you.
It does advocacy to represent us; think about all this work that's been done. And I know that because I've worked on the advocacy and government committee for many years. And one of our elite members, somebody, said that SCAI doesn't matter. And that was really sad, because, SCAI in comparison to others, it doesn't matter? If you have time, you can invest your energy in it. But how can you practice when you don't have all these things done? And think about this: someday, somebody is going to come and say, I'm going to implement this AI, and you still have to make the clinical decision, and you still have to take all the responsibility for the patient's care. So this is what SCAI does: they are out in front. They are helping you be represented, despite all these hurdles. And up front, we are helping your education, your clinical practice, and preventing further issues and complications and everything. It is your voice. So it was really sad to hear that, still, in this era, some think SCAI does not make any relevant contribution to your clinical care. It does. I'm in private practice, and I can't tell you how much support I get from SCAI, both professionally and personally. And I would just encourage all the members to be part of this, because let me tell you one thing: as much as SCAI needs you, you need SCAI even more. Other thoughts? SCAI's role? I mean, I've had the privilege of being on the advocacy committee for a number of years now, and I think SCAI's role is really to provide leadership in this space. And that's really important, but I think it is incumbent upon our individual members to also provide leadership. You know, after you pay enough board certification and renewal fees, then yeah, it does start to become a little bit painful.
And this is, for most people, at least their third specialty, right? Yeah, beyond internal medicine. But we have unique and very particular concerns. Not to borrow from Liam Neeson, but we have a very special set of skills. And I think that we have to find alignment around those and understand that. And we haven't really touched as much on the skills we have when it comes to very procedure-oriented things, but we as interventionists, and SCAI, really do represent the continuum of care. I know that a lot of us do procedures, but we also understand the pathophysiology that led to the genesis of that condition, then the intervention, and then the post-procedural monitoring. And that's why our paper tried to break it down across that continuum; we tried to target it, and part of it is looking at, oh, there are so many solutions here, because that's what interventional cardiology touches. You touch the life throughout that cycle. And I think it's really important to understand that that's where SCAI can be really, really impactful for the membership too. Another thing that Kusum highlighted: why do physicians not have as much of a voice as we should? I think there are changing politics, which is not the subject of this webinar, but providers are now spoken for by large healthcare systems. When we talk about medicine, we talk about hospitals and healthcare systems. We separate out the provider, but when corporations and corporate practice look at providers, it is a hospital that speaks for 1,100 providers. That is how providers are looked at. So they are the provider voice. If we can't speak through SCAI, then our voice as physicians, and our judgment, is being diluted, and somebody else is effectively speaking for you. And they may be using AI to speak for you. I don't know.
But in general, we should be representative of our own voice here and try to lead our society, our specialty, our calling into the future, really across that continuum of care. Anybody else? I think that's a great point you mentioned, Afnan. We truly don't have an individual voice anymore. No one is listening unless you are part of a big organization or society. And I think participating with SCAI, doing advocacy runs with SCAI, makes a lot of difference compared to just being an individual who says, oh, this doesn't work and I can't do this anymore. Let's take a step back. We have about 10 more minutes. There's an excellent question in the chat from Manzoor Tariq, which asks, basically: where do you think we're going to go in five to 10 years? Get out your magic wand. So if we had our own magic wand to change some of the AI, I think what is optimal is individualized, patient-centered, gender-specific care for the patient. And the AI should be self-correcting, just like a Tesla; this is not an advertisement for Tesla, actually. Self-correction, and that would be the holy grail, which is a lot to ask from machines, but still, we are the human beings, and this is how we got to AI. So what next? Maybe we will be at AI 2.0, something like that: self-correction and an individualized approach to patient care. Dr. Velaghaledi? Yeah. I'm sorry, go ahead. Magic wand. Yeah. So earlier, when I answered one of the questions, I put AI and technology in one of two bins: either as a physician's non-clinical substitute or as a physician's clinical substitute. But there is a third bin, which to me is personally the more exciting of the bins, which is technology, or AI, as a physician's sidekick. This is all the ways in which your day-to-day routine patient care and routine decision making is enhanced by AI tools.
Now, in interventional cardiology, we're not quite there yet, but this is starting to happen in other fields. To give you an example, there are robotic tools to do joint replacements. You feed the patient's data and scans in, and the robot manipulating the knee on your behalf has a preset data set of 10 or 20 thousand prior cases. Based on that, it analyzes and says: if you move this joint this much, and if you position it just like that, that gives you the most optimal outcome. One can conceivably think of a similar scenario where 20 million angiograms are pre-fed into something, it analyzes every single angiogram, your angiogram goes into it, and it can tell you the risk of perforation, the likelihood of success. Is that available today? No. But the ortho guys got there, and so shall we. So technology as a sidekick, things that will enhance our ability, become a second or third eye to our eyes, a second ear to our ears: that, I think, is a really exciting opportunity for the next five to ten years. In the intermediate term, that's where I see most of the action happening: companies will try to use AI-based algorithms to fine-tune our clinical practice operations. And that, I think, most of us would welcome, at least the tech-welcoming among us. And that should be exciting. Yeah, first of all, AI couldn't have produced a moment where your dad jumps on to ask you a question in a webinar. So thanks for that question. But, you know, I think it's also important to understand how far we've come already. We are always looking forward to generative AI, but what Raghava was mentioning is really important. If you look at procedure optimization and things like flow reserve, we're pretty far advanced. We've adapted the best principles of computational fluid dynamics.
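The angiogram scenario described above, scoring a new case against a library of prior cases, can be sketched in miniature. This is a hedged illustration, not anything from the paper or any real product: the features, the toy data, and the nearest-neighbor approach are hypothetical stand-ins for what a validated model trained on millions of angiograms would do.

```python
import math

# Hypothetical prior cases: (lesion length in mm, calcification score,
# vessel diameter in mm) paired with a perforation outcome (1 = yes, 0 = no).
PRIOR_CASES = [
    ((8.0, 1.0, 3.2), 0),
    ((22.0, 3.0, 2.4), 1),
    ((15.0, 2.0, 2.8), 0),
    ((30.0, 3.0, 2.2), 1),
    ((10.0, 1.0, 3.0), 0),
    ((25.0, 2.0, 2.5), 1),
]

def perforation_risk(features, k=3):
    """Estimate risk as the complication rate among the k prior cases
    most similar to this one (Euclidean distance in feature space)."""
    ranked = sorted(PRIOR_CASES,
                    key=lambda case: math.dist(features, case[0]))
    return sum(outcome for _, outcome in ranked[:k]) / k

# A long, calcified lesion in a small vessel scores as high risk;
# a short lesion in a large vessel scores as low risk.
print(perforation_risk((28.0, 3.0, 2.3)))  # 1.0
print(perforation_risk((9.0, 1.0, 3.1)))   # 0.0
```

A real system would use imaging features learned from the angiograms themselves and a clinically validated model; the point here is only the shape of the idea, new case in, risk estimate out, grounded in prior outcomes.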
And, A, without getting really deep into physiology, and B, without disappointing Dr. Kern, who trained me on physiology, I'll say at a high level that a lot of what we've done are surrogates. There was QFR at a 50% stenosis and a 70% stenosis, which was validated against nuclear stress tests, and that's how we started the entire field. Then we validated FFR, which is a hyperemic index that was supposed to measure flow; but because Doppler wires are fickle, we're actually measuring pressure. So we're using surrogates, and we've created surrogates upon surrogates with iFR. And now we're going back to resting flow ratios. So we've done this over time, and this is an area where I think AI, and machine learning in particular, is incredibly useful. And it will continue to develop, because if we can move away from dye-based cameras toward higher resolution and frame rates, we should be able to get to things that actually are surrogates for flow, because flow is the carrying capacity of oxygen in your blood, and oxygen determines ischemia. The closer we can get using math surrogates and big data, the closer we'll be. And understanding patient factors and physiology and the underlying disease conditions: again, it takes physicians to come to that calculation, right? It takes physicians to understand what's going on and how to apply these things. So I think you can apply it very intentionally if you are that physician. So, where I'd like to be in 10 years, and one thing I'd like to bridge between now and 10 years: I don't have any social media other than LinkedIn, and I rarely post on LinkedIn, but I posted something about this so we could get people to join in.
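For readers outside the cath lab, the pressure surrogate discussed above reduces to simple arithmetic: FFR is the mean distal coronary pressure divided by the mean aortic pressure, measured under maximal hyperemia, with 0.80 the conventional treatment cutoff. A minimal sketch (the function names and the example pressures are illustrative, not from the paper):

```python
def ffr(pa_mmhg: float, pd_mmhg: float) -> float:
    """Fractional flow reserve: mean distal coronary pressure (Pd)
    over mean aortic pressure (Pa), measured during maximal hyperemia.
    A pressure-based surrogate for relative flow across a stenosis."""
    if pa_mmhg <= 0:
        raise ValueError("aortic pressure must be positive")
    return pd_mmhg / pa_mmhg

def is_hemodynamically_significant(ffr_value: float,
                                   cutoff: float = 0.80) -> bool:
    # FFR <= 0.80 is the conventional threshold for a
    # hemodynamically significant stenosis.
    return ffr_value <= cutoff

value = ffr(pa_mmhg=95.0, pd_mmhg=70.0)
print(round(value, 2))                        # 0.74
print(is_hemodynamically_significant(value))  # True
```

The surrogate-upon-surrogate point in the discussion is visible even here: the quantity of interest is flow, but what the wire actually delivers is two pressures and a ratio.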
Somebody who is not a physician, an engineer creating health tech, replied to me and said, I really love this portion of the paper. And I don't know who wrote it, because it was a group effort, but he quoted: AI should be viewed as a tool to augment clinical decision making, not replace it. The unique combination of clinical experience, intuition, and human empathy that physicians bring to patient care cannot be replicated by algorithms. I think in five to 10 years, if we can get to the point where AI can help us do the back office and solve for the what, what is going on, what is happening in a patient's life, then we can unlock the why it's happening. That human empathy for understanding the patient factors that got us there, and how to solve for those barriers to care: that's where I think we can elevate ourselves beyond the routine and really return to patient care, to just being human with each other. I think that would be a really nice thing to get to. Yeah, Afnan, I echo your sentiment. If I had a magic wand: just like the way you talk in the cath lab, we should be able to do that even in our office space. Go there, sit, talk; there are sensors all over; they pick up our conversation; the note is done, meds are sent, bills are dropped, prior authorization is ordered. All you have to do is talk to the patient, hold their hand if necessary, and explain to them why, why we are doing this. Because a lot of times the biggest complaint patients have is: I was given this, but I don't know why, and why will it make me better? Being able to spend time doing that, instead of trying to type stuff and call people to drop these bills, would be magically life-changing for the profession itself. So CMS doesn't reimburse for that. So they are hearing us; there will be a 50 percent reimbursement cut for next year. Yeah, but then you can see 50 percent more people.
Right now, I sometimes end up seeing one person in one hour when they start crying. I'd be able to see more people because I'm not doing the notes and everything else. So, you know, there are pluses and minuses to it, and it'll be interesting to see how that goes. Still, we need to revise those notes and read between the lines, because what is between the lines? Just yesterday there was a message asking whether the patient could get Xarelto for PVD: it's expensive, can you change it? And the AI-generated reply was, yes, we can change it to warfarin. For PVD? No, we can't. I thought you were going to lead us into the next conversation, like a plug for the next webinar on how we actually solve for the why, impact patient care and outcomes, and make interventional cardiology the center of value-based care. I thought that's where you were going. I think that's where I should take us. And I think that's where we need to advocate for policymakers to think in that direction, instead of getting bogged down by the noise of what can be substituted and made cheaper, rather than what can be improved to provide better value for the same cost, or maybe even less cost. That's where we need to be going. I think value-based care for interventional cardiology would be very expensive; they will not be able to afford it, since there has been no change in physician reimbursement in many years. Just a few years ago, United made 22 billion. And on the physician side, administrative cost has increased to 80 percent, while physician reimbursement or payment is still five to seven percent. So we will never get to value-based care with this 44 percent Medicare cut. That's different stuff, but still, we can try. I just worry that AI comes along and another four percent becomes 14 percent.
No, I guess, you know, that'll be a time for us as physicians to think as a group and say, hey, our reimbursement is going to be direct pay, completely different from your insurance and your hospital payments. But that's a topic for a different webinar entirely. So, we are at the top of the hour. I wanted to thank you all very much for providing your thoughts, and I really enjoyed working with you on the paper. Hopefully we'll have a couple more webinars and papers in our basket here. So thanks very much. Thank you so much. Thanks, Curtis. Thanks for organizing this. Thank you so much. Thank you.
Video Summary
The webinar featured a discussion of a newly published paper in JSCAI titled "Governing the Unbound: SCAI's Role in the Future of Artificial Intelligence." Hosted by Curtis Rooney, SCAI's Vice President for Government Affairs, the session included insights from the authors, Drs. Afnan Tariq, Kusum Lada, Dipali Tukai, and Raghava Velaghaledi. Each panelist shared their professional background and interests in cardiology and AI integration.

Key points included the significant potential of AI to enhance medical procedures, improve patient care, and reduce physician burnout. Concerns were raised about the responsibility and liability associated with AI errors, the potential for increased healthcare costs, and the need to ensure that AI acts as a tool to augment rather than replace clinicians.

The conversation highlighted SCAI's crucial role in education, advocacy, and ethical oversight of AI in healthcare. Participants emphasized the importance of active physician involvement in AI-related policy so that patient care remains the priority. Overall, the session called for structured integration of AI in healthcare, underscoring the twin goals of augmenting clinical decision-making and preserving physician autonomy.
Keywords
artificial intelligence
healthcare
AI integration
medical procedures
patient care
physician burnout
AI ethics
clinical decision-making
SCAI's role