CHD Research Forum on Multicenter Research
Retrospective Multicenter Research
Video Transcription
So it's my privilege to introduce Dr. Shahanavaz. Dr. Shahanavaz is currently the director of the Cath Lab at Cincinnati Children's Hospital. She hardly requires an introduction to this audience because her accomplishments speak for themselves. And she's here today to talk about retrospective multicenter trials.

Thank you for that wonderful, wonderful introduction that I absolutely do not deserve. But I will send you that $10 I promised for saying only nice things and nothing truthful. Dr. Burns, that was a phenomenal presentation. It will be hard to follow it up with the paltry, measly stuff I have done in retrospective multicenter research. But I'm hopefully going to talk to you about what I think went well, and definitely talk a lot about what I did not think about as I was jumping into this.

So the way I thought about research was, you know, the good. Just like Dr. Burns said, my motivation for doing this initially was that I had done 25 patients of procedure XYZ, and in these 25 patients, this is what my outcome was. But any time it came up at surgical conference, I felt stupid about not knowing what happened to the 26th patient done elsewhere. So I truly wanted a larger sample size. I really wanted to know: was this a patient-based outcome, an anatomy-based outcome, or an individual operator-based outcome? That was my motivation. I wanted generalizable findings, to be able to say, look, the four we have done haven't gone well, but the 200 done by Shyam have gone really, really well, and this is what he does differently, sharing resources. And when things have gone well or not gone well, what does that mean for our patients? We have learned a whole lot from other people's outcomes. Some of the motivation was also networking. I got to interact with individuals from very many different sites, learn about their practices, and tell them about my own, in terms of learning things and getting to know people. Although some of you who know me know that's usually not my style, there was some motivation in getting to know some of the people out there, I think.

The way I went about it was very simple. You start with a question you have. For example, one of the projects Shyam and I and multiple other collaborators have worked on is preemie PDA closure. Shyam has written a lot about the Piccolo device; I have looked at the alternative, the Medtronic Micro Vascular Plug. What are the outcomes with that device? Or say bioprosthetic valve fracture: what happens with that? So defining the research question, I think, is what comes first, and we come up with these questions on a daily basis, whether in the cath lab or outside the cath lab.

What was harder is identifying the outcome measure, right? What I define as success, like deploying a device in the position I wanted to deploy it in, might be success to me. But if it doesn't address the physiological need of the patient, then the outcome is not ideal for that patient. So that is where the whole planning phase, collaborating with other people, and bringing in experts who have done this is important, I think.

In the development phase, you then have to decide how you're going to collaborate with these people. Is this purely a database collaboration? Are you defining your research question with them? Or is it more than that? Are you developing something further? Obviously, along with that, like Dr. Burns said, comes the whole business of IRBs and DUAs.
You have to make sure all of that is tackled up front. But beyond that, once the data comes in, and this is the challenging part of this research, it's not enough that data gets sent to you. You have to look at that data to make sure it is true to what you asked for. Some of it is as simple as centimeters versus millimeters in what people enter as their data. Others are more nuanced: the complete absence of complications in a data set should raise red flags. That is where the challenge of doing this retrospective research comes in. Some of the data fields are completely missing because a particular site failed to collect that data, or the data are not easily available to them. So you have to define what's relevant to you and what you're going to accept as good data, and then sit down and analyze the data. Now, that analysis could be done by you or by a collaborator, which is why defining the roles within the project is very, very important.

Things to consider when partnering with people: when I started, it was just, I bet this person has 200 of this procedure versus my 20, so I should collaborate with them. But the more I've done this, the more I've learned that there is more to it than patient numbers, right? You have to make sure their analytic approach is the same as yours, and that you share the same perceived benefits and value of the research. That will determine what their data quality is going to be and how much work they're going to put into it. The people you ask for data should be willing to share all of it, including the bad data. Another challenge of retrospective research is cherry-picking cases, sharing only the good data, which is why, when you start your collaboration, you have to make sure you have trust in that partner, so the relationship continues to blossom into further research. That is also why your analytic approaches have to be similar.

Okay, the bad. There can be a whole lot of bad, depending on how you've gone about it. Retrospective data collection can be extremely resource-heavy, which is why, when you define your questions, you have to be very clear about what your outcome is going to be. I have made spreadsheets whose columns ran from AA to ZZ, and then realized no one cares about all of that data, no one wants to collect all of that data, and more importantly, none of that data actually gets me to my outcome measure. And if that's not the case, then after the second patient no one is going to be collecting all of that data faithfully, or at least diligently. It can be extremely time-consuming, not only for you doing the research, but also for your collaborators collecting the data. What are they getting out of it? I have been the PI for both funded and non-funded retrospective research; my funding has typically come from industry. But if you're not doling out money for resources at multiple centers, it can be really challenging. For most of what we do, there's no automatic data dump. With Epic and some of the other medical record systems, some of that can be automated. But a lot of what we do is measurement-based and buried in the chart. What is the diameter of the RVOT? How high a pressure did you go to fracture a valve? All of that requires a single person to go look at the data and make sure it is accurate as it comes in. This was the hardest part for me.
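As an illustration of the kinds of checks described above, a minimal sketch in Python (using pandas) might look like the following. The column names, the 20% missing-data threshold, and the check_site_data helper are all hypothetical stand-ins for whatever a given registry's data dictionary actually defines; this is a sketch of the idea, not any project's real pipeline.

import pandas as pd

# Columns every site is expected to report (hypothetical data dictionary).
REQUIRED_FIELDS = ["site", "device_diameter_mm", "complication"]

def check_site_data(df: pd.DataFrame) -> list:
    """Return human-readable warnings about likely data-quality problems."""
    warnings = []
    # 1. Fields a site failed to collect, or collected only sparsely.
    for col in REQUIRED_FIELDS:
        if col not in df.columns:
            warnings.append(f"field missing entirely: {col}")
        elif df[col].isna().mean() > 0.2:
            warnings.append(f"{col}: more than 20% of values missing")
    # 2. Centimeters vs. millimeters: values an order of magnitude below
    #    the site median suggest the wrong unit was entered.
    if "device_diameter_mm" in df.columns:
        median = df["device_diameter_mm"].median()
        n_suspect = (df["device_diameter_mm"] < median / 5).sum()
        if n_suspect:
            warnings.append(f"{n_suspect} diameter value(s) look like cm, not mm")
    # 3. A site reporting no complications at all is a red flag to verify,
    #    not a result to take at face value.
    if "complication" in df.columns and not df["complication"].any():
        warnings.append("zero complications reported: confirm with the site")
    return warnings

# Toy pooled data set illustrating all three problems.
pooled = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B"],
    "device_diameter_mm": [5.0, 6.0, 0.55, 5.5, None, None],  # 0.55 is likely cm
    "complication": [0, 1, 0, 0, 0, 0],
})
for site, site_df in pooled.groupby("site"):
    for warning in check_site_data(site_df):
        print(f"[site {site}] {warning}")

Running this flags site A's suspicious 0.55 mm diameter, site B's sparse diameter field, and site B's implausibly clean complication record, which is exactly the "absence of complications should raise red flags" point made above.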
As you're doing this data collection and going through your data and your study, there will be disagreements between collaborators. As the PI, you have to realize that there are very few absolutely right and wrong answers. It takes a lot of compromise to get to your final outcome, and you have to be willing to put in that compromise to get to the data.

The ugly. I think Dr. Burns talked about it: monitoring finances. That is not something we do really well. Once you get the money, you are responsible for it. You dole it out, you make sure everyone gets their share, and then you have to make sure your finances are all in order. You are responsible for everything. You will get a call about any missing or misappropriated finances, and that will really kill your future hopes of continuing to get funding.

The other thing, just to be honest, is authorship. That has to be predetermined before you get into the manuscript phase. Again, this is something I learned really late in the projects I've done previously. You have to be very clear and say, look, with this research, there can be only one person from each site on the manuscript, or two people, whatever you determine. It has to be fair and square. Your best friend Sean can't call me up and say, hey, can I have three people on this? Arash can't call me up and say, hey, I want to be the first author on this paper. It's not a decision you can make once you are really deep into the project. I've learned that those discussions have to happen at the initial collaboration phase of the research.

What I have learned in doing this is that it is absolutely fun to do retrospective research, but you have to make sure that not only you but your institution can put in the time and effort. You spend days and hours sending out emails about the fact that you have not received the data, especially post-COVID, when most places have lost their coordinators or the people have changed. The one thing I remind myself is that I am the person most passionate about my project. Not everyone needs to be, or will be, as passionate about the project I want to do, and that constantly becomes a battle over time and resources and how you handle it. Your institution has to be supportive. You are not going to be doing the contracts; someone else is. You are not going to be doing the DUAs; someone else is. If your institution tells you that you absolutely need those to get your project done, but they don't have the resources, think about who you collaborate with or how you collaborate. Somebody else could be the PI on paper if you have a reasonable understanding, or you could use a data coordinating center that handles all of that.

We can get really good work done using multicenter research, but there are a lot of pitfalls. If you're taking industry funding, how do you make sure the data doesn't get misused? When we looked at the SAPIEN data, there was funding from Edwards, but there was an understanding that only pooled data would get sent to them. Since the data were collected only in a retrospective fashion, and there was no chart of use, that data could not be submitted to the FDA for any approval. All of that understanding has to be in place at the beginning of the research, before you embark on it, I think. I'm happy to answer any questions. But again, as for what the future holds for me, I think I've learned a whole lot, but I've also learned that what worked for me has worked for a certain reason.
I think it is extremely helpful to collaborate with passionate individuals who believe in the same thing that you do. Once you do that, I think good work can come out of that. Thanks a lot.
Video Summary
Dr. Shahanavaz, director of the Cath Lab at Cincinnati Children's Hospital, discusses retrospective multicenter trials in this video. She describes her motivation for conducting research and the importance of larger sample sizes and generalizable findings. She explains the process of defining the research question, identifying the outcome measure, and collaborating with other researchers, and discusses the challenges of retrospective data collection, including its heavy demands on resources and time. Dr. Shahanavaz emphasizes the need for trust among collaborators and the importance of settling issues such as finances and authorship early in the project. She concludes by expressing her passion for retrospective research and the benefits of collaborating with like-minded individuals.
Asset Subtitle
Shabana Shahanavaz, MBBS, FSCAI
Keywords
Dr. Shahanavaz
Cath Lab
retrospective multicenter trials
larger sample sizes
generalizable findings