
Data Collection for Advancing Equity: Considerations, Challenges, and Insights

Learning Objectives:
1. Describe one system’s strategy for data collection, including how it is addressing challenges related to data completeness and accuracy
2. Explain key factors to consider when developing processes and workflows around trauma-informed data collection
3. Identify the ways in which data collection can be used to drive action and increase accountability to reduce inequities
0.75 Credit CME

This video is an excerpt from the AMA Advancing Equity through Quality & Safety Peer Network session on data collection. It offers insight from data expert Dr Tom Sequist of Mass General Brigham on 4 primary themes: (1) data completeness and accuracy, (2) trauma-informed data collection, (3) moving from data collection to action, and (4) understanding and working with Epic.


Education from AMA Center for Health Equity
AMA’s online education to empower individuals and organizations, in health care and beyond, in advancing racial justice and equity.

Video Transcript

Tom Sequist, MD, MPH: [00:29] So just as a 30-second overview: over the past almost two years now, and it will be a few years later this fall, Mass General Brigham has run a campaign that we've labeled United Against Racism. That campaign has three pillars, which you can see on the right of the slide. We focus on leadership among our employees and the culture among our workforce. We focus on patient care and traditional health care inequities. And then we focus on community health and external-facing policy and advocacy. Those are the three pillars of our work. I'm not going to talk about all of the individual boxes, because we're going to focus on the one box that's highlighted there, which is the topic for today: increasing our data accuracy. I think probably everyone would agree the data are foundational to everything that we want to do. They're really important as an educational tool. They're also extremely important in what we work on in United Against Racism, and extremely important to demonstrate the impact of the work that we're doing, to hold ourselves accountable to our patients and community members, that with the investments we're making we actually are achieving real gains in our anti-racism platform and our health care equity platform.

[01:49] So the data that we capture right now, we put into these buckets here: race, ethnicity, and language data; SOGI data; and disability data. Those have been the areas that we are working on. I would say, very specifically, the real thrust of our efforts has been on the left-hand pillar there, the race, ethnicity, and language data, and that's because, primarily, we are focused on our anti-racism platform. And so for the first year-and-a-half that we've been doing this work, we've been focused there on the left. I think folks have probably seen this before, but we have lots of challenges that are not unique to Mass General Brigham. They revolve around these three concepts: managing the sensitive data that we collect, and there are lots of topics that we could talk about in that space; consistency in the data that we collect, both internal consistency and consistency with external stakeholders who are also collecting these data; and the appropriate categorizations of the data, and we can talk a lot about race, ethnicity, and language in that space as well. So I'll end here; this is my last slide, I promise, a very quick overview.

[02:59] But if you were to ask what the things are that we've done to improve our collection of race, ethnicity, and language data, they are listed there at the bottom of the slide, 1 through 9. What the top of the slide really focuses on is what we're trying to achieve with each of those 9 initiatives that we've put in place. First, our staff being comfortable with asking the questions: what is your race? What is your preferred language? Second, our staff getting training on the why and the how, so when providers, physicians, nurses, or others in our system ask, they know why we're doing this. Third, making our patients comfortable: are patients comfortable with answering the question as well? The fourth thing is our methods of reporting on these data: how do we work on that? And then the last piece is, how do we expand the number of roles who can assist us in collecting race, ethnicity, and language?

[03:55] And so we envision that each of the 9 topics that we are walking through, that we have implemented, actually touches on different aspects of those 5 core goals that we have in the collection of race, ethnicity, and language data. So we have optimized Epic, which is our electronic health record. Optimizing really means we kind of fixed a lot of the things that didn't really make a lot of sense in the actual fields that get populated in Epic. Once we fixed those fields, we worked a lot on direct patient outreach through Patient Gateway, which is the tethered electronic patient portal, the way that patients can interact through email and otherwise. We've gone out through that portal and email, but also through mail and telephone, because obviously not everybody is on an electronic portal. We have created standardized scripting for all of our registration staff, both in the hospitals and in our remote phone backbone that does patient registration. We've created a lot of elearning and training guides for staff across the system. We have created specific training for our registrars, again, the folks that are collecting most of these data around race, ethnicity, and language.

[05:12] We've tried to create an environment where everyone understands why we're collecting this, creating signage and posters and such throughout the organization. We have expanded the roles of the people collecting these data, including some of our high-risk care management programs and behavioral health teams that work in the clinics to help us collect these data as well. And within some of our individual hospitals and clinics, we've actually tried to pilot different initiatives to help us understand how to better collect these data. Lastly, I would say there's always a burning platform that comes up, and one of the burning platforms for us is understanding how we collect newborn race and ethnicity data. That actually turns out to be a pretty complicated process for us, and one that we're working on cleaning up right now. But it's critically important if we're going to address equity issues such as the issues that you see in maternal and fetal outcomes by race, ethnicity, and language; and also, just generally speaking, we obviously need to be collecting race, ethnicity, and language data from birth in a way that comports with the rest of our strategy. So that's the summary of what we're doing at Mass General Brigham, but I look forward to the conversation we're going to have today.

Tam Duong: [06:31] Alright, thank you. Thank you, Dr Sequist, for this great overview. I really want to delve into these strategies a little bit more, and we saw that your questions related to these four themes here. The first one is ensuring data completeness and accuracy, particularly around REaL and SOGI data. Then there were a lot of questions about trauma-informed data collection: data collection can be sensitive, or sometimes may even bring up traumatizing events, so how can we do this in a way that's trauma-informed and meets patients where they're at? The third one is moving from data collection to action. Now that we have all these great strategies and have collected the data, what do we do? How do we prioritize it? And then the question around unintended consequences: how do we make sure that we avoid those? And then lastly, I think everyone on the peer network is on Epic, and MGB is on Epic as well, so there were questions about understanding and working around the limitations of Epic. Those were the four themes we heard. But the first question: do you use patient outreach as a primary method for collecting patient information, or as a way to confirm data or fill in gaps?

Sequist: [07:58] So, really good point. I should say that our primary metric on this is to reduce missingness of data. I don't have a slide ready around this, but for ethnicity and for language, I think about 15 to 20% of patients had just blank; it was missing. And so our metric for race, ethnicity, and language is to get the missingness of data down to less than 5% over the course of a year. That was a strategic choice. The other metric that we considered was reducing the inaccuracy of all of the data that we have, because even for the other 80% of ethnicity data that we had, we know it's not accurate. We actually did some pilot surveys, reached out to patients, asked them their race, ethnicity, and language, and compared it to what was in Epic, and, I apologize, I probably could have better prepared some slides on this, but the sensitivity and specificity were really not great for patients who were Black, Asian, or otherwise non-white, and for patients who were Hispanic. What the sensitivity and specificity really showed was that we defaulted to labeling patients as white. So we had very high sensitivity in some spaces, but very poor specificity, as a result of defaulting to labeling patients as white. So our first step is to reduce the missingness to less than 5%; then we are going to tackle the accuracy of all of the data. But when we do our patient outreach, just to be fair, we actually have been outreaching to all patients. We have about a million primary-care patients, and we did outreach to all of those patients to try to give them an opportunity to correct the data. But, to be fair, we prioritized a lot of the resources on those patients with missing data. So the repeated outreach and the multiple ways of outreaching were limited to the people who had missing data, no data present.
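The missingness metric described here can be sketched in a few lines. This is a hypothetical illustration, not MGB's actual Epic schema: the field names, the distinction between a truly blank field and an explicit "Declined" response, and the example records are all assumptions for demonstration.

```python
# Hypothetical sketch of the missingness metric described above: a field
# counts as missing only when it holds no value at all; "Declined" is a
# valid answer, not missing. Field names and records are illustrative.

BLANK = {None, ""}

def missingness_rate(patients, field):
    """Fraction of patients with no value at all in the given field."""
    missing = sum(1 for p in patients if p.get(field) in BLANK)
    return missing / len(patients)

def outreach_list(patients, fields=("race", "ethnicity", "language")):
    """Patients with at least one truly blank field, prioritized for outreach."""
    return [p for p in patients if any(p.get(f) in BLANK for f in fields)]

patients = [
    {"id": 1, "race": "Black", "ethnicity": "Not Hispanic", "language": "English"},
    {"id": 2, "race": "", "ethnicity": None, "language": "Spanish"},
    {"id": 3, "race": "Declined", "ethnicity": "Hispanic", "language": ""},
    {"id": 4, "race": "Asian", "ethnicity": "", "language": "Mandarin"},
]

print(missingness_rate(patients, "race"))          # 0.25: only patient 2 is blank
print([p["id"] for p in outreach_list(patients)])  # [2, 3, 4]
```

Note that patient 3, who declined the race question, still counts as "answered" for the race field; this matches the distinction drawn later in the session between a decline option and a field that just isn't filled in at all.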

Duong: [10:08] Great. Question from [Participant]: "Do you attribute patients to clinics or areas to be responsible for missing data?"

Sequist: [10:21] No, we did this all at the enterprise level. So, for folks who aren't familiar, Mass General Brigham has 14 hospitals and about 7000 doctors, about a million primary-care patients, and about 3 million patients per year treated in our system, including the specialty patients as well. We did all of that at the enterprise level. One of the main reasons is that if you look at patient registrations, which are one of the primary time points where we can capture these data, about 90% of them are done through a centralized process over the telephone, and only 10% of them are done locally, like check-in in a clinic, hospital admission, or obstetrics admission. So that's what led us to doing this primarily at the enterprise level.

Duong: [11:16] "Did you update questions about REaL and SOGI in the Epic patient portal, MyChart, to mirror what you updated in Epic? And can patients decline to answer, or are these fields required?" from Scott.

Sequist: [11:30] So yeah, we did do that. They were not aligned, as you might guess, between the patient portal and the clinical record. And so we made sure to align those things so that when we did the outreach through the patient portal and a patient entered their race, it could subsequently populate the clinical record as well, so that we don't have two sources of truth as to what your race is, for example. Yes, there is a decline option as well. But when I say missing, I literally mean just nothing; that field just isn't filled in at all.

Duong: [12:12] Great, thank you. And then, Anad, do you want to speak up and ask your question, just so we can get some other voices in the room?

Anad (participant): [12:21] Oh, sure. Thanks. Thank you so much for joining us for the presentation. I'm just curious what your thoughts are about, in the absence of...you may not have race and ethnicity data. But I'm just curious about your thoughts on using ZIP code and some of the different things you can derive from ZIP codes, like deprivation indices, ZIP code-based measures of income and racial segregation, and things like that. I'm curious what your thoughts are about those?

Sequist: And can I just ask specifically on that: do you mean using those to impute race, ethnicity, and language? Or do you mean using those variables to drive care improvements separate from race, ethnicity, and language?

Anad (participant): [12:56] I suppose you could, in a heavily segregated ZIP code or area, for example, theoretically get information about that. But I was more thinking about either using it potentially to get information about race and ethnicity, or as a sort of separate construct for evaluating deprivation.

Sequist: [13:14] Yeah, on the use of various imputation methodologies: there's been a lot of work on that. For those methodologies, whether you use surname analysis, the ZIP code where you live, or other ways to impute race, ethnicity, and language, the sensitivity and specificity vary based on the race, as I think Karthik had mentioned. I'm American Indian; they're really bad at identifying someone as American Indian. They can be better if you are Black, and they can be pretty accurate for Latino populations. But it's always important to remember those models are accurate at the population level. They're actually not accurate at the individual person level. So I always tell people, for imputation, go for it if you're trying to describe the inequity or describe population health outcomes. But if you're going to send a letter to someone, because you're targeting patients who don't speak English, let's say, and you're offering a service to them, interpreter services, don't use imputation methodologies, because you risk offending people by making an assumption about their language or their race or their ethnicity.
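The population-versus-individual distinction can be made concrete with a synthetic example. The numbers, ZIP code, and majority labels below are made up for illustration and are not MGB data or any real imputation model; the point is only that assigning everyone a ZIP's majority group can look accurate overall while having zero sensitivity for a minority group.

```python
# Synthetic illustration (made-up numbers, not real data): imputing
# everyone in a ZIP code as that ZIP's majority group looks accurate
# at the population level yet is wrong for every minority individual.

def impute_by_zip_majority(patients, zip_majority):
    """Assign each patient the majority race of their ZIP code."""
    return [{**p, "imputed": zip_majority[p["zip"]]} for p in patients]

zip_majority = {"02101": "White"}  # hypothetical ZIP and majority group
patients = [{"zip": "02101", "race": r} for r in ["White"] * 8 + ["Black"] * 2]

imputed = impute_by_zip_majority(patients, zip_majority)
overall = sum(p["race"] == p["imputed"] for p in imputed) / len(imputed)
black = [p for p in imputed if p["race"] == "Black"]
black_hits = sum(p["race"] == p["imputed"] for p in black) / len(black)

print(overall)     # 0.8: looks reasonable at the population level
print(black_hits)  # 0.0: wrong for every Black patient individually
```

This is why the guidance above distinguishes describing population health outcomes (where imputation can be acceptable) from patient-level targeting such as interpreter-service letters (where it is not).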

[14:31] To your question around the use of those kinds of things, like deprivation indices or ZIP code or other analyses, just for doing programs around inequity: just because of the topic for today, I would frame it around what the difference is between using those and using race and ethnicity. I think we just have to always be clear about what race and ethnicity are, why they exist, and what we can use them for. You know, people have probably heard this said over and over again, but race is a social construct, not a biologic variable, and it is often a proxy for racism.

[15:04] And so if what we are trying to do is measure the racism in the system, then the use of race is really important; that's like when we talked about clinical calculators and removing Black and other races from the estimation of your kidney function. We should separate that out from, say, trying to understand whether you have food and housing insecurity as a mediator of your poor outcomes from diabetes care; there, we should be measuring the actual social risk factor that's involved. Now, it is true that if you are Black, American Indian, or Latino, you have a higher risk of having food insecurity, which then leads to poor diabetes outcomes. There are all these different ways that we can describe the scenarios that our patients and community members live in; it's just really important for us to understand which variable we're using and for what purpose we're using it.

Duong: [16:04] Thank you, Dr Sequist. Dr Baker from the Joint Commission, if you can unmute, do you want to ask your question about disability information?

Baker: [16:16] Sure, happy to do that. Thanks for presenting, Tom; there's just so much great information. So, we've thought a lot about what organizations should be doing to collect disability information, and what I think is really needed is specific questions about patients' special needs. For example, if you ask about hearing impairment, there's a whole range of that, and at one end of it is "I must have an American Sign Language interpreter to be able to get care." That's really what you care the most about. There's a wide range as well of physical disabilities, and the key question is, are you able to get up independently onto an exam table, or do you need assistance? I'll tell you, as a primary care physician, it was a nightmare for me when a patient came in and we didn't have that, and they had to wait around half an hour for people who could get them up onto an exam table; similar issues for mammography. So it sounds like you're collecting the disabilities, but not necessarily what I consider the most critical information for delivering care.

Sequist: [17:29] Yeah, so I can answer that. Everyone on the call is going to agree that no matter how I answer this, David is not going to evaluate our hospitals from the Joint Commission (laughter) based on our compliance with this question (laughter). So we struggle in this space, I would say. We struggle partly from an Epic build standpoint; I'll use a specific example: language. We try to be careful about the words that we use; it's not "what languages do you speak," but "what language do you prefer to be communicated in." And the whole purpose of collecting that is so we can do a better job communicating with you. Well, if I collect that, I'd like to tie it within our clinical workflows to the delivery of interpreter services: can I document that the appropriate interpreter services were delivered and that they match the clinical scenario and the patient's preferred language? I think we struggle to get the right technical build to be able to do that within our electronic health record right now.

[18:39] And then on top of that, I would say we struggle with linking that set of knowledge to the other set of factors that may create challenges for a patient to receive optimal care. I can say we are working on this, but we have not solved it. If I knew that you are a person who had difficulty getting up onto the exam table, and didn't have reliable transportation, and had food insecurity, and you were Black, let's say, we might use all of that information to create what many of us would call a social-risk-informed care plan. All of that information, to be fair, exists in some form in the electronic health record right now, but in like five different modules of it. And as a clinician or clinical team comes in to care for a patient, we struggle to link all that together in a way that would help our community health workers and other team members actually create a comprehensive care plan for that patient.

[19:45] We are in the midst right now of implementing a new vendor product, a CRM, the kind of customer relationship management app that would actually let us do a job that many other industries already do. Airlines and such know "do you like an aisle or a window seat?" and "where do you fly to frequently?" We don't do that in health care right now. I have actually argued, I think successfully, in our organization that the use case for that kind of work is inequities work. If we can bring all this stuff together that you're talking about, David, we can put it together in a way that lets us create a comprehensive care plan for that patient. But, totally transparently, we're not there right now. We have a bunch of disparate data sources right now.

Duong: [20:34] Lou's asking a question about blended race and how to use that for reporting. Lou, do you want to speak up and ask your question? I think there's a lot of nuances there that folks are struggling with.

Louis Hart, MD: [20:46] Sure, and thank you, Tam, and thank you so much, Dr Sequist. So this is a question given the fact that the HHS and OMB standards have race and Hispanic or Latino identity as two separate fields; it's a very binary question, how they frame ethnicity. And there's a long political history with how we take the census: there were recommendations in the 2015 National Content Test for the 2020 census to combine Hispanic alongside the OMB five, as well as to add Middle Eastern or North African, but that was not heeded by the prior administration. So we're kind of stuck with this Hispanic identity separate from the OMB five races. For internal reporting, we looked at our patients who identified as Hispanic or Latino, and they were disproportionately, two-and-a-half times, more likely to choose "other" as their race. So we said their Hispanic identity is probably their racial identity. Race is a very American, western European concept; where they come from, or if they're immigrants, that's probably not how they identify. And we wanted to internally report out Hispanic alongside the OMB five so the categories count up to 100%. So we kind of took this "Hispanic, any race" approach.

[21:53] So if you said you were Hispanic, Black, and Asian, you were still just counted as Hispanic, and all of the other racial groups were counted as just that racial group. This was for internal reporting, just so that we could do some disparities analysis and have each group be bucketed. Albeit far from perfect, we could also do some sub-level analysis, looking at Hispanic Black, Hispanic white, and Hispanic Asian, to see if there are differences within that Hispanic identity. But is this really the best approach, in your opinion? Otherwise, you're comparing Hispanic as a binary, yes or no, and then only the five races; and if you have a large Latino population, you have a hard time if you're only doing it one way or the other. Thank you, sir.
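The "Hispanic, any race" bucketing Dr Hart describes can be sketched as a small assignment rule: Hispanic ethnicity overrides race for the top-level bucket, so the buckets are mutually exclusive and sum to 100%, while race is kept for sub-level analysis. The category labels, the simplified multiracial handling, and the function names below are illustrative, not the actual reporting logic of any institution.

```python
# Minimal sketch of "Hispanic, any race" bucketing for internal
# disparities reporting. Labels are illustrative; non-Hispanic
# multiracial handling is deliberately simplified here.

def report_bucket(ethnicity, races):
    """Top-level, mutually exclusive reporting bucket (sums to 100%)."""
    if ethnicity == "Hispanic or Latino":
        return "Hispanic (any race)"   # ethnicity overrides race
    return races[0] if races else "Unknown"

def sub_bucket(ethnicity, races):
    """Sub-level bucket for within-Hispanic analysis (Hispanic Black, ...)."""
    if ethnicity == "Hispanic or Latino" and races:
        return "Hispanic " + "/".join(races)
    return report_bucket(ethnicity, races)

print(report_bucket("Hispanic or Latino", ["Black", "Asian"]))  # Hispanic (any race)
print(sub_bucket("Hispanic or Latino", ["Black"]))              # Hispanic Black
print(report_bucket("Not Hispanic", ["Black"]))                 # Black
```

The design choice here mirrors the trade-off named in the question: the top-level buckets are comparable and exhaustive, while the sub-level view preserves the racial detail that the top-level collapse discards.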

Sequist: [22:27] That's a really good question, and one that gives all of us hives to try to figure out how to sort through. I'll go back over our journey in our system; I think we all may have lost a couple of years during the pandemic, but I think it was about four years ago that we started to take a hard look at our Epic build. I'm going to give you a long answer; the shorter answer is there's not a right answer to what you're describing. The longer answer is that we had three fields in our Epic build at the start of this. We had a race field, which actually included Hispanic as a race, as one of the options in the dropdown, along with Black and Asian and the other OMB-categorized races. Then we had an ethnicity field that was just Hispanic or non-Hispanic, which was meant for OMB reporting purposes.

[23:20] Then we had another field that was called "other ethnicity," which had like 100 or so choices in it, at all different kinds of levels. It had specific American Indian tribes, but only a couple; there are about 300 tribes, and it had like three or four listed in there, along with Native American, and remember, there was a separate race field that already had Native American in it. And then it would have countries, like it would say Iceland. And then it would have ethnicities, or it would say Jewish or other things. And so we did spend a lot of time analyzing, because we had been collecting data at that point for probably 10 years, and some of them were historic data imported from the decades we've been collecting data.

[24:16] We found all kinds of inconsistencies, where someone would be listed as not Hispanic in one of the fields but Hispanic in the other field. Someone had listed different races in different fields, and we weren't sure what to do with all of those. Someone had declined in one field, but it was actually populated in another field. And so we went through and, not that this was scientific, at least tried to be consistent in saying: here's the set of rules we're going to apply to reconcile all of this. And the reason we went through that set of rules to reconcile it is that we changed all the fields in Epic; we said, we've got to get rid of this, we just can't keep doing it this way. And to your point: for all the patients who were listed as Hispanic in that Hispanic/not Hispanic field but whose race said "other," we found the exact same thing that you just said, that "other" was being used by patients who were Latino and who didn't see an option for themselves in the dropdown. So we went through and created a set of rules. And again, just like you're saying, we use that set of rules for internal reporting. But we have to abide by the CMS and other reporting rules; they want it reported in a certain way, and so we report it in that way.
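The kind of reconciliation rule set described here can be sketched as an ordered precedence over the conflicting legacy fields. The precedence below (any affirmative "Hispanic" signal wins; a real value beats "Declined"; "Declined" beats blank) is purely illustrative: the actual MGB rule set is not published in this session, so every rule, field name, and label here is an assumption for demonstration.

```python
# Hypothetical sketch of rules for collapsing three conflicting legacy
# fields into one ethnicity value. The precedence order is illustrative
# only, not MGB's actual (unpublished) rule set.

def reconcile_ethnicity(ethnicity_field, legacy_race_field, other_ethnicity_field):
    """Collapse three legacy fields into a single ethnicity value."""
    signals = (ethnicity_field, legacy_race_field, other_ethnicity_field)
    if "Hispanic" in signals:
        return "Hispanic"            # an affirmative signal in any field wins
    if ethnicity_field == "Not Hispanic":
        return "Not Hispanic"
    if "Declined" in signals:
        return "Declined"            # an explicit decline beats a blank
    return "Unknown"                 # everything blank or uninformative

# Patient marked "Not Hispanic" in the ethnicity field but "Hispanic"
# in the legacy race dropdown: under this rule set, the affirmative wins.
print(reconcile_ethnicity("Not Hispanic", "Hispanic", None))  # Hispanic
print(reconcile_ethnicity("Declined", None, None))            # Declined
print(reconcile_ethnicity(None, None, None))                  # Unknown
```

Whatever the specific precedence, the value of writing the rules down this way is the consistency Dr Sequist emphasizes: the same conflict is always resolved the same way, and the rule set can be reviewed and revised explicitly.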

[25:36] But internally, I think we've developed our own internal set of decision points on what to do with these data. A, totally publicly, I'm happy to share with anyone (laughter) how we do that. But, B, I do not claim it to be some kind of internationally vetted system. We were just trying to clean up what was really, really, really messy data. [crosstalk] My tribe, by the way, didn't make it onto the list of the unique tribes that were listed in this "other ethnicity" field. I would also share my own personal experience: I have two children, and my first son was born at the Brigham. My wife is Japanese. He was listed as white, and I don't think anyone ever asked either of us what his race should be. So it's a very messy, messy process.

Duong: [26:26] Yeah. The next question, Dr Sequist, is from Ramona: are persons who identify as multiracial considered in your data collection and analysis?

Sequist: Yeah, so you can pick more than one race in our build. [crosstalk] Part of the training for our registrars is to make sure that patients are aware that it's not "pick the race that best describes you," the way CDC surveys often ask. We make it clear: pick all the races that describe you.

Duong: [27:07] Ramona, does that answer your question, or do you have any follow up to that?

Ramona: [27:11] No, that's fine. Thank you.

Duong: [27:13] Perfect. Thank you, Dr Sequist. I wanted to bring it back: we talk a lot about using Epic and patient portals, but a lot of folks have been talking about the digital divide and how patient portals tend to be used by more educated, maybe younger folks. So the question is, how does MGB ensure data accuracy for patients who might not be as active in your patient portal?

Sequist: [27:41] So our outreach was mail and telephone as well. The patient portal outreach was essentially free; it was expensive to do the other two versions. And honestly, I'm not really sure, to be fair, how successful it was. I think we're getting more success as people come into the clinical environment for clinical care, with the registrars and clinic staff who are interacting with them at that point updating the data.

Duong: [28:16] Are you sort of measuring the impact of that?

Sequist: [28:20] Yeah; I guess when I say I'm honestly not sure, I'm pretty sure that it wasn't that effective to do that, a large-scale outreach to millions of people.

Duong: [28:29] Yeah. Okay. Thanks to everybody who's been putting resources into the chat. I know Esteban is putting a few, and I saw one from Dr Baker. So I'll move on to Esteban's next question. Esteban, do you want to speak up and ask that question about accountability for EHR vendors?

Esteban Gershanik: [28:50] Yeah, sure. Tam, thanks so much; Dr Sequist, thank you for joining us today. I feel like we're in so many meetings together. One of the questions I asked was in reference to the EHR. I know how much work we put in at MGB on the EHR with Epic, and I know so many places across the country have invested so much in their electronic health records, up to billions of dollars, and we're currently building on top of the building on top of the building. How can we make EHR vendors a bit more accountable for some of these unintended consequences in their builds? I know the previous administration and CMS, when it came to interoperability, had a nudge, a little bit extra, for EHRs to be interoperable (inaudible). I'm trying to see how we can ensure that they have a little bit more accountability for this extra work that seems to fall especially on your desk, but on the desks of everyone involved here.

Sequist: [29:47] And so that is also a really important question. My really short answer is, I think the EHR vendors need to be held more accountable in this space. How do we get there? I think that provider systems need better alignment on what we are struggling with in relation to tackling equity and anti-racism because of EHR builds. At Mass General Brigham, I think we have been fortunate to have some resources, such that we are basically fixing the build ourselves, and that's fine; we've actually just reprogrammed some aspects of it. When we reach out to patients through our patient portal, or various aspects of the Epic build, we have translated it into like six different languages, and those languages were chosen based on the top languages that our patients speak. But we just did that entirely ourselves.

[30:45] Now, I do feel strongly that if we have programs that run through federal funding, like CMS, we should be able to leverage that somehow to say: the standard is that these languages need to be incorporated into the build of your EHR; the standard needs to be that you have to be able to collect X, Y, and Z data elements to facilitate care for these patients, because CMS is paying for their care, and you are essentially a vendor in that space.

[31:11] I really feel like we need a better coalition of provider organizations to get together around that space and really highlight how big a problem this is, because I think it's a pretty big problem, as we've been going through it. And I think we've been fortunate at Mass General Brigham to be able to get through that problem by basically solving it ourselves, because we have an information systems team and everything that's able to do that. But if you think about the penetration of EHRs across the country, with so many practices still being small physician group practices, how are those practices going to be able to do this? And the answer is, they're not. Right? So even if we think about the basic work that we did on fixing the race, ethnicity, and language fields, how would you do that if you're a small physician group practice? I don't think you can. So this is a really important external policy topic.

Esteban Gershanik: [32:08] And Tam, one of the things...and Karthik and everyone else, I think one of the things we could do from our peer-network standpoint is to leverage some of these points, because, as mentioned around race and ethnicity as well, the more numbers we have to make a push for some of these things, the more opportunity we have to make those changes. But thanks so much, Dr Sequist, for your comments.

Duong: [32:29] Yeah, thanks, Esteban. That's something we've been thinking about too, and we would love your thoughts on that as we move forward in the peer network. But I know we're moving toward questions around the EHR. I do want to make sure we answer some of the questions around trauma-informed data collection. So, Dr Sequist, we know that identifying inequities requires us to categorize populations by demographics, and we know that with any system of categorization, we risk traumatization. And data collection also means asking for really sensitive information. So our question to you: what do you think are the key factors to consider when collecting data through a trauma-informed lens?

Sequist: [33:20] I guess what I would say to that is, again, my short answer upfront is, I think we at Mass General Brigham, I won't speak for all health systems, obviously, need to do a better job in that space. The way that we initially approached this is, we did a lot of patient focus groups to try to understand what people are thinking in this space around collection of patient race, ethnicity, and language data, to better inform the types of questions that we ask our patients. But I don't think, and I think it's a fair critique, that community-member patients feel that the health system really understands what they're struggling with every day and what their communities have been through in the past, and I don't think health systems probably have an understanding of that, or even an understanding of how that's contributed to health outcomes, and even more broadly than health outcomes, to their lives overall. So our approach, the medium-length version of the answer is, we are doing lots of patient focus groups to try to understand and modify our interventions all the time around collection of race, ethnicity, and language, but I think that we have a lot more to do in that space.

Duong: [34:37] Yeah, I think the next question kind of follows up on that. So we were wondering, what community-centered approaches do you use to decide what data will be collected, how it will be collected, and how that information or data will be used? I hear you talking about focus groups; is there anything else, or is there a strategy that you can share?

Sequist: [34:59] Yeah, and I just want to be clear, we frame it around what kind of data. So for race, ethnicity, and language, I'm not sure that we have a community-centered approach, because we're collecting race, ethnicity, and language as the primary data fields. But if we're talking about what other kinds of data to collect, like domestic violence data, housing security data, food insecurity data, and other types of social risk factors, for those data we actually do have a much more community-informed approach as to how we collect it and where we should collect it. And, like many others, we have a community health needs-assessment process, and we have community advisory boards at each of our hospitals that are informing that process as well.

Duong: [35:47] Okay, if anyone else has any follow-up to that, please chime in. I know that was a question that came up multiple times in previous peer-network calls. Okay, so let me see. Let me go back to Kelly's question around international patients. Kelly, do you want to unmute and ask your question around birthplace and longest lived?

Kelly: [36:12] Yes, thanks. So almost 30% of our patients at MD Anderson are international, and like 10% of them are Arabic. And we had a patient history database that allowed them to fill in their ethnicity. And when I looked at the data, my observation was that they were just confused. Like, what does ethnicity mean, Hispanic versus non-Hispanic, to an international patient? Right? It's really relevant to the US, but it's very confusing, I believe, for international patients. And so we also had birthplace and longest lived. I don't believe Epic has longest lived, but birthplace they do, though we don't capture that. What are your thoughts on that kind of data?

Sequist: [37:02] So on the first comment that you made, I totally agree. Ethnicity has been the hardest thing for us to have talking points for our registrars and clinic staff to explain to patients, because ethnicity and nationality and culture, it is just very hard to explain to people, and at a certain point we have to recognize these are fairly artificial constructs. And so we're trying to explain an artificial construct to folks. But they also do have some utility as categorizations, because they have important implications for health outcomes, and so they're often just markers for other things. So I'm just sort of acknowledging what you said; we struggle with the same thing. We are not routinely collecting those elements that you were talking about right now; that's not a part of our standard data collection process. So I can't comment on how that would go or how it's going for us right now. Those do seem like good elements for us to be collecting, but it's not part of our QI right now.

Duong: [38:14] All right, thank you. Let's see, are there any other questions? Oh, I see, Consuelo had a question about Epic as well. Are you on? Can you unmute and ask your question?

Consuelo: [38:28] Yeah, just hearing some of the comments, it's very frustrating that we're all going through the same issues. And I think even as we work to try to solve some of these with the vendors and develop our own workarounds, we still run into other issues. So for example, at Vanderbilt, for many months we worked on an approach where we were just going to have one question. And the question was going to be, which of these best describes you, ideally not having to say race or ethnicity, because those are, obviously, as other people are saying, confusing. The options would be the five OMB categories for race, but also adding Hispanic or Latino ethnicity into that list, and Middle Eastern or North African into that list, and select as many as you'd like.

[39:31] And then we were also going to use that third category that was mentioned, where you can have the detailed ethnicities, and we had selected nearly 90 for our group. So we did all of the internal testing and figured out how to make that work. And then we learned that for FHIR interoperability, we were having difficulty actually getting the CDC codes to connect. So all of these systems have really been built around this racist idea, the idea that we need to collect Hispanic ethnicity separately, and I think that is plaguing a lot of the solutions that we're trying to deliver. So I feel like we need some federal-level accountability as well. Like, when are we going to reject these ideas? Somehow we can list Asian ethnicities in the race field, and that gets coded, but we can't list Hispanic ethnicity in the race field and get it coded correctly. So I feel like there has to be some other strategy. It's not just the responsibility of the EHR vendors to fix this.

David Baker: [40:53] Can I add a comment, Tam?

Duong: Yes, please.

Baker: [40:58] David Baker. So about 15 years ago, when I was at Northwestern, we did a proof-of-concept test to try to collect data in a more patient-centered fashion. We created a database of literally every identified race and ethnicity in the world, and set it up so that we just asked people, again, with all of the challenges of what race and ethnicity mean, but we asked them that, with a standard drop-down field like you would have for medications: you type in the first three letters, and if somebody said they were Jamaican, they type in "Ja", and Jamaican comes up and Japanese comes up, and you click the one that you want, and then it asks, anything else? I think we had up to five different fields that you could fill in, and then on the back end, you could classify it as granularly, or into whatever grouping was necessary, to use the data. What you would lose with that system is, if somebody said that they identified as Black or African American, and they were not specifically asked about Hispanic ethnicity, then you could misclassify the, in most places, 1% to 1.5% that fall into some of those race/ethnicity categories.

[42:28] So it's got limitations; I put in the reference on this. But it took, on average, just a very short period of time to ask this, and it's much more patient-centered. And you think about the Asian category: what does that mean? If you're talking about care delivery, is that patient Hmong? Is that person a traumatized person from Afghanistan? So you go through all of this, and you have to ask, why do we want these data? We want them to be able to understand the populations that we serve, so that we can reach out to those communities better and we can understand their needs. So we really need to get to a way where we can collect very granular data and also do appropriate analyses. Because if you're looking, for example, at Black versus White, and you're using those data, well, then the Latinos are often put into the White category. And if you have disparities among the Latino population, you sort of decrease the Black-White comparison; it's a bias toward the null. So we need to think outside of the box and get away from these lists and actually use the same modern technology that we routinely use in health care for other things.
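The type-ahead lookup and back-end rollup Dr Baker describes could be sketched roughly as follows. This is an illustrative assumption, not the actual Northwestern database: the identity entries, the bucket names, and the function names are all invented for the sketch.

```python
# Hypothetical sketch of a type-ahead self-identification field with a
# back-end rollup to broader reporting buckets. Entries are illustrative.
IDENTITIES = {
    "Jamaican": "Black or African American",
    "Japanese": "Asian",
    "Hmong": "Asian",
    "Somali": "Black or African American",
    "Argentine": "Hispanic or Latino",
}

def type_ahead(prefix: str, limit: int = 10) -> list[str]:
    """Return identities matching the typed prefix (case-insensitive)."""
    p = prefix.lower()
    return sorted(i for i in IDENTITIES if i.lower().startswith(p))[:limit]

def classify(selections: list[str]) -> set[str]:
    """Roll granular self-identifications up to broader reporting buckets."""
    return {IDENTITIES[s] for s in selections if s in IDENTITIES}

print(type_ahead("Ja"))        # ['Jamaican', 'Japanese']
print(classify(["Jamaican"]))  # {'Black or African American'}
```

The point of the design is that patients pick whatever granular term fits them (up to several selections), and analysts choose the grouping at analysis time rather than forcing a fixed list at the point of collection.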

Duong: [43:48] Thank you. And Dr Baker, I know, Esteban, you've been doing a lot of work around this at the state level. Do you want to chime in?

Gershanik: [43:56] Yeah. Great comments. I mean, this is one of the things that, when I used to do work in the public health sector, both federally and at the state level, we often discussed...one of the things I put in the chat was some work that New York had done on the Latinos in their community, because one of the things that we used to discuss in the Latino community is, like, my family's from Argentina: are our diets a little bit different than the diets of people from other nations or other countries? We have the largest Honduran population in Louisiana. So we're talking about different diets and how that affects different communities, and so forth. But one of the things that we were recently discussing with the Commonwealth of Massachusetts, on their Equity Data Technical Advisory Group, was asking the questions, Consuelo, as you mentioned, because often, when you ask Hispanic, non-Hispanic, many people would be like, "I'm Puerto Rican," or "I'm Dominican," or "I'm Argentine." People don't say, I'm Hispanic or non-Hispanic.

[44:51] And so one of the things that we did is we crosswalked, for example, the FHIR superset and other supersets into the other things, so you would have a list of everyone, and then be able to frame the question into the bucket that people want to put people in. And then...we actually had a call this morning, separately, Regan Marsh and I, with some folks in Minnesota and North Dakota, knowing that in Minnesota they have a large Somali population. So one of the things we've discussed across different states is that different communities have a larger percentage of certain populations, and you want to make sure that you're targeting that in an appropriate way. When I moved to Massachusetts...to Boston, I started running into a large Cape Verdean population, and when I worked in Chelsea, there was a large Salvadoran population. So based on the populations, it's about capturing that data in a way that can then feed into those bigger sets that are reportable, but then, for your own group or your own entity or your own system, being able to have things that translate for the communities you're trying to treat. So I think there's a lot of work that we could do as a peer network to help establish the ways. And I know Dr Sequist was talking about this...MGB has done some work, along with other places across the country, on how do you ask questions in a way that allows patients to be more comfortable in answering some of these questions? But I totally hear you, Consuelo, with what you're saying.

Duong: [46:16] So let me just ask one question to you, Dr Sequist. This is around moving to action. So you have these metrics that you're collecting and working to improve. Can you talk about your plan for how you're going to use these data to drive enterprise- or organization-wide changes and accountability?

Sequist: [46:43] Sure. I would say that there are two overarching notions in what we're doing. One is to be disciplined about prioritization of our work. There's a lot of excitement around equity, I'm sure everyone feels that, and so much pent-up work that needs to get done. But we can't do it all in the first couple of years, or even the first five years. And so we use these data to be very targeted. That's actually a pretty big challenge, because there are lots of stakeholders who want to work on inequities in cancer care, inequities in diabetes, or inequities in the emergency department or in the ICU, and it's very hard to do all of that. So we're using these data to be very targeted about the areas that we're going to work in. And then the second, I sort of mentioned this earlier, is using these data to hold ourselves accountable to achieving goals. So not just describing the problem, and not just implementing an intervention, but setting accountability goals, like we would set safety goals: what's the CLABSI rate that we think is acceptable in the hospital? What's the number of patient falls that we think is acceptable in the hospital? We set the same kinds of goals here.

[48:01] So for us, in the ambulatory space, those goals are around inequities in substance use disorder care and dismantling racism in the treatment of substance use disorder. And the second one is achievement of high blood pressure control for patients with hypertension, and again, addressing the inequities and the social risk factors and other things that contribute to those differences in outcome for hypertension management. So those are just a couple of examples. We have a bunch of other programs in other areas, but that gives you a sense of what we would do. And then we turn those into specific metrics. Our goal for this year is to improve the rate of blood pressure control for Black and Hispanic patients by 5% across our large cohort of patients with hypertension, so we're talking about a couple hundred thousand patients there. And then on substance use disorder, it's a structural measure: establish two new substance use disorder bridge clinics and increase the visit rate in those bridge clinics by X percent, it's about 10%. And then we will move on to treatment of opioid use disorder, looking at buprenorphine prescription rates among Black and Hispanic patients.
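The stratified accountability metric described here, a blood pressure control rate broken out by race/ethnicity group so an equity target can be tracked, might look like this in outline. The patient records, field names, and the 140/90 control threshold are illustrative assumptions for the sketch, not Mass General Brigham's actual measure definition.

```python
# Hypothetical sketch: compute BP control rates per demographic group so
# an equity goal (e.g., improving specific groups' rates) can be tracked.
from collections import defaultdict

def control_rates(patients: list[dict]) -> dict[str, float]:
    """Fraction of hypertension patients with controlled BP, per group."""
    totals: dict = defaultdict(int)
    controlled: dict = defaultdict(int)
    for p in patients:
        totals[p["group"]] += 1
        # Illustrative control threshold; real measures define this precisely.
        if p["systolic"] < 140 and p["diastolic"] < 90:
            controlled[p["group"]] += 1
    return {g: controlled[g] / totals[g] for g in totals}

cohort = [
    {"group": "Black", "systolic": 132, "diastolic": 84},
    {"group": "Black", "systolic": 150, "diastolic": 95},
    {"group": "White", "systolic": 128, "diastolic": 80},
]
print(control_rates(cohort))  # {'Black': 0.5, 'White': 1.0}
```

Reporting the rate per group, rather than a single overall rate, is what turns the measure into an accountability tool: the gap between groups, and its change over time, becomes the goal.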

Video Information

CME Disclosure Statement: Unless noted, all individuals in control of content reported no relevant financial relationships.

If applicable, all relevant financial relationships have been mitigated.

AMA CME Accreditation Information

Credit Designation Statement: The American Medical Association designates this Enduring Material activity for a maximum of 0.75 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to:

  • 0.75 Medical Knowledge MOC points in the American Board of Internal Medicine's (ABIM) Maintenance of Certification (MOC) program;
  • 0.75 Self-Assessment points in the American Board of Otolaryngology – Head and Neck Surgery’s (ABOHNS) Continuing Certification program;
  • 0.75 MOC points in the American Board of Pediatrics’ (ABP) Maintenance of Certification (MOC) program;
  • 0.75 Lifelong Learning points in the American Board of Pathology’s (ABPath) Continuing Certification program; and
  • 0.75 credit toward the CME [and Self-Assessment requirements] of the American Board of Surgery’s Continuous Certification program

It is the CME activity provider's responsibility to submit participant completion information to ACCME for the purpose of granting MOC credit.

