
Industry Insight

Justice in Aging Launches Strategic Initiative to Advance Equity

Justice in Aging has announced a new strategic initiative to advance equity. The initiative deploys deliberate strategies and tools, along with dedicated staffing, to pursue systemic change in laws and policies that will improve the lives of low-income older adults who experience inequities rooted in historical, persistent, and structural racism, ageism, sexism, ableism, homophobia, and xenophobia, with a special focus on advancing racial equity.

For generations, systemic inequities and racism in health care, housing, education, employment, and access to wealth and resources have kept people of color, women, LGBTQ+ individuals, those living with disabilities, immigrants, and those with limited English proficiency from meeting their basic needs. These challenges persist throughout a person’s lifetime and compound for those living at the intersection of multiple identities. As people grow older, the challenges become even greater as they experience ageism.

Denny Chan, an attorney at Justice in Aging for the past seven years, is leading this effort as the directing attorney for equity advocacy. He is managing a cross-issue, cross-organizational team charged with implementing the initiative, and all Justice in Aging staff are contributing to this work.

The events of the past year have made it clear that it is past time to specifically address the structural inequities in our laws and policies that are fueled by racism and bias. These events include the inequities laid bare by the COVID-19 crisis, the national reckoning with anti-Black racism following the murders of George Floyd, Breonna Taylor, and others, and a string of violent attacks on Asian American older adults, including the shootings in Atlanta and beyond.

“I joined Justice in Aging because of our longstanding commitment to fighting the poverty and discrimination that low-income older adults face,” Chan says. “I’m thrilled that our new initiative to advance equity will allow us to center equity in all of our advocacy, with a specific focus on race equity for older adults of color. It is truly a turning point for us at Justice in Aging.”

“Going forward, we will prioritize issues, projects, and cases that either significantly impact or uniquely benefit older adults of color, older women, LGBTQ older adults, older adults with disabilities, and older adults who are immigrants or have limited English proficiency. We will seek policy solutions that are tailored to these communities and go beyond a one-size-fits-all approach that can exacerbate or mask existing disparities,” says Kevin Prindiville, executive director at Justice in Aging.

The organization is approaching all of its work with equity at the center by using tools, evaluation strategies, and data to closely examine the policy solutions it advances, the issues it trains advocates on, and the legal cases it brings. Chan is also charged with forging new partnerships, working with the communications team on new communications strategies, and more.

Learn more about Justice in Aging’s Strategic Initiative for Advancing Equity, and watch a video in which Chan shares why this work is personal.

— Source: Justice in Aging

 

Technology to Analyze Cancer Images Can Introduce Bias for Minority Patients

Artificial intelligence and deep learning models are powerful tools in cancer treatment. They can be used to analyze digital images of tumor biopsy samples, helping physicians quickly classify the type of cancer, predict prognosis, and guide a course of treatment for the patient. However, unless these algorithms are properly calibrated, they can make inaccurate or biased predictions.

A study led by researchers from the University of Chicago shows that deep learning models trained on large sets of cancer genetic and tissue histology data can easily identify the institution that submitted the images. The models, which use machine learning methods to “teach” themselves how to recognize certain cancer signatures, end up using the submitting site as a shortcut for predicting patient outcomes, lumping patients together with others from the same location instead of relying on each patient’s individual biology. This in turn may lead to bias and missed opportunities for treatment in patients from racial or ethnic minority groups, who may be more likely to be represented at certain medical centers and already struggle with access to care.

“We identified a glaring hole in the current methodology for deep learning model development which makes certain regions and patient populations more susceptible to being included in inaccurate algorithmic predictions,” says Alexander Pearson, MD, PhD, an assistant professor of medicine at UChicago Medicine and cosenior author. The study was published in Nature Communications.

One of the first steps in treatment for a cancer patient is taking a biopsy, or small tissue sample, of a tumor. A very thin slice of the tumor is affixed to a glass slide, which is stained with multicolored dyes for review by a pathologist to make a diagnosis. Digital images can then be created with a scanning microscope for storage and remote analysis. While these steps are mostly standard across pathology labs, minor variations in the color or amount of stain, in tissue processing techniques, and in the imaging equipment can create unique signatures, like tags, on each image. These location-specific signatures aren’t visible to the naked eye, but they are easily detected by powerful deep learning algorithms.
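
As a rough illustration of how little it takes for a model to pick up such a signature, the sketch below trains an ordinary logistic regression to guess the submitting site from nothing more than per-channel color averages of each slide image. This is not the study’s pipeline; the images are synthetic, and the subtle per-site color shifts are assumed purely for illustration.

    # Minimal sketch (not the study's pipeline): a plain logistic regression can
    # often recover the submitting site from simple stain-color statistics alone.
    # The "slides" are synthetic; the per-site color shifts are assumed for
    # illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def fake_slide_colors(site_shift, n):
        """Mean RGB values for n slides, plus a subtle site-specific color
        shift of the kind introduced by staining and scanner differences."""
        base = rng.normal(loc=[0.62, 0.35, 0.55], scale=0.05, size=(n, 3))
        return base + site_shift

    # Two hypothetical submitting sites with slightly different stains/scanners.
    X_a = fake_slide_colors(np.array([0.02, -0.02, 0.03]), n=500)
    X_b = fake_slide_colors(np.array([-0.02, 0.03, -0.02]), n=500)
    X = np.vstack([X_a, X_b])
    y = np.array([0] * 500 + [1] * 500)  # 0 = "Hospital A", 1 = "Hospital B"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("site prediction accuracy:", clf.score(X_te, y_te))

Even three summary numbers per image are enough to separate the two synthetic sites well above chance; a deep network looking at millions of pixels has far more such artifacts available to exploit.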

These algorithms have the potential to be a valuable tool for allowing physicians to quickly analyze a tumor and guide treatment options, but the introduction of this kind of bias means that the models aren’t always basing their analysis on the biological signatures they see in the images but rather on the image artifacts generated by differences between submitting sites.

Pearson and his colleagues studied the performance of deep learning models trained on data from The Cancer Genome Atlas, one of the largest repositories of cancer genetic and tissue image data. These models can predict survival rates, gene expression patterns, mutations, and more from the tissue histology, but the frequency of these patient characteristics varies widely depending on which institutions submitted the images, and the models often default to the “easiest” way to distinguish between samples: in this case, the submitting site.
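
A quick audit of how outcomes are distributed per submitting site makes this failure mode visible before any modeling. The sketch below uses hypothetical column names rather than the actual TCGA metadata schema; it simply tabulates an outcome label by site, and a large per-site imbalance is a warning that site identity can stand in for the outcome.

    # Quick audit (hypothetical column names, not the actual TCGA schema):
    # tabulate an outcome label by submitting site to see whether site identity
    # could stand in for the outcome itself.
    import pandas as pd

    df = pd.DataFrame({
        "site":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "outcome": [1, 1, 1, 0, 0, 0, 0, 1],  # 1 = favorable, 0 = unfavorable
    })

    # Fraction of favorable outcomes per submitting site.
    print(df.groupby("site")["outcome"].mean())
    # A site whose favorable-outcome rate sits far from the overall rate is a
    # candidate confounder: a model can score well simply by recognizing that
    # site's staining signature.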

For example, if Hospital A serves mostly affluent patients with more resources and better access to care, the images submitted from that hospital will generally indicate better patient outcomes and survival rates. If Hospital B serves a more disadvantaged population that struggles with access to quality care, the images that site submitted will generally predict worse outcomes.

The research team found that once the models identified which institution submitted the images, they tended to use that as a stand-in for other characteristics of the image, including ancestry. In other words, if a slide’s staining or imaging looked like it came from Hospital A, the models would predict better outcomes, whereas they would predict worse outcomes if it looked like an image from Hospital B. Likewise, if all patients at Hospital B had biological characteristics based on genetics that indicated a worse prognosis, the algorithm would link the worse outcomes to Hospital B’s staining patterns instead of the features it saw in the tissue.

“Algorithms are designed to find a signal to differentiate between images, and they do so lazily by identifying the site,” Pearson says. “We actually want to understand what biology within a tumor is more likely to predispose resistance to treatment or early metastatic disease, so we have to disentangle that site-specific digital histology signature from the true biological signal.”

The key to avoiding this kind of bias is to carefully consider the data used to train the models. Developers can make sure that disease outcomes are distributed evenly across all sites represented in the training data, or, when the distribution of outcomes is unequal, they can hold a given site out of training or testing. The result is more accurate tools that give physicians the information they need to quickly diagnose and plan treatments for cancer patients.
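
One common way to implement the second option is grouped, or leave-site-out, cross-validation, so that no submitting site contributes samples to both the training and test folds. The following is a minimal sketch using scikit-learn’s GroupKFold with synthetic features and hypothetical site labels, not the study’s actual evaluation code.

    # Minimal sketch of site-aware evaluation: GroupKFold keeps each submitting
    # site entirely within either the training fold or the test fold, so a model
    # cannot score well merely by memorizing site-specific staining artifacts.
    # Features, labels, and site assignments are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    n = 600
    X = rng.normal(size=(n, 20))        # stand-in for image-derived features
    y = rng.integers(0, 2, size=n)      # stand-in outcome label
    sites = rng.integers(0, 6, size=n)  # submitting site for each sample

    cv = GroupKFold(n_splits=3)         # folds never split a site
    scores = cross_val_score(RandomForestClassifier(random_state=0),
                             X, y, groups=sites, cv=cv)
    print("leave-site-out accuracy per fold:", scores)

With random labels, as here, fold accuracy should hover around 0.5; on real histology data, the gap between ordinary cross-validation and leave-site-out scores is a useful measure of how heavily a model is leaning on site-specific artifacts rather than biology.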

“The promise of artificial intelligence is the ability to bring accurate and rapid precision health to more people,” Pearson says. “In order to meet the needs of the disenfranchised members of our society, however, we have to be able to develop algorithms which are competent and make relevant predictions for everyone.”

— Source: University of Chicago Medical Center