Sunday, October 1, 2023

Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI – npj Digital Medicine

Case 1: Parental consent and subject assent

A tertiary academic center enrolls pediatric patients in a study that involves the creation of an AI/ML algorithm for assessing vascular malformations of the face. This study utilizes identifiable images of the face in its training data.

Because children and young people (CYP) cannot legally consent for themselves, federal regulations include special protections for pediatric study subjects, including parental consent and the assent of older pediatric subjects. It is essential that the consent process accounts for both chronological and developmental age. Communication with parents and subjects should cover the risks, benefits, and alternatives, both at the time of enrollment and in the future, and these discussions must be documented21. The social circumstances of the child should also be accounted for in the consent process, particularly where there are sources of complexity in the parent-child relationship. For children in state custody, researchers must determine whether subjects may be included in the study and, if so, who is legally responsible for providing consent. Researchers must explicitly discuss the relevant and important reasons for including identifiable pediatric data, such as a condition or presentation that uniquely affects children, or one whose symptoms are distinct from those in adults. Further, whether and how subjects will be able to remove their images from a dataset in the future should be disclosed during the consent process. Where data have already been used to train and test an algorithm and cannot be removed, this possibility must be disclosed to key stakeholders at the time of enrollment. The ACCEPT-AI framework highlights these key considerations.

Case 2: Communication and equity

An AI/ML researcher plans to develop an AI algorithm for assessing pneumonia on chest X-rays in the emergency room, with adolescents included in the study population. Both parental consent and participant assent are required. The researchers wish to ensure that the pediatric study population understands the risks and benefits of enrolling in the study. Beyond the study participants, they also wish to communicate their research to the broader pediatric community to seek feedback.

It has been acknowledged that engaging CYP in AI research is important22. A recent qualitative exploration of twenty-one CYP showed that they wished to contribute insights to the safe development of AI research22. Age-appropriate communication is the cornerstone of pediatric practice, and it is, therefore, crucial that all stakeholders are provided with relevant information on the purpose and nature of proposed AI/ML studies, and given examples of how their data may be utilized in the future. It is crucial that both chronological and developmental ages are factored into communication methods, given their relevance in several pediatric diseases.

When educating both parents and minor subjects, investigators should incorporate educational best practices. Where developmental delay is present in the subject or guardian, communication methods must be tailored appropriately. At the level of consultation with the child and family, investing in evidence-based decision aids has proven beneficial in enhancing decision-making capabilities23.

At the level of the community, efforts should be made to improve digital literacy among young persons and parents or guardians, inclusive of those from racial and ethnic minorities, rural and remote regions, and underrepresented disease groups. Collaborating with formal educational bodies to teach CYP broad concepts of AI/ML health research may improve familiarity, promote transparency and clarity of research intentions, and enable the exchange of ideas. Once an algorithm has been developed, further engagement in focus groups, where the relevant permissions are in place, may help iterate working models. ACCEPT-AI emphasizes the importance of communication to improve digital literacy and engagement throughout the AI life cycle, at the individual, parental, and community levels (Fig. 4).

Fig. 4

Levels of communication to improve digital literacy with key stakeholders as proposed by the ACCEPT-AI framework, adapted from McLeroy et al.27.

Case 3: Data protection and identification

Researchers review a large public skin-image database for training an ML algorithm that aims to diagnose skin lesions. They notice unlabeled pediatric data mixed into the dataset.

Pediatric data must only be utilized when the data and technology address a clear need of the pediatric population. Researchers must be transparent about the needs and potential benefits of data use in their protocols, and should clearly describe the measures taken to minimize risk to pediatric subjects. Adverse events should be clearly documented, with plans in the protocols for clinical evaluation using validated pediatric tools where possible. Currently, data protection laws governing de-identified data in the United States do not separate adult and pediatric data. Differentiating de-identified from identifiable data is a key consideration for safe data regulation, as the legislation surrounding consent and data protection differs between the two categories. In the United States, HIPAA supports applying the “Safe Harbor” rule to remove key identifiers from clinical patient data for secondary research use or, alternatively, suggests expert determination of adequate de-identification for study inclusion24. In Europe, the General Data Protection Regulation (GDPR) stipulates explicit consent from the patient as a prerequisite to data usage25. While the development of specific laws tailored to pediatric data usage may be beneficial, existing legal processes must be optimized for transparency with both pediatric subjects and their parents or legal guardians. Further, researchers must make clear in their protocols the measures taken to protect data security, and must be familiar with local laws for adolescent consent, given their geographical variance26. New pediatric data collection for AI/ML should meet the highest standards for data security without compromising patient privacy, as proposed by the key recommendations of ACCEPT-AI.
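To make the Safe Harbor idea concrete, the following is a minimal sketch of how direct identifiers might be stripped from a patient record before secondary research use. The field names, the identifier list, and the record structure are illustrative assumptions; the full Safe Harbor standard enumerates 18 identifier categories and additional conditions that this sketch does not implement.

```python
# Illustrative sketch of HIPAA Safe Harbor-style de-identification.
# Field names and the identifier list are simplified assumptions,
# not a complete implementation of the 18 Safe Harbor identifiers.

SAFE_HARBOR_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "mrn",
    "ssn", "device_id", "photo_url", "ip_address",
}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers removed,
    ZIP truncated to 3 digits, and ages over 89 aggregated."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_IDENTIFIERS}
    if "zip" in clean:
        clean["zip"] = str(clean["zip"])[:3]  # Safe Harbor: first 3 digits only
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"  # ages over 89 must be aggregated
    return clean

record = {"name": "A. Child", "age": 7, "zip": "02115", "mrn": "12345",
          "diagnosis": "vascular malformation"}
print(deidentify(record))
# {'age': 7, 'zip': '021', 'diagnosis': 'vascular malformation'}
```

In practice, de-identification of pediatric imaging data would also need to address identifiers embedded in the images themselves (faces, DICOM metadata), which record-level filtering cannot catch.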

Case 4: Key technological considerations for age-related algorithmic bias

Researchers train a predictive diagnostic algorithm using chest X-rays available on a public dataset. Images contain no age labels. Both adult and pediatric X-rays are used to train the ML model. The algorithm is then applied to an adult-only population.

Combining data across adults and children introduces age-related algorithmic bias and risks compromising the applicability, generalizability, and effectiveness of a study, with potential impact on both populations. Clear documentation of the objective for which pediatric data will be collected and used, in line with the ACCEPT-AI recommendations, will help ensure that key safety measures have been taken to avoid mixing data unless there are clear indications to do so. Reporting the AI/ML technique applied in each study or approved device is important so that pediatric data use maps to the needs of the research question. Further, researchers should provide details on whether an algorithm has been trained to work with adult data, pediatric data, or both. While vigilance is necessary at every stage of evaluation, ACCEPT-AI recommends three crucial checkpoints in the algorithmic life cycle that can be used to proactively assess for age-related bias: dataset curation, training, and testing (including deployment and post-deployment phases) (Fig. 5).

Fig. 5

Key checkpoints for evaluating age-related algorithmic bias using the ACCEPT-AI framework.
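The three checkpoints above could be operationalized as simple dataset and performance audits. The sketch below is one hypothetical way to do so; the field names, the 18-year pediatric cutoff, and the accuracy-gap metric are assumptions for illustration, not part of the ACCEPT-AI framework itself.

```python
# Illustrative audit for age-related algorithmic bias at the three
# checkpoints (dataset curation, training, testing/deployment).
# The cutoff and field names are assumptions for this sketch.

PEDIATRIC_CUTOFF = 18  # years; jurisdiction-dependent assumption

def curation_check(records: list) -> dict:
    """Checkpoint 1: every record must carry an age label, and the
    pediatric/adult mix should be an explicit, documented choice."""
    unlabeled = sum(1 for r in records if r.get("age") is None)
    pediatric = sum(1 for r in records
                    if r.get("age") is not None and r["age"] < PEDIATRIC_CUTOFF)
    return {"n": len(records), "unlabeled": unlabeled, "pediatric": pediatric}

def subgroup_gap(results: list) -> float:
    """Checkpoints 2-3: compare accuracy between pediatric and adult
    subgroups, given (age, correct) pairs from validation or deployment."""
    def acc(rows):
        return sum(ok for _, ok in rows) / len(rows) if rows else float("nan")
    peds = [(a, ok) for a, ok in results if a < PEDIATRIC_CUTOFF]
    adults = [(a, ok) for a, ok in results if a >= PEDIATRIC_CUTOFF]
    return abs(acc(peds) - acc(adults))

data = [{"age": 5}, {"age": 40}, {"age": None}]
print(curation_check(data))  # flags 1 unlabeled and 1 pediatric record
print(subgroup_gap([(5, True), (6, False), (40, True), (50, True)]))  # 0.5
```

In Case 4, the curation check would immediately flag the absence of age labels, and the subgroup comparison would reveal whether a model trained on mixed data degrades on either population before deployment.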

