Why Researchers Are Rethinking How They Study Kids' Health: The Digital Survey Problem
Researchers studying children's health through digital surveys face a significant challenge: most protocols are poorly designed for how kids actually think and behave, leading to incomplete data that may not reflect real-world experiences. A comprehensive review of 37 studies examining digital ecological momentary assessment (EMA), a method that asks children to report their health experiences in real time via smartphones or tablets, found that while the approach shows promise, critical gaps in child-centered design are undermining the quality of findings.
What Is Digital Ecological Momentary Assessment, and Why Does It Matter for Kids?
Digital EMA is a research method that sends brief surveys to participants throughout their day, asking them to report their experiences, emotions, symptoms, or behaviors as they happen, rather than relying on memory. For children aged 5 to 11 years, this approach has real advantages over traditional methods. Unlike asking a child to recall what happened a week ago, EMA captures experiences in the moment, when details are fresh and accurate.
Children at this developmental stage have a unique cognitive profile. They can reflect on their own emotions and thoughts at a level similar to adults, yet their long-term memory is still developing, making it difficult for them to accurately remember past events. They may also struggle to express health experiences verbally, especially in unfamiliar clinical settings. Digital EMA sidesteps these barriers by letting children report in the moment, potentially providing richer data about their daily experiences, mood fluctuations, pain levels, or behavioral patterns.
How Are Researchers Currently Using Digital Surveys With Children?
The review examined 17 distinct EMA protocols across 37 studies, revealing how researchers are attempting to gather health data from preadolescents. Most protocols targeted children without diagnosed health conditions, used handheld devices like tablets or smartphones, lasted between 3 and 28 days, and delivered survey prompts on a fixed schedule rather than in response to specific events.
Response rates, which measure the percentage of surveys children actually completed, varied widely. Among the 15 protocols with available response data, completion rates ranged from 48% to 92%. Only six protocols achieved the gold standard of 80% or higher response rates, suggesting that many current approaches struggle to keep children engaged.
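The response rates above are simple completion ratios: prompts completed divided by prompts delivered. A minimal sketch of that arithmetic, using invented numbers rather than figures from the review:

```python
# Response rate = completed prompts / delivered prompts, as a percentage.
# The counts below are illustrative, not taken from the reviewed studies.

def response_rate(completed: int, delivered: int) -> float:
    """Percentage of delivered EMA prompts that a child completed."""
    if delivered == 0:
        raise ValueError("no prompts delivered")
    return 100.0 * completed / delivered

# Example: a 14-day protocol prompting 3 times per day delivers 42 prompts.
delivered = 14 * 3
rate = response_rate(completed=34, delivered=delivered)
print(f"{rate:.1f}%")            # 81.0% — above the 80% benchmark
meets_benchmark = rate >= 80.0
```

Framed this way, the "gold standard" is simply a protocol where children answer at least four out of every five prompts they receive.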
What Makes Some Digital Health Surveys Work Better Than Others?
The research identified several factors that distinguish high-performing protocols from those with lower engagement. Protocols that achieved 80% or higher response rates typically shared common features:
- Participant Age: Protocols involving older children within the 5-11 age range showed better completion rates than those targeting younger participants.
- Study Duration: Longer studies lasting three weeks or more tended to maintain higher response rates, possibly because children and families had more time to adapt to the routine.
- Survey Structure: Fixed schedules with 20 or more questions per prompt, delivered three or four times daily, combined with timing customization and incentives, produced the strongest engagement.
- Clinical vs. Community Samples: Children with diagnosed health conditions showed higher completion rates than those in general population studies, likely because they and their families were more motivated to participate.
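The high-engagement profile above can be summarized as a protocol configuration. A hypothetical sketch of what such a configuration might look like in code; all field names and values here are illustrative assumptions, not drawn from any specific reviewed protocol:

```python
# Hypothetical fixed-schedule EMA protocol matching the high-engagement
# profile: 3-4 daily prompts of 20+ items over three or more weeks, with
# timing customization, reminders, and incentives. Values are illustrative.
from dataclasses import dataclass

@dataclass
class EMAProtocol:
    prompts_per_day: int            # fixed schedule, not event-triggered
    items_per_prompt: int
    duration_days: int
    default_times: list[str]        # 24-hour clock, customizable per child
    reminder_after_minutes: int     # nudge if a prompt goes unanswered
    incentives: bool

protocol = EMAProtocol(
    prompts_per_day=4,
    items_per_prompt=20,
    duration_days=21,               # three weeks or longer
    default_times=["07:30", "12:00", "16:00", "19:30"],
    reminder_after_minutes=15,
    incentives=True,
)

# Timing customization: a caregiver shifts prompts to fit the family routine.
protocol.default_times = ["08:00", "12:30", "16:30", "20:00"]
```

The key design choice is that the schedule is fixed but adjustable: the same number of prompts arrives every day, while the clock times can move to suit each child.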
Beyond protocol design, facilitators for successful engagement included uncomplicated, visually engaging technology, reminder notifications, and active caregiver involvement. When parents understood the study's purpose and helped their children remember to complete surveys, completion rates improved significantly.
What Barriers Are Preventing Better Data Collection From Children?
Despite digital EMA's potential for pediatric research, substantial obstacles limit its effectiveness. One barrier is device burden: children found it tiring or inconvenient to carry and use devices throughout the day. Restricted device access was another major issue; many children lack consistent access to smartphones or tablets, or parents limit their use. Children also struggled with accurate self-reporting, particularly younger participants who had difficulty remembering details or understanding survey questions.
Social and psychological factors also played a role. Some children experienced stigma or embarrassment about using devices in public or at school. Limited awareness of the study's importance and insufficient caregiver support further reduced participation. These barriers suggest that current protocols often fail to account for the real-world constraints and developmental needs of preadolescents.
Why Are Current Research Standards Falling Short?
A critical finding from the review concerns the quality of reporting and potential bias in existing studies. Thirteen of the 17 protocols examined were rated at critical risk of bias, meaning the way results were reported or data were collected raises serious questions about reliability. Key information needed to fairly compare protocols, such as the raw number of surveys planned versus completed, was either missing or selectively reported.
This reporting gap has important implications. Researchers cannot easily determine which protocol features actually drive better engagement because the underlying data are incomplete or inconsistently documented. As a result, the strength of evidence supporting current EMA approaches for children remains limited, and findings may not be as robust as they appear.
How Can Researchers Improve Digital Health Surveys for Children?
The review offers several recommendations for advancing the field. Researchers need to prioritize child-centered design, meaning protocols should be developed with input from children themselves about what feels engaging, manageable, and appropriate for their age. This includes simplifying technology interfaces, reducing survey length, and building in flexibility so children can complete surveys at times that fit their schedules.
Improved reporting standards are equally critical. When researchers publish findings, they must include complete data on how many surveys were planned, how many were actually completed, and demographic details about who participated. This transparency would allow other scientists to fairly compare protocols and identify which design features genuinely improve engagement.
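The transparency being asked for is modest: report the raw planned and completed counts, not just a summary percentage. A minimal sketch of what that reporting might look like; the protocol names and numbers are invented for illustration:

```python
# Illustrative planned-vs-completed reporting, the kind of raw data the
# review says is often missing. All names and counts are invented.

protocols = [
    {"name": "Protocol A", "planned": 840, "completed": 760},
    {"name": "Protocol B", "planned": 504, "completed": 290},
]

for p in protocols:
    p["rate"] = 100.0 * p["completed"] / p["planned"]
    print(f'{p["name"]}: {p["completed"]}/{p["planned"]} '
          f'({p["rate"]:.0f}% completion)')
```

With the raw counts published, any reader can recompute the rates and compare protocols on equal footing; with only the percentage, that check is impossible.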
Greater caregiver involvement and support also emerged as essential. Protocols that included parent education about the study's purpose and regular check-ins with families showed stronger completion rates. Additionally, customizing survey timing to fit individual children's routines, rather than using one-size-fits-all schedules, may reduce burden and improve data quality.
The broader implication is that preadolescents should not be treated as miniature adolescents or adults in research design. Children aged 5 to 11 have distinct cognitive, social, and emotional characteristics that require tailored approaches. As digital health monitoring becomes increasingly important for understanding child development, behavior, and mental health, getting the methodology right is essential for generating trustworthy evidence that can actually improve children's lives.