DMA and Beyond Conference: Bridging the Gap Between Regulatory Intent and User Experience
In February I had the privilege of presenting at the "DMA and Beyond" conference, co-hosted by the Knight-Georgetown Institute (KGI) and Yale Tobin Center's Digital Economy Project in Washington, DC. This two-day event brought together researchers, policy experts, regulators, industry professionals, and civil society representatives to explore the lessons, challenges, and opportunities of competition regulation in digital markets.
As the Digital Markets Act (DMA) completes its first year of required compliance in the EU, this conference provided a timely opportunity to assess early results and discuss novel approaches to stimulating competition in the tech sector. The agenda featured an impressive lineup of speakers, including MEP Andreas Schwab, Filomena Chirico from the European Commission, and Anu Bradford from Columbia University.
Designing Better Digital Competition Remedies: Lessons from User Research
During the Institutional Considerations session, I shared Mozilla's research on how user-centered design methodologies can enhance the effectiveness of competition remedies. My presentation explored how systematic user research can bridge the critical gap between regulatory intent and actual user experience.
The core argument of my paper is that while regulators invest significant effort in understanding market distortions and crafting regulations to address them, the design and implementation of resulting remedies are often left to the very operating system providers being regulated. This creates a fundamental disconnect—companies may technically comply with remedy requirements, but without proper testing, the intended impact often falls short.
Research Methodologies
1. Concept Testing: Exploring New Ideas
Our concept testing research with 108 participants across three countries evaluated four distinct design concepts aimed at improving user engagement and comprehension in browser selection. This early-stage exploration revealed several critical insights:
Intervention Timing: Interruptions during unrelated tasks (like accessing the Play Store) generated frustration and reduced thoughtful consideration of alternatives. Interventions should align with natural decision points to encourage meaningful engagement.
Notification Fatigue: Users frequently dismissed pop-ups and banners without meaningful engagement, suggesting that non-disruptive, persistent features accessible at the user's discretion may be more effective.
Default Inertia: Pre-installed browsers and complex operating system default settings create a strong status quo bias that is difficult for users to overcome. This suggests remedies should present alternatives in a neutral manner and provide ongoing opportunities for exploration rather than relying on a single choice moment.
2. Behavioral Experiments: Measuring What Works
Our large-scale browser choice screen experiment involving 12,000 participants across three European countries systematically evaluated how choice screen design influences user decision-making and satisfaction. The findings demonstrated:
User Retention and Satisfaction: Active browser selection through choice screens led to stronger anticipated user commitment. While only 54% of participants in the control group expected to keep the pre-installed default browser, 98% of participants who actively selected a browser through a choice screen expected to maintain their selection.
Information Design: Screens providing comprehensive information about browser options yielded better outcomes, with users both preferring this approach and making more diverse browser selections.
Timing Impact: Presenting choice screens during device setup was significantly more effective than showing them at first browser use. When users encountered choice screens only after clicking their pre-installed browser, they were far more likely to stick with that pre-installed option.
Position Effects: Browser placement within choice screens powerfully influenced selection, with browsers listed first receiving higher selection rates regardless of other factors—highlighting the need for randomized presentation.
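The position-effects finding implies that a choice screen should present browsers in a fresh random order for each user, so no vendor systematically benefits from being listed first. A minimal sketch of per-user randomization (the browser list and function name here are illustrative, not from the study):

```python
import random

def randomized_choice_order(browsers, seed=None):
    """Return the browser list in a per-user random order, leaving the
    canonical list untouched. An optional seed makes the order
    reproducible for testing."""
    rng = random.Random(seed)
    order = list(browsers)   # copy so the source list is never mutated
    rng.shuffle(order)       # unbiased Fisher-Yates shuffle
    return order

# Example: each call produces an independently randomized ordering.
browsers = ["Firefox", "Chrome", "Safari", "Edge", "Opera"]
print(randomized_choice_order(browsers))
```

In practice the shuffle would run once per device at screen render time; logging the order shown alongside the user's selection is what makes position effects measurable in the first place.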
3. Usability Testing: Identifying Implementation Barriers
Our usability testing with 26 participants across iOS and Windows platforms revealed substantial gaps between the DMA's requirement for "easily" changeable defaults and users' actual experiences:
Navigation Challenges: Settings were buried in unexpected places, with users on both platforms struggling to locate default browser options. On iOS, settings were hidden within individual app settings rather than in a central location, while Windows users frequently became lost in unrelated categories.
Search Functionality Issues: Common search terms like "browser," "default," and "internet" yielded irrelevant results or no results at all, preventing users from finding settings.
Self-Preferencing Behaviors: Both platforms subtly or explicitly nudged users toward platform browsers through hidden menus and misleading labels. iOS subtly steered users toward Safari by hiding default browser settings when Safari was the current default, while Windows promoted Edge through prominent "Recommended Browser Settings" that users mistook for general browser settings.
Key Recommendations for Regulators
Based on our research, I presented five recommendations for improving competition remedy effectiveness:
1. Integrate User Research Throughout Remedy Development
From initial concept testing through implementation and monitoring
Test and refine proposed remedies to identify potential barriers and challenges
2. Require Systematic Evaluation of Remedies
Mandate that gatekeepers conduct and share user research about remedy effectiveness
Include mechanisms for ongoing assessment as platforms update their interfaces
3. Base Remedies on Empirical Evidence About User Behavior
Evaluate effectiveness through systematic observation of actual user behavior
Consider the full user journey, including initial discovery, engagement, and long-term usage
Establish clear standards for successful implementation based on measurable outcomes
4. Design Innovative Remedies for All Users
Explore new solutions rather than relying on previously attempted remedies
Consider complementary interventions beyond single choice moments
Provide ongoing opportunities for user choice
Ensure accessibility for varying levels of technical expertise
5. Foster Collaboration with Stakeholders
Establish mechanisms for stakeholder feedback throughout the remedy development process
Share research methodologies, results, and access to testing environments
Contribute user research expertise to the regulatory process
Looking Forward
The conference discussions highlighted that while enacting technology regulation is a massive undertaking, it is vital to make the most of these rare opportunities to restore competition for the benefit of consumers. By integrating user research methodologies throughout the remedy development process, regulators can craft interventions that meaningfully promote competition in digital markets.
I'm grateful to the Knight-Georgetown Institute and the Yale Tobin Center for organizing this important forum and to all the participants who contributed to the rich discussions. As we move forward, I'm hopeful that the research-based approaches I presented will contribute to more effective competition remedies that genuinely empower users.
Talk Recording:
For more information about the DMA and Beyond conference and to access the full conference materials, visit the Knight-Georgetown Institute website.