Turning Complaints Into Practical Action: Human Factors In Drug Delivery
By Alicia Douglas and Zhonghai (John) Li, Systems and Human Factors - Device Development and Technology - Merck & Co., Inc., Rahway, NJ, USA

Human factors (HF) work in drug–device combination products is most often associated with development activities—formative evaluations to shape design and summative validation to demonstrate safe and effective use. These studies are critical and rightly emphasized during regulatory review. Yet once a product is on the market, a different and often more revealing usability test begins.
Post‑Market Reality Is The Real Usability Test
Post‑market use occurs in the real world: with real users, in real environments, under time pressure, distractions, staffing constraints, and competing priorities. In that context, even products that successfully passed summative validation can generate persistent complaints—complaints that erode user trust, increase usage burden, and trigger regulatory, quality, and commercial scrutiny.
Post‑market HF sits at the intersection of usability science and lifecycle management. Its role is not to “re‑approve” a product or relitigate development decisions, but to pragmatically diagnose why complaints are occurring and to determine whether—and how—to intervene in a way that is both defensible and feasible. When executed well, post‑market HF translates complaint signals into insight, and insight into action that measurably reduces complaint drivers without creating unnecessary regulatory or supply risk.
Post‑Market Human Factors Is Different
Post‑market HF differs fundamentally from development‑phase HF not only in timing and constraints, but in objective. The goal is not to re‑establish basic usability or demonstrate risk control, but to evaluate whether targeted optimizations measurably reduce real‑world complaint drivers. Because post‑market changes often carry high regulatory, quality, and supply impact, evidence of effectiveness is essential.
HF during development is typically planned, iterative, and driven by the product team’s roadmap. Timelines are known, study objectives are clearly scoped, and design changes are expected outcomes. Post‑market HF work, by contrast, is almost always reactive and urgent.
Complaint signals may be trending in post‑market surveillance systems, circulating informally through commercial teams, or escalating through quality and regulatory channels. In some cases, external attention—from regulators, customers, or litigation—amplifies urgency. This pressure fundamentally changes how HF work must be framed, conducted, and communicated.
Several constraints dominate post‑market HF in ways that rarely apply during early development:
- Limited flexibility in the product interface. Marketed products may have locked configurations, constrained supplier options, or country‑specific labeling rules.
- Heightened sensitivity to interpretation. Study language, documentation, and conclusions must be carefully constructed to avoid mischaracterization as product defects or admissions of inadequate validation.
- Compressed timelines. Evidence is often needed quickly to support decisions about complaints, communications, training, or corrective actions.
- Cross‑functional dependence. Regulatory, legal, quality, marketing, and supply chain stakeholders must be aligned early because even small changes can have cascading implications.
As a result, post‑market HF must be both scientifically sound and operationally practical. The focus is not on identifying the “best possible” design in theory, but on determining what actions can realistically be implemented—and defended—while maintaining supply continuity, regulatory compliance, and customer confidence.
Post‑Market HF Is Not About Proving Users Are “Wrong”
A common pitfall in complaint investigations is framing use‑related issues as user error to be resolved through instructions. In HF, errors are framed as use errors rather than user errors. This distinction reflects that observed use issues are outcomes of the interaction between the user and the system—not failures of the individual. When a behavior is repeated across users, it signals an opportunity to improve the device, labeling, or overall user interface to better support safe and effective use. While Instructions for Use (IFUs) are necessary, post‑market experience repeatedly shows that complete and technically correct IFUs do not guarantee complaint‑free use.
Across post‑market investigations, a consistent pattern emerges: users may read instructions, but then perform tasks based on intuition, prior experience, training, or habit—especially when tasks feel routine. These behaviors are not irrational; they reflect users’ mental models of how similar products “should” work.
When complaints recur, the critical HF question is not whether users followed the IFU step‑by‑step, but whether the observed behavior is reasonable given the interface cues, context of use, and users’ expectations. If a large proportion of users make the same mistake, the system—not the individual—deserves scrutiny.
A Practical Framework For Post‑Market HF Evaluation
Effective post‑market HF work can be structured around a simple, evidence‑focused framework:
1. Start With The Complaint Signal
The starting point is a clearly defined complaint pattern, not anecdotal frustration. HF involvement is most valuable when complaint data are reviewed to identify recurring themes, failure modes, and contextual clues. This step often reveals whether complaints cluster around specific tasks, user populations, environments, or product variants.
Importantly, complaint narratives should be preserved in their original language. Early reframing or normalization can obscure the very signals HF analysis depends on.
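Where complaint data are available in structured form, this triage step can begin with a simple frequency and cross‑tabulation pass. The sketch below is illustrative only: it assumes a hypothetical export with coded columns for complaint theme, use task, and product variant, and the file and column names are placeholders rather than a real schema.
```python
# A minimal complaint-signal triage sketch, assuming complaints have been
# exported to a CSV with hypothetical coded columns: "theme" (complaint
# category), "task" (use step involved), and "variant" (product presentation).
import pandas as pd

complaints = pd.read_csv("complaints_export.csv")  # placeholder file name

# Recurring themes: a frequency table shows which coded categories dominate.
print(complaints["theme"].value_counts().head(10))

# Clustering check: cross-tabulate themes against product variant (the same
# pass can be repeated for task, environment, or user population) to see
# whether a theme concentrates in a specific cell or spreads evenly.
print(pd.crosstab(complaints["theme"], complaints["variant"], margins=True))
```
A theme that concentrates in one task or variant is a strong candidate for focused diagnostic work; an even spread across cells points toward a more systemic driver.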
2. Diagnose Before You Fix
Post‑market HF objectives should be conservative and diagnostic. The first goal is to understand how the complaint occurs and why users behave as they do—not to demonstrate that a proposed mitigation works.
Key questions include:
- Is the complaint driven by a product defect, a use issue, or an interaction between the two?
- Does the observed behavior align with users’ prior experience or training?
- Are interface cues supporting or contradicting intended use?
- Would mitigation reduce complaints in realistic use, or only under ideal conditions?
Effective diagnosis helps prevent studies from becoming implicit justifications for predetermined solutions.
3. Test Under Realistic Conditions
Post‑market HF studies must reflect actual use—not best‑case scenarios. Tasks should mirror how users encounter the product in practice, including time pressure, incomplete attention, and reliance on habits.
Artificially optimized conditions—such as repeated instruction reminders or extensive coaching—may suppress error rates in testing but fail to predict complaint reduction in the field. Realism is essential because post‑market HF findings are often used to justify high‑stakes decisions.
4. Generate Decision‑Grade Evidence
When feasible, post‑market HF evaluations should incorporate a comparison framework, such as a control group using the current marketed configuration and a treatment group exposed to the optimized product, labeling, or communication strategy. This allows direct comparison of complaint‑relevant behaviors and assessment of whether proposed optimizations meaningfully shift outcomes under realistic use conditions.
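As one hedged illustration of what that comparison can look like, the sketch below assumes a simulated‑use study in which each participant either does or does not exhibit the complaint‑relevant use error; the counts are placeholders, and Fisher's exact test is chosen because HF study arms are typically small.
```python
# A minimal control-vs-treatment comparison sketch; all counts are
# illustrative placeholders, not data from a real study.
from scipy.stats import fisher_exact

control_errors, control_n = 9, 30      # current marketed configuration
treatment_errors, treatment_n = 2, 30  # optimized labeling/configuration

# 2x2 table: rows are study arms, columns are error / no-error counts.
table = [
    [control_errors, control_n - control_errors],
    [treatment_errors, treatment_n - treatment_errors],
]

# Fisher's exact test suits the small samples typical of HF evaluations.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Use-error rate: control {control_errors / control_n:.0%}, "
      f"treatment {treatment_errors / treatment_n:.0%} (p = {p_value:.3f})")
```
With arms this small, a non‑significant result is not evidence of no effect; the statistical comparison should be weighed alongside behavioral observations and root‑cause notes when judging whether an optimization meaningfully shifts outcomes.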
The outcome of post‑market HF work is not a design scorecard; it is decision‑grade evidence. Data should clearly link observed behaviors to complaint drivers and support a reasoned assessment of whether intervention is warranted.
In many cases, the most effective mitigations are not product changes at all, but operational actions such as targeted training, refined complaint‑handling scripts, or focused user communications. Product or labeling changes should be pursued only when evidence suggests they will meaningfully reduce complaints in real‑world use and can be implemented with acceptable regulatory burden.
In this context, success is not defined by marginal task‑efficiency gains or abstract performance improvements, but by a demonstrable decrease in the behaviors that generate complaints, confusion, or downstream quality signals. This reframing ensures that post‑market HF supports lifecycle decisions with data that are both scientifically defensible and operationally relevant.
Takeaway: “Summative Tested” Does Not Mean “Complaint‑Free”
One of the most important lessons from post‑market HF is that demonstrated safe and effective use does not equate to complaint‑free performance. A product can meet all validation endpoints and still generate recurring complaint themes after launch.
These themes often reflect a gap between design intent and live use. In the field, users rely on experience, mental shortcuts, and environmental cues far more than written instructions—especially when tasks feel familiar. Validation studies, by necessity, control many of these factors; post‑market use does not.
When complaints arise, teams are often forced to answer difficult questions quickly:
- Is this a quality issue or a usability issue?
- Is user behavior reasonable given the interface?
- Would changing labeling meaningfully alter behavior?
- Is a design change worth the regulatory and supply impact?
Human factors provides a structured way to address these questions through hypothesis‑driven investigation, realistic testing, and disciplined interpretation of results.
Case Study 1: When “Ready‑to‑Use” Overrides Instructions
An injectable combination product generated post‑market complaints when healthcare professionals observed “potential foreign matter” and reported it as a quality concern. Laboratory analysis confirmed that the observation was expected: the formulation naturally contains sediment, and the product must go through specific preparation steps prior to administration. The Package Insert and carton clearly instruct users on how to prepare the product before use.
Despite complete and technically correct instructions, complaints persisted. Post‑market HF evaluation revealed a mismatch between the required task and the users’ mental model. For healthcare professionals, the device format this product uses strongly implies “ready‑to‑use”: formats of this type are typically associated with minimal preparation. As a result, many users did not expect additional preparation to be necessary and did not actively look for that instruction—particularly in fast‑paced clinical environments where time and attention are limited.
In realistic simulations, some participants read the instruction but still skipped the preparation step, defaulting to prior experience. Others did not consult the instructions at all. The observed behavior was efficient and habitual, not careless. Additional instructional interventions, such as more prominent carton messaging or reminder inserts, produced limited improvement. The fundamental issue was not awareness, but the strong “ready‑to‑use” cue communicated by the device format itself.
Case Study 2: When Mechanisms Are Not Transparent
An injectable combination product generated post‑market complaints related to leakage during preparation, often reported with use of an accessory that requires assembly. Quality investigations found no material defects, and prior HF validation studies had demonstrated safe and effective use with the finalized IFU.
Post‑market HF analysis replicated complaint scenarios and revealed that leakage frequently occurred when the components were not assembled properly. Although the IFU clearly described the attachment technique, users often deviated from it in practice. The root cause was not misunderstanding of the instructions, but how users naturally interacted with the accessory’s physical form.
Due to the accessory’s shape and size, many users instinctively held it at the wrong location: the grip felt stable and intuitive, yet it mechanically interfered with proper engagement. Because the working mechanism was not visually or tactilely transparent, incomplete attachment was difficult for users to detect. Subtle instructional enhancements had minimal impact, and alternative components introduced trade‑offs related to compatibility, supplier qualification, and regulatory burden.
Lessons Learned From Post‑Market HF Practice
Across post‑market investigations in drug delivery systems, several consistent lessons emerge:
- Safe and effective does not guarantee complaint‑free. Validation demonstrates acceptability under controlled conditions, not immunity to real‑world friction.
- IFUs alone are rarely sufficient. Instructions may be read, understood, and still ignored when they conflict with user expectations.
- Mental models matter more than text. Users default to prior experience, especially under routine or time‑pressured conditions.
- Component changes are not trivial. Replacing off‑the‑shelf components can resolve one issue while introducing others related to supply, compatibility, or regulation.
- Early training and communication matter. Focused education at launch can bridge gaps between design intent and user expectations before habits form.
Conclusion: Make The Right Thing The Easy Thing
Post‑market complaints are more than a quality metric—they are a window into the lived experience of a product. HF methods provide a disciplined way to convert that signal into understanding and understanding into action.
The goal of post‑market HF is pragmatic: diagnose why complaints are happening, determine whether intervention is warranted, and select mitigations that are feasible, defensible, and likely to reduce complaint drivers in real use. In many cases, the most effective solutions are those that make the intended behavior the easiest behavior—aligning design, training, and communication with how users actually think and work.
By treating post‑market HF as a distinct discipline with its own constraints and objectives, organizations can make faster, safer lifecycle decisions, reduce unnecessary changes, and ultimately deliver drug‑device products that perform not just in validation—but in the real world where it matters most.
About The Authors
Alicia Douglas is director of Human Factors in Merck’s Device Development & Technology group, providing strategic leadership for the human factors team and its activities across the product lifecycle for combination and non‑combination products. She brings 20 years of experience spanning pharmaceutical, medical device, and consumer healthcare sectors, with work across therapeutic areas from oral care to oncology. She is committed to advancing patient safety and product usability through user‑centered design and partner education that embeds human factors in development strategies.
Zhonghai (John) Li is an associate principal scientist in Human Factors for Drug–Device Combination Products at Merck Sharp & Dohme LLC, based in West Point, Pennsylvania. He leads human factors strategy and execution across the product lifecycle, encompassing combination products such as autoinjectors and prefilled syringes, as well as non‑combination products with a focus on dosage instruction. His work spans early development, regulatory submissions, and post‑market optimization, integrating usability engineering, risk management, labeling, and human‑centered design to support safe and effective product use globally.