Regulatory Attention To Human Factors: How Much Is There?
By Tom von Gunden, Chief Editor, Drug Delivery Leader

I’m back on the beat of precedent research conducted to bolster regulatory intelligence. I recently commented on the practice as a submission-support strategy in my editor’s take, “Regulatory Precedents For Drug Delivery: Uncovering Clues To Successful Submissions.” For that piece, which offered a high-level framing of the rationale behind the approach, I consulted combination products regulatory expert Doug Mead, a leading proponent and practitioner of employing GenAI tools for precedent research.
That article’s embedded Q&A with Doug had me asking him about his fuller explanation of precedent research in a guest piece he had authored for Drug Delivery Leader, “GenAI: The Muscle Behind Strong Regulatory Intelligence For Combination Products.” Together, my editorial overview and Doug’s detailed guest column were intended to serve as foundations for subsequent sets of companion pieces demonstrating the concept of precedent research translated into practical application.
For the first of those theory-into-practice illuminations, we chose Human Factors. Along with this From The Editor article you are currently consuming, the tandem includes Doug’s just-published guest column, “Human Factors: A Key Factor In Regulatory Approvals.” In the video and transcript below, you’ll find me talking with Doug about his article and the insights he offered in it after querying FDA databases on HF-related regulatory precedents.
Why Human Factors as our first-up demonstration of a precedent research strategy? Well, we’ll get Doug’s thoughts on that in the video and transcript below. From my editorial angle of vision on the drug delivery regulatory landscape, I see several reasons why biopharmaceutical product development organizations may face increasing regulatory scrutiny around HF-related considerations. These would include device design, usability, user error, dosing accuracy, patient safety, and any other potential causes of or, ideally, protections from risk and harm.
To list a few of the most obvious and likely reasons for increased regulatory attention:
- widened and accelerated demand for enabling patient self-administration, particularly for chronic conditions treated with injectable therapies
- advances in transitioning the delivery of complex large molecule biologics to administration methods other than clinical IV/infusion (e.g., combination products, autoinjectors, wearables, oral forms)
- expanded therapeutic targeting of formerly unmet needs in chronic and/or rare diseases, thereby expanding the sizes and types of patient populations to which Human Factors studies would need to be applied
- increasingly non-clinical administration of treatments to specialized patient populations (e.g., pediatric, geriatric, behaviorally challenged)
With those considerations in mind, please consider Doug’s demonstration, in his aforementioned article, of GenAI searches applied to Human Factors regulatory topics as a public service, of sorts. In it, he illuminates query responses related to Complete Response Letters (CRLs), HF study waivers, use errors, critical tasks, and other aspects of FDA approvals for combination products and other delivery devices.
Once again, I recently caught up with Doug:
Conversation Transcript: Human Factors Precedents With Doug Mead
Tom von Gunden, Chief Editor, Drug Delivery Leader:
Hi, Tom von Gunden, Chief Editor at Drug Delivery Leader here, and joining me again today for the conversation is Doug Mead, combination products consultant.
Welcome back, Doug.
Doug Mead, President and Principal Consultant, CP Pathways LLC:
Thanks, Tom. Thanks so much for having me.
Well, thanks for being here. So, once again, we're here to talk about the topic and the article you contributed to for Drug Delivery Leader. This time, it's a guest column that you bylined, and it's called, “Human Factors: A Key Factor In Regulatory Approvals.”
So, clearly, it's about Human Factors. And this is in a series where we're taking a look at the way that you've advocated for and practiced using GenAI tools to do precedent research for regulatory submissions and approvals.
Before we get to the results that you reported on in your article, let's just talk about Human Factors as a focus. Why Human Factors? Why now? Is it particularly well suited to illustrate the GenAI approach to precedent research? Is it a high priority, or should it be for folks? Is it particularly timely? Or, all of the above?
It's really all of the above. So, I should explain a little bit of the context here and why GenAI searches are so important.
Of course, FDA has guidances, and one of the key regulatory review groups is DMEPA, the Division of Medication Error Prevention and Analysis. They spend quite a bit of time looking at Human Factors for drug delivery devices in CDER-led reviews. So, beyond the guidances, what most companies really need to understand is the DMEPA process and the expectations that come with it.
And when I look at GenAI searches in the drugs@fda database, I'm looking to see if there's any variation in that process. Basically, companies do formative studies: They submit a protocol, a use-related risk analysis [URRA], their proposed IFU. They get DMEPA feedback. Then they execute the study.
The results have to go into their BLA or NDA, and they have to defend that the product had acceptable residual risk and an overall favorable benefit-risk.
So, when I look at the DMEPA review memos, they are very rich with information on the particular kind of product an applicant submitted, and that is information I use to help my clients mitigate regulatory risk. In other words, I see what another company did: the results of their study and how DMEPA interpreted them. And then I can apply that experience to my clients' projects.
And it's always timely because guidances rarely come out. But, as each new combination product gets incorporated into the database, I'm right there to see what happened.
Gotcha. Thanks for the background and the framing.
You mentioned the DMEPA review memos as one thing to look at. I don't want to have you relay everything that you outlined in your article; I hope folks go and take a look at that. But do you want to hit a few highlights or offer a few examples of some of the other aspects of regulatory submissions and approvals around Human Factors that you looked at?
What I'm really focused on is regulatory risk. So, as you'll see in the article, I'm most concerned about the risk of getting a Complete Response Letter [CRL] due to Human Factors deficiencies.
What happens is that DMEPA will review the study report and make a recommendation to the division for or against approval. If they're against approval, it comes down to inadequacies in the patient interface. In other words, it could be the device design, the packaging, or the instructions for use.
A lot of times with a CRL, the company didn't follow the process that I outlined. They didn't submit the protocol for review, for example. So, the study is going to have gaps in it, and FDA will make them do it again, basically.
Another hot topic is avoiding a Human Factors study when you have a platform technology. You can say, for example: This autoinjector was used in these two other drug products. It's the same autoinjector, and it has the same Instructions for Use. The patient demographics and their physical attributes are the same. We think that leveraging prior data from those studies supports the Human Factors question for a new product using the identical platform.
So, when I did this research, I found multiple examples of companies where DMEPA had made the decision that they would waive a Human Factors study requirement. That's very good to know because you want to know how those companies did it and what the FDA's justification for that waiver was.
Another area that gets difficult in working with clients is establishing what is a critical task and what is a non-critical task. Critical tasks have a higher expectation for fewer use errors, for example. And they require maybe additional mitigation or some kind of justification that this use error is common to other products and there is nothing more that can be done by this company to mitigate further.
So, what I found when I did this precedent research are inconsistencies. Some companies list their critical tasks, and FDA agrees or disagrees. Some companies list their non-critical tasks, and inconsistencies turn up there, too. For example, a company might consider swabbing a vial septum a non-critical task, and then DMEPA says that it is a critical task.
Checking expiration dates, swabbing the skin, and pinching the skin have all had inconsistent interpretation by FDA as to their criticality. So, when I help a company with their URRA, I'm saying, well, generally you want to err on the side of calling things critical. If you submit your URRA for review during the protocol review process, FDA will tell you what they think is critical or not critical, and you just have to live with it.
One of the things I'm always looking for in review memos when I look at precedents on new things is FDA's review of the use errors that occurred: the company's description of the root causes and the company's justification for why their mitigations are suitable and effective. And then I want to see how FDA suggested the Instructions for Use language or additional figures should be changed in the final review and labeling.
Let me pose what I hope is an interesting and useful way to position what I'm going to ask you next. I'm going to artificially create a spectrum on which one end is concerning, and the other end is reassuring. So, out of those query results that you got, and some of the things you've told us just now that you focused on: Is there anything that stands out that you would comment on or highlight somewhere between concerning and reassuring?
Well, let's start with reassuring. DMEPA is a very experienced review division. They have 30 to 40 experts in Human Factors. They know the theory cold, and they know what they're looking for. What I'm finding as we get into more and more autoinjectors and prefilled syringes with needle safety devices is that they've seen all this before, and they know what to look for.
Companies have seen this before; they know what to send to FDA. And I'm seeing fewer instances where FDA has come back and said, this is inadequate, or you had a gap in your protocol or your study, and you need to do it again. That is reassuring that, as companies have become more experienced, DMEPA has also become more experienced.
When you have a novel combination product or some orphan indication or some patient characteristics that are problematic — for example, pediatrics — you need to be much more careful. And you need to go into the review memos to see how FDA has interpreted it. Some of what you'll see there is not reassuring because it's something that you either didn't see or didn't plan for.
So, areas of concern would be if they want additional study groups. Each study group has 15 participants. You want to limit them to patients, caregivers, and healthcare providers. Sometimes you want a training arm; sometimes you don't. When you break them down by other characteristics, sometimes you end up with a study that may involve, like, 90 participants. It becomes burdensome, to say the least.
Thanks for that.
Before we close for today, I'd like to bring it back up to the higher level of the mission that you and I have joined forces on here: And that is to advocate for and instruct and guide people on how to successfully leverage GenAI tools to conduct precedent research for their regulatory purposes.
I know you do this all the time for your clients, and you engaged in this for the article that you wrote. But is there anything, either generally or specifically, from the exercise that you pushed yourself through on this Human Factors topic that has given you any additional takeaways or recommendations or insights or ‘words to the wise’ to our audience who may want to hone their own skills in conducting precedent research in this way?
That's great.
I think the take-home message is really to make precedent research, using any GenAI tool, part of your daily practice. Whenever you're reviewing a draft article or a draft design verification test plan, for example, it's helpful to do a quick search to see how FDA interprets the ISO requirements.
And then other times — almost any time I have a question — I turn to GenAI searches, and in 5 minutes, I get the lay of the land. If I want to dig deeper, I'll dive right into the source documents and see what FDA actually said. So, I use this as a tool all the time.
My recommendation is that it should be part of everyone's skill set to be able to do these searches, to know how to ask the questions in the right way with the right specificity, and to just become an expert in doing them.
All right. Thanks, Doug, for joining me. And I want to point our audience once again to Doug's excellent article published on Drug Delivery Leader. Again, it's entitled, “Human Factors: A Key Factor In Regulatory Approvals.” Hopefully, we’ll see you land there.