The TD50 (tumor dose 50%) value is a crucial parameter in determining the acceptable intake limits (AIs) for nitrosamine impurities in pharmaceutical products. It represents the estimated daily dose of a substance that is expected to induce tumors in 50% of the exposed animal population over their lifetime.
The TD50 value is primarily used in two approaches to calculate nitrosamine impurity AIs:
Class-Specific Threshold of Toxicological Concern (TTC): For nitrosamines with limited or no available toxicological data, a class-specific TTC can be employed. This approach involves deriving a single AI value based on the TD50 values of structurally similar nitrosamines. The TTC is typically set at a conservative level to ensure protection against potential carcinogenic risks.
Read-Across Approach: When robust TD50 data are available for a structurally similar nitrosamine, a read-across approach can be used to derive the AI of the nitrosamine of interest. This method extrapolates the carcinogenic potency of the similar compound with known TD50 data to the nitrosamine of interest, and it provides a more refined AI estimation than the class-specific TTC.
Now, don’t miss the publication from Thomas et al., “Use of the TD50 99% CI for single dose rodent carcinogenicity studies”
Thanks for sharing @Naiffer_Host!
As you probably all know, many of the carc studies for nitrosamines are single-dose, so this is a particularly useful study in this context. While we weren’t able to conclude mathematically that the 99% CI is conservative, it should be conservative from a risk-assessment perspective: although there were slightly more virtual TD50s outside the CI than expected, this is in a context where the TD50 is expected to be the midpoint of potency, not the lower bound.
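To make the lower-bound idea concrete, here is a hypothetical illustration of taking a one-sided lower confidence bound on a TD50 under a lognormal uncertainty assumption. This is my own simplification: the actual CPDB/TD50L01 bounds come from the fitted dose-response model, and the `log_se` input here is an assumed quantity, not something the databases report in this form. The resulting lower bound would then feed into the usual /50,000 × 50 kg AI extrapolation.

```python
from math import exp, log
from statistics import NormalDist

def td50_lower_ci(td50: float, log_se: float, conf: float = 0.99) -> float:
    """One-sided lower confidence bound on a TD50 (mg/kg/day), assuming the
    estimate's uncertainty is approximately lognormal.

    `log_se` is the standard error of ln(TD50) -- a hypothetical input here;
    the CPDB derives its CI bounds from the underlying dose-response fit.
    """
    z = NormalDist().inv_cdf(conf)       # ~2.326 for a one-sided 99% bound
    return exp(log(td50) - z * log_se)

# Hypothetical example: a point-estimate TD50 of 100 mg/kg/day with a
# log-scale SE of 0.3 gives a lower 99% bound of roughly half the estimate.
lower = td50_lower_ci(100.0, 0.3)
```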
If the link above doesn’t work for free access (until 30th Dec 2023), try this one: https://authors.elsevier.com/c/1i3op1M2s07p0U
This builds on our earlier work with @SusanFelter where we were looking at other ways to use lower-quality carc data - effectively, this topic became its own paper rather than being a disproportionate part of that one!
Notice that the AI of 78,000 ng/day for nitroso-diphenylamine is derived from the 99% CI. So we do see that regulatory agencies are already using this method.
The problem is that while they accept this number for diphenylnitrosamine, they cap the CPCA at 1,500 ng/day, which is tough for most drugs to meet and may be unnecessary.
Susan Felter has a good presentation too. It is great to see this publication, as there is so much data on larger nitrosamines; it's just that the agencies (especially FDA) have pushed back, saying these are not multidose studies and don't meet the M7 criteria.
Indeed. During the 30 November 2022 NIOG meeting, R. W. from NS OEG commented on ongoing discussions (probably in NITWG) to overcome robustness limitations in surrogate data. While comments were also pointing in the direction of what we now know as CPCA, my interpretation was that discussions on accepting a lower-confidence-interval approach were also ongoing.
These comments were made while we were discussing the beta blocker cases, so I was thinking he was alluding to one particular database entry (which is also discussed in the Felter et al. 2023 paper). But very shortly after (mid December 2022), EMA published for the first time the 78,000 ng/day limit “based on the most sensitive TD50 derived from the most robust TD50 dataset from carcinogenic potency database (CPDB) applying the lower boundary of the 99% CI for the TD50 estimate (TD50L01)” for nitrosodiphenylamine, so that surely was (as well?) a study that was debated in NITWG.

In the end, HC, EMA and TGA adopted the 78,000 ng/day, whereas FDA remains silent in its later (August 2023) limit list and in the guidance section on the (numerus clausus?) of accepted analogues for read-across (which I would dare to assume suggests some disagreement with the nitrosodiphenylamine evaluation). So full agreement in NITWG to move on with recognising lower-99%-CI-based AIs in the limit list is maybe not needed. But why one case is locally accepted in the end and another one not remains very vague at the moment.

Is it about the quality (lack of a lower 99% CI proposal by the applicant? Insufficient justification overall for read-across, leading also to no publication of the AI for the analogue?) and/or the accessibility of justifications submitted to the full NITWG team (cf. Swissmedic letter of consent suggestions), a voting criterion, or a custom that HC and EMA like to agree at least? For me it remains a black box.
I hope this new paper (read together with the Felter et al. 2023 paper) can support more quantitative decision making on the acceptance of less robust studies, possibly with a criterion not of conservatism in the mathematical sense but of conservatism for risk-assessment purposes. I'm interested to try to apply it in practice.
What I find somewhat puzzling is that each time we collectively present approaches to EMA, one of the first reactions is that implementation is not possible for consistency reasons / because general implementation would be needed. But:
- The lower-CI approach is seemingly not consistently used (nitrosodiphenylamine exception in Appendix 1 since DEC 22)
- Use of the TD50 from the most robust study / most sensitive tissue and species, instead of the harmonic-mean TD50, is not consistently applied (nitrosodiphenylamine and NDELA as exceptions since DEC 22, but the proposal for NMPEA (OCT 22 NS OEG meeting) was waived by NS OEG; continued advocacy for the overall approach in Bercu et al. 2023)
When molecular weight corrections were discussed during the OCT 22 NS OEG meeting, that option was also waived due to consistency issues (but not integrating it into the CPCA seems like a missed chance).
I also don’t know if I would evaluate CPCA as a consistent model per se with the criteria I have in mind, but that’s a difficult discussion without full CPCA design visibility.