Sci Simple

New Science Research Articles Every Day

# Biology # Biochemistry

Ensuring Quality: New Standards in Pharmaceutical Testing

A study shows new instruments can ensure medicine safety without sacrificing quality.

Anne B. Ries, Maximilian N. Merkel, Kristina Coßmann, Marina Paul, Robin Grunwald, Daniel Klemmer, Franziska Hübner, Sabine Eggensperger, Frederik T. Weiß

― 6 min read


Pharma Testing Upgrade: new instruments ensure medicine safety and quality.

Pharmaceutical quality control (QC) is crucial for ensuring that medicines are safe and effective. This involves a series of tests and procedures that verify the quality of a product before it reaches you, the consumer. Just imagine going to the pharmacy and not knowing whether the medicine you buy is actually good or not—yikes!

One of the key parts of QC is making sure that the methods used for testing remain reliable over time. Analytical methods need to be maintained for decades so that they can produce consistent results. This is essential for both the initial release of the medicines and for ongoing checks.

The Need for Instrument Updates

Technology doesn’t stay still, and neither do the instruments used in QC. Sometimes, an old instrument gets updated or replaced with a new one. When this happens, the procedures that were tested and validated on the old equipment must also be moved over to the new one. This can be tricky and often involves regulatory challenges.

The good news is that with proper planning and a scientific approach, these challenges can be managed in a way that keeps everyone safe and sound.

Assessing Comparability Between Instruments

When a technical change like this is made, it’s essential to evaluate how comparable the two instruments are. This means checking if the new instrument produces the same results as the old one. Depending on how much the new device differs from the old one, different levels of testing may be required.

To figure out how comparable the results are, a study design can be developed. This design will help determine whether the old and new instruments are compatible or if more extensive testing is necessary.

The Study Design

The goal of the study design is to assess the comparability of the instruments in a way that is both comprehensive and efficient. It aims to reduce bias and ensure that decisions are based on solid scientific data. If the results show that the new instrument is comparable to the old one, it serves as a great reason to switch to the new machine without complications.

The study focuses on methods that produce a signal trace, such as the electropherograms and chromatograms from capillary electrophoresis or chromatography. It turns out that just two experiments on the new equipment can provide sufficient data for a meaningful comparison. This means that there’s no need to run a ton of tests, which is a win for everyone involved.
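To make the idea concrete, here is a minimal sketch (with made-up numbers, not the study's data) of how replicate results from the new instrument could be checked against historical results from the original instrument. The acceptance window of mean ± 3 standard deviations is purely an illustrative assumption; in practice the criteria come from the validated method and the study protocol.

```python
import numpy as np

# Hypothetical values standing in for real data: historical QC results from
# the original instrument (e.g., % main peak) and the replicates obtained in
# the two experiments on the new instrument.
historical = np.array([98.1, 97.9, 98.3, 98.0, 98.2, 97.8, 98.1, 98.0])
new_instrument = np.array([98.0, 98.2, 97.9, 98.1, 98.3, 98.0])

# Acceptance window derived from historical variability (mean +/- 3 SD is an
# assumption for illustration, not the study's actual criterion).
mean_hist, sd_hist = historical.mean(), historical.std(ddof=1)
low, high = mean_hist - 3 * sd_hist, mean_hist + 3 * sd_hist

within = (new_instrument >= low) & (new_instrument <= high)
print(f"Historical window: {low:.2f} to {high:.2f}")
print(f"New-instrument results inside window: {within.sum()} of {within.size}")
```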

Choosing the Right Product for Testing

Not all products are created equal. When conducting the comparability study, researchers pick the product that is the most complicated to analyze as their main test subject. Why? Because if the tricky product passes the test, it’s likely that other simpler products will do just fine too!

Key Parameters for Assessment

In the study, several key parameters were assessed to determine how well the two instruments compared. These parameters include:

  1. Limit of Signal Quantitation (LOQ): This measures the smallest amount of a substance that can be reliably quantified, not just detected.

  2. Proportionality between Product Concentration and Observed Signal: This investigates whether increases in product concentration result in corresponding increases in the signal measured by the instrument.

  3. Baseline Comparability: This checks if the baseline of the graph from both instruments is consistent. If not, there might be issues with the new equipment.

  4. Peak Position Shifts: This assesses whether the specific peaks in the graph occur at the same positions on both instruments.

  5. Peak Area Changes: This measures the size of the peaks to ensure they are consistent.

  6. Measurement Variance: This looks at how much results can vary when different factors, like analysts or different days, are taken into account.

Conducting the Tests

The new instrument was compared to the old one across these different parameters. Various statistical methods were used to assess the data, ensuring that any conclusions drawn were based on solid evidence, rather than guesswork.

Testing the LOQ

For the limit of signal quantitation, researchers determined how sensitive the new instrument was compared to the old one. They looked at the signal-to-noise ratio, which is a fancy way of saying they wanted to know if the new machine could detect small amounts of a substance just as well, or even better, than the old one.

It turned out that the new instrument performed quite well. The signal quality was equal to or better than that of the old instrument, which was a good sign.
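As a rough illustration of the signal-to-noise idea, the sketch below computes S/N as peak height divided by the standard deviation of a baseline region and applies the common S/N ≥ 10 working rule for the quantitation limit. The traces and peak height are simulated stand-ins, not data from the study.

```python
import numpy as np

def signal_to_noise(peak_height, baseline_segment):
    """S/N as peak height over the standard deviation of a blank/baseline
    region (one common convention; pharmacopoeias define several)."""
    noise = np.std(baseline_segment, ddof=1)
    return peak_height / noise

# Hypothetical numbers standing in for exported trace data.
baseline_old = np.random.default_rng(0).normal(0.0, 0.8, 200)
baseline_new = np.random.default_rng(1).normal(0.0, 0.5, 200)
peak_height_low_conc = 12.0  # response of a dilute sample, same units as noise

sn_old = signal_to_noise(peak_height_low_conc, baseline_old)
sn_new = signal_to_noise(peak_height_low_conc, baseline_new)

# A frequent working rule is S/N >= 10 at the quantitation limit, so a higher
# S/N on the new instrument suggests an equal or better LOQ.
print(f"S/N old: {sn_old:.1f}, S/N new: {sn_new:.1f}")
```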

Proportionality Testing

Next, researchers tested to see if increases in the product concentration led to proportional increases in the signal detected. They ran their tests and crunched the numbers, and happily discovered that the new instrument met the original criteria as well.
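A minimal way to check this kind of proportionality is a linear fit of signal against concentration. The sketch below uses an invented dilution series and an assumed acceptance criterion of R² ≥ 0.99; the real levels and criteria depend on the validated method.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical dilution series; actual levels depend on the method's range.
concentration = np.array([25, 50, 75, 100, 125, 150])       # % of target
signal = np.array([24.6, 50.3, 74.8, 100.4, 124.9, 151.2])  # peak area (a.u.)

fit = linregress(concentration, signal)
r_squared = fit.rvalue ** 2

# Assumed acceptance criterion for illustration: R^2 of at least 0.99.
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, R^2={r_squared:.4f}")
print("proportionality criterion met" if r_squared >= 0.99 else "investigate")
```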

Baseline Comparability

Evaluating baseline comparability involved looking at blank measurements—essentially tests done without any product. Researchers overlaid graphs from the old and new instruments to visually inspect for any irregularities. They found that both instruments produced similar baseline trends, indicating that the new one was on the right track.
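For readers who want to picture the overlay step, here is a small sketch that plots two hypothetical blank traces on top of each other and prints simple noise and drift summaries. The traces are simulated stand-ins, not measurements from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical blank traces exported from each instrument (signal vs. position).
x = np.linspace(0, 10, 500)
rng = np.random.default_rng(42)
blank_old = 0.02 * x + rng.normal(0, 0.05, x.size)  # slight drift + noise
blank_new = 0.01 * x + rng.normal(0, 0.04, x.size)

# Simple summary metrics to back up the visual overlay.
for name, trace in [("old", blank_old), ("new", blank_new)]:
    drift = trace[-50:].mean() - trace[:50].mean()
    print(f"{name}: noise SD={trace.std(ddof=1):.3f}, drift={drift:.3f}")

plt.plot(x, blank_old, label="original instrument (blank)")
plt.plot(x, blank_new, label="new instrument (blank)")
plt.xlabel("position / time")
plt.ylabel("signal")
plt.legend()
plt.show()
```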

Peak Position Shifts

Checking for peak position shifts meant calculating where the peaks appeared on the graphs. Researchers gathered data on these specific peaks using a solid number of samples. The analysis showed that the peaks from the new instrument fell within an acceptable range compared to those from the old instrument, so once again, it was a thumbs up.
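In imaged capillary isoelectric focusing, the technique used in the study, peak positions are reported on an isoelectric point (pI) scale, so a position check can be as simple as comparing mean pI values against a tolerance. The numbers and the 0.05 pI tolerance below are illustrative assumptions, not the study's actual data or limits.

```python
import numpy as np

# Hypothetical apparent isoelectric points (pI) of the same peak measured
# repeatedly on each instrument.
pi_old = np.array([8.42, 8.43, 8.41, 8.42, 8.42, 8.43])
pi_new = np.array([8.44, 8.43, 8.44, 8.45, 8.43, 8.44])

shift = pi_new.mean() - pi_old.mean()
tolerance = 0.05  # assumed acceptance limit for a pI shift

print(f"mean position shift: {shift:+.3f} pI units")
print("within tolerance" if abs(shift) <= tolerance else "outside tolerance")
```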

Are the Peak Areas Consistent?

When it came to peak area changes, scientists assessed whether the size of the peaks, which essentially represent the amount of product present, matched between instruments. After comparing the relevant data, findings showed that the new instrument was producing results that aligned closely with the original.
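A comparable check for peak areas might compare the mean relative area (percent of total) of a given peak between instruments against an allowed difference. The values and the 2-percentage-point criterion below are hypothetical, shown only to illustrate the mechanics.

```python
import numpy as np

# Hypothetical relative peak areas (% of total) for the main peak, measured
# in replicate on each instrument.
area_old = np.array([62.1, 61.8, 62.4, 62.0, 61.9, 62.2])
area_new = np.array([62.3, 62.0, 61.9, 62.5, 62.1, 62.2])

diff = area_new.mean() - area_old.mean()
criterion = 2.0  # assumed maximum allowed difference in percentage points

print(f"mean difference in % main peak: {diff:+.2f}")
print("comparable" if abs(diff) <= criterion else "needs further assessment")
```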

Measurement Variance

Finally, researchers investigated measurement variance by looking at multiple factors that could influence test results. These included differences in analysts and the days the tests were conducted. The data collected showed that measurement consistency was maintained across both instruments, which is a big win for reliability.
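One common way to examine this kind of variance is a small analysis of variance with analyst and day as factors, plus an overall relative standard deviation. The sketch below uses hypothetical replicate results purely to show the mechanics; the factor structure and acceptance criteria in the actual study may differ.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical intermediate-precision data: the same sample measured by two
# analysts on two days (result = % main peak).
data = pd.DataFrame({
    "analyst": ["A", "A", "A", "B", "B", "B", "A", "A", "A", "B", "B", "B"],
    "day":     [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "result":  [62.1, 62.0, 62.2, 61.9, 62.1, 62.0,
                62.2, 62.3, 62.1, 62.0, 61.9, 62.1],
})

# Two-factor ANOVA to see whether analyst or day contributes meaningful
# variance on top of repeatability.
model = ols("result ~ C(analyst) + C(day)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Overall intermediate precision (relative standard deviation).
rsd = 100 * data["result"].std(ddof=1) / data["result"].mean()
print(f"intermediate precision RSD: {rsd:.2f} %")
```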

Conclusion

In summary, the presented study design serves as a useful method for assessing instrument comparability in pharmaceutical quality control. The findings demonstrate that the new instrument can effectively replace the old one without compromising the quality of the analysis. This means that companies can adopt new technologies while still ensuring that the safety and effectiveness of medicines remain the top priority.

The successful outcome of this study not only aids in the transition to newer equipment but also assures consumers that the medicines they receive are of the highest possible quality.

So, next time you take that pill or sip that cough syrup, you can rest easy knowing that it has gone through a lot of thought, testing, and maybe even a little dancing—if only in the laboratory!

Original Source

Title: Universal Study Design for Instrument Changes in Pharmaceutical Release Analytics

Abstract: Instrument changes in analytical methods of pharmaceutical quality control are required to maintain release analytics over decades, yet typically pose a challenge. We designed an efficient instrument comparability study to gain a comprehensive understanding of potential performance differences between instruments and therefore rationalize the risk assessment and decision process for a path forward. The results may either point out whether a full or partial re-validation is necessary or whether a science-based bridging can be pursued based on the data generated in the study. The study design is universally applicable to a substantial range of release analytical methods. In a straightforward setup of two experiments with the new instrument, a statistically meaningful data set is generated for comparison with available historical or validation data of the original instrument. In a Good Manufacturing Practice (GMP) environment, we realized the study design a first benchmark in imaged capillary isoelectric focusing (icIEF) analytics, comparing the ICE3 and Maurice C instruments. The core-study confirmed equal or better performance of Maurice C in all parameters and serves as a basis for seamless continuation of release measurements on Maurice C.

Authors: Anne B. Ries, Maximilian N. Merkel, Kristina Coßmann, Marina Paul, Robin Grunwald, Daniel Klemmer, Franziska Hübner, Sabine Eggensperger, Frederik T. Weiß

Last Update: 2024-12-13 00:00:00

Language: English

Source URL: https://www.biorxiv.org/content/10.1101/2024.12.11.627881

Source PDF: https://www.biorxiv.org/content/10.1101/2024.12.11.627881.full.pdf

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to biorxiv for use of its open access interoperability.
