The tendency of most immunological reagents to produce
changes in reactivity over time requires the application
of quality control procedures to ensure the satisfactory
analytical performance of immunometric assays on a
day-to-day basis. Similarly, in the case of turbidimetric
immunoassays, reagent stability within a defined usable
time span is a prime requirement of the reagent system,
as is the need for accurate and stable controls to validate
reagent functioning, precision and accuracy.
Reading Principles in Turbidimetry
For turbidimetric measurements, both end-point and rate
(kinetic) readings are applicable. However, the factor method
for calculating the concentration of the unknown is not
preferred in kinetic turbidimetric methods because of the
nonlinear nature of the relationship between absorbance
and analyte concentration.
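To make this concrete, the sketch below (in Python, with hypothetical calibrator values) compares the single-point factor method with interpolation across a multi-point calibration curve; on a saturating dose-response, a fixed factor misestimates the unknown.

```python
# Illustrative sketch with hypothetical calibrator values: the single-point
# "factor" method assumes absorbance is proportional to concentration,
# which fails on the nonlinear dose-response typical of turbidimetry, where
# interpolating a multi-point calibration curve is the safer choice.

cal_conc = [0.0, 1.0, 2.0, 4.0, 8.0]        # calibrator concentrations (g/L)
cal_dA   = [0.00, 0.12, 0.22, 0.38, 0.60]   # measured delta-A (nonlinear)

def factor_method(dA_unknown, c_cal, dA_cal):
    """Single-point factor: C = C_cal * dA / dA_cal (valid only if linear)."""
    return c_cal * dA_unknown / dA_cal

def curve_interpolation(dA_unknown, concs, signals):
    """Piecewise-linear interpolation over the full calibration curve."""
    for (c0, a0), (c1, a1) in zip(zip(concs, signals),
                                  zip(concs[1:], signals[1:])):
        if a0 <= dA_unknown <= a1:
            return c0 + (c1 - c0) * (dA_unknown - a0) / (a1 - a0)
    raise ValueError("signal outside the calibrated range")

dA = 0.38  # unknown sample's delta-A
print(factor_method(dA, 1.0, 0.12))               # ~3.17 g/L (underestimates)
print(curve_interpolation(dA, cal_conc, cal_dA))  # 4.0 g/L (matches the curve)
```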
Once the assay system has been designed, the analyzers
used for reading must be able to operate according to the
principles mentioned below with respect to the addition of
reagents and reading of signals (absorbance).
Real Sample Blanking
In this system, the activation buffer (R1) is first added to the
sample cuvette (S). The sample is then added, mixed and
allowed to stabilize (the preincubation period). The first reading
(A1) is taken at the end of the preincubation period.
The antibody reagent (R2) is subsequently added to the
above mixture and mixed gently. Turbidity develops over
a short period of time as the antigen and the antibody react.
A second reading (A2) is taken at the defined time interval
(usually 2–10 minutes).
The difference ∆AS (Table 23.1) between the two
readings represents the absorbance generated as a result
of the antigen-antibody (Ag-Ab) reaction.
If required, the absorbance due to the reagent alone, ∆AR,
can be measured by running a reagent blank in parallel
in a separate cuvette (R), using saline in place of the sample.
The ∆AR of the reagent blank thus obtained can be subtracted
from the ∆AS of the sample to calculate the absorbance
generated by the Ag-Ab reaction in the sample.
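The arithmetic is simple; a minimal sketch follows, using hypothetical absorbance readings (the names here are illustrative, not drawn from any instrument API).

```python
# Minimal sketch of the real-sample-blanking arithmetic described above,
# with hypothetical readings.

def delta_a(a1, a2):
    """Absorbance increase between the first and second readings."""
    return a2 - a1

# Sample cuvette (S): A1 before antibody (R2) addition, A2 after
dAS = delta_a(a1=0.050, a2=0.310)     # 0.260

# Reagent-blank cuvette (R): saline in place of the sample
dAR = delta_a(a1=0.048, a2=0.063)     # 0.015

# Absorbance attributable to the Ag-Ab reaction alone
dA_corrected = dAS - dAR              # 0.245
print(f"corrected delta-A = {dA_corrected:.3f}")
```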
The reagent blank facility may not be available in many
semiautomated analyzers. However, the reagent system
can be optimized to give a very low reagent blank, obviating
the need to correct for reagent blank signals that would
otherwise contribute to the total measured signal.
The principle of taking a reading just before the addition
of antibody solution (R2) is referred to as ‘true sample
blanking’ or ‘real sample blanking’.
Immediate Mixed Blanking
In this system, the activation buffer, the sample and the
antibody reagent are all mixed at the same time. The first
reading (A1) is then taken as quickly as possible, usually
10 to 20 seconds after mixing; this delay is referred to as
the lag phase. The reaction is allowed to proceed further,
and the second reading (A2) is measured at the preselected
time interval. The increase in absorbance, ∆A (A2 − A1),
represents the signal generated by the Ag-Ab reaction
(Table 23.3).
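As a sketch of the timing involved, the following illustrates immediate mixed blanking against a hypothetical `read_absorbance` callback standing in for the instrument; the lag and read times are assumed values, not prescribed ones.

```python
# Sketch of the immediate-mixed-blanking timing. All reagents and sample
# are mixed at t = 0; A1 is taken after the lag phase and A2 at the
# preselected read time, so delta-A = A2 - A1.

import time

def immediate_mixed_read(read_absorbance, lag_s=15.0, read_s=120.0):
    """Return delta-A for one cuvette, honouring the assigned lag phase."""
    t0 = time.monotonic()
    time.sleep(lag_s)                   # wait out the chaotic lag phase
    a1 = read_absorbance()              # first reading (A1)
    time.sleep(max(0.0, read_s - (time.monotonic() - t0)))
    a2 = read_absorbance()              # second reading (A2)
    return a2 - a1

# Example with a stand-in reader (shortened times, just for demonstration):
# dA = immediate_mixed_read(lambda: 0.25, lag_s=0.1, read_s=0.3)
```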
This method eliminates the need to determine a reagent
blank, since it measures the increase in absorbance after
all the reagents and the sample have been mixed together.
Hence, absorbance generated by interfering substances in
the sample, and by the reagent itself, is blanked out in the
first reading.
Immediately after mixing, however, the reaction kinetics
do not follow a systematic pattern. As this initial chaotic
phase settles, the reaction pattern and the absorbances
move proportionately. This pattern depends upon the
intrinsic nature of the antibody, such as its affinity and
avidity, and also upon the concentration of the analyte
being measured.
Depending on the assay system requirements, it is
desirable that this initial chaotic phase is not included in
the measurement of absorbance. The lag phase typically
varies from ten to thirty seconds from analyte to analyte.
It is, therefore, imperative to follow diligently the time
recommended for the lag phase, to ensure precise blanking
in the “immediate mixed blanking” method.
Reaction Kinetics and Its Effect on Blanking
The reaction kinetics of an antigen-antibody system also
guide the choice of blanking method. The kinetics are not
the same for all Ag-Ab systems: for a system with slow
reaction kinetics, e.g. IgA, a first reading taken 10–20
seconds after mixing with the antibody is not very critical.
However, for a system with fast reaction kinetics, e.g.
IgG (Fig. 23.11), half of the reaction would already have
taken place within the 10 to 20 seconds before the first
reading is taken. Here, the poorly defined point for the
first reading would introduce a significant error into the
measurement.
The implications of ‘immediate mixed blanking’ can be
demonstrated by comparing the standard curves obtained
for six calibrators, blanked at zero seconds and at ten
seconds, with a latex-enhanced reagent system for the
measurement of IgA (Fig. 23.12A) and a non-enhanced
system for IgG (Fig. 23.12B).
The standard curve obtained for IgA (Fig. 23.12A) is
practically unaffected by the difference between the two
ways of blanking, indicating that a delay of ten seconds is
inconsequential for this slow-reacting system.
In contrast, for a non-enhanced system with fast reaction
kinetics, such as that for the measurement of IgG
(Fig. 23.12B), a delay of 10 seconds becomes very critical.
There is considerable signal development during the
first ten seconds. This results in decreased difference
between A1 and A2 (Fig. 23.11). The loss of signal increases
with increasing concentration of IgG in the calibrators.
It can be observed from Fig. 23.12B that the curve for
“immediate mixed blanking” tends to get flatter with the
increasing concentration of IgG, resulting in a decrease in
the precision of the analysis.
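To illustrate the scale of this effect, the sketch below assumes, purely for demonstration, pseudo-first-order signal development; the text does not specify this model, and the rate constants are hypothetical, chosen so the "fast" system loses about half its signal in the first ten seconds, as described for IgG.

```python
# Purely illustrative: assume A(t) = A_max * (1 - exp(-k * t)).
# This model and the rate constants are assumptions, used only to make
# the fast-vs-slow contrast concrete.

import math

def fraction_developed(k_per_s, delay_s=10.0):
    """Fraction of the total signal already formed before A1 is taken."""
    return 1.0 - math.exp(-k_per_s * delay_s)

print(f"slow system (k = 0.002/s): {fraction_developed(0.002):.1%} lost")
print(f"fast system (k = 0.070/s): {fraction_developed(0.070):.1%} lost")
# About 2% for the slow system versus about 50% for the fast one, matching
# the IgG case where half the reaction is over within 10 to 20 seconds.
```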
It would therefore be desirable to optimize slow-reacting
systems, as well as assay systems based on particle-enhanced
turbidimetry (latex-based assays) in which the reagent
absorbance is very high, using “immediate mixed blanking”.
Systems with fast reaction kinetics, such as that for IgG,
should instead be optimized using the “real sample
blanking” method.
Concepts of Assay Optimization
While optimizing a reagent system for immunoturbidimetric
assays, it is important to optimize the dose-response
curve by titrating the amounts of sample (antigen) and
antibody reagent.
A portion of the antibody-excess zone of the dose-response
curve is then selected as the “range for the standard curve”.
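One simple way to locate that zone, sketched below under the assumption that antigen excess (the hook) begins where the signal stops rising, is to keep only the strictly increasing portion of the dose-response points; the calibrator values shown are hypothetical.

```python
# Hedged sketch: select the "range for the standard curve" as the portion
# of the dose-response data where the signal still rises with antigen,
# i.e. before antigen excess flattens or reverses it.

def antibody_excess_range(concs, signals):
    """Return the (conc, signal) pairs up to the last strictly rising point."""
    usable = [(concs[0], signals[0])]
    for c, s in zip(concs[1:], signals[1:]):
        if s <= usable[-1][1]:        # signal stopped rising: antigen excess
            break
        usable.append((c, s))
    return usable

concs   = [0.5, 1, 2, 4, 8, 16, 32]                   # g/L
signals = [0.05, 0.11, 0.21, 0.37, 0.55, 0.54, 0.40]  # delta-A, hooks after 8
print(antibody_excess_range(concs, signals))          # keeps points up to 8 g/L
```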
The lowest concentration of an antigen that gives a
detectable signal above the background noise is termed
the detection limit, or analytical sensitivity, of the assay.
Formally, it is the minimum concentration of analyte that
is statistically unlikely to form part of the range of signals
seen in the absence of analyte. Usually, the detection limit
is set at the lowest signal for which the standard deviation
around that signal is less than one third of the signal itself.
The lowest concentration selected for the calibration of the
assay is usually above the detection limit.
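This criterion can be implemented, for example, as the blank mean plus three blank standard deviations, which corresponds to requiring the SD to be under one third of the signal; that choice is an assumption here, and the replicate readings below are hypothetical.

```python
# Sketch of the detection-limit criterion described above, using the
# common mean(blank) + 3*SD(blank) rule (an assumed implementation).

import statistics

blank_dA = [0.004, 0.006, 0.005, 0.007, 0.004,
            0.006, 0.005, 0.006, 0.004, 0.005]   # replicate reagent-blank signals

mean_b = statistics.mean(blank_dA)
sd_b = statistics.stdev(blank_dA)

detection_limit_signal = mean_b + 3 * sd_b
print(f"blank mean = {mean_b:.4f} A, SD = {sd_b:.4f} A")
print(f"detection-limit signal = {detection_limit_signal:.4f} A")
# The concentration read off the calibration curve at this signal is the
# analytical sensitivity; the lowest calibrator is usually placed above it.
```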
As long as the analyte signal is higher than the signal at the
detection limit, the result can be reliably distinguished from
the background.