
Laboratory Accreditation

What is the difference between method validation and verification?

  • Method validation demonstrates a method's suitability by determining the accuracy of the test results as well as the uncertainty and traceability of the measurements. Method validation is needed to prove whether a new method is fit for its intended purpose or for the specified samples.
  • Verification is confirmation that a test method fulfils the specified requirements, carried out by inspecting the given items in the test method and preparing the related documentation. Verification therefore applies to a method that has already been validated (or to a standard method). It is usually performed by comparing the laboratory's performance data in several respects, such as the technical competence of the staff, the equipment, and the environmental conditions.
  • Method validation shall be done when the method is:

o a non-standard method

o a laboratory-developed method

o a standard method used outside its intended scope

o a modified standard method

o a portable test instrument or test kit

When changes are made to a validated non-standard method, the influence of such changes should be documented and, if appropriate, a new validation should be carried out. Such changes include:

o a difference in the interferences present in the test sample.

o a difference in test conditions such as drying time, distillation, incubation, temperature, or criteria.

o development or modification of the method, such as the substances and media used in the method.

o a reduction in an analysis step, such as the time or the amount of a substance used.

o reduction or elimination of the retest in order to save cost.

The meaning of Selectivity / Bias / Working range / Sample blank / Within-run precision

Selectivity is the ability to accurately measure the analyte in the sample and to evaluate the matrix effect on the result. A selectivity study can be done by spiking various amounts of an interferent into the sample and the blank, measuring the amount of the analyte, and determining the effect of the interferent quantity on the measured analyte. For example, a study of the amounts of chloride and copper that can interfere with the analysis of mercury in a water sample by Cold Vapour AAS.
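A selectivity study like the one described above can be summarized numerically as the percentage shift in the measured analyte at each interferent level. The following is a minimal sketch using entirely hypothetical mercury/chloride data, not results from any real study:

```python
from statistics import mean

# Hypothetical selectivity study: mercury measured (ug/L) in a sample
# spiked with increasing amounts of a chloride interferent.
unspiked_results = [5.01, 4.98, 5.03]
spiked = {                      # chloride added (mg/L) -> measured Hg (ug/L)
    10: [4.99, 5.02, 5.00],
    100: [4.90, 4.87, 4.92],
    1000: [4.55, 4.60, 4.52],
}

baseline = mean(unspiked_results)
effects = {}
for level, results in spiked.items():
    # Percentage change in the measured analyte relative to the unspiked sample.
    effects[level] = 100 * (mean(results) - baseline) / baseline
    print(f"chloride {level} mg/L: interference effect {effects[level]:+.1f}%")
```

A tolerance (e.g. an effect within a few percent) would then define the interferent level up to which the method remains selective.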

Bias can be defined as the difference between the average value of the measurements and the true value. Bias is caused by systematic error arising from the test method (known as method bias) and from the laboratory (known as laboratory bias). The difference between the average value from an inter-laboratory comparison and the true value represents the bias arising from the test method (the method bias). In contrast, the difference between the laboratory's own average value under repeatability conditions and the average value from the inter-laboratory comparison is the laboratory bias.
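The two bias components defined above are simple differences of means. A minimal sketch with hypothetical data (a certified true value, inter-laboratory results, and one laboratory's repeatability results are all invented for illustration):

```python
from statistics import mean

# Hypothetical data, all in mg/L.
true_value = 10.00                                   # certified reference value
interlab_results = [10.12, 9.95, 10.08, 10.20, 9.90] # inter-laboratory comparison
lab_results = [10.30, 10.25, 10.35, 10.28, 10.32]    # one lab, repeatability conditions

interlab_mean = mean(interlab_results)
lab_mean = mean(lab_results)

method_bias = interlab_mean - true_value   # bias attributable to the method
lab_bias = lab_mean - interlab_mean        # bias attributable to this laboratory

print(f"method bias: {method_bias:+.3f} mg/L")
print(f"laboratory bias: {lab_bias:+.3f} mg/L")
```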

Working range is the range of analyte concentrations that the method can determine with accuracy (trueness and precision) within the acceptance criteria. Linear range is the range of analyte concentrations over which the method provides a linear relationship between the analyte concentration and the signal intensity. The working range may be either wider or narrower than the linear range, depending on the ability of the test method to analyze the sample.

Sample blank is an analyte-free sample used to study the Method Detection Limit (MDL), also called the Limit of Detection (LOD), and the Limit of Quantitation (LOQ).
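One widely used convention estimates the LOD and LOQ from replicate measurements of the sample blank, at 3 and 10 standard deviations above the blank mean respectively. A minimal sketch assuming that convention and hypothetical blank signals:

```python
from statistics import mean, stdev

# Hypothetical replicate measurements of an analyte-free sample blank (signal units).
blank_signals = [0.012, 0.015, 0.010, 0.014, 0.011, 0.013, 0.012]

blank_mean = mean(blank_signals)
blank_sd = stdev(blank_signals)   # sample standard deviation (n - 1)

# Common convention: LOD at 3 SD above the blank mean, LOQ at 10 SD.
lod = blank_mean + 3 * blank_sd
loq = blank_mean + 10 * blank_sd

print(f"blank mean = {blank_mean:.4f}, SD = {blank_sd:.4f}")
print(f"LOD = {lod:.4f}, LOQ = {loq:.4f}")
```

The signal-based limits are then converted to concentration units through the calibration function.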

Within-run precision is defined as the closeness of agreement between test results obtained from successive measurements under the same conditions. To evaluate the within-run precision of a method, the test results must be obtained under the same testing conditions, including the same laboratory, test sample, method, staff, equipment, and time period. Within-run precision can be expressed as the standard deviation (S), the relative standard deviation (RSD), or the variance (S²).
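The three precision measures named above can be computed directly from one run's replicate results. A minimal sketch with hypothetical data:

```python
from statistics import mean, stdev

# Hypothetical replicate results from a single analytical run (mg/kg).
results = [4.98, 5.02, 5.05, 4.97, 5.01, 5.03]

s = stdev(results)                       # standard deviation (S)
variance = s ** 2                        # variance (S^2)
rsd_percent = 100 * s / mean(results)    # relative standard deviation (%RSD)

print(f"S = {s:.4f}, S^2 = {variance:.6f}, %RSD = {rsd_percent:.2f}")
```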

How to select a suitable reference material

  1. Analyte of interest: a substance or compound in the test sample.
  2. Testing range: the concentration range of the reference material should be similar to that of the analyte in the sample.
  3. Matrix: the matrix in the reference material should be similar to that in the test sample.
  4. Homogeneity: the homogeneity data of the reference material should be provided appropriately.
  5. Stability: the stability data of the reference material should be provided appropriately.
  6. Quantity: the quantity of the reference material is sufficient for the testing.
  7. Uncertainty: the uncertainty value of the reference material and the calculation of the uncertainty are required.
  8. Expiry date: the expiry date of the reference material is shown.

How to use the control chart for monitoring quality control data, determining whether the data are out of control, and solving the problems

The equipment, chemicals, and testing conditions have to be checked to confirm that they remain in appropriate condition. A duplicate check of the quality control (QC) sample should be done to investigate the nonconforming work. There are two possible cases:

1.) If the result is outside the “out of control” area, plot the control chart with the new result of the QC sample and the previous (out-of-control) result. Then record the other activities and analyze the next sample.

2.) If the result is inside the “out of control” area, stop analyzing the sample, investigate the sources of error, and take corrective actions. Then plot a new control chart and record all data and activities.
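The decision between the two cases above depends on where the QC result falls relative to the control limits. A minimal sketch assuming conventional Shewhart-style limits (warning at ±2 SD, action at ±3 SD) and hypothetical historical QC data:

```python
from statistics import mean, stdev

def control_status(value, center, sd):
    """Classify a QC result against Shewhart-style limits:
    warning at +/-2 SD, out of control (action) at +/-3 SD."""
    deviation = abs(value - center)
    if deviation > 3 * sd:
        return "out of control"   # stop, investigate, take corrective action
    if deviation > 2 * sd:
        return "warning"          # continue, but watch the following results
    return "in control"

# Hypothetical historical QC results used to set the chart's center line and SD.
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
center, sd = mean(history), stdev(history)

print(control_status(10.05, center, sd))
print(control_status(10.9, center, sd))
```

A real control chart would also apply run rules (e.g. several consecutive points on one side of the center line), which this sketch omits.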


©2023 Department of Science Service (DSS)
75/7 Rama VI Road, Ratchathewi Bangkok 10400
Tel. +66 2201 7000 Fax.+66 2201 7466
e-Mail Contact : pr(at)dss(dot)go(dot)th | e-Mail Letter : saraban(at)dss(dot)go(dot)th
