S-01—Implementation guide for the application of statistical sampling requirements

The copy of this document located on Measurement Canada's website is considered to be the controlled copy.

Category: Statistical methods
Bulletin: S-01 (rev. 3)
Document(s): S-S-01; S-S-02; S-S-03; S-S-04
Issue date:
Effective date:
Supersedes: S-01 (rev. 2)


Table of contents


1.0 Purpose

1.1 The purpose of this bulletin is to provide guidance on the interpretation and application of Measurement Canada's (MC) acceptance and reverification period lengthening sampling plans, S-S-series. These guidelines are intended for use by authorized service providers (ASPs) accredited to S-A-01 requirements as well as for MC staff.

1.2 This Guide explains the terms used in the sampling plans and provides practical guidance on sampling inspection.

2.0 Selected references

2.1 The sampling plans are detailed in the following documents:

  1. S-02 (rev. 4)—Implementation Schedule for the Application of Statistical Sampling Requirements in S-S-01, S-S-02, S-S-03, S-S-04 and S-S-06
  2. S-S-01—Specifications for Random Sampling and Randomization
  3. S-S-02—Measurement Uncertainty and Meter Conformity Evaluation Specifications
  4. S-S-03—Prerequisites to the Use of Sampling Inspection
  5. S-S-04—Sampling Plans for the Inspection of Isolated Lots and Short Series of Lots
  6. S-S-05—Performance Requirements Applicable to Meters Granted a Conditionally Lengthened Initial Reverification Period under S-EG-01
  7. S-S-06—Sampling Plans for the Inspection of Isolated Lots of Meters in Service

2.2 The sampling inspection requirements are based on international standards such as the following:

  1. ISO 2859-10—Sampling procedures for inspection by attributes—Part 10: Introduction to the ISO 2859 series of standards for sampling for inspection by attributes
  2. ISO/TR 8550—Guide for the selection of an acceptance sampling system, scheme or plan for inspection of discrete items in lots
  3. Guide to the Expression of Uncertainty in Measurement, BIPM/IEC/IFCC/ISO/IUPAC/IUPAP/OIML
  4. ISO 10576-1—Statistical Methods—Guidelines for the Evaluation of Conformity with Specified Requirements—Part 1—General Principles

2.3 Users are encouraged to refer to other important documents such as the following:

  1. EL-ENG-12-01—Requirements for the Certification and Use of Measuring Apparatus—Electricity Meter Calibration Consoles
  2. S-E-02—Specifications for the Verification and Reverification of Electricity Meters
  3. S-G-02—Specifications for the Verification and Reverification of Diaphragm Meters
  4. P-S-04-A—Model Procedure for Sampling Inspection of Isolated Lots
  5. P-S-04-B—Model Procedure for Sampling Inspection of Short Series of Lots

3.0 Terminology and acronyms

3.1 In the documents referenced in this Guide, the terms "production" and "presentation" are used interchangeably.

3.2 In the documents referenced in this Guide, the term "implemented" as applied to corrective action, means that corrective action has been identified and carried out but not necessarily validated.

3.3 The ISO standards use the terms "device" and/or "item". For the documents referenced in this bulletin, those terms refer to a meter.

3.4 The terms and acronyms used in the Guide are the following:

AMV: accredited meter verifier

AQL: acceptance quality limit

ASP: authorized service provider

Fully curtailed inspection: inspection of a sample that is stopped once a decision to accept or fail the lot can be made with certainty.

Semi-curtailed inspection: inspection of a sample that is stopped once a decision to fail the lot can be made with certainty. All meters in the sample must be inspected if a lot is to be accepted.

GUM: Guide to the Expression of Uncertainty in Measurement

ISO: International Organization for Standardization

JWG: joint working group

LQ: limiting quality

MADT: measure of absolute deviation from target

MC: Measurement Canada

MUT: meter under test

n, N: sample size and lot size

SPC: statistical process control

Type A evaluation: a method of evaluation of uncertainty by statistical analysis of a series of observations (reference the GUM).

Type B evaluation: a method of evaluation of uncertainty by means other than the statistical analysis of a series of observations (reference the GUM).

4.0 Principles governing acceptance sampling inspection and plan design

4.1 During the review process (2006 to 2009), the Sampling Project JWG, comprising industry and MC representatives, considered a number of national and international sampling plans to use as a foundation for the development of the acceptance sampling plan. The base plan selected is the internationally accepted ISO 2859-2 plan. The design parameters associated with ISO 2859-2 were consistent with the level of conformity that the JWG wanted to achieve. The desired level of conformity is generally described as a relative confidence level that a certain percentage of devices under test are conforming.

4.2 With the ISO plan, the design parameter assumed a consumer's risk of 10%, i.e. a 10% probability of accepting a lot whose quality is at the LQ. The various LQ tables that are contained in ISO 2859-2 and form the basis of S-S-03 and S-S-04 were based on that probability. During the review process, a check of the ISO 2859-2 tables revealed inconsistencies associated with the LQ = 2.0% table for lot size 51–90 and the LQ = 3.15% table for lot size 51–90. In both cases, the probability of acceptance of a lot is significantly above the 10% design parameter that was defined by the JWG. As a consequence, the JWG unanimously agreed on the following options to address these inconsistencies:

  1. users should use sample sizes associated with one of the larger lot sizes; or
  2. the following sample sizes/lot size combinations can be used:
    1. S-S-03 (rev. 2), Table A.1; for LQ = 2.0% and lot size 51–90, use sample size of 58;
    2. S-S-04 (rev. 2), Tables A.1 and A.3; for LQ = 3.15% and lot size 51–90, use sample size of 48.

The JWG recommended that this inconsistency be communicated to the ISO 2859-2 committee for final resolution.
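The probability-of-acceptance check described above can be sketched numerically. The following is a minimal illustration assuming a hypergeometric model for a single sampling plan applied to an isolated lot; the function name and the example numbers are illustrative and are not values taken from the S-S-series tables.

```python
from math import comb

def prob_acceptance(N, D, n, Ac):
    """Probability of accepting a lot of size N containing D
    nonconforming meters under a single sampling plan with sample
    size n and acceptance number Ac (hypergeometric model)."""
    total = comb(N, n)
    return sum(comb(D, x) * comb(N - D, n - x)
               for x in range(min(Ac, D, n) + 1)) / total

# Illustrative check: a lot of 90 meters containing 2 nonconforming
# meters, sampled with n = 58 and acceptance number 0.
pa = prob_acceptance(90, 2, 58, 0)
```

Evaluating `pa` over a range of `D` values traces the plan's operating characteristic, which is the kind of check section 4.3 recommends before committing to a plan.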

4.3 In addition to the above comments, users of S-S-series plans should be aware that the AQL value tends to increase with the acceptance number and the probability of acceptance at the LQ point may be greater than 10%. Furthermore, ISO plans are designed with constant proportions (i.e. percentages). Given those features of ISO 2859-based plans, users should consult the individual plan operating characteristic to ensure their process quality objectives are met. The following are some considerations and recommendations:

  1. S-S-04 (rev. 2) – Tables B.1, B.2 and B.3 sample sizes are not associated with lot size. Users should consult the individual plan operating characteristic to ensure process quality objectives are met.
  2. S-S-04 (rev. 2) – Table B.1 sample sizes for the third row yield a probability of acceptance above 10% at the LQ value. To maintain consistency with the other plans, it is recommended that the sample sizes in the third row of Table B.1 be increased to 210, 105 (option 1) and 82, 41 (option 2), and that the sample sizes in Table B.3 be increased to 201, 105.
  3. S-S-04 (rev. 2) – Tables A.3 and B.3 use LQ 3.15 for smaller lot sizes and LQ 2.0 for larger lot sizes. ISO plans do not necessarily provide a good transition to a tighter LQ value as lot size increases. For non-performance observations, the following sample sizes for S-S-04 Tables A.3 and B.3 are recommended:
    1. Table A.3, lot size 1201 to 3200, it is recommended that the sample sizes be increased to 150 (single) and 95, 95 (double);
    2. Table B.3 second row, it is recommended that the sample size be increased to 150, 75.

5.0 Guidelines on S-S-02 (rev. 1)

5.1 General guidelines for the determination of measurement uncertainty in conformity assessment (S-S-02 (rev. 1), section 4.1.1)

5.1.1 With reference to S-S-02 (rev. 1), section 4.1.1, the determination of measurement uncertainty typically involves the following generic steps:

  1. Identification of the potential sources of uncertainty.
  2. Determination of standard uncertainty (us) for each component.
  3. Determination of the relationship between the components (statement of the reduction equation).
  4. Identification and evaluation of covariance terms.
  5. Determination of the sensitivity coefficients for each component and creation of an uncertainty budget table.
  6. Combination of the standard uncertainties and sensitivity coefficients to determine the combined uncertainty (uc) in accordance with the GUM.
  7. Determination of the expanded uncertainty (U).
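Steps 6 and 7 above can be sketched numerically. The following is a minimal illustration of the GUM root-sum-of-squares combination for uncorrelated components; the function names and sample values are assumptions for the example, not part of S-S-02.

```python
from math import sqrt

def combined_uncertainty(components):
    """Combine (sensitivity coefficient, standard uncertainty) pairs
    by root-sum-of-squares, assuming all covariance terms are zero."""
    return sqrt(sum((c * u) ** 2 for c, u in components))

def expanded_uncertainty(u_c, k=2):
    """Expanded uncertainty U = k * u_c (k = 2 gives roughly
    95% coverage for a normal distribution)."""
    return k * u_c

# Two illustrative rows from an uncertainty budget table:
budget = [(1.0, 0.05), (0.5, 0.02)]
u_c = combined_uncertainty(budget)   # sqrt(0.05**2 + 0.01**2)
U = expanded_uncertainty(u_c)
```

If covariance terms are identified in step 4, they must be added inside the square root per the GUM; the sketch above covers only the uncorrelated case.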

5.1.2 Evidence should be provided in the statement of uncertainty to indicate that the items listed in the GUM, section 7.2, have been completed.

5.2 General list of measurement uncertainty contributors in gas meter conformity assessment (S-S-02, section 4.1.2)

With reference to section 4.1.2 of S-S-02 (rev. 1), the following is a partial list of measurement uncertainty contributors that may influence conformity assessment of a typical gas measuring apparatus. Their applicability will depend on the design of the measuring apparatus and the choice of the reference standard. The recommended method for their determination is presented with each of the contributors listed.

  1. Repeatability of device under test:
    1. Repeatability of device under test (Type A evaluation)
  2. Resolution uncertainties:
    1. Resolution of the pressure, temperature and barometer indications (process instruments) (Type B evaluation) (square distribution)
    2. Pulse counting resolutions of the reference standard and device under test (Type B evaluation) (triangular distribution)
  3. Environmental performance uncertainties (influence quantities):
    1. Temperature and pressure sensitivity of reference standard calibration/performance (Type B evaluation) (square distribution – limited information)
    2. Temperature and pressure sensitivity of the process instrumentation (Type B evaluation) (square distribution – manufacturer's data)
  4. System design and operation uncertainties:
    1. Put in / take out reproducibility of the device under test (Type A evaluation) (normal distribution – by experiment)
    2. Long-term stability of process instrumentation (Type A evaluation or Type B evaluation)
    3. Sensitivity of reference standard's performance to operating rate (Type B evaluation)
    4. Influence of the location of sensors relative to the desired measurement point (Type A evaluation or Type B evaluation)
  5. Uncertainties of the calibration of the reference standards contained in the measuring apparatus (often referred to as fossilized uncertainty):
    1. Calibration of reference standard (Type A evaluation or Type B evaluation) (normal distribution)
    2. Calibration of process instrumentation (Type A evaluation or Type B evaluation) (normal distribution)
  6. Operator performance:
    1. Reproducibility of operator's ability to read instrumentation – parallax errors (Type A evaluation or Type B evaluation) (square distribution or normal distribution)
  7. Uncertainties from assumptions:
    1. Compressibility and Z / Z assumptions (Type B evaluation) (square distribution)
    2. Equation of state and algorithms (Type B evaluation) (square distribution)
    3. Temperature and pressure effects – changes in the connecting volume (Type B evaluation) (square distribution)
    4. The use of an alternate pressure, temperature or test medium on the performance of the device under test (Type A evaluation or Type B evaluation)
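The distribution tags in the list above map to standard divisors when converting a half-width to a standard uncertainty. The sketch below assumes that "square distribution" denotes the GUM's rectangular (uniform) distribution; the dictionary and function names are illustrative.

```python
from math import sqrt

# GUM divisors converting a half-width a into a standard uncertainty
# u = a / divisor for the distributions named in the list above.
DIVISOR = {
    "rectangular": sqrt(3),   # "square" distribution, limited information
    "triangular": sqrt(6),    # e.g. pulse counting resolution
    "normal_k2": 2.0,         # value quoted with coverage factor k = 2
}

def standard_uncertainty(half_width, distribution):
    return half_width / DIVISOR[distribution]

# A display resolution d implies a half-width of d / 2:
u_res = standard_uncertainty(0.5 / 2, "rectangular")
```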

5.3 General list of measurement uncertainty contributors in electronic electricity meter conformity assessment (S-S-02, section 4.1.2)

With reference to section 4.1.2 of S-S-02, the following is a partial list of measurement uncertainty contributors that may influence conformity assessment of a typical electronic electricity meter. Their applicability will depend on the design of the measuring apparatus, the particular meter types being evaluated and the meter functions being evaluated. The recommended method for their determination is presented with each of the contributors listed:

  1. Electricity meter calibration console
  2. Electricity MUT

5.3.1 Measurement uncertainty contributors from an electricity calibration console

The major contributors to measurement uncertainty from an electricity calibration console are:

  1. Burden Effects (EL-ENG-12-01 sections 7.3.2.2 to 7.3.2.4). The uncertainty due to burden effects is determined from the data established from the difference in errors determined pursuant to the requirements of sections 7.3.2.2 to 7.3.2.4. An uncertainty value can be established for each of the configurations for which burden tests are conducted. Alternatively, the largest spread may be used as the uncertainty figure due to burden effects for all calibration errors (Type B evaluation).
  2. Variation of Positions (EL-ENG-12-01, section 7.4.2.1). For multi-position consoles, an uncertainty figure shall be established for the variation of console errors from position to position. This will be determined using the data (i.e. results of the largest spread of errors) from section 7.4.2.1 of EL-ENG-12-01 (Type B evaluation).
  3. Switching Effects (EL-ENG-12-01, section 7.7). These effects are applicable to fully automatic and semi-automatic consoles. The uncertainty figure shall be established using the data from section 7.7 of EL-ENG-12-01 (Type B evaluation). Note: Switching effects are only applicable to those consoles with current and/or potential transformers between the MUT position and the console's reference standard. For example, some types of consoles operating in series/parallel mode, or making use of amplifiers with reference standards connected directly in series/parallel with the MUT, are immune to current switching effect uncertainty.
  4. Regulation (EL-ENG-12-01, section 7.6). This is applicable to verification of demand meters. This uncertainty figure is not applicable to calibration errors for energy functions. The uncertainty value is established from data that shows the largest percent deviation observed between any one minute energy value and the expected energy value pursuant to section 7.6.1.2 or 7.6.1.3 of EL-ENG-12-01, as applicable (Type B evaluation).
  5. Console Reference Meter. The uncertainty of the console reference meter shall be established by Type A evaluation.
  6. Effective Uncertainty due to Repeatability (EL-ENG-12-01, section 7.9.3.1.6). The uncertainty due to repeatability is to be established on the basis of five repetitions of a test point. For calibration consoles assessed with fewer than 100 test points, the uncertainty due to repeatability is to be established for each test point included in the console's certificate of calibration. For consoles having more than 100 test points, the uncertainty due to repeatability is to be established on the basis of the test points and test positions specified in sections 7.9.3.1.6.2.1 and 7.9.3.1.6.2.2 of EL-ENG-12-01.
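The Type A evaluations above (console reference meter, repeatability from five repetitions of a test point) reduce to the experimental standard deviation of the observations. A minimal sketch, with illustrative readings:

```python
from math import sqrt
from statistics import stdev

def type_a_uncertainty(readings):
    """Type A standard uncertainty of the mean of repeated
    observations: experimental standard deviation / sqrt(n)."""
    return stdev(readings) / sqrt(len(readings))

# Five repetitions of one test point (illustrative console errors, %):
u_rep = type_a_uncertainty([0.10, 0.12, 0.11, 0.13, 0.09])
```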

5.3.2 Meter under test

5.3.2.1 The MUT uncertainty is a Type A evaluation and is established by the meter verifier for the specific meter being tested.

5.3.2.2 When assessing demand meters, the uncertainty due to the resolution of measurements, meter displays and/or readings is also a consideration in establishing the total meter uncertainty.

5.4 General guidelines for the determination of the effects introduced between the test conditions and usage conditions of the meter under test (S-S-02, section 5.1.4 (d))

5.4.1 For gas meters, the effect of the use of an alternative pressure, temperature or test medium on the performance of the MUT compared to in-service operating conditions needs to be assessed (future).

5.4.2 For electricity meters, the effect of the use of a measuring apparatus as an alternative source on the performance of the MUT compared to in-service operating conditions needs to be assessed (future).

5.5 General comments on measurement uncertainty and measure of absolute deviation from target

5.5.1 For diaphragm gas meters, the measurement uncertainty may be a consideration in the determination of the MADT (median). Generally, however, the uncertainty of the difference between the high-load and low-load tests will be much smaller than the overall measurement uncertainty. This is because there is a tendency for bias errors and long-term drift to counter one another in such a comparison. This results in an uncertainty that approaches the square root of 2 times the test repeatability. Where it is established that the repeatability of the process uncertainty is much smaller than the tolerance band, the uncertainty can be regarded as insignificant.

5.5.2 For electronic electricity meters, the measurement uncertainty may be a consideration with MADT (mean) determination. Since the MADT is determined on the basis of absolute deviation values, the uncertainties that would apply to the determination of the MADT tend to cancel each other out. This results in an uncertainty that approaches the square root of N (number of test points used in determining the MADT) times the test repeatability. Where it is established that the repeatability of the process uncertainty is much smaller than the tolerance band, the uncertainty can be regarded as insignificant.
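The square root of 2 factor in 5.5.1 follows from root-sum-of-squares combination of two independent repeatability contributions once the common bias terms cancel. A minimal sketch, with an illustrative repeatability value:

```python
from math import isclose, sqrt

def difference_uncertainty(s_high, s_low):
    """Uncertainty of the difference of two independent tests; bias
    errors common to both tests cancel, leaving only repeatability."""
    return sqrt(s_high ** 2 + s_low ** 2)

# With equal repeatability s for the high-load and low-load tests,
# the uncertainty of the difference approaches sqrt(2) * s:
s = 0.05
assert isclose(difference_uncertainty(s, s), sqrt(2) * s)
```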

6.0 Guidelines on lot homogeneity, lot formation and limiting quality value

6.1 The purpose of random sampling is to ensure that the sample is representative of a homogeneous lot. Requirements associated with lot homogeneity can be found in the device verification specification.

6.2 If an ASP wishes to combine in one lot various models or vintages of meters, it is the ASP's responsibility to submit a request to MC with documentation supporting the claim that the various models or vintages are homogeneous. The device owner is responsible for keeping records to support this. Since the meters usually remain in the owner's inventory for many years, it is advisable that a record system be structured to ensure those records can be retrieved in the future.

6.3 The various actions and switching rule decisions that are outlined in S-S-03 (rev. 2) and S-S-04 (rev. 2) apply only to the initial inspection with homogeneous grouping.

6.4 QA managers and auditors must confirm that appropriate dispositions and corrective actions have been taken and that qualification requirements have been met regarding lot formation and LQ selection. Specifically, lot splitting (lots subdivided arbitrarily in order to create a long run), LQ jumping (moving to different LQ values without a management plan) and changing mid-stream (changing a sampling strategy part way through an inspection process) are not permitted in S-S-03 (rev. 2) and S-S-04 (rev. 2).

6.5 To ensure lot sizes are within a consistent range, a factor of 0.5 to 1.5 of the historical or typical lot size can be employed. This would be a move of up/down one row in the ISO 2859-2 table.

6.6 In choosing an inspection strategy, organizations should consider the factors identified in ISO/TR 8550 and ISO 2859-10. Even though there is mathematical equivalence between single, double and multiple plans, many factors should be considered when implementing an available plan. Those factors include consideration of the plan's complexity, use of 100% inspection, number of sampling inspections, ease of drawing samples, quality history, etc. Complexity can arise when differing or additional samples are needed, or when events lead to non-conformances, dispositions and corrective actions. AMVs should have the organizational ability to carry out the inspection strategy as well as to properly document those activities and related processes.

Note: It is recommended that operations initially start at the baseline LQ 3.15% with single sampling inspection under S-S-04 (rev. 2), Annex A. Consideration can be given to other sampling options once operational experience is gained.

7.0 Guidelines on S-S-03

7.1 Guidelines for historical evidence as noted in S-S-03 (rev. 2), section A.3.7 include the following:

  1. An ASP currently sampling in accordance with LMB-EG-04 is qualified in this regard for meters listed in the scope of its accreditation. The grandfathering allowed during initial implementation of S-S-03 (rev. 2) can only be used once per homogeneous meter group.
  2. Historical evidence should be based on data for the facility and meter type under review. Sampling evidence from other organizations is not acceptable for qualification purposes. The intent is to confirm the integrity of both the device being sampled and the process being used by applicants.

7.2 With respect to limited meter quantities, users have the option to use 100% inspection or the sampling plan indicated in S-S-03 (rev. 2).

7.3 When applying the acceptance sampling plan to a brand new technology, or after a new process implementation, the following steps must be followed:

  1. Perform a 100% inspection of the quantity specified in S-S-03 (rev. 2);
  2. Ensure the qualification requirements specified in S-S-03 (rev. 2) have been met.

7.4 Refer to section 11.4 below for classification of observations.

7.5 Fully and semi-curtailed inspection are not permitted under S-S-03 since it includes requirements for initial and re-qualification of meter production processes as a prerequisite to sampling inspection using S-S-04. Sample testing under S-S-03 is used to provide objective evidence and information relating to the stability and capability of meter production and quality control processes relating to meter performance and non-performance quality characteristics. A complete count of marginally conforming (Type 1 and Type 2) and nonconforming sample meters provides for an unbiased estimate of the average quality output of the meter production processes being assessed.

8.0 Guidelines for process related corrective action resolution

8.1 Documentation is a key part of any corrective action resolution process. The documentation should include procedures that describe the processes for detection, feedback control and the correction of conditions adverse to quality that have affected the device or the process. The procedures should include the following:

  1. Stopping rules and actions in the event that lots continue to fail;
  2. Event identification that would cause discontinuation of sampling;
  3. Escalation procedures.

8.2 Corrective action resolution should typically include the following types of activities:

  1. Immediate disposition;
  2. Root cause analysis;
  3. Implementation and evaluation of corrective actions;
  4. Confirmation and approval of resolution.

8.3 Effective corrective action would normally be implemented prior to presentation of the next lot. However, since some actions may involve other parties (such as the supplier) or capital expenditures, the complete implementation of corrective actions may take more time.

Such time delays should not be used to delay any necessary actions. It is important that ASPs ensure the quality of the process; consequently, they should provide procedures to achieve that goal.

8.4 Corrective action may include discontinuation of sampling inspection under S-S-04 (rev. 2), Annex A or B, which would necessitate re-qualification of the production process under S-S-03 (rev. 2).

8.5 Conditions that could result in corrective action include:

  1. Failures, malfunctions and deficiencies in production processes, equipment or software;
  2. Inadequate procedures and documentation;
  3. Inadequate control of work;
  4. Non-compliance with procedures;
  5. Scheduling problems; and
  6. Process not statistically capable of producing product in accordance with the specifications.

9.0 Guidelines on stopping and escalation procedures

9.1 Stopping rules based on a sequence of lots cannot address all outcomes. Without limiting other possibilities, ASPs should have documented procedures to address occurrences such as process disruptions or repeated failures due to the same or similar causes. The investigation and resulting recommendations could result in the discontinuation of sampling inspection under Annexes A and B of S-S-04 (rev. 2).

9.1.1 If a cumulative number of lots in a sequence are not accepted on original inspection, then stopping rules such as the following should be implemented and inspection discontinued pending action to improve quality:

  1. In the event users are operating at LQ = 8.0% and 3 lots out of 5 or fewer consecutive lots (or optionally 2 out of 3) are not accepted, sampling can continue at LQ = 3.15%, unless evidence suggests that sampling is no longer appropriate (re-qualification would then be necessary).
  2. In the event users are operating at LQ = 3.15% and 3 lots out of 5 or fewer consecutive lots (or optionally 2 out of 3) are not accepted, sampling would be suspended. Corrective action and process re-qualification would be initiated. To re-qualify the process, S-S-03 (rev. 2) would be utilized.
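The two rules above amount to counting non-accepted lots inside a sliding window of consecutive lots. A minimal sketch, where the function name and boolean encoding are illustrative only:

```python
def stop_required(accepted, window=5, threshold=3):
    """True if, within any `window` (or fewer) consecutive lots,
    at least `threshold` lots were not accepted on original
    inspection. `accepted` lists lot outcomes, oldest first."""
    failures = [not a for a in accepted]
    return any(sum(failures[i:i + window]) >= threshold
               for i in range(max(1, len(failures) - window + 1)))

# 3 non-accepted lots within 5 consecutive lots triggers the rule:
stop_required([True, False, True, False, False])             # True
# The optional 2-out-of-3 variant:
stop_required([True, False, False], window=3, threshold=2)   # True
```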

9.2 Information obtained from sampling several lots in sequence may indicate that process concerns are warranted. This should result in using 100% inspection or the S-S-03 (rev. 2) specification to invoke more rigorous sampling procedures. Sampling under S-S-03 (rev. 2) provides consumer protection against poor quality; however, there is a higher risk for the producer of having an acceptable lot judged unacceptable. Indication of possible deterioration in product quality is a signal to initiate corrective action and ensure corrective action is in fact taken.

The following are some tools that can be used to monitor the process:

  1. data obtained from meters and lots that are the same or similar in nature;
  2. tables and graphics such as frequency distributions and histograms;
  3. charts such as the Pareto or defect concentration;
  4. appropriate control charts such as a CUSUM chart;
  5. trend analysis; and
  6. stopping rules to monitor a series of same or similar lots.

9.3 A 100% inspection should be used until process-related concerns have been addressed.

10.0 Guidelines on sampling plan changes

10.1 Should there be a significant change in production (change in production implies a new process), then the process re-qualification procedure under S-S-03 (rev. 2) should be utilized.

10.2 With regard to the isolated lot plan, S-S-04 (rev. 2), Annex A:

  1. There are no timeline restrictions.
  2. A significant change in process requires re-qualification.

10.3 With regard to the short series of lots plans, S-S-04 (rev. 2), Annex B:

  1. The maximum tolerable time period between successive lots is 6 weeks. After a single stoppage over 6 weeks, re-start at n1.
  2. For delivery time periods expected to be longer than 6 weeks, revert to the isolated lot option in Annex A.
  3. There must be a minimum of 5 lots presented in sequence. If there are insufficient lots, revert to the isolated lot option in Annex A.
  4. A significant change in process requires re-qualification.

11.0 Guidelines on the inspection of short series of lots and isolated lots

11.1 S-S-04 (rev. 2), Annex A (isolated lots) is the default plan and may be used for all sampling inspection. Typically, an accredited manufacturer would utilize S-S-04 (rev. 2), Annex B (short series of lots) when production quantities are large and quality stability has been demonstrated over a series of lots. For an ASP processing limited volume purchases, refer to section 7.2 above.

11.1.1 Under Annex A, there is potential for a wide range of activities. Lots may be presented in isolation or the ASP may elect to inspect all lots under this annex. To provide consistency between Annexes A and B of S-S-04 (rev. 2), the following guidelines should be followed:

  1. If an isolated lot is presented, then any action is limited to what is stated in section 4.4 of ISO 2859-2. If there are lots to follow, the corrective action applied to the process or system would be similar to that stated under Annex B.
  2. If an isolated lot is presented, dealing with a rejected lot is limited to what is stated in section 4.4 of ISO 2859-2. If there are lots to follow, they would be dealt with as per the requirements of Annex B (i.e. a lot permitted to be inspected under an LQ = 8.0% plan may be inspected under an LQ = 3.15% plan, 100% inspected, or may undergo requalification of the process).
  3. If process performance has degraded significantly, sampling under Annex A or B should be discontinued and re-qualification as per S-S-03 (rev. 2) is required.

11.2 ISO references include ISO/TR 8550, 5.2c and ISO 2859-10, sections 2.5 and 3.17.

11.3 Regarding short series of lots, if a lot is not accepted due to the observed number of performance or non-performance characteristics, then typically, the sequence for a short series of lots is as follows:

  • Lot # 1 – not accepted
  • Lot # 2 – screened and found satisfactory
  • Lot # 3 – is then sampled using sample size of n1

A 100% conforming product is the goal, coupled with corrective action and continual improvement activities when nonconformities are found. The focus is on acceptable levels of marginally conforming product via setting narrow test limits within the device specification.

If the above sequence cannot be successfully accomplished, process requalification may be necessary. A non-accepted lot may be resubmitted for sampling inspection under Annex A at the same LQ value.

11.4 To ensure sample meters are classified in a mutually exclusive manner, classification is performed by using the decision sequence shown in Appendix A.

11.4.1 If classifying a meter in several categories would cause failure, it would be reasonable and prudent to fail the lot. For example, if the plan is Ac1 = 1, Ac2 = 5, and during inspection one type 1 and five type 2 meters are observed, but the type 1 meter is also a type 2 meter, it would be reasonable to fail the lot (since six type 2 meters would cause failure). Reference ISO 2859-10, section 3.3.
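The worst-case reading in 11.4.1 can be sketched as follows. The helper and the set encoding of classifications are hypothetical, for illustration only:

```python
def lot_fails(meters, ac1=1, ac2=5):
    """Worst-case check: a meter that qualifies as both Type 1 and
    Type 2 is counted against whichever limit it would push over.
    `meters` is a list of classification sets: {1}, {2} or {1, 2}."""
    only1 = sum(1 for m in meters if m == {1})
    only2 = sum(1 for m in meters if m == {2})
    both = sum(1 for m in meters if m == {1, 2})
    return (only1 + both > ac1) or (only2 + both > ac2)

# One meter that is both Type 1 and Type 2, plus five Type 2 meters:
# counting the dual meter as Type 2 gives six, exceeding Ac2 = 5.
lot_fails([{1, 2}] + [{2}] * 5)   # True
```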

11.5 Fully curtailed inspection is not permitted. Semi-curtailed inspection is not permitted when using single sampling plans, or on the first sample when using double or multiple sampling plans; it is permitted on the second and subsequent samples when using double or multiple sampling plans.

11.5.1 The initial and continued use of acceptance sampling inspection as per S-S-04 is conditional on meter production processes being stable and capable of producing meters of stable quality levels on a continuing basis. It may be necessary to suspend sampling inspection after a single failed lot with significantly poor quality and/or after a certain number of lot failures in a given number of successive lot inspections. A complete count of marginally conforming (Type 1 and Type 2) and non-conforming sample meters provides an unbiased estimate of the quality output of the meter production processes, which is necessary in the event of deterioration in quality.

11.5.2 In the case of double or multiple sampling, it is sufficient to establish unbiased estimates of the quality output of meter production processes based on completed first samples and semi-curtail inspection of the second or subsequent samples as required (i.e. on certainty of lot failure). All inspection certificates or reports shall indicate whether or not sampling inspection was semi-curtailed.
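The semi-curtailment rule for double or multiple plans can be sketched as follows. This is an illustrative helper, not a procedure from S-S-04: inspection of a second sample stops only once the rejection number Re2 makes lot failure certain.

```python
def inspect_second_sample(sample, defects_first, re2, is_defective):
    """Semi-curtailed inspection of the second sample of a double
    plan. Returns (lot_accepted, meters_inspected, curtailed).
    Acceptance requires inspecting every meter in the sample;
    rejection may stop early once cumulative defects reach Re2."""
    defects = defects_first
    for count, meter in enumerate(sample, start=1):
        if is_defective(meter):
            defects += 1
            if defects >= re2:
                return False, count, True   # failure certain: stop
    return True, len(sample), False

# 5-meter second sample, 1 defect carried from the first sample,
# rejection number 3; failure becomes certain at the 4th meter:
inspect_second_sample([0, 1, 0, 1, 0], 1, 3, bool)   # (False, 4, True)
```

The third element of the returned tuple supports the requirement that inspection certificates indicate whether sampling inspection was semi-curtailed.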

12.0 Guidelines on outgoing quality

12.1 The objective for outgoing quality is that the quality be the same regardless of whether the meters are processed by sampling or by 100% inspection. The outgoing quality requirement, expressed as the product of LQ and N, applies to marginally conforming units. This requirement is considered to be met if a lot is accepted under sampling inspection. If meters are 100% inspected (lots inspected by 100% inspection, or lots inspected but not accepted by sampling), nonconforming meters and marginally conforming meters shall be removed or repaired to ensure the outgoing quality standards are met.
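The product of LQ and N mentioned above can be illustrated directly. The function name is hypothetical; it simply evaluates the bound discussed in section 12.1:

```python
def outgoing_quality_bound(lq_percent, lot_size):
    """Product of LQ (as a percentage) and lot size N: the bound on
    marginally conforming units in an outgoing lot (section 12.1)."""
    return lq_percent / 100.0 * lot_size

# At LQ = 3.15%, the bound for a 1000-meter lot works out to
# 31.5 marginally conforming units:
bound = outgoing_quality_bound(3.15, 1000)
```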

12.2 In the case of 100% inspected diaphragm gas meters, regardless of the quantity presented, compliance with the test limit and MADT requirements of S-G-02, section 6.3.9, ensures that the outgoing requirements of S-G-02 are met with respect to marginal conformities (Type 1 and Type 2) and performance nonconformities.

Note: Meters that do not meet the test limit or MADT requirements, or have a non-conformance, cannot be sealed.

12.3 When processing lots of electronic electricity meters, there are situations where the test limit requirements are insufficient to ensure the outgoing quality requirements of S-E-02, Annex A have been met. This would occur, for example, when the test limit is equal to the verification specification limits. Example calculations are shown in Appendix E.

12.3.1 To avoid excessive administrative costs, section A.7(e) of S-E-02 allows the outgoing quality requirements to be considered met for meters that only undergo 100% inspection, are not new, and cannot be calibrated or renewed. This provision is valid only when the k value is at least 3 and the MADT limit is at most 50% of the specification limit.

13.0 Guidelines on rounding and significant figures

13.1 Discrimination refers to the degree of exactness with which the quantity is stated (i.e. the smallest readable unit or the measurement resolution). The discrimination of a quantity refers to the right-most significant digit. The applicable meter verification specification for electricity or gas meters defines the measurement discrimination.

13.2 During calculations, care is needed to ensure errors are not introduced due to a lack of significant figures. To minimize the introduction of such errors, rounding should be done only at the end of the calculation process. The term "one step" as used in S-S-02 (rev. 1) refers to a procedure in which rounding is performed only once during calculations.

14.0 Guidelines on S-S-05 – Sampling of meters granted a conditionally lengthened initial reverification period under S-EG-01

14.1 Annex A.4 of S-S-05 (rev. 1) requires meter manufacturers of models granted a conditionally lengthened initial reverification period to submit lot formation reports to MC during the first week in January of the year that the meters are to be sampled. For example, if electricity meters are being sampled ahead of the date specified in Table 2 of bulletin E-28, MC should also be provided with the lot formation reports in the first week of January of the year the meters are to be sampled.

14.2 The lots to be used for sampling are to be formed from registered contractors' meters that meet the sample eligibility criteria of section A.4.5 of S-S-05 (rev. 1). Specifically, the applicant must ensure that the meter lists submitted for sample selection include meters that are in service and have been in service for a minimum of 48 months. The lot listings of the meter populations must include the date each meter was installed as supporting evidence of the eligibility of the meters used for sampling. If a meter has been removed and subsequently returned to service, the total service time must be at least 48 months.

14.3 Any meters excluded from the population lists must be identified as such. In accordance with section A.6(b) of S-S-05 (rev. 1), a documented account of the reason for their exclusion must be provided. For example, an excluded meter may be one that was removed from service and returned, but was not in service for a sufficient amount of time to be an eligible sample meter and therefore cannot be included in the list to be used for sample selection.

14.4 When the sample is selected, 50 extra meters are randomly selected to be used as replacement meters in the event that one of the original 200 sample meters must be excluded. An explanation of the exclusion must be provided as per section A.6(b) of S-S-05 (rev. 1). In the event that the meter owner provides additional results that include more than the minimum 200 meters, sentencing of the lot will be based on the performance results of the first 200 eligible sample meters inspected. For more information on excluded sample meters, see Appendix F below.

14.5 Semi-curtailed inspection may be used in interim sampling as per S-S-05 on a case-by-case basis subject to acceptance by Measurement Canada. Any request from an applicant to semi-curtail sample testing and withdraw their application for a lengthened initial reverification period shall be made to MC in writing and be accompanied by all information required under Appendix A.6 of S-S-05. In all cases, sample meters shall be tested in sequential order and results reported. Untested sample meters shall be quarantined pending MC's review of an applicant's request.

14.5.1 MC reserves the right to deny any such request and require that all sample meters be tested where deemed necessary (e.g. where failure is due to over-registering non-conforming meters, where potentially defective or defective meters have been identified).

15.0 Guidelines on S-S-06—Sampling plans for the inspection of isolated lots of meters in service

15.1 S-S-06 was developed to focus on meter performance, as captured by the scope and intent of the Electricity and Gas Inspection Act. S-S-06 is not to be used to address meter reliability (from a meter manufacturing, meter quality or meter failure perspective). If meter reliability demonstrates noncompliance with a statutory requirement, it will be addressed through MC's enforcement processes.

15.2 The following note included in the scope of S-S-06 advises meter owners of the risks and associated liabilities when using sampling plans. Use of meters that register outside the legal tolerance limits prescribed in the Electricity and Gas Inspection Regulations may lead to enforcement action by MC.

Sampling plans, by design, contain inherent risks and limitations with regard to their usage and the conclusions they may or may not provide. Meter owners are therefore advised that, although conformity with the requirements of this specification may allow for the extension of a meter's reverification period, relying solely on the use of the sampling plans contained in S-S-06 will not provide users with an assurance of compliance with the metering accuracy obligations prescribed under the Electricity and Gas Inspection Act.

Note included in the scope of S-S-06

15.3 S-S-06 is used when considering extending reverification periods of populations of meters and is designed to assess the performance of a lot represented by the selected sample meters; however, this assessment is not necessarily representative of the in-service condition of the meters. The condition and performance of a meter in service is the responsibility of the owners/contractors. If MC determines, based on the number of stopped, defective or damaged sample meters, that there is reason to believe that this responsibility is not fulfilled in accordance with its legislation, it is justified in initiating enforcement action.

15.4 As per the requirements of section 5.5.3 of S-S-06, a type 1 (C1) marginally conforming meter is one whose performance error exceeds ±2.0% at any test point, and a type 2 (C2) nonconforming meter is one whose performance error exceeds ±2.9% at any test point. S-S-06 also requires that a type 2 (C2) nonconforming meter be counted as a type 1 (C1) marginally conforming meter. However, regardless of the number of performance test points at which a meter is assessed during compliance sampling, it can never count as more than one C1 and one C2 for sentencing of the lot.
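The counting rule in section 15.4 can be sketched as follows, using the C1 and C2 limits of ±2.0% and ±2.9% quoted above; the function name is illustrative only.

```python
# Illustrative sketch of the counting rule in section 15.4: regardless of
# how many test points exceed the limits, a single meter contributes at
# most one C1 and one C2 to the sentencing of the lot. The 2.0% and 2.9%
# limits are taken from section 5.5.3 of S-S-06.

def count_c1_c2(test_point_errors):
    """test_point_errors: performance errors (%) at each test point.
    Returns the (c1, c2) counts contributed by this one meter."""
    c2 = any(abs(e) > 2.9 for e in test_point_errors)
    # A C2 meter is also counted as a C1 meter; otherwise C1 requires
    # an error beyond 2.0% at any test point.
    c1 = c2 or any(abs(e) > 2.0 for e in test_point_errors)
    return int(c1), int(c2)

# A meter failing at three test points still counts once in each category:
counts = count_c1_c2([3.5, -3.1, 2.4])   # -> (1, 1)
```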

15.5 Semi-curtailed inspection is permitted for compliance sampling as per S-S-06 when it is certain a lot will fail to meet Level 5 sampling plan criteria or the meter owner's own targeted level sampling plan criteria for keeping the meters in service.

15.5.1 When a lot is certain to fail due to observed non-conformances resulting from a potential or actual defect, semi-curtailed inspection may not be appropriate, as the total number of non-conforming meters in the sample could be relevant to subsequent investigations. However, lot failure may not always be due to non-conformances resulting from some defect in the meter. It could be due to deterioration in meter performance (e.g. calibration drift, mechanical wear, etc.) over time (e.g. meters that have been previously reverified by compliance sampling). In such circumstances, semi-curtailed inspection may be appropriate.

15.5.2 An authorized service provider shall include within their accreditation documentation the conditions and criteria for their use of semi-curtailed inspection in compliance sampling. Sample meters shall be tested in sequential order from the unsorted list and results reported for all meters tested. Any observed non-conformance deemed to be caused by a potential or confirmed defect shall rule out the use of semi-curtailed inspection. Any meter excluded as a result of a potential or confirmed defect shall rule out the use of semi-curtailed inspection. All inspection certificates or reports shall indicate whether or not sampling inspection was semi-curtailed.

16.0 Revisions

16.1 The purpose of revision 3 is to:

  • define fully and semi-curtailed inspections;
  • provide guidance for using semi-curtailed inspection for S-S-04, S-S-05 and S-S-06.

16.2 The purpose of revision 2 was to:

  • amend the bulletin to address the changes from S-E-01 (rev. 3) to EL-ENG-12-01;
  • include guidelines on S-S-05 (rev. 1) and S-S-06;
  • include a new Appendix F with questions and answers on S-S-06.

16.3 The purpose of revision 1 was to:

  • reflect amendments to S-S-02 (rev. 1), S-S-03 (rev. 2), S-S-04 (rev. 2) and S-E-02 (rev. 3). Various amendments were made throughout this document;
  • make various improvements throughout to correct grammar, document titles and facilitate readability and flow of material;
  • add to section 3.4 the acronyms AQL, GUM, MC, MUT, n, N and SPC;
  • change 4.2b(I) sample size from 57 to 58;
  • add section 4.3 to address the sample size recommendations set out in Annexes A and B of S-S-04 (rev. 2);
  • incorporate wording in sections 5.3, 5.4 and 5.5 for electricity meter measurement uncertainty;
  • add to section 6.4 a prohibition against changing a sampling strategy mid-stream;
  • add section 6.6 to address selection of complex inspection strategies;
  • remove incorrect phrase "small volume in" sections 7.2 and 11.1;
  • add assessment techniques to section 9.2;
  • add sections 11.3 and 11.4 to address lot sequences and classification;
  • add section 12 "Outgoing Quality Requirements";
  • add Appendix A containing classification sequence;
  • add Appendices B, C, D and E containing calculations and questions and answers.

Appendix A—Classification of observations in S-S-03 (rev. 2) and S-S-04 (rev. 2)

A.1 To ensure meters are classified in a mutually exclusive manner, classification is performed by using the following decision sequence for each sample meter:

  1. Does the meter have a nonconforming performance characteristic?
    If yes, classify the meter accordingly. In either case, go to question 2.
  2. Does the meter have a nonconforming non-performance characteristic?
    If yes, classify the meter accordingly. In either case, go to question 3.
  3. Does the meter have a type 1 marginally conforming performance characteristic?
    If yes, classify the meter accordingly. In either case, go to question 4.
  4. Does the meter have a type 2 marginally conforming (MADT) characteristic?
    If yes, classify the meter accordingly and go to question 6. If no, go to question 5.
  5. Classify the meter as conforming if there are no nonconforming and no marginally conforming characteristics and go to question 7.
  6. Determine the type 1 and type 2 counts for the sample meter. Classify the meter according to the most serious defect and go to question 7.

    Note: Classifications falling under type 1 are most serious, followed by type 2.

  7. Is there another sample meter?
    If yes, proceed to the next sample meter and go to question 1. If no, go to question 8.
  8. Once the category counts are determined for all meters in the sample, lot acceptability is determined by comparing counts for each nonconforming and marginally conforming unit to the associated acceptance number in the sampling plan.

Note: The acceptance number is always zero for nonconforming performance characteristics.
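The decision sequence above can be sketched as a single classification function: each sample meter receives exactly one class, the most serious of those observed (consistent with the resolutions in Appendix C, where a performance nonconformity outranks a non-performance nonconformity, which outranks type 1 and then type 2 marginal conformities). The class labels follow the bulletin's abbreviations; this is an illustrative sketch, not part of the sampling specifications.

```python
# Illustrative sketch of the Appendix A decision sequence: one mutually
# exclusive class per meter, then lot sentencing by comparing category
# counts with the acceptance numbers.

def classify_meter(perf_nonconforming, nonperf_nonconforming,
                   type1_marginal, type2_marginal):
    if perf_nonconforming:         # question 1
        return "dp"
    if nonperf_nonconforming:      # question 2
        return "dnp"
    if type1_marginal:             # questions 3, 4 and 6: type 1 outranks type 2
        return "mc1"
    if type2_marginal:             # question 4
        return "mc2"
    return "conforming"            # question 5

def sentence_lot(classes, ac1, ac2, ac3):
    """Question 8: compare category counts with the acceptance numbers.
    The acceptance number is always zero for performance nonconformities."""
    counts = {c: classes.count(c) for c in ("dp", "dnp", "mc1", "mc2")}
    return (counts["dp"] == 0 and counts["mc1"] <= ac1
            and counts["mc2"] <= ac2 and counts["dnp"] <= ac3)
```

For example, with the single sampling plan n = 125, Ac1 = 1, Ac2 = 19, Ac3 = 1 used in Appendix B, a sample containing one mc1 meter and 124 conforming meters sentences the lot as acceptable.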

Appendix B—Selected sampling plan application questions and answers based on S-S-04 (rev. 2), S-E-02 and S-G-02

For all answers shown, meters inspected by 100% inspection or sampling inspection must meet the outgoing quality requirements as specified in the applicable meter verification specification. Measurement uncertainty must be included in all conformity assessments.

B.1 Questions and answers

Q.1 A utility purchases 2,000 meters as a homogeneous isolated lot, selects and inspects a random sample of the required size using single sampling (LQ = 3.15%) and discovers a type 1 marginally conforming unit in the sample. What is the appropriate sample size and what is the decision regarding the lot?

A.1 Single sampling chosen under LQ = 3.15%

  • n = 125 (for N = 501 to 3,200)
  • Ac1 = 1
  • Ac2 = 19
  • Ac3 = 1
  • Ac = 0 for performance nonconformities
  • Since dmc1 ≤ Ac1, the lot is accepted

Q.2 A utility purchases 2,000 meters as a homogeneous isolated lot, selects and inspects random samples of the required sizes using double sampling (LQ = 3.15%) and discovers two type 1 marginally conforming units in the first sample and none in the second. What are the appropriate sample sizes and what is the decision regarding the lot?

A.2 Double sampling chosen under LQ = 3.15%

  • n1 = 80 (for N = 501 to 3,200)
  • Ac1 = 0, Re1 = 2, Ac (performance) = 0
  • Ac2 = 11, Re2 = 18
  • Ac3 = 0, Re3 = 2

Since dmc1 ≥ Re1, the lot is rejected on the first sample.

  • n2 = 80 (for N = 501 to 3,200)
  • Ac1 = 1, Re1 = 2, Ac (performance) = 0
  • Ac2 = 22, Re2 = 23
  • Ac3 = 1, Re3 = 2

Note: Since the lot was rejected on the first sample, a second sample need not have been selected or inspected.
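The first-sample decision applied in A.2 can be sketched as follows (the helper name is illustrative): accept at or below the acceptance number, reject at or above the rejection number, and otherwise draw a second sample.

```python
# Illustrative sketch of the first-sample decision under double sampling.
# The values in the example mirror answer A.2.

def first_sample_decision(d, ac, re):
    """d: observed count (e.g. type 1 marginally conforming units)."""
    if d <= ac:
        return "accept"
    if d >= re:
        return "reject"
    return "second sample required"

# Two type 1 units against Ac1 = 0, Re1 = 2: the lot is rejected.
decision = first_sample_decision(2, ac=0, re=2)   # -> "reject"
```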

Q.3 A utility purchases 2,000 meters as an isolated lot. All meters are homogeneous with respect to all characteristics and production, except for 500, which have an additional attribute type characteristic. What are the valid sampling inspection approaches? Assume only single sampling will be used.

A.3a Split lot into two fully homogeneous sub-lots of N1 = 1,500 and N2 = 500. See Figure 1.

  • Single sampling chosen under LQ = 3.15%
  • Sampling plan for N1 = 1,500 is: n1 = 125, Ac1 = 1, Ac2 = 19, Ac3 = 1
  • Sampling plan for N2 = 500 is: n2 = 80, Ac1 = 0, Ac2 = 11, Ac3 = 0
  • For all samples, Ac = 0 for performance nonconformities.

Inspect each sample for all characteristics and decide on the acceptance of each sub-lot accordingly.

A.3b Alternative solution:

For the inspection of all common characteristics, leave the lot together. Treat meters with the additional characteristic as a sub-lot. See Figure 2.

Single sampling chosen under LQ = 3.15%

Sampling plan for N1 = 2,000 is: n1 = 125, Ac1 = 1, Ac2 = 19, Ac3 = 1

For all samples, Ac = 0 for performance nonconformities.

Sampling plan for N2 = 500 is: n2 = 80, Ac3 = 0

Select a random sample of 125 from N1. Assume the sample from N1 happened to include 30 from N2. An additional random sample of 50 is therefore needed from N2. Refer to A.5 below for sample selection options.

Inspect each sample for all characteristics and decide on the acceptance of each sub-lot accordingly.
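The top-up selection in A.3b can be sketched as follows: after the main sample is drawn from the full lot, count how many of its meters belong to the sub-lot and randomly draw the shortfall from the remaining sub-lot meters. The function name and the use of Python's random module are illustrative assumptions.

```python
import random

# Illustrative sketch of the top-up selection in A.3b.

def top_up_sublot_sample(main_sample, sublot, n2, rng=random):
    in_sublot = [m for m in main_sample if m in sublot]
    shortfall = n2 - len(in_sublot)
    remainder = [m for m in sublot if m not in main_sample]
    extra = rng.sample(remainder, shortfall) if shortfall > 0 else []
    return in_sublot + extra

# Example: 30 of the 125 sampled meters fall in the 500-meter sub-lot,
# so 50 more are drawn at random to reach the required n2 = 80.
```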

Figure 1: An isolated lot of 2,000 meters, 500 of which have an additional attribute type characteristic, split into two homogeneous sub-lots of 1,500 meters and 500 meters

Figure 2: An isolated lot of 2,000 meters, 500 of which have an additional attribute type characteristic, as a lot of 2,000 meters for the common characteristics and a sub-lot of 500 meters for the additional characteristic

Q.4 What plan should be followed for a lot size of 1,000 and single sampling chosen under LQ = 3.15%?

Assume ±1.00% specification limit; 0.50% MADT limit

Assume ±1.60% specification limit; 0.80% MADT limit

Assume console displays to 2 decimal places

A.4 Required sample size: n = 125 and Ac1 = 1, Ac2 = 19, Ac3 = 1, Ac (performance) = 0.

Randomly select 125 meters from the lot. Inspect all quality characteristics of the 125 meters.

All inspected meters shall meet 100% inspection requirements.

Case 1 – Specification limit: ±1.00%

Reject the lot if:

  • two or more meters are found with extended performance errors greater than +0.83% but not greater than +1.00% or less than −0.83% but not less than −1.00%;
  • 20 or more meters are found with type 2 (MADT) performance errors greater than 0.50%;
  • one or more meters are found with extended performance errors outside ±1.00%;
  • two or more meters are found with non-performance nonconformities.

Case 2 – Specification limit: ±1.60%

Reject the lot if:

  • two or more meters are found with extended performance errors greater than +1.33% but not greater than +1.60% or less than −1.33% but not less than −1.60%;
  • 20 or more meters are found with type 2 (MADT) performance errors greater than 0.80%;
  • one or more meters are found with extended performance errors outside ±1.60%;
  • two or more meters are found with non-performance nonconformities.

Q.5 What plan should be followed for a lot size of 1,000 and single sampling chosen under LQ = 8.0% for performance characteristics?

Assume ±1.00% specification limit; 0.50% MADT limit

Assume ±1.60% specification limit; 0.70% MADT limit

Assume console displays to 2 decimal places

A.5 Performance characteristic inspection

Required sample size: n = 50 and Ac1 = 1, Ac2 = 6, Ac (performance) = 0

Non-performance characteristic inspection; required sample size: 125 and Ac3 = 1

Method of selecting the sample meters

Several methods may be used. The key point is that the 50 meters to be inspected for performance and non-performance attributes must be randomly spread throughout the lot, and the 125 meters inspected for non-performance attributes (of which the 50 form a part) must also be randomly spread throughout the lot.

Randomly select 50 meters from the lot, then randomly select another 75 meters (no overlap) from the balance of the lot. Inspect all quality characteristics of the first sample of 50 meters and all non-performance quality characteristics of the second sample of 75 meters.

Alternatively, randomly select 125 meters from the lot and leave them in their unsorted order. Test the first 50 meters on the unsorted list for all quality characteristics and the other 75 for non-performance quality characteristics.

All inspected meters shall meet 100% inspection requirements.
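The alternative method above (one unsorted draw of 125 meters, the first 50 of which receive the full inspection) can be sketched as follows; the names are illustrative.

```python
import random

# Illustrative sketch of the alternative selection method in A.5: one
# random, unordered draw of 125 meters; the first 50 on the list receive
# the full inspection, the remaining 75 the non-performance inspection.

def allocate_inspections(lot, n_total=125, n_full=50, rng=random):
    sample = rng.sample(lot, n_total)    # unsorted random sample of the lot
    full = sample[:n_full]               # performance and non-performance
    nonperf_only = sample[n_full:]       # non-performance characteristics only
    return full, nonperf_only

# Example: a lot of 1,000 meters identified 0 to 999.
full, nonperf_only = allocate_inspections(list(range(1000)))
```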

Case 1 – specification limit: ±1.00%

Reject the lot if:

  • two or more meters are found with extended performance errors greater than +0.67% but not greater than +1.00% or less than −0.67% but not less than −1.00%;
  • seven or more meters are found with type 2 (MADT) performance errors greater than 0.50%;
  • one or more meters are found with extended performance errors outside ±1.00%;
  • two or more meters are found with non-performance nonconformities.

Case 2 – specification limit: ±1.60%

Reject the lot if:

  • two or more meters are found with extended performance errors greater than +1.08% but not greater than +1.60% or less than −1.08% but not less than −1.60%;
  • seven or more meters are found with type 2 (MADT) performance errors greater than 0.70%;
  • one or more meters are found with extended performance errors outside ±1.60%;
  • two or more meters are found with non-performance nonconformities.

Q.6 A utility purchases 2,000 meters as a homogeneous isolated lot and selects and inspects a random sample of the required size using single sampling under LQ = 8.0% for performance characteristics. What options are available as valid sampling inspection approaches?

A.6a Performance characteristic inspection

Required sample size is n = 80 and Ac1 = 3, Ac2 = 11, Ac (performance) = 0

Non-performance characteristic inspection – required sample size is n = 125 and Ac3 = 1

Since the sample sizes are not the same, refer to the method outlined in answer A.5 (above) to select the sample meters.

A.6b Alternative solution

Use single sampling for performance characteristics and double sampling for non-performance characteristics.

Performance characteristic inspection: same as above.

Non-performance characteristic inspection

Required sample size n1 is 80 and Ac3 = 0, Re3 = 2, and n2 is 80 and Ac3 = 1, Re3 = 2.

In this case, the sample sizes are the same for performance characteristics and for the first sample of non-performance characteristics. If the first sample of non-performance characteristics indicates a second sample is required, select from the unsorted list an additional sample of 80 from the balance of the lot.

Appendix C—Selected conformity assessment and classification scenarios and resolutions based on S-S-04 (rev. 2), S-E-02 and S-G-02

C.1 Scenarios and resolutions for diaphragm meters

Given the following scenarios for diaphragm gas meters, what is a meter's conformity assessment and classification during sampling inspection?

The following scenarios assume a specification limit of ±1.60%, a test limit of ±1.00 (% error) and sampling under the LQ = 3.15% plan, with a compressed specification limit of ±1.3360 and MADT (median) limits of 0.8000 and 0.6000. Assume a small lot size (zero acceptance numbers) and an uncertainty value of 0.20.

Note: No sample meter or 100% inspected meter may be placed into service unless it meets all specified requirements.

S.1 A meter with two test points, HL and LL, has errors of +0.30% and +1.70% respectively. All non-performance tests are rated as acceptable.

R.1 The meter has type 1 and type 2 marginal conformities and one non-conformance. The meter is rated as a performance nonconforming unit and counted as a dp (i.e. unacceptable unit due to performance attribute). The meter cannot be sealed.

S.2 A meter with two test points, HL and LL, has errors of +0.30% and +1.70% respectively. The non-performance tests reveal incorrect nameplate markings.

R.2 The meter has type 1 and type 2 marginal conformities and two non-conformances. The meter is rated as a performance nonconforming unit and counted as a dp. The meter cannot be sealed.

S.3 A meter with two test points, HL and LL, has errors of +0.40% and +1.10% respectively with all non-performance tests rated as acceptable.

R.3 The meter cannot be sealed, since it has error values beyond the test limit criteria.

S.4 A meter with two test points, HL and LL, has errors of 0.00% and +1.40% respectively. The non-performance tests reveal incorrect nameplate markings.

R.4 The meter has a type 1 marginal conformity and a non-conformance. The meter is rated as a non-performance nonconforming unit and counted as a dnp (i.e. unacceptable unit due to non-performance attribute). The meter cannot be sealed.

S.5 A meter with two test points, HL and LL, has errors of +0.60% and +1.40% respectively. All non-performance tests are rated as acceptable.

R.5 The meter has type 1 and type 2 marginal conformities. The meter is rated as a marginally conforming unit and is counted as a type 1 meter. The meter cannot be sealed, since it has error values beyond the test limit criteria.

S.6 A meter with two test points, HL and LL, has errors of +0.50% and +0.70% respectively. All non-performance tests are rated as acceptable.

R.6 The meter is rated as a conforming unit.
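The resolutions above can be reproduced by a small model under the following inferred assumptions: the extended error is the absolute observed error plus the expanded uncertainty (0.20); a type 1 marginal conformity arises when an extended error exceeds the compressed limit (1.3360); a performance nonconformance arises when an extended error exceeds the specification limit (1.60); a type 2 (MADT) marginal conformity arises when the median absolute error exceeds the MADT limit (0.80); and a meter cannot be sealed when any observed error exceeds the test limit (1.00). These decision rules are inferred from the scenarios for illustration only and are not quoted from S-S-02 or S-S-04.

```python
from statistics import median

# Illustrative model of the C.1 assessments under the inferred
# assumptions stated above; all constants are taken from the scenario
# preamble in section C.1.

SPEC, COMPRESSED, MADT, TEST_LIMIT, U = 1.60, 1.3360, 0.80, 1.00, 0.20

def assess(errors):
    extended = [abs(e) + U for e in errors]
    return {
        "type1": any(x > COMPRESSED for x in extended),
        "type2": median(abs(e) for e in errors) > MADT,
        "nonconforming": any(x > SPEC for x in extended),
        "sealable": all(abs(e) <= TEST_LIMIT for e in errors),
    }

# Scenario S.1 (HL = +0.30%, LL = +1.70%): type 1, type 2 and a
# performance nonconformance; the meter cannot be sealed.
s1 = assess([0.30, 1.70])
```

Under the same model, scenario S.6 (+0.50% and +0.70%) evaluates as conforming and sealable.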

C.2 Scenarios and resolutions for electronic electricity meters

Given the following scenarios for electronic electricity meters, what is the meter conformity assessment and classification during sampling inspection?

The following scenarios assume a specification limit of ±1.00 (% error) and sampling under the LQ = 3.15% plan, with a compressed specification limit of ±0.8350 and an MADT (average) limit of 0.5000. Assume a small lot size (zero acceptance numbers) and uncertainty values of 0.10 (k = 1.6449) and 0.18 (k = 3.0000).

Note: No sample meter or 100% inspected meter may be placed into service unless it meets all specified requirements.

S.1 A meter with three test points, HL, LL and PF, has errors of +0.40%, +0.50% and +0.95% respectively. All non-performance tests are rated as acceptable.

R.1 The meter has type 1 and type 2 marginal conformities and a non-conformance. The meter is rated as a performance nonconforming unit and counted as a dp. The meter cannot be sealed.

S.2 A meter with three test points, HL, LL and PF, has errors of +0.40%, +0.50% and +0.95% respectively. The non-performance tests reveal incorrect nameplate markings.

R.2 The meter has type 1 and type 2 marginal conformities and two non-conformances. The meter is rated as a performance nonconforming unit and counted as a dp. The meter cannot be sealed.

S.3 A meter with three test points, HL, LL and PF, has errors of +0.40%, +0.60% and +0.60% respectively. The non-performance tests reveal incorrect nameplate markings.

R.3 The meter has a type 2 marginal conformity and a non-conformance. The meter is rated as a non-performance nonconforming unit and counted as a "dnp". The meter cannot be sealed.

S.4 A meter with three test points, HL, LL and PF, has errors of +0.50%, +0.60% and +0.75% respectively. All non-performance tests are rated as acceptable.

R.4 The meter has type 1 and type 2 marginal conformities. The meter is rated as a marginally conforming unit and counted as a type 1 meter. The meter cannot be sealed, since it has error values beyond the test limit criteria.

S.5 A meter with three test points, HL, LL and PF, has errors of 0.00%, +0.20% and +0.10% respectively. All non-performance tests are rated as acceptable.

R.5 The meter is rated as a conforming unit.

Appendix D—Screening inspection questions and answers based on S-S-04 (rev. 2), S-E-02 and S-G-02

D.1 Questions and answers

Q.1 Electricity meters are processed as a series of lots under Annex B of S-S-04 (rev. 2). A lot of 100 electricity meters undergoes screening inspection. What is the maximum number of marginally conforming units permitted in the screened lot so that the product of (LQ × N) is not exceeded?

Assume ±1.00% specification limit and the console displays to 2 decimal places.

A.1a Scenario A

  • LQ = 3.15%
  • expanded uncertainty is 0.10 (k = 1.6449)
  • (0.0315 × 100) = 3.15, so at most 3 meters are permitted with an observation falling in the range of ±(0.83 − 0.10) to ±(1.00 − 0.10)

That is, at least 97 meter observations fall within ±0.73% and the balance fall within ±0.90%.

A.1b Scenario B

  • LQ = 8.0%
  • expanded uncertainty of 0.10 (k = 1.6449)
  • (0.08 × 100) = 8 meters are permitted with an observation falling in the range of ±(0.67 − 0.10) to ±(1.00 − 0.10)

That is, at least 92 meter observations fall within ±0.57% and the balance fall within ±0.90%.

Under both scenarios, at most (0.20 × 100) = 20 meters are permitted with an MADT observation exceeding 0.50%. Refer to S-E-02 (rev. 4) to determine if the sample meters can be sealed.
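The scenario arithmetic above can be sketched as follows; the helper name and the floor-based rounding to a whole meter are illustrative assumptions, not taken from S-S-04.

```python
import math

# Illustrative sketch of the screening arithmetic: the bound on marginally
# conforming observations is LQ x N (rounded down to a whole meter), and
# the observation window is the compressed limit and the specification
# limit, each reduced by the expanded uncertainty.

def screening_limits(lq, lot_size, compressed, spec, uncertainty):
    allowed = math.floor(lq * lot_size)
    window = (compressed - uncertainty, spec - uncertainty)
    return allowed, window

# Scenario A: LQ = 3.15%, N = 100, compressed limit 0.83, spec 1.00, U = 0.10:
allowed, (low, high) = screening_limits(0.0315, 100, 0.83, 1.00, 0.10)
# allowed = 3; observations between ±0.73% and ±0.90% count against it.
```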

Q.2 Gas diaphragm meters are processed as a series of lots under Annex B of S-S-04 (rev. 2). A lot of 100 meters undergoes screening inspection. What is the maximum number of marginally conforming units permitted in the screened lot so that the product of (LQ × N) is not exceeded?

Assume ±1.60% specification limit and the console displays to 2 decimal places.

A.2a Scenario A

  • LQ = 3.15%
  • expanded uncertainty of 0.30 (k = 1.6449)
  • (0.0315 × 100) = 3.15, so at most 3 meters are permitted with an observation falling in the range of ±(1.33 − 0.30) to ±(1.60 − 0.30)

That is, at least 97 meter observations fall within ±1.03% and the balance fall within ±1.30%. Only (0.20 × 100) = 20 meters are permitted with an MADT observation exceeding 0.80%.

A.2b Scenario B

  • LQ = 8.0%
  • expanded uncertainty of 0.30 (k = 1.6449)
  • (0.08 × 100) = 8 meters are permitted with an observation falling in the range of ±(1.08 − 0.30) to ±(1.60 − 0.30)

That is, at least 92 meter observations fall within ±0.78% and the balance fall within ±1.30%. Only (0.20 × 100) = 20 meters are permitted with an MADT observation exceeding 0.70%.

Under both scenarios, refer to S-G-02 to determine if sample meters can be sealed.

Appendix E—Outgoing quality example calculation

E.1 In the case of electronic electricity meters inspected under S-E-02 by 100% inspection, there are situations in which the test limit requirements are insufficient to ensure the outgoing quality requirements of Annex A of S-E-02 have been met. This occurs, for example, when the test limit equals the verification specification limits.

Example: 100 meters are presented for 100% inspection under LQ = 3.15% plan.

Assume expanded uncertainty of 0.10 (k = 1.6449) and 0.18 (k = 3.0000).

Consequently, at most 3 meters (0.0315 × 100 = 3.15, rounded down) are permitted with an observation falling in the range of ±(0.83 − 0.10) to ±(1.00 − 0.18).

At least 97 meter observations fall within ±0.73% and the balance fall within ±0.82%. All meters have MADT observations within ±0.50%.

Appendix F—Sample meter exclusions questions and answers

Q1: How would the following meters be treated if found in a sample?

  (a) Seal missing or broken
  (b) Stopped meter
  (c) Dirt/object in meter
  (d) Dropped meter
  (e) Cracked index cover (results good)
  (f) Cracked index cover (results bad)
  (g) Display not working (results good)
  (h) Display not working (results bad)

A1:

  (a) Exclude the meter if there is evidence of tampering; include it if there is no evidence of tampering.

For (b) to (h), exclude the meter from the sample.

Examples (a) to (h) represent meters not kept in good repair by a meter owner and that are generally to be excluded from the sample. Note that excluding too many meters as a result of an owner not fulfilling their "good repair" obligations can result in the minimum sample size criterion (nmin) not being met. Not meeting nmin requires corrective action by the meter owner (e.g. lot reformation, or lot removal and 100% re-inspection or disposal). Test results are not to be used as a basis for meter inclusion or exclusion; exclusion decisions are to be made prior to testing.

Q2: Where nmin is 125 and nmax is 156, do we need to test the first 125 meters on the list? How are excluded meters replaced from the sampling list?

A2: Meters are to be used in the order in which they appear on the unsorted sample list. Meter 126 cannot be included unless one of the first 125 meters has been excluded. Meter 127 is included only when there has been a second exclusion.

If any meters from 126 through 156 from the sample list arrive in the meter shop for testing ahead of meters 1 through 125, they are not to be included in the sample unless their inclusion has been warranted by an exclusion of one of the first 125 meters from the unsorted sample listing.
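This replacement rule can be sketched as follows, assuming nmin = 125 and nmax = 156 as in the question; the function name and the None return for an unmet nmin are illustrative assumptions.

```python
# Illustrative sketch of the replacement rule in A2: meters are taken in
# unsorted-list order, and positions beyond nmin are used only to replace
# exclusions among the earlier meters, up to nmax.

def select_sample(unsorted_list, excluded, n_min=125, n_max=156):
    """unsorted_list: meter IDs in randomized order. excluded: IDs excluded
    before testing. Returns the n_min meters to test, or None when the
    eligible meters within n_max cannot reach n_min (corrective action)."""
    sample = []
    for meter in unsorted_list[:n_max]:
        if meter in excluded:
            continue                  # pull the next listed meter into scope
        sample.append(meter)
        if len(sample) == n_min:
            return sample
    return None

# Two exclusions among the first 125 pull meters 126 and 127 into the sample.
meters = list(range(1, 157))
sample = select_sample(meters, excluded={10, 47})
```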

Q3: A meter tests −0.60% at high load, and stops at light load. Can the meter be excluded as a stopped meter if it was clearly not stopped in service or deemed to be defective?

A3: Yes, the sample meter is to be excluded and replaced with an alternate sample meter and classed as a potential defect until a root cause analysis investigation of the problem is completed.
