Defect Rate Estimation and Cost Minimization using Imperfect Acceptance Sampling with Rectification

Approved by Dissertation Committee:

THIS IS AN ORIGINAL MANUSCRIPT. IT MAY NOT BE COPIED WITHOUT THE AUTHOR'S PERMISSION.

Copyright by Neerja Wadhwa, 1997

Defect Rate Estimation and Cost Minimization using Imperfect Acceptance Sampling with Rectification

by Neerja Wadhwa, M.S., M.Phil., PGDIP.

Dissertation Presented to the Faculty of the Graduate School of the University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

The University of Texas at Austin, May 1997

Acknowledgments

I acknowledge the advice and encouragement received from Betsy Greenberg, Peter John, Subal Kumbhakar, and Douglas Morris in the writing of this dissertation. Special thanks to the chairperson of my dissertation committee, Professor Edward George, whose unfailing support made this dissertation possible. Finally, I wish to thank my husband, Pavan, who unflinchingly went through the five long years, and my daughter Priya, who let me work right after she was born.

Defect Rate Estimation and Cost Minimization using Imperfect Acceptance Sampling with Rectification

Publication No.

Neerja Wadhwa, Ph.D.
The University of Texas at Austin, 1997
Supervisor: Edward Ian George

An important aspect of any quality control program is estimation of the quality of outgoing products. This dissertation applies Acceptance Sampling with rectification to the problem of quality assurance when the inspection procedure is imperfect. The objective is to develop effective rectification sampling plans and estimators based on these plans without making the assumption of a perfect inspection procedure. We develop estimators, under two different sampling plans, for the number of undetected defects remaining after a set of lots has been passed. We compare, by extensive simulation, the proposed estimators with existing ones in terms of Root Mean Squared Error (RMSE). One of our estimators, an empirical Bayes estimator, is seen to consistently obtain substantially lower RMSE overall. We also construct expected cost functions for sampling plans based on fixed sample sizes. We then show how intermediate empirical Bayes estimates of population characteristics can be used to obtain adaptive acceptance sampling plans which vary the sample sizes in order to reduce expected cost. We also compare the two sampling plans on the basis of RMSE and expected cost functions. We show that RMSE comparisons across different levels of machine imperfection can be misleading and propose a measure which accounts for MSE and expected cost simultaneously.

Table of Contents
Chapter 1: Introduction
Chapter 2: Sampling Plan A
Chapter 3: Expected Cost Function
Chapter 4: Sampling Plan B
Chapter 5: Conclusions and Future Research
Graphs
Appendices
References
Author's Vita

Chapter 1: Introduction.

An increasing number of manufacturers are pursuing high quality standards these days. Manufacturers such as Texas Instruments, Motorola, and General Electric are striving to achieve Six Sigma quality, which corresponds to a target of no more than 3.4 defects per million products. Given such stringent quality standards, it has become increasingly important to measure quality reliably, consistently, and accurately. Suppliers, for example, are frequently required to demonstrate through sampling inspection that their products meet specified quality standards.
As part of industrial quality control, inspection is seen as a careful search for errors. Some of its important functions are (see Salvendy, 1982):

1. Preventing defective goods from proceeding further in the processing cycle or from being sold to a customer.
2. Collecting data on specific characteristics of goods or materials for use in decisions regarding overall quality. The purpose is to identify imperfections and decide whether they are severe enough to be considered defective.
3. Collecting data on some specific characteristics of goods or materials to give feedback to the manufacturing process. The identification of defectives can lead to trend analysis to prevent further occurrence of defectives.

Salvendy (1982) notes that these functions have gained significant importance over the past few years. Manufacturers are now held accountable more often for manufacturing defects. There has been a substantial increase in the costs of product liability litigation and in the corresponding insurance premiums. Large capital investment in complex production processes is putting high premiums on rapid and accurate inspection feedback to keep the process in control. Process inspection, therefore, plays a very crucial role in total quality control in manufacturing.

In quality assurance, a lot of items is either accepted outright as satisfactory, or inspection is done on every item in the lot. Alternatively, one may use an acceptance sampling plan. An acceptance sampling plan is one which indicates conditions for acceptance or rejection of the lot being inspected. Acceptance sampling prescribes a procedure that, if applied to a series of lots, will give a specified risk of accepting lots of a given quality. Quality assurance makes use of such plans as one of the techniques used to achieve the desired end. In this thesis, we focus on a particular type of sampling inspection, Acceptance Sampling with Rectification (ASWR).

In an acceptance sampling plan, a random sample is inspected from each lot. The lot is accepted if fewer than a certain number, k, of defectives are found in the sample; otherwise it is rejected. Often, an acceptance sampling plan carries a provision for further inspection of lots rejected by the plan. Rectification calls for retention of rejected lots and their submission for further inspection. Such programs generally intend to correct or eliminate a sufficient number of defectives to attain a specified level of quality. Rectifying plans were among the earliest of the proposed sampling plans. Such plans were developed by Harold F. Dodge and Harry G. Romig at the Bell Telephone Laboratories and used at Western Electric before World War II.

Rectifying inspection plans are used to improve product quality. These plans are used in situations where the manufacturer wishes to know the average level of quality that is likely to result at a given stage of the manufacturing operations. They can be used at receiving inspection, at in-process inspection of semi-finished goods, or at final inspection of finished goods. Commonly, the rejected lot undergoes a 100% screening operation, and the defectives thus found are either discarded or replaced. The process may be represented as follows:

[Flow diagram: incoming lots are sampled; accepted lots are shipped; rejected lots are 100% screened and rectified before being shipped.]
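To make the rectification flow concrete, the following is a minimal Python simulation sketch of the plan just described, under perfect inspection. The function name, the lot size, defect rate, sample size, and acceptance number are illustrative assumptions of this sketch, not values prescribed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def rectify_lot(lot, m, k):
    """Acceptance sampling with rectification for one lot.

    lot : boolean array, True marks a defective unit.
    m   : number of units sampled from the lot.
    k   : acceptance number; accept if fewer than k defectives are found.
    Returns the lot after rectification and the number of units inspected.
    """
    sample_idx = rng.choice(len(lot), size=m, replace=False)
    defects_in_sample = int(lot[sample_idx].sum())
    if defects_in_sample < k:
        # Accept: only the defectives found in the sample (if any) are removed.
        keep = np.ones(len(lot), dtype=bool)
        keep[sample_idx[lot[sample_idx]]] = False
        return lot[keep], m
    # Reject: 100% screening; every detected defective is removed.
    return lot[~lot], len(lot)

# Illustrative run: 300 lots of 5000 units, 2% defective, zero-defect plan (k = 1).
n, m, k, T = 5000, 125, 1, 300
shipped_defects = inspected = 0
for _ in range(T):
    lot = rng.random(n) < 0.02
    out, cost = rectify_lot(lot, m, k)
    shipped_defects += int(out.sum())
    inspected += cost
print(shipped_defects, "defectives shipped;", inspected, "units inspected")
```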
Aside from inspection, another fundamental element of quality assurance is measurement or estimation of the quality of outgoing products. The ability to make sound decisions during process development or process control depends largely on the availability of adequate estimation processes, selection of a correct estimation process for a given job, and correct application of this process. Experienced quality engineers are of the view that a high percentage of industry's 'quality problems' can be solved by identifying and correcting the real problem: inaccurate data produced by an inadequate estimation process. They point out that good estimation is not achieved spontaneously, but is the result of carefully planned estimation processes supported by a knowledgeable management. Some indicators of estimation weakness are: (a) frequent occurrences of quality problems that are attributed to unknown origins, (b) pronounced dissatisfaction by operating personnel with their lack of control of production processes, and (c) vague or negative responses given to questions regarding estimation capability.

Extensive research has been conducted in the area of estimating the proportion of defectives in outgoing lots. For example, Hahn (1986) investigates two naive estimators and proposes one empirical Bayes approach to estimating the percentage of defectives in accepted lots with zero-defect sampling. He considers both rectification and non-rectification schemes. He proposes an estimator that does not depend on knowing the distribution of the incoming lot quality, and can be applied with equal or unequal lot sizes. As noted by Hahn, a shortcoming of his proposed estimator is that it is based only on those lots which have either zero or one defective detected in the sample. Lots with more than one detected defective in their samples are ignored altogether. This leads to less than full utilization of the information.

Zaslavsky (1988) formally demonstrates Hahn's results and extends them in several ways. He discusses an empirical Bayes estimator under the assumption that the number of defectives in the sample, given the lot defective rate, follows a Poisson probability distribution. A gamma prior is considered for the lot defective rate. Confidence intervals are calculated for the estimator, and a criterion is developed to determine the optimal sample size.

Brush, Hoadley, and Saperstein (1990) use a hierarchical Bayes model to estimate the proportion of defectives in accepted lots, based on both accepted and rejected lots. The number of defectives S_i, i = 1, 2, ..., T, in the sample follows a Poisson distribution with mean λ_i. They define θ_i = λ_i / s_i as the true quality parameter on the index scale. θ_i is assumed to follow a gamma distribution with mean θ and process variance γ². Finally, (θ, γ²) is assigned a prior distribution based on historical information. The estimator also works when sample sizes and acceptance numbers vary from lot to lot. In the case of fixed sample size, the estimator of outgoing quality is a weighted average of two naive estimators: the sample quantities observed in the incoming and outgoing lots respectively.

Martz and Zimmer (1990, henceforth M&Z) present an estimator for the percentage of defectives in lots under zero-defect sampling (the sample is deemed acceptable when the number of defectives k in the initial sample is zero). Their estimation is based only on accepted lots; rectification is not considered. An empirical Bayes technique is used, and the number of defectives s in the sample follows a binomial distribution. A beta mixture is used as a general prior distribution for the defective rate p.
The estimator for the i-th lot is E(p_i | s_i = 0). They present point as well as probability interval estimates and use simulation to examine the Bayes risk of the estimator. They compare the estimator with those proposed by Hahn (1986) and Zaslavsky (1988).

Greenberg and Stokes (1992) (henceforth, G&S) provide estimators of the number of defectives in a set of T outgoing lots under zero-defect sampling with rectification. They compare the mean squared error (MSE) performance of their estimators with the aforementioned estimators when the lot defective rate is a mixture of a beta distribution and a point mass at 0. The other models they consider are the ones used by M&Z (1990) or Brush et al. (1990).

To date, almost all research in ASWR has been conducted under the restrictive assumption that the inspection procedure is perfect. This is often an unrealistic assumption. Inspection error is a fact of life, and assuming a perfect inspection procedure is attempting to force a sampling plan to perform an impossible task. It is now possible to design plans in which the inspection is assumed to have a known amount of error, as opposed to a hypothetically perfect inspection procedure. Indeed, it has been seen that even 100% inspection is much less than 100% effective in screening out defectives. Reasons for inspection errors have been discussed by Juran (1962). Some of them include intermediate errors due to bias, rounding off, and overzealousness, and involuntary errors due to blunder, fatigue, and other forms of human or machine imperfection. It should be pointed out that errors can go either way: an overzealous inspector can easily flinch by calling a non-defective unit defective. On the other hand, researchers who have considered inspection errors have not considered rectification sampling plans. Lindsay (1985) proposed methods to estimate the probability of a unit being declared defective when in fact it is non-defective, and vice versa. He also describes methods to determine the rates and numbers of defectives when the sample is screened repeatedly. Johnson and Kotz (1991) give several tables for average outgoing quality when the probabilities of a non-defective being declared defective, and of a defective being declared non-defective, are known.

In this dissertation we consider two rectification sampling plans when the inspection procedure is imperfect. We estimate the defective rates in the lots after zero-defect sampling, when the inspection environment is not 100% accurate. We then develop estimators for the number of undetected defectives remaining in a set of accepted lots. Recently, in a working paper, G&S (1996) have proposed an adjustment to their estimator taking into account imperfections in the inspection procedure. We compare the performance of our estimators with those proposed by G&S (1992, 1996).

Another fundamental activity associated with quality assurance is cost estimation. From a manufacturer's point of view, the purpose of a sampling plan is to reduce the cost of shipping defectives. Indeed, a decision not to sample will always incur the cost of shipping all the produced defectives. An acceptance sampling plan will reduce this cost if the benefit of removing at least some of these defectives outweighs the cost of sampling and of incorrectly discarding non-defectives.
Although any acceptance sampling plan cannot be guaranteed to reduce costs on a single application, it will be effective over repeated implementations if the expected cost of using it is less than the expected cost of not sampling. The most beneficial sampling plan will be the one which offers the lowest expected cost. In this thesis, we compute the expected cost functions for the rectification sampling plans considered, without making the assumption that the inspection procedure is perfect. We propose a sequential adaptive sampling plan which minimizes the expected cost. We also compare the two sampling plans on the basis of expected cost. We propose a generalized cost function, on the basis of which we find the optimal values of the sample size and the number of times the sample is inspected.

The dissertation is organized as follows. Chapter 2 discusses a rectification acceptance sampling plan in which the sample is screened once, henceforth referred to as Sampling Plan A. Specifically, we discuss this sampling plan and the model proposed by G&S (1992) in Section 2.1; the G&S (1992, 1996) estimators and our modification to the G&S (1996) estimator in Section 2.2; our proposed model and an empirical Bayes estimator in Section 2.3; the comparison of the proposed estimator based on known population parameters with the one based on estimated parameters in Section 2.4; the RMSE comparison of the proposed estimators with the existing ones in Section 2.5; changes in the bias and RMSE properties due to a change in one of the misclassification error probabilities in Section 2.6; and an example showing the computation of the different estimators in Section 2.7. Chapter 3 derives the expected cost function for Sampling Plan A in Section 3.1. An adaptive sampling plan is proposed to minimize the expected cost by optimizing the sample size in Section 3.2. The expected cost function is then generalized in Section 3.3. A measure combining MSE with expected cost is proposed in Section 3.4. Chapter 4 discusses a rectification acceptance sampling plan in which the sample is screened three times, henceforth referred to as Sampling Plan B. Section 4.1 discusses this plan as well as the modifications made to the estimators discussed in Chapter 2. Section 4.2 presents the cost function for this sampling plan. In Section 4.3, we compare Plans A and B on the basis of RMSE and expected cost functions. Chapter 5 consists of concluding comments and suggestions for further research. We derive the theoretical expressions for the bias and MSE in the Appendix.

Chapter 2: Sampling Plan A.

2.1. Model.

When a process is in control, the defective rate will be essentially constant across lots. In this case, acceptance sampling has little value since the number of defectives reflects only sampling variability and gives no further information on the remainder of the lot. Thus, for this case, the role of acceptance sampling is limited to confirming that the process is in control. However, if the process is out of control, the defective rate will vary across lots and will have some statistical distribution over time. The model/distribution we consider is a modification of the one proposed by G&S (1992). In this section, we describe the G&S (1992) model.

Consider a set of T lots, each of size n units. A random sample of m units is selected from each lot and inspected. If no defectives are found in this sample, the lot is accepted. If at least one defective is found, the entire lot undergoes inspection.
The defectives detected are discarded and the lot is accepted. The notation in this chapter follows G&S (1992) and is summarized in Appendix 3. Let

D_i1: the number of defectives in sample i.
D_i2: additional defectives among the un-sampled units in lot i (D_i = D_i1 + D_i2 is the total number of defectives in lot i).
Y_i1: number of defectives detected in sample i.
Y_i2: additional defectives detected in the remaining (n − m) units in lot i.
Y_i: number of defectives detected in lot i (Y_i = Y_i1 + Y_i2). Under error-free inspection, Y_i = D_i if D_i1 > 0, and Y_i = 0 otherwise.
U_i: number of undetected defectives in lot i (U_i = D_i − Y_i).

The objective is to estimate the number of defectives, U, in the T outgoing lots, where U = Σ_{i=1}^T U_i. In the next section we discuss the G&S (1992, 1996) estimators, and modify the G&S (1996) estimator.

2.2. Estimators.

In this section we first discuss the G&S (1992) estimator for the number of undetected defectives remaining in the outgoing lots. G&S (1996) have proposed an adjustment to their (1992) estimator in order to take into account imperfections of the screening procedure. We discuss the G&S (1996) estimator for the imperfect machine, and then propose a modification to it.

The estimator proposed by G&S (1992) is a non-parametric estimator which allows for general variability of the defective rates across lots. Their estimator of the number of defectives in the T outgoing lots, U = Σ_{i=1}^T U_i, for the case of error-free inspection where Y_i1 = D_i1 and Y_i2 = D_i2, is defined as

    Û_GS,1 = Σ_{Y_i1>0} Y_i / P_i − Σ_{i=1}^T Y_i,   (1)

where

    P_i = 1 − C(n − Y_i, m) / C(n, m) if Y_i1 > 0, and P_i = 0 otherwise.   (2)

P_i is the probability that Y_i1 > 0, and Σ_{i=1}^T Y_i is subtracted in (1) because identified defectives are rectified. P_i is known for those lots which enter the sum in (1), and thus can be computed from the sample data. One shortcoming of the estimator is the possibility of division by 0 in the first term. G&S (1992) observe that this estimator is like a Horvitz-Thompson estimator (Cochran 1977, sec. 9A.7) if lots are considered to be the sampling units. The difference between the two is that in Û_GS,1 the sample size, i.e., the number of lots rectified, is random.

When the inspection process is not error free, two kinds of errors may occur: a defective unit is declared non-defective, or a non-defective unit is declared defective. Let

    p  = Pr[unit declared defective | unit is defective],
    p′ = Pr[unit declared defective | unit is not defective].

We will assume that p and p′ are known from previous calibration. The probabilities may be conveniently represented by the following table:

                           DECLARED
                    Non-defective   Defective
TRUE Non-defective      1 − p′         p′
TRUE Defective          1 − p          p

An initial adjustment proposed by G&S (1996) to the estimate of U = Σ_{i=1}^T U_i in the case of an imperfect inspection procedure is

    Û_GS,2 = Σ_{Y_i1>0} [(Y_i − np′)/(p − p′)] (1/P_i − p),   (3)

where P_i is defined in (2).

[Footnote 1: Recently, G&S (1996) have further updated this estimator to Û_GS,2 = Σ_{Y_i1>0} Y_i / [(p − p′) P_i] − Σ_{i=1}^T np′/(p − p′). This estimator is an unbiased estimate of U. The original Û_GS,2, however, seems to have better RMSE properties for most levels of machine imperfection.]

The intuition they provide for this estimator is as follows. Since the inspection procedure is imperfect, Y_i may not equal D_i. However, it is easy to see that E[Y_i] = D_i p + (n − D_i)p′. Therefore, by the method of moments, D_i may be estimated as (Y_i − np′)/(p − p′) when a non-zero Y_i is observed. Thus, an estimate of the total number of defectives in T lots is Σ_{Y_i1>0} (Y_i − np′) / [P_i (p − p′)].
The number of defectives removed from a rectified lot i is D_i p, and therefore the total number removed may be estimated as Σ_{Y_i1>0} [(Y_i − np′)/(p − p′)] p.

We now propose an estimator for U = Σ_{i=1}^T U_i. This estimator is a modification of the one proposed by G&S (1996) in (3). In our estimator, the probability P_i, i.e., the probability of detecting at least one defective in the sample, has been improved to take into account the imperfections of the inspection procedure. The estimator is

    Û_new,1 = Σ_{Y_i1>0} [(Y_i − np′)/(p − p′)] (1/P_i′ − p),   (4)

where

    P_i′ = P[at least one declared defective in the sample of lot i]
         = 1 − P[no declared defectives in the sample of lot i]
         = 1 − P[all defectives in the sample pass] · P[all non-defectives in the sample pass]
         ≈ 1 − (1 − p)^{[(Y_i − np′)/(p − p′)](m/n)} (1 − p′)^{m − [(Y_i − np′)/(p − p′)](m/n)}.   (5)

In the next section, we propose an empirical Bayes estimator for the number of undetected defectives remaining in the outgoing lots.

2.3. An Empirical Bayes Estimator.

An empirical Bayes estimator is obtained by considering a modification of the model proposed by G&S (1992). A set of T lots, each of size n units, is considered. A random sample of m units is selected from each lot and inspected. If no defectives are found in this sample, the lot is accepted. If at least one defective is found, the entire lot undergoes inspection. The defectives detected are discarded and the lot is accepted. We assume that each unit in lot i is independently defective with probability ω_i, and ω_i varies from lot to lot. The model is then specified as:

    ω_i ~ Beta(a, b) with probability π, and ω_i = 0 with probability 1 − π;
    D_i1 | ω_i ~ Binomial(m, ω_i);
    D_i2 | ω_i ~ Binomial(n − m, ω_i).

The above model is a special case of the one proposed by M&Z, which assumes a mixture of betas for the distribution of ω. It is also considered by G&S (1992), and is an appropriate model for the semiconductor industry, since most lots have zero defectives and only a few have a random number of defectives. To account for inspection error, we overlay the above model with the following:

    Y_i1 ~ Binomial(D_i1, p) + Binomial(m − D_i1, p′);
    Y_i2 | Y_i1 > 0 ~ Binomial(D_i2, p) + Binomial(n − m − D_i2, p′).

Y_i1 represents the number of detected defectives in the sample of lot i. Our model is summarized by the following diagram:

[Diagram: the proportion defective ω_i in lot i is Beta(a, b) with probability π and 0 with probability 1 − π; given ω_i, the defectives D_i1 and D_i2 arise in the sample and in the remainder of the lot, and the declared defectives Y_i1 and Y_i2 follow from the misclassification probabilities p and p′.]

Next, we propose an empirical Bayes estimator for the number of undetected defectives remaining in the outgoing lots. In order to estimate the number of undetected defectives in outgoing lots, researchers have historically based their information only on those lots where at least one defective is found. Implicitly, the number of undetected defectives in accepted lots is inferred from the number of undetected defectives in rejected lots. The estimators discussed previously are based on this rationale. It is, however, debatable how much information about accepted lots one can directly derive from the rejected lots. Assuming reasonable parameter values, P(ω_i = 0 | Y_i1 = 0) is substantially greater than P(ω_i > 0 | Y_i1 = 0) (see Appendix A1.2). Thus, if no units are declared defective in a lot, the probability that ω_i = 0 for that lot is very high. In other words, most of the lots where no defectives are found come from the path with probability 1 − π. Conversely, if at least one unit is declared defective, the probability that ω_i > 0 for that lot is very high. These lots are, thus, more likely to come from the π path.
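As a quick numerical check of this claim, the sketch below evaluates P(ω_i > 0 | Y_i1 = 0) under the model above by one-dimensional numerical integration. The particular values of a, b, π, p, p′, and m are illustrative assumptions (a and b chosen so that a/(a + b) = 0.1), not the values used in Appendix A1.2.

```python
import numpy as np
from scipy import integrate, stats

# Illustrative parameter values (assumptions for this sketch).
a, b, pi = 2.0, 18.0, 0.1       # Beta(a, b) mixed with a point mass at 0
p, pp = 0.999, 0.001            # p = P(declared def. | def.), pp = p' = P(declared def. | non-def.)
m = 125                         # sample size

# P(Y_i1 = 0 | omega): each sampled unit is independently declared defective
# with probability omega*p + (1 - omega)*p'.
def p_zero_given_omega(w):
    return (1.0 - (w * p + (1.0 - w) * pp)) ** m

# Marginal P(Y_i1 = 0) under each branch of the mixture.
A, _ = integrate.quad(lambda w: p_zero_given_omega(w) * stats.beta(a, b).pdf(w), 0, 1)
p_zero_omega0 = (1.0 - pp) ** m

post_omega_positive = pi * A / (pi * A + (1 - pi) * p_zero_omega0)
print("P(omega > 0 | Y_i1 = 0) =", round(post_omega_positive, 4))
print("P(omega = 0 | Y_i1 = 0) =", round(1 - post_omega_positive, 4))
```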
The rejected lots should thus be treated separately from the accepted lots, and one should not directly extrapolate information about one from the other. Based on this reasoning, we can derive an empirical Bayes estimator which does not make direct inference about accepted lots from rejected lots. The only extrapolation done from rejected lots is in the estimation of the population parameters. The empirical Bayes estimator is thus based on the following expression:

    Û_new,2,ab known = Σ_{Y_i1>0} [(Y_i − np′)/(p − p′)] (1 − p) + Σ_{Y_i1=0} E(D_i | Y_i1 = 0),

where E(D_i | Y_i1 = 0) is a constant whose calculation is discussed in Appendix A1.1. In Û_new,2,ab known we seek to estimate separately the number of undetected defectives in accepted versus rejected lots. For accepted lots, we take the expectation of the number of defectives, given the information that no defectives were observed, as an estimate of U.

The reader should note that Û_new,2,ab known is based on the unknown population parameters a, b, and π, and thus is not an estimator. Therefore, in the expression for Û_new,2,ab known we substitute the estimators â, b̂ and π̂ for the parameters a, b, and π respectively to obtain the empirical Bayes estimator:

    Û_new,2 = Σ_{Y_i1>0} [(Y_i − np′)/(p − p′)] (1 − p) + Σ_{Y_i1=0} Ê(D_i | Y_i1 = 0),   (6)

where Ê(D_i | Y_i1 = 0) denotes E(D_i | Y_i1 = 0) evaluated at (â, b̂, π̂).

We obtain estimators of a and b using the method of moments approach. As discussed in Appendix A1.2, if zero defectives are observed in the initial sample, the probability of a lot being defective is very small. Further, using the rationale of hypothesis testing, we consider the null hypothesis to be that of no defectives present in a lot, versus the alternative that there exists at least one defective in the lot. Under the null hypothesis, Y_i is distributed as Binomial(n, p′). Thus, for a level of significance of 2.5%, the rejection criterion is to reject a lot if

    Y_i1 > 0 and Y_i > np′ + 2√(np′(1 − p′)).

[Footnote 2: Clearly, the higher the value of µ = a/(a + b), the greater the power of the test. Consider a simple example: let p = 0.99, p′ = 0.01, n = 5000, and m = 125. Even for the smallest value of µ considered, that is µ = 0.01, Pr[Type I Error] = 2.5% and Pr[Type II Error] ≈ 0.]

For estimating a and b, we thus consider lots where the above two conditions are satisfied. Using the method of moments, the mean may be equated to a and b as follows. The expectation E(ω_i) is estimated by the mean of the ω̂_i, namely

    ω̄ = (1/n*) Σ_{i=1}^{n*} ω̂_i,   where ω̂_i = (Y_i − np′) / [(p − p′) n]

and n* = the number of lots satisfying the condition. Equating the population mean with this estimate, we obtain

    â/(â + b̂) = ω̄.   (1a)

Similarly, we use the method of moments to equate the variance to a and b as follows:

    Var(ω_i) = ab / [(a + b)²(a + b + 1)].

The variance Var(ω_i) is estimated by the variance of the ω̂_i, namely (1/n*) Σ_{i=1}^{n*} (ω̂_i − ω̄)². Equating the population variance with this estimate, we obtain

    â b̂ / [(â + b̂)²(â + b̂ + 1)] = (1/n*) Σ_{i=1}^{n*} (ω̂_i − ω̄)².   (1b)

By solving equations (1a) and (1b) we obtain the estimators of a and b:

    â = Mean [Mean(1 − Mean)/Variance − 1],
    b̂ = (1 − Mean) [Mean(1 − Mean)/Variance − 1],

where Mean = ω̄ and Variance = (1/n*) Σ_{i=1}^{n*} (ω̂_i − ω̄)².

The estimator of π is obtained by solving the following equation for π̂:

    π̂ = [n* + (T − n*) P(ω > 0 | Y_i1 = 0)] / T.   (1c)

Solving,

    π̂ = [−X ± √(X² − 4YZ)] / (2Y),   (1c′)

where

    X = n* ∫₀¹ (1 − ωp − (1 − ω)p′)^m ω^{a−1}(1 − ω)^{b−1} dω / Beta(a, b)
        + (T − n*) ∫₀¹ (1 − ωp − (1 − ω)p′)^m ω^{a−1}(1 − ω)^{b−1} dω / Beta(a, b)
        − (T + n*)(1 − p′)^m,

    Y = T(1 − p′)^m − T ∫₀¹ (1 − ωp − (1 − ω)p′)^m ω^{a−1}(1 − ω)^{b−1} dω / Beta(a, b),

    Z = (1 − p′)^m n*.

The intuition for the above estimate of π, the proportion of defective lots, is as follows. Using the above-mentioned rationale of hypothesis testing, the first term in the numerator of equation (1c) indicates the number of rejected lots, or in other words, the number of defective lots given that at least one defective is observed in the sample. The second term in the numerator accounts for the expected number of defective lots given that no defectives are observed in the sample. This term is included because even when zero defectives are observed in a lot, there exists a possibility of the lot being defective. The quadratic equation (1c′) yields two roots for π̂ (see Appendix A1.3). One of the roots lies between 0 and 1, while the other is always ≥ 1 for the values of the parameters considered. We therefore estimate π by the root lying between 0 and 1.
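The parameter estimation just described is straightforward to implement. The sketch below computes â and b̂ from equations (1a)–(1b) and then solves equation (1c) for π̂ numerically, by root finding, rather than through the closed-form roots in (1c′). The function name, the use of scipy, and the numerical-integration step are implementation choices of this sketch, not part of the original derivation.

```python
import numpy as np
from scipy import integrate, optimize, stats

def estimate_a_b_pi(y, n, m, T, p, pp):
    """Method-of-moments estimates of a, b and pi from the declared counts y
    (one entry per lot), following equations (1a)-(1c).
    p  = P(declared defective | defective), pp = p' = P(declared defective | non-defective)."""
    y = np.asarray(y, dtype=float)
    # Rejection rule: Y_i1 > 0 (equivalently Y_i > 0 under Plan A, since accepted
    # lots have Y_i = 0) together with Y_i > n*p' + 2*sqrt(n*p'*(1-p')).
    flagged = y > max(0.0, n * pp + 2 * np.sqrt(n * pp * (1 - pp)))
    n_star = int(flagged.sum())
    w = (y[flagged] - n * pp) / ((p - pp) * n)      # omega-hat for flagged lots
    mean, var = w.mean(), w.var()
    a_hat = mean * (mean * (1 - mean) / var - 1)    # equations (1a)-(1b)
    b_hat = (1 - mean) * (mean * (1 - mean) / var - 1)

    # P(omega > 0 | Y_i1 = 0) as a function of pi, then solve (1c) for pi-hat.
    A, _ = integrate.quad(
        lambda t: (1 - t * p - (1 - t) * pp) ** m * stats.beta(a_hat, b_hat).pdf(t), 0, 1)
    def eq_1c(pi):
        post = pi * A / (pi * A + (1 - pi) * (1 - pp) ** m)
        return (n_star + (T - n_star) * post) / T - pi
    pi_hat = optimize.brentq(eq_1c, 1e-9, 1 - 1e-9)
    return a_hat, b_hat, pi_hat
```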
In the next section we compare the RMSE of Û_new,2,ab known and Û_new,2 to see how the estimator performs when the population parameters a, b, and π are estimated.

2.4. Comparison of the Estimators Û_new,2,ab known and Û_new,2.

Expressions for the bias and mean squared error (henceforth MSE) are given in Appendix A2. Specifically, Appendix A2.1 derives the bias and MSE for Û_new,2,ab known. These expressions are computationally efficient and permit calculation of exact theoretical results. For Û_new,2, however, theoretical expressions for the bias and RMSE are not available; we therefore resort to Monte Carlo simulation to obtain its RMSE.

The performance of the two estimators is compared for the same values of n, m and T as used in G&S (1992). [Footnote 3: n = 5000, m = 125, and T = 300.] Graphs for Û_new,2 are based on 200 simulations, each consisting of 300 lots. In Graphs 1A–1D we take π = 0.1 and ρ = 0.3. ρ is the within-lot correlation of defectives, defined in G&S (1992). They define ρ as follows: let X_i (or X_j) = 1 if the i-th (or j-th) unit in a lot is defective and 0 otherwise; then ρ = corr(X_i, X_j). [Footnote 4: ρ = [(1 − π)a(a + b + 1) + b] / {[a(1 − π) + b](a + b + 1)}  [ii].] The RMSE of the estimators is plotted against the average proportion of defectives, µ = a/(a + b) [i]. µ varies from 0.01 to 0.3 in increments of 0.05. The values considered for the population parameters a and b in the simulations are calculated using the simultaneous equations [i] and [ii]. The graphs are based on different levels of machine imperfection (footnote 5). The maximum standard error of the RMSE across different levels of µ is stated for Û_new,2 (footnote 6). As is evident from the graphs, the RMSE performance of Û_new,2 compares favorably with that of Û_new,2,ab known. Thus, our estimator performs well even when all population parameters are estimated.

In Graphs 2A–2D we take µ = 0.1 and π = 0.1. The RMSE of the estimators is plotted against ρ, which takes on values from 0.1 to 0.9 in increments of 0.2. The behavior of the estimators is similar to that described above.

In Graphs 3A–3D we take µ = 0.1 and ρ = 0.3. The RMSE of the estimators is plotted against π, which takes on values 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3. The behavior of the estimators is similar to the one described for the above graphs.

[Footnote 5: Given the technological sophistication these days, it is unlikely that a machine would be less than 99% accurate. However, we also consider (p, p′) = (0.97, 0.03) for comparison purposes. Thus, the various levels of machine imperfection considered are (p, p′) ∈ {(1, 0), (0.9999, 0.0001), (0.999, 0.001), (0.99, 0.01), (0.97, 0.03)}.]
[Footnote 6: Note that the standard error of Û_new,2,ab known is not stated, since the plots are based on exact values of the RMSE obtained from the analytical expressions.]

2.5. Comparison of Estimators.

In this section we present simulation evidence that when the inspection procedure is imperfect, the proposed estimators have a lower MSE than Û_GS,1 and Û_GS,2. The difference in the MSE of the various estimators is usually very large, which makes it difficult to plot the MSE of the estimators on the same scale. Thus, the measure of comparison used is the Root Mean Squared Error (RMSE).

Appendix A2.2 derives the bias and MSE for the estimators Û_GS,1, Û_GS,2 and Û_new,1. These expressions are very general and can be applied to any estimator of the form Σ_{Y_i1>0}(·). However, a limitation of these expressions is their computational difficulty due to the large number of calculations involved. We therefore resort to simulation to obtain the RMSE of these estimators for large lot/sample sizes. [Footnote 7: Large lot/sample sizes refer to n = 5000 and m = 125. Small lot/sample sizes refer to n = 15 and m = 3.] The expressions for bias and MSE are useful when the lot/sample sizes are small.

The performance of the four estimators is compared for the same values of n, m and T as used in G&S (1992) (see footnote 3). Comparisons are done by way of several RMSE graphs. The graphs are based on 200 simulations, each consisting of 300 lots; one parameter is made to vary in each graph.

In Graphs 4A–4D the RMSE of the estimators is plotted against µ. When a semiconductor chip is introduced in the market, because of lack of expertise, yield (the proportion of chips that are observed to be non-defective) is comparatively low. However, as time passes and expertise in the manufacture improves, yield increases. Thus, it is important to make RMSE comparisons across various levels of the proportion of defectives. We take π = 0.1 and ρ = 0.3 for Graphs 4A–4D. The maximum standard error of the RMSE across different levels of µ is stated for these graphs. Û_GS,1 and Û_GS,2 are referred to as GS1 and GS2 in the graphs; Û_new,1 and Û_new,2 are new1 and new2 respectively. In Graph 4A, Û_GS,1 and Û_GS,2 are identical when the inspection procedure is perfect, and their performance lies between Û_new,1 and Û_new,2. However, when there is a slight imperfection, Û_GS,1 rises very fast. In fact, for some graphs we had to use a secondary axis to accommodate Û_GS,1. Note that this estimator seems to change shape for different levels of machine imperfection. Thus, the performance of Û_GS,1 seems to be rather poor when the inspection procedure is imperfect.

The RMSE performance of the second estimator, Û_GS,2, is substantially better than that of Û_GS,1. It almost always performs better than Û_GS,1, and its shape is consistent for different levels of p and p′. The standard error of this estimator is almost two orders of magnitude smaller than that of Û_GS,1.

Û_new,1 performs worse than Û_GS,1 and Û_GS,2 when the machine is perfect. The reason is that for the perfect machine, P_i′ reduces to 1, and thus Û_new,1 is always zero. This is a good estimate for a rejected lot, since the entire lot has been screened and there are no remaining defectives. For an accepted lot, however, this may not be a good estimate. The lack of defectives in the sample does not imply the absence of defectives in the remaining lot. Thus, zero is not a good estimate for the number of undetected defectives in accepted lots.
However, with even slight imperfection, the performance of Û_new,1 becomes much better. Since we are concerned with the cases when the inspection procedure is imperfect, the overall performance of Û_new,1 seems to dominate that of Û_GS,1 and Û_GS,2.

The fourth estimator, Û_new,2, performs well for all levels of p and p′. It has a lower RMSE than the other three estimators at all levels of machine imperfection, including the case when the inspection procedure is perfect. In fact, the maximum RMSE of Û_new,2 is less than the minimum RMSE of all the other estimators.

Semiconductors are manufactured in clean rooms. During the manufacturing process, a small speck of dust on the wafer can lead to defective chips being produced around it. If one chip is defective, the probability of a chip next to it being defective is very high. Thus, in Graphs 5A–5D we plot the RMSE of the estimators against ρ. We take π = 0.1 and µ = 0.1. ρ is made to vary from 0.1 to 0.9 in increments of 0.2. The behavior of the estimators is similar to that described for the case when µ varies. The only difference is that Û_GS,2 and Û_new,1 converge faster as the machine becomes more imperfect. The maximum standard error of the mean squared error, across different levels of ρ, is stated for each estimator based on the simulations.

In Graphs 6A–6D we take µ = 0.1 and ρ = 0.3. The RMSE of the estimators is plotted as π takes on values 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3. π can vary widely depending on whether the process is in control or not. The behavior of the estimators is similar to that described for the case when µ varies. As before, the shape of the curve for Û_GS,1 changes dramatically for different levels of p and p′. The RMSE of Û_GS,2 and Û_new,1 converges as the machine moves toward being highly imperfect. [Footnote 8: The inspection procedure is referred to as highly imperfect when (p, p′) = (0.97, 0.03).] The RMSE of Û_new,2 increases monotonically as π increases.

As is clear from the preceding graphs, the performance of Û_new,2 is better than that of the other estimators under all circumstances. We shall, therefore, restrict attention to this estimator. The reader should note that there is minimal difference in the RMSE performance of Û_new,2,ab known and Û_new,2 for the various levels of p and p′. Therefore, for simplicity, we base Graphs 7 and 8 on analytical values of Û_new,2,ab known instead of simulated values of Û_new,2.

In Graph 7, the performance of Û_new,2,ab known is studied for different levels of machine imperfection as µ varies. The other parameters are held constant at n = 5000, m = 125, ρ = 0.3 and π = 0.1. The values of µ range from 0.01 to 0.3. When the machine is near perfect, the RMSE is low for small and very large values of µ, but is relatively high for intermediate values. To see this, consider a simple example where the machine is perfect and µ = 0 (i.e., there are no defectives in the lots). The RMSE for the undetected defectives is zero. Similarly, when µ = 1 (i.e., either a lot has all defectives, or all non-defectives),
the RMSE is again zero. For all other values of µ, the RMSE would be non-zero because the number of undetected defectives would not always be zero. As the machine becomes more imperfect, the RMSE increases monotonically with µ. Looking at the cross section for a particular µ: for small and medium values of µ, the RMSE first decreases and later increases as the level of machine imperfection increases. Note that the level of machine imperfection where the RMSE starts increasing (approximately p = 0.95, p′ = 0.05) is not shown on the graph. Cases such as this, where m and p′ are such that essentially every lot ends up being 100% screened, are uninteresting and hence were not considered. The intuitive explanation for this is as follows. As the machine goes from perfect to highly imperfect, there are two counteracting effects. First, the number of lots that are completely screened increases, since false defectives are detected in the sample. This tends to decrease the RMSE, since more screening is being done. Second, due to machine imperfection the RMSE increases. When the machine goes from perfect to slightly imperfect, the first effect dominates, thereby reducing the RMSE. However, as the machine approaches severe imperfection, the probability of completely screening a lot approaches one. Therefore, the decrease in the RMSE due to the first effect approaches zero. The second effect dominates, thereby increasing the RMSE. Hence, the RMSE first decreases and later increases.

When the value of µ is high, we see that the RMSE increases monotonically as the machine goes from perfect to highly imperfect. The probability of a defective showing up in the sample is high, and therefore the probability of the lot being completely screened is high. Hence the first effect is minimal.

Graph 8 plots the RMSE against π, the proportion of lots having defectives, for different levels of machine imperfection. As can be seen, as the value of π goes up, the value of the RMSE increases monotonically across all levels of machine imperfection. For a given value of π, the RMSE decreases with increasing machine imperfection.

2.6. Relative Bias and RMSE of Û_new,1 and Û_new,2.

In this section we study the effect on the bias and RMSE of a change in one of the misclassification error probabilities when the other is held constant. We present the relative bias, (E(Û) − U)/U, in Graphs 9A and 9B for values of p ranging from 0.95 to 1 and values of p′ from 0 to 0.05. U denotes the actual number of undetected defects remaining in the outgoing lots, and Û represents the estimated number. We substitute Û_new,1 for Û in Graph 9A and Û_new,2 in Graph 9B. We consider small lot/sample sizes for Û_new,1 since this permits calculation of the exact theoretical bias. The relative bias for Û_new,2 is based on large lot/sample sizes. The values taken for µ, ρ, and π are 0.1, 0.3, and 0.1 respectively. As is evident from Graph 9A, the bias in Û_new,1 increases with increasing p′. An increase in p also leads to an increase in the bias. Û_new,1 seems to be negatively biased for small lot/sample sizes.

Next, consider Graph 9B. For almost all values of p and p′, zero is contained in the 95% confidence interval for the relative bias; this implies that the bias in Û_new,2 is negligible. For Û_new,2 also, the bias seems to increase with increasing p. The bias is either positive or negative, depending on the type of inspection error.

Graphs 10A and 10B show how the RMSE of Û_new,1 and Û_new,2 changes with changes in p and p′. As p′ increases, the RMSE usually decreases. This is because an increase in the probability of a non-defective unit being declared defective leads to a higher probability of the entire lot being screened. Thus more defectives are detected and removed, leading to a lower RMSE. Next, consider the cross section of the plots. For a given p′, an increase in p reduces the RMSE.
This is because a higher probability of defectives being declared defective leads to more lots being screened entirely, which in turn results in a smaller number of undetected defectives, and thus a lower RMSE. Note that for the perfect machine case, that is when p = 1 and p′ = 0, the results seem to be slightly different. Looking at the cross section for p′ = 0, the RMSE seems to have increased at p = 1. This is probably because fewer lots are completely screened, since no non-defectives are declared to be defective.

2.7. Example.

To illustrate the computation of the estimators, we simulate a data set based on the model discussed in Section 2.3. The data, shown in Tables 1 through 4 below, are generated by taking a sample of 125 units from each of 300 lots of size 5000. The values of µ, ρ, and π considered in the simulations are 0.1, 0.3, and 0.1 respectively. p and p′ are taken to be 1.0 and 0.0 for Tables 1 and 2, and 0.999 and 0.001 for Tables 3 and 4 respectively.

Table 1 presents the actual number of defectives present in the different lots. 273 lots did not contain any defectives; the rest contained at least one defective unit. Table 2 presents the observed number of defectives when the inspection procedure is perfect. Similarly, for p = 0.999 and p′ = 0.001, Tables 3 and 4 present the actual number of defectives and the observed defectives in the 300 lots respectively.

For the generation of Tables 1 and 3, the actual number of defectives, D_i, follows Binomial(n, ω_i). Specifically, D_i1 ~ Binomial(m, ω_i) and D_i2 ~ Binomial(n − m, ω_i). Tables 2 and 4 are generated from Tables 1 and 3 respectively. The declared number of defectives in the sample, Y_i1, is generated in two steps. Defectives that are declared defective are simulated as Binomial(D_i1, p). Non-defectives declared defective are simulated as Binomial(m − D_i1, p′). Thus, Y_i1 = Binomial(D_i1, p) + Binomial(m − D_i1, p′). The declared number of defectives in the remaining lot, Y_i2, is simulated similarly. Y_i is obtained by adding Y_i1 and Y_i2. The computation of the estimators and their comparison to the actual number of undetected defectives is shown below.
Table 1 (p = 1.0, p′ = 0.0)
Actual # of defectives (D): 0, 3, 9, 35, 42, 51, 69, 86, 94, 99, 135, 146, 147, 171, 300, 323, 379, 503, 662, 760, 1090, 1254, 1571, 1681, 1723, 1820, 2582
# of lots: 273 lots had D = 0; the remaining 27 lots had the positive values listed above.

Table 2 (p = 1.0, p′ = 0.0)
# of defectives detected (Y): 0, 35, 42, 51, 86, 94, 99, 135, 146, 171, 300, 323, 379, 503, 662, 760, 1090, 1254, 1571, 1681, 1723, 1820, 2582
# of lots: 276 lots had Y = 0; the remaining 24 lots had the positive values listed above.

Actual number of undetected defectives = 81.

    Û_GS,1 = 35/0.589 + 42/0.656 + ··· + 2582/1 − 15654 ≈ 108

    Û_GS,2 = [(35 − 5000·0)/(1 − 0)](1/0.589 − 1) + [(42 − 5000·0)/(1 − 0)](1/0.656 − 1) + ··· + [(2582 − 5000·0)/(1 − 0)](1/1 − 1) ≈ 108

    Û_new,1 = [(35 − 5000·0)/(1 − 0)](1/1 − 1) + [(42 − 5000·0)/(1 − 0)](1/1 − 1) + ··· + [(2582 − 5000·0)/(1 − 0)](1/1 − 1) = 0

    Û_new,2 = [(15654 − 24·5000·0)/(1 − 0)](1 − 1) + 276·0.22865 ≈ 64

Table 3 (p = 0.999, p′ = 0.001)
Actual # of defectives (D): 0, 4, 5, 13, 14, 16, 25, 43, 45, 60, 65, 91, 120, 122, 182, 249, 371, 840, 841, 853, 1166, 1253, 2061, 2640
# of lots: 275 lots had D = 0; the remaining 25 lots had the positive values listed above.

Table 4 (p = 0.999, p′ = 0.001)
# of defectives declared (Y): 0, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 17, 30, 47, 67, 70, 101, 126, 129, 188, 258, 375, 841, 842, 855, 1169, 1256, 2061, 2641
# of lots: 248 lots had Y = 0; the remaining 52 lots had the positive values listed above, with the small values of Y occurring in several lots each.

Actual number of undetected defectives = 100.

    Û_GS,1 = 3/0.073 + 4/0.096 + ··· + 2641/1 − 11148 ≈ 1367

    Û_GS,2 = [(3 − 5000·0.001)/(0.999 − 0.001)](1/0.073 − 0.999) + [(4 − 5000·0.001)/(0.999 − 0.001)](1/0.096 − 0.999) + ··· + [(2641 − 5000·0.001)/(0.999 − 0.001)](1/1 − 0.999) ≈ 1380

    Û_new,1 = [(3 − 5000·0.001)/(0.999 − 0.001)](1/0.1175 − 0.999) + [(4 − 5000·0.001)/(0.999 − 0.001)](1/0.117 − 0.999) + ··· + [(2641 − 5000·0.001)/(0.999 − 0.001)](1/1 − 0.999) ≈ 62

    Û_new,2 = [(11148 − 52·5000·0.001)/(0.999 − 0.001)](1 − 0.999) + 248·0.21977 ≈ 66

As can be seen, the estimators are highly disparate. Û_new,1 always gives 0 as the estimate for the number of undetected defectives when the inspection procedure is perfect, but its performance improves when the slightest machine imperfection is introduced. Û_new,2 seems to perform the best amongst all the estimators considered; its values of 64 (when p = 1.0 and p′ = 0.0) and 66 (when p = 0.999 and p′ = 0.001) are closest to the actual values (81 and 100 respectively) of the number of undetected defectives in the accepted lots.
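The calculations above can also be carried out programmatically. The following Python sketch evaluates the four estimators from the declared counts Y_i of the rejected lots, using equations (1)–(6); the helper name, the use of scipy's hypergeometric distribution for P_i, and the treatment of E(D_i | Y_i1 = 0) as a supplied constant (as in the example) are choices of this sketch rather than part of the original text.

```python
import numpy as np
from scipy.stats import hypergeom

def estimators(y_rejected, n, m, T, p, pp, Pi_prime, e_d_given_accept):
    """Compute U_GS1, U_GS2, U_new1, U_new2 for Sampling Plan A.

    y_rejected       : declared counts Y_i for the rejected lots (Y_i1 > 0)
    n, m, T          : lot size, sample size, number of lots
    p, pp            : P(declared def. | def.) and p' = P(declared def. | non-def.)
    Pi_prime         : array of P_i' values from equation (5), one per rejected lot
    e_d_given_accept : the constant E(D_i | Y_i1 = 0) used for accepted lots
    """
    y = np.asarray(y_rejected, dtype=float)
    n_acc = T - len(y)
    # Equation (2): P_i = 1 - C(n - y_i, m)/C(n, m), the chance that a simple
    # random sample of m units misses all y_i defectives.
    P = 1.0 - hypergeom(n, y.astype(int), m).pmf(0)
    d_hat = (y - n * pp) / (p - pp)                  # moment estimate of D_i
    u_gs1 = np.sum(y / P) - np.sum(y)                # equation (1); accepted lots have Y_i = 0
    u_gs2 = np.sum(d_hat * (1.0 / P - p))            # equation (3)
    u_new1 = np.sum(d_hat * (1.0 / np.asarray(Pi_prime) - p))      # equation (4)
    u_new2 = np.sum(d_hat * (1.0 - p)) + n_acc * e_d_given_accept  # equation (6)
    return u_gs1, u_gs2, u_new1, u_new2
```

Called with the rejected-lot counts of a data set such as Table 2 (p = 1, p′ = 0, P_i′ ≡ 1, and the appropriate value of E(D_i | Y_i1 = 0)), the function carries out the same style of calculation as shown above.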
Chapter 3: Expected Cost Function.

A fundamental activity associated with quality assurance is cost estimation. There is a cost associated with any given sampling plan. This cost typically includes the cost of sampling any given unit and the cost of shipping a defective unit. If the inspection procedure is imperfect, a third kind of cost is introduced into the system: discarding non-defective units. In this chapter, we construct expected cost functions for sampling plans based on fixed sample sizes. Simulation studies are then used to show how the expected cost depends not only on the sample size, but also on the unknown population characteristics. Next, we show how intermediate empirical Bayes estimates of population characteristics can be used to obtain adaptive acceptance sampling plans which vary the sample sizes in order to reduce expected cost. We also discuss how comparison of RMSE across various levels of machine imperfection can lead to misleading conclusions, and we propose a measure which combines RMSE with expected cost.

To construct the expected cost function for an acceptance sampling plan, we need the following three types of per-unit costs to a manufacturer.

c1: Cost of sampling a given unit.

c2: Cost of failing to detect a defective. This cost arises from shipping a defective unit to a customer and may entail a direct and/or an indirect cost. The direct cost would consist of the administrative cost involved in replacing a defective unit, while the indirect cost may be incurred by way of customer dissatisfaction, loss of credibility, and eventually loss of a customer.

c3: Cost of incorrectly labeling a non-defective as defective. This cost comes about from the loss in revenue arising from discarding a non-defective unit, and may be approximated by the price at which a marginal unit could be sold in the market. It may be worth noting that of the three components, c1, c2 and c3, this one often has the largest per-unit value. [Footnote 9: Based on a personal conversation with a quality control manager at Motorola, a typical ratio of c1:c2 ranges from 1:10 to 1:100. The ratio of c2:c3 ranges from 1:2 to 1:5. However, since the relative sizes depend on the nature of the product, we consider other combinations of c1:c2:c3 in addition to the above-mentioned ratios.]

In the next section, we compute the expected cost function under Sampling Plan A.

3.1. Cost Function for Sampling Plan A.

In this section, we compute the expected cost function for Sampling Plan A. It consists of three components, one for each of the three types of cost. The first component is the direct cost that arises from sampling the lot. The first m units from the lot are always screened, while the screening of the remaining (n − m) units is conditional on finding at least one declared defective in the initial sample of m units. Thus, the expected number of units inspected may be written as

    E(Number of units sampled) = m + (n − m)P_i.

The second component estimates the cost of declaring a unit non-defective when it is actually defective, that is, the cost of shipping defectives. This component may arise from two sources. First, if a lot passes, the remaining (n − m) units are shipped without being screened; the existence of defectives in these (n − m) units contributes toward this cost. Second, due to machine imperfection, a defective unit may be declared non-defective, further increasing this cost. Thus,

    E(Number of defects shipped) = E[I(Y_i1 = 0)D_i + (1 − p)I(Y_i1 > 0)D_i].

The last component estimates the cost of incorrectly labeling a non-defective as defective. This cost arises due to machine imperfection. Noting that (n − D_i) is the number of non-defectives present in lot i, and that every unit of a rejected lot is screened, the expected number of non-defectives labeled defective may be written as

    E(Number of non-defectives incorrectly labeled defective) = p′E[(n − D_i)I(Y_i1 > 0)].

Multiplying the per-unit costs by the expected numbers of units discussed above, the cost function (henceforth referred to as Cost1) is as follows:

    Cost1 = c1(m + (n − m)P_i) + c2 E[I(Y_i1 = 0)D_i + (1 − p)I(Y_i1 > 0)D_i] + c3 p′E[(n − D_i)I(Y_i1 > 0)]
          = c1(m + (n − m)P_i) + c2 E[(1 − I(Y_i1 > 0))D_i + (1 − p)I(Y_i1 > 0)D_i] + c3 p′E[(n − D_i)I(Y_i1 > 0)]
          = c1 m + c1(n − m)P_i + c2 E[D_i] − c2 p E[I(Y_i1 > 0)(D_i1 + D_i2)] + c3 p′ n E[I(Y_i1 > 0)] − c3 p′E[I(Y_i1 > 0)(D_i1 + D_i2)]
          = c1 m + c1(n − m)P_i + c2 E[D_i] − c2 p E[E[I(Y_i1 > 0)D_i1 | D_i1]] − c2 p E[E[I(Y_i1 > 0)D_i2 | ω]]
            + c3 p′ n P_i − c3 p′ E[E[I(Y_i1 > 0)D_i1 | D_i1]] − c3 p′ E[E[I(Y_i1 > 0)D_i2 | ω]],

where

    E[D_i] = E[E[D_i | ω]] = nE(ω) = n [π ∫₀¹ ω · ω^{a−1}(1 − ω)^{b−1} dω / Beta(a, b) + 0·(1 − π)] = nπ a/(a + b);

    P(D_i1 = b′) = (1 − π) + π Beta(a, m + b)/Beta(a, b) if b′ = 0,
    P(D_i1 = b′) = π ∫₀¹ C(m, b′) ω^{b′}(1 − ω)^{m−b′} ω^{a−1}(1 − ω)^{b−1} dω / Beta(a, b) = π C(m, b′) Beta(a + b′, m − b′ + b)/Beta(a, b) if b′ > 0;

    E[I(Y_i1 > 0)D_i1] = E[E[I(Y_i1 > 0)D_i1 | D_i1]] = E[D_i1 E[I(Y_i1 > 0) | D_i1]]
        = π Σ_{D_i1=1}^{m} C(m, D_i1) [Beta(a + D_i1, m − D_i1 + b)/Beta(a, b)] [1 − (1 − p′)^{m−D_i1}(1 − p)^{D_i1}] D_i1;

    E[I(Y_i1 > 0)D_i2] = E[E[I(Y_i1 > 0)D_i2 | ω]] = E[E[I(Y_i1 > 0) | ω] E[D_i2 | ω]] = E[Σ_{D_i1=0}^{m} E[I(Y_i1 > 0) | ω, D_i1] P(D_i1 | ω) E[D_i2 | ω]]
        = π(n − m) Σ_{D_i1=0}^{m} C(m, D_i1) [Beta(a + D_i1 + 1, m − D_i1 + b)/Beta(a, b)] [1 − (1 − p′)^{m−D_i1}(1 − p)^{D_i1}];

    P(Y_i1 = 0) = π Σ_{D_i1=1}^{m} C(m, D_i1) [Beta(a + D_i1, m − D_i1 + b)/Beta(a, b)] (1 − p′)^{m−D_i1}(1 − p)^{D_i1} + (1 − p′)^m [(1 − π) + π Beta(a, m + b)/Beta(a, b)];

    P_i = P(Y_i1 > 0) = 1 − P(Y_i1 = 0).
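Because the expectations above reduce to finite sums of Beta functions, Cost1 can be evaluated directly. The sketch below is a minimal Python implementation of the formulas as reconstructed here; the function name, the log-space evaluation via betaln, and the parameter values in the illustrative call (including the placeholder a and b) are assumptions of this sketch.

```python
import math
from scipy.special import betaln

def cost1(n, m, a, b, pi, p, pp, c1, c2, c3):
    """Per-lot expected cost of Sampling Plan A (the Cost1 function above).
    p = P(declared defective | defective), pp = p' = P(declared defective | non-defective)."""
    logB = betaln(a, b)
    e_d = n * pi * a / (a + b)        # E[D_i]
    e_i_d1 = 0.0                      # E[I(Y_i1 > 0) D_i1]
    e_i_d2 = 0.0                      # E[I(Y_i1 > 0) D_i2] (scaled by n - m below)
    p_y0 = 0.0                        # P(Y_i1 = 0), beta-mixture part
    for d in range(0, m + 1):
        w = math.comb(m, d) * math.exp(betaln(a + d, m - d + b) - logB)
        miss = (1 - pp) ** (m - d) * (1 - p) ** d    # P(Y_i1 = 0 | D_i1 = d)
        if d >= 1:
            e_i_d1 += pi * w * (1 - miss) * d
            p_y0 += pi * w * miss
        e_i_d2 += pi * math.comb(m, d) * math.exp(betaln(a + d + 1, m - d + b) - logB) * (1 - miss)
    e_i_d2 *= (n - m)
    p_y0 += (1 - pp) ** m * ((1 - pi) + pi * math.exp(betaln(a, m + b) - logB))
    P_i = 1.0 - p_y0
    return (c1 * (m + (n - m) * P_i)
            + c2 * (e_d - p * (e_i_d1 + e_i_d2))
            + c3 * pp * (n * P_i - (e_i_d1 + e_i_d2)))

# Illustrative evaluation (a and b are placeholders, not values derived from mu and rho).
print(cost1(n=5000, m=125, a=2.0, b=18.0, pi=0.1, p=0.999, pp=0.001, c1=1, c2=100, c3=200))
```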
-c3p' E[E(I(Yil >O)Di2 Iro] where 48 E[Di ] = E[E[Di I ro E= nE(ro) = I a-1 (l ) b-1 J = n 7t Jro ro -ro dro + 0(I -7t) = n _a_ 7t ( Beta( a, b) a+ b0 P(Di1 = b') = P(Dil = b' Iro )P(ro) = (l -7t) + 7t Beta( a, m + b) if b' = 0 Beta( a, b) I (m) a-1 (l )b-1 = 7t J ro b' (1 _ ro) m-b' ro -ro dro b' Beta( a, b) 0 = 7t(m) Beta(a+ b',m-b' + b) if b' > 0 b' Beta( a, b) E[I(Yi1>0)Di1]= E[E[I(Yi1>0)Di11Di1]= E[Di1E[I(Yi1>0)1Di1] = 7t f [ m)Beta(a+Di1'm-Di1+b)~-(l-p')m-Dil(I-p)Di1 ~ii Dii=I DiJ Beta(a,b) E[I(Yi1>0)Di2]= E(E[I(Yi1 >O)Di2 lro] =E[EP(Yil >O)lro ]E[Di2 lro ]] =E[ f E~(Yi1>0)lro,Di1]P[DiJlro ]E[Di2lro]l D1 =0 ~ =7t(n-m) I [ m) Beta( a+ Di!+ l,m-Dil + b) Q-(1-p')m-DiJ (I -p)Dil ) Dj1=0 Di! Beta(a,b) 49 = 7t t (mJBeta(a+ Dil>m-oil+ b) €1-p')m-Djt (1-p)Dil ) DH=l Dj! Beta( a, b) + (l-p')m ((1-n) + 7t Beta( a, m + b)) Beta( a, b) Pi = P(Yil > 0)= 1-P(Yil = 0) 3.2. An adaptive sequential sampling plan. In this section, we propose a sequential sampling plan in order to lower the expected cost as much as possible. Because the expected cost function depends on population parameters a, b and n, which are unknown, it is not possible to determine the optimal sample size m ahead of time. However, after sampling a fraction of the lots, estimates of a, b and 7t may be obtained by empirical Bayes considerations. These estimates may then be used to determine the sample size m which leads to a lower expected cost over the next few lots. Estimates of a, b, and 7t may be updated from these lots, and then used to update m which would lower the expected cost over the next fraction of lots. This process is repeated until all lots are exhaustedlo. In this fashion, we seek to sequentially adjust the sample size as we get more data, thereby reducing the cost associated with the sampling plan. To assess the potential of this adaptive sampling plan, we conducted the following simple two stage simulation experiment involving a single update of the sample size m: I. From the first l 00 lots, we estimate a, b and 7t. The sample size considered is 125 for this set of lots. 10 For experimental purposes, we restrict the procedure to two stages; the fraction of lots considered for estimation of population parameters being 1/3, and that for reducing the cost being 2/3. 51 2. Using these estimated values, we estimate the sample size, m, which will lead to the lowest expected cost over the next 200 lots. In order to estimate the expected cost, we substitute a, b, and 7t with estimates a, band n respectively (discussed in section 2.3) in Costl. Thus, the sampling plan might change at this point. 3. We do steps 1 and 2 for each simulation. The expected minimum cost of sampling the 200 lots is then averaged across simulations, giving us an estimate of the expected expected minimum cost. 4. The above cost is then added to the cost of sampling the initial 100 lots, giving us the total expected cost for the initial sampling plan. We compare the performance of the expected cost function with known population parameters (referred to as Cost_ Known in the graphs 11 A-11 C) with the one with unknown parameters (referred to as Cost_ Unknown.) The values of n, m and T are the same as used in G&S (1992) (See footnote 3). Graphs for Cost_ Unknown are based on 100 simulations, with a, b, and 7t estimated from the first 100 lots in each simulation. In Graphs 1 lA-1 lC we take 7t =0.1, and p = 0.3. The values considered forµ are 0.01, 0.05, 0.1, 0.15 and 0.2. 
We compare the performance of the expected cost function with known population parameters (referred to as Cost_Known in Graphs 11A–11C) with the one with unknown parameters (referred to as Cost_Unknown). The values of n, m and T are the same as used in G&S (1992) (see footnote 3). Graphs for Cost_Unknown are based on 100 simulations, with a, b, and π estimated from the first 100 lots in each simulation. In Graphs 11A–11C we take π = 0.1 and ρ = 0.3. The values considered for µ are 0.01, 0.05, 0.1, 0.15 and 0.2. Two different levels of machine imperfection are considered: p = 0.999, p′ = 0.001 and p = 0.99, p′ = 0.01. Various combinations of c1:c2:c3 are considered. The sample size for the first 100 lots is taken to be 125 for both the Cost_Known and Cost_Unknown cases. To ensure comparability, the sample size for the next 200 lots is also taken to be the same (the values considered for m are 1, 125, 250, 500 and 1000). The performance of Cost_Unknown seems to compare favorably with that of Cost_Known; it improves as m increases because the population parameters are estimated better with a larger number of sampled units. The performance of Cost_Unknown also improves with increasing µ, because the number of lots completely screened increases due to the presence of a larger proportion of defectives. This phenomenon occurs regardless of the size of the sample, because as µ increases the probability of finding a defective in the initial sample approaches one.

To give the reader some perspective on how the expected cost varies with machine imperfection, we present Graph 12A, which plots the expected cost against µ for various combinations of p and p′. The values of the parameters considered are n = 5000, m = 125, c1 = 1, c2 = 100, c3 = 200. The graph shows that the expected cost increases with increasing machine imperfection.

Graph 12B presents the percentage saving in expected cost achieved using the method discussed above. The saving is calculated by subtracting the minimum expected cost of the incremental procedure from the expected cost where the sample size is taken to be 125 for all 300 lots; the difference is then divided by the latter cost. Note that the minimum expected cost is based on m = 125 for the first 100 lots. The next 200 lots, however, are based on the optimal sample size, which is determined on the basis of â, b̂ and π̂. We consider different combinations of c1:c2:c3, p and p′. For the values considered in the graph, the range of savings is from 15% to 63%. Thus, the percentage saving can be as much as 63% in some cases.

For illustration purposes, consider a simple example. Suppose the values considered for c1:c2:c3, p and p′ are 1:100:500, 0.999 and 0.001. Let ρ = 0.3, π = 0.1, and µ = 0.1. If m = 125 in all 300 lots, the total expected cost is calculated to be 458,700. The minimum expected cost is calculated to be 348,700. In arriving at the minimum cost figure, m is taken to be 125 for the first 100 lots. The optimal sample size for the other 200 lots is determined to be 50. The percentage net saving is thus {(458,700 − 348,700) / 458,700} × 100 ≈ 24%.

When p = 0.99 and p′ = 0.01, the percentage savings range from 52% to 63%. For p = 0.999 and p′ = 0.001 the range is approximately from 15% to 40%. Clearly, the more imperfect machine offers better percentage savings. Naturally, this does not mean that it offers a lower absolute expected cost after the savings are considered. For example, for µ = 0.1, when p = 0.99 and p′ = 0.01, the expected cost is 6,882,300. For p = 0.999 and p′ = 0.001, the expected cost is 458,700 (refer to Graph 12A). The corresponding percentage savings (from Graph 12B) are 57% and 24% respectively. Even though the percentage saving is greater for the machine with the higher level of imperfection, it still has the higher overall absolute expected cost (approximately 2,959,389 vs. 348,700).

3.3. General Cost Function.

For situations involving costly or destructive screening, it is common practice to use a zero-defect sampling plan with a small initial sample. The sample size is dictated by the cost of the test, and the zero-acceptance number arises from the desire to maintain a high level of quality. Since we are considering an inspection procedure that may not be 100% accurate, defectives may be declared non-defective, and vice versa. One of the ways to improve the accuracy of a screening procedure is to screen the units repeatedly. Repetitive sampling is done frequently to reduce the number of non-defectives being scrapped. In this section, we generalize the zero-defect sampling plan considered above to one in which the initial sample is screened γ times (where γ is a positive integer). The objective of this section is to find the values of m (the sample size) and γ (the number of times the sample is screened) which minimize the cost for given values of c1, c2 and c3. c1, c2 and c3 are the same as defined in Section 3.1. Since an analytical solution to this problem is not available, we estimate the cost for various combinations of m and γ, in order to select those values which minimize the cost globally.

Consider a set of T lots, each of size n units. A random sample of m units is selected from each lot and inspected γ times instead of once. A unit is declared defective if it fails at least γ1 = integer(γ/2 + 1) times. If no defectives are found, the lot is accepted. If at least one defective is found, the entire lot is screened, and all the declared defectives are removed. The probabilities of correctly classifying a defective and of incorrectly classifying a non-defective under this sampling plan may easily be stated in terms of p and p′ respectively. This is important because the methodology may be applied to the general γ-sampling plan, where γ is any positive integer. The probability of correctly classifying a defective may now be stated in terms of p as follows:

    p* = P(a unit is declared defective | actually defective)
       = P(declared defective γ1 times | actually defective) + ··· + P(declared defective γ times | actually defective)
       = Σ_{i=γ1}^{γ} C(γ, i) p^i (1 − p)^{γ−i}.

Similarly, the probability of incorrectly classifying a non-defective as defective is defined as

    p′* = Σ_{i=γ1}^{γ} C(γ, i) (p′)^i (1 − p′)^{γ−i}.
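As a small numerical illustration of these definitions, the sketch below computes p* and p′* for a few values of γ under the rule γ1 = integer(γ/2 + 1); the particular p and p′ values are illustrative.

```python
from math import comb

def repeated_screen_probs(p, pp, gamma):
    """p* and p'* when each sampled unit is screened gamma times and is declared
    defective if it fails at least gamma1 = int(gamma/2 + 1) of the screenings."""
    gamma1 = int(gamma / 2 + 1)
    p_star = sum(comb(gamma, i) * p**i * (1 - p)**(gamma - i)
                 for i in range(gamma1, gamma + 1))
    pp_star = sum(comb(gamma, i) * pp**i * (1 - pp)**(gamma - i)
                  for i in range(gamma1, gamma + 1))
    return p_star, pp_star

for gamma in (1, 2, 3, 4):
    print(gamma, repeated_screen_probs(p=0.999, pp=0.001, gamma=gamma))
```

For example, with p = 0.999 and p′ = 0.001, γ = 3 gives p* ≈ 0.999997 and p′* ≈ 3 × 10⁻⁶, showing how repeated screening sharpens both classification probabilities.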
3.3. General Cost function.

For situations involving costly or destructive screening, it is common practice to use a zero-defect sampling plan with a small initial sample. The sample size is dictated by the cost of the test, and the zero acceptance number arises from the desire to maintain a high level of quality. Since we are considering an inspection procedure that may not be 100% accurate, defectives may be declared non-defective, and vice versa. One way to improve the accuracy of a screening procedure is to screen the units repeatedly; repetitive sampling is done frequently to reduce the number of non-defectives being scrapped. In this section, we generalize the zero-defect sampling plan considered above to one in which the initial sample is screened y times, where y is a positive integer. The objective of this section is to find the values of m (the sample size) and y (the number of times the sample is screened) which minimize the cost for given values of c1, c2 and c3, where c1, c2 and c3 are as defined in section 3.1. Since an analytical solution to this problem is not available, we estimate the cost for various combinations of m and y in order to select those values which minimize the cost globally.

Consider a set of T lots, each of size n units. A random sample of m units is selected from each lot and inspected y times instead of once. A unit is declared defective if it fails at least y1 = integer(y/2 + 1) times. If no defectives are found, the lot is accepted. If at least one defective is found, the entire lot is screened, and all the defectives are removed.

The probability of correctly classifying a defective, as well as that of incorrectly classifying a non-defective, for this sampling plan may easily be stated in terms of p and p' respectively. This is important because the methodology may be applied to the general y-sampling plan, where y is any positive integer. The probability of correctly classifying a defective is

\[ p^{*} = P(\text{declared defective}\mid\text{actually defective}) = P(\text{declared defective } y_1 \text{ times}\mid\text{defective}) + \cdots + P(\text{declared defective } y \text{ times}\mid\text{defective}) = \sum_{i=y_1}^{y}\binom{y}{i}\,p^{i}(1-p)^{y-i}. \]

Similarly, the probability of incorrectly classifying a non-defective as defective is

\[ p'^{*} = \sum_{i=y_1}^{y}\binom{y}{i}\,p'^{\,i}(1-p')^{y-i}. \]

Thus,

\[ E(\text{number of units sampled}) = y\,m + (n-m)\,P_i^{*}, \]
\[ E(\text{number of non-defectives incorrectly labeled defective}) = E\big[I(Y_{i1}>0)\big(p'^{*}(m-D_{i1}) + (n-m-D_{i2})\,p'\big)\big]. \]

Multiplying the per-unit costs by the expected numbers of units discussed above, the expected cost function is

\[ \text{Expected Cost} = c_1\big(y\,m+(n-m)P_i^{*}\big) + c_2\,E\big[I(Y_{i1}=0)\,D_i + I(Y_{i1}>0)\big((1-p^{*})D_{i1}+(1-p)D_{i2}\big)\big] + c_3\,E\big[I(Y_{i1}>0)\big(p'^{*}(m-D_{i1})+(n-m-D_{i2})\,p'\big)\big] \]
\[ = c_1\,y\,m + c_1(n-m)P_i^{*} + c_2\,E[D_i] - c_2\,p^{*}\,E[I(Y_{i1}>0)D_{i1}] - c_2\,p\,E[I(Y_{i1}>0)D_{i2}] + c_3\,p'^{*}\,m\,P_i^{*} + c_3\,p'(n-m)P_i^{*} - c_3\,p'^{*}\,E[I(Y_{i1}>0)D_{i1}] - c_3\,p'\,E[I(Y_{i1}>0)D_{i2}], \]

where E[D_i] and P(D_{i1} = b') are as given in section 3.1, and

\[ E[I(Y_{i1}>0)\,D_{i1}] = E\{D_{i1}\,E[I(Y_{i1}>0)\mid D_{i1}]\} = \pi\sum_{D_{i1}=1}^{m}\binom{m}{D_{i1}}\frac{\mathrm{Beta}(a+D_{i1},\,m-D_{i1}+b)}{\mathrm{Beta}(a,b)}\Big[1-(1-p'^{*})^{m-D_{i1}}(1-p^{*})^{D_{i1}}\Big]D_{i1}, \]
\[ E[I(Y_{i1}>0)\,D_{i2}] = E\{E[I(Y_{i1}>0)\mid\omega]\,E[D_{i2}\mid\omega]\} = \pi\,(n-m)\sum_{D_{i1}=0}^{m}\binom{m}{D_{i1}}\frac{\mathrm{Beta}(a+D_{i1}+1,\,m-D_{i1}+b)}{\mathrm{Beta}(a,b)}\Big[1-(1-p'^{*})^{m-D_{i1}}(1-p^{*})^{D_{i1}}\Big], \]

\[ P^{*}(Y_{i1}=0) = \pi\sum_{D_{i1}=1}^{m}\binom{m}{D_{i1}}\frac{\mathrm{Beta}(a+D_{i1},\,m-D_{i1}+b)}{\mathrm{Beta}(a,b)}(1-p'^{*})^{m-D_{i1}}(1-p^{*})^{D_{i1}} + (1-p'^{*})^{m}\left[(1-\pi)+\pi\,\frac{\mathrm{Beta}(a,\,m+b)}{\mathrm{Beta}(a,b)}\right], \]

\[ P_i^{*} = P^{*}(Y_{i1}>0) = 1 - P^{*}(Y_{i1}=0). \]

Graphs 13A-13D plot the expected cost against the sample size m for different values of y; p and p' are taken to be 0.999 and 0.001 in the first three graphs, and 0.99 and 0.01 in the fourth. The values assigned to μ, ρ, and π are 0.1, 0.3, and 0.1 respectively. The ratios c1:c2:c3 considered are 1:100:500, 1:500:100 and 1:10:20.

It is evident from the graphs that the shape of the expected cost function depends on the ratio c1:c2:c3, and also on the values of the other parameters. When c1:c2:c3 is 1:100:500, the expected cost first decreases and then increases. It decreases because, for a small sample, the cost of sampling and the cost of scrapping a non-defective are low compared to the cost of shipping a defective unit. As m increases, the probability of detecting a defective unit in the sample increases. This leads to a higher probability, and thus a higher expected cost, of screening the entire lot. The costs of sampling and of shipping a defective go down, and the cost of scrapping a non-defective goes up, as m increases. Thus, when the sample size becomes sufficiently large, the overall expected cost starts to increase with increasing m. A sample size of approximately 30 appears to be optimal in this case.

For c1:c2:c3 = 1:500:100, the expected cost behaves in the same way; the initial decline in the expected cost function is, however, steeper. For c1:c2:c3 = 1:10:20, the expected cost increases monotonically, since the cost of sampling is very high compared to the other two costs. Minimal sampling seems to be the best alternative in this scenario.

Next, consider the cross section of the graphs. When p and p' are 0.999 and 0.001 respectively, and c1:c2:c3 is either 1:100:500 or 1:500:100, y = 3 appears to be optimal. For c1:c2:c3 = 1:100:500, therefore, y = 3 and m = 30 is the optimal scenario. For c1:c2:c3 = 1:500:100 and y = 3, the minimum occurs at approximately m = 100. When p = 0.999, p' = 0.001 and c1:c2:c3 = 1:10:20, it is interesting to note that y = 2 is the best globally. When p and p' change to 0.99 and 0.01 respectively, screening the sample 4 times appears to be optimal; y = 1 leads to the highest expected cost for the parameter values considered in this graph. The optimal combination is thus y = 4 and m = 25. Also note that the expected cost first decreases and then increases with increasing y; the rationale that explains the U-shape of the overall expected cost as m increases holds here as well. Note that for a perfect machine, y = 1 would be optimal, since no further information is gained by screening the sample repeatedly.

The reader should note that the generalized expected cost function also uses the population parameters a, b and π, which are unknown. Estimates of a, b and π may be obtained by sampling a fraction of the lots, in the same fashion as suggested for the zero-defect sampling plan. These estimates may then be used to determine the optimal sample size, m, and the optimal number of times the sample should be screened, y.
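As an illustration of the majority rule used in this section, the following short sketch (ours; the function name is invented) computes p* and p'* for any y:

```python
# Error rates for a unit screened y times and declared defective if it fails
# at least y1 = int(y/2) + 1 of the y inspections.
from math import comb

def star_probs(p, p_prime, y):
    y1 = y // 2 + 1
    p_star = sum(comb(y, i) * p ** i * (1 - p) ** (y - i) for i in range(y1, y + 1))
    p_prime_star = sum(comb(y, i) * p_prime ** i * (1 - p_prime) ** (y - i)
                       for i in range(y1, y + 1))
    return p_star, p_prime_star

# For y = 3 this reduces to p* = 3p^2(1-p) + p^3 and p'* = 3p'^2(1-p') + p'^3,
# the Sampling Plan B case of Chapter 4.
print(star_probs(0.99, 0.01, 3))
```

Substituting p* and p'* for p and p' in the initial-sample terms of the earlier cost sketch, and c1*y*m for the cost of screening the initial sample, gives the generalized expected cost; evaluating it over a grid of (m, y) pairs is the global search summarized in Graphs 13A-13D.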
3.4. Combining MSE and Expected Cost.

In the RMSE comparisons across different levels of machine imperfection (see Graph 7), it appears that one can improve RMSE performance by using a less efficient screening procedure. Sampling more reduces the number of undetected defectives and therefore improves RMSE performance. This is, however, a manifestation of the sampling plan and can be misleading. In order to make proper efficiency comparisons, we propose another measure which explicitly takes into account the expected cost associated with the sampling plan. The measure we consider is (MSE)^a * (Expected Cost)^b, where a, b > 0. This is an appropriate measure since the manufacturer is interested in minimizing both MSE and expected cost; he can set the values of a and b depending on whether he is concerned more about MSE or about expected cost. As a special case, we consider a = b = 1.

Graph 14 plots (MSE * Expected Cost) across various levels of μ. The graph is based on the estimator Ûnew,2. The (MSE * Expected Cost) curves first increase and then decrease with increasing μ. The overall shape of the curves seems to follow that of the RMSE curves. This is because expected cost has less variability than RMSE across the various levels of μ (refer to Graphs 7 and 12A). Cross-sectionally, (MSE * Expected Cost) increases with increasing machine imperfection; the pattern seems to follow that of expected cost. This is because of the higher variation in expected cost, as compared to RMSE, across the various levels of machine imperfection. The error-free case, p = 1 and p' = 0, is ideal when both MSE and expected cost are taken into account. This implies that the more accurate the machine, the better it is for the manufacturer. Thus, a manufacturer who wishes to jointly minimize MSE and expected cost should consider a measure such as (MSE)^a * (Expected Cost)^b.
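A toy illustration of the criterion follows; the MSE and expected cost figures are invented for illustration only and are not results from the simulations.

```python
# Hypothetical (MSE, expected cost) pairs for two machines, combined with a = b = 1.
machines = {"p = 0.999, p' = 0.001": (1.5e3, 4.6e5),
            "p = 0.99,  p' = 0.01":  (1.0e3, 6.9e6)}
a_w, b_w = 1, 1
for name, (mse, cost) in machines.items():
    print(name, mse ** a_w * cost ** b_w)
# The less accurate machine shows the smaller MSE here, yet once expected cost is
# folded in, its combined measure is an order of magnitude larger.
```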
Chapter 4: Sampling Plan B.

In this chapter, we discuss a special case of the general sampling plan discussed in section 3.3; this plan is referred to as Sampling Plan B. First, in section 4.1, we present estimators for the number of undetected defectives left in outgoing lots under this sampling plan. We do so by modifying the estimators proposed for Sampling Plan A. Second, we present the expected cost function associated with this sampling plan in section 4.2. Finally, we compare this sampling plan with Plan A on the basis of RMSE and the expected cost functions in section 4.3.

4.1. Estimators.

We consider a sampling plan where the initial sample is inspected three times instead of once in each lot. A unit is declared defective if it fails at least twice. If no defectives are found, the lot is accepted. If at least one defective is found, the entire lot is screened, and all the defectives are removed. This sampling plan is a special case of the one described in section 3.3, with the value of y taken to be 3. Note that this sampling plan is especially important when p and p' are not known, because in order to estimate p and p' it is necessary to inspect some items at least three times (see Johnson, Kotz and Wu, 1991).

The probability of correctly classifying a defective, as well as that of incorrectly classifying a non-defective, for this sampling plan may easily be stated in terms of p and p' respectively. (Footnote 11: It is easy to show that q* < q if q < 0.5 and q* > q if q > 0.5. Since p is usually greater than 0.5, and p' less than 0.5, screening the initial sample three times is equivalent to screening it once but with an inspection procedure that is more perfect.) The probability of correctly classifying a defective is

\[ p^{*} = P(\text{declared defective}\mid\text{actually defective}) = P(\text{declared defective twice}\mid\text{defective}) + P(\text{declared defective thrice}\mid\text{defective}) = 3p^{2}(1-p) + p^{3}. \]

Similarly, the probability of incorrectly classifying a non-defective as defective is

\[ p'^{*} = 3p'^{2}(1-p') + p'^{3}. \]

Note that the error probabilities for the initial sample will differ from those associated with the remainder of the lot, because the sample is screened three times whereas the remaining units are screened at most once. Therefore,

\[ E[Y_{i1}] = D_{i1}\,p^{*} + (m-D_{i1})\,p'^{*}, \qquad E[Y_{i2}] = D_{i2}\,p + (n-m-D_{i2})\,p', \]

so that D_i can be estimated by

\[ \hat z^{*} = \frac{Y_{i1}-m\,p'^{*}}{p^{*}-p'^{*}} + \frac{Y_{i2}-(n-m)\,p'}{p-p'}. \]

The estimator Ûnew,1 for the number of undetected defectives is modified accordingly, giving Û*new,1 (equation 9): wherever the initial sample enters, the error probabilities p and p' are replaced by p* and p'*, so that (Y_{i1} - m p')/(p - p') is replaced by (Y_{i1} - m p'*)/(p* - p'*) and the estimated probability that lot i is rejected becomes

\[ \hat p_i^{*} = 1-(1-p^{*})^{\frac{m}{n}\hat z^{*}}\,(1-p'^{*})^{\,m-\frac{m}{n}\hat z^{*}}. \qquad (10) \]

The second estimator, Ûnew,2, can be modified for this sampling plan in the same way, where E(D_i | Y_{i1} = 0) is a constant calculated in the same manner as for Sampling Plan A; the only modification is that p and p' are replaced by p* and p'* respectively.
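As a small illustration of the moment corrections above, the sketch below (ours; the name and the counts are hypothetical) computes the estimate of Di for a single lot under Sampling Plan B.

```python
# Estimate of Di for one lot under Plan B: the sample (screened thrice) uses the
# majority-vote rates p*, p'*; the remaining n - m units use the single-screen rates.
def d_hat_plan_b(yi1, yi2, n, m, p, p_prime):
    p_star = 3 * p ** 2 * (1 - p) + p ** 3
    pp_star = 3 * p_prime ** 2 * (1 - p_prime) + p_prime ** 3
    d1_hat = (yi1 - m * pp_star) / (p_star - pp_star)      # estimates Di1
    d2_hat = (yi2 - (n - m) * p_prime) / (p - p_prime)      # estimates Di2
    return d1_hat + d2_hat

# Hypothetical counts for one rejected lot:
print(d_hat_plan_b(yi1=3, yi2=52, n=5000, m=125, p=0.99, p_prime=0.01))
```

With p close to one and p' close to zero, the denominators p* - p'* and p - p' are close to one, so the corrections are mild.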
4.2. Cost function for Sampling Plan B.

All notation is the same as defined in section 3.1. The following cost function is easily derived from the general cost function of section 3.3 by taking the value of y to be 3; it will henceforth be referred to as Cost3:

\[ \text{Cost3} = c_1\big(3m+(n-m)P_i^{*}\big) + c_2\,E\big[I(Y_{i1}=0)\,D_i + I(Y_{i1}>0)\big((1-p^{*})D_{i1}+(1-p)D_{i2}\big)\big] + c_3\,E\big[I(Y_{i1}>0)\big(p'^{*}(m-D_{i1})+(n-m-D_{i2})\,p'\big)\big]. \]

In the following section, we compare Sampling Plans A and B on the basis of RMSE and cost-function performance. Since the fourth estimator has the best RMSE properties for all values of μ, ρ, and π, we compare the RMSE of the two sampling plans on the basis of this estimator; that is, we compare Ûnew,2 and Û*new,2.

4.3. Comparison of the Sampling Plans A and B.

We first compare the sampling plans on the basis of RMSE for varying levels of machine imperfection (see Graphs 15A-15D). (Footnote 12: The RMSE values for the One-Sampling plan as well as the Three-Sampling plan are based on theoretical expressions.) In these graphs, we plot the RMSE against μ. Intuitively, one might feel that if the sample is screened thrice, more information is available and one should obtain a lower RMSE; and since one is sampling more, the cost of sampling thrice should be higher. But the results are contrary to intuition. When the inspection procedure is perfect, the RMSE is exactly the same for both sampling plans, which is to be expected. As the inspection procedure becomes imperfect, Sampling Plan A performs better than Sampling Plan B. As described in footnote 11, the reason is that screening a sample thrice is equivalent to screening it once but with a more perfect inspection procedure. Since the probability of falsely declaring a non-defective unit defective in the initial sample is smaller for Sampling Plan B, fewer lots are completely screened, leading to a larger number of undetected defectives and thus to a higher RMSE. The result holds for all except the highest values of μ when the inspection procedure is imperfect.

Next we compare Sampling Plans A and B on the basis of the cost functions (Cost1 vs. Cost3) derived in sections 3.1 and 4.2 respectively. The comparison is made for the following values of c1:c2:c3 (see footnote 9):

a) (c1:c2:c3) = (1:10:20) (see Graphs 16A-16D)
b) (c1:c2:c3) = (1:100:500) (see Graphs 17A-17D)

In these graphs, we plot the ratio Cost1/Cost3 against μ. The graphs are based on varying levels of machine imperfection. For a given level of machine imperfection, the shapes of the graphs for both sets of values of c1:c2:c3 are similar. As expected, Cost1 is smaller than Cost3 when the machine is perfect or near perfect. As the machine becomes more imperfect, Cost3 performs substantially better than Cost1. For a highly imperfect machine, the difference in the cost performance of Sampling Plans A and B is minimal, since most of the lots will be completely screened anyway.

Chapter 5: Concluding Comments and Future Research.

In acceptance sampling with rectification (henceforth, ASWR), estimators for the number of undetected defectives remaining in outgoing lots have been developed under the restrictive assumption that the inspection procedure is perfect. This assumption, however, is not practical. In this dissertation we examine the role of ASWR when the inspection procedure is imperfect, that is, when a defective unit can be declared non-defective, and vice versa. To take machine imperfection into account, Greenberg and Stokes (henceforth, G&S) (1996) have recently proposed an adjustment to their (1992) estimator of the number of undetected defectives remaining in outgoing lots.

We consider a zero-defect rectification sampling plan in chapter 2, in which a random sample of size m is inspected from each lot under consideration. If no defectives are found in the sample, the lot is accepted. If at least one defective is found, the entire lot is screened; the defectives found are discarded or replaced, and the lot is then accepted. The model we consider is a modification of the G&S (1992) model. We discuss the G&S (1992, 1996) estimators, ÛGS,1 and ÛGS,2, and then present two new estimators, Ûnew,1 and Ûnew,2. We then compare the performance of the estimators against each other. The comparisons are done on the basis of Root Mean Squared Error (RMSE) for different combinations of parameter values. The resulting RMSEs are plotted in a series of graphs to facilitate comparisons. For large lot/sample sizes we have to resort to Monte Carlo simulation due to the large number of calculations involved. Different levels of machine imperfection are considered.

When the inspection procedure is perfect, ÛGS,2 reduces to ÛGS,1. The RMSE performance of Ûnew,1 is not as good as that of ÛGS,1 or ÛGS,2 in this particular case. However, even with slight machine imperfection, its performance becomes substantially better than that of the G&S estimators. Ûnew,2, on the other hand, has a lower RMSE than the other three estimators at all levels of machine imperfection; its MSE is an order of magnitude lower than that of the existing estimators.

Since Ûnew,2 performs better than the other estimators under all circumstances, we study its performance at different levels of machine imperfection as the proportion of defectives per lot, μ, varies. We discuss the behavior of the RMSE with varying levels of machine imperfection. We study the effect on bias and RMSE of a change in one of the misclassification error probabilities when the other is held constant. Bias seems to increase with an increase in either p or p'.
For a given level of p (the probability that a defective unit is declared defective), RMSE usually decreases with increasing p' (the probability that a non-defective unit is declared defective). Also, for a given p', an increase in p leads to a decrease in RMSE.

In chapter 3, we compute an expected cost function for the sampling plan introduced in chapter 2. The components of the expected cost function are the cost of sampling, the cost of shipping defectives, and the cost associated with labeling non-defectives as defectives. The cost function is based on population parameters which need to be estimated. We propose a sequential adaptive sampling plan in order to lower the expected cost as much as possible. We estimate the population parameters by sampling a fraction of the lots. These estimates are then used to determine the sample size, m, which leads to the minimum expected cost over the next few lots. We sequentially adjust the sample size as we get more data, thereby reducing the cost associated with the sampling plan.

We then generalize this expected cost function to an acceptance sampling plan in which the sample is inspected y times instead of only once in every lot. A unit is declared defective if it fails at least integer(y/2 + 1) times. The lot is accepted if no defectives are found. If at least one defective is found, the entire lot is screened and all the defectives removed. On the basis of this general cost function, we find the optimal values of the sample size, m, and the number of times the sample should be screened, y. The objective is to find those values of m and y which minimize the expected cost globally, given the values of the different cost components.

Next, we propose a measure which accounts for MSE and expected cost simultaneously. This is done because RMSE comparisons across different levels of machine imperfection might suggest that one can improve RMSE performance by using a less efficient screening procedure. Thus, in order to make proper efficiency comparisons, the manufacturer should consider both MSE and expected cost.

In chapter 4, we compare the sampling plan presented in chapter 2 with a zero-defect sampling plan in which the sample is screened thrice, that is, y = 3. This sampling plan is especially important if p and p' are not known, since it is necessary to inspect some items at least thrice in order to estimate p and p' (see Johnson et al., 1991). The estimators Ûnew,1 and Ûnew,2 can easily be modified for this sampling plan by making adjustments to the error probabilities. The RMSE comparison for the sampling plans is done on the basis of our best estimator, Ûnew,2. Finally, the plans are compared on the basis of the expected cost functions. Contrary to intuition, screening the sample three times instead of once increases the RMSE. However, it usually reduces the overall expected cost. This happens because screening the initial sample three times is equivalent to screening it once but with an inspection procedure that is more perfect. Therefore, there is a tradeoff between cost and RMSE for the two sampling plans.

As part of future research, the proposed estimators can be extended to the case of c-defect acceptance sampling. A lot is shipped without further inspection if c or fewer defectives are found in the initial sample. If more than c units are declared defective in the sample, then the lot is rejected and screened completely; it is ready for shipment after the defectives are either discarded or replaced.
Monte Carlo simulations can be used for RMSE comparisons of the estimators. The estimators can be further modified for situations where lot/sample sizes and/or acceptance numbers vary from lot to lot. The expected cost function can also be modified for these situations, and the adaptive sampling plan changed accordingly. Also, the empirical Bayes estimator is based on the beta-binomial model; it can easily be computed for other models, such as the gamma-Poisson.

In this dissertation, the misclassification error probabilities p and p' are assumed to be known. In many practical situations, p and p' will be unknown, so that one would not be able to compute the proposed estimators. However, estimates of p and p' can be substituted in the estimators. Blischke (1964) proposed several estimators for the misclassification error probabilities. Estimation of the probabilities p and p' has also been considered by Johnson et al. (1991). Since the computational burden is insignificant these days, maximum likelihood estimators can also be computed (see Greenberg and Stokes, 1996).

Graphs

Graphs 1A-1D compare the RMSE performance of Ûnew,2 (a, b known) and Ûnew,2 when n = 5000, m = 125, π = 0.1, and ρ = 0.3. The RMSE for Ûnew,2 (a, b known) is computed analytically, and the Ûnew,2 results are obtained via simulation. (RMSE is plotted against μ; panels at p = 1, p' = 0; p = 0.9999, p' = 0.0001; p = 0.99, p' = 0.01; p = 0.97, p' = 0.03.)

Graphs 2A-2D compare the RMSE performance of Ûnew,2 (a, b known) and Ûnew,2 when n = 5000, m = 125, π = 0.1, and μ = 0.1. The RMSE for Ûnew,2 (a, b known) is computed analytically, and the Ûnew,2 results are obtained via simulation. (RMSE is plotted against ρ; panels at the same four levels of p and p'.)

Graphs 3A-3D compare the RMSE performance of Ûnew,2 (a, b known) and Ûnew,2 when n = 5000, m = 125, ρ = 0.3, and μ = 0.1. The RMSE for Ûnew,2 (a, b known) is computed analytically, and the Ûnew,2 results are obtained via simulation. (RMSE is plotted against π; panels at p = 1, p' = 0; p = 0.999, p' = 0.001; p = 0.99, p' = 0.01; p = 0.97, p' = 0.03.)

Graphs 4A-4D compare the G&S (1992, 1996) estimators with the proposed estimators across various levels of μ. The values considered for the various parameters are n = 5000, m = 125, T = 300, ρ = 0.3, π = 0.1.
Graphs 5A-5D compare the G&S (1992, 1996) estimators with the proposed estimators across various levels of ρ. The values considered for the various parameters are n = 5000, m = 125, T = 300, μ = 0.1, π = 0.1. (Panels at p = 1, p' = 0; p = 0.9999, p' = 0.0001; p = 0.99, p' = 0.01; p = 0.97, p' = 0.03.)

Graphs 6A-6D compare the G&S (1992, 1996) estimators with the proposed estimators across various levels of π. The values considered for the various parameters are n = 5000, m = 125, T = 300, μ = 0.1, ρ = 0.3. (Panels at p = 1, p' = 0; p = 0.999, p' = 0.001; p = 0.99, p' = 0.01; p = 0.97, p' = 0.03.)

Graphs 7 and 8 plot the RMSE against μ and π, respectively, for different levels of machine imperfection (p, p' = (1, 0), (0.999, 0.001), (0.99, 0.01), (0.97, 0.03)); n = 5000, m = 125, T = 300, ρ = 0.3, with π = 0.1 for Graph 7 and μ = 0.1 for Graph 8.
Graphs 9A and 9B plot the relative bias of Ûnew,1 and Ûnew,2, respectively, against p' for several values of p. (Graph 9A: n = 15, m = 3, μ = 0.1, ρ = 0.3, π = 0.1, based on the theoretical bias; Graph 9B: n = 5000, m = 125, μ = 0.1, ρ = 0.3, π = 0.1.)

Graphs 10A and 10B plot the change in RMSE of Ûnew,1 and Ûnew,2 due to a change in either p or p'.

Graphs 11A-11C compare the performance of the expected cost function where the population parameters are known (referred to as Cost_Known) with the one in which the population parameters have to be estimated (referred to as Cost_Unknown). The ratio Cost_Unknown/Cost_Known is plotted against μ. (Panels: 11A, c1/c2/c3 = 1/100/500 with p = 0.999, p' = 0.001; 11B, c1/c2/c3 = 0.1/100/500 with p = 0.999, p' = 0.001; 11C, c1/c2/c3 = 1/100/500 with p = 0.99, p' = 0.01; in each case π = 0.1, ρ = 0.3, and m = 1, 125, 250, 500, 1000.)

Graph 12A plots the expected cost per lot across various levels of machine imperfection (c1/c2/c3 = 1/100/500, n = 5000, m = 125, T = 300, ρ = 0.3, π = 0.1). Since the values for p = 0.97 and p' = 0.03 are very high compared to the rest, they are plotted on the secondary axis. Graph 12B presents the percentage savings obtained by using the adaptive sampling methodology (same parameter values).

Graphs 13A-13D plot the expected cost against sample size m for different values of y (the number of times the sample is tested). (Panels: 13A, c1/c2/c3 = 1/100/500 with p = 0.999, p' = 0.001; 13B, 1/500/100 with p = 0.999, p' = 0.001; 13C, 1/10/20 with p = 0.999, p' = 0.001; 13D, 1/500/100 with p = 0.99, p' = 0.01; in all panels n = 5000, μ = 0.1, ρ = 0.3, π = 0.1, and y = 1, ..., 7.)
Graph 14 plots the product (MSE * Expected Cost) against various levels of μ (n = 5000, m = 125, ρ = 0.3, π = 0.1), for p, p' = (1, 0), (0.999, 0.001), (0.99, 0.01), (0.97, 0.03).

Graphs 15A-15D plot the RMSE against μ for Sampling Plans A and B (screening the sample once vs. thrice), at p, p' = (1, 0), (0.9999, 0.0001), (0.99, 0.01), (0.97, 0.03).

Graphs 16A-16D plot the ratio Cost1/Cost3 against μ. The ratio taken for c1:c2:c3 is 1:10:20. (Panels at p, p' = (1, 0), (0.9999, 0.0001), (0.99, 0.01), (0.97, 0.03).)

Graphs 17A-17D plot the ratio Cost1/Cost3 against μ. The ratio taken for c1:c2:c3 is 1:100:500.

Appendices

\[ P(\omega=0\mid Y_{i1}>0) = \frac{f(Y_{i1}>0\mid\omega=0)\,f(\omega=0)}{f(Y_{i1}>0)} = \frac{\big(1-(1-p')^{m}\big)(1-\pi)}{1-\left[(1-p')^{m}(1-\pi) + \pi\displaystyle\int_0^1 (1-\theta)^{m}\,\frac{\omega^{a-1}(1-\omega)^{b-1}}{\mathrm{Beta}(a,b)}\,d\omega\right]}. \]

Appendix A1.3: Roots of the quadratic equation.

The quadratic equation (1e) yields the following two roots:

\[ \hat\pi_1 = \frac{-X-\sqrt{X^{2}-4YZ}}{2Y}, \qquad \hat\pi_2 = \frac{-X+\sqrt{X^{2}-4YZ}}{2Y}. \]

While π̂1 always lies between 0 and 1, π̂2 always lies outside this interval. Note that for π̂1 to lie in the interval (0, 1), and for π̂2 to lie outside it, the following two conditions have to be satisfied:

\[ Y\,Z > 0, \qquad -Z \le (X+Y). \]

Conversely, for π̂2 to lie in the interval (0, 1), and for π̂1 to lie outside it, the following two conditions have to be satisfied:

\[ Y\,Z < 0, \qquad -Z > (X+Y). \]

Appendix A2.1: Bias and MSE of the Bayes estimator Ûnew,2 (a, b known).

Bias calculations:

\[ \mathrm{Bias} = E(\hat U_i - U_i) = E\big\{E(\hat U_i - U_i \mid \omega)\big\}, \qquad E(\hat U_i - U_i \mid \omega) = E(\hat U_i\mid\omega) - E(U_i\mid\omega), \]

where U_i = D_i - Y_{id}\,I(Y_{i1}>0) and K represents E(D_i | Y_{i1} = 0).
MSE for Ûnew,2 (a, b known):

\[ \mathrm{MSE} = E(\hat U_i - U_i)^2 = E(\hat U_i^2) + E(U_i^2) - 2E(\hat U_i U_i) = E\{E(\hat U_i^2\mid\omega)\} + E\{E(U_i^2\mid\omega)\} - 2E\{E(\hat U_i U_i\mid\omega)\}, \]

where

\[ E(U_i^2\mid\omega) = E\big[(D_i - I(Y_{i1}>0)\,Y_{id})^2\mid\omega\big] = E(D_i^2\mid\omega) + E\big(I(Y_{i1}>0)\,Y_{id}^2\mid\omega\big) - 2E\big(D_i\,I(Y_{i1}>0)\,Y_{id}\mid\omega\big). \]

The conditional moments E(Y_{id}^2 I(Y_{i1}>0) | ω), E(Y_{id} D_i I(Y_{i1}>0) | ω), E(Û_i^2 | ω) and E(Û_i U_i | ω) entering these expressions are evaluated in closed form in terms of n, m, ω, p, p' and K.

Appendix A2.2: Bias and MSE of the estimators ÛGS,1, ÛGS,2, and Ûnew,1.

Bias calculations:

\[ E(\hat U_i) = \sum_{u=0}^{n} P(\hat U_i = u)\,u, \]

where

\[ P(\hat U_i = u\mid\omega) = \sum_{D_{i1}=0}^{m}\sum_{D_{i2}=0}^{n-m}\Big[ P(\hat U_i = u\mid\omega, D_{i1}, D_{i2}, Y_{i1}>0)\,P(Y_{i1}>0\mid\omega, D_{i1}, D_{i2}) + P(\hat U_i = u\mid\omega, D_{i1}, D_{i2}, Y_{i1}=0)\,P(Y_{i1}=0\mid\omega, D_{i1}, D_{i2})\Big]\,P(D_{i1}, D_{i2}\mid\omega). \]

In the expression for E(Ûi), Û may be taken to be ÛGS,1, ÛGS,2 or Ûnew,1. The required probabilities are obtained from the joint probabilities P(Y_{i1} = k_1, Y_{i2} = k_2), computed by conditioning on ω, D_{i1} and D_{i2}; the resulting expressions involve the factor

\[ X_i = \frac{\mathrm{Beta}(D_{i1}+D_{i2}+a,\; n-D_{i1}-D_{i2}+b)}{\mathrm{Beta}(a,b)}. \]

MSE calculations:

\[ \mathrm{MSE} = E(\hat U_i - U_i)^2 = E(\hat U_i^2) + E(U_i^2) - 2E(\hat U_i U_i), \qquad E(\hat U_i^2) = \sum_{u=0}^{n} P(\hat U_i = u)\,u^2, \]
\[ E(\hat U_i U_i) = \sum_{k_1=1}^{m}\sum_{k_2=0}^{n-m}\sum_{u=0}^{n} P(Y_{i1}=k_1,\, Y_{i2}=k_2,\, U_i = u)\,\hat U_i\,u. \]

For a set of T lots, the Mean Squared Error is calculated as follows:

\[ \mathrm{MSE} = E(\hat U - U)^2 = E(\hat U^2) + E(U^2) - 2E(\hat U U), \]
\[ E(\hat U^2) = T\,E(\hat U_i^2) + T(T-1)\,E^2(\hat U_i), \qquad E(U^2) = T\,E(U_i^2) + T(T-1)\,E^2(U_i), \qquad E(\hat U U) = T\,E(\hat U_i U_i) + T(T-1)\,E(\hat U_i)\,E(U_j). \]

Note that in the expression for the MSE, U represents the actual number of undetected defects remaining in the T lots, and Û represents the estimated number of undetected defects remaining in the T lots.

Appendix A3: Notation summary.

ωi = the proportion of defectives in lot i; it varies from lot to lot.
Ui = the number of undetected defectives in lot i.
Di1 = the number of actual defectives in the sample of size m chosen from lot i.
Di2 = the number of actual defectives in the remaining n - m units of lot i.
Di = Di1 + Di2, the total number of actual defectives in lot i.
Yi1 = the number of declared defectives in the sample of lot i, as declared by the imperfect machine.
Yi2 = the number of declared defectives in the remaining n - m units of lot i, as declared by the imperfect machine (Yi2 = 0 when Yi1 = 0).
Yi = the number of declared defectives in lot i if the entire lot is inspected; thus Yi = Yi1 + Yi2 (if there are no inspection errors, Yi = Di).
Yin = the number of non-defectives declared defective.
Yid = the number of defectives declared defective (thus Yi = Yin + Yid).
This notation may conveniently be displayed as:

                              DECLARED
                     Non-defective     Defective     Total
  TRUE  Non-defective  n - Di - Yin      Yin          n - Di
        Defective      Di - Yid          Yid          Di
        Total          n - Yi            Yi           n

References:

Brush, G.G., Hoadley, B., and Saperstein, B. (1990), "Estimating Outgoing Quality Using the Quality Measurement Plan," Technometrics, 32, 31-41.

Blischke, W.R. (1964), "Estimating the Parameters of Mixtures of Binomial Distributions," Journal of the American Statistical Association, 510-527.

Cochran, W.G. (1977), Sampling Techniques (3rd ed.), New York: John Wiley.

Spanos, Costas J. (1989), "Statistical Significance of Error-Corrupted IC Measurements," IEEE Transactions on Semiconductor Manufacturing, 2, 23-28.

Cowden, Dudley J. (1957), Statistical Methods in Quality Control, Englewood Cliffs, NJ: Prentice-Hall.

Greenberg, B.S. and Stokes, S.L. (1992), "Estimating Nonconformance Rates After Zero Defect Sampling with Rectification," Technometrics, 34, 203-213.

Greenberg, B.S. and Stokes, S.L. (1996), Working paper, University of Texas at Austin.

Hahn, G.J. (1986), "Estimating the Percent Nonconforming in the Accepted Product After Zero Defect Sampling," Journal of Quality Technology, 18, 182-188.

Johnson, N.L., Kotz, S. and Wu, X. (1991), Inspection Errors for Attributes in Quality Control, London: Chapman & Hall.

Lindsay, B.G. (1985), "Errors in Inspection: Integer Parameter Maximum Likelihood in a Finite Population," Journal of the American Statistical Association, 80, 879-885.

Martz, H.F., and Zimmer, W.J. (1990), "A Nonparametric Bayes Empirical Bayes Procedure for Estimating the Percent Nonconforming in Accepted Lots," Journal of Quality Technology, 22, 95-104.

Schilling, E.G. (1982), Acceptance Sampling in Quality Control, Marcel Dekker, Inc.

Tenenbein, Aaron (1970), "A Double Sampling Scheme for Estimating from Binomial Data with Misclassifications," Journal of the American Statistical Association, 65, 1350-1361.

Salvendy, Gavriel (1982), Handbook of Industrial Engineering, Wiley-Interscience, Chapter 8.

Vander Wiel, Scott A. and Vardeman, Stephen B. (1994), "A Discussion of All-or-None Inspection Policies," Technometrics, 36, 102-108.

Zaslavsky, A. (1988), "Estimating Defective Rates in c-Defect Sampling," Journal of Quality Technology, 20, 248-259.

This digitized document does not include the vita page from the original.