STAT 440: Design and Analysis of Experiments
4. Simultaneous Inferences
The inferences covered so far concern a single quantity of interest; e.g., the $100(1-\alpha)\%$ C.I. for $\mu_i - \mu_j$, $i \ne j$, is
$$\bar{y}_{i\cdot} - \bar{y}_{j\cdot} \pm t_{1-\alpha/2,\,n_T-r}\sqrt{MSE\left(\tfrac{1}{n_i}+\tfrac{1}{n_j}\right)}$$
This inference has significance level exactly $\alpha$: the error rate is $\alpha$ and the correct (coverage) rate is $1-\alpha$.
However, in practice we are most often interested in simultaneous inferences, i.e., we might be interested in all possible pairwise comparisons at once.
For the cutting method data set, $r = 3$, so there are $\binom{3}{2} = 3$ possible pairwise comparisons:
$$\mu_1 - \mu_2, \quad \mu_1 - \mu_3, \quad \mu_2 - \mu_3$$
If the individual error rate (significance level) is $\alpha = 0.05$ and the three tests were independent, the overall significance level would be $1 - (1-\alpha)^3 = 1 - (1-0.05)^3 = 0.1426 \gg 0.05$.
In general, with $r$ level means there are $g = \binom{r}{2}$ pairwise comparisons, and the overall significance level $1 - (1-\alpha)^{g} \gg \alpha$.
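To see how fast the overall error rate inflates with the number of comparisons, here is a minimal Python sketch (assuming independent tests, as in the calculation above; the function name is illustrative):

```python
# Familywise error rate when g independent tests are each run at level alpha:
# P(at least one false rejection) = 1 - (1 - alpha)^g
def familywise_error(alpha, g):
    return 1 - (1 - alpha) ** g

print(familywise_error(0.05, 3))  # ~0.1426, the cutting-data case (g = 3)
print(familywise_error(0.05, 6))  # grows quickly as g increases
```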
To adjust for this, many multiple comparison procedures have been proposed. The idea is to widen the C.I.'s (equivalently, shrink the rejection regions) by adjusting the critical values.
Why is this needed?
- Often, investigators consider a large number of unplanned comparisons (e.g., all pairwise comparisons among treatments), and individual-level intervals/tests do not control the error rate over such a family.
We begin with a linear combination of the treatment means:
$$L = \sum_{i=1}^{r} c_i \mu_i$$
Note that contrasts are the special case of $L$ with $\sum_{i=1}^{r} c_i = 0$, and the pairwise differences $\mu_i - \mu_j$ are special cases of contrasts; hence both are linear combinations.
Point estimate of $L$: $\hat{L} = \sum_{i=1}^{r} c_i \hat{\mu}_i = \sum_{i=1}^{r} c_i \bar{y}_{i\cdot}$
$$\sigma^2\{\hat{L}\} = \operatorname{Var}(\hat{L}) = \sum_{i=1}^{r} c_i^2\,\frac{\sigma^2}{n_i} = \sigma^2 \sum_{i=1}^{r} \frac{c_i^2}{n_i}, \qquad s^2\{\hat{L}\} = MSE \sum_{i=1}^{r} \frac{c_i^2}{n_i}$$
Result:
(i) The $100(1-\alpha)\%$ C.I. for $L = \sum_{i=1}^{r} c_i \mu_i$ is
$$\hat{L} \pm t_{1-\alpha/2,\,n_T-r}\, s\{\hat{L}\} = \hat{L} \pm t_{1-\alpha/2,\,n_T-r}\sqrt{MSE \sum_{i=1}^{r} c_i^2/n_i}$$
(ii) Test $H_0: L = 0$ vs $H_a: L \ne 0$.
(iii) Test statistic:
$$t^* = \frac{\hat{L} - 0}{\sqrt{MSE \sum_{i=1}^{r} c_i^2/n_i}} = \frac{\hat{L}}{s\{\hat{L}\}} \sim t(n_T - r) \text{ under } H_0$$
(iv) Rejection region: $t^* < -t_{1-\alpha/2,\,n_T-r}$ or $t^* > t_{1-\alpha/2,\,n_T-r}$.
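The result above can be wrapped in a small helper (a sketch; scipy is assumed, `contrast_ci` is an illustrative name, and the numbers below plug in the cutting-method summary statistics from the notes):

```python
import math
from scipy import stats

def contrast_ci(ybar, n, mse, c, alpha=0.05):
    """Unadjusted 100(1-alpha)% CI and t statistic for L = sum(c_i * mu_i)."""
    r, nT = len(ybar), sum(n)
    Lhat = sum(ci * yi for ci, yi in zip(c, ybar))
    s_L = math.sqrt(mse * sum(ci**2 / ni for ci, ni in zip(c, n)))
    t_crit = stats.t.ppf(1 - alpha / 2, nT - r)
    return Lhat, (Lhat - t_crit * s_L, Lhat + t_crit * s_L), Lhat / s_L

# Cutting-method data from the notes: means 7.4, 26, 35.4; n_i = 4; MSE = 3.85
Lhat, ci, tstar = contrast_ci([7.4, 26.0, 35.4], [4, 4, 4], 3.85, [1, -1, 0])
print(Lhat, ci, tstar)  # individual (unadjusted) inference on mu1 - mu2
```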
I. Bonferroni Multiple Comparison Procedure:
- Need to know the number of comparisons, say $g$, in advance.
- Use $\alpha/(2g)$ instead of $\alpha/2$ in each test / interval.
- Conservative, i.e., the overall Type I error rate (significance level) is $< \alpha$.
- Sample sizes need not be equal.
Result:
(1) The $100(1-\alpha)\%$ Bonferroni simultaneous C.I.'s for $g$ contrasts are $\hat{L} \pm B\, s\{\hat{L}\}$,
where $B = t_{1-\alpha/(2g),\,n_T-r}$, $\hat{L} = \sum_{i=1}^{r} c_i \bar{y}_{i\cdot}$, and $s\{\hat{L}\} = \sqrt{MSE \sum_{i=1}^{r} c_i^2/n_i}$.
(2) Test $H_0: L = 0$ vs $H_a: L \ne 0$:
$$t^* = \frac{\hat{L} - 0}{\sqrt{MSE \sum_{i=1}^{r} c_i^2/n_i}} = \frac{\hat{L}}{s\{\hat{L}\}} \sim t(n_T - r) \text{ under } H_0$$
Reject $H_0$ if $t^* > t_{1-\alpha/(2g),\,n_T-r}$ or $t^* < -t_{1-\alpha/(2g),\,n_T-r}$.
Example: Cutting method data: 95% Bonferroni simultaneous C.I.'s for $\mu_1-\mu_2$, $\mu_1-\mu_3$, $\mu_2-\mu_3$.
Sol: $g = 3$, $\alpha = 0.05$,
$$B = t_{1-\alpha/(2g),\,n_T-r} = t_{1-0.05/6,\,9} = t_{0.9917,\,9} = 2.933$$
The 95% Bonferroni simultaneous C.I.'s for the multiple comparisons are:
For $\mu_1-\mu_2$: $\bar{y}_{1\cdot}-\bar{y}_{2\cdot} \pm B\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_2})} = 7.4 - 26 \pm 2.933\sqrt{3.85(\tfrac14+\tfrac14)} = -18.6 \pm 4.07 \to (-22.67, -14.53)$
For $\mu_1-\mu_3$: $\bar{y}_{1\cdot}-\bar{y}_{3\cdot} \pm B\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_3})} = 7.4 - 35.4 \pm 2.933\sqrt{3.85(\tfrac14+\tfrac14)} = -28 \pm 4.07 \to (-32.07, -23.93)$
For $\mu_2-\mu_3$: $\bar{y}_{2\cdot}-\bar{y}_{3\cdot} \pm B\sqrt{MSE(\tfrac{1}{n_2}+\tfrac{1}{n_3})} = 26 - 35.4 \pm 2.933\sqrt{3.85(\tfrac14+\tfrac14)} = -9.4 \pm 4.07 \to (-13.47, -5.33)$
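A sketch reproducing the Bonferroni intervals above (scipy is assumed; the inputs are the cutting-method summary values given in the notes):

```python
import math
from itertools import combinations
from scipy import stats

# Cutting-method summary data from the notes
ybar = {1: 7.4, 2: 26.0, 3: 35.4}
n, mse, r, nT, alpha = 4, 3.85, 3, 12, 0.05

g = len(list(combinations(ybar, 2)))          # 3 pairwise comparisons
B = stats.t.ppf(1 - alpha / (2 * g), nT - r)  # Bonferroni critical value, ~2.933
hw = B * math.sqrt(mse * (1 / n + 1 / n))     # common half width, ~4.07

for i, j in combinations(sorted(ybar), 2):
    d = ybar[i] - ybar[j]
    print(f"mu{i}-mu{j}: ({d - hw:.2f}, {d + hw:.2f})")
```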
II. Scheffé Multiple Comparison Procedure
- Works for all possible contrasts (tests/intervals).
- Most conservative; relatively easy.
- Use $S = \sqrt{(r-1)F_{1-\alpha,\,r-1,\,n_T-r}}$ in place of $t_{1-\alpha/2,\,n_T-r}$.
- Works for both equal and unequal sample sizes.
- The overall significance level (over the family of all contrasts) is exactly $\alpha$.
Result: for $L = \sum_{i=1}^{r} c_i \mu_i$,
(1) The $100(1-\alpha)\%$ Scheffé simultaneous C.I. for $L$ is $\hat{L} \pm S\, s\{\hat{L}\}$,
where $S = \sqrt{(r-1)F_{1-\alpha,\,r-1,\,n_T-r}}$, $\hat{L} = \sum_{i=1}^{r} c_i \bar{y}_{i\cdot}$, and $s\{\hat{L}\} = \sqrt{MSE \sum_{i=1}^{r} c_i^2/n_i}$.
(2) Test $H_0: L = 0$ vs $H_a: L \ne 0$:
$$F^* = \frac{\hat{L}^2}{(r-1)\,s^2\{\hat{L}\}} \sim F(r-1,\,n_T-r) \text{ under } H_0$$
Reject $H_0$ if $F^* > F_{1-\alpha,\,r-1,\,n_T-r}$; fail to reject $H_0$ if $F^* \le F_{1-\alpha,\,r-1,\,n_T-r}$.
Example: Cutting method data:
The 95% Scheffé simultaneous C.I.'s for $\mu_1-\mu_2$, $\mu_1-\mu_3$, $\mu_2-\mu_3$ are built from
$$S = \sqrt{(r-1)F_{1-\alpha,\,r-1,\,n_T-r}} = \sqrt{(3-1)F_{0.95,\,2,\,9}} = \sqrt{(3-1) \times 4.26} = 2.9189, \qquad F_{0.95,\,2,\,9} = 4.26$$
For $\mu_1-\mu_2$:
$\bar{y}_{1\cdot}-\bar{y}_{2\cdot} \pm S\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_2})} = 7.4 - 26 \pm 2.9189\sqrt{3.85(\tfrac14+\tfrac14)} = -18.6 \pm 4.049 \to (-22.65, -14.55)$
For $\mu_1-\mu_3$:
$\bar{y}_{1\cdot}-\bar{y}_{3\cdot} \pm S\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_3})} = 7.4 - 35.4 \pm 2.9189\sqrt{3.85(\tfrac14+\tfrac14)} = -28 \pm 4.049 \to (-32.05, -23.95)$
For $\mu_2-\mu_3$:
$\bar{y}_{2\cdot}-\bar{y}_{3\cdot} \pm S\sqrt{MSE(\tfrac{1}{n_2}+\tfrac{1}{n_3})} = 26 - 35.4 \pm 2.9189\sqrt{3.85(\tfrac14+\tfrac14)} = -9.4 \pm 4.049 \to (-13.45, -5.35)$
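The Scheffé multiplier and common half width above can be checked with a short sketch (scipy is assumed; small differences from 2.9189 come from rounding $F$ to 4.26 in the notes):

```python
import math
from scipy import stats

# Scheffe multiplier for the cutting-method data (r = 3, n_T - r = 9)
r, nT, alpha, mse, n = 3, 12, 0.05, 3.85, 4
S = math.sqrt((r - 1) * stats.f.ppf(1 - alpha, r - 1, nT - r))  # ~2.918
hw = S * math.sqrt(mse * (1 / n + 1 / n))                       # ~4.05
print(S, hw)

# e.g. interval for mu1 - mu2: (7.4 - 26) +/- hw
print(7.4 - 26 - hw, 7.4 - 26 + hw)
```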
Simultaneous test on $L = \mu_2 - \frac{\mu_1 + \mu_3}{2}$:
Test $H_0: L = 0$ vs $H_a: L \ne 0$.
$$\hat{L} = \bar{y}_{2\cdot} - \frac{\bar{y}_{1\cdot} + \bar{y}_{3\cdot}}{2} = 26 - \frac{7.4 + 35.4}{2} = 4.6$$
$$s^2\{\hat{L}\} = MSE\left(\frac{(-\tfrac12)^2}{4} + \frac{1^2}{4} + \frac{(-\tfrac12)^2}{4}\right) = 3.85 \times 0.375 = 1.44375$$
$$F^* = \frac{\hat{L}^2}{(r-1)\,s^2\{\hat{L}\}} = \frac{4.6^2/1.44375}{3-1} = \frac{14.656}{2} = 7.328$$
Under $H_0$, $F^* \sim F(r-1,\,n_T-r)$, and
$$F_{1-\alpha,\,r-1,\,n_T-r} = F_{1-0.05,\,3-1,\,12-3} = F_{0.95,\,2,\,9} = 4.26$$
Since $F^* = 7.328 > 4.26$, reject $H_0$.
But if $\alpha = 0.01$, then $F_{1-\alpha,\,r-1,\,n_T-r} = F_{0.99,\,2,\,9} = 8.02$, and $F^* = 7.328 < 8.02$: fail to reject $H_0$.
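The F-test above, as a sketch (scipy is assumed; variable names are illustrative):

```python
from scipy import stats

# Scheffe F-test for the contrast L = mu2 - (mu1 + mu3)/2, cutting-method data
ybar = [7.4, 26.0, 35.4]
c = [-0.5, 1.0, -0.5]
n, mse, r = 4, 3.85, 3

Lhat = sum(ci * yi for ci, yi in zip(c, ybar))   # 4.6
s2_L = mse * sum(ci**2 / n for ci in c)          # 1.44375
Fstar = Lhat**2 / ((r - 1) * s2_L)               # ~7.328
print(Fstar, stats.f.ppf(0.95, 2, 9), stats.f.ppf(0.99, 2, 9))
```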
III. Tukey Multiple Comparison Procedure
- Deals with pairwise comparisons only.
- An exact solution for pairwise comparisons: the overall error rate (significance level) over all $\binom{r}{2}$ possible comparisons is exactly $\alpha$.
- Assumes equal sample sizes, $n_1 = n_2 = \cdots = n_r = n$, but it is conservative for unequal sample sizes, i.e., the overall significance level is $< \alpha$ if the sample sizes are unequal, which is a good thing.
- Based on the studentized range distribution.
- Use $T = \frac{1}{\sqrt{2}}\, q_{1-\alpha,\,r,\,n_T-r}$, with $q$ from Table B.9.
Result:
(1) The $100(1-\alpha)\%$ Tukey simultaneous C.I.'s for the differences of means $D = \mu_i - \mu_j$ are
$$\hat{D} \pm T\, s\{\hat{D}\}$$
where $T = \frac{1}{\sqrt{2}}\, q_{1-\alpha,\,r,\,n_T-r}$, $\hat{D} = \bar{y}_{i\cdot} - \bar{y}_{j\cdot}$, and
$$\sigma^2\{\hat{D}\} = \sigma^2\left(\frac{1}{n_i}+\frac{1}{n_j}\right), \qquad s^2\{\hat{D}\} = MSE\left(\frac{1}{n_i}+\frac{1}{n_j}\right), \qquad s\{\hat{D}\} = \sqrt{MSE\left(\frac{1}{n_i}+\frac{1}{n_j}\right)}$$
(2) Test $H_0: \mu_i - \mu_j = 0$ vs $H_a: \mu_i - \mu_j \ne 0$.
Test statistic: $q^* = \frac{\sqrt{2}\,\hat{D}}{s\{\hat{D}\}} \sim q(r,\,n_T-r)$ under $H_0$.
Reject $H_0$ if $|q^*| > q_{1-\alpha,\,r,\,n_T-r}$; fail to reject $H_0$ if $|q^*| \le q_{1-\alpha,\,r,\,n_T-r}$ (equivalently, reject when $|\hat{D}|/s\{\hat{D}\} > T$).
Example: Cutting method data:
The 95% Tukey simultaneous C.I.'s for $\mu_1-\mu_2$, $\mu_1-\mu_3$, $\mu_2-\mu_3$ are built from
$$T = \tfrac{1}{\sqrt{2}}\, q_{1-\alpha,\,r,\,n_T-r} = \tfrac{1}{\sqrt{2}}\, q_{0.95,\,3,\,9} = \tfrac{1}{\sqrt{2}} \times 3.95 = 2.79307$$
For $\mu_1-\mu_2$:
$\bar{y}_{1\cdot}-\bar{y}_{2\cdot} \pm T\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_2})} = 7.4 - 26 \pm 2.79307\sqrt{3.85(\tfrac14+\tfrac14)} = -18.6 \pm 3.874 \to (-22.47, -14.73)$
For $\mu_1-\mu_3$:
$\bar{y}_{1\cdot}-\bar{y}_{3\cdot} \pm T\sqrt{MSE(\tfrac{1}{n_1}+\tfrac{1}{n_3})} = 7.4 - 35.4 \pm 2.79307\sqrt{3.85(\tfrac14+\tfrac14)} = -28 \pm 3.874 \to (-31.87, -24.13)$
For $\mu_2-\mu_3$:
$\bar{y}_{2\cdot}-\bar{y}_{3\cdot} \pm T\sqrt{MSE(\tfrac{1}{n_2}+\tfrac{1}{n_3})} = 26 - 35.4 \pm 2.79307\sqrt{3.85(\tfrac14+\tfrac14)} = -9.4 \pm 3.874 \to (-13.27, -5.53)$
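The Tukey multiplier can be computed from `scipy.stats.studentized_range` (available in scipy 1.7+) rather than read from Table B.9; a sketch:

```python
import math
from scipy import stats

# Tukey multiplier via the studentized range distribution
r, nT, alpha, mse, n = 3, 12, 0.05, 3.85, 4
q = stats.studentized_range.ppf(1 - alpha, r, nT - r)  # ~3.95 (Table B.9 value)
T = q / math.sqrt(2)                                   # ~2.793
hw = T * math.sqrt(mse * (1 / n + 1 / n))              # ~3.874
print(q, T, hw)
```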
Summary table for the multiple comparisons of the cutting method data:

Procedure    Critical value   Half width of 95% C.I.
Bonferroni   B = 2.933        4.07
Scheffé      S = 2.9189       4.049
Tukey        T = 2.79307      3.874

Comparison of the three procedures:
For pairwise comparisons, Tukey is superior to the Bonferroni procedure: it attains the exact overall level and gives narrower intervals (half width 3.874 vs 4.07 here).
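The ordering of the three critical values for this pairwise family can be verified directly (a sketch; scipy assumed, values matching the notes' tables up to rounding):

```python
import math
from scipy import stats

# Critical multipliers for the cutting data: r = 3, df = 9, alpha = 0.05, g = 3 pairs
df, g = 9, 3
B = stats.t.ppf(1 - 0.05 / (2 * g), df)                      # Bonferroni
S = math.sqrt(2 * stats.f.ppf(0.95, 2, df))                  # Scheffe
T = stats.studentized_range.ppf(0.95, 3, df) / math.sqrt(2)  # Tukey
print(B, S, T)  # Tukey is smallest, hence narrowest pairwise intervals
```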
5. Sample Size Determination with Estimation Approach
Goal: to approximate the sample size needed to conduct the multiple comparisons of interest. This is a trial-and-error approach; an iterative procedure is sometimes needed.
To determine the sample size, three things are needed:
(1) The acceptable half width of the C.I.'s for the specified contrasts.
(2) The critical values for the multiple inferences: Bonferroni, Scheffé, or Tukey? Which one to choose depends on the situation, just as when choosing a procedure for multiple comparisons:
e.g., number of contrasts known → Bonferroni
number of contrasts unknown → Scheffé
only pairwise comparisons → Tukey, maybe Bonferroni
(3) An estimate of $\sigma$. Be careful! As in all other sample size determination problems, the required sample size can be very sensitive to $\sigma$; trouble occurs if the estimate of $\sigma$ is poor.
Example: A company owning a large fleet of trucks wishes to determine whether 4 different brands of snow tires have the same mean tread life (in thousands of miles).
Suppose that the company is interested in the following, with $\alpha = 0.05$ and $\sigma \approx 2$ according to past experience:
(1) All pairwise comparisons, i.e., $\binom{4}{2} = 6$ differences $\mu_i - \mu_j$, $i \ne j$.
(2) Trends in the mean tread life, via the contrasts
$$L_1 = \frac{\mu_1 + \mu_4}{2} - \frac{\mu_2 + \mu_3}{2}, \qquad L_2 = \frac{\mu_1 + \mu_2 + \mu_4}{3} - \mu_3$$
with $\sigma^2\{\hat{L}\} = \sigma^2 \sum_{i=1}^{r} c_i^2/n_i$.
Assuming equal sample sizes $n_1 = n_2 = n_3 = n_4 = n$, we use the Scheffé multiple comparison procedure:
(1) Half width of the 95% C.I. for $\mu_i - \mu_j$:
$$S\, \sigma\{\bar{y}_{i\cdot}-\bar{y}_{j\cdot}\} = S\sqrt{\sigma^2\left(\frac{1^2}{n}+\frac{(-1)^2}{n}\right)} = \frac{\sqrt{2}\,\sigma S}{\sqrt{n}}$$
where $S = \sqrt{(r-1)F_{1-\alpha,\,r-1,\,n_T-r}} = \sqrt{(4-1)F_{0.95,\,3,\,4n-4}}$ (note that $S$ depends on $n$ through the degrees of freedom).
(2) Half width of the 95% C.I. for $L_1 = \frac{\mu_1 + \mu_4}{2} - \frac{\mu_2 + \mu_3}{2}$:
$$S\sqrt{\sigma^2\left(\frac{(\tfrac12)^2}{n}+\frac{(-\tfrac12)^2}{n}+\frac{(-\tfrac12)^2}{n}+\frac{(\tfrac12)^2}{n}\right)} = \frac{\sigma S}{\sqrt{n}}$$
(3) Half width of the 95% C.I. for $L_2 = \frac{\mu_1 + \mu_2 + \mu_4}{3} - \mu_3$:
$$S\sqrt{\sigma^2\left(\frac{(\tfrac13)^2}{n}+\frac{(\tfrac13)^2}{n}+\frac{(-1)^2}{n}+\frac{(\tfrac13)^2}{n}\right)} = \sqrt{\frac{4}{3}}\,\frac{\sigma S}{\sqrt{n}}$$
Try $n = 10$. Then
$$S = \sqrt{(r-1)F_{1-\alpha,\,r-1,\,n_T-r}} = \sqrt{(4-1)F_{0.95,\,3,\,4\times10-4}} = \sqrt{3 \times F_{0.95,\,3,\,36}} = \sqrt{3 \times 2.87} = 2.93$$
$\mu_i - \mu_j$: $\dfrac{\sqrt{2}\,\sigma S}{\sqrt{n}} = \dfrac{\sqrt{2} \times 2 \times 2.93}{\sqrt{10}} = 2.62$
$L_1 = \frac{\mu_1 + \mu_4}{2} - \frac{\mu_2 + \mu_3}{2}$: $\dfrac{\sigma S}{\sqrt{n}} = \dfrac{2 \times 2.93}{\sqrt{10}} = 1.85$
$L_2 = \frac{\mu_1 + \mu_2 + \mu_4}{3} - \mu_3$: $\sqrt{\dfrac{4}{3}}\,\dfrac{\sigma S}{\sqrt{n}} = \sqrt{\dfrac{4}{3}} \times \dfrac{2 \times 2.93}{\sqrt{10}} = 2.14$
The half widths are all $\le \Delta = 3$, so $n = 10$ suffices.
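The trial at $n = 10$ can be checked with a sketch (scipy is assumed; the `hw_*` names are illustrative):

```python
import math
from scipy import stats

# Scheffe half widths at trial n = 10 (4 brands, sigma ~ 2, alpha = 0.05)
sigma, r, n = 2.0, 4, 10
S = math.sqrt((r - 1) * stats.f.ppf(0.95, r - 1, r * n - r))  # df = 36

hw_pair = math.sqrt(2) * sigma * S / math.sqrt(n)    # pairwise mu_i - mu_j
hw_L1 = sigma * S / math.sqrt(n)                     # L1: sum c_i^2 = 1
hw_L2 = math.sqrt(4 / 3) * sigma * S / math.sqrt(n)  # L2: sum c_i^2 = 4/3
print(hw_pair, hw_L1, hw_L2)  # all <= 3, so n = 10 meets the target half width
```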
There are situations where a design with unequal sample sizes works better. Suppose the company is only interested in comparing $\mu_1 - \mu_4$, $\mu_2 - \mu_4$, $\mu_3 - \mu_4$. If the sample size for brand 4 is bigger than for brands 1, 2, 3, we can increase the precision of these comparisons.
Let $n_1 = n_2 = n_3 = n$ and $n_4 = 2n$, so $n_T = 5n$. The desired precision, with confidence coefficient 0.90, is $\pm 1$. Then the half width for each of these 3 pairs is (Bonferroni, $g = 3$):
$$t_{1-0.10/6,\,5n-4}\,\sigma\sqrt{\frac{1}{n}+\frac{1}{2n}}$$
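The notes stop mid-derivation here; as a sketch of the next step, the Bonferroni half width under this unequal-allocation design can be tabulated as a function of $n$ (scipy is assumed; `half_width` is an illustrative helper, and no claim is made here about the minimal $n$):

```python
import math
from scipy import stats

# Bonferroni half width for mu_i - mu_4 (i = 1, 2, 3) with n_1 = n_2 = n_3 = n,
# n_4 = 2n, sigma ~ 2, overall confidence 0.90 (each interval at alpha/(2g), g = 3)
def half_width(n, sigma=2.0, alpha=0.10, g=3):
    df = 5 * n - 4  # n_T - r with n_T = 3n + 2n and r = 4
    t = stats.t.ppf(1 - alpha / (2 * g), df)
    return t * sigma * math.sqrt(1 / n + 1 / (2 * n))

print(half_width(10), half_width(30))  # shrinks toward the +/-1 target as n grows
```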