SWRK 292

Dr. Hardina

 

CLASS EXERCISE ON INTERPRETING DATA FOR PROGRAM EVALUATION

 

A researcher was asked to conduct an evaluation of a welfare reform demonstration project for the state. The project involved an assessment of the effects of the "100 hour rule". AFDC-U recipients were systematically assigned to either experimental or control groups when they first applied for welfare benefits. Members of the experimental group were told that they could work 100 or more hours per month. Members of the control group were told that they could work a maximum of 99 hours per month.

 

Changes in policies related to financial incentives or regulations that restrict hours of work are believed to increase work effort among welfare recipients (Greenberg, et al., 1995). Previous research on welfare reform initiatives indicates that few of these projects result in statistically significant increases in hours of work or income for recipients or decreases in welfare grants (Friedlander & Burtless, 1995; Gueron & Pauly, 1991).

 

METHODOLOGY

 

Five hundred primary wage earners in these households were interviewed by phone in order to examine the effect of the rule change on work effort (Hardina, Carley, & Thompson, 1995). Respondents were selected randomly from among members of the experimental and control groups. The researchers constructed a 50-page questionnaire; 50 pre-test interviews were conducted and modifications were made to the final instrument.

 

There were no statistically significant differences between the experimental and control groups on most demographic variables: age, gender, disability status, and family size. However, experimental group members (M = 9.77, SD = 4.06) had more years of schooling than control group members (M = 8.64, SD = 4.36). Control group members (39.9%) were more likely to work in agriculture than experimental group members (30.7%).

 

Elimination of the 100 hour rule was expected to produce a positive effect on work effort. Experimental group members were expected to work more hours, earn more income, and have lower grant amounts than members of the control group. Grants are determined by family size and the amount of work income; recipients who work more should receive lower grant amounts. Three hypotheses were addressed:
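The grant calculation described above can be sketched as a simple benefit-reduction formula. The parameter values below (maximum grant, earnings disregard, benefit-reduction rate) are hypothetical illustrations only, not the actual AFDC schedule:

```python
def monthly_grant(max_grant, earned_income, disregard=30.0, reduction_rate=0.67):
    """Hypothetical AFDC-style grant: start at the maximum grant for the
    family size, then reduce it by a fraction of earnings above a disregard.
    All parameter values are illustrative, not the actual AFDC schedule."""
    countable = max(0.0, earned_income - disregard)
    return max(0.0, max_grant - reduction_rate * countable)

# A recipient who works more earns more and receives a smaller grant:
print(monthly_grant(max_grant=600, earned_income=0))    # full grant
print(monthly_grant(max_grant=600, earned_income=400))  # reduced grant
```

Under any schedule of this shape, the grant falls as earned income rises, which is why Hypothesis 3 predicts lower grants for the group expected to work more.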

 

Hypothesis 1: Members of the experimental group will work more hours than members of the control group.

 

Hypothesis 2: Members of the experimental group will earn more income from work than members of the control group.

 

Hypothesis 3: Members of the experimental group will have lower AFDC grants than members of the control group.

 


 

 

During a second phase of the research, focus group interviews were conducted with staff at county DSS offices; these staff members were responsible for implementing policies and procedures related to the 100 hour rule.

 

FINDINGS

 

Hypotheses 1-3 were not confirmed. When looking at all 500 cases, we found that the control group worked more hours per month (M = 58.19, SD = 78.96) than the experimental group (M = 50.75, SD = 71.41). This difference was not statistically significant, t (495) = -1.10, p = .27. The control group (M = $442.25, SD = $657) also earned more income during the previous month than the experimental group (M = $321.69, SD = $500). This difference was statistically significant, t (495) = -2.29, p = .022. Because they earned more income, the average size of welfare grants (controlling for family size) was lower for members of the control group (M = $318.60, SD = $355.92) than the experimental group (M = $355.99, SD = $311.47). This difference was not statistically significant, however, t (488) = -1.24, p = .2.
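The t statistics above can be reproduced (approximately) from just the reported summary statistics. The sketch below assumes roughly equal group sizes of 250, which the text does not state directly but which is implied by df = 495 for 500 cases:

```python
import math

def pooled_t(m1, sd1, n1, m2, sd2, n2):
    """Two-sample t statistic with pooled variance, computed from the
    summary statistics (M, SD, n) reported in the text."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Monthly income, experimental vs. control; group sizes of 250 each are an
# assumption inferred from the reported degrees of freedom:
t = pooled_t(321.69, 500, 250, 442.25, 657, 250)
print(round(t, 2))  # close to the reported t (495) = -2.29
```

The small discrepancy from the published value comes from the assumed group sizes and rounding in the reported means and standard deviations.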

 

Why was the intervention not effective? Control group members (37%) were more likely to have had their benefits discontinued than experimental group members (26%). This difference was statistically significant, χ2 (1, N = 498) = 7.63, p = .01. Eighty percent of the control group members who had gone off welfare reported that they had left AFDC because either the primary or the secondary wage earner in the household was earning income, compared to 67% of the experimental group members. This difference was also statistically significant, χ2 (4, N = 152) = 9.3, p = .05. This suggests that at least some members of the control group were terminated from AFDC when they exceeded 100 hours of work.
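The 2x2 chi-square on benefit discontinuation can be sketched from first principles. The cell counts below are reconstructed from the rounded percentages (37% of an assumed 249 control members, 26% of an assumed 249 experimental members), so the result only approximates the reported value:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]:
    sum over cells of (observed - expected)^2 / expected."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Rows: control, experimental; columns: discontinued, still receiving.
# Counts are approximations from rounded percentages, so the statistic
# only roughly matches the reported chi2 (1, N = 498) = 7.63:
print(round(chi_square_2x2(92, 157, 65, 184), 2))
```

The gap between this approximation and the published 7.63 reflects the information lost when percentages are rounded to whole numbers.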

 

When hours of work for all respondents who were receiving AFDC benefits at the time of the interview are examined, it is clear that the experiment had a minimal impact; 21.2% of the experimental group members who remained on AFDC worked more than 100 hours per month, compared to 14.5% of the control group members. This difference was not statistically significant, however, χ2 (1, N = 341) = 2.54, p = .11. These control group members evidently underreported both hours worked and income earned to their caseworkers in order to retain their benefits.

 

Focus group participants expressed favorable attitudes toward the effects of the change in the 100 hour rule.  Group members felt that it had removed a real and perceived barrier to work for recipients.  Members also identified one unstated or incidental benefit associated with the rule change: recipients were now more likely to report accurately the number of hours they had worked and their work income than prior to the rule change.  

 

CLASS ASSIGNMENT: Answer the following questions:

 

1.         What do the statistical symbols M, SD, t, χ2, and p mean?

 

2.         What theoretical assumptions were tested?

 

3.         What are the null hypotheses for the three research hypotheses listed above?

 

4.         What confidence level do you think is appropriate for this study?

 

5.         The "t-test" for Hypothesis 2 produced a probability level of .02. Why was Hypothesis 2 not confirmed?

 

6.         This evaluation combined an outcome analysis with a process evaluation. What information was given on outcomes?

 

7.         What information was given about process? How were these data collected?

 

8.         Was this a "true" experiment? Why or why not?

 

9.         What information was given about the reliability of the instrument?

 

10.       Is this study generalizable? Why?

 

11.       Is there an obvious threat to internal validity? What is it?

 

12.       What other information would you expect to see in an empirical research article?

 

13.       Would you recommend to the Federal government that this policy be continued? Why?

 
