Evaluation of the Australian Wage Subsidy Special Youth Employment and Training Program (SYETP)

The Special Youth Employment and Training Program (SYETP) was a wage subsidy introduced in Australia with the aim of improving the movement of young people into work. In 1984 the SYETP was a flat-rate subsidy of A$75 a week, paid to employers for 17 weeks and equivalent in value to half the average teenage wage. It was available for youths aged 15-24 who had been claiming unemployment benefits, and not studying full-time, for at least four of the preceding twelve months.

A review of the theoretical literature indicates that it offers no conclusive case that wage subsidies produce employment gains, and the empirical evidence remains unresolved in both the recent overseas and the Australian literature. A contributing factor is the inadequacy of the evaluation methods used, and an appraisal of the micro-evaluation evidence for the SYETP and other Australian wage subsidies finds that it suffers from the same deficiencies. The inadequacies of past analyses of the SYETP suggest three themes to address: suitable modelling of selection to account for the influence of observables or unobservables; dealing with non-response in the observational data; and appropriate control for differences between the SYETP and comparison groups. A past evaluation by Richardson (1998), which modelled selection with a Heckman-style bivariate probit using the Australian Longitudinal Survey of youths for 1984-1987, found a very large positive employment effect for SYETP participants 26 months after taking part. A key issue with those results is that no account was taken of sample attrition, and theory indicates that the resulting bias is potentially serious.

Two evaluation methods are explored: the Heckman selection bivariate probit model, and matching methods, in particular propensity score matching. Both identify a parameter corresponding to the mean effect of treatment on the treated, which can be used to decide whether the programme leads to employment gains, but each method relies on different assumptions: the Heckman bivariate probit allows selection on unobservables, while matching methods assume selection on observables. A series of empirical studies assesses several questions: what happens to the evaluation outcome if selection is assumed to be based on observables rather than unobservables; how important sample reduction is to the evaluation outcome; and how sensitive the employment impact is to variation in modelling.

To provide a foundation for comparison, Richardson (1998) is first replicated successfully. The more recently popular propensity score matching (PSM) method is then applied, and the PSM results reduce the size and significance of the estimated employment effect. The effects of attrition are examined and accounted for, and the impact on the evaluation is discussed: the results are found to be smaller and to have lower statistical significance. Correctly accounting for weights is found to be important when applying PSM. The Heckman and PSM methods both make strong but very different assumptions about selection into the SYETP, so the employment impacts found under each method are compared, together with a discussion of the assumptions most suitable for this evaluation. The value of replication is confirmed and advocated, and the research shows that careful treatment of data and modelling problems is important. The variation in methods and assumptions provides evidence on the robustness, and hence the external validity, of the employment gains from the SYETP.
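To make the matching estimator concrete, the sketch below estimates a treatment-on-the-treated effect by propensity score matching on synthetic data. It is a minimal illustration only: the covariates, the data-generating process, the single-nearest-neighbour matching rule and the uniform weights are placeholders for exposition, not the Australian Longitudinal Survey specification or the weighting scheme used in the thesis.

```python
# Minimal propensity score matching (PSM) sketch for the effect of treatment
# on the treated (ATT). All data and variable names are synthetic placeholders,
# not the Australian Longitudinal Survey used in the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic covariates, SYETP participation indicator and later employment outcome.
n = 2000
X = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.7 * X[:, 1])))
d = rng.binomial(1, p_treat)                                   # participation
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * d + X[:, 2]))))    # employed later

# 1. Estimate the propensity score P(D = 1 | X) with a logistic regression.
ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

# 2. Match each treated unit to its nearest control on the estimated score.
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = controls[idx.ravel()]

# 3. ATT = (weighted) mean outcome difference between treated units and matches.
#    Survey/attrition weights would replace the uniform weights here; the thesis
#    stresses that using the correct weights matters when applying PSM.
w = np.ones(treated.size)
att = np.average(y[treated] - y[matched], weights=w)
print(f"Estimated ATT (employment probability): {att:.3f}")
```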

The orthodox approach of adopting only one potentially appropriate selection method, with its underlying assumption of selection on observables or on unobservables, is challenged. A sensitivity analysis showing how the estimated employment effects vary with changes in key modelling assumptions can provide a confidence interval that accounts for statistical modelling uncertainty. It is concluded that the benefit is an informed overview of the role the selection assumption plays in the evaluation outcome.
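The role of the selection assumption can be summarised compactly. The notation below is a stylised restatement for exposition, not taken from the thesis: D indicates SYETP participation, Y_1 and Y_0 the employment outcomes with and without participation, X observed characteristics and Z the determinants of participation.

```latex
% Stylised summary of the two identification strategies (illustrative notation).
\begin{align*}
\text{Parameter of interest:}\quad
  & \mathrm{ATT} = \mathbb{E}\left[\,Y_{1}-Y_{0}\mid D=1\,\right]\\
\text{Matching (selection on observables):}\quad
  & Y_{0}\ \perp\ D \mid X\\
\text{Heckman bivariate probit (selection on unobservables):}\quad
  & D=\mathbf{1}\!\left[Z\gamma+u>0\right],\qquad
    Y=\mathbf{1}\!\left[X\beta+\delta D+\varepsilon>0\right],\\
  & (u,\varepsilon)\ \text{bivariate normal},\quad
    \operatorname{corr}(u,\varepsilon)=\rho\ \text{(selection on unobservables when }\rho\neq 0\text{)}.
\end{align*}
```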
