Comparative Education
Online ISSN : 2185-2073
Print ISSN : 0916-6785
ISSN-L : 0916-6785
The Negative Effects of Administering Undergraduate Assessments of Generic Skills: Considering the Experiences of Australia and the United States
Ayaka NODA

2010 Volume 2010 Issue 40 Pages 3-23

Abstract

  There has been a global trend of expecting higher education systems to assure that the quality of graduates meets current social demands. In Japan, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) suggests that diploma competencies be cultivated across all disciplines within four-year undergraduate programs. The U.K. Confederation of British Industry (CBI) similarly expects higher education to enhance the core skills of graduates so as to assure that they have basic employment skills. The OECD is currently conducting a feasibility study on the Assessment of Higher Education Learning Outcomes (AHELO) to measure undergraduate generic skills in addition to skills in the economics and science fields. Existing literature on generic skills outcomes has focused particularly on trends in overseas assessment systems and the contents of their assessment tools. However, these studies have not adequately discussed the negative impacts inherent in the use of such tools. The present study therefore aims to explore the negative aspects of generic skill assessments and to bring new insights to Japanese higher education institutions in applying assessment tools. In particular, it focuses on discussions regarding generic skill assessments in Australia and the U.S.

  Consensus as to what is meant by the term “generic skills” has proven elusive; definitions vary across nations, stakeholders, and higher education institutions. Recently, central governments, the labor market, and universities have become increasingly involved in efforts to specify in concrete terms those skills which are to be considered “generic.” For the purposes of this study, particular attention is paid to the Graduate Destination Survey (GDS), the Course Experience Questionnaire (CEQ), and the Graduate Skills Assessment (GSA) in Australia, and to the Collegiate Learning Assessment (CLA) in the U.S., with the aim of examining the positive and negative aspects of these assessment tools.

  The Graduate Destination Survey (GDS) and the Course Experience Questionnaire (CEQ) have been administered in Australia, as a matter of national policy, to all graduates in the year following graduation. The data are used as public information in university rankings as well as in determining federal government funding allocation. However, the methodologies of these instruments have also drawn criticism. For example, the CEQ elicits graduates’ overall perceptions of their entire degree program, but its items are seen as oversimplified, requiring students to render an overall judgment across an entire program on a five-point scale. Moreover, variation in response rates raises questions about the appropriateness of comparisons among different universities.

  Another assessment tool in Australia, the Graduate Skills Assessment (GSA), was developed with government funding as a standardized test of generic skills (i.e., critical thinking, problem solving, interpersonal understandings, and written communication). The GSA is designed to test students at both entry and exit levels, and could be used to gain insights into the “value added” across institutions. However, the central government’s involvement in universities has drawn negative reactions from academia, which has tended to emphasize autonomy. It has also proven challenging to achieve consensus among multiple stakeholders (e.g., policy makers, academia, and employers) with regard to what kinds of generic skills should be assessed. (View PDF for the rest of the abstract.)

© 2010 Japan Comparative Education Society