The Politics of Educational Assessment in South African Public Schooling, 1994-2010

By Scott Timcke (School of Communication, Simon Fraser University)

Over the past two decades, various government jurisdictions within the United Kingdom, Australia, and the United States have implemented learning outcomes and student-centered learning models in their schooling systems. At the behest of local constructivist education theorists and curriculum consultants, and in an attempt to match international best practice, South Africa did so too in 1994. Its version was called Outcomes Based Education (OBE). At the time, these theorists and consultants were buoyed by the prospects, and waxed poetic about how this policy heralded a new era in South African education.

Formally, OBE was implemented in primary and secondary education as an attempt to update, democratize, and desegregate the South African schooling system in the post-Apartheid political regime. Nominally it was created with the fault lines of Foreign Direct Investment (FDI) courtship,[1] economic security, and individual aspiration in mind. OBE was introduced as a one-size-fits-all solution to address South Africa’s unique history and ambitions. It was envisioned that the same principles could be implemented equally well in rural grassland schools (those lacking even rudimentary buildings) and in long-established urban elite boarding schools. It had three components: school-wide evaluation; systematic evaluation; and assessment of preparation for national examinations and benchmarking.

To balance these issues pedagogically, progressive policymakers and experts sought to create a transferable, portable outcome and qualification rubric which would complement policy instruments being simultaneously developed by other national departments. For example, it was imagined that during the course of their schooling students would be able to complete electives which would in turn be certified and endorsed by the Department of Labour (DoL).[2] Moreover this certification would, upon data-capture by the DoL, be available for skills surveys, employer validation upon request, or labour brokering. But such efforts at praxis and interministerial collaboration were subject to serious walk-back once cost projections were completed, and saw only partial implementation.[3]

To meet this aspiration, and to signal that it had been met, efficient continuing assessment was seen as a necessary feature of the education model. This was implemented in the mid-1990s through a stepped rollout. But, apparently due to severe and persistent criticism from Monitoring & Evaluation units, evidence-based policy researchers, and internal government reviews, OBE was scrapped altogether in 2010. The prevailing interpretation is that it is a textbook-worthy case study in policy failure.[4] As one researcher puts it, OBE was the Cinderella of public schooling policy (Muller 2004, 239).

Keeping brevity in mind, the primary stipulated outcomes were to increase capacity and delivery in basic mathematical literacy (failed), language acquisition (successful, though one must wonder to what extent this was due to OBE), entrepreneurship (failed), and social cohesion and deracialization (pockets of success). On the ground, one major difference was the re-orientation from annual examinations to continuing assessment and from quantitative feedback to qualitative feedback.[5] Social cohesion was the highest value, and accordingly all subjects were arranged so that group and peer learning models were the order of the day. OBE also attempted to bring gender equality into the educational mandate. Some claim this as a success, but given the historical moment and the 1996 Constitution, counter-factually one cannot imagine an educational policy that would not have had this as a core component.

Basic critiques of OBE centered on the difficulties of implementation: the need to retrain and upskill teachers, problems with teacher retention (as many teachers sought employment in the private sector), the lack of support materials to teach certain subjects in line with policy expectations, and the failure of continuing assessment to ensure learners had an adequate grasp of material before entering new grades. Bureaucratic bungling, the user-unfriendliness of policy support documentation, foot-dragging, unaccountable delays in implementation, and attempts to make a silk purse out of a sow’s ear can be added to the list.

Questions were further raised about how monitoring and evaluation would be conducted, given the difficulty of finding competent administrators and auditors. Lastly, there were problems articulating how OBE might fit with the qualifications required for university entrance. Universities had little interest in OBE, for outside of their social mission their only concern was protecting a particular ‘high-jump bar.’

Some lefty political economists believed that OBE was a capitulation to neo-liberal impositions insofar as OECD metrics are used to assess the expected knowledge profile of a potential labour force, and what skills they might be able to master, thereby giving ‘quality assurance’ to potential investors. Despite the soundness and validity of these arguments, they rarely developed traction outside a small sympathetic audience. This neglect can partially be attributed to OBE being publicly understood as an attempt to bring accountability to the education-society relationship from an institutional political perspective, as opposed to a political economic exercise to create a globally competitive labour force on the cheap,[6] even though both models were present in the initial conception. Another partial explanation is the inertia of governmental decision making aimed at attracting FDI to kick-start a developmental state. In short, lefties had lost the bigger argument regarding developmental paradigms.[7]

Outside of the left, more advanced critiques centered on conceptual confusion: was the purpose of the system to be education or training; should assessment be systematically diagnostic or locally remedial; would assessment drive the development of future content or be used to assess the delivery of current content; would assessment be centralized or decentralized, and would it privilege students or institutions? Additionally, Muller (2004, 225) writes that there was confusion “between discipline-based and practice based learning and assessment.” Muller (231) goes on to indicate that, philosophically, assessments were “norm-referenced, summative and aggregative by default.” Drawing these factors together, the argument was that assessments could not be conceptually clarified until they were adequately justified, justification itself being the result of a more nuanced and subtle understanding of the purpose of particular kinds of educational practices and preparations.

A cluster of scholars argued that assessing the assessable, simply because it was easily assessable, rendered invisible the complex mechanisms that create differential outcomes and class disadvantage. Excessive focus on these items comes to inadvertently perpetuate inequalities. In short, people were papered over by policy. In parallel, due to the lack of capacity and instructional leadership, performance data was itself performed, and not an actual reflection of what was taking place within school settings. This is even before educational researchers observed the traditional ‘teaching to the test’ effect, which is encouraged by external assessments.[8]

Related to the above issues, within schools assessments meant prescriptions regarding the curriculum, exercises, and projects. This narrowed the range of discretionary judgement for teachers, principals, and local administrators, and was interpreted as an attempt to reduce their autonomy. Pragmatically, due to resource shortages and lack of access, some kinds of exercises and projects simply could not be conducted, and unless creative substitutes could be devised (more difficult with Science and Biology), these lesson plans slipped.

In the twilight and afterglow of the OBE experiment, some researchers sought to track the changes brought by this policy. Virtually no improvement in international test scores was registered. South Africa’s mean scores in maths and science still rank miserably in the region, and well below what is expected at all levels within the schooling system. The latter applies even when comparing against national curriculum standards.

A few lines of reasoning about these results are possible. First, it seems that either assessments had a negligible impact on improving outcomes, or that the system completely fell apart but assessments put a floor under the collapse. If the former, then questions should be directed toward political will, because the initial indicators suggested that implementation issues needed to be addressed. If the latter, then the South African education system has bigger problems that it is unwilling to acknowledge.

A second line of reasoning could reject the international mean scores and the testing component of OBE altogether, and instead emphasize the qualitative changes in educational practice brought by this policy realignment. Muller sums this up as:

we can see a discernable move since 1994 away from an underdeveloped systemic policy (Grade 12 external assessment only) towards a marked progressive preference for formative, process and integrative kinds of assessment with little real progress towards comprehensive systemic assessment. (2004, 239)

Notwithstanding the need to integrate the various Apartheid era education departments, one should nevertheless not find policy solace in polishing the turd, that is, in going to extraordinary lengths to make things appear better than they are, which is what I think this line of reasoning does. Doing so disrespects the dignity of the persons who were adversely affected by the policy.

A third line of reasoning is that, using the opportunity of post-Apartheid policy reconfiguration, brokers at the top of the Department of Education (DoE) mobilised assessments as a means to consolidate their power and exclude third-party interference in educational decision making. OBE and its attendant assessment regime simply happened to be the horse they rode into battle. Here specific educational content and techniques, educational life chances, and economic competitiveness were subordinate considerations. Rather, these kingmakers used the best intentions of pedagogically progressive proponents to facilitate ministerial territorial boundary marking in the new political regime.

Using Mill’s Method of Agreement, that is, identifying the single circumstance that all instances have in common, the third line of reasoning is the only explanation that can account for the inadequately supported intention of curricular prescription alongside the continued delay of content for this exercise, the delimiting of autonomy throughout the school system, the increase of external oversight coupled with the neglect of systematic oversight, and the lack of senior administrative accountability for these cock-ups.

To be explicit, it is my contention that during the policy planning and implementation cycle, the DoE deliberately sabotaged and sacrificed OBE as a means to stave off extra-ministerial encroachment into its presumed area of authority. It was designed to fail. To this extent, OBE was used in an intra-governmental power struggle.

In time, the archives might reveal that this political calculus was a better alternative to what might have been. In the meantime, though, the broad lesson for educational researchers is not to neglect that education is a public policy issue. And as public policy is forged in raw encounters between powers, with trades and concessions between various interests, it is important that educational policy and state politics not be prematurely discounted in any analysis. Education researchers should not be blind to the broader ideological and political context, but must instead place their analysis within that gestalt.

As OBE was not really given a full-blooded chance, it is outside the purview of this post to assess whether it would have been more effective had it not been starved of resources. What is within the post’s purview is a cautionary tale for educational researchers. Despite constant calls for evidence-based policy, governments tend to be wary of academic theory and slow to adopt social scientific findings. So, if and when governments do adopt proposals from this sector, researchers should still maintain a critical attitude and investigate the actual reasons why a particular policy proposal is adopted. As the case of OBE in South Africa illustrates, there is often more to the story.


Bibliography

Muller, J. (2004) ‘Assessment, qualifications and the NQF in South African schooling’, in Chisholm, L. (ed.) Changing Class: Education and Social Change in Post-Apartheid South Africa. Pretoria: HSRC Press.



[1] Notwithstanding humanistic intentions, this is how I understand the purpose of international education testing and benchmarking.

[2] Muller (2004, 226) writes:

For the administrative progressives in the NQF (broadly, the representatives of labour), integration meant the administrative integration of the DoE and DoL, the flattening of qualificational distinctions between education and training, both symbolising to the proponents the bridging of mental and manual, head and hand. This qualifications-driven effort at social engineering embodied a centralising agenda in the interests of the aspirational working and lower middle classes.

[3] Some policy researchers speculate that this failed collaboration was intentional on the part of the Department of Education. I am reminded of Sir Humphrey’s remarks in Yes Minister on Britain joining the European Economic Community: “We ‘had’ to break the whole thing up, so we had to get inside. We tried to break it up from the outside, but that wouldn’t work. Now that we’re inside we can make a complete pig’s breakfast of the whole thing.”

[4] As some additional background, South Africa has a ministerial department of education which manages the primary and secondary education sector. Management is centralized, and while there are provincial departments, which have some oversight from elected provincial administrations, these defer to national directives and oversight. Governing bodies (equivalent to PTAs) have a symbolic presence, but no power.

[5] Subaltern Education Theory was rhetorically mobilised and extended to create and position a different and purportedly inclusive understanding of knowledge. Here emphasis was placed on the value of indigenous knowledge and African knowledge systems. Perpetuating European models of examination was deemed unfair, exclusionary, and colonial. (Ironic, given that OBE was largely borrowed from UK education theorists.) While examinationism is counter-productive, in practice this meant little more than little examination and no failure. Student throughput was the order of the day, both to flatter the system and to clear the school system of 22- and 23-year-old students in grade 6. Basically this was an attempt to streamline grades according to maturity, not necessarily educational attainment.

[6] This comes with the usual qualifiers about the plurality and contradictory nature of the state, and the complex demands of different classes, regions, and developmental levels. One also needs to take into account that, due to the mixed economy and plurality of economic sectors, ranging from resource extraction and semi-skilled manufacturing to finance and telecommunications, there is no consensus on the ideal typical student.

[7] A criticism of undue state intervention was not utilized.

[8] The ‘teaching to the test’ effect is where students do better on tests, but only because they have been primed to take that particular test, whose subject matter is narrow and generally known beforehand. Teaching to the test thus comes at the expense of depth in students’ knowledge. The method is metaphorically akin to giving students the answers and then congratulating the teacher when they all receive high grades.
