by Steve Schackne
Most EFL classes fall somewhere in the intermediate range. However, have you ever taught one of these classes when all of the students were at the proper level? That is, in the real world of EFL, do we ever see, for example, a high intermediate class where all of the students fall into the high intermediate range? A beginning class where all the students are true or false beginners? An advanced class where all the students are advanced?
Most classes, regardless of label, tend to have students of different levels. This is not necessarily a bad thing, for it has been posited that mixing strong students with weak ones can benefit the weak students without holding back the strong. Still, it is noteworthy that, despite comprehensive strategies designed to section students by language level, most EFL classes contain students of measurably different abilities. It can be argued that with at least seven definable language levels (true beginner, false beginner, lower intermediate, intermediate, upper intermediate, lower advanced, advanced), uniform leveling is impractical; but even within the basic categories (beginning, intermediate, advanced), placement tests often do a haphazard job of assigning students to the "right" class.
In a previous article, I argued that EFL teachers should use their time in ways that make them more efficient, effective, and accountable: for example, spending a bit more time on curriculum development and a bit less on grading. Here, I propose that, given the imperfect nature of placement instruments, teachers and schools should abandon expensive and time-consuming efforts to measure student language levels.
Many schools spend untold teacher hours looking for standardized instruments or, more often, developing their own placement tests and entrance examinations. The results vary a bit, but most will still yield "de facto" mixed-level classes. This can be a waste of both time and money, as teachers spend hours on test writing (not to mention overtime expense), and standardized instruments can be expensive and time-consuming to administer. At my school, evaluative tests are developed anew each year; tests are administered only once and then distributed as "practice tests" to area high school students prepping for the English entrance exam. While offering access to previous exams through its web site is a noteworthy community service on the part of my university, the time and effort that goes into this yearly ritual is hardly the most efficient use of staff time.
Years ago, while working for an educational foundation, I solved the placement problem by using the Secondary Level English Proficiency test (SLEP). This is an easily administered 85-minute evaluation which measures listening and reading, and correlates reasonably well with overall language proficiency. The cost, however, comes to over $10.00 per test taker, and while my educational foundation had "deep pockets," this cost would be over budget for some schools.
A cloze test, by contrast, can be developed at almost no cost, administered quickly, and yields results which mirror those of standardized institutional tests. The cloze has a fixed design format: the first three and last three sentences of the passage remain intact, and every 7th word in between is deleted (articles, prepositions, adverbs, verbs, adjectives, nouns, and conjunctions are all candidates, but proper nouns are never deleted), with 50 blanks constituting the usual length. More recently, this standard has become flexible, with anywhere from one to three sentences remaining intact and deletions of every 7th or 5th word. The cloze can be scored on an exact-response or appropriate-response basis. Research (see Suggested Reading) has shown that a standardized cloze correlates favorably with the Michigan Test, TOEFL, and the (former) EEE at the University of Beirut, all of which were considered reliable and valid standardized measurements of language ability.
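The fixed design format described above is mechanical enough to sketch in code. Below is a minimal, illustrative Python sketch of a cloze-passage generator; the function name `make_cloze` is my own invention, and the proper-noun check is only a rough approximation (a capitalized word that does not begin a sentence), since real test writers would screen deletions by hand.

```python
import re

def make_cloze(text, nth=7, intact=3, max_blanks=50):
    """Build a cloze passage: keep the first and last `intact`
    sentences untouched, delete every `nth` word in between,
    and skip proper nouns (crudely approximated here as
    capitalized words that do not start a sentence)."""
    # Naive sentence split on terminal punctuation followed by space.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    head = sentences[:intact]
    tail = sentences[-intact:] if len(sentences) > 2 * intact else []
    body = sentences[intact:len(sentences) - len(tail)]

    answers = []        # the deleted words, i.e. the exact-response key
    out_sentences = []
    count = 0           # running word counter across the body
    for sent in body:
        words = sent.split()
        for i, w in enumerate(words):
            count += 1
            if count % nth == 0 and len(answers) < max_blanks:
                core = w.strip('.,;:!?"')
                # Skip empty tokens and likely proper nouns.
                if not core or (i > 0 and core[0].isupper()):
                    continue
                answers.append(core)
                # Replace the word with a 10-character blank,
                # preserving any trailing punctuation.
                words[i] = w.replace(core, '_' * 10, 1)
        out_sentences.append(' '.join(words))

    passage = ' '.join(head + out_sentences + tail)
    return passage, answers
```

A passage long enough to leave three sentences intact at each end is required; anything shorter simply comes back with few or no blanks, which mirrors why the standard calls for texts that can support 50 deletions.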
Over 20 years ago, at a Taiwan university, when a cloze test was proposed to section students, the EFL administrators balked, fearing a “fill in the blanks” exercise would not be reliable; trained linguists, however, saw more than a mechanical exercise. The cloze forces test-takers to grapple with meaning based on the surrounding language, decoding and guessing in a way that native speakers do. It is a realistic test of language ability which closely adheres to the principles of schema theory in language comprehension.
This same university conducted an informal study on leveling, comparing an institutional instrument with a simple holistic approach: a 2-minute interview. The results were surprisingly similar, further eroding the case made by the university test writers and professional test developers. In the end, the cloze was adopted as a sectioning instrument.
Most schools, especially universities, are expected to implement standardized leveling tests or to have professional staff develop an institutional test. These leveling tests often give a school the imprimatur of professionalism, but they can be both time-consuming and expensive, and they often do an imperfect job of determining student language levels. Other options are available which will save time and money, and will yield at least similar results. While brief interviews may not be a consistently valid determiner of language level, the cloze format has been tested, researched, and effectively used to place students in appropriate classes. Furthermore, it carries negligible costs, and is easy to administer and grade, thereby leaving the program with more money and the staff with more time to devote to meaningful program development.
Glossary

Appropriate Response: A response which fits both syntactically (grammar) and semantically (meaning) into a cloze passage.
Cloze Test: A test for diagnosing reading ability where words are deleted at fixed intervals and the reader is required to fill in the blanks.
Exact Response: The exact word deleted in a cloze passage.
False Beginner: Language student who has studied a particular language before, but still remains at the beginning level.
Holistic: As I use it, an approach which emphasizes an overall impression in a language interview, as opposed to concentrating on discrete parts, such as grammar or pronunciation.
Schema Theory: A theory of learning which emphasizes the importance of previous knowledge in building and acquiring new knowledge; as I use it and as it was often interpreted in linguistic circles in the 70s and 80s, it refers to the comprehension of surrounding language to build and construct new language; hence a reader’s ability to successfully perform on a cloze procedure would depend on the understanding of the surrounding language in order to appropriately fill in the blanks, making it a reasonably reliable test of reading comprehension.
Standardized Instruments: Tests that have been professionally developed, pre-tested, and researched; internationally recognized tests, such as SLEP, TOEFL, and IELTS fall into this category.
True Beginner: A language student who has just started studying a foreign or second language.
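The two scoring bases defined above differ only in what counts as correct: the exact deleted word, or any word that fits. A minimal Python sketch follows; `score_cloze` is a hypothetical name of my own, and the answer key simply lists the exact word first followed by any words a grader has judged appropriate, since genuine appropriate-response scoring ultimately rests on human semantic judgment.

```python
def score_cloze(responses, answer_key, mode="exact"):
    """Score a cloze test.

    responses:  the test-taker's answers, one per blank, in order.
    answer_key: maps blank index -> list of acceptable words,
                with the exact deleted word listed first.
    mode:       "exact" counts only the deleted word;
                "appropriate" counts any listed acceptable word.
    """
    correct = 0
    for i, resp in enumerate(responses):
        acceptable = answer_key.get(i, [])
        if mode == "exact":
            acceptable = acceptable[:1]  # exact word only
        if resp.strip().lower() in (a.lower() for a in acceptable):
            correct += 1
    return correct
```

The same answer key serves both modes, which makes it easy for a program to report both scores and let staff decide which basis to use for sectioning.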
Suggested Reading

Aitken, K. G. 1975. Problems in cloze testing re-examined. TESOL Reporter, 8(2).
Hanania, E. & M. Shikhani. 1986. Interrelationships among three tests of language proficiency: standardized ESL, cloze and writing. TESOL Quarterly, 20(1), 97-109.
Oller, J. W. 1973. Cloze tests of second language proficiency and what they measure. Language Learning, 23, 105-118.
Poel, C. J. & S. D. Weatherly. 1997. A cloze look at placement testing. JALT Testing & Evaluation SIG Newsletter, 1(1). http://jalt.org/test/po_we.htm
Schackne, S. 2006. The common sense approach: Grades and ESL. Developing Teachers.
Copyright © 2000-2014 Developing Teachers.com