Everyone’s Favorite Topic: Assessment

Traditional Testing Benefits: How could the time-conscious teacher not adore and embrace traditional testing? Think of the perks, the seductive siren call of “tradition” and “test”. If we’re being very honest, the tug of traditional testing lies in its very nature. It’s easy: one test fits all. It’s timely: one test is applied evenly, to all students, at the end of the unit or quarter or semester. It’s efficient: only one answer key is required. And, finally, it’s guilt-free. This is where we get down to the nitty gritty: at its very core, traditional testing assumes that the student is responsible for all learning that takes place. “I taught it; why didn’t you learn it?” is a typical teacher response to a test-taker’s poorly executed attempt.

Authentic Testing is much more concerned with whether or not a student has acquired a certain skill set, bit of knowledge, or essential understanding. The teacher adjusts his/her lesson, teaching style, or strategy based on whether or not the student is “getting it”, and the assessment reflects what the student knows at that point in time. The purpose of education in this context is that the student actually learns — not that “ha! I got you” flavor that traditional testing takes on. The downside, of course, is that it tends to be more individualized, more time-intensive, more focused on the student’s learning. When you’re attempting to stuff two years’ worth of knowledge into a semester, it feels a little idealistic.

Ultimately, in light of ideals and constraints, a blend of the two might be in order. Pleasing parents while serving the needs of students, for example, often requires such a blending. We are, after all, human (and thus, with our foibles and weaknesses, greatness and strengths), and there are times when each form of testing can provide the appropriate result or understanding, depending upon the need, circumstance, or momentary whim.

Of course, I am much more fond of authentic testing than I am of traditional testing — and only use the latter when I am forced to by outside agencies. But I am an English teacher, and I wonder what I would think if I taught math… 😀

Disruption: The Good Kind of Revolution

The issues of public education, the state of our schools, and the preparation of our students for the global economy draw concern, criticism, and commentary from nearly all levels of society within the United States. This makes sense on several fronts. First, human nature dictates that we care about what happens to and with our children, and, while we may disagree on how to go about it, we all want the best for them; even those of us without children of our own understand the importance of education and its impact upon those who will become our future leaders. Secondly, taxpayers want to know that their hard-earned money is accomplishing all that it can, and, considering that the national education budget in 2007 was $972 billion, it makes sense that public scrutiny falls upon those of us who educate. And, while it is by no means a final reason, the political rhetoric, chest-thumping, and hay-making engaged in by our state and federal officials guarantees that no test score shall be left behind. In fact, education is often politicized for no other reason than to garner votes, often to the detriment of school systems, administrators, teachers, and, yes, students. This is unfortunate for many reasons, not the least of which is that it muddies the water. We often cannot see the important issues facing our education system simply because we are distracted by symptoms, unintended consequences, or political correctness. While such muddying fuels cries for education reform, researchers such as Larry Cuban, a professor of education at Stanford University and author of Oversold and Underused: Computers in the Classroom, argue that the education system within the United States has embraced “school reform again, again, and again” (Cuban 1). If this is true (and who can deny it?), then how much of that reform has been effective? And what, then, is the purpose of reform if it neither addresses the root problems nor accomplishes the purported goals?

And, yet, neither the public nor the feds nor the educators seem happy with the status quo. One of the concerns most loudly deliberated regarding public education is the reportedly dismal performance of American students on the Program for International Student Assessment (PISA) when considered against that of other industrialized nations. Of course, the idea that we bank so much on a test of 5,600 15-year-olds should prompt some concern, but if that doesn’t, consider that while U.S. students scored “11 points below the average of the 30 countries” on the science portion of the PISA, that 11-point difference sits on a 1,000-point scale. It occurs to me that, for a nation that has prided itself on being distinctly unique in its founding and its self-governance, we are trying awfully hard to compare ourselves to nations that don’t necessarily agree with our stamp of democracy or who haven’t agreed with it for long. Although teachers have long been accused of living and breathing the deficit model, it feels rather more disingenuous than usual to determine that our education system is suffering a deficit of some kind based upon disconnected test scores, and then to align ourselves more closely to those nations who don’t educate all students or don’t embrace individuality, self-governance, or private ownership. The emperor, indeed, is not wearing any clothes – but who has the temerity or strength of character or maybe even simple courage to notice and to say so? After all, rote memorization and scoring well on standardized tests do not confer a sudden blossoming of creativity, innovation, or invention, key components of a forward-moving society.

While I don’t purport to understand the make-up or purpose of such international assessments, it seems to me that we are asking the wrong questions and airing the wrong concerns. Uppermost in my mind is this: what are the real questions that we should be asking? Among those that beg to be considered are the following:
• What is the purpose of universal education and the public school system in the United States?
• How do we know that we are accomplishing that purpose?
• Is a standardized test the best method of analyzing the success of that purpose?
• If so, how do we know what questions to place on the test?
• Are current standardized tests assessing what we want them to?
• If not, what would a better test be?
• Should we look at the number of successful patent applications? The number of innovations? The percentage of voter turn-out?

And, finally, if we’re not assessing what we think is important, then why do we allow the vagaries of political winds to not only sweep over us but actually sway us? How is it that we even pay attention to U.S. standings among other industrialized nations regarding high school test scores? How do we know, for instance, that we’re comparing apples to apples and oranges to oranges when we compare our test results to theirs? What convinces us that we have fallen behind the rest of the world? If we have, in what ways? Do test scores embody that which is most important? Of course, in his book Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns, Clayton Christensen (and Horn and Johnson) argues that the United States maintains its “technological edge in the world” simply because it is still a “magnet for the best talent in the world,” attracting the best and the brightest instead of employing the graduates of American schools (Christensen 6). But there’s something more to the story than simple recruitment.

If we have a crisis in education and if we are truly not meeting the needs of our students and our populace, it seems of paramount importance to define what it is we are attempting to accomplish in the education world. Cuban writes that a school’s “primary mission is to make the next generation literate, prepare it for civic duties, and imbue it with the core values of the community” (153). Positive learning environments foster “self-direction and confidence,” as well as “respect and affection” (39). In addition, various stakeholders also want 1) efficient and productive schools, 2) engaging and active learning that is connected to real life, and 3) preparation for the future workplace. If this is true, then how are our standardized tests assessing these educational mandates? And if reform is required, what form should it take? Why haven’t our previous reforms, such as the push to make all students technologically literate, actually worked? Cuban’s book covers a great deal of ground, concluding that simply inundating schools with technology will not transform education. This is an intuitive and relatively simple deduction: after all, providing me with the accoutrements of a medical doctor will not transform me into one; strapping a saddle on a green horse will not alter its wild, untrained nature; placing 14 violins in a junior high locker room will not produce 14 string virtuosos.

It seems, however, that Cuban ignores the elephant in the room. While he proves over and over that preschool and high school teachers adapt technological innovations “to existing ways of teaching and learning” (58), he doesn’t discuss the why. For example, he explains that “when teachers adopt technological innovations, these changes typically maintain rather than alter existing classroom practices” (71), “maintain rather than transform prevailing instructional practices” (73), and “fit their customary practices, not revolutionize them” (97). The world of the university fares little better. In many cases, because “tenure decisions are so clearly linked to the volume and quality of scholarly publication, not to teaching effectiveness,” there is, for many professors, “a conflict of interest – being hired to teach but rewarded for doing research” (109). Moreover, these faculty members “continue to believe that the central purpose of teaching is to transmit their discipline’s accumulated knowledge to students” (118). In essence, these professors act as the “content-disseminator”, perpetuating the misconception that “those who know can teach” (119). Unfortunately, even in an “elite private research-driven institution” such as Stanford University, this means that – despite reforms and the addition of computers and other technology in the classroom – there have been “few changes in how they teach and how their students learn” (107). Thus, it is clear that wiring the world of education has done little to improve, transform, or revolutionize it.

The elephant, of course, stands complacently in the realm of educational policy, teacher preparation programs, and the lack of pedagogical course requirements for professors. It is easy to throw money at a problem, toss a few computers into a classroom, and bemoan the failings of schools and teachers, but it is much more difficult to address the root of the problem. Until we decide that we really need to teach students to think critically, address problems rationally, and make their thinking visible through writing and communicating, we will never understand the importance of our math, science, or language arts curriculum. Until we determine that art and music and dance can breathe creativity and innovation and invention into our education programs, we will never be able to see how integrated a curriculum could and should be. And until we realize that deep learning most often occurs when students are engaged in “projects that cut across content boundaries and connect to learning outside the classroom” (14) or when “constructivist practices” are used, we will never grasp the importance of revolutionizing our teacher prep programs and insisting that our educational policy, curriculum and instruction, and educational leadership and administration programs get on the same page. These are reforms that can only happen at the university level. For, oddly enough, the “trickle down” theory is alive and well in the education world. Shining stars do exist in public education and on a few university campuses, but they are few and far between. Early adopters seem a genetic anomaly; very few (19% in Cuban’s study) say that “in using technology they…become more student-centered in their teaching” or make “fundamental changes in their pedagogy” (95). Most of us, instead, tend to rely on 1) the way our own teachers taught us, 2) the way our teacher preparation programs taught us, or 3) the post-graduate education courses we seek out because we refuse to settle for what we’ve become. If education is truly to be reformed, practices at the very highest level must be transformed. Only then, as new teachers emulate those who instructed them in their university classes, can we expect education to encounter true reform.

Of course, it is always easy to point fingers, to blame, to accuse. A central goodness of the citizens of the United States, however, seems to lie in our willingness to look critically at ourselves, to assign responsibility, and yet to work to correct our mistakes and misconceptions. It is my hope that education reform begins with this first tough look, this willingness to take responsibility, and an optimistic enthusiasm for a better educational tomorrow. We owe it to our students, our future voters, and ourselves.

Cuban, Larry (2001). Oversold and Underused: Computers in the Classroom. Cambridge, MA: Harvard University Press.

Christensen, Clayton, Curtis W. Johnson, and Michael B. Horn (2008). Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. New York: McGraw-Hill.

Of Humans and Technology

The human condition beckons the curious, engages the scholar, and stumps the wisest of philosophers. Despite the pitfalls awaiting the intrepid explorer, its very nature demands examination, especially since it is the human in question doing the examining.


One could argue that nothing whets the appetite quite like self-absorption, but the truth of the matter is that through examination comes illumination, and with illumination comes understanding. Once we understand how and why we came to be, we can, with some level of accuracy, project a path of future human behavior. One could even, conceivably, shape future human conduct with such knowledge. Whilst many scholars and philosophers have contemplated humanity, two in particular have entered the fray with gloves on. Jared Diamond argues in Guns, Germs, and Steel (1999) that continental environment has played the largest role in determining the fates of human societies, while Paul R. Ehrlich explains in his book, Human Natures (2000), that the evolutionary processes of genes, culture, and environments have been the most responsible culprits. While each professor makes a series of solid and provocative arguments, there is still room within both for questions, further contemplations, and divergent thinking. Perhaps most engaging is the thorough foundation they provide for the student of human nature devoted to the study of contemporary adaptation and survival within the technology world.

Diamond’s main contention rests on three major environmental features that helped shape the destiny of early humans: wild plants that could be cultivated into crops, wild animals that could be domesticated into pets and livestock, and continents that extended along an east/west axis rather than a north/south one. Areas that contained all three features tended to flourish. Eurasia, it seems, not only has more domesticable species of plant life than other continents, but that plant life is also more protein-rich, containing more calories per hour of gardening than foodstuffs grown in other parts of the world. These aspects entice the hunter-gatherer to settle down and begin farming, since the advantages of doing so are quite clear. Eurasia also contains a higher percentage of large domesticable animals; these livestock not only added to the provisions available to early humans, but they also helped till the ground, fertilize the soil, and harvest the final product. A seemingly negative side effect of large domesticated animals (one that actually turns out to advance the owning society in the long run) is the fact that many of the most horrific germs impacting human life mutated from them and passed on to humans. Those humans who survived (smallpox, the plague, etc.) passed on their immunities to their offspring. Those humans who were not exposed to such animals, and thus to such nasty germs, did not develop immunity over time and suffered massive epidemics when exposed (think: Native American populations ravaged by smallpox after, in some cases, nothing more than simple contact with other natives who had met with infected Europeans). And, finally, the axis of the continent determined, to some extent, how far a given domesticate would travel. An east-west axis is more desirable since plants are more likely to grow at the same latitude while animals are more likely to thrive in similar conditions. Of course, anything (from desert to mountain range) that prevented peoples and cultures from interacting tended to block subsequent sharing of livestock and seeds, but the idea is that the easier one’s environment makes it to share one’s animals and plants, the more likely people are to meet, trade, and learn from one another.

As previously stated, those areas of Earth that possessed all three major environmental features are the ones whose societies progressed along the continuum from band to tribe to chiefdom to state. Since the world is diverse in so many ways, with a variety of environmental features and domesticable plants and animals, a broad range of human societies developed, with the inhabitants of Tasmania, “who abandoned even bone tools and fishing to become the society with the simplest technology in the modern world” (258), at one extreme and the advanced European societies possessing the guns, germs, and steel of the title at the other. The benefits of these three columns of civilization lie within the larger populations and sedentary lifestyles achieved within those populations. These two concepts, then, allow for all the trappings of civilization that the modern human tends to value: advanced technology, centralized political organization, writing systems, and, of course, the coveted guns, germs, and steel.

Diamond’s argument along the way often has the feel of a historical fait accompli, as if the humans involved were bound into strict channels of destiny and had no control along the way. It is easy to commiserate with the reader who pauses and wonders, “If A + B = C, then why doesn’t A2 + B2 = C2?” It seems that for some cases, in some situations, with some people or animals or plants, evolution works just fine. In others, however, the whole natural selection thing simply does not work out, and obstinate living matter refuses domestication in any finer sense of the word. This is one confusing aspect of Diamond’s argument. For example, if, over the course of 10,000 years, humans repeatedly selected edible foodstuffs for specific characteristics, it only follows that those characteristics might develop into a domesticated plant form. After all, we have the edible domesticated almond versus the poisonous wild one. What happens if no human selected out that plant? Over the course of 10,000 years, nothing beneficial for humans would happen. Perfectly comprehensible. But the argument states that if x, y, and z plants were not domesticated, then they are truly incapable of being domesticated. Period. (Ditto with large animals.) How is this possible? The argument continues: if they were even conceivably domesticable, then modern-day farmers or botanists or scientists of some particular stripe would have found a way to do so. Since they haven’t, this proves the point that they are incapable of being thusly tamed. Of course, this flies in the face of all we know regarding the many thousands of years necessary for some forms of natural selection or evolution to proceed. What could the plant eugenicists of yesteryear or the scientific plant-breeders of today do against the weight of so many millennia? Granted, Diamond’s discussion within the larger picture far outweighs these gnat-like annoyances. However, to be perfectly frank, a scholar and scientist of his ilk has no need for such a disingenuous argument. Another annoyance of similar proportions is the bizarre idea that the wheel could not be invented in the Americas because its peoples did not possess large domesticated animals (beyond the llama/alpaca, which was limited in geographic scope). However, Mexican ceramics reveal that the wheel was invented for toys. What, then, prevented the Mexicans from taking that invention a step further and inventing the wheelbarrow or rickshaw (though, admittedly, by all reports except Montanus’ disputed one, the rickshaw was not invented until the late 19th century)? Diamond’s discussion of this development feels weak and not at all up to par with his remaining chapters. Ehrlich, then, is appreciated for his reminders that we are, indeed, unique and that we should not apply larger, overarching truths to the individual.

Rooting oneself in Diamond’s rather more engaging points, however, one cannot help but find intriguing similarities between the geographical and biological worlds of yesteryear and the technological worlds of today. Even more provocative is the idea of applying these concepts to a university setting. What constitutes an axis within a class, a building, a department, a campus? Are technological advances or innovations easily shared, adapted, and adopted? Or are they eschewed for traditional strategies, entirely new technologies, or political alliances? One might deduce that an east-west axis could lie along fields of study or disciplines. If this is true, then what steps can be taken to minimize inefficient transfer of technological adaptations? How might one department learn from another while still honoring the uniqueness of its own? Another thought revolves around the population size of a given course, department, college, or university. If the effects of isolation and population size on the development and maintenance of technology are widely documented in the realm of human cultures, what is the carryover to a university setting? If greater population equates to greater potential investment, greater competition, more innovations, and greater pressure to adopt new technologies (Diamond 1999), then how many is too many, or how few is too few, to achieve such desirable outcomes? And, finally, does an awareness of such matters make us responsible not only for setting research-based and stellar expectations (of ourselves, our programs, our schools) but also for demanding that those expectations be met?

An understanding of human natures, of how we came to be the way we are, of how we may, foreseeably, develop, is of paramount importance to determining who we want to be and where we want to go. This basic understanding allows us to make informed decisions, even detailed action plans, and, perhaps even more important, allows us to feel that we may have some control over our destinies, regardless of Diamond’s assertions to the contrary. The education world, in particular, had better sit up and take note: for if we – we teachers in the trenches, closest to our subjects, our students, and our situations – are not directing our own destinies, a centralized bureaucratic black suit, replete with standardized testing and federally mandated curriculum, will be. And that’s an evolution I’d rather not witness.

Ehrlich, Paul (2000). Human Natures: Genes, Cultures, and the Human Prospect. Washington, D.C.: Island Press.

Diamond, Jared (1999). Guns, Germs, and Steel: The Fates of Human Societies. New York: Norton.