On not learning, or the con in the context

We will, we will, fail you by testing what you do not know…

We live in a rather strange world. Or is it that we assume the world
to be non-strange in a normative way, but the descriptive world has
always been strange? Anyway, I say this to start a rant about some
obviously missed points in my area of work, namely educational
research, particularly science and mathematics education research.
In many cases, the zeal to show that students have
‘misunderstandings’ or are simply wrong, followed by a hair-splitting
(micro-genetic) exercise on the test the students were inflicted
with, seems to be the norm. So does the use of terse jargon and
inconsequential statistics, which makes the study reports as
impossible to read as possible.
But I have seen another pattern in many of the studies, particularly
in mathematics education. The so-called researchers spend countless
nights dreaming up situations as abstract as possible (the further
away from real-life scenarios the better), then devise problems
around them. These problems are then put into research studies, which
aim to reveal (in an almost evangelical sense) the problems that
plague our education. Unsuspecting students, with appropriate
backgrounds, are rounded up. As a general rule, the weaker the
socio-economic background your students come from, the more exotic
your study. So choose wisely. Then these problems are inflicted upon these poor,
mathematically challenged students. The problems are set in
situations the students have never been in and never will be. The
unreal nature of these problems puts many off (for example, 6 packets
of milk in a cup of coffee! I mean, who in real life does that? The
milk would just spill over; the problem isn’t there. It is a
pseudo-problem created to satisfy the research question of the
researcher. There is no context, only con. Or finding a real-life
example for some weird fraction). The fewer the students who perform
correctly, the happier the researcher is. It just adds to the data
statistic that so many % of students cannot perform even this
elementary task well. Elementary for
that age group, so to speak. The situation is hopeless. We need a
remedy, they say. And remedy they have: some revised strategy, which
they will now inflict on students. Then they will either observe a
few students, as if they were exotic specimens from an uncontacted
tribe, as they go on explaining what they are doing or why they are
doing it. Or the researcher will inflict a test (or is it a taste?)
wholesale on the lot. This gives another data statistic, which is
then analysed within a ‘framework’ (of course it needs support) of
theoretical constructs!
Then the researcher, armed with this data, will do a hair-splitting
analysis of why, why on Earth, the student did what they did (or
didn’t do). In this analysis, they will use the work of other
researchers before them who did almost the same thing. Unwieldy,
exotic and esoteric jargon will be used profusely, to persuade any
untrained person to give up on reading it immediately. (The mundane,
exoteric, understandable and humane is out of bounds: if you write in
that style, it is not considered ‘academic’.) Of course, writing this way,
supported by the statistics, will get it published in the leading
journals in the field. Getting a statistically significant result is
like getting a license to assert the truthfulness of the result. What
is not clear in these mostly concocted and highly artificial studies
is what one is to make of this significance outside of the
experimental setup. As anyone in education research
would agree, no two setups can be the same, so what is the
generalisability of such results?
Testing students in this way is akin to subjecting learners of a new
language to an exotic and terse vocabulary test. Of course we are
going to perform badly on such a test. The point of a test should be
to find out what students know, not what they don’t know. And if they
don’t know something, it is treated as if it were the fault of the
individual student. After all, there would be /some/ students in each
study (with a sufficiently large sample) who would perform as
expected. When a student does not perform as expected, there can be
many possible causes. It might be that the student is not able to
cognitively process and solve the problem, that is, in spite of
having sufficient background knowledge, the student is unable to
perform as expected. It might
be the case that the student is capable, but was never told about the
ways in which to solve the given problem (ZPD, anyone?). In that
case, it might be that the curricular materials the student has
access to simply do not deal with the concepts in an amenable way. Or it
might be that the test itself is missing out on some crucial aspects
and is flawed, as we have seen in the example above. The problem is
systemic, yet we tend to focus on the individual. This is perhaps
because we have a normative structure that posits an ideal student
for each age group. This normative, ideal student is given by the
so-called /standards of learning/. These standards decree that at age
xx a student should be able to multiply three-digit numbers. Entire
curricula are based on these standards. Who and what decides this?
Most of the time, the standards are wayyy above the actual level of
the students. This chasm between the descriptive and the normative
could not be wider. We set unreal expectations for the students, in
the most de-contextualised and uninteresting manner, and when they do
not fulfil them, we lament the lack of educational practices,
resources and infrastructure.