Why is it difficult to copy edit your own work?

When I was writing my PhD thesis, as with anyone else, it involved multiple drafts going back and forth. As far as I am concerned, writing is never a linear process. At times one cannot write even a single line in a day, while at other times one may finish a couple of sections in a few hours. Writing is difficult because it is a third-order experience (Dix 2006). You may have several ideas with you, and you may even be able to explicate them while talking to others, but when it comes to writing them down it is not easy. Yet when you are in the “zone”, the writing becomes a natural thing. Your creative juices flow, and the elusive ideas seem to express themselves in words. I usually experience such a zone towards the end of a writing task, when disparate-looking ideas get bound together into a coherent whole. The feeling is close to an epiphany of a strange kind. You lose track of time and experience oneness with your work, as if the concrete form of the ideas were a physical extension of your self. It can be deeply satisfying to see your ideas in a concrete form. Mihaly Csikszentmihalyi uses the term “flow” to describe such an experience.

I experience a similar thing while reading a book. There are times when even reading a couple of sentences feels like a chore, while at other times, when I am in the flow, a hundred pages are finished in a couple of hours. The reading feels effortless; the words just seem to read themselves out to you. Of course, it also depends on the kind of book one is reading: technical books take longer to read.

When you are reading easily, you don’t actually read entire words, letter by letter. Rather, some sort of guesswork or pre-processing happens. Typically, by looking at the first and last letters and estimating the size of the word, we can guess the word before we have read it fully. That is, our cognitive system can fill in the gaps when we are dealing with familiar information. This makes reading fast for experienced readers. Full use is made of the repertoire of words that we know, and also of the rules of grammar: we expect certain words to follow certain other words, and at times our system fills in the gaps by itself when it finds them. This way reading becomes effortless and we can make meaning out of it easily. Such fast reading comes with experience and knowledge of the language. When young children have difficulty in reading, they have both problems: their prediction system is not yet strong, so they have to read each word, and each letter in the word, individually, and only then are they able to make sense of it. It thus boils down to being able to recognise the symbols as quickly as possible.
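The guessing described above can be sketched as a toy lookup: given only a word’s first letter, last letter, and length, a small lexicon often narrows the candidates down to one. The lexicon and function below are invented purely for illustration; this is not a model of actual reading.

```python
# Toy sketch of the word-guessing heuristic: match a partially seen
# word against a known "lexicon" using only three cues -- the first
# letter, the last letter, and the word's length.

LEXICON = {"reading", "riding", "writing", "working", "wording"}

def guess_word(first, last, length):
    """Return all lexicon words consistent with the three cues."""
    return sorted(
        w for w in LEXICON
        if w[0] == first and w[-1] == last and len(w) == length
    )

# The cues (r, g, 7) pick out "reading" without inspecting the middle
# letters -- but ambiguity remains when several words share the cues.
print(guess_word("r", "g", 7))   # -> ['reading']
print(guess_word("w", "g", 7))   # -> ['wording', 'working', 'writing']
```

When the cues match several words, as in the second call, the reader would need either more perceptual data or, as discussed below, context to decide.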

But how do we recognise the symbols that we see? There are several theories that attempt to explain our recognition of symbols. The template theory posits that our long-term memory holds as many templates as there are symbols we can detect. But this assumption puts severe demands on long-term memory, and also on the processes which would do the pattern matching. A simple observation which puts the template theory on the spot is that we can recognise a letter in its various forms: the sheer number of fonts and styles of handwriting, some bordering on the illegible, which we can nonetheless recognise with little effort, puts severe strain on the template theory. The fact that we can also recognise fonts we have never seen before poses a further challenge.

The feature theory, on the other hand, posits that long-term memory stores a set of features which are essential to each symbol. For example, to recognise the letter “w”, the feature set might include two lines slanting to the left and two lines slanting to the right, as in \ / \ /. Thus as soon as our sensory register receives an input of such lines, we process it into a “w”. The feature theory posits three steps in pattern recognition, collectively called Analysis-by-Synthesis. In this process the pattern is broken down into its features, these features are matched against LTM, and finally a decision about the pattern is taken. With this theory we require far fewer items in long-term memory. The analysis-by-synthesis process, as described so far, is completely driven by the data that impinges on the sensory organs.
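The three steps can be sketched in code. The letters and the stroke “features” below are invented for illustration only; they are not the feature sets proposed in the psychological literature.

```python
# Toy analysis-by-synthesis. A hypothetical long-term-memory store:
# each letter is a small set of strokes (features), not a full template.
LTM = {
    "V": {"left_slant", "right_slant"},
    "W": {"left_slant", "right_slant",
          "second_left_slant", "second_right_slant"},
    "X": {"left_slant", "right_slant", "crossing"},
    "T": {"vertical", "horizontal_top"},
}

def recognise(features):
    """Three steps:
    1. the input has already been analysed into `features`;
    2. every candidate in LTM is scored by feature overlap;
    3. the best-scoring candidate is the decision."""
    def score(letter):
        stored = LTM[letter]
        # reward shared features, penalise missing or extra ones
        return len(stored & features) - len(stored ^ features)
    return max(LTM, key=score)

print(recognise({"left_slant", "right_slant",
                 "second_left_slant", "second_right_slant"}))  # -> W
print(recognise({"left_slant", "right_slant"}))                # -> V
```

Note that the sketch is purely data-driven, which is exactly the limitation discussed next: nothing in it can use the context in which the strokes appear.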

One challenge this theory faces is ambiguity: how do we recognise patterns when the sensory data alone is not a good enough discriminator? In particular, the theory does not account for our ability to use the context in which the patterns appear. In many cases it turns out that we rely on other knowledge and information to make sense of patterns, and in such cases the feature theory alone cannot provide a good explanation. For example, consider the Greek letter $\Delta$. Though we can identify it as such, the meaning it conveys can depend heavily on the context. We take three such examples.

  • If it is seen in a sentence in Greek, it will be interpreted as the sound “de”: Το Δελχί είναι η πρωτεύουσα της Ινδίας (Delhi is the capital of India).
  • Now if the same letter $\Delta$ is seen in a mathematical context such as $\Delta ABC \cong \Delta PQR$, it represents a triangle and the sentence is read as “Triangle ABC is congruent to triangle PQR”.
  • Finally, if the symbol $\Delta$ appears in a physics formula, let’s say $\Delta E = E_{2} - E_{1}$, it represents the difference between two values of $E$.

Or consider the two sentences below, in which the same visual pattern appears in two different contexts:

In the first sentence we will probably read it as “The number of participants was 190 (one hundred and ninety)”, while in the second sentence we would read it as “I go there often”. Note that the visual pattern is the same in both sentences, yet the context makes all the difference in how we interpret it. From such experiences we must conclude that context affects pattern recognition by activating conceptual information from LTM and pre-synthesising the pattern. Our cognitive system adds information based on context to the perceptual data in order to make sense of the patterns; context establishes what to expect in the incoming patterns.
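This top-down use of context can be sketched with a toy disambiguator for glyphs that look alike in many fonts and hands, such as 1/I and 0/O: the unambiguous neighbours supply the context that decides which reading to synthesise. Everything here is an invented illustration, not a cognitive model.

```python
# Glyphs whose "sensory data" is ambiguous, with their alternate reading.
AMBIGUOUS = {"1": "I", "I": "1", "0": "O", "O": "0"}

def disambiguate(token):
    """Resolve ambiguous glyphs using the unambiguous neighbours:
    digits are expected among digits, letters among letters."""
    unambiguous = [c for c in token if c not in AMBIGUOUS]
    digit_context = (sum(c.isdigit() for c in unambiguous)
                     >= sum(c.isalpha() for c in unambiguous))
    out = []
    for c in token:
        if c in AMBIGUOUS and c.isdigit() != digit_context:
            c = AMBIGUOUS[c]  # synthesise the contextually expected glyph
        out.append(c)
    return "".join(out)

print(disambiguate("2O6"))   # digit context  -> "206"
print(disambiguate("LI0N"))  # letter context -> "LION"
```

The same raw glyph “O”/“0” comes out differently in the two calls, which is the point of the two-sentence example above: identical perceptual input, different synthesised pattern.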

Now, this adaptive feature of our cognitive system is very useful and makes us much faster than we would be if we depended on the perceptual information alone. But at times it can also be maladaptive. This notion brings us back to the title of this post. When I completed the first draft of my thesis and sent it out for comments, I discovered to my extreme horror and embarrassment that it was full of elementary grammatical mistakes. In the flow of writing down my ideas, I had chosen to just go with them. Though I did review what I had written, I did not find any obvious faults in it. This is something that you may have experienced too: it is difficult to see “obvious” breaks in ideas or abrupt endings in your own writing, and this of course also includes “trivial” grammar rules of punctuation and articles. But when you are proof-reading someone else’s work, both the “obvious” and the “trivial” errors are markedly visible. I can say this because I have copy-edited and proof-read several long and short works, and found in them the very same errors that I could not find in my own work. Thankfully, in my thesis most of the issues were of “trivial” grammar, and no “obvious” conceptual or fundamental issues were pointed out. I then furiously began correcting the “trivial” grammar issues in my work.

 

Why is this so? Seen in the framework of the analysis-by-synthesis model: we know what we have written, or wanted to write, and our pre-synthesising cognitive system fills in the obvious gaps, creating the required and expected patterns contextually where they are missing. We tend to “skip” over our own writing as we read it in a flow, with the background and context of why the text was written and what it wants to say. All the “obvious” and “trivial” errors and gaps are ironed out by the additional contextual information that we have about our own work. So we have to be extra careful while proof-reading our own work. When we read work written by someone else, all this background information is not available to us, so the pre-synthesising of patterns happens at a lower level. This lets us find the “obvious” and “trivial” errors and gaps much more easily.

I have found that though I can do a good job of proof-reading another person’s work on a computer (using tracked changes and comments in a word processor), for proof-reading my own work I usually take a printout and work on it with a pen. The concrete form of the work perhaps helps me minimise the pre-synthesising that happens. I usually use red ink for proof-reading, perhaps reminiscent of how teachers in school grade assignments.

 

References

Hunt, R. R., & Ellis, H. C. (1999). Fundamentals of cognitive psychology (Chapter 2). McGraw-Hill.

Dix, A. (2006). Writing as third order experience. Interfaces, 68, pp. 19-20.

 

Einstein on his school experience

One had to cram all this stuff into one’s mind, whether one liked it or not. This coercion had such a deterring effect that, after I had passed the final examination, I found the consideration of any scientific problems distasteful to me for an entire year … It is, in fact, nothing short of a miracle that the modern methods of instruction have not yet entirely strangled the holy curiosity of inquiry; for this delicate little plant, aside from stimulation, stands mainly in need of freedom; without this it goes to wreck and ruin without fail. It is a very grave mistake to think that the enjoyment of seeing and searching can be promoted by means of coercion and a sense of duty. To the contrary, I believe that it would be possible to rob even a healthy beast of prey of its voraciousness, if it were possible, with the aid of a whip, to force the beast to devour continuously, even when not hungry – especially if the food, handed out under such coercion, were to be selected accordingly.

Seeing that, almost a hundred years later, so little has changed gives one an idea of how little effort has gone into changing how we learn.

Genetics and human nature

Usually, in discussions regarding human nature, there is a group of academics who would like to attribute all differences amongst humans to non-genetic components. That is to say, cultural heritage plays the more important, or the only important, role in the transfer of characteristics. In the case of education, this is one of the most contested topics. The nature-nurture debate, as it is known, goes to the heart of many theories of human behaviour, learning and cognition. The behaviourist school was very strong until the mid-20th century. This school strongly believed that human learning depends entirely on the environment, with genes (traits inherited from the parents) playing little or no role. This view was seriously challenged on multiple fronts, with attacks from at least six fields of academic inquiry: linguistics, psychology, philosophy, artificial intelligence, anthropology, and neuroscience. The advances in these fields and the results of their studies strongly countered the core aspects of behaviourism. Though the main thrust of the behaviourist ideas seems to be lost, the spirit still persists, in the form of academics who deny any role for genes, or even shun the possibility of genes having any effect on human behaviour. They say it is all the “environment”, or nurture, as they name it. Any attempt to study genetic effects is immediately classified as fascist or Nazi, or equated with social Darwinism and eugenics. But over several decades now, studies which look at these aspects have given us a mounting body of evidence to lay that idea to rest. Genes do play a definitive role, and what we are learning is that the shared home environment may play very little role, perhaps none at all, in determining how we turn out: estimates range from 0 to 10%.
Genes, on the other hand, are estimated to account for about 50%, with the remaining 40% or so attributed to the “unique” environment that the individual experiences. Typically, some individuals in academia argue strongly against the use of genetics, or even the mention of the word, in connection with education or any parameters related to it. But this has more to do with their ideological positions, which they do not want to change, than with actual science. This is the Kuhnian drama of a changing science at work: the old guard does not want to give up on its pet theories even in the face of evidence against them. This is not a unique case; the history of science is full of such episodes.
Arthur Jensen was one of the pioneers of studying the genetic heritability of learning, and he lived through the behaviourist and strong-nurture phases of the debate. This quote of his summarises his stand very well.

Racism and social elitism fundamentally arise from identification of individuals with their genetic ancestry; they ignore individuality in favor of group characteristics; they emphasize pride in group characteristics, not individual accomplishment; they are more concerned with who belongs to what, and with head-counting and percentages and quotas than with respecting the characteristics of individuals in their own right. This kind of thinking is contradicted by genetics; it is anti-Mendelian. And even if you profess to abhor racism and social elitism and are joined in battle against them, you can only remain in a miserable quandary if at the same time you continue to think, explicitly or implicitly, in terms of non-genetic or antigenetic theories of human differences. Wrong theories exact their own penalties from those who believe them. Unfortunately, among many of my critics and among many students I repeatedly encounter lines of argument which reveal disturbing thought-blocks to distinguishing individuals from statistical characteristics (usually the mean) of the groups with which they are historically or socially identified.
–  Arthur Jensen, Educability and Group Differences 1973

As the quote remarks, theories which are wrong, or are proven to be wrong, certainly exact penalties from their believers. One case from the history of science is the rise and rise of Lysenkoism in the erstwhile USSR. The current group of academics who strongly deny any involvement of genes in theories of human learning are no different.

Implicit cognition in the visual mode

Images become iconified, with the image representing an object or phenomenon, but this happens by enculturation rather than by training. An example that elaborates this notion is the painting The Treachery of Images by the Belgian surrealist artist René Magritte. The painting is also sometimes called This is not a pipe. The picture shows a pipe, and below it Magritte painted “Ceci n’est pas une pipe.”, French for “This is not a pipe.”
When one looks at the painting, one exclaims: “Of course it is a pipe! What is the painter trying to say here? We can all see that it is indeed a pipe; only a fool would claim otherwise!” But then this is what Magritte has to say:

The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it’s just a representation, is it not? So if I had written on my picture ‘This is a pipe’, I’d have been lying!

“Aha! Yes! Of course!” you say. “Of course it is not a pipe! Of course it is a representation of a pipe. We all know that! Is this all the painter was trying to say? It’s a sort of letdown; we were expecting something more abstract from a surrealist.” We see that the idea that the painting is a representation is so deeply embedded in our mental conceptual constructs that we take it for granted all the time. It has become so basic to our everyday social discourse that by default we assume it to be so. Hence the confusion about the image of the pipe. Magritte exposes this simple assumption that we so often ignore. This is true for all the graphics that we see around us. The assumption is implicit in everything we experience in society: the representation becomes the thing itself, for it is implicit in the way we talk and communicate.
(Image: Big B and D)
When you look at a photo of something or someone, you recognise it. “This is Big B!” you say, looking at the photo. But then you have already implicitly assumed that the representation of Big B is Big B. This implicit assumption comes from years of implicit training, from being submerged in the sea of visual artefacts that surround (and drown) us. This association between the visual representation and the reality it represents has become the central theme of the visual culture we live in. The training needed for such an association comes from the peers and mentors who surround us from childhood. The meanings and associations of images are taught (or caught) over the years, so much so that we assume the abstract association is the normal way things are. In this way it becomes an implicit truth, though when one is pressed, the explicit connections are brought out.
Yet when it comes to understanding images in science and mathematics, the same thing doesn’t happen. There is no enculturation of children into understanding the implicit meaning of these images. There are hardly any peers or mentors whose actions and practices can be imitated by young, impressionable learners. The practice which comes so naturally in other domains (identifying an actor with a picture of the actor, or a physical space with a photo of it) doesn’t happen in science and mathematics classrooms. The notion of practice is dissociated from what is done to develop this understanding in children. A practice-based approach, in which the images become synonymous with their implied meanings, might be one very positive way out; this is, after all, how practitioners of science and mathematics learn their trade.

Mathematical Literacy Goals for Students

The National Council of Teachers of Mathematics (NCTM) proposed these five goals to cover the idea of mathematical literacy for students:

  1. Learning to value mathematics: Understanding its evolution and its role in society and the sciences.
  2. Becoming confident of one’s own ability: Coming to trust one’s own mathematical thinking, and having the ability to make sense of situations and solve problems.
  3. Becoming a mathematical problem solver: Essential to becoming a productive citizen, which requires experience in a variety of extended and non-routine problems.
  4. Learning to communicate mathematically:  Learning the signs, symbols, and terms of mathematics.
  5. Learning to reason mathematically: Making conjectures, gathering evidence, and building mathematical arguments.
National Council of Teachers of Mathematics, Commission on Standards for School Mathematics. (1989). Curriculum and evaluation standards for school mathematics. National Council of Teachers of Mathematics.