Explosives or Not

We have earlier seen some quotes from the book The Golem: What You Should Know About Science. There are two companion volumes to this book: The Golem at Large: What You Should Know about Technology and Dr. Golem: How to Think about Medicine. This series of books by Harry Collins and Trevor Pinch provides us with examples from these fields which are most of the time taken to be ‘uncontested’. For example, in the first volume they discuss the famous 1919 experimental confirmation by Eddington of Einstein’s predictions from general relativity. This experiment is told as a matter-of-fact anecdote in physics, in which petty borders of nationalism could not stop physics and physicists. But as they show in the book, despite scanty or almost no positive evidence, Eddington “concluded” that the predictions were true. This they term the “experimenter’s regress”.

The experimenter’s regress occurs when scientists cannot decide what the outcome of an experiment should be and therefore cannot use the outcome as a criterion of whether the experiment worked or not.
The Golem at Large, p. 107

In The Golem at Large they present us with many examples of this from the field of technology. One of the examples is the Challenger accident, which Feynman made famous with his dramatic O-ring demonstration during the televised hearings. In this case they call the “experimenter’s regress” the “technologist’s regress”.
Recently I read about an episode in India (all further quotes are from the same link) which fits in very well with these cases. This is regarding the baggage scanning machines installed at Indian airports. They were bought at 2 crore rupees per unit in 2010. But in August 2011 they failed tests of the very tasks they were supposed to perform.

The scanners are called in-line baggage inspection systems as they scan bags that go into the cargo hold of the aircraft after passengers check in and hand over their luggage to the airline. They use x-ray imaging and “automatic intelligence” to verify the contents of bags and determine whether they include explosives.

Now one would think that this would be as easy as it gets: either the scanner detects explosives present in the baggage or it does not. But it is not as simple as it seems. When the tests were done, the testers found that the machines failed.

During the tests, security sources said that a technological specification committee of officials from the IB, RAW, SPG, NSG, BCAS and the civil aviation ministry passed bags containing 500 gm of six kinds of explosives, including PETN and ammonium nitrate, as well as IEDs through these systems. The scanners did not flag any of these bags as suspicious, the sources said.

So after this “failure” the companies which supplied these machines were asked to improve upon them or to share the software so that they could be recalibrated. But the companies, and interestingly the Airports Authority of India (AAI), said that the testing methods were at fault. Now if the explosives were passed through and the machines did not detect them, how can the companies say that it is the testing methods that were not working?
The machines work on the so-called 70:30 principle.

“Though it works on a 70:30 principle, if there is an explosive in the 70 per cent, it will throw up the image of each and every bag that has dangerous substances. We would like to emphasise that the systems supplied and installed by our company at Indian airports are of state-of-the-art technology and are fully compliant with current standards.”
The 70:30 principle refers to the “automatic intelligence” used by Smiths Detection machines to clear 70 per cent of the baggage and reject the rest, according to the Airports Authority of India (AAI). “The machines reject 30 per cent of the baggage, the images of which are then sent to the screener. These systems have automatic intelligence capability and have been tested against a wide range of substances considered dangerous for aircraft. The details and specifications are never disclosed, or else terrorists would understand the software,”
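
To make the quoted workflow concrete, here is a minimal sketch in Python of how such a two-stage screening pipeline might be organised. It assumes, purely for illustration, that the machine’s “automatic intelligence” assigns each bag a threat score and auto-clears anything below a threshold, queuing the rest for a human screener; the Bag class, the machine_score stand-in and the 0.7 threshold are my own hypothetical choices, since the actual algorithm is undisclosed.

```python
# Hypothetical sketch of the "70:30" two-stage screening idea: the machine
# auto-clears bags it scores as clearly safe (about 70 per cent here) and
# sends the rest to a human screener. The scoring function and threshold
# are illustrative stand-ins, not the undisclosed Smiths Detection algorithm.

import random
from dataclasses import dataclass


@dataclass
class Bag:
    bag_id: int
    threat_score: float  # 0.0 = clearly safe, 1.0 = clearly dangerous


def machine_score() -> float:
    """Stand-in for the scanner's undisclosed 'automatic intelligence'."""
    return random.random()


def screen(bags, clear_threshold=0.7):
    """Auto-clear low-scoring bags; queue the rest for a human screener."""
    cleared, for_human_review = [], []
    for bag in bags:
        if bag.threat_score < clear_threshold:
            cleared.append(bag)
        else:
            for_human_review.append(bag)
    return cleared, for_human_review


if __name__ == "__main__":
    bags = [Bag(i, machine_score()) for i in range(1000)]
    cleared, flagged = screen(bags)
    print(f"auto-cleared: {len(cleared)}, sent to screener: {len(flagged)}")
```

On this sketch, the failure reported above would correspond to bags containing known explosives being scored below the clearing threshold, so they were never flagged for the screener at all.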

But if the machines are doing the job anyway, why not do it 100 per cent? And the funny thing is that they are not sharing the software, which is the main agenda of proprietary software companies. This is a case where people realize that they are just users of the software in question. The argument that “or else terrorists would understand the software” does not hold: they do not need to understand it if the machine is going to reject a whole lot of bags anyway. And in any case, if there are bugs or holes in the software, a thousand eyes will repair them much faster than a few. The companies further say that

“The technology or physics is that x-ray based system can’t detect explosives, it is only approximate detection of dangerous substances,”

Why the AAI is siding with the companies (they are in fact defending them) is something worth pondering.

AAI people say “The problem could be due to the sheer ignorance of officers who lacked the skills to test for explosives,”

With still no unanimity on the test results, the case truly presents us with a “technologist’s regress”.

The Golem at Large

Recently I completed reading the second book in the Golem series, the complete title being The Golem at Large: What You Should Know about Technology by Harry Collins and Trevor Pinch. The book discusses cases from the field of technology in which there is a ‘regress’, in which even experts are not able to decide objectively what to make of the results of experiments, which at first sight seem to be so objective.
Some of the examples that they choose are well known, others are not. For example, the much-famed demonstration by Richard Feynman with the O-rings is brought down from its almost cult status: looked at with all the background, the demonstration seems rather naive. Similarly, many other examples demythologize episodes from different technologies.
Some of the quotes that I have liked are given below; the numbers are the page numbers in the book.
+ 4 It would, of course, be foolish to suggest that technology and
science are identical. Typically, technologies are more directly
linked to the worlds of political and military power and business
influence than are sciences.
+ 6 But disputes are representative and illustrative of the roots of
knowledge; they show us knowledge in the making.
+ 10 It would be wrong to draw any conclusions for science and
technology in general from wartime statements; wartime claims
about the success of the missile reflect the demands of war rather
than the demands of truth.
+ 28 As always, if only we could fight the last war again we would
do it so much better.
+ 28 Just as military men dream of fighting a war in which there is
never any shortage of information or supplies, while the enemy
always does the expected, so experts have their dreams of
scientific measurement in which signal is signal and noise follows
the model given in the statistical textbooks. As the generals
dream of manoeuvres, so the experts dream of the mythical model
of science.
+ 28 Even when we have unlimited access to laboratory conditions, the
process of measurement does not fit the dream; that was the point
of our earlier book – the first volume of the Golem series.
+ 32 Skimp, save and cut corners, give too much decision-making
power to reckless managers and uncaring bureaucrats, ignore the
pleas of your best scientists and engineers, and you will be
punished.
+ 38 Whether two things are similar or different, Wittgenstein
noted, always involves a human judgement.
+ 40 The `correct’ outcome can only be achieved if the experiments or
tests in question have been performed competently, but a competent
experiment can only be judged by its outcome.
+ 62 The treatment of the controversial aspects must be different to
the uncontroversial aspects. The same is true of what we loosely
refer to as experiments: one does not do experiments on the
uncontroversial, one engages in demonstrations.
+ 64 In an experiment, that would be cheating, but in a display, no
one would complain. A demonstration lies somewhere in the middle
of this scale. Classroom demonstrations, the first bits of science
we see, are a good case. Teachers often know that this or that
`experiment’ will only work if the conditions are `just so’, but
this information is not vouchsafed to the students.
+ 64 A demonstration or display is something that is properly set
before the lay public precisely because its appearance is meant
to convey an unambiguous message to the senses, the message that
we are told to take from it. But the significance of an experiment
can be assessed only by experts.
+ 71 Anything seen on television is controlled by the lens, the
director, the editor and the commentators. It is they who control
the conclusions that seem to follow from the `direct evidence of
the senses’.
+ 74 The public were not served well, not because they necessarily
drew false conclusions, but because they did not have access to
evidence needed to draw conclusions with the proper degree of
provisionality. There is no short cut through the contested
terrain which the golem must negotiate.
+ 77 A vast industry supported by national governments makes sure it
understands how oil is found, where it is found and who has the
rights to find it.
+ 82 In some ways it is easier to delve into the first few
nanoseconds of the universe than to reconstruct something buried
deep in the core of the earth.
+ 86 This is the `experimenter’s regress’. If you believe that
microbiological activity exists at great depths then this is
evidence that a competently performed experiment has been carried
out. If you believe that microbiological activity is impossible or
extremely unlikely then the evidence of biological activity is
evidence for doubting the experiment. Experiment alone cannot
settle the matter.
+ 91 In short, Gold’s non-biological theory and its assessment are
intertwined with the politics and commerce of oil
exploration. There is no neutral place where a `pure’ assessment
of the validity of his claims can be made.
+ 96 With several hundred equations to play with, this is an area
where `theory’ and `guesswork’ are not as far apart as
conventional ideas about science would encourage us to think.
+ 102 I think there are really two different approaches. One is to
say that this is a branch of science and that everything must be
based on objective criteria which people can understand. The other
is to say that is just too inflexible, and that there’s something
called judgement – intuition if you like – which has its place in
the sciences and that it’s the people who are intuitive who are
successful.
+ 104 It is also possible to argue that modellers who did not suffer from big
mistakes were lucky while some others were unlucky to have been wrong.
+ 106 Even if you believe that large errors are bound to prove you
wrong, you may still argue about the meaning of `large’ and you
may still think that the difference between accuracy and
inaccuracy was not clever economics but luck. Finally, you may
always say that the economy changed radically.
+ 106 … it was not the model but the economy that was wrong.
+ 107 The experimenter’s regress occurs when scientists cannot
decide what the outcome of an experiment should be and therefore
cannot use the outcome as a criterion of whether the experiment
worked or not.
+ 107 Oh absolutely, that’s why it’s absolutely pointless to publish
these forecast error bands because they are extremely
large. . . . I’m all for publishing full and frank statements but
you see the difficulty [with] these standard errors is that
they’re huge.
+ … In fact, we could have done this at the National Institute in
the mid 70s, but we suppressed it on the grounds that the standard
errors were so large, that it would have been difficult for
non-specialists, you know people using the models, using the
forecasts, to appreciate. It would have discredited them.
+ 108 Science is often used as a way of avoiding responsibility;
some kinds of fascism can be seen as the substitution of
calculation for moral responsibility.
+ 110 That is, it selected those who were `. . . willing to
subordinate their education to their careers’.
+ 111 The economists who build the models deserve credibility, but
their models do not; one should not use the same criteria to judge
expert advice as one uses to judge the coherence of a model.
+ 124 Flipping to and fro between science being all about certainty
and science being a political conspiracy is an undesirable state
of affairs.
+ 149 In effect, a group of lay people had managed to reframe the
scientific conduct of clinical research: they changed the way it
was conceived and practised.
+ 151 Feynman gives the impression that doubts can always be simply
resolved by a scientist who is smart enough.
+ 151 The danger is always that enchantment is the precursor of
disenchantment.
+ 153 Golem science and technology is a body of expertise, and
expertise must be respected. But we should not give unconditional
respect before we understand just what the expertise comprises and
whether it is relevant. To give unconditional respect is to make
science and technology a fetish.