Illustrations for Alice in Wonderland – Part 3 – Peter Newell

Peter Newell was a prolific American illustrator and author. The books with his illustrations for Alice’s Adventures in Wonderland and Through the Looking Glass were published circa 1901. These are paintings rather than line drawings. The books were printed in black and white/grayscale, and so are the illustrations. But there must exist a set of full-colour versions of these paintings. Some of them can be seen here and there on the interwebs, but I could not find a complete collection. If you know of colour versions of these paintings, please let me know.

 

Alice’s Adventures in Wonderland

Down she came upon a heap of dry leaves.

 

The poor little thing sat down and cried.

“Now I’m opening out like the largest telescope that ever was!”

 

The Rabbit started violently.

 

The Mouse gave a sudden leap out of the water.

The Caucus-Race.

 

The Dodo solemnly presented the thimble.

“Mine is a long and a sad tale,” said the Mouse.

On various pretexts they all moved off.

“Why, Mary Ann, what are you doing here?”

“What’s that in the window?”

“Catch him, you by the hedge.”

The poor little Lizard, Bill, was in the middle, being held up.

The Puppy jumped into the air.

 


The Caterpillar and Alice looked at each other.

Old Father William standing on his head.

 

Old Father William balancing an Eel on the end of his nose.

Old Father William turning a back somersault in at the door.

“Serpent!” screamed the Pigeon.

Then they both bowed low and their curls got entangled.

 

Singing a sort of lullaby.

 

So she set the little creature down.

This time it vanished quite slowly.

He dipped it into his cup of tea and looked at it again.

They lived at the bottom of a well.

 

Don’t go splashing paint over me.

 

“Off with her head!”

It would twist itself round and look up in her face.

“Don’t look at me like that.”

The Hedge-hog was engaged in a fight with another Hedge-hog.

 

“Tut, tut, child!” said the Duchess.

 

They began solemnly dancing round and round Alice.

“Will you walk a little faster,” said a Whiting to a Snail.

Alice began telling them her adventures.

“Come on!” cried the Gryphon.

How big is the shadow of the Earth?


The Sun is our ultimate source of light on Earth. The side of the Earth facing the Sun is bathed in sunlight, and due to our rotation this side changes continuously. The side facing the Sun has day, while the other side has night, being in the shadow of the entire Earth. The Sun being an extended source (and not a point source), the Earth’s shadow has both an umbra and a penumbra. The umbra is the region where no light falls, while the penumbra is a region where some light falls: in the case of an extended source like the Sun, light from some part of the Sun does reach the penumbra. Occasionally, when the Moon passes through this shadow, we get a lunar eclipse. When the Moon falls entirely within the umbra, we get a total lunar eclipse; when only part of it passes through the umbra, we get a partial lunar eclipse. On the other hand, when the Moon is between the Earth and the Sun, we get a solar eclipse. In the places where the umbra of the Moon’s shadow falls, which form a narrow path on the surface of the Earth, a total solar eclipse is seen, and in the places where the penumbra falls a partial solar eclipse is visible. But how big is this shadow? How long is it? How big is the umbra, and how big is the penumbra? We will do some rough calculations to estimate these answers, and some more, to understand the phenomena of eclipses.

We will start with the reasonable assumption that both the Sun and the Earth are spheres. The radii of the Sun, the Earth and the Moon, and the respective distances between them, are known. The Sun–Earth–Moon system being a dynamic one, the distances change depending on the configuration, but we can assume average distances for our purpose.

[The image above is interactive; move the points to see the changes. This construction is not to scale! The simulation was created with Cinderella.]

 

The diameter of the Earth is approximately 12,742 kilometers and the diameter of the Sun is about 1,391,000 kilometers, hence their ratio is about 109, while the distance between the Sun and the Earth is about 149 million kilometers. Here are a couple of illustrations depicting this at the correct scale.

 

 

The Sun’s diameter (with center A) is represented by DF, while EG represents the Earth’s diameter (with center C). We connect the centers of the Earth and the Sun. The umbra is limited in extent: it is the cone with base EG and height HC. The penumbra is infinite in extent, expanding outward from EG, and the intensity changes gradually from umbra to penumbra. If we take a projection of the system on a plane bisecting the spheres, we get two similar triangles HDF and HEG. We have made the assumption that the properties of similar triangles from Euclidean geometry are valid here.

In the schematic diagram above (not to scale), the umbra of the Earth terminates at point H. H is the point from which lines can be drawn tangent to both circles. (How do we find such a point? Is this point unique?) Now, by a simple ratio of similar triangles, we get

$$
\frac{DF}{EG} = \frac{HA}{HC}  = \frac{HC+AC}{HC}
$$

Therefore,

$$
HC = \frac{AC}{DF/EG -1}
$$

Now, $DF/EG = 109$ and $AC$ = 149 million km; substituting the values, we get the length of the umbra $HC \approx$ 1.37 million km. When the Moon, which is at an average distance of 384,400 kilometers, falls in this umbra, we get a total lunar eclipse. The composite image of different phases of a total lunar eclipse below depicts this beautifully. One can “see” the round shape of Earth’s umbra in the central three images of the Moon (red coloured) when it is completely in the umbra of the Earth (Why is it red?).
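The arithmetic can be checked with a short script (a sketch using the average values quoted above, so the result is only approximate):

```python
# Length of Earth's umbra from similar triangles: HC = AC / (DF/EG - 1),
# where DF is the Sun's diameter, EG the Earth's diameter, AC the Sun-Earth distance.
d_sun = 1_391_000     # km, diameter of the Sun (DF)
d_earth = 12_742      # km, diameter of the Earth (EG)
ac = 149_000_000      # km, average Sun-Earth distance (AC)

hc = ac / (d_sun / d_earth - 1)
print(f"Length of Earth's umbra: {hc:,.0f} km")  # roughly 1.38 million km

# The Moon orbits at about 384,400 km, well inside this umbra,
# so it can be engulfed completely: a total lunar eclipse.
print(384_400 < hc)  # True
```

With these averages the umbra comes out near 1.38 million km; slightly different input values give the 1.37 million km quoted above.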

When the Moon passes only partly through the umbra, we get a partial lunar eclipse, as shown below: only a part of Earth’s umbra falls on the Moon.

So if the Moon were a bit further away, let’s say at 500,000 km, we would not get total solar eclipses, since the Moon’s own umbra would fall short of the Earth (we calculate its length below). Also, due to the tilt of the Moon’s orbit, not every full moon brings a lunar eclipse and not every new moon a solar one: most of the time the Moon passes outside both the umbra and the penumbra.

The observations of the lunar eclipse can also help us estimate the diameter of the Moon.

A similar principle applies (though the numbers change) to solar eclipses, when the Moon is between the Earth and the Sun. In the case of the Moon, the ratio of the diameters of the Sun and the Moon is about 400, with the distance between them approximately equal to the distance between the Earth and the Sun. Hence, using the above formula, the length of the Moon’s umbra is about 0.37 million km, or 370,000 km. This makes the total eclipse visible only over a small region of the Earth; even the penumbra is not large. (How wide are the umbra and the penumbra of the Moon on the surface of the Earth?)
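The same formula, with the Moon in place of the Earth, gives the length of the Moon’s umbra; from it we can also estimate how wide the umbral spot is at the Moon’s actual distance. This is a rough sketch: the perigee and apogee distances are standard figures, and the width estimate ignores the Earth’s radius and the slant of the cone.

```python
d_sun = 1_391_000       # km, diameter of the Sun
d_moon = 3_474          # km, diameter of the Moon
sun_moon = 149_000_000  # km, Sun-Moon distance (about the Sun-Earth distance)

# Length of the Moon's umbra, by the same similar-triangle formula
umbra = sun_moon / (d_sun / d_moon - 1)
print(f"Length of Moon's umbra: {umbra:,.0f} km")  # about 373,000 km

# The umbral cone narrows linearly from d_moon at the Moon to zero at its tip,
# so its rough width at a given Earth-Moon distance is:
def umbra_width(earth_moon_distance):
    return d_moon * (1 - earth_moon_distance / umbra)

print(umbra_width(363_300))  # near perigee: positive, a total eclipse is possible
print(umbra_width(405_500))  # near apogee: negative, the umbra falls short (annular eclipse)
```

The width near perigee comes out to only about a hundred kilometres, which is why the path of totality is so narrow; near apogee the umbra does not reach the Earth at all, giving an annular eclipse.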

When only penumbra is falling on a given region, we get the partial solar eclipse.

You can explore when solar eclipse will occur in your area (or has occurred) using the Solar Eclipse Explorer.

This is how the umbra of the Moon looks from space.

The same thing would happen to a globe held in sunlight: its shadow would be governed by the same ratios.

Thus we see that the numbers are almost exactly matched to give us total solar eclipses. Sometimes, when the Moon is a bit further away, we instead get what is called an annular solar eclipse, in which the Sun is not covered completely by the Moon. Total lunar eclipses are relatively common (on average about twice a year) compared to total solar eclipses (about once every 18 months to 2 years). Another coincidence is that the angular diameters of the Moon and the Sun are almost matched in the sky: both are about half a degree (the diameter/distance ratio is about 1/110 for each). Combined with the ratio of distances, we are fortunate to get total solar eclipses.

Seeing and experiencing a total solar eclipse is an overwhelming experience, even when we understand why and how it happens. It was more so in the past, when the Sun, considered a god, went out in broad daylight. This was considered (and is still considered by many) a bad omen. But how did ancient people understand eclipses? There is a certain periodicity in eclipses, which can be found by collecting a large number of observations and finding patterns in them. This was done by the ancient Babylonians, who had continuous records of eclipses covering several centuries. Of course, sometimes an eclipse happens in some other part of the Earth and is not visible in a given region, yet it could still be predicted. To be able to predict eclipses was a great power, and the people who could do it became the priestly class. But the Babylonians did not have a model to explain these observations. The next stage came in ancient Greece, where models were developed to explain (and predict) the observations. This continues to our present age.

The discussion we have had applies when the light source (in this case the Sun) is larger than the opaque object (in this case the Earth). If the light source is smaller than the object, what happens to the umbra? It turns out that the umbra is infinite in extent. You see this effect when you bring your hand close to a candle flame and the shadow of your hand becomes ridiculously large! See what happens in the interactive simulation above.

References

James Southall, Mirrors, Prisms and Lenses (1918), Macmillan Company.

Eric Rogers, Physics for the Inquiring Mind (1969), Princeton.

 

Remarkable Curves 1: The Straight Line and the Circle

 

 

 

 

The Straight Line and the Circle

(Note: all the images are interactive; move the points around to see. This is a dynamic mathematics interactive web page made with Cinderella.)

A moving point describes a straight line when it passes from one
position to another along the shortest possible path. A straight line
can be drawn with the help of a ruler; when a pencil runs along the
edge of a ruler it leaves a trace on the paper in the form of a
straight line.

When a point moves on a surface at a constant distance from another
fixed point on the same surface it describes a circle. Because of this
property of the circle we are able to draw a circle with the help of
compasses.

The straight line and the circle are the simplest and at the same time
the most remarkable curves as far as their properties are concerned.

You are no doubt more familiar with these two curves than with
others. But you should not imagine that you know all of the most
important properties of straight lines and circles. For example, you
may not know that if the vertices of the triangles $ABC$ and $A'B'C'$
lie on three straight lines intersecting at the point $S$ (Fig. 1),
then the three points of intersection $M$, $K$, $L$ of the corresponding
sides of the triangles (the sides $AB$ and $A'B'$, $BC$ and $B'C'$,
and $AC$ and $A'C'$) must be collinear, that is, they lie on a single
straight line.
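This is Desargues’ theorem. Its statement can also be checked numerically; the sketch below uses homogeneous coordinates (a line through two points, and the meeting point of two lines, are both cross products), with the perspective centre $S$ at the origin and arbitrarily chosen points and scale factors:

```python
# Numerical check of Desargues' theorem on an arbitrary example:
# triangles ABC and A'B'C' in perspective from S = (0, 0).

def line(p, q):
    # Homogeneous coordinates of the line through points p and q
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def meet(l, m):
    # Intersection point of two lines given in homogeneous coordinates
    a1, b1, c1 = l
    a2, b2, c2 = m
    w = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / w, (c1 * a2 - c2 * a1) / w)

def collinear(p, q, r, eps=1e-9):
    # Signed area of triangle pqr vanishes iff the three points are collinear
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) < eps

# A', B', C' lie on the rays from S through A, B, C (scale factors chosen freely)
A, B, C = (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)
A2, B2, C2 = (2.0, 0.0), (0.0, 2.5), (4.0, 4.0)

M = meet(line(A, B), line(A2, B2))
K = meet(line(B, C), line(B2, C2))
L = meet(line(A, C), line(A2, C2))

print(collinear(M, K, L))  # True
```

Changing the points or the scale factors (while keeping $A'$, $B'$, $C'$ on the rays from $S$) should still print True.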

(Note: this image below is interactive, move the points to see the dynamic change!)

You are sure to know that a point $M$ moving in a plane equidistantly
from two fixed points, say $F_1$ and $F_2$, of the same plane, that
is, so that $MF_{1} = MF_{2}$, describes a straight line (Fig. 2).

But you might find it difficult to answer the question:

What type of curve will the point $M$ describe if the distance of $M$ from
$F_1$ is a certain number of times greater than that from $F_2$ (for
instance, in Fig. 3 it is twice as great)?

The curve turns out to be a circle. Hence, if the point $M$ moves in a
plane so that its distance from one of two fixed
points, $F_{1}$ or $F_{2}$, in the same plane is always proportional
to its distance from the other fixed point, that is

$$
MF_{1} = k \times MF_{2}
$$

then $M$ describes either a straight line (when the factor of
proportionality is unity) or a circle (when the factor of
proportionality is other than unity).
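This locus is the classical circle of Apollonius. Here is a quick numerical check (the foci and the factor $k = 2$ below are arbitrary choices; the centre and radius come from expanding $MF_{1}^2 = k^2 MF_{2}^2$ and completing the square):

```python
import math

F1, F2, k = (-1.0, 0.0), (1.0, 0.0), 2.0

# Expanding MF1^2 = k^2 * MF2^2 and completing the square gives a circle.
# Both foci are on the x-axis here, so the centre lies on the x-axis too:
cx = (k * k * F2[0] - F1[0]) / (k * k - 1)   # centre x = 5/3
r = k * abs(F2[0] - F1[0]) / (k * k - 1)     # radius   = 4/3

# Every point of that circle should satisfy MF1 = k * MF2
for i in range(12):
    t = 2 * math.pi * i / 12
    m = (cx + r * math.cos(t), r * math.sin(t))
    assert abs(math.dist(m, F1) - k * math.dist(m, F2)) < 1e-9

print("all sampled points satisfy MF1 = 2 * MF2")
```

Note that for $k = 1$ the denominator $k^2 - 1$ vanishes: that is exactly the degenerate case in which the locus is not a circle but the perpendicular bisector, a straight line.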

 

(This is a post to create interactive mathematics elements using Cinderella, a Free Software alternative to GeoGebra, which is no longer Free Software. The files have been exported from Cinderella as HTML interactives.)

Preternatural Photographs

A red bloom from the Indian summer. This photo has flowers from the Cannonball tree, Gulmohar and African Tulip, taken by keeping them on a notebook. Taken with one of the most basic cameras, the Canon A800.

I got my first camera c. 2005. It was the Canon S2, with 12x optical zoom and a 5 MP sensor, and it could support a 1 GB memory card. By today’s standards these specs look very primitive, but this was almost 18 years ago. I took several thousand pictures with that camera, resetting the image counter twice. Around 2015 the camera sensor gave up, and it could not be repaired. The camera had a super zoom for those days, and its macro function was excellent at focusing closely on small subjects. Then I got my hands on DSLRs: the Sony Alpha 350 and finally a Nikon D300 with an excellent selection of prime lenses. But this post is not about the cameras per se, but about the photographs.

The magnificent flower of Epiphyllum oxypetalum (popularly mistaken for brahmakamal), taken with the Canon S2.

Among all the photos that I have taken, some just stand out from the rest. The light, the focus, the exposure and the framing are just perfect. I don’t know how to explain the aesthetic appeal of these photographs. But even amongst a batch of hundreds taken in the same setting or with the same subjects, some photos just come out extraordinary compared to the rest. As a photographer you might have experienced this too.

Sophia College, Mumbai corridor, taken with Nikon D300

I am a very visually oriented person. For as long as I can remember, I have always been attracted to images and visual depictions; this visual impact is what led me to reading comics. Also, in my house there were always art projects my mother did, such as embroidery and paintings, among other things, and I was fascinated by colours and drawings. Though I am not very good at drawing, I can certainly appreciate good ones.

Water lily fields submerged near the origin of the Mooi river (taken with a Motorola E6). This is one of my favourite photos that I have taken.

But coming back to photos: some photos make a connection with your very being, your inner self. I cannot explain it, but you can experience such a photo at some level deeper than the perceptual one. From what I have gathered talking to other people “seeing” the same image, this experience might be unique to the viewer, as not everyone feels this special connection. You have your favourite photographs; they are really powerful when they speak to you directly, transcending the boundaries of space and time.

A poppy flower in a poppy field (Taken with Canon EOS 80D)

I don’t know if there is a cognitive theory that can explain this in terms of our earlier experiences with images or our social interactions with media. But this connection that you feel is real, and I can’t put it into exact words…

I asked ChatGPT to give the Turing Test

The Turing Test is a test first conceptualised by Alan Turing, a mathematician and computer pioneer. Alan Turing made many groundbreaking discoveries in computer science, including the architecture of computing devices which can run any computer programme, known after him as Turing Machines. The other well-known type of computing device architecture is the von Neumann Machine.

The Turing Test, put simply, tests the ability of a computer programme to hold a conversation with a human in a human manner. A programme which passes the Turing Test can convincingly mimic human conversation.

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine’s ability to give correct answers to questions, only on how closely its answers resembled those a human would give. (Wikipedia Entry on Turing Test)

 

Some of the earlier programmes that tried to pass the Turing test included chat programmes such as Eliza, created by Joseph Weizenbaum in the mid-1960s. The Turing test has remained the gold standard for computer programmes to achieve “human-like” thinking. With the latest ChatGPT programme from OpenAI, I thought of giving it a try. These are the questions and the responses:

Can you pass the Turing test?

As an AI language model, my primary purpose is to assist and provide useful responses to users based on the input given to me. However, passing the Turing test, which involves convincing a human evaluator that I am a human myself, is not my main goal.

While I am designed to generate human-like responses and engage in natural language conversations, there may be instances where I might not be able to accurately interpret or respond to certain queries. Nonetheless, I will strive to provide the best possible response based on the input given to me.

What questions should I ask in a Turing test?

In a Turing test, the purpose is for the evaluator to determine whether they are interacting with a machine or a human based on the responses they receive. The questions asked should be designed to assess the machine’s ability to engage in natural language conversation, understand the context and nuances of human language, and generate appropriate responses.

Here are some examples of questions that can be asked in a Turing test:

1. What is your favorite book/movie/TV show?
2. Can you tell me a little about yourself?
3. What is your opinion on [a recent news event]?
4. What is the capital city of [a country]?
5. What is the weather like today?
6. Can you recommend a good restaurant in [a city]?
7. What do you think of [a philosophical question]?
8. Can you explain [a complex scientific concept] in simple terms?
9. Have you ever traveled to [a country or city]? If so, what did you like about it?
10. Can you tell me a joke or a funny story?

These questions are designed to test the machine’s ability to engage in natural language conversation, answer questions based on personal experience, provide opinions, and demonstrate knowledge in different areas. However, the specific questions used in a Turing test will depend on the context and purpose of the test.

How would you answer questions listed in the previous chat?

Sure, I can provide sample responses to the questions listed in the previous chat:

  1. What is your favorite book/movie/TV show?
  • As an AI language model, I don’t have the ability to have personal preferences or emotions. However, I can provide recommendations based on popularity or ratings.
  2. Can you tell me a little about yourself?
  • I am ChatGPT, a large language model trained by OpenAI. My primary purpose is to assist and provide useful responses to users based on the input given to me.
  3. What is your opinion on [a recent news event]?
  • I don’t have personal opinions as I am a machine learning model, but I can provide you with information and perspectives on the topic.
  4. What is the capital city of [a country]?
  • The capital city of a country depends on which country you are asking about. Can you specify which country you are referring to?
  5. What is the weather like today?
  • I’m sorry, but as an AI language model, I don’t have access to live weather data. However, I can provide general information on weather conditions in different regions.
  6. Can you recommend a good restaurant in [a city]?
  • Certainly, I can recommend popular or highly rated restaurants in the city you specify. Can you please let me know which city you are interested in?
  7. What do you think of [a philosophical question]?
  • As an AI language model, I don’t have personal beliefs or thoughts. However, I can provide information and different perspectives on philosophical topics.
  8. Can you explain [a complex scientific concept] in simple terms?
  • I can certainly try to explain complex scientific concepts in simpler terms. Please let me know which concept you would like me to explain.
  9. Have you ever traveled to [a country or city]? If so, what did you like about it?
  • As an AI language model, I don’t have the ability to travel. However, I can provide information on different countries and cities, and their attractions or points of interest.
  10. Can you tell me a joke or a funny story?
  • Sure, here’s a joke for you: Why was the math book sad? Because it had too many problems.

How would you rate yourself in the questions that you just now answered?

As an AI language model, I don’t have the ability to rate myself or evaluate my own performance. However, I always strive to provide the best possible responses based on the input given to me, and I am constantly learning and improving through feedback and updates to my training data.

 

As you can see, some of the responses are good, while others are vague and noncommittal. How would you rate the Turing test given by OpenAI’s ChatGPT?

Why is it difficult to copy edit your own work?

When I was writing my PhD thesis, as with anyone else, it involved multiple drafts going back and forth. As far as I am concerned, writing is never a linear process. At times one cannot write even a single line in a day, and at other times you may finish a couple of sections in a few hours. Writing is difficult as it involves third-order thinking (Dix 2006). You may have several ideas with you, and you can even explicate them while talking to others, but when it comes to writing them down, we find it is not easy. Yet when we are in the “zone”, the writing task becomes a natural thing. Your creative juices flow, and the elusive ideas seem to express themselves in words. I usually experience such a zone towards the end of the whole task. The disparate-looking ideas are bound together into a coherent whole. The feeling is close to an epiphany of a strange kind. You lose track of time and experience oneness with your work, as if the concrete form of the ideas is a physical extension of your self. It can be deeply satisfying to see your ideas in a concrete form. Mihaly Csikszentmihalyi uses the term “flow” to describe such an experience.

I experience a similar thing while reading a book. There are times when even reading a couple of sentences feels like a chore, while at other times, when I am in the flow, a hundred pages are finished in a couple of hours. The reading feels effortless; words just seem to read themselves out to you. Of course, it also depends on the kind of book one is reading: technical books take longer to read.

When you are reading easily, you don’t actually read the entire word, letter by letter. Rather, there is some sort of guesswork or pre-processing that happens. Typically, by looking at the starting letter and the ending letter, and also estimating the size of the word, we can guess the word before we have read it fully. That is, our cognitive system can fill in the gaps when we are dealing with familiar information. This makes reading fast for experienced readers. Full use is made of the repertoire of words that we know, and also of the rules of grammar: we expect certain words to follow certain others, and at times our system fills in the gaps by itself when it finds some. This way the reading becomes effortless and we can make sense of the text easily. Such fast reading comes with experience and with knowing the language. When young children have difficulty in reading, they have both problems: their prediction system is not strong, so they have to read each word, and each letter in the word, individually, and only then are they able to make sense of it. Reading then boils down to being able to recognise the symbols as quickly as possible.

But how do we recognise the symbols that we see? There are several theories that attempt to explain our recognition of symbols. The template theory posits that there are as many templates in our long-term memory as there are symbols we can detect. But this assumption puts severe demands on the long-term memory, and also on the processes which would carry out the pattern recognition. A simple example which puts the template theory on the spot is that we can recognise a letter in its various forms: the sheer number of fonts and styles of handwriting, some of them bordering on illegible, which we can recognise with little effort, puts severe strain on the template theory. The fact that we can also recognise fonts we have never seen before poses a further challenge.

The feature theory, on the other hand, posits that the long-term memory stores a set of features which are essential to each symbol. For example, to recognise the letter “w”, the feature set might include two lines slanting to the left and two lines slanting to the right, such as \ / \ /. Thus, as soon as our sensory register gets an input of such lines, we immediately pre-process the input into a “w”. The feature theory posits three steps in pattern recognition, collectively called Analysis-by-Synthesis: the pattern is broken down into its features, these features are matched against LTM, and finally a decision about the pattern is taken. With this theory we require far fewer items in our long-term memory. The analysis-by-synthesis is completely driven by the data that impinges on the sensory organs.

Some of the challenges that this theory faces include how we deal with ambiguity in the recognition of patterns, especially when the data are similar. In particular, it does not account for our ability to use the context in which the patterns appear when the sensory data themselves are not a good enough discriminator. In many cases it turns out that we also rely on other knowledge and information to make sense of the patterns, in which case the feature theory alone cannot provide a good explanation. For example, consider the Greek letter $\Delta$. Though we can identify it as such, the meaning it conveys can be heavily dependent on the context. We take three such examples.

  • If it is seen in a sentence in Greek, it will be interpreted as the sound “d”: Το Δελχί είναι η πρωτεύουσα της Ινδίας (Delhi is the capital of India).
  • If the same letter $\Delta$ is seen in a mathematical context such as $\Delta ABC \cong \Delta PQR$, it represents a triangle, and the sentence is read as “Triangle ABC is congruent to triangle PQR”.
  • Finally, if the symbol $\Delta$ appears in a physics formula, let’s say $\Delta E = E_{2} - E_{1}$, it represents the difference between two values of $E$.

Or consider the two sentences below

In the first sentence we would probably read the pattern as part of “The number of participants was 190 (one hundred and ninety)”, while in the second sentence we would read it as part of “I go there often”. Note that the visual pattern is the same in both sentences; yet the context of the sentence makes all the difference in how we interpret it. From such experiences we must conclude that context affects pattern recognition by activating conceptual information from LTM, pre-synthesising the pattern. Thus our cognitive system adds information based on the context to the perceptual data to make sense of the patterns, and the context establishes what to expect in the incoming patterns.

Now, this adaptive feature of our cognitive system can be very useful, and allows us to be much faster than if we depended on the perceptual information alone. But at times it can be maladaptive too. This notion brings us back to the title of this post. As I completed the first draft of my thesis and gave it out for comments, I discovered to my extreme horror and embarrassment that it was full of elementary grammatical mistakes. In the flow of writing down my ideas, I had chosen to just go with them. Though I did review what I had written, I did not find any obvious faults in it. This is something that you might have experienced too. It is difficult to see “obvious” breaks in ideas or abrupt endings in your own writing, and this of course also covers “trivial” grammar rules of punctuation and articles. But when you are proof-reading someone else’s work, both the “obvious” and the “trivial” errors are markedly visible. I can say this as I have copy-edited and proof-read several long and short works, where I did find the very same errors that I could not find in my own work. Thankfully, in my thesis most of the issues were of “trivial” grammar, and no “obvious” conceptual or fundamental issues were pointed out. I then furiously began correcting the “trivial” grammar issues in my work.

 

Why is this so? Seen in the framework of the analysis-by-synthesis model, we know what we have written or wanted to write, and our pre-synthesising cognitive system fills in the obvious gaps, creating the required and expected patterns contextually where they are missing. We tend to “skip” over our own writing as we read it in a flow, with the background and context of why the text was written and what it wants to say. All the “obvious” and “trivial” errors and gaps are ironed out by the additional contextual information that we have about our own work. So we have to be extra careful while proof-reading our own work. When we are reading work written by someone else, all this background information is not available to us, so the pre-synthesising of patterns happens at a lower level. This lets us find the “obvious” and “trivial” errors and gaps much more easily.

I have found that though I can do a good job of proof-reading another person’s work on a computer (using tracked changes and comments in a word processor), for proof-reading my own work I usually take a printout and work on it with a pen. The concrete form of my work perhaps helps me minimise the pre-synthesising that happens. I usually use red ink for proof-reading, perhaps reminiscent of how teachers in schools grade assignments.

 

References

Hunt, R. R., & Ellis, H. C. (1999). Fundamentals of Cognitive Psychology (Chapter 2). McGraw-Hill.

A. Dix (2006). Writing as third order experience. Interfaces, 68, pp. 19–20, Autumn 2006.

 

Emphasis

emphasis | ˈɛmfəsɪs | noun (plural emphases | ˈɛmfəsiːz | ) [mass noun]

1 special importance, value, or prominence given to something: they placed great emphasis on the individual’s freedom | [count noun] : different emphases and viewpoints

2 stress given to a word or words when speaking to indicate particular importance: inflection and emphasis can change the meaning of what is said

vigour or intensity of expression: he spoke with emphasis and with complete conviction

Emphasis on something means that we want to highlight it against the rest. A common way to do this in text is to italicize the text, set it in boldface, or even underline it. At times colour is added to the text, or to its background, to highlight it. All these elements of typography work when there is a common background against which they stand out. Hence, emphasise words as required. But,

If everything is emphasised, the un-emphasised becomes emphasised.

But consider a block of text which is completely emphasised.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Thus we see that the appeal of the emphasis is lost! The only way emphasis will work is to create a background against which it stands out. Let us return to our examples above.

 

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

 

Personally, I have not used underline or highlighting, and have recently shifted to coloured italics as my choice of emphasis.

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.

Sometimes this produces very pretty results (at least I am very happy with them 🙂).

(ETBB font with OrangeRed (#FF4500) italics)

In some cases coloured boldface also gives very good results:

Either the well was very deep, or she fell very slowly, for she had plenty of time as she went down to look about her, and to wonder what was going to happen next.
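As a rough sketch of how this kind of emphasis can be produced on a web page, the snippet below generates inline-styled HTML for coloured italics and coloured boldface. The helper name `emphasize` is hypothetical, and the only value taken from this post is the OrangeRed hex code #FF4500:

```python
# Sketch: generate inline-styled HTML for coloured emphasis.
# #FF4500 (OrangeRed) is the colour mentioned above; the helper
# name `emphasize` is a hypothetical illustration, not a real API.

def emphasize(text, color="#FF4500", bold=False):
    """Wrap text in an <em> tag (italic by default) with an inline colour."""
    weight = "font-weight:bold;" if bold else ""
    return f'<em style="color:{color};{weight}">{text}</em>'

# Coloured italics:
print(emphasize("very deep"))
# Coloured boldface:
print(emphasize("very deep", bold=True))
```

In a stylesheet-based setup the same effect would of course live in a CSS rule for `em` rather than inline styles; the inline form just keeps the example self-contained.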

 

Further reading:

Elements of Typographic Style by Robert Bringhurst

EM Spectrum in Astronomy

EM Spectrum in Astronomy from Astrobites

I created a mindmap from the information in the post above:

  • Radio
    λ > 1 mm
    ν < 300 GHz

    • Objects
      • AGN Jets
      • Supernovae
      • Tidal Disruption Events
      • H II regions
      • Gamma ray bursts
      • Radio Galaxies
    • Processes
      • Synchrotron radiation
      • Free-free radiation
    • Observation
      • Ground based
    • Telescopes
      • Green Bank Telescope (GBT)
      • Five-hundred-meter Aperture Spherical Telescope (FAST)
      • Very Large Array (VLA)
      • Square Kilometre Array (SKA)
      • Low-Frequency Array (LOFAR)
      • Giant Metrewave Radio Telescope (GMRT)
  • Microwave/Sub-mm
    λ ~ 300 μm to 1 mm
    ν ~ 1 THz to 300 GHz

    • Objects
      • CMB
      • High energy phenomena
        • Relativistic jets
      • Cold dust
      • Cold gas
      • Galaxies at high z
    • Processes
      • Thermal (blackbody radiation)
    • Observation
      • Space
      • Ground
    • Telescopes
      • Space
        • Cosmic Background Explorer (COBE)
        • Wilkinson Microwave Anisotropy Probe (WMAP)
        • Planck
      • Ground
        • Submillimeter Array (SMA)
        • Atacama Large Millimeter/submillimeter Array (ALMA)
  • Infrared
    λ ~ 300 μm to 2.5 μm
    ν ~ 1 THz to 120 THz

    • Far-Infrared
      λ ~ 15 μm to 300 μm
      ν ~ 20 THz to 1 THz

      • Objects
        • Cool dust
        • Cool gas
        • star forming galaxies
        • young stellar objects
          • proto-stars
          • pre-main sequence stars
      • Processes
        • Thermal (Blackbody radiation)
      • Observation
        • Space
      • Telescopes
        • Infrared Astronomical Satellite (IRAS)
        • Infrared Space Observatory (ISO)
        • Herschel
    • Mid-Infrared
      λ ~ 2.5 μm to 15 μm
      ν ~ 120 THz to 20 THz

      • Objects
        • Cosmic dust
          • surrounding young stars
          • protoplanetary disks
          • zodiacal dust
        • Solar system objects
          • planets
          • comets
          • asteroids
      • Processes
        • Thermal (Blackbody Radiation)
      • Observation
        • Space
        • Ground
      • Telescopes
        • Ground
          • Infrared Telescope Facility (IRTF)
          • United Kingdom Infrared Telescope (UKIRT)
        • Space
          • James Webb Space Telescope
          • Wide-field Infrared Survey Explorer (WISE)
          • Spitzer
    • Near-Infrared
      λ ~ 0.8 μm to 2.5 μm
      ν ~ 380 THz to 120 THz

      • Objects
        • M-dwarfs
        • Cool stars
        • Low-mass stars
        • Galaxies
      • Processes
        • Thermal (Blackbody Radiation)
      • Observation
        • Space
        • Ground
      • Telescopes
        • Ground
          • 2MASS survey
          • Infrared Telescope Facility (IRTF)
          • United Kingdom Infrared Telescope (UKIRT)
          • Visible and Infrared Survey Telescope for Astronomy (VISTA)
        • Space
          • James Webb Space Telescope
  • Optical
    λ ~ 350 nm to 800 nm
    ν ~ 860 THz to 380 THz

    • Objects
      • Ionized gases
      • Stars
      • Galaxies
    • Processes
      • Black Body Radiation (Thermal)
      • Non-thermal
    • Observation
      • Both Ground and Space
    • Telescopes
      • Ground
        • W.M. Keck telescopes
        • Very Large Telescopes
        • Southern African Large Telescope (SALT)
      • Space
        • Hubble Space Telescope
        • Gaia
        • Kepler
        • Transiting Exoplanet Survey Satellite (TESS).
  • Ultra-violet
    λ ~ 10 nm to 350 nm
    ν ~ 3e16 Hz to 860 THz
    E ~ 120 eV to 3.5 eV

    • Objects
      • Thermal
        • O Stars
        • B Stars
        • white dwarfs
      • Non-thermal
        • AGN (continuous emission)
    • Processes
      • Blackbody radiation (thermal radiation)
      • Non-thermal sources
    • Observation
      • Ground (longest wavelengths only)
      • Space
    • Telescopes
      • AstroSAT (2015)
      • Galaxy Evolution Explorer (GALEX) (2003)
      • Hubble Space Telescope (1990)
      • Neil Gehrels Swift Observatory (2004)
  • X-Ray
    λ ~ 10 pm to 10 nm
    ν ~ 3e19 Hz to 3e16 Hz
    E ~ 120 keV to 0.12 keV

    • Objects
      • X-Ray binaries
      • AGN
      • Neutron stars
    • Processes
      • Thermal Emission
      • Free-Free emission
      • Accretion
    • Observation
      • From Space
    • Telescopes
      • Uhuru (1970)
      • Einstein (1978-81)
      • ROSAT (1990)
      • Chandra (1999)
      • XMM-Newton (1999)
      • NuSTAR (2012)
      • eROSITA (2019)
  • Gamma-Ray
    λ < 10 pm
    ν > 3e19 Hz
    E > 120 keV

    • Objects
      • AGN with Relativistic Jets
      • Gamma Ray Binaries
      • Gamma Ray Bursts
    • Processes
      • Gamma Decay
      • Pair-Annihilation
      • Shock Waves
      • Inverse-Compton Scattering
    • Observation
      • From Space
    • Telescopes
      • Compton Gamma-ray Observatory (1991)
      • International Gamma-Ray Astrophysics Laboratory (INTEGRAL) (2002)
      • Fermi (2008)
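The band edges in the mindmap can be cross-checked with ν = c/λ (and E = hc/λ for the high-energy bands). A minimal sketch in Python, using only wavelength values from the list above:

```python
# Cross-check the mindmap's band edges: nu = c / lambda, E = h * c / lambda.
C = 2.998e8        # speed of light, m/s
H_EV = 4.136e-15   # Planck constant, eV s

def frequency_hz(wavelength_m):
    """Frequency in Hz for a wavelength in metres."""
    return C / wavelength_m

def energy_ev(wavelength_m):
    """Photon energy in eV for a wavelength in metres."""
    return H_EV * C / wavelength_m

# Radio/sub-mm boundary: 1 mm -> ~300 GHz
print(f"{frequency_hz(1e-3):.3g} Hz")
# Blue edge of the optical band: 350 nm -> ~860 THz
print(f"{frequency_hz(350e-9):.3g} Hz")
# Soft X-ray edge: 10 nm -> ~124 eV (listed above as ~120 eV)
print(f"{energy_ev(10e-9):.3g} eV")
```

The small mismatch at the X-ray edge (124 eV vs the rounded 120 eV in the list) is just the usual order-of-magnitude rounding in band definitions.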


Free Graphics Illustration Resources and Repositories

Finding a good and accurate graphic or illustration for your needs is something we all struggle with. On top of that, if the requirement is that the graphic has to be free (as in freedom), the options narrow further. Sometimes you see the graphic you want, but its license terms are unknown or not agreeable to your work because it is not released freely. So what do you do? Either you use an illustration which is not a perfect fit, or you use one which breaks your work (in terms of license). Another problem is that many free resource repositories are simply not well known. I have personally come across a great many graphic resources, only to forget them once the need of the moment had passed. This post is an attempt to overcome that: a collection of the various free graphics and illustration resources and repositories that I have found useful over the years. It is also a sort of personal bookmark list for these resources, if and when I need them in the future. I hope it will be of use to others too. I will keep updating this list with new resources as I find them. If you know of any resources that are missing, please do post them in the comments.

The Internet Archive Image Search https://archive.org/details/image

Wikimedia Commons https://commons.wikimedia.org/

Open source illustrations kit https://illlustrations.co/

NYPL Public Domain Archive https://nypl.getarchive.net

David Rumsey Map Collection  https://www.davidrumsey.com/  also https://archive.org/details/david-rumsey-map-collection

Metropolitan Museum of Arts Public Domain Images also https://archive.org/details/metropolitanmuseumofart-gallery

Cleveland Museum of Art Open Access also https://archive.org/details/clevelandart

Brooklyn Museum also https://archive.org/details/brooklynmuseum

https://archive.org/details/bibliothequesaintegenevieve_image

Unsplash Free Images

NASA Images

ESO Images

Vintage Ads

Vintage Australian Print Ads

Vintage British Print Ads https://archive.org/details/vintage-british-magazine-ads

Vintage American Print Ads https://archive.org/details/vintage-american-print-ads

 

Vintage Danish Print Ads https://archive.org/details/vintage-danish-print-ads

https://archive.org/details/vintage-new-zealand-print-ads

https://archive.org/details/vintage-canadian-print-ads

https://archive.org/details/vintage-italian-print-ads

 

A library of microorganisms https://archive.org/details/cmpuj

 

National Gallery of Art also https://archive.org/details/national-gallery-of-art-images

Flickr Collections (there are several collections on Flickr which are openly licensed)

Flickr Commons

NOAA Photo Library https://flickr.com/photos/noaaphotolib/