Color Vision: One of Nature's Wonders
Evolution has dictated that we are visually oriented. The human brain and visual cortex are capable of processing
huge amounts of visual data and can quickly and efficiently recognize and extract useful information from this data.
In fact, studies have shown that we receive approximately 80% of our external information in visual form. Generally
speaking, however, most of us tend to take our visual capabilities for granted, especially when it comes to color
vision...
The Electromagnetic Spectrum
Actually, the way in which all of this works (vision in general and color vision in particular) can be a little confusing at first, so sit up, pay attention, and place your brain into its turbo-charged mode. What we refer
to as "light" is simply the narrow portion of the electromagnetic spectrum that our eyes can see (detect and process), ranging from violet at one end to red at
the other, and passing through blue, green, yellow, and orange on the way (at one time, indigo was recognized as a distinct spectral color, but this is
typically no longer the case):
Of course, when we use terms like "red", "yellow", "green", and "blue", these are just labels that we have collectively agreed to associate with specific
sub-bands of the spectrum. Just outside the visible spectrum above violet is ultraviolet, the component of the sun's rays that gives us a
suntan (and skin cancer). Similarly, just below red is infrared, which we perceive as heat.
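Just to make this "labels on sub-bands" idea a little more concrete, here's a small Python sketch that maps a wavelength (in nanometers) to the name we've collectively agreed to use for it. The band boundaries shown are rough, conventional values – different references draw the lines in slightly different places – so treat this purely as an illustration:

# Map a wavelength in nanometers to the color name we've agreed to attach to it.
# The band boundaries below are rough, conventional values (an assumption for
# the purposes of this illustration), not hard physical limits.

def name_of_band(wavelength_nm):
    bands = [
        (10,  380, "ultraviolet"),      # just beyond violet
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
        (750, 1000000, "infrared"),     # just beyond red (perceived as heat)
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside this simple table"

print(name_of_band(532))    # green
print(name_of_band(700))    # red
print(name_of_band(300))    # ultraviolet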
The Discovery of the Visible Spectrum
Strange as it may seem when one is first introduced to the idea, white light is a mixture of all of the colors in the visible spectrum.
This fact was first discovered around 1665-1666 by the English mathematician and physicist Sir Isaac Newton (1642-1727), who passed a beam of sunlight through a
glass prism to find that it separated into what he called "a spectrum of colors":
In reality, even before Newton's famous experiments, a number of other people were using prisms – which were fairly new at that time – to experiment with light.
Actually, when you come to think about it, it's more than possible that some caveman tens of thousands of years ago observed sunlight passing through a block
of natural glass and reappearing as a rainbow of colors.
In Newton's time, however, most folks thought that it was their prisms that were coloring the light. Newton's experiments took things much further. First,
he used a prism to separate white sunlight into his spectrum of colors as shown above. Next, he used a piece of card with a slit in it to block all of
the colors except one – say green – and then he passed this individual color through a second prism. Newton's thought was that if it was the prism
that was coloring the light, then the green light entering the second prism should come out a different color. The fact that it came out the same color
indicated to Newton that it wasn't the prism that was coloring the light.
Newton then took the spectrum coming out of his first prism and fed it into an "upside down" prism. This caused the individual colors to recombine back
into white light. By these experiments, Newton was the first to prove that white light was made up from all of the colors in the visible spectrum and that
his first prism was simply separating these colors out.
As a point of interest, Newton originally declared that there were eleven colors in the visible spectrum. Later, he toned this down to seven in order
to make his spectrum fit with contemporary Western ideas about musical harmony (specifically, that there were seven notes/tones in an octave). This is why
the spectrum was originally said to include indigo, but more recently is defined as comprising only red, orange, yellow, green, blue, and violet.
Last but not least, we're used to seeing the same effect as Newton's experiment in the form of a rainbow, which is caused by sunlight passing through
droplets of water, each of which acts like a tiny prism. In fact, as far back as the 13th century, the famous Franciscan friar and English philosopher
Roger Bacon (1212-1294) suggested that rainbows were caused by the reflection and refraction of sunlight through raindrops, but – at that time –
he had no way to prove that this was indeed the case.
The Discovery of Infrared
Friedrich Wilhelm Herschel (1738-1822) was born in Hanover, Germany. In 1757 he moved to England, where he became known as the famous
astronomer and musician Sir Frederick William Herschel.
One thing for which Herschel is particularly well known occurred in 1781, when he discovered the seventh
planet from the sun – Uranus. As an aside, the eighth planet – Neptune – was discovered in 1846, while the ninth – Pluto – was discovered in 1930. From that time,
every high school student has been taught that our solar system has nine planets. Ever since its discovery, however, referring to Pluto as a planet has
been something of a pain to many astronomers. Apart from anything else, Pluto has a very eccentric orbit, which means that some of the time it comes closer to the sun
than Neptune.
Just to increase the fun and frivolity, in 2004, astronomers discovered what some regarded at that time as being the tenth planet – Sedna – which
takes 10,500 years to orbit the sun (as compared to Pluto, which completes the trip in only 249 years). But 18 months later, in 2005, astronomers discovered
what became commonly called Xena (see also the note below with regard to changing Xena's name), which falls between the orbits of
Pluto and Sedna, and which takes 560 years to orbit the sun. This would make Xena the tenth planet and Sedna the eleventh planet.
The real problem was that – believe it or not – until the middle of 2006, there actually was no rigorous definition that everyone agreed on as to
what was (and was not) a planet. The discovery of Xena and Sedna brought things to a head, because astronomers now expect to find large numbers of similar
objects. Thus, as was reported in the Washington Post,
in August 2006, the International Astronomical Union stripped poor old Pluto of its planetary status, reclassified the little scamp, and placed it in a new
category called "dwarf planets" (these are similar to what had been referred to as "minor planets" in the past).
Furthermore, as per This Report on MSNBC, on Wednesday 13th September 2006,
the International Astronomical Union officially changed Xena's designation to Eris (aptly named after the Greek goddess of chaos and strife).
But we digress... In 1800, Herschel started to wonder if the different colors in the spectrum had different temperatures associated with them (you have to admire
someone like that, because this sort of thought simply wouldn't occur to the majority of us). Anyway, he used a thermometer to measure the temperatures of the
different colors, and he observed that the temperature rose from violet (with the lowest value) through blue, green, yellow, and orange, until it reached its
peak in the red portion of the spectrum.
Now here's the really clever part – Herschel next moved the thermometer just outside the red portion of the spectrum in an area that – to the human eye –
contained no light at all. To his surprise, he discovered that what he came to regard as being "invisible rays" in this area had the highest temperature of all.
Following a series of experiments in which Herschel proved that these invisible rays behaved just like visible light (in that they could be reflected, refracted, and so forth), his discovery eventually came to be known as infrared radiation (where the prefix infra comes from the Latin, meaning "below").
In addition to leading us to an understanding of heat, Herschel's discovery was also important because it was the first time anyone had demonstrated that
there were forms of radiation ("light" in his terms) that humans couldn't see.
The Discovery of Ultraviolet
Although his was a short life of only 33 years, Johann Wilhelm Ritter (1776-1810) certainly made the most of it. Born in Samitz, Silesia,
which is now part of Poland, he studied science and medicine at the University of Jena.
While at the University, Ritter performed numerous experiments with light and – later – electricity. After hearing about William Herschel's
discovery of infrared light beyond the red end of the visible portion of the spectrum, Ritter decided to see if he could discover his own "invisible rays"
beyond the violet end of the spectrum.
Ritter knew that silver chloride turned black when exposed to light. Thus it was that, in 1801, in the same way that Herschel had used a thermometer to measure
the temperature of the different colors, Ritter decided to use silver chloride to see if it reacted at a different rate to the different colors.
First, he placed a quantity of the chemical in the path of the red portion of the spectrum and observed that any change was relatively slow. Next, he tried
orange, followed by yellow, green, blue, and violet, observing that each new batch of silver chloride grew darker faster as he progressed through the spectrum.
Finally, Ritter placed a quantity of the chemical just outside the violet portion of the spectrum in an area that – to the human eye – contained
no light at all. To his delight, he discovered that invisible rays in this area had the greatest effect on the silver chloride. This new type of radiation
eventually came to be known as ultraviolet light (where the prefix ultra comes from the Latin, meaning "beyond").
Primary Colors
In the case of color televisions and computer screens, each picture element (pixel) is formed from
a group of red, green, and blue (RGB) dots (see also our paper on
The Origin of the Computer Console).
If all three of these dots are active (lit up) at the same time, from a distance we'll
perceive the group as a whole as being white. (If we looked really closely we'd still see each dot as having its own
individual color.) If we stimulate just the red and green dots we'll see yellow; combining the green and blue
dots will give us cyan (a greenish, lightish blue); while mixing the red and blue dots will result in magenta (the color magenta, which is a sort of purple, was named after the dye with the same moniker; in turn, this dye was
named after the battle of Magenta, which occurred in Italy in 1859, the year in which the
dye was discovered). Furthermore, mixing different proportions of the three light sources will result in a gamut of colors, where the word "gamut" means "a complete
range or extent".
Now, this may seem counter-intuitive at first, because it doesn't seem to work the way we recall being taught at school, which was
that mixing yellow and blue paints together would give us green, mixing all of the colored paints together would result in
black (not white as discussed above), and so on. The reason for this is that mixing light is additive, while
mixing paints or pigments is subtractive:

The appellation primary colors refers to a small collection of colors that can be combined to
form a range of additional colors. In the case of light, the primary colors we typically use are red, green, and blue. Since bringing in new color components "adds" to
the final color, these are known as the additive primaries. By comparison, when it comes to paints or pigments, the primary colors used by printers are cyan, magenta, and yellow (CMY). In this
case (for the reasons discussed in the following topic), bringing in new color components "subtracts" from the final color,
so these are known as the subtractive primaries. Actually, forming black by mixing cyan, magenta, and yellow inks together is expensive and typically results in
a "muddy" form of black, so printers typically
augment these primary colors with the use of black ink. The result is referred to as CMYK, where the 'K' stands for "blacK" (we don't use 'B' to represent "black"
because this could be mistakenly assumed to refer to "blue").
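To make the relationship between the additive and subtractive primaries a little more concrete, here's a minimal Python sketch of a simple, textbook-style RGB-to-CMYK conversion. Real printing workflows use color profiles and are considerably more sophisticated than this, so regard what follows purely as an illustration of the underlying idea:

# A minimal sketch of the textbook relationship between the additive (RGB)
# and subtractive (CMY/CMYK) primaries. Real printing workflows are far
# more involved -- this just shows the basic idea.

def rgb_to_cmyk(r, g, b):
    """r, g, and b are in the range 0.0 (off) to 1.0 (fully on)."""
    # Each subtractive primary is the "absence" of its additive counterpart.
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    # Pull out the common component as black (K) so we don't have to build
    # black by mixing expensive, muddy-looking CMY inks.
    k = min(c, m, y)
    if k == 1.0:                      # pure black
        return 0.0, 0.0, 0.0, 1.0
    scale = 1.0 - k
    return (c - k) / scale, (m - k) / scale, (y - k) / scale, k

print(rgb_to_cmyk(1.0, 1.0, 0.0))    # yellow   -> (0.0, 0.0, 1.0, 0.0)
print(rgb_to_cmyk(1.0, 0.0, 0.0))    # red      -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.5, 0.5, 0.5))    # mid-gray -> (0.0, 0.0, 0.0, 0.5)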
Now, it may be that you have accepted all of the above without a quiver of doubt. On the other hand, you may be staring at this page with a furrowed frown on your forehead saying to yourself: "Just a minute, that's not what my old art teacher – Professor Cuthbert Dribble – taught me at elementary school. When it came to paints, he said that the three primary colors were red, yellow, and blue (RYB); that mixing red and yellow gave orange; combining red and blue gave purple; and blending yellow and blue gave green. So, can you explain this conundrum?"
Well of course I can! Look into my eyes ... have I ever lied to you before? The simplest explanation is that teachers can tell you anything they like
at elementary school and you'll believe them. A slightly more complex answer is that the concept of red, yellow, and blue as primary colors predates our modern scientific understanding of color theory.
However, although both of these arguments are true in their own way, the fact that you are reading this paper marks you as a person of high discernment,
sharp wit, and keen intellect who demands nothing less than the most fulsome of explanations for your reading pleasure, so here goes...
In reality, you can pretty much pick any three (or more) colors and call them "primary" colors, and this will be true on the basis that they are your primary colors. Mixing two of your primary colors together will result in a secondary color; mixing one of your primary colors with one of your secondary colors will result in a tertiary color, and so forth.
One example of a non-standard collection of primary colors was an early color photographic process known as Autochrome, which was invented circa 1903-1904 in France by the Lumière brothers, Auguste and Louis. This process typically used orange, green, and violet as its primary colors.
In 1666, as part of his experiments with prisms, Sir Isaac Newton developed a circular diagram of colors that is now commonly referred to as a "color wheel".
For one reason or another, theorists of that time decided that red, yellow, and blue were the best primary colors for pigments, and – even though we
now know that red, yellow, and blue primaries cannot be used to mix all of the other colors – they have survived in color theory and art education to the present day.
Purely for the sake of completeness, let's consider a color wheel based on red, yellow, and blue as its primary triad, as shown below:
Using our three primary colors as a starting point, we can generate three secondary hues: mixing red and yellow gives orange, yellow and blue gives green, and blue and red gives purple. Similarly, mixing the primary colors with their adjacent secondary colors results in six tertiary hues: red-orange, yellow-orange, yellow-green, blue-green, blue-purple, and red-purple.
There are lots of different theories regarding the way in which different colors can be used in conjunction with each other so as to produce a pleasing effect to the eye (that is, so that it looks good to humans). For example, complementary colors are any two colors that are directly opposite each other on the color wheel and provide maximum contrast, such as red and green, red-orange and blue-green, and so forth. By comparison, analogous colors are any three colors that are side-by-side on the color wheel, such as yellow, yellow-green, and green.

The problem is that – the above diagrams notwithstanding – red, yellow, and blue are not well-spaced around a perceptually-uniform color wheel
that embraces the entire spectrum of colors. This means that using red, yellow, and blue as primaries yields a relatively small gamut, and it is
impossible to mix them so as to achieve a wide range of colorful greens, cyans, and magentas. This is the reason why modern color photography and three-color
printing processes employ cyan, magenta, and yellow as primaries, because these offer a much wider gamut of colors.
At this juncture, we should perhaps briefly mention terms like shade, tint, and hue. The problem is that all of these words have several different meanings depending on whom you are talking to. For our purposes here, we may say that hue is the quality of a color that allows us to assign it a name like "greenish-blue" or "reddish-orange". More formally, one might say that the hue is the dominant wavelength of a particular color – that is, the "color of a color".
Meanwhile, shade may be described as "the degree of darkness within a hue" and tint may be considered to be "the degree of lightness within a hue". In the case of painting, for example, artists have long used the word "shade" in the context of mixing a color with black, so a shade is a color which has been made darker in this way. By comparison, artists use the word "tint" to refer to the mixing of a color with white, so a tint is a color which has been made lighter in this way.
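Purely by way of illustration of the artists' usage, the following little Python sketch treats colors as simple RGB triples (a crude but convenient approximation) and generates a tint by mixing a color toward white and a shade by mixing it toward black:

# A minimal sketch of "tint" and "shade" in the artists' sense described
# above: a tint mixes a color toward white, a shade mixes it toward black.
# Colors here are simple (r, g, b) tuples in the range 0-255.

def mix(color_a, color_b, amount):
    """Linearly mix color_a toward color_b; amount runs from 0.0 to 1.0."""
    return tuple(round(a + (b - a) * amount) for a, b in zip(color_a, color_b))

def tint(color, amount):
    return mix(color, (255, 255, 255), amount)    # lighten toward white

def shade(color, amount):
    return mix(color, (0, 0, 0), amount)          # darken toward black

pure_red = (255, 0, 0)
print(tint(pure_red, 0.5))     # (255, 128, 128) -- a pinkish tint of red
print(shade(pure_red, 0.5))    # (128, 0, 0)     -- a darker shade of red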
As a further point of interest, it is common to refer to red, yellow, green, blue, white, and black as
being the psychological primaries, because we subjectively and instinctively believe that these are the basis
for all of the other colors.
Before we move on, a reader of an earlier version of this paper – retired electronics engineer Dwight W. Grimes – emailed me to say that
he'd been pondering my original "Additive and subtractive color combinations" diagram shown at the beginning of this topic. After considering the additive
and subtractive color combinations in the context of Venn Diagrams (one of
the logical tools used by electronics and computing engineers), Dwight suggested that a slightly more intuitive representation might be as shown below:

Dwight's idea is that, in the case of light, the surrounding "world" (in the form of an empty theater/stage/room with the lights turned off, for example)
should be black, then we add red, green, and blue light by activating appropriately colored spotlights; the combination of all of these
light sources results in white light. By comparison, in the case of paint, the surrounding
"world" (in the form of a large piece of paper, for example) should be white, then we subtract colors by applying cyan, magenta, and yellow pigments
to the paper; the combination of all of these pigments results in black. By Jove, I think Dwight is right (I'm a poet and I never knew-it). I will
use this new representation in the future. (If you want to know more about the origin of Venn Diagrams, please feel free to peruse and ponder our
Logic Diagrams and Machines paper.)
Last but not least, I recently (as I pen these words) ran across yet another representation of the mixing of primary colors that rather took my fancy.
As you can see in the following illustration, this is similar to Dwight's proposal, except that this new version is presented as a gradual merge between the various primary colors (I'm sorry I couldn’t arrange for the primary colors in this new illustration to be in the same relative locations as for my earlier diagrams, but this is the way I found them):
Mixing Light versus Mixing Paint
So why does mixing light work one way while mixing paint works another? Gosh, I was hoping you wouldn't ask me that one. Well, here's a question right back
at you – what colors come to mind when you hear the words "tomato," "grass," and "sky"? You almost certainly responded with red, green, and blue,
respectively, but why? The main reason is that when you were young, your mother told you that "Tomatoes are red, grass is green, and the sky is blue," and
you certainly had no cause to doubt her word.
However, the terms "red," "green," and "blue" are just labels that we have collectively chosen to assign to certain portions of the visible spectrum. If our
mothers had told us that "Tomatoes are blue, grass is red, and the sky is green," then we’d all quite happily use those labels instead.
What we can say is that, using an instrument called a spectrometer, we can divide the visible part of the spectrum into different bands of frequencies, and we've
collectively agreed to call certain of these bands "red," "green," and "blue." Of course everyone's eyes are slightly different, so there's no guarantee
that your companions are seeing exactly the same colors that you are. Also, as we shall see, our brains filter and modify the information coming from our
eyes, so a number of people looking at the same scene will almost certainly perceive the colors forming that scene in slightly different ways.
Here's another question for you: "Why is grass green?" In fact we might go so far as to ask: "Is grass really green at all?" Surprisingly,
this isn't as stupid a question as it might seem, because from one point of view we might say that grass is a mixture of red and blue; that is, anything and
everything except green! The reason we say this is that, when we look at something like grass, what we actually see are the colors it didn't absorb.
For example, consider what happens when we shine white light on patches of different colored paint:

The red paint absorbs the green and blue light, but it reflects the red light, which is what we end up seeing. Similarly, the green paint absorbs the
red and blue light and reflects the green, while the blue paint absorbs the red and green and reflects the blue. The white paint reflects all of the
colors and the black paint absorbs them all, which means that black is really an absence of any color. Thus, returning to our original question about
the color of grass: we could say that grass is green because that's the color that it reflects for us to see, or we could say that grass is both blue
and red because those are the colors it absorbs.
This explains why mixing paints is different from mixing light. If we start off with two tins of paint – say cyan and yellow – and shine white
light at them, then each of the paints absorbs some of the colors from the white light and reflects others. If we now mix the two paints together,
they each continue to absorb the same colors that they did before, so we end up seeing whichever colors neither of them absorbed, which
is green in this case. This is why we say mixing paints is subtractive, because the more paints we mix together, the greater the number of
colors the combination subtracts from the white light.
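If it helps, here's a minimal Python sketch of this subtractive behavior. Each paint is modeled – very crudely – by the fraction of red, green, and blue light it reflects, and mixing two paints simply multiplies their reflectances together, because anything either paint absorbs is removed from the result:

# A crude model of why mixing paints is "subtractive": each paint continues
# to absorb what it absorbed before, so the mixture reflects only what
# neither paint absorbs.

def mix_paints(reflectance_a, reflectance_b):
    # Multiply the reflectances channel by channel; anything either paint
    # absorbs (a value near 0) is removed from the result.
    return tuple(a * b for a, b in zip(reflectance_a, reflectance_b))

#                red   green  blue
cyan_paint   = (0.05,  0.95,  0.95)    # absorbs red, reflects green and blue
yellow_paint = (0.95,  0.95,  0.05)    # absorbs blue, reflects red and green

print(mix_paints(cyan_paint, yellow_paint))
# -> roughly (0.05, 0.90, 0.05): only green survives, so the mixture looks green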
Turning Things Upside Down
Believe it or not, there is a point to all of this (well, most of it ... well, at least some of it), although
we won't find out what that point is until later in this paper. But before we move on, it is perhaps appropriate to note
that although the concept of colors is reasonably simple (being merely sub-bands in the visible spectrum), color vision is
amazingly complex.
The human visual system is composed of our eyes, brain, and nervous system (actually, the eyes and brain are part of
the nervous system, but I tend to think of them as separate entities). Our visual system has evolved to such a sophisticated
level that for a long time we didn't even begin to comprehend the problems that nature had been compelled to overcome. In fact,
it was only when we (the human race) came to construct the first television cameras and television sets – and
discovered they didn't work as expected – that we began to realize there was a problem in the first place.
First of all, it is commonly accepted (although not
necessarily correct, as is noted in the sidebar below) that we have five senses: touch, taste, smell, hearing, and sight. Of these senses,
sight accounts for approximately 80% of the information we receive, so our brains are particularly adept at processing this information and
making assumptions based on it.

For example, if you give someone yellow jello, they will automatically assume that it will taste of lemon; similarly for green jello and lime and red jello
and strawberry. (What the Americans call "jello" would be referred to as "jelly" in England. By comparison, what the Americans call "jelly" would translate to "jam"
in the mother-tongue, and don't even get me started on what the Americans refer to as "preserves".) This association is so strong that if you give people yellow jello with a strawberry flavor, they often continue to believe it tastes
of lemon. One theory to explain this is that our brains give more "weight" to what our eyes are telling us compared to what our taste buds are trying to say;
another hypothesis is that this has more to do with repeated learning and pairing of particular colors with particular tastes.

When we use our eyes to look at something, the data is pre-processed by an area of the brain called the visual cortex, followed by the rest of the brain,
which tries to make sense of what we're seeing (this is something of a simplification – see also the How Color Vision Works topic below).
The brain's ability to process visual information is nothing short of phenomenal.
For example, in a famous experiment that was first performed in 1896, a psychologist at the University of California in Berkeley –
George Malcolm Stratton (1865-1957) – donned special glasses which made everything appear to be upside down. Amazingly, after a few days
of disorientation, his brain began to automatically correct for the weird signals coming in and caused objects to appear to be the right way up again.
Similarly, when he removed the special glasses, things initially appeared to be upside down because his brain was locked into
the new way of doing things. Once again, within a short period of time his brain adapted and things returned to normal. (Actually,
the way in which the lenses in our eyes function means that the images we see are inverted by the time they strike the retina
at the back of the eye, so our brains start off by having to process "upside-down" data as illustrated in the figure below – see also the
Left-to-Right and Top-to-Bottom topic later in this paper).
How Color Vision Works
The previous topic exemplifies the brain's processing capabilities, but it doesn't begin to illustrate
how well we handle vision in general and color vision in particular. As a starting point, we should probably remind ourselves
as to the main constituents of the human eye:
First, light from the outside world passes through the cornea, which acts like a clear, transparent, protective "window".
Just inside the cornea we find the iris, which gives our eyes their distinctive color. When we say "she has blue eyes,"
for example, we are talking about the color of that person's irises. The hole in the middle of the iris is called the pupil,
which determines how much light is passed into the body of the eye. The iris causes the pupil to shrink in bright light and to
enlarge in dim light.
Next, the lens is used to focus the light on the back of the eye, which is covered by a layer called the retina.
The retina often used to be compared to the film in a conventional camera, but it is actually more akin to the sensor
element in a modern digital camera. Amongst other things, the retina contains special photoreceptor nerve cells that convert
rays (photons) of light into corresponding electrical signals. After some processing in the eye itself, these signals are
passed along the optic nerve into the visual cortex region of the brain (actually, this is something of a simplification – if you want
a little more detail, check out the Left-to-Right and Top-to-Bottom topic later in this paper).
As an aside, the reason our pupils appear to be black is that – generally speaking – light goes into our eyes but
it doesn’t come back out. One exception to this rule is when someone takes a photograph of you with the camera's flash turned
on and you end up with so-called "red eye" – that is, your eyes appear to be red in the picture. In this case, the
light from the flash is reflected back off the blood-rich retina on the rear of your eye and returns out of the pupil as red light.
Now, most high school biology textbooks would tell us that the retina in the human eye features three different types of color
photoreceptors; some are tuned to respond to red light, some to green, and some to blue (if you're a physicist or a biologist
or any other "ist", please read the disclaimer in the following paragraphs before you start jumping up and down, ranting and raving
and rending your garb with regard to our terminology). Based on this assumption, related engineering text books would tell
us that this is why we use red, green, and blue dots on our television screens and computer monitors to generate all of the
colors, because this directly maps onto the way in which our eyes work. Sad to relate, all of these text books are incorrect.
Before we proceed further, it's time for some "weasel words." Originating in the late 1800s, this is a term that may be
defined as: "equivocal words used to deprive a statement of its force or to evade a direct commitment," or "words that
make one's views equivocal, misleading, or confusing," or – my personal favorite – "the art of saying what
you don’t mean." This turn of phrase may have been sparked by the weasel's habit of sucking the contents out of a bird's
egg such that only the shell remains. (Lest we be unfair to the poor old weasel, we should also remember the
saying: "Eagles may soar high in the sky, but weasels rarely get sucked into jet engines at 20,000 feet!").
But, as usual, we're wandering off into the weeds. In a moment we're going to introduce the concept of color photoreceptor cells
called cone cells. The point is that when we say things like "blue cones" or "cones that respond to blue
light," we really mean "photoreceptor cells that are tuned to respond to the range of frequencies in the
electromagnetic spectrum that we perceive as being blue". However, although the scientists amongst us would prefer
this more precise terminology, it's a lot easier to refer to things like "blue cones," so if we occasionally slip,
you'll know what we mean.
Moving on, the reason most references talk about red, green, and blue receptors in the human eye dates back to around
1801-1802, when the English physician Thomas Young (1773-1829) performed a series of experiments and proposed his
"trichromatic theory."
Young's hypothesis originated in observations by artists, clothing manufacturers, and so forth, who recognized that if you
had three different pigments you could mix them to form any other color. Prior to Young, people had suggested that there were
three different types of light, so Young's recognition that the "three" was due to human anatomy and physiology rather than
the physics of light was a major conceptual breakthrough. Young's hypothesis, which was refined around 50 years later by the
German scientist Hermann von Helmholtz (1821-1894), proposed that the human eye constructed its sense of color using
only three receptors for red, green, and blue light. Based on this theory, humans are known as trichromats.
It was originally assumed that the electrical signals from the different types of color receptor cells were fed via the optic
nerve directly into the visual cortex portion of the brain, which used these signals to determine different colors. However,
we now know that the truth is a little more complex. We'll start by saying it is true that the human eye boasts three types
of color receptors called cone cells (so-named because they have a somewhat conical appearance when viewed under a
microscope). There are about 6 million of these cells in each eye. Each type of cone cell covers a range of frequencies,
but is primarily sensitive to a particular portion of the spectrum. As it happens, one kind of cone cell is primarily
sensitive to bluish-violet light, but the other two are most sensitive to greens; one peaks at a bluish-green
and the other peaks at a yellowish-green as shown in the illustration below. (You can discover more about this by
visiting the Wikipedia entry on Cone Cells and also
in the The Evolution of Color Vision topic in this paper.)

As there are around an order of magnitude fewer bluish-violet cone cells than the other two types – and as the other two
types are both sensitive to greens – this explains why the human eye is particularly sensitive to variations in the green
portion of the spectrum. (For the more pedantic amongst us, the actual ratio of bluish-violet to bluish-green to yellowish-green
cone cells is about 1:10:20.)
Young's trichromatic theory was extremely successful and became generally accepted wisdom for almost 175 years. However, as
opposed to our perceiving different colors by directly accessing the signals being generated by our cone cells, we now know that
our color perception is based on something called the opponent process. This alternate theory was first proposed by the
German physiologist and psychologist Karl Ewald Hering (1834-1918), but the opponent process didn't gain a wide following in
the scientific community until the 1970s.
The idea behind the opponent process is that – although their sensitivities peak at different frequencies – there
is a large amount of overlap with regard to the wavelengths of light to which the three types of cone cells respond, so our
visual systems are designed to detect differences between the responses of the different cones. In order to achieve this,
the retina boasts large numbers of "comparator" cells – each of these cells compares the signals being generated by a
number of different cone cells, and it is the signals coming out of the comparator cells that provide color information to
the brain. The end result is that we perceive the color yellow when our yellowish-green cones are stimulated slightly
more than our bluish-green cones, for example; similarly, we perceive the color red when our yellowish-green cones are
stimulated significantly more than our bluish-green cones.
Actually, it's worth taking a little time to make sure that we more fully understand the way in which this works. Observe
how the response curves for the blue-green and yellow-green receptors in the illustration above strongly overlap each other.
Now, remember that the author drew these images in Microsoft® Visio® and he also created the spectrums at the bottom
using Adobe® Photoshop®, so these are just approximations. Having said this, if you look toward the right of the
curve associated with the yellow-green receptors where it's over the yellow area of the spectrum, you'll see that – at this
frequency – these receptors are being stimulated more than are the blue-green receptors, and thus we end up perceiving
this portion of the spectrum as being yellow.
Now, move a little to the left into the green portion of the spectrum. Once again, the yellow-green receptors are being
stimulated more than the blue-green receptors – but the trick here is that both types of receptors are being stimulated
more than they were for the yellow portion of the spectrum, so we perceive this as being green.
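Purely as a thought experiment, the following little Python sketch mimics this "comparator" idea. The two response curves are modeled as simple Gaussian approximations – the peak wavelengths (530 and 560 nanometers) and the width used here are illustrative guesses rather than measured cone fundamentals – but they do show how the relative stimulation of the two green-sensitive cone types changes as we move from green through yellow to red:

import math

# A toy model of the "comparator" (opponent) idea. The two cone response
# curves are approximated as Gaussians; the peaks (530 nm and 560 nm) and
# the width (45 nm) are illustrative assumptions, not measured values.

def cone_response(wavelength_nm, peak_nm, width_nm=45.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2 / 2.0)

for wavelength, label in [(550, "green"), (580, "yellow"), (620, "red")]:
    m_resp = cone_response(wavelength, 530.0)    # "bluish-green" cones
    l_resp = cone_response(wavelength, 560.0)    # "yellowish-green" cones
    print(f"{wavelength} nm ({label}): bluish-green = {m_resp:.2f}, "
          f"yellowish-green = {l_resp:.2f}, ratio = {l_resp / m_resp:.2f}")

# With these made-up curves the output is roughly:
#   550 nm (green):  bluish-green = 0.91, yellowish-green = 0.98, ratio = 1.08
#   580 nm (yellow): bluish-green = 0.54, yellowish-green = 0.91, ratio = 1.68
#   620 nm (red):    bluish-green = 0.14, yellowish-green = 0.41, ratio = 3.04
# That is, both cone types respond strongly in the green region, the
# yellowish-green cones pull somewhat ahead in the yellow region, and they
# dominate in the red region -- and it is these differences (rather than the
# raw signals) that the "comparator" cells pass along to the brain.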
This is where things start to get a little tricky, because you could just say: "But that's just a matter of
intensity!" Sad to relate, this is where I (the author) pass beyond the scope of my knowledge. However, I think
it ties into the An Amazing Experiment topic later in this paper, which describes how the brain
weights all of the colors it sees against all of the other colors.
All of this serves to explain why, as the day draws toward dusk, red is the first color we lose the ability to see. This is
because – even in reasonably bright light – we are only obtaining a relatively small amount of stimulation
in that part of the spectrum; thus, as the overall intensity of the ambient light starts to fall, this stimulation ceases
first. It also explains why – in the middle of the night (assuming a full moon) – we can still see hints of green,
because the main spectral component of moonlight is relatively close to the most sensitive portion of the response curve
for our blue-green receptors.
Moving on ... in addition to cones, the human eye also has a fourth (dim light) type of receptor called rods, which are so-named because
of their shape when viewed under a microscope. These cells, which are much more sensitive than cone cells, come into play in
low-light conditions like dusk and throughout the night. Rod cells, which outnumber their cone cell companions by a factor of
around 20-to-1, have their peak sensitivity around 498 nanometers (nm).

The fact that rod cells are so much more sensitive to dim light than are cone cells, and also the fact that there are so many
rod cells, explains why our sense of color drops as the level of ambient light falls. In bright light, our peak sensitivity comes from the bluish-green and yellowish-green cones, which we use to perceive colors like green, yellow, and red. Under these bright
light conditions, our rod cells are being completely over-stimulated and are not providing any useful information whatsoever.
As the day heads toward dusk and the ambient light dims, however, our peak sensitivity switches to bluish-violet and bluish-green,
which is why we still see blue and green hues after the other colors have faded away. This effect, which is known as the
Purkinje Shift, is named after the Czech physiologist Jan Evangelista Purkinje (1787-1869). Finally, when the
ambient light becomes very faint, our cone cells effectively shut down and only our rod cells remain functional to supply
us with our night vision capabilities.
At this point, it may be worth noting that some references state that the peak sensitivity of rod cells is close to the main spectral component
of moonlight. Based on this, one might hypothesize that rod cells first evolved in nocturnal animals. Alternatively, one might propose that the evolution of rod cells facilitated certain animals becoming nocturnal. In fact, irrespective of whether rod cells first evolved in nocturnal animals, their peak absorption is not particularly close to the main spectral component of moonlight, which actually occurs at anywhere between 548 and 575 nanometers depending on your source of data (I've shown the lower bound in the following illustration – see also The Evolution of Color Vision topic later in this paper).

Before we leap into the topic on the evolution of color vision with gusto and abandon (and that topic is a really, REALLY interesting one), let's
briefly summarize what we've learned thus far. First, the typical human eye has three different types of cone photoreceptors that
require bright light, that let us perceive different colors, and that provide what is known as photopic vision. Second,
we have rod photoreceptors that are extremely sensitive in low levels of light, that cannot distinguish different colors, and
that provide what is known as scotopic vision.
As one final point of interest, some animals have only two types of cone receptors, so these animals are referred to as
being dichromats. One example of this that is close to home would be "man's best friend" – the dog – which
has some cones that are primarily sensitive to blue light and others that are primarily sensitive to yellow light.
It's a bit difficult to conceive what this might look like, but I just ran outside and asked a friend to take a picture of me wearing a red Hawaiian
shirt standing between a blue car and a yellow car with a green field behind me as shown in the image below. I then modified the picture to simulate
the effects of having only blue and yellow cones like a dog as shown in the lower picture below.
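Just to give a flavor of the sort of per-pixel transformation involved in this kind of simulation, here's a deliberately crude Python sketch in which the red and green signals are collapsed onto a single yellowish signal while blue is left alone. (Real dichromacy simulators work in a cone-based color space using carefully measured matrices, so please treat this only as an illustration.)

# A deliberately crude sketch of approximating blue/yellow dichromatic
# ("dog") vision: red and green are collapsed onto a single yellowish
# signal, while blue is left alone. Proper simulations work in a cone
# (LMS) color space with measured projection matrices.

def simulate_dichromat(r, g, b):
    """r, g, and b are in the range 0-255."""
    yellow_signal = (r + g) / 2.0      # red and green become indistinguishable
    return round(yellow_signal), round(yellow_signal), round(b)

print(simulate_dichromat(200, 40, 30))    # red Hawaiian shirt -> a dull yellow
print(simulate_dichromat(60, 180, 60))    # green field        -> a similar dull yellow
print(simulate_dichromat(40, 60, 200))    # blue car           -> still clearly blue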
So, how did humans come to be trichromats while dogs ended up being dichromats? Well, as we shall soon discover (following a brief discussion on Color Blindness),
the evolution of color vision is a tale with so many exciting twists and turns that it will make your head spin.
Color Blindness
Before we proceed, this is probably a good time to note that several forms of color blindness – which means the
inability to distinguish certain colors or hues – are caused by deficiencies in one or more of the cone receptors.
Color blindness (also known as "Dyschromatopsia") may also be referred to as "Daltonism," so-named after the English physicist John Dalton (1766-1844)
who was one of the first to describe this condition, and who was himself affected (in addition to the purple and blue
portions of the spectrum, he could perceive only one other color – yellow).
Most of us remember the tests the eye doctor gave us at high school involving cards containing images formed from large
numbers of different sized circles of different colors. The idea was to determine if you could distinguish a number formed
from circles with one selection of colors presented against a background of circles with another selection of colors.
Some websites presenting examples of this sort of test are the Ishihara Test for Color Blindness,
the Color Blindness Self-Test,
and Mike Bennett's Color Vision Test pages.
Also, for your interest, there is a really amazing website on Color Vision
that allows you to mix different text and background colors and then see how they would look to folks with different types of color
blindness.
But wait, there's more, because the Vischeck website
presents a tool called Vischeck that simulates colorblind vision and another tool called Daltonize that corrects
images for colorblind viewers.
And yet another very clever tool is Visolve from the folks at Ryobi System Solutions.
This is special software that takes colors on a computer display that cannot be discriminated by people with various forms of color blindness and
transforms them into colors that can be discriminated. In addition to a variety of transformations and filters, you can also instruct the
software to apply different hatching patterns to different colors. This really is very clever technology and you should take a moment to check it out.
Last, but certainly not least, on May 21, 2007, an article on the Science Daily website discussed how gene therapy was used to
Restore Cone Cells in blind mice. As reported
in this article, scientists have used a harmless virus to deliver corrective genes to mice with a genetic impairment that robs them of
vision. This discovery shows that it is possible to target and rescue cone cells – the most important cells for visual sharpness
and color vision in people. In the future, it may be possible to deliver gene therapies targeting a variety
of visual problems – such as color blindness – and degenerative diseases (see also the
Genetically Modifying Human Vision topic later in this paper).
The Evolution of Color Vision
Before we start this topic, it's worth remembering the famous line from Sir Isaac Newton's letter to Robert Hooke circa 1675-1676, in which he modestly wrote: "If I have seen further, it is by standing on the shoulders of giants." This echoes a much older Latin aphorism – "Pigmaei gigantum humeris impositi plusquam ipsi gigantes vident" – which translates as "Dwarfs standing on the shoulders of giants see farther than the giants themselves."
This comes from the idea that: "A dwarf standing on the shoulders of a giant may see farther than a giant himself." For those who are interested, the first usage of "on the shoulders of giants" (Latin: "nanos gigantium humeris insidentes") is attributed to the French Neo-Platonist philosopher, scholar, and administrator Bernard of Chartres (Bernardus Carnotensis) around 1130 (check out This Wikipedia Entry for more details on this topic).
But I'm wandering off into the weeds again. The point is that, in the case of this topic, I'm balanced precariously on the shoulders of research scientist Mickey P. Rowe of the Neuroscience Research Institute and Department of Psychology at the University of California, Santa Barbara, California.
Mickey is at the forefront of current understanding with regard to color vision in general and the evolution of color
vision in particular, especially with regard to mammals and – more specifically – primates. His work is based on
our ever-increasing understanding of the paleontological record and the application of new tools and techniques in
molecular biology.
My first introduction to this "man amongst men" was when I read an article authored by Mickey and his colleague Professor
Gerald H. Jacobs. This little scamp, which was entitled Evolution of Vertebrate Color Vision,
was published in the Journal of Optometrists Association in Australia (and some folks would say that I don’t know how to party
down and have a good time – go figure!).
What an article – I am a bear of little brain – I didn’t understand a word of it – that's how good it was!
Thus, following an exchange of emails, I chatted to Mickey on the telephone and he was kind enough to walk me through things
step-by-step. You wouldn’t believe how convoluted this all is, so it's important to understand that the following summation is
my greatly simplified version of the tale Mickey wove for me (any errors are mine own).
One final qualifier before we leap into the fray is that, for the purposes of this paper, we're primarily interested in
following the evolutionary path – as it pertains to color vision – from the dim-and-distant past to humans. There are
many other paths for other creatures – such as insects – that we don’t have the time to discuss here (having said
this, later in this paper you will run across occasional notes with regard to the visual systems of a variety of creatures
such as mantis shrimp, butterflies, fish, and birds).
OK, just to provide a sense of the time scale with which we're working, it's now generally accepted
(check out the US Geological Survey website, for example) that the earth formed
around 4.6 billion years ago give or take 100 million years or so, possibly on a Wednesday morning at about 9:00 am, but probably not
(note that we're using the American interpretation of "billion" equating to "one thousand million").
The earliest forms of life – possibly based on self-reproducing RNA molecules – are thought to have
originated somewhere around 4,000 million years ago (the Wikipedia entry for the Timeline of Evolution
provides a useful starting point for this sort of thing). Proto-cell-type organisms may have arisen as early as 3,900 million years ago, and the first
real single cell-like organisms began to appear on the scene sometime between 3,500 and 2,800 million years ago (many folks think this was
probably a good deal closer to the older age). These were followed by the first
multi-cell organisms, which entered the stage sometime between 1,500 and 600 million years ago (recent findings are
increasingly pushing us toward the earlier time).

Next, vertebrates (animals with backbones and/or spinal columns) started to evolve somewhere between 530 and 510 million years ago during the "Cambrian Explosion" portion of the Cambrian Period. (A good starting point for information on these creatures is the Vertebrates section of the University of California, Berkeley website.)
The first tetrapod (an animal that has four limbs, along with hips and shoulders and fingers and toes) crawled out of the Earth’s oceans some time between
375 and 350 million years ago (this event probably occurred relatively soon after the "walking fish" called
Tiktaalik roseae took the stage, which happened 375 million years
ago in the late Devonian Period.)
Observe that the vertical scale in the above illustration is logarithmic. This provides a method for representing a
large span of time while maintaining resolution at the more recent end of the scale. Had we used a linear scale in
which 1 mm was used to represent a million years, for example, then our chart would have been 5 meters long. This wouldn’t
have worked because I had only a little over 7 inches to play with (stop smirking, you know what I mean).
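For anyone who likes to see the numbers, here's a quick Python sketch comparing where a few of the events from this topic would fall on a linear scale of 1 mm per million years versus a logarithmic (log10) scale. The dates are simply the rounded figures quoted in the surrounding text:

import math

# Compare a linear timeline (1 mm per million years) with a logarithmic one.
# The dates are the rounded figures quoted in the surrounding text.

events_mya = {
    "Earth forms":                4600,
    "first vertebrates":           530,
    "first mammals":               208,
    "first primates":               65,
    "Homo habilis":                  2.2,
    "anatomically modern humans":    0.1,
}

for name, mya in events_mya.items():
    linear_mm = mya                     # millimeters from "now" on a linear scale
    log_decades = math.log10(mya)       # position on a log10 axis
    print(f"{name:28} {mya:8.1f} Mya   linear: {linear_mm:7.1f} mm   log10: {log_decades:6.2f}")

# The Earth's formation lands 4.6 meters away on the linear scale, while the
# arrival of modern humans sits just 0.1 mm from the end -- which is why the
# log scale (spanning roughly -1 to 3.7) keeps both ends of the chart readable.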
Even though they aren’t particularly relevant to our story, it would be remiss of us to omit the creatures we commonly
think of as dinosaurs, which were a group of vertebrates that appeared during the Mesozoic Era (when I say "...we commonly think of as..."
I mean that we're talking about non-avian dinosaurs).
These little rapscallions had a good run from late in the Triassic Period (about 225 million years ago) until the end of the Cretaceous Period
(about 65 million years ago), at which point they exited the stage.
Meanwhile, the first mammals (which were small shrew-like animals) evolved in the Late Triassic and Early Jurassic
Periods, some 208 million years ago (the term "mammal" refers to the group of vertebrates having mammary glands, which
females of the species use to produce milk to nourish their young).
The term primate refers to the group of mammalian vertebrates that contains all of the species related to lemurs,
monkeys, and apes (where "apes" includes humans). Until relatively recently, it used to be thought that the evolution of the primates started in the
early part of the Eocene Epoch (this epoch began around 55 million years ago and lasted for around 20 million years). However,
even though it was only the size of a modern mouse, Purgatorius
is arguably a primate – or at least a proto-primate – and this little scamp lived during the early Paleocene Epoch,
so it's probably more accurate to say that primates began to evolve around 60 to 65 million years ago.
The first primates of the human genus (that is, to which the honorific "Homo" was applied) were Homo habilis; these were
users of stone tools who took their turn on the stage from around 2.2 million years ago to 1.6 million years ago. (The term
hominid used to be popular to describe all of the creatures in the human line since it diverged from that of
the chimpanzees, but the scientific community now favors the term hominin for this purpose. If you have any
questions relating to how we evolved, a good place to start is the PBS website on the
Origins of Human Evolution).
Neanderthal man was on the scene from around 250,000 years ago until 30,000 years ago. (It used to be thought that Neanderthals were only on the scene
from around 150,000 to 35,000 years ago, but ongoing discoveries keep on pushing the boundaries out in both directions.)
Meanwhile, the generally accepted date for the arrival of anatomically modern humans is around 100,000 years ago. In this case,
however, discoveries by Professor Frank Brown,
Dean of the College of Mines and Earth Sciences at the University of Utah, suggest that this could have occurred much earlier – perhaps even
as early as 195,000 years ago. Last but not least (for the purpose of these discussions), the first appearance of the Cro-Magnon culture
occurred around 40,000 years ago. This leaves us with the current peak in human evolution, which would be me (and you, I suppose).
So, how does the evolution of color vision map onto the above? Ah, ha! That's the million dollar question, isn’t it? Until
relatively recently, many folks worked under the incorrect assumption that the path from the original life forms to humans was
largely one of monotonic improvement. In the case of color vision, for example, many folks assumed that the evolutionary path
started with black-and-white vision and progressed first to dichromatic color vision and then to trichromatic color vision.
However, more recent developments in our understanding of the paleontological record – coupled with new tools and
techniques in molecular biology – have revealed that the picture (if you’ll forgive the pun) is far more complex.
Let's take things one step at a time. First, even as a "thought experiment," it would seem unlikely that rod cells preceded the earliest cone cells. This is because rod cells are so much more sensitive to light than are cone cells – that is, they are the more refined and specialized of the two types of photoreceptor – which makes it logical that at least one type of cone cell evolved first. In fact, several lines of evidence now point to the fact that rod cells are derived from cone cells.
So when did the first cone cell evolve? Well, the ancestor of all animals with bilateral symmetry may have evolved anywhere
from 550 million to 1,000 million years ago, so this is the age range during which photoreceptors first evolved.
Furthermore, photoreceptors may have evolved twice or even more times, although this was probably from the same precursor cell population.
Either that, or there was a very early split into two very different types after the emergence of the first photoreceptor. Note that we aren't
talking about rods and cones here; instead we're referring to the difference between ciliated photoreceptors (like the ones we use for vision)
and rhabdomeric photoreceptors (like the ones arthropods use for vision).
Be this as it may, at some stage along the path – say 800 million years ago on a Wednesday afternoon following a small lunch – some multi-cell organisms
managed to develop photoreceptors that gave them the ability to detect and respond to some form of light. Purely for the sake of discussion, let's
begin by assuming that these first photoreceptors were cones that were primarily sensitive to ultraviolet (UV) light.

Observe that the spectrum as perceived by these creatures would have been monochromatic, which – in this context –
means "having or appearing to have only one color." The point is that they would have had the ability to perceive only
differences in the intensity of the band of wavelengths to which these cones were sensitive. This is why we've represented their
"perceived" spectrum as being gradations of black-and-white. Also, the fact that these creatures had only one type of cone
means that they would be classified as monochromats. (Note that the 360 nanometer peak sensitivity associated with these
cones is an educated guess based on the capabilities of existing life forms.)
It's important to remember that the idea that the first cones were primarily sensitive to ultraviolet light is purely conjectural
(an alternative scenario is presented a little later in this topic). So why would ultraviolet light make a good candidate? Well, ultraviolet
radiation is more energetic than what we now consider to be the visible portion of the spectrum, which would make it "easier" for a
biological system to evolve to detect it.
Another possibility is that the first photoreceptors were used for negative phototaxis (where "phototaxis" refers to
the influence of light on the movements of primitive organisms). Ultraviolet light is harmful (this is what gives us skin cancer). Early in
the earth's history, the ozone layer didn't protect us like it does now. Initially this may not have presented too much of a problem, because the
first animals probably lived in water deep enough to protect them from harmful radiation. As animals began to come closer to the surface, however,
they faced new challenges, including slow death caused by overexposure to ultraviolet radiation. Thus, the ability to detect ultraviolet and move
away from it would have provided an evolutionary advantage.
Before we proceed, this is probably a good time to briefly consider the way in which cones are actually formed and the way in which they perform
their magic. One way to think about this is to visualize an "antenna" formed from a molecule of retinal, which is a derivative of vitamin A
(our bodies produce vitamin A from the beta carotene found in many of the foods we eat, including – of course – carrots).
The role of the retinal molecule is to convert incoming light rays (photons) into corresponding electrical signals that can be processed by other
structures in the eye and – ultimately – by the brain. Each cone is formed from a large number of these "antennas" (say 100 million or more).
Each retinal molecule is surrounded by an associated "pigment" molecule. The purpose of this pigment molecule – which is actually
an incarnation of the protein iodopsin – is to "tune" the sensitivity of the cone to a particular band of frequencies. The pigment molecules
for the ultraviolet cones (introduced above) and the blue, yellow, orange-red, blue-green, and yellow-green cones (discussed below)
are all formed from different "flavors" of iodopsin – only a few of the amino acids located near the site where the
iodopsin binds to the retinal molecule are varied in each of the proteins. Collectively, these pigment molecules are referred
to as the opsins.
Now, remember that the idea that UV cones came first is purely speculative. In fact, here's another hypothesis that, given the data,
is equally plausible. One of the things we consider to be "noise" in vision is the thermal isomerization of pigments (where isomerization is
what normally happens when a photopigment absorbs a photon of light). This can also occur when the pigment gets jostled or absorbs a very
long-wavelength photon. There are more of these long-wavelength photons and more jostling at higher temperatures, so photoreceptor signals
are noisier at higher temperatures. But one cell's trash is another cell's treasure. Thus, it may very well be that the first photoreceptors were
not visual at all. It could be that the first photoreceptor was a "temperature sensor." Thermal isomerizations are more likely in pigments with
peak absorptions at longer wavelengths, so the first photoreceptor may have been an orange-red cone with a peak sensitivity of say 625 nanometers
as opposed to an ultraviolet cone with a peak sensitivity of 360 nanometers. (The 625 nanometer peak sensitivity associated with the orange-red cone
is an educated guess based on existing life forms.)

So the UV cone may have been first, or the orange-red cone, or even one of the blue, green, or yellow cones discussed below. All of these scenarios
are consistent with the existing data. The point is that having even a rudimentary form of vision obviously conveyed a tremendous evolutionary advantage, such
as the ability to sneak up on your visionless contemporaries, tap them on the metaphorical shoulder, and shout "Boo!" (Of course, you could simply eat
them if you weren't feeling in a party mood.)
On the downside, having only one type of cone cell means that you're limited to seeing only a small portion of the electromagnetic spectrum.
If you can extend your visual capabilities to encompass additional portions of the spectrum, this will obviously convey an even greater
evolutionary advantage. Thus, during the course of the next several hundred million years (sometime before 450 million years ago), our ancestors
evolved four different types of cone pigments. (When we use the term "ancestors" in this context, we are referring to the
creatures that were to evolve into vertebrates, dinosaurs, mammals, primates and – ultimately – humans.) How good is this date?
Well, we know that diversification of ciliary photoreceptors into four spectral cone types occurred some time before the emergence
of the most recent common ancestor of parakeets and goldfish, and this is generally taken to be around 450 million years ago.

Creatures with only two types of cones are called dichromats; those with three types of cones are called
trichromats; and those with four types of cones are called tetrachromats. Observe that we've illustrated the spectrum as perceived by
these early creatures in two different ways. One representation shows multiple bands of black-and-white intensity. The reason for this is that,
at the time the second type of cone cell evolved, it seems likely that the signals being output from both types of cone cells were fed directly to the
creature's nervous system and/or brain. That is, there is a strong possibility that the "comparator" cells we now use to
compare the relative outputs from different cone cells had not yet evolved in those early years. Similarly, it's more than
possible that the "comparator" cells were still not present when the third and fourth cone cells evolved.
Having said this, it may be that these monochromatic representations paint a somewhat bleaker picture than was actually the
case (again, you'll have to forgive the pun). This is because – even without the presence of special "comparator" cells –
each type of cone cell would respond to different intensities in its own portion of the spectrum. For this reason, perhaps we
should visualize the spectrum perceived by these creatures as being more like the "alternate possibility" portion
of the preceding illustration. And, of course, there is always the possibility that these creatures had evolved the special
"comparator" cells used to compare the relative outputs from different cone cells, in which case they might have perceived the
spectrum in a similar manner to the way we do now.
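To make the "comparator" idea a little more concrete, here's a purely illustrative Python sketch (my own invention, with made-up response values, not anything measured): without comparators, each cone type just reports an intensity in its own band; with comparators, the brain also receives the differences between cone types, which is the raw material for hue.

    def without_comparators(cone_outputs):
        # Each cone type simply reports how strongly it is being stimulated;
        # on its own, each signal is just a band-limited "black-and-white" intensity.
        return dict(cone_outputs)

    def with_comparators(cone_outputs):
        # Hypothetical comparator cells report the *relative* outputs of each pair
        # of cone types - the comparisons from which a hue could be derived.
        names = list(cone_outputs)
        return {f"{a} minus {b}": cone_outputs[a] - cone_outputs[b]
                for i, a in enumerate(names) for b in names[i + 1:]}

    cones = {"uv": 0.1, "blue": 0.7, "yellow": 0.3}   # made-up responses to one patch of light
    print(without_comparators(cones))
    print(with_comparators(cones))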
As clever as they are, one problem with cone cells is that they function only in bright light. For this reason, rod cells appeared at some stage in the game.
As we discussed in the previous topic, rod cells are much more sensitive than cone cells and they give their owners the ability to see at night
(assuming some level of moonlight and/or starlight). We can visualize rod cells as having large numbers of "antennas" (say 100 million) formed from the
same retinal molecule we find in cones. In this case, however, the retinal is surrounded by a pigment molecule formed from the protein rhodopsin.

We aren't sure exactly when rod cells appeared on the scene, but our rods (the ones that eventually ended up in human eyes) probably
evolved after the split between jawless and jawed vertebrates; let's say somewhere around 450 to 500 million years ago just to give round numbers.
As we mentioned in the previous topic, some references state that the peak sensitivity of rod cells is close to the main spectral component of
moonlight. Based on this, some folks hypothesize that rod cells first evolved in nocturnal animals. However, we also noted that – in fact –
the peak absorption of rod cells (498 nm) is not particularly close to the main spectral component of moonlight, which actually
occurs anywhere between 548 nm and 575 nm, depending on your source of data.
In reality, we don't really know why rod cells peak where they do. In a classic paper published in the Quarterly Review of
Biology back in 1990, author Tim Goldsmith goes through several possible explanations for the position of the peak of vertebrate
rod pigments (virtually all such pigments have a peak near 500 nm in terrestrial animals). The bottom line is that Goldsmith
found none of the common explanations – including any relationship to the main spectral component of moonlight –
to be plausible. (If you wish to peruse this paper yourself – and be warned that it's not an easy read – then
you'll have to subscribe to the scholarly archive known as JSTOR.)
Now, after pondering the previous illustration for a while, you are probably saying to yourself: "Just a moment; as rod cells
primarily respond to the wavelengths we now regard as being in the cyan-green portion of the spectrum, why would they not perceive some
form of color, and therefore why would we not class these creatures as being pentachromats?" Well, there are a number of answers to this
as follows:
- All of our previous cone-and-rod response curve illustrations have sported the words "Normalized response/sensitivity" on
the vertical axis. In simple terms, this means that we've artificially drawn the curves such that they have the same maximum
height. As we noted earlier, however, rod cells are MUCH more sensitive than cone cells. The next illustration – which is
NOT to scale – is intended to provide a "feel" for this difference in sensitivity. The bottom line here is
that rods and cones simply don’t play together in the same lighting conditions. Cones require
bright light to function, but rods are saturated in a bright light environment and aren’t in a position to generate any
useful data. By comparison, in the dim lighting conditions where rods come into their own, cones shut down and provide
little or no useful information.
- In the case of modern animals (and we are probably safe in assuming that this was also the case with the creatures of
yesteryear), the only information used from rod cells is intensity, which is passed through the eye's luminosity channels;
that is, signals from rod cells do not make any contribution to the eye's color channels.
- The terms pentachromat, tetrachromat, trichromat, and dichromat are
generally associated with having five, four, three, or two types of cone cells, respectively. Similarly, creatures
like Owl Monkeys that have only one type of cone cell (along with their rod cells) are known as monochromats (as the
only nocturnal monkeys, Owl Monkeys are also known as "Night Monkeys"). Furthermore, in the case of modern creatures
like the skate (small cousins of the giant rays that have roamed the earth's oceans for around 400 million years) that have
only rod cells and no cone cells – and in the case of humans whose cone cells don’t function – the
term rod monochromat is used to distinguish this type of monochromat from creatures sporting only a single type
of cone cell.

Wow: four types of color cone cells plus rod cells – things were starting to look pretty good back then (once again, you'll have to forgive this
turn of phrase – I can't help myself). Sad to relate, however, sometime between 310 and 125 million years ago our
ancestors lost first one and then two of these pigments. We don't know exactly when or why, although one possibility is that
these creatures became nocturnal.
So, how did we arrive at the dates noted in the preceding paragraph? Well, the first loss had to occur after the divergence between mammals and
reptiles, which is generally taken to have occurred somewhere between 288 and 338 million years ago (we averaged and rounded this out to 310 million years ago).
Next, all living mammals are divided into three groups: monotremes (those who lay eggs), placentals (those who give birth to live and more
mature young), and marsupials (those who give birth to live, less mature young that they subsequently nurse in pouches). The point is that
we also know that the first cone loss had to have occurred before the split between placentals and marsupials, which is thought to have
been some time in the range of 130 to 175 million years ago.
The earliest known placental mammal is
Eomaia scansoria, while the most
primitive and oldest known relative of all marsupial mammals is
Sinodelphys szalayi. Both of these
creatures lived around 125 million years ago during the early Cretaceous period. The second cone loss most likely occurred shortly
after the marsupial/placental split, which occurred prior to the emergence of the most recent common ancestor of all placentals.
The most recent common ancestor of all placental mammals appears to have had only two cone pigments. Creatures that still
use this system – such as dogs – are known as dichromats.
Also, at some stage along the way, creatures evolved the "comparator" cells in their eyes that allowed them to compare the
output signals from different cone cells and to perceive the results as being a range of different colors. This means that
modern dichromats like dogs probably enjoy a far richer visual experience than did their antediluvian counterparts.
Don't worry, we're almost home. Sometime between 45 and 30 million years ago, the primates that were to evolve into humans
"split" their yellow cones into two new types: blue-green and yellow-green. Actually, the situation with regard to primates is
complicated because the majority of New World monkeys are so unusual. But sticking with the lineage to us, it's almost certainly true
that the duplication event that gave us back a third photopigment occurred shortly after the split between New and Old World monkeys.
That would have been some time after Eosimias,
which was an early primate that lived about 40-45 million years ago in China.
Furthermore, analysis of the skull bones of a bunch of primates suggests that the cone split occurred some time around the appearance
of Aegyptopithecus zeuxis. (Such analysis involves
comparing fossil skulls of ancient creatures with those of modern animals whose visual ability we know and understand, and using
any similarities or differences as the basis to hypothesize different evolutionary scenarios.)
Also known as the
Dawn Ape, Aegyptopithecus was a small, tree-dwelling, fruit-eating animal that lived some 35 to 33 million years ago in the
early part of the Oligocene epoch. So, taking all of this into account, 34 million years ago is probably a good estimate for the
time of the cone split.
Last but not least, we also have the "comparator" cells that allow
us to use these cones to perceive the entire visual spectrum. This leaves us in our current situation in which normal humans
have three types of cones and are therefore known as trichromats.
And there you have it. As you can see, getting to our present state of color vision has been something of a roller coaster
ride, but the final results are rather spectacular, aren’t they? Having said this, different creatures have evolved their visual
systems in different ways, and – as we'll discover in the next topic on Tetrachromats, Pentachromats, Etc.
– some of these developments put us to shame. (See also the Interesting Nuggets of Trivia topic for some
additional discussions on creatures with rods but no cones, creatures with cones but no rods, and... the mind boggles!)
So let's pull all of the above into a timeline diagram that combines our original
representation of evolution in general with the evolution of color vision:

And finally, before we proceed to the next topic, throughout the discussions above we've been throwing words around like the Cambrian Period, the Mesozoic Era,
and the Paleocene Epoch. But what do terms such as Eon, Era, Period, Epoch, and Age (in the geologic sense)
actually mean and how do they relate to each other? Well, in the context of geology, an eon is defined as the largest division of geologic time,
comprising two or more eras; an era is defined as a major division of geologic time composed of a number of periods; a period is the basic
unit of geologic time, during which some standard rock system is formed (a period comprises two or more epochs and is included with other periods in an
era); and an epoch is defined as a sub-division of a geologic period during which a geologic series is formed. Finally, the term age is
used to refer to some span of time that is shorter than an epoch and that is distinguished by some special feature, such as the Ice Age.
The problem is that the actual definitions of the names and times associated with these various geologic terms are somewhat fluid, because geologists
are constantly making new discoveries that cause them to reassess and "tweak" things. As an example of what we mean, consider the following illustration,
which reflects the way in which various reference sources (books, websites, etc.) would have presented things until relatively recently:
Isn't the above a pretty diagram? (It should be, it took me long enough to draw!) As fate would have it, however, geologists have recently gone through
a fairly thorough revision of the time scale. Personally, I was a little concerned about the Paleocene Epoch (as I'm sure you will understand), but it looks like
this has not been revised away (at least, not yet). Some of the more significant changes are as follows:
- There is no longer a Tertiary Period. This used to stretch from 65 to 1.8 million years ago and encompass the Paleocene, Eocene,
Oligocene, Miocene, and Pliocene Epochs. (See also point 3 below).
- People are still wrestling with the subject of the Quaternary Period. This used to stretch from 1.8 million years ago to today and encompass the
Pleistocene and Holocene Epochs, where the Holocene is the name given to the last 10,000 years or so; that is, since the end of the last major glacial event,
or "Ice Age", to the present day. (See also point 3 below).
It should be noted that people who study relatively recently deceased things really like the idea
of the Quaternary, so there are a couple of different recommendations on how to keep using it as a term (only one of which keeps it as a "period").
If you have more fortitude than I, you can wade through the Recommendations by the Quaternary Task Group,
which operates under the auspices of the International Commission on Stratigraphy (ICS) of the International Union of Geological Sciences (IUGS) and also
under the auspices of the International Union for Quaternary Research (INQUA).
Of the two recommendations that the committee members backed, the one that does not consider the Quaternary to be a period appears to be winning.
- An acceptable version of current consensus (the "latest-and-greatest" as it were) that integrates currently available stratigraphic and geochronologic
information is known as the Geologic Time Scale 2004 (GTS2004). Boiled down, the current state-of-play
is as follows:
- The old Tertiary Period has been renamed the Paleogene Period, which is truncated at the end of the Oligocene Epoch (that is, the
old Tertiary Period used to encompass the Paleocene, Eocene, Oligocene, Miocene, and Pliocene Epochs, but the new
Paleogene Period encompasses only the Paleocene, Eocene, and Oligocene Epochs).
- The old Quaternary Period has been renamed the Neogene Period, and this now extends to encompass the Miocene and Pliocene Epochs (that is, the
old Quaternary Period used to encompass only the Pleistocene and Holocene Epochs, while the Neogene Period has been extended
to also encompass the Miocene and Pliocene Epochs).
- The Quaternary is now regarded as being a sub-period of the Neogene Period.
- The Holocene is now regarded as being a sub-epoch of the Pleistocene Epoch.
- A lot of the dates associated with the various Periods and Epochs have been "tweaked". In the following illustration I've mostly rounded things to the
nearest million years. (Note that work is already underway on the next revision of the Geologic Time Scale as I pen these words.)

More Than Three Color Receptors: Tetrachromats, Pentachromats, Etc.
Just when you thought things were complicated enough, some animals have the ability to detect infrared. For example, rattlesnakes have
infrared detectors in a hole or pit in front of each eye (this is why they are called pit vipers).
Furthermore, some birds and bees have four different types of color receptors in their eyes, so they are known as tetrachromats (bees in particular
can see much further into the ultraviolet than humans). But wait, there's more, because some butterflies have five different types of color receptors, so
they are known as pentachromats.
Actually, things become even more amazing, because I recently read an article (I can't remember where – I should have made a note – curse me for a fool!)
that reported the discovery of a species of fish (Cichlids from the East African Rift Lakes) whose genes can code for seven different types of color
receptor (cone) pigments. Having said this, only three types of color receptor pigments are primarily expressed ("turned on") in any particular fish. So why bother?
Well, if you take a group of these fish that are living at one side of a lake in certain lighting conditions, they will be born with a specific set of three
color receptor pigments in their "turned on" state. Meanwhile, if you take another group of fish (remember that these are the same species) living in a
different area of the lake with different lighting conditions, then members of this group will be born with a different set of color receptor
pigments in their "turned on" state. The theory (as I recall it) is that as these fish move around – or as the environment in the area where they
live change over time – they can quickly evolve such that the next generation will be better equipped to handle the new environmental conditions.
But wait, because there's yet more! Known as "sea locusts" by the ancient Assyrians, mantis shrimp are little tricksters, because they aren’t actually
shrimp or mantids, it's just that they bear a physical resemblance to both sea-living shrimp and the land-living praying mantis. (The term "mantid"
refers to a group of around 1,800 carnivorous insects – one of a few types of insect that can rotate their heads.) But we digress. These little
rapscallions – which can grow as big as 30 cm and can live for 20 years or more – are said to have the most complex eye known in the animal kingdom.
The intricate details of their visual systems (three different regions in each eye, independent motion of each eye, trinocular vision, and so forth) are
too many and varied to go into here. Suffice it to say that scientists have discovered some species of mantis shrimp with 16 different types of photo-receptors:
8 for light in (what we regard as being) the visible portion of the spectrum, 4 for ultraviolet light, and 4 for analyzing polarized light. In fact, it is
said that in the ultraviolet alone, these little rapscallions have the same capability that humans have in normal light.
But the really amazing point to note here (perhaps one of the most unexpected discoveries toward the end of the twentieth century with regard to color
vision) is that an extremely small percentage of female humans are tetrachromats because they have four different types of cone cells in their eyes.
Now, each type of cone can detect about 100 gradations of color, so the combination of three different cone types allows a typical person to distinguish
100 × 100 × 100 = 1 million different hues. A true tetrachromat has a fourth type of cone whose peak sensitivity falls halfway between
those of the standard blue-green and yellow-green cones. Theoretically, this means that these folks may be able to distinguish as many
as 100 × 100 × 100 × 100 = 100 million different hues!
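Here's the same back-of-the-envelope arithmetic as a tiny Python sketch (the figure of roughly 100 gradations per cone type is the one used above; the real numbers are, of course, much fuzzier):

    GRADATIONS_PER_CONE = 100   # the rough figure used in the text above

    for cone_types in (2, 3, 4):
        print(f"{cone_types} cone types -> about {GRADATIONS_PER_CONE ** cone_types:,} distinguishable hues")
    # 2 cone types -> about 10,000; 3 -> about 1,000,000; 4 -> about 100,000,000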
So what is it like to be a tetrachromat? That's a tricky one, because neither we nor they have the words to describe this sort of thing (how would
you describe color to someone who was colorblind?). What we do know is that tetrachromats can make more color distinctions between shades that appear
to be identical to the majority of us. For example, there's an interesting article about an interior decorator named Susan Hogan. Susan can look at three samples of beige paint that appear identical to her clients,
but she can detect a gold undertone in one, a hint of green in another, and a smidgen of gray in the third. As another example, Susan can look at a river
and distinguish relative depth and the amounts of silt in different areas of the water based on subtle differences in shading that the rest of us
simply don't see at all.
So, it's probably safe to say that tetrachromats have a much richer visual experience in the real world than do the rest of us. However, there is a downside,
because the images presented on display devices like television sets and computer screens – which are formed by mixing the three additive primary
colors – do not appear as realistic as they do to the rest of us (similarly for images in print, which are essentially formed by mixing the
three subtractive primaries).
Now, this is a little tricky, but I wanted to give a hint of how an image that looks "photo-realistic" to us might appear to a tetrachromat.
Consider the pictures below in which I'm wearing a red Hawaiian shirt standing between a blue car and a yellow car in the foreground with a
green field and light blue sky in the background. In the upper version I've used all of the colors available to me; by comparison, the lower image
employs a cut-down color palette, resulting in everything looking "flatter" and "chunkier".

This is obviously very much exaggerated, but I really think that it gives us an idea as to the downside of being a tetrachromat. For example,
while the rest of us are watching a nature program in high-definition on a state-of-the-art plasma display going "Oooohhh" and "Aaahhh"
and saying how realistic it looks, any tetrachromats amongst us are probably thinking: "Well, it doesn't look very convincing to me!"
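If you'd like to create a roughly similar "flatter and chunkier" effect on one of your own photographs, here's a minimal sketch using the Pillow imaging library. To be clear, this is not how the illustration above was produced; it's just one easy way to throw away color information, and "photo.jpg" is a stand-in for whatever image you have to hand.

    # Reduce an image to a small color palette so that smooth gradations collapse
    # into flat, "chunky" regions of color.
    from PIL import Image   # pip install Pillow

    original = Image.open("photo.jpg").convert("RGB")   # "photo.jpg" is a placeholder file name
    flattened = original.quantize(colors=16)            # keep only 16 representative colors
    flattened.convert("RGB").save("photo_flat.jpg")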
(See also the Interesting Nuggets of Trivia topic for some
additional discussions on creatures with only rods [no cones] and creatures with only cones [no rods].)
Genetically Modifying Human Vision?
With regard to the previous topic, many of us will ponder what it might be like to be a tetrachromat with four types of color
cones. But why stop there? What if we had five different types of color receptors like butterflies, or maybe even the sixteen photo-receptors sported
by the mantis shrimp?
Of course, there are two major problems associated with all of this. The first consideration is whether or not it would be technically feasible to
genetically modify ourselves such that our eyes contained more types of photo-receptors. And, even if this were possible, would our brains be able to
adapt to process the additional information coming from the new receptors in a meaningful way?
Well, both of these questions appear to have been answered by researchers at Johns Hopkins and the University of California at Santa Barbara in a
study published in the journal Science on March 23, 2007 (you can access different articles pertaining to this announcement
Here,
Here, and
Here). (See also the discussions on using gene therapy to
restore cone cells in the Color Blindness topic earlier in this paper.)
As we discussed in The Evolution of Color Vision topic, sometime before 450 million years ago, the creatures that were to evolve into
vertebrates, dinosaurs, mammals, and primates had evolved four different types of cone pigments. Then, sometime between 310 and 125 million years ago,
our ancient ancestors lost first one and then two of these pigments. The result is that most mammals – including mice – are dichromats with only two types of color cones, which means they perceive a very limited
color palette as illustrated below:
So, what does the world look like to the average mouse? Well, for a start, even short people would look really tall, but that's beside the point.
Just to give us an idea, look at the two pictures below. In these images I'm wearing a red Hawaiian
shirt standing between a blue car and a yellow car with a green field behind me. The lower image has been modified to simulate
the effects of having only blue and yellow cones like a mouse.
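As an aside, if you want to fake a very crude version of this effect on your own photographs, the sketch below simply averages the red and green channels together, collapsing the red-green distinctions while leaving the blue-yellow axis more or less intact. I should stress that this is not the method used to prepare the image described above, and proper dichromacy simulations work in cone (LMS) space rather than in RGB; this is only meant to give a feel for the loss of information ("photo.jpg" is again a placeholder name).

    # A deliberately crude "two-cone" approximation: replace the red and green channels
    # with their average so that red-green differences disappear.
    import numpy as np
    from PIL import Image   # pip install Pillow

    img = np.array(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
    red_green_mean = img[..., :2].mean(axis=2)
    img[..., 0] = red_green_mean   # red   channel := average of red and green
    img[..., 1] = red_green_mean   # green channel := average of red and green (blue left untouched)
    Image.fromarray(img.astype(np.uint8)).save("photo_two_cone.jpg")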
The point is that the researchers mentioned above arranged to introduce a single human gene into the mouse genome. The result was to add the human blue-green cone
type to the existing blue and yellow mouse cones. Furthermore, tests show that the brains of these genetically augmented mice adapted to efficiently
process sensory information from their new receptors, thereby enabling them to perceive more colors as shown in the following diagram:
One issue that was not mentioned in these articles was whether or not the modified mice would pass the new gene on to their offspring. So I asked around
and was informed that the short answer is: "yes" [a slightly longer version is that: "the introduced copy of the gene is stable in the colony,
so it will remain in that gene pool as long as (a) the colony survives and (b) no one goes out of their way to breed it out of the colony" – scientists,
you have to love them!].
Now, applying similar techniques to humans could take a while, but as a first step it may be that related procedures could be used to "repair"
defective color vision. That is, rather than giving normal people unique abilities, we might start off by trying to give abnormal people
(the folks with different forms of color blindness) normal abilities. For example, around one-to-two percent of the human male population have only two
cone types and could benefit from being engineered to produce a third. An additional seven-or-so percent of men have three cone types but one of them is so
abnormal as to limit their ability to make the same color discriminations the rest of us take for granted. All of these folks could – in principle –
be "cured" through genetic manipulations.
And as for the future? Well, if I were a betting man, my guess would be that it won't be long (relatively speaking) before someone somewhere tries to augment
humans with additional photo-receptor types beyond the standard three. We might start with the existing human tetrachromat cone, but it won't be long before
some "bright-spark" decides to experiment by adding say an ultraviolet cone, and who knows where things may go from there? Furthermore, there are also implications
with regard to introducing new sensory receptors for such things as our olfactory (smell) and gustatory (taste) systems. Why have drug-sniffing dogs when you can
have drug-sniffing policemen? It really is going to be a Brave New World.
An Amazing Experiment
The way in which our eyes detect and resolve different colors (as discussed in the previous topics) is amazingly clever, but
this physical portion of our visual systems is supplemented by an incredibly sophisticated color-processing component
within our brains. Assume for the sake of argument that you particularly like the shade of green you see in your lawn – so much so that you
instruct a carpet manufacturer to make you a rug in exactly that color. The great day arrives and your new rug is delivered.
The first thing you do is to drag this new floor covering into your garden, lay it
next to your grass, and confirm that they are indeed the same color. Next, you take the rug into your house and place it on
your recreation-room floor. Not surprisingly it remains the same color ... or at least, it appears to.
The fact that an object generally looks much the same color-wise irrespective of where we put it and, within certain constraints, regardless of
ambient lighting, is something we tend to take for granted. We can therefore only imagine the surprise of the creators of the first color television
cameras when they discovered that objects appeared to have strikingly different colors according to whether they were filmed inside a building or outside.
While engineers worked to correct the problem, people started to question why the same effect didn't occur with our eyes. Eventually it was
realized that this effect did indeed occur, but that our brains were correcting for it without our even noticing. One of the most effective ways
to demonstrate exactly what it is our brains are doing to handle this color problem is by considering the following illustration:
In order to perform this experiment, we commence by painting a board with a wide variety of colors in various interlocking geometric shapes. Next,
three light sources – which generate pure red, green, and blue light – are all set to the same intensity and used to light up the board.
The combination of these pure sources effectively illuminates the board with white light.
Next, a spectrum analyzer is pointed at the board. The analyzer is able to separate and distinguish the various bands of the spectrum that it's receiving.
The analyzer also has a telescopic lens, such that it can be focused on individual colored shapes on the board. Consider a shape that's painted primary red.
In this case the paint will reflect most of the light from the red light source and it will absorb the majority of the light from the green and blue sources.
Thus, if we were to point the spectrum analyzer at this red area, the light received by the analyzer will show a large red component, along with relatively
small green and blue components as shown at point (a) in the following illustration:

Similarly, if we point the analyzer at areas that are painted primary green or primary blue, it will see large green or blue components as shown in
points (b) and (c), respectively. Now, suppose we consider two shapes on the board that aren't painted in primary colors. Let's say that one of these shapes
is painted a lightish brown, while the other has a pinkish sort of hue; also that both colors reflect some amount of red, green, and blue light, but
in different proportions. Thus, if we pointed our spectrum analyzer at these two shapes in turn, we'd be able to detect the differences in their color components:

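As a toy model of this setup (all of the reflectance values below are invented purely for illustration), we can treat each analyzer reading as nothing more than the source intensity multiplied by the paint's reflectance, channel by channel:

    # Toy model of the board-and-analyzer experiment: reading = source intensity x reflectance.
    sources = {"red": 1.0, "green": 1.0, "blue": 1.0}    # three equal sources = "white" lighting

    reflectance = {                                      # invented values for three painted shapes
        "primary red": {"red": 0.90, "green": 0.05, "blue": 0.05},
        "brownish":    {"red": 0.55, "green": 0.35, "blue": 0.15},
        "pinkish":     {"red": 0.80, "green": 0.45, "blue": 0.40},
    }

    def analyzer_reading(shape):
        return {band: sources[band] * reflectance[shape][band] for band in sources}

    for shape in reflectance:
        print(shape, analyzer_reading(shape))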
Our obvious reaction is that the differences between the amount of red, green, and blue light being reflected from these shapes gives each of them their
own distinctive color, and that's certainly true to an extent, but there's more to this than meets the eye (if you'll forgive the pun). Suppose we point our spectrum analyzer at the pinkish shape,
and then vary the intensities of our three light sources to create an artificial environment in which the light that's reflected from this shape has exactly
the same characteristics we previously recorded from the brownish shape. The question is: "What color is the pinkish shape now?"
Common sense (which actually isn't as common as it used to be) would dictate that if we're now receiving exactly the same color components from the
pinkish shape that we originally received from the brownish shape, then the pinkish shape should look exactly the same color as the brownish shape
used to (and also that all of the other shapes on the board will have changed color to one degree or another). Well, take a firm grip on your seats,
because here comes the amazing part ... it does and it doesn't and they do and they don't. (If you think this is difficult to follow, just wait until
you try to explain it to someone else!)
What do we mean by this? Well, let's suppose we take a large piece of white card the same size as the board; that we cut out a piece exactly the same
size, shape, and position as our original pinkish area; and that we then place the card in front of the board. In this case – assuming that we leave
the light sources generating our artificial lighting environment – our shape that was originally pinkish would now appear to be a lightish brownish
color. Similarly, if we did the same thing for any of the other shapes, they would all appear to have changed color to some extent.
However, if we were to remove the white card so that we could again see the entire board, the pinkish shape would once again appear to be,
well ... pinkish, and all of the other colors would appear to be pretty much as we'd expect!
As we said, this is pretty amazing. What's actually happening is that if you can see only the one shape, then your brain has no other recourse than
to assume that its color is determined by the different proportions of red, blue, and green light that are being reflected from that shape. By comparison, if you
can see that shape's color in the context of all of the other shapes' colors, then your brain does some incredibly nifty signal processing, determines
what colors the various shapes should be, and corrects all of the colors before handing the information over to the conscious portion of your mind.
To put this another way, your brain maintains a three-dimensional (3D) color-map, in which every color is weighted in relation to every other color.
Thus, when you can see the whole board, your brain automatically calculates all of the color relationships and adjusts what you're actually seeing
to match what it thinks you should be seeing.
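One simple way to get a feel for this kind of "relative" correction is gray-world normalization, in which every reading is judged against the channel averages taken over the whole scene; a change in the light sources then affects every shape equally and largely cancels out. To be clear, this is only a loose analogy of my own choosing, not a claim about the algorithm the brain actually uses, and the numbers below are the same invented reflectance values used in the earlier toy model.

    # Gray-world normalization: describe each shape's color relative to the scene-wide
    # channel averages, so that a global change in the illuminant mostly cancels out.
    BANDS = ("red", "green", "blue")

    def normalize_scene(readings):
        averages = {b: sum(shape[b] for shape in readings.values()) / len(readings) for b in BANDS}
        return {name: {b: round(shape[b] / averages[b], 3) for b in BANDS}
                for name, shape in readings.items()}

    white_light = {"brownish": {"red": 0.55, "green": 0.35, "blue": 0.15},
                   "pinkish":  {"red": 0.80, "green": 0.45, "blue": 0.40}}

    # The same two shapes under a strongly red-shifted illuminant (every red reading tripled):
    red_light = {name: {"red": 3 * v["red"], "green": v["green"], "blue": v["blue"]}
                 for name, v in white_light.items()}

    print(normalize_scene(white_light)["pinkish"])
    print(normalize_scene(red_light)["pinkish"])    # the same relative color, despite the new lighting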
Now all of this is quite impressive, but you’re probably asking yourself: "Why would nature take all of this trouble?" After all, does it really matter
if your lawn-colored rug looks a slightly different shade of green when it's inside the house? In fact, it's possible to set up a television camera to emulate
what we'd see if our brains weren't performing all of their "behind the scenes" activity, and the results are pretty staggering. For example, if you were to
take such a camera and film a yellow taxi-cab as it progressed down the street, you would see it dramatically changing color as it passed through
different lighting conditions such as shadows.
Thus, the signal processing performed by our brains actually has survival value. For example, life wouldn't be too much fun if you were to find
yourself reincarnated as a small rodent of a type that happens to be a favorite after-dinner snack for saber-tooth parrots (this once happened to a
friend of mine and it's no laughing matter, let me tell you!). As bad as this may seem, life could quickly get to be an awful lot harder if
everything in the jungle assumed new colors every time a cloud passed by, and if the parrots themselves were constantly changing color as
they passed through shadows during the process of swooping down on you from above.
Similarly, if you were a prehistoric hunter, explaining the local flora and fauna to a friend who was visiting for a long weekend could pose some problems.
We might imagine a conversation along the lines of: "One thing you've got to watch out for around here is lions. That's a lion over there, the big yellow thing ...
the big blue thing ... the thing that's stood next to the parrot that just turned from violet to emerald green ... good grief, where did that cloud come from?"
Left-to-Right and Top-to-Bottom
In our discussions in the How Color Vision Works topic, earlier in this paper, we said that – following
some processing in the eye itself – signals are passed along the optic nerve into the visual cortex region of the brain. We also noted that
this was something of a simplification, and we were right, because the truth is a little more complicated as we shall see.
Also, toward the end of the Turning Things Upside Down topic, we mentioned that the way in which the lenses in our eyes function
means that the images we see are "upside down" by the time they strike the retina. In reality, of course, images are also inverted in the horizontal
plane as shown in the following illustration, which assumes we have a "bird's-eye view" looking down on the top of someone's head.
The term hemifield refers to one of two halves of a sensory field. In the case of vision, if one was to draw a line straight out from
one's nose into the distance, the fields of view to the left and right of that line are referred to as the left hemifield and right hemifield,
respectively.
For both eyes, information from the left hemifield is projected onto the right-hand side of the retina, while information from the right hemifield is
projected onto the left-hand side of the retina. In the case of the left eye, the right-hand side of the retina is referred to as the left nasal retina
because it's close to the nose, while the left-hand side of the retina is called the left temporal retina because it's close to the left temple.
By comparison, in the case of the right eye, the right-hand side of the retina is called the right temporal retina, while the left-hand side of
the retina is called the right nasal retina. (In this context, the term "temple" refers to the flattened region on either side of the forehead in human beings).
Now, this is where things start to get interesting. For reasons that are beyond the scope of this paper (which is another way of saying: "I don't know"),
the left half of the brain controls the right-hand side of the body, while the right half of the brain controls the left-hand side of the body. In turn,
this means that the left half of the brain is only interested in information from the right hemifield, while the right half of the brain is only interested
in information from the left hemifield.
The problem is that there is a massive amount of overlap with regard to the images seen by both eyes. The way this is sorted out is that the bunches of fibers
forming the optic nerves from both eyes first pass through an area called the optic chiasm, where they are sorted to separate the data from
the left and right hemifields.
Following the divide at the optic chiasm, the resulting bunches of fibers are referred to as the optic tract. The optic tract wraps itself
around the midbrain to an area called the lateral geniculate nucleus (LGN). After this point, the nerve fibers are known as the optic radiations,
and it is these signals that are ultimately presented to the primary visual cortex at the back of the brain.
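Just to tie the last few paragraphs together, here's a tiny Python sketch that does nothing more than encode the routing described above (the naming conventions are taken straight from the text; there is no real anatomy in here):

    # Encode the hemifield-to-retina-to-hemisphere routing described in the text.
    def route(eye, hemifield):
        retina_side = "right" if hemifield == "left" else "left"   # light lands on the opposite side of the retina
        hemisphere = "right" if hemifield == "left" else "left"    # and is handled by the opposite half of the brain
        retina_name = f"{eye} {'temporal' if retina_side == eye else 'nasal'} retina"
        return f"{retina_name}, processed by the {hemisphere} half of the brain"

    for eye in ("left", "right"):
        for hemifield in ("left", "right"):
            print(f"{eye} eye, {hemifield} hemifield -> {route(eye, hemifield)}")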
If you wish to probe further into all of this, there is a very interesting paper titled
Basic Visual Pathways from the
Washington University School of Medicine. And if your quest for knowledge knows no bounds, then a cornucopia of information is available
from the folks at the University of Utah on their WebVision site.
Seeing Sounds and Tasting Colors
The word synaesthesia (also spelled synæsthesia and synesthesia) is derived from the Greek syn, meaning "together"
or "union", and aesthesis or aisthesis, meaning "sensation" or "to perceive". Thus, depending on who you are talking to, synaesthesia
can be taken to mean "synthetic experience" or "joined sensation" or "to perceive together". And if you think this is confusing, just wait to see what's to come...
In a nutshell, synaesthesia embraces a variety of different conditions in which the stimulation of one set of sensory inputs (say sound) is simultaneously
perceived by one or more of the other senses (sight or touch, for example).
There are many different forms of synaesthesia. For our purposes here, we are primarily interested in those that pertain to color vision. One very
common type is when folks associate numbers and letters of the alphabet with different colors. For example, consider the way in which a non-synaesthete
would see the alphabet printed as black text on white paper as illustrated below:
Now consider the same alphabet – still presented as black text – as it might be seen by a synaesthete as illustrated below:
Note that the above is simply a representation created by the author of this paper. Every synaesthete (of this type) perceives their own color alphabet.
Having said this, research on a large number of synaesthetes reveals certain trends, such as the fact that 'a' is often red, 'b' is often blue, 'c' is often
yellow, and so forth.
Another interesting point is that some synaesthetes "see" the letters as being black, but "perceive" the colors as being "associated" with the letters.
By comparison, other synaesthetes actually do "see/perceive" the letters as having those colors.
And what about words? Well, let's start by considering the way in which a non-synaesthete would see a group of words printed as black text on white
paper as illustrated below:
For some synaesthetes, each word will appear as (or be perceived as being associated with) a color that is derived from the individual colors of that
word's constituent letters. By comparison, other synaesthetes may "see" or "perceive" the words as having colors that are not related to the letters associated with
their particular color alphabet. An example of this latter case might be as illustrated below:
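To make the "derived from its constituent letters" idea a little more concrete, here's a toy Python sketch. The letter-to-color assignments are invented (every synaesthete has their own), and the idea of simply averaging the letters' colors is my own simplification rather than a description of how any real synaesthete's perception works.

    # Toy model: derive a word's color by averaging the (invented) colors of its letters.
    letter_colors = {"a": (255, 0, 0),      # red    - a common association, as noted above
                     "b": (0, 0, 255),      # blue
                     "c": (255, 255, 0)}    # yellow

    def word_color(word):
        rgbs = [letter_colors[ch] for ch in word.lower() if ch in letter_colors]
        if not rgbs:
            return None
        return tuple(sum(channel) // len(rgbs) for channel in zip(*rgbs))

    print(word_color("cab"))   # a blend of the yellow 'c', red 'a', and blue 'b'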
As a slightly different example, consider the following illustration, which comprises a random assortment of the numbers 2 and 5. Can you quickly count
how many number '2' characters there are?
For the non-synaesthetes amongst us, counting the number of '2' characters in the illustration above may require a little concentration ("Did I
already get that one?"). Well, now consider the way a synaesthete might perceive this same image as shown below:
Wow! Now, it's really easy to see that there are only eight number '2' characters surrounded by a plethora of number '5'
characters. How cool! As a slight counterpoint to this, having perused this paper, my friend Johannes in Austria emailed me to say:
I do not "see" letters or words in colors, but (definitely!) colors have numbers associated with them – red, blue, brown,
white are "even", whereas yellow, orange, green, violet are "odd". And one more thing I probably don't even need to mention
because it is so obvious (grin) is that lighter colors have lower numbers.
It's important to note that synaesthesia is additive; that is, it "overlays" the primary senses. Also, we should remind ourselves that there are
many different types of synaesthesia. For example, when some synaesthetes hear music, they might see patterns of colors hovering about three feet
in front of them. A trill of the flute may appear as a collection of purple triangles and small pink dots, for example. (It is said that if a
non-synaesthete wants to get a feel for what this might be like to experience, a good start would be to watch appropriate portions of the
original Fantasia movie by Walt Disney.)
So how many of us are synaesthetes? This is really difficult to pin down, because there are so many different types (listening to music can cause
a tickling sensation of touch, or a perception of different smells, or ...), and different folks can be affected to lesser or greater amounts (a "feel" of a
color versus actually "seeing" that color). Some estimates put synaesthetes as being roughly one in 25,000, while others say one in 2,000, and still
others say as many as one in 100 may be synaesthetic.
Does this latter value seem high to you? Well, consider that if people are asked to associate different colors with different notes on a piano, the
vast majority of us will associate darker colors with lower notes and lighter colors with higher notes. Why should this be (considering that colors and
tones have nothing intrinsically to do with each other) unless we are all synaesthetic to at least some small degree?
Now, this is where things start to get interesting. The author of this paper (that would be me) is an electronic and computer design
engineer by trade. Over the course of the years, I have spent a lot of time looking at schematic (circuit) diagrams composed of symbols
representing Boolean logic functions such as AND, OR, XOR, and NOT. The way in which a non-synaesthete (like me) sees such a schematic
diagram printed in black ink on white paper is illustrated below:
I started wondering: are there any synaesthetic digital logic designers out there who would associate different colors with the various symbols?
If so, would they see our example schematic as looking something like the illustration presented below?
So I began to ask around in various blogs and articles. First I heard from a logic designer who is
synaesthetic with regard to "feeling" music. As he told me: "For me, adjacent keys on the piano have very different subjective
personalities." With regard to logic functions as discussed above, he said: "On reflection – now you ask – I do have a different feeling impression of the basic
Boolean operators OR, AND, NOT ... but nothing that intrudes into my consciousness during the course of doing my job."
And then I heard from an electrical engineer called Jordan A. Mills, who says that he does indeed perceive different colors when looking at
gate-level schematic diagrams. Jordan was kind enough to take a black-and-white schematic I created and to modify it to reflect the way in
which he perceives it as shown below.
In Jordan's case, the shapes of graphic elements seem to be irrelevant to his synaesthetic perception. A small example is the triangular clock
input to the flipflop, which is "yellow and sharp" (Jordan's words). By comparison, the triangular inverter bodies are "red and sharp" while the
bobbles on the inverters are "yellow and smooth".
Interestingly enough, Jordan noted that – while adding these colors – if he paused to think about what he was doing his perception
changed. In this case, he had to stop and think about something else for a second and then return to the diagram to make sure his perception
wasn't skewed. (Jordan also commented that this does not happen for glyphs he uses regularly such as letters and numbers – in those cases,
his perception is immediate and unchanging.)
But wait, there's more. During the course of our conversations, Jordan mentioned that he also perceives colors and textures when looking
at flowcharts. I'd never even considered this, so I created the following example and sent it to him.
Jordan responded with the colored version shown below. He noted that – in addition to colors – he also perceives "textures" associated
with the various shapes. In the case of a flowchart, the action rectangles have "soft" edges, while the decision diamonds have "sharp" or "pointy" edges.
As an aside, butterflies have chemoreceptors (taste receptors) on their feet, but this has absolutely nothing to do with what we are talking about here!
Strange as it may seem, although the concept of synaesthesia has been around for a long time, until recently no one actually knew whether these
effects were really being perceived as described by the individuals concerned, or whether they were a byproduct of some other psychological mechanism
such as memory. However, a really Interesting Article was published
on the www.PhysOrg.com website on 24 July 2007. As reported in this article, new research
published in the June issue of Psychological Science appears to indicate that synaesthetic colors are perceived in a realistic way,
just as synaesthetes report (I never doubted it for a moment).
If you are interested in learning more on this topic, there's a Very
Interesting Website maintained by Sean Day that details different types of synaesthesia, tests for synaesthesia, famous synaesthetes, and
a very interesting history of synaesthesia [did you know, for example, that Aristotle, in his On Sense and the Sensible (350 BC), established
a correspondence between flavors and colors?]. Another interesting site hosted by MIT on Synaesthesia and the Synaesthetic Experience provides individual anecdotes and interactive activities that simulate synaesthesia.
And if you're from Australia or New Zealand (or even if you're not), you may be interested in the
Synesthesia Down Under website, which contains some really useful information,
including synaesthesia-related activities in the Antipodes and links to other really great synaesthesia-related sites.
Finally, in closing this topic, one of the synaesthetes with whom I talked made some rather profound remarks as follows: "Analytical
thought and spoken language have strongly shaped our interpretation of – and relationship to – the natural world around us. I suspect that there is
another potentially richer arena of sentience that is non-verbal. I further suspect that this realm lies dormant and atrophied within 99.99% of us!"
Interesting Nuggets of Trivia
The more one reads and the more one bounces around the internet, the more amazing things one finds with regard to vision in general and color vision
in particular. As one example, consider the creatures called "brittle stars" (the "brittle" portion of their name reflects the fact that they really are
brittle, while the "star" appellation comes from their resemblance to starfish). Most brittle stars have five (or a multiple of five) slender arms
radiating from a flat central disk, where these arms may reach up to 2 feet (60 centimeters) in length on larger specimens. The point is that it has been discovered
that one member of the brittle star family – this type is officially known as Ophiocoma wendtii – is covered with hundreds of tiny lenses and associated
photo-receptors that allow these little scamps to detect approaching enemies. This means that their bodies are effectively one big eye!
The following offers a few more nuggets of interesting vision-related trivia (if you know of any others, please let us know and we'll be delighted to add
them to this paper):
- Each compound eye of the humble house fly comprises around 3,000 lenses.
- Some frogs have more than one type of rod cell in their retinas.
- The common goldfish is the only fish that can see in both infrared and ultraviolet (in addition to the visible spectrum).
- Each eye of the giant squid is around 10 inches [25 centimeters] in diameter, and the retina can contain up to a billion photoreceptors.
Last, but certainly not least, while researching the Evolution of Color Vision topic presented earlier
in this paper, I also asked research scientist Mickey P. Rowe a few questions relating to creatures with different combinations of rods
and cones. My edited versions of these questions and Mickey's answers are as follows:
Q: Are there any creatures whose retinas have only rods and no cones?
A: As far as we know at this time, the only back-boned creature on earth whose retina contains all rods and no
cones is Raja erinacea. This is the humble skate, which is a small cousin of the giant rays that have roamed the
earth's oceans for around 400 million years. (For a long time it was thought that rats did not have cones. We now know
that they have two types of cones – one of which is primarily sensitive to ultraviolet light – but they have
very few of these cones compared to other animals.)
Q: Are there any creatures whose retinas have only cones and no rods?
A: The common garter snake appears to have only cones (four types). I use the "appears" qualifier because the eyes
of snakes are generally bizarre compared to the eyes of other vertebrates. Also, there are some animals that have retinas
that are largely dominated by cones. Ground squirrels are like this. They have some rods, but not a lot. [Editor's Note: See
also the answer to the last question below.]
Q: Are there any creatures that have only one type of cone cell plus rod cells?
A: For reasons that are entirely mysterious, all whales, dolphins, seals, sea lions etc. seem to fit this bill.
They have lost the ability to make shortwave sensitive cones. The genes coding for the pigments have been sequenced, and
the S-cone genes are non-functional, so you don't have to worry about the cones being just rare... the machinery for making
them is broken. And it is broken in different ways in the different groups of animals, so it's not something that whales and
seals obtained from the same ancestor. This is particularly odd in that ocean water is enriched in short wave radiation
relative to the light that terrestrial animals are exposed to. But there you go. Also, raccoons and at least some of
their relatives have only one type of cone.
For other sorts of "intermediates," you might look into fish. Individual fish species generally live at a restricted range of depths,
and fish that live in deeper waters tend to have fewer types of cones. In the deepest lake (Lake Baikal) you can find fish
that have anywhere from one to four types of cone. The same is true of the ocean. I don't know of any such fish (aside
from the skate above) that have only rods and zero cones, but I wouldn't be surprised if someone eventually found these as
well. As per our discussions on Evolution of Color Vision, all of these animals are derived from
an ancestor that had four cone types. As they adjust to life at lower light levels they give up cone types. It's not clear
what the cost of additional cone types might be or whether it might just be that the benefits of additional cone types
are lost as you go to dimmer (and more monochromatic) illumination, and hence cone types are lost randomly. But it's
pretty consistent... deeper fish... fewer cone types.
Q: Are there any more interesting facts with regards to rods and cones?
A: Well, with regards to the above points... here's another complication for you... as you can guess from their names,
"rods" and "cones" were originally named because of their shapes. Later, when people began to recognize the differences in
function between human rods and human cones, they started using the terms to refer to function... rods are receptors that
work under low light intensities, while cones work at relatively high intensities (we're talking only about vertebrates here).
But things can become a bit problematic as some cells appear to function under different intensity regimes than did
the cells from which they were derived. In particular, there are some cells in the retinas of some geckos that appear
to have characteristics both of classical rods and of classical cones. It was back in the 1930's or 1940's that
Gordon Walls suggested that rods could (evolutionarily) become cones and vice versa. More recent evidence indicates that
Walls was right – it appears that this has happened back-and-forth in geckos. Some geckos from Madagascar appear
to have retinas with no rods, for example, but they appear to derive from animals whose retinas contained only rods.
So here you have an animal that may have no rods, but some (or all) of its photoreceptor cells derived from cells that
were rods... and the thinking goes that the rods from which these cones (that used to be rods) evolved were ancestrally
cones... you probably didn't want to know all of this... [Editor's Note: This is the only time Mickey was wrong –
I did want to know all of this – I LOVE this stuff!]
Vision and Visual Illusion Websites
There are many additional resources relating to vision available on the Internet. The following are some that the author found
to be particularly interesting (please feel free to let me know of any other items you feel should be included in this list):
1) As was noted earlier in this paper, there is a really amazing website on
Color Vision that allows you to mix different text and background
colors and then see how they would look to folks with different types of color blindness.
2) Another useful resource is the Vischeck
website. Here you will find a tool called Vischeck that simulates colorblind vision and another tool called
Daltonize that corrects images for colorblind viewers.
3) Yet another very clever tool is Visolve from the folks at Ryobi System Solutions.
This is special software that takes colors on a computer display that cannot be discriminated by people with various forms of color blindness and
transforms them into colors that can be discriminated.
4) John Sadowski's website presents a rather cunning Optical Illusion.
You place your mouse cursor to the side of a picture formed from a weird mish-mash of colors. You stare at the picture for 30 seconds and then – without
moving your eyes – you move the mouse cursor over the picture. This causes the picture to be replaced by a black-and-white image, but you see it in
full color ... until you move your eyes. This really is cool, not least because John has provided a tutorial describing how you can create your own illusions.
5) Michael Bach has put a phenomenal amount of work into creating a website that provides incredibly cool examples of
65 different Visual Illusions. It's easy to lose days of time roaming around this website,
which is obviously a labor of love for Michael. I can't recommend this site highly enough. Why not take a look and then tell me (and Michael)
what you think?
6) Some websites presenting examples of tests for color blindness are the
Ishihara Test for Color Blindness,
the Color Blindness Self-Test,
and Mike Bennett's Color Vision Test pages.
7) If you have normal vision but are interested in understanding the challenges facing a color-blind person,
check out this really Interesting Demonstration
that provides examples as to how the world appears to a person with a severe type of color blindness called deuteranopia.
See how you would do in a color blind world...
8) There are an amazing variety of different ways by which one can present different collections of colors. For example,
Bob Stein at VisiBone has developed some very innovative color charts as shown
below. My personal favorite is the Color KiloChart.
Also, you should try playing with his interactive Color Lab (just
start clicking on different colors in the chart on the upper left of the screen).

9) Last, but certainly not least, Professor Akiyoshi Kitaoka from the Department of Psychology, College of Letters, Ritsumeikan
University, Japan has created the most incredible visual illusions. If you visit his Akiyoshi's Illusion Pages
you will be amazed by what you see. These illusions are simply mind-blowing. I have never seen anything like them.
Professor Kitaoka kindly gave me permission to display two of these illusions as shown below. The first is called Rotating Turtles
and the second is called Doughnut of Rotating Snakes. Click on these images to see full-size versions that are
even more impressive (if your web browser automatically resizes images to fit your screen, make sure you use the "Expand to normal size"
option to see the full-size versions). (In addition to its motion, the Rotating Turtles image boasts a second illusion; one's knee-jerk reaction
is that adjacent "turtles" are angled in opposing directions, but if you lay a straight edge on your screen you will see that
all the "turtles" are perfectly aligned both horizontally and vertically.)
Note that these are NOT animated in any way – they are static images – the motion you see is an optical illusion
(please be careful not to look at these images too long, and stop looking immediately if you feel dizzy and/or uncomfortable).