What color spectrum distribution of light produces the most attractive photography?

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
I recently participated in another thread where there was a good discussion about CRI inadequacy, CCT, the Kruithof curve, "full spectrum" lighting, and related concepts. I highly recommend reading this entire thread as background.

Although you can custom white balance in camera or in post processing, it's always best to get things right up front (which saves you valuable time later). I have three questions:

1. What is the ideal temperature (CCT) for light bulbs from a photography perspective?
2. A light bulb emits radiation across the entire color spectrum. What impact does the shape of that radiation curve have on how colors are shown in photographs? Natural sunlight apparently produces most of its radiation at around 480 nm, though sunlight also contains radiation at all wavelengths. If a bulb produces most of its radiation at other wavelengths, what impact does that have on photography and color rendition?
3. Do CCT and the color spectrum curve interact in any way? Or are they totally independent?
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
1. My experience is that 4000K is easiest to work with because it is reasonably close to other sources. I don't know if this makes it best. Some photographers would murder me for suggesting anything but 6000K, others 5000K.

2. Missing photons reduce saturation of those colors. I can play tricks with light to make a very normal-looking photo but put dead-pale-green cast on all the humans in it, without using makeup. RGB LEDs are especially good at showing this - Any red pigments that aren't in the main band of, say, the red, will appear nearly black. It is possible to selectively boost saturation later (By area and color) but it's an ugly fix that isn't usually needed with intelligent light selection.

3. They interact with the Color Temperature of the camera. Photometrically speaking, the CCT value comes from the spectral power output. There is a complex math equation to turn a spectral power graph (From low to high frequency, with relative or absolute outputs) into a bell curve of best fit, with its center (Peak) defining the CCT, and the goodness of fit defining the CRI. It is possible to make a light that allows better, fuller distinguishing of color than any possible 100 CRI light, but it's expensive and doesn't quite look like we expect it to.
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
1. My experience is that 4000K is easiest to work with because it is reasonably close to other sources. I don't know if this makes it best. Some photographers would murder me for suggesting anything but 6000K, others 5000K.
Yes, I just looked at an umbrella flash on Amazon that was 5500.

2. Missing photons reduce saturation of those colors. I can play tricks with light to make a very normal-looking photo but put dead-pale-green cast on all the humans in it, without using makeup. RGB LEDs are especially good at showing this - Any red pigments that aren't in the main band of, say, the red, will appear nearly black. It is possible to selectively boost saturation later (By area and color) but it's an ugly fix that isn't usually needed with intelligent light selection.
Ok, so more output on particular frequencies increases saturation at that frequency. So more at the red frequency -> all the reds look like a deeper red. So it might actually be better to have a flat spectral power distribution so all colors are equally saturated.

3. They interact with the Color Temperature of the camera. Photometrically speaking, the CCT value comes from the spectral power output. There is a complex math equation to turn a spectral power graph (From low to high frequency, with relative or absolute outputs) into a bell curve of best fit, with its center (Peak) defining the CCT, and the goodness of fit defining the CRI. It is possible to make a light that allows better, fuller distinguishing of color than any possible 100 CRI light, but it's expensive and doesn't quite look like we expect it to.
That almost makes the CCT sound like a second derivative from calculus, which I took decades ago. Does that sound like it might be the case?
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
Ok, so more output on particular frequencies increases saturation at that frequency. So more at the red frequency -> all the reds look like a deeper red. So it might actually be better to have a flat spectral power distribution so all colors are equally saturated.
Best for what? Read Here to see where the human eye is most sensitive. A working human eye compares the outputs of three filters as shown. Your ability to distinguish colors comes from that. You look at a color card and the 'blue' filter says "20%" and the green filter says "10%" and you see something between cyan and robin's egg blue. On the other hand, you might see Blue=0, Green=0, Red=5% and know you see deep, deep red. If you were color-blind, you're missing a filter and can't distinguish anything beyond the filters you have. Some people have an extra one at orange, and their color perception is strange (And they match weird colors for style). Sorry to get off topic.

So: Flat color spectrum, for what goal? To present to human eyes, probably not. To distinguish arbitrary color, maybe. To match color perceptions in other environments (Wallpaper or carpet design), no.


That almost makes the CCT sound like a second derivative from calculus, which I took decades ago. Does that sound like it might be the case?

It's more like a centroid. Look at a spectral power distribution, draw a bell-curve that fits snugly, and drop a string from the peak of that bell curve. That with some math tells you the CCT, in my understanding.

CRI is a test of fit. In other words, your flat spectral power distribution would have an awful CRI, even if it were good for distinguishing colors.
 

UnknownVT

Flashlight Enthusiast
Joined
Dec 27, 2002
Messages
3,671
I recently participated in another thread where there was a good discussion about CRI inadequacy, CCT, the Kruithof curve, "full spectrum" lighting, and related concepts. I highly recommend reading this entire thread as background.

Although you can custom white balance in camera or in post processing, it's always best to get things right up front (which saves you valuable time later). I have three questions:

I have posted a reply in the other thread - please see: post:#69 in thread CRI of White LEDs
 

SemiMan

Banned
Joined
Jan 13, 2005
Messages
3,899
I recently participated in another thread where there was a good discussion about CRI inadequacy, CCT, the Kruithof curve, "full spectrum" lighting, and related concepts. I highly recommend reading this entire thread as background.

Although you can custom white balance in camera or in post processing, it's always best to get things right up front (which saves you valuable time later). I have three questions:

1. What is the ideal temperature (CCT) for light bulbs from a photography perspective?
2. A light bulb emits radiation across the entire color spectrum. What impact does the shape of that radiation curve have on how colors are shown in photographs? Natural sunlight apparently produces most of its radiation at around 480 nm, though sunlight also contains radiation at all wavelengths. If a bulb produces most of its radiation at other wavelengths, what impact does that have on photography and color rendition?
3. Do CCT and the color spectrum curve interact in any way? Or are they totally independent?


5000-6000K sunlight-like sources provide the smoothest spectrum across the visible range and hence have the best ability to make a wide range of colors pop. That is why photographers like it.

That said some people like accurate colors and some people like higher levels of saturation, or more red, or more blue. 5000-6000K and flat gives you the most flexibility in post processing as you will have less color noise and be able to do more adjustments.

For the most part, white balancing is counteracting flat spectral deficiency in order to arrive at a consistent white point no matter the illumination. White looks "yellowish" under incandescent lighting, but take a picture of it with a camera white balanced for incan and it will not be much different from if you used flash. That said, the other colors will be different.
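SemiMan's description of white balance, scaling the channels so a neutral patch comes out neutral no matter the illuminant, can be sketched in a few lines of Python. The RGB values below are made-up numbers for a white card photographed under incandescent light, purely for illustration:

```python
# Illustrative white-balance sketch: scale R and B so a neutral patch
# matches its green channel. The RGB values are invented, not measured.
def white_balance_gains(white_patch_rgb):
    """Return per-channel gains that map the patch to neutral gray."""
    r, g, b = white_patch_rgb
    return (g / r, 1.0, g / b)  # green is the reference channel

def apply_gains(rgb, gains):
    """Scale each channel by its gain, clipping at the sensor maximum."""
    return tuple(min(255.0, c * k) for c, k in zip(rgb, gains))

warm_white = (240.0, 200.0, 140.0)   # yellowish cast under incandescent
gains = white_balance_gains(warm_white)
balanced = apply_gains(warm_white, gains)  # the card now reads neutral
```

After balancing, the card reads the same in all three channels; every other color in the frame gets shifted by the same per-channel gains, which is why non-white colors can still come out differently under different sources, as the post notes.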

Semiman
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
I have posted a reply in the other thread - please see: post:#69 in thread CRI of White LEDs
UnknownVT, thanks for joining in! I checked out some of those threads you posted. Just wanted to quickly respond and say I found this fascinating assertion in one of them:
I was at NAB in April, and noticed that solid state lighting had largely taken over the small and medium sized studio lighting market...But I didn't see any of them pitching "high CRI". I reasoned that this was due to a combination of factors: current cameras are already sampling at just three spectral points (R, G and B), and so will likely be unaffected by the cyan "ditch" or by the lack of deeper reds. They are also highly white-point flexible, and last but not least, the vast majority of footage is shot with the expectation of being heavily manipulated in post anyway, in color correction and grading (which almost completely eliminates the issues with gels etc. that were such headaches for film).
Does anyone know anything more about camera sensors and the color spectrum? I guess this goes back to the basics for light and color. Objects absorb all the frequencies of sunlight except for certain frequencies. The frequencies of light that are not absorbed and are reflected back at us are the "color" of the object as humans perceive it.

Canuke's post seems to suggest output along the entire spectral power graph is irrelevant and we just need to look at the RGB frequencies where the camera samples. First, is this true? Second, how wide are the frequency bands that are considered R, G or B? For example, this wavelength color chart suggests reds are centered around 650 nm. Does a camera only sample at 650 nm? Or it samples at 620-680 nm? I guess I would consider it a big deal if it simply ignores certain wavelengths like 600 nm. It almost seems impossible we could have good photography if this were the case. What am I or Canuke missing? How do camera sensors work? How can a camera sensor ignore entire wavelengths of color yet manage to reproduce them? Is this the difference between sensors, some receive input from the entire spectral power distribution while others only receive input at fixed RGB wavelengths? I have seen camera sensor analyses that suggest certain sensors have better "dynamic range" than others. Is this the reason?

Best for what? Read Here to see where the human eye is most sensitive. A working human eye compares the outputs of three filters as shown. Your ability to distinguish colors comes from that. You look at a color card and the 'blue' filter says "20%" and the green filter says "10%" and you see something between cyan and robin's egg blue. On the other hand, you might see Blue=0, Green=0, Red=5% and know you see deep, deep red. If you were color-blind, you're missing a filter and can't distinguish anything beyond the filters you have. Some people have an extra one at orange, and their color perception is strange (And they match weird colors for style). Sorry to get off topic.
Your comment is closely related to what I just posted. I'm not sure you're stating it the way things actually work physically. Is it possible you're describing how a camera sensor works rather than the human eye? Objects reflect light presumably from the entire spectral power distribution. The eye may have peak sensitivity at certain wavelengths, but surely we must perceive color at all of the wavelengths, isn't that what makes them the "visible spectrum"? That link you posted shows the eye perceives all of the visible spectrum. However, the way you described it was different from what was portrayed. Higher sensitivity at the peaks does not equate to classifying all colors as different ratios of RGB. Your link suggests the entire visible spectrum is perceived physically, it's just that some wavelengths are a bit more sensitively perceived than others.
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
Look at a spectral power distribution, draw a bell-curve that fits snugly, and drop a string from the peak of that bell curve. That with some math tells you the CCT, in my understanding.

CRI is a test of fit. In other words, your flat spectral power distribution would have an awful CRI, even if it were good for distinguishing colors.
Ok, that helps a lot. So CCT is essentially a measure of the color frequency which is emitted most by a particular light source. It makes sense that we would characterize bulb temperatures by the color that it emits the most of.

We call a 2700K light bulb "warm" because it is emitting more light at the yellow/amber frequencies than any other frequency. We call a 6500K light bulb "cool" because it is emitting more light at the blue frequencies.

Interesting that the CCT index is arbitrarily arranged low/yellows on the left and high/blues on the right. That's the exact opposite of the arbitrary arrangement of color wavelengths where blues are low and reds are high.
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
Cameras and human eyes distinguish color in similar ways because they both have three color responses. Blue, red, and green. Check out that eye page I linked.

A teal card will set off your eye's cones at, say, 30% blue and 50% green. Your eye interprets this signal as teal because it is part blue and part green. A camera is similar, where each pixel (in a jpg) senses magnitude of red, green, and blue. Luminance is added somehow. If your light source ONLY had those three key frequencies for a trichromatic sensor (eye, camera) then very few colors would appear.

That's the trick I can play with light. I could make light that makes red funny-looking to you, and dead black to a camera. By having no blue or green, and the wrong red, the camera sees no light (black). The same trick makes blood look dead black under a hunter's blue light. Green leaves have some blue and appear dim blue, but blood has almost no way to reflect blue light. Black blood stands out like motor oil for the hunter.

I think of three color sensors making all light as being like three directions making all of volume. If you get the x, y, and z coordinates, there it is. And with the R, G, and B coordinates (plus luminosity), there is every color you can see.
There are some color science details about quad-color coloring, or limits of pure tones, but eyes (and cameras, specially built and used to imitate eyes) use a tri-color sensor to distinguish color.

If two colors had identical RGB values but differed in another color (that isn't represented by any RGB, or it simplifies to a new RGB location), we couldn't tell them apart. Such is the case for UV-reflecting scorpions and flowers.
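The three-coordinate analogy above can be sketched numerically. The Gaussian sensitivity curves below are invented stand-ins, not real cone fundamentals; the point is only that a full spectrum collapses to three numbers:

```python
import math

# Toy trichromatic sensor. The three Gaussian sensitivity curves are
# invented stand-ins, NOT real cone fundamentals.
def gaussian(wl, center, width):
    return math.exp(-((wl - center) / width) ** 2)

def sense(spectrum):
    """Collapse a {wavelength_nm: power} spectrum into three channel responses."""
    centers = {"r": 600.0, "g": 550.0, "b": 450.0}
    return {name: sum(p * gaussian(wl, c, 40.0) for wl, p in spectrum.items())
            for name, c in centers.items()}

teal = {490: 1.0}   # a single teal-ish wavelength
resp = sense(teal)  # strongest in blue, then green, weakest in red
```

Any two spectra that collapse to the same three numbers are indistinguishable to such a sensor, which is exactly the scorpion/flower situation described above.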
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
I think you're doing the photography equivalent of mixing metaphors here. Check out this point made in this wikipedia article on RGB:

As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally—the long-wavelength cells will respond more. The difference in the response can be detected by the brain and associated with the concept that the light is orange. In this sense, the orange appearance of objects is simply the result of light from the object entering our eye and stimulating the relevant kinds of cones simultaneously but to different degrees.

Physically, the eye does not perceive several different colors in different proportions per se. Rather, whatever wavelength of light enters the cones stimulates them, and the cones more sensitive to those wavelengths are stimulated to a greater degree. The relative stimulation of the cones determines the mind's interpretation of what color it is. If the teal wavelength of light is coming into the eye, then the green-sensitive cone is stimulated the most, and the yellow and violet cones less. The human mind has precalibrated itself to interpret this precise level of stimulation to equal the color teal.

From what I can tell, a camera sensor, on the other hand, typically uses a Bayer filter arrangement where a filter layer on top of the pixels uses dyes to "pass a certain range of wavelength" through to particular pixels. Apparently, though, with these dyes, the pixels can still perceive light along a wide spectrum of wavelengths, it's just that they have a color wavelength sensitivity peak. There must be some sort of algorithm that maps every type of pattern to certain colors in a pre-calibrated color space.
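The Bayer arrangement described above can be sketched as follows. This is a toy averaging interpolation with invented numbers, not any real camera's demosaicing algorithm:

```python
# Minimal sketch of a Bayer RGGB mosaic: each sensor pixel records one
# channel, and the missing channels are interpolated from neighbors.
RGGB = [["R", "G"],
        ["G", "B"]]

def bayer_color(row, col):
    """Which channel the sensor pixel at (row, col) records under RGGB."""
    return RGGB[row % 2][col % 2]

def green_estimate(mosaic, row, col):
    """Naive green estimate: at a green site, use the value directly;
    elsewhere average the 4 neighbors, which are all green sites in an
    RGGB pattern (away from the image border)."""
    if bayer_color(row, col) == "G":
        return float(mosaic[row][col])
    neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return sum(mosaic[r][c] for r, c in neighbors) / 4.0

# Tiny raw mosaic: green sites hold 80 or 60, red/blue sites hold 0.
mosaic = [[0, 80, 0, 80],
          [60, 0, 60, 0],
          [0, 80, 0, 80],
          [60, 0, 60, 0]]
g_at_blue_site = green_estimate(mosaic, 1, 1)  # average of 80, 80, 60, 60
```

Real demosaicers use edge-aware interpolation, but the principle is the same: each site records one filtered channel and the other two are reconstructed from neighboring sites.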

I think the concept of combining primary colors to produce a final color appears to have something to do with the physical properties of light wavelengths, which are additive. A red beam of light and a green beam of light produces a yellow beam. I was a bit surprised when I read that, but I guess it's true. Not sure how or why that works, in terms of math or physics. 320 nm + 220 nm = 540 nm, for example? But anyway, the additive property exists, and I guess cameras and printers use this additive property to produce colors from the RGB primary colors too.
 

SemiMan

Banned
Joined
Jan 13, 2005
Messages
3,899
A red beam of light and a green beam of light produces a yellow beam. I was a bit surprised when I read that, but I guess it's true. Not sure how or why that works, in terms of math or physics. 320 nm + 220 nm = 540 nm,

Ahhh ... NO

Not sure where you read that, or where you interpreted that, but it is not true. Perhaps it was something about some specific physical substance or material, but no.

If what you said were true, sunlight would essentially be one wavelength, which you know it is not, and you would not be able to subsequently break light back down into a spectrum, because it would be one wavelength.

Back to stuff above, camera sensors and cones in the eye for all intents and purposes work exactly the same. However, they have much different outcome requirements and hence the tuning of those sensors is different between the eye and a camera.

The eye's color passbands are organized such that one can easily tell what color something is. Hence there is a lot of overlap between the various passbands, which allows an easy mechanism for color determination.

A camera on the other hand for the most part is designed to record data such that it can be played back in the future. If the playback mechanism matches the recording mechanism, whether a screen, print, etc. then the results will be "accurate" ... within the limits of the recording/playback system.

A Bayer filter camera has a broad response right from 400 nm to 700 nm. Each color sensor's bandwidth is relatively wide, approximately 100 nm, and there is overlap.

You cannot simply illuminate a scene at the peaks of the filters, though, as that will only indicate how well the scene reflects light at those peaks. What you are trying to do is record how well the scene reflects light across all visible wavelengths, which can of course ONLY be done if you illuminate the scene with all wavelengths.
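The point about illumination can be shown with a toy calculation (all reflectance and power numbers below are invented): a camera records reflectance times illumination per wavelength, so wavelengths absent from the source simply vanish from the record.

```python
# Sketch of the illumination argument: a camera can only record
# reflectance at wavelengths the illuminant actually contains.
# Two materials differing only at 580 nm look identical under a
# source with no 580 nm output. All numbers are illustrative.
def recorded_signal(reflectance, illuminant):
    """Per-wavelength light reaching the sensor: reflectance x illumination."""
    return {wl: reflectance.get(wl, 0.0) * power
            for wl, power in illuminant.items()}

material_a = {450: 0.2, 550: 0.5, 580: 0.9, 650: 0.3}
material_b = {450: 0.2, 550: 0.5, 580: 0.1, 650: 0.3}  # differs only at 580 nm

broadband  = {450: 1.0, 550: 1.0, 580: 1.0, 650: 1.0}
rgb_spikes = {450: 1.0, 550: 1.0, 650: 1.0}            # no output at 580 nm

under_spikes_a = recorded_signal(material_a, rgb_spikes)
under_spikes_b = recorded_signal(material_b, rgb_spikes)
```

Under the spiky source the two materials produce identical records; under the broadband source they differ, and the difference at 580 nm survives into the photograph.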

Semiman
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
I think you're doing the photography equivalent of mixing metaphors here. Check out this point made in this wikipedia article on RGB:

It turns out that, as Semiman repeated, cameras and eyes work similarly. We built them in this way. See here for more info. Most colors set off two or three of the cone types, and your brain interpolates to assume color. Color-blind people lack one of the sensors, so there are ranges of frequency wherein they cannot distinguish different frequencies as different colors.
I think the concept of combining primary colors to produce a final color appears to have something to do with the physical properties of light wavelengths, which are additive. A red beam of light and a green beam of light produces a yellow beam. I was a bit surprised when I read that, but I guess it's true. Not sure how or why that works, in terms of math or physics. 320 nm + 220 nm = 540 nm, for example? But anyway, the additive property exists, and I guess cameras and printers use this additive property to produce colors from the RGB primary colors too.
540 nm is actually green, not yellow. Our eyes combine colors into something like an average hue AND a sum of brightness, but no new frequency is created. So red and blue together look purple. I haven't combined red and green in ages, but your computer screen can do it with pixels. What's that make?


The takeaway here is:

Camera film was originally black and white, responding to variation in light.
Some clever person made photochemicals that responded appropriately to colors.
These were mixed on a single sheet. (negatives were necessary, beside my point though).
All this was clever chemists changing blends to come out right. And right is " like we see things."
Digital photography uses filters. Each pixel has one red, one blue, and two green... Much like human eyes. The electronic signal is encoded to mimic what we would see, and you get a .jpg to look at.

Cameras are intelligently designed to make nice pictures similar to human vision. They interpolate RGB to make color, but in extreme cases cameras and prints will not match human vision.

Human vision is a complex system with sensors and senses combining to create images (a face), not just pictures (a tan blob with blots).



Edit: red light and green light combined look yellow. But this isn't the same as light of an amber-type frequency. Check further in that wikipedia article. The short version is, red and green can be yellowish, but not pure yellow. We can filter white light down to those frequencies and get a purer yellow than by combining monochromatic red and green. Both of these (broad filtered source and dual monochromatic source) will look somewhat different than monochromatic amber. Think about tow truck lights.

Added colors overlap, but none of their intrinsic physical properties combine (wavelength and polarization, for photons).
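The additive mixing described here happens in display RGB coordinates, not in wavelengths. A minimal sketch:

```python
# Additive light mixing in display RGB coordinates: full red plus full
# green gives the display's yellow. This combines channel responses,
# not wavelengths; no photons near yellow's wavelength are created.
def add_light(c1, c2):
    """Clip-added RGB triples, like overlapping lights on a screen."""
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

red   = (255, 0, 0)
green = (0, 255, 0)
mix   = add_light(red, green)   # (255, 255, 0): the display's yellow
```

That (255, 255, 0) looks yellow to a trichromatic eye because it stimulates the red and green channels the way monochromatic yellow would, which is exactly the distinction the edit above draws.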
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
5000-6000K sunlight-like sources provide the smoothest spectrum across the visible range and hence have the best ability to make a wide range of colors pop. That is why photographers like it.

That said some people like accurate colors and some people like higher levels of saturation, or more red, or more blue. 5000-6000K and flat gives you the most flexibility in post processing as you will have less color noise and be able to do more adjustments.

For the most part, white balancing is counteracting flat spectral deficiency in order to arrive at a consistent white point no matter the illumination. White looks "yellowish" under incandescent lighting, but take a picture of it with a camera white balanced for incan and it will not be much different from if you used flash. That said, the other colors will be different.

Semiman
Earlier, AnAppleSnail indicated that CCT is determined by the peak frequency in the spectral power distribution. Essentially, cool CCTs emit most at the blue frequencies in their spectral power distribution, warm CCTs emit more at the red frequencies in their spectral power distribution. Natural sunlight's peak output is around 480 nm, which is essentially blue light.

Semiman can you confirm if CCT is determined by the peak frequency in the spectral power distribution? If it is, then I think the natural conclusion is the cooler CCTs are most like natural sunlight. In fact, there should be a cooler CCT that peaks at exactly 480 nm and this would be the bulb that mimics natural sunlight the closest, assuming the spectral power distribution has a similar shape. Of course this is assuming the spectral power distribution of natural sunlight peaks at 480 nm throughout the day. Would be great if there was a resource that shows how the spectral power distribution of sunlight shifts during the day, if at all, and in different seasons or weather conditions.

Edit: Found Color for Science, Art and Technology textbook on google that says morning light is 2000K while afternoon light can exceed 10,000K. Average diffuse skylight without direct sunlight is considered 6500. Indoor north sky daylight is 7500. D65 is the CIE's representation of daylight at 6500. It said the graphic arts community prefers to evaluate colors using a relatively flat spectral power distribution at 5000 and uses illuminant D50. (Note: some of this seems possibly plain wrong because I have also seen it said that the actual kelvin temperature of the sun's surface is closer to 5800, and the sun is a blackbody radiator, so I'm a bit mystified at how sunlight could arrive at the earth at 10,000K.)

Also found this which addresses AnAppleSnail's statement about the peak of the SPD determining the CCT temperature. CCT is measured using the kelvin temperature of a theoretical blackbody radiator, and "It turns out that black body radiation provides us with a set of very precise working equations that relate the temperature of an object to the light it emits. Working from the ideal and using Planck's law, we can predict the energy distribution across the spectrum for a given temperature. The total emitted power is calculated using the Stefan-Boltzmann law. The wavelength of the peak emission, and hence the color that dominates for this temperature, is provided by Wien's displacement law. Knowing the ideal case allows us to predict or calculate actual values by correcting for the imperfections of actual hot objects." So the SPD, peak wavelength and dominant color can be directly calculated and predicted at any given kelvin temperature using Wien's displacement law, but the caveat is this is true of theoretical blackbody radiators, but not for artificial lighting. So it'd be nice to find something that confirms how CCT is determined for artificial lighting where the SPD does not follow that of a blackbody radiator.
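For what it's worth, the standard answer for artificial lighting is that CCT is computed from the source's CIE chromaticity coordinates, as the temperature of the blackbody whose chromaticity lies nearest on the Planckian locus, not from the SPD peak. McCamy's well-known 1992 polynomial approximates this from CIE 1931 (x, y):

```python
# McCamy's approximation for CCT from CIE 1931 (x, y) chromaticity.
# Accurate to within a few kelvin for typical "white" sources near
# the Planckian locus.
def mccamy_cct(x, y):
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

cct_d65 = mccamy_cct(0.3127, 0.3290)  # D65 daylight white point, ~6500 K
cct_a   = mccamy_cct(0.4476, 0.4074)  # CIE illuminant A (incandescent), ~2856 K
```

The D65 and illuminant A sanity checks land near 6500 K and 2856 K respectively, which is why a spiky LED and a smooth blackbody can share a CCT despite very different SPDs.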
 

UnknownVT

Flashlight Enthusiast
Joined
Dec 27, 2002
Messages
3,671
UnknownVT, thanks for joining in! I checked out some of those threads you posted. Just wanted to quickly respond and say I found this fascinating assertion in one of them:

Does anyone know anything more about camera sensors and the color spectrum? I guess this goes back to the basics for light and color.

Cannot claim to be any authority on the subject - but because I have been facing this photography challenge for some 3 years now -
I think I may now have accumulated a fair bit of experience in this subject matter.

There is a distinct difference between a white with a continuous spectrum like daylight (around D65 or D50)
and a white that is made up of discrete RGB LEDs -
gives a spectrum like this:
[image: spectra of discrete red, yellow-green, and blue LEDs]


even if they match the Bayer sensor in our cameras.

At first sight our eyes may see it as white - but often our cameras will capture anomalies - simply because of the peaks.
a very simple example -
I would guess that a pimple/spot right at the red peak would be emphasized when compared to daylight....

There is a lot more discussion in this very long thread:
Modern LED Stage Lighting & photography problems (over at PentaxForums)

A very good illustration is from the Academy of Motion Picture Arts and Sciences

Solid State Lighting Project with very interesting videos from the symposium.

It appears that solid state lighting ie: LEDs can cause lots of problems in film and video -
even when they may appear to the eye as indistinguishable from normal lighting for the film industry.

This is a good example with flesh tones and makeup -

Makeup Case

this is a "summary" of sorts:

Summary

EDIT to ADD -
to answer the header of this thread -
the most pleasing color spectrum light for digital photography would probably be daylight with a color temperature that matches the native color temperature balance of the camera (eg: my dSLR is balanced for 5200K) -
failing that, light sources at D50, D55, or even D65 (the CIE standard daylight illuminant, and the white point of monitors and sRGB) would be good choices.

Having said that I think you may be asking about artificial light - that poses a far more difficult question -
traditionally for film photography - tungsten lighting (~2700K) was well known and films and filters were balanced for that with known and acceptable results.

However with the advent of digital photography there is more flexibility for white balance both in camera and post processing -
so more light sources became usable, including ones that may seem non-ideal.

Although LEDs look promising with the ability to produce white LEDs of varying CCT -
the Academy of Motion Picture Arts and Sciences found LEDs to be problematic -
even when by eye they were indistinguishable from other known and commonly used light sources -
here we are talking about a body where tens of millions of $$ are at stake -
so I would say this is a pretty serious and probably very rigorous study.

Please check out the links just above this edit to see for oneself how LED/solid state lighting perform (link: Makeup Case).
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
In short, you can combine two frequencies to get something that looks like an 'average' of the two frequencies. But it is not the same thing as that average frequency would be. So a pure red-green-blue trio of frequencies, with the right relative intensities, would kind of look white. But things of color would not look right under them - this would be a BAD color model to use for lighting the world. You will have most colors (Objects reflecting a range of frequencies) appear VERY dim and washed out, with the few that match the lit frequencies blazing like Day-Glo paint. In other words: You can only see reflected light. You can only reflect light that's there. And you can only get the light that comes out of the source, and no other frequencies (Barring fluorescence). So without real yellow photons, you don't have yellow...it just looks yellow. Without real red photons, it's not red (And you'll notice this mostly on skin and browns). And so on.

To a much lesser extent, we see this effect in lights with spikes or deficient areas of output. These days, most accepted artificial light sources have pretty insignificant holes. You can almost always distinguish most colors, although there are weak areas. You can still tell weak sources, as skin will look like zombie flesh or blue is indistinguishable from black. But most are so good that these differences are only noted in direct comparison.
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
I just wanted to chime in with a quickie before I hit the sack tonight and say that I feel like I understand white balance and RGB histograms so much better after what we've talked about in this thread! Yay! It's so much easier now to look at a photo, the RGB histogram and see the color balancing it needs. Who knew talking about this stuff would end up having such a practical effect?
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
5000-6000 K sunlight-like sources provide the smoothest spectrum across the visible range and hence make the widest range of colors pop. That is why photographers like them.
Now that we've established that natural blackbody radiators like the sun have a unique SPD that can be calculated at any temperature, I would mention that Bayer filter arrays (the dominant existing sensor type) have double the number of green-filtered sensors, apparently because the human eye is most sensitive to greens. I assume camera manufacturers can't be wrong on this. Which means that, assuming you can mimic the SPD of natural light, I would think a temperature whose SPD peaks in the green wavelengths (roughly 530 nm) targeted by Bayer filter arrays would be a very good spot. Just a guess.
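The green bias of a Bayer array follows directly from its repeating 2x2 RGGB tile; a minimal sketch just counting filter sites:

```python
from collections import Counter

# One repeating RGGB tile of a Bayer color filter array.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def filter_at(row, col):
    # The 2x2 tile repeats across the whole sensor.
    return BAYER_TILE[row % 2][col % 2]

# Count filter colors over a small 4x4 patch of sensor sites.
counts = Counter(filter_at(r, c) for r in range(4) for c in range(4))
print(counts)  # twice as many green sites as red or blue
```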

The sun's temperature of 5778 K gives an SPD peak at about 502 nm. No way to know if that graph showing 530 nm was drawn to scale, but using Wien's displacement law, it's easy to calculate that the temperature whose SPD peaks at 530 nm is about 5467 kelvin.
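That arithmetic is just Wien's displacement law, peak wavelength = b / T with b ≈ 2.898×10^6 nm·K; a quick sketch:

```python
WIEN_B_NM_K = 2.8978e6  # Wien's displacement constant, in nm·K

def peak_wavelength_nm(temp_k):
    """Peak wavelength of a blackbody's spectral radiance, in nm."""
    return WIEN_B_NM_K / temp_k

def blackbody_temp_k(peak_nm):
    """Temperature whose blackbody spectrum peaks at the given wavelength."""
    return WIEN_B_NM_K / peak_nm

print(round(peak_wavelength_nm(5778)))  # -> 502
print(round(blackbody_temp_k(530)))     # -> 5468
```

Small differences from the figures above (5467 vs 5468) come down to which rounding of Wien's constant you start from.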

So then is it best to target where the human eye is considered most sensitive? Do manufacturers using Bayer filter arrays have it right? I guess we need more backup on where peak eye sensitivity falls. Here are some opinions on where that wavelength lies.
 

UnknownVT

Flashlight Enthusiast
Joined
Dec 27, 2002
Messages
3,671
I would mention that Bayer filter arrays (the dominant existing sensor type) have double the number of green-filtered sensors, apparently because the human eye is most sensitive to greens. I assume camera manufacturers can't be wrong on this. Which means that, assuming you can mimic the SPD of natural light, I would think a temperature whose SPD peaks in the green wavelengths (roughly 530 nm) targeted by Bayer filter arrays would be a very good spot. Just a guess.

No, sorry -
it is not because natural daylight has more green -
it's because the human eye sensitivity peaks at green.

All this type of information is readily available, e.g. Bayer Filter @ Wikipedia
and subject of years of intensive research and study -
(there are also other digital sensor color filter arrays being used)
This is why digital cameras are as good as they are today -

There had been years of very serious use of color film
in multi-million $$ industries in both still photography and film/movies -
making digital photography even acceptable to those established industries required a lot of study/work/research before it could displace film.

So there is a lot of information and reference material readily available (on the web even) -

Please do some more looking at established references.
 

AnAppleSnail

Flashlight Enthusiast
Joined
Aug 21, 2009
Messages
4,200
Location
South Hill, VA
Now that we've established that natural blackbody radiators like the sun have a unique SPD that can be calculated at any temperature,

Sunlight is not a black-body light source. Neither is the Sun.

On Earth, Sunlight is filtered and refracted and reflected by the atmosphere (and surroundings). Each of these things makes it less like a black-body source. That's one reason camera white balance must be different for dawn, noon, twilight, and cloudy days. Your eyes can't tell sunlight from black-body light of appropriate temperature, unless those filtering and reflecting effects are quite powerful.

In space, Sunlight is not blackbody radiation either. The Sun is a real object, and doesn't quite follow the blackbody ideal of being in thermodynamic equilibrium with its environment. Some physics labs do very expensive materials work to approximate true blackbodies more closely. The Sun is pretty darn close, though, and it takes good analytical equipment to distinguish the difference. However, a spectrograph is often sufficient - certain frequencies are absorbed by gases around the star (hydrogen, etc.).
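For reference, the ideal blackbody curve itself is easy to evaluate from Planck's law; a pure-Python sketch (no atmospheric filtering, so this is the ideal that the real Sun only approximates):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T), in W * sr^-1 * m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

# Find the visible-range peak for a 5778 K blackbody, sampling every 1 nm.
nm_range = range(300, 801)
peak_nm = max(nm_range, key=lambda nm: planck_spectral_radiance(nm * 1e-9, 5778))
print(peak_nm)  # close to the ~502 nm Wien prediction
```

A real solar spectrum would show dips (absorption lines) cut out of this smooth curve.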

We can backtrack through physiology to try and perfect the light source, but the closest insights we have into what best helps us perceive color come from psychological studies. These use reasonably sized (30-ish) groups of people who are asked to evaluate several sources of light. Their responses are analyzed and some conclusion is usually drawn. This is where the Kruithof curve comes from, and such work helps dispel some myths (say, about red or turquoise light being optimal for general night-adjusted vision). But it's expensive and muddy because people rarely really know what they want. Usually the scientists have to trick them to get at the questions of interest.
 

teddoman

Newly Enlightened
Joined
Jan 14, 2013
Messages
20
Location
NYC
No, sorry -
it is not because natural daylight has more green -
it's because the human eye sensitivity peaks at green.
Point of clarification - you misread my post. I am saying the same thing as you.

Also wanted to share this article from Architectural Lighting. It says blue light (and by implication, high CCT) is key to circadian rhythms and SAD. I also noticed it is one of the few sources that does not suggest the CCT of daylight can exceed the sun's real surface temperature in kelvins. It suggests peak daylight is around 5500 K.
 