Reworking of the lumen/lux units

Twinbee

Newly Enlightened · Joined: Aug 15, 2006 · Messages: 77
As many of you well know, the lumen and lux units are human adjusted to take into account our sensitivity to the various wavelengths.

However, perhaps what isn't taken into account so much is how the sensitivity of vision reacts when these wavelengths are mixed. For example, blue/violet is dull alone, but if it is mixed with green wavelengths, then the blue component in a sense becomes greater.

Evidence has shown that human vision responds better to white light than to monochromatic light at the same lumen level (see, for example, street lighting on Wikipedia, and how sodium lamps may be very efficient, yet lower lumen levels of white light can be more effective).

It seems to me as though blue and red in general provide details and vision (especially for blue or red objects ;) that are being biased against on the lumen scale.

Below is an image I knocked up. The black curve represents the standard luminosity function that lumens/lux use. The white line represents the "all wavelengths are equal" function (i.e. flat), and my 'proposed' function is the red line, where sensitivity tails off at the edges.

vision.png


What do other people think? Intuitively, it seems as if the lumen scale should lean at least slightly more towards this type of curve than it currently does.
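To put numbers on the three curves, here is a small sketch. The V(λ) entries are approximate CIE 1931 photopic table values, and the "flat" function is the hypothetical all-wavelengths-equal white line from the image; the proposed red curve would sit somewhere between the two.

```python
# Approximate CIE 1931 photopic luminosity function V(lambda) at a few points.
# Table values are approximate; the "flat" curve is the hypothetical
# all-wavelengths-equal weighting described above.
KM = 683.0  # peak luminous efficacy in lm/W, defined at 555 nm

V_STANDARD = {450: 0.038, 500: 0.323, 555: 1.000, 600: 0.631, 650: 0.107}

def standard_lumens(power_w, wl_nm):
    """Lumens of a monochromatic source under the standard curve."""
    return KM * V_STANDARD[wl_nm] * power_w

def flat_lumens(power_w, wl_nm):
    """Lumens if every visible wavelength were weighted equally."""
    return KM * 1.0 * power_w

for wl in sorted(V_STANDARD):
    print(wl, round(standard_lumens(0.001, wl), 3), round(flat_lumens(0.001, wl), 3))
```

Under the standard curve, a milliwatt of deep blue at 450 nm counts for roughly 4% of a milliwatt at 555 nm; the flat line would rate them equally.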
 
Even if it were the case that, for example, adding a little green light to blue/violet light made a subjective difference in brightness greater than one might expect from merely adding the lumen values of the two light sources (i.e. that some subjective nonlinearity in brightness happened when mixing light), that simply isn't something that could be represented on a simple line graph, however you draw the curve.

Once, being obliged to prepare a slide for a conference which included acknowledging some contributors who my immediate boss had fallen out with, I prepared the acknowledgement slide (recorded on film) with names in various colours (red, green, blue, yellow, magenta, cyan) on a black background, with the offending names in blue.
All colours were fully saturated (so cyan was 100% blue plus 100% green, etc.); 100% red/green/blue would give a perfect white (transparent) slide.

When projected, not only did the blue names look somewhat darker than the rest, but they were also almost completely unreadable. You could clearly see there was blue and black, but you couldn't 'see' the edges, so you couldn't read the text.

In the example you chose (blue/violet light lighting a scene badly), quite a lot could be down to the fact that the human eye is particularly bad at dealing with pure blue light, since it has many fewer 'blue' receptors than 'green' or 'red' ones, and so visual acuity in far blue light is particularly bad. Adding even a little green light would not only improve colour perception, but also make edges of objects much easier to see.

You might find that other examples (adding a little blue to green, a little green to red, etc.) wouldn't have as dramatic an effect as adding a little green or red to blue/violet. Colour rendition would improve, but there wouldn't be nearly as dramatic a gain in visual acuity.
 
The technical definitions of lux and lumens include the spectrum within which they are measured. The goal is to get a repeatable number that approximates visual brightness, rather than to exactly track the eye's (and brain's) physiological response to light. That's just too hard, especially as different people will have different eyes [get married if you're not, and you'll see what I mean. Women truly see things differently..].

No way that technical definition is going to change. The "how it looks to me" response can only be evaluated subjectively. It's probably part of the incan vs. LED light value discussion so common around here.
 
Practically speaking, there are largely monochromatic light sources, and basically white ones.

If comparing monochromatic sources, some curve of relative apparent luminance seems useful, and that curve is going to be peaky. Lumens seem a good enough measure.

If comparing basically white sources, colour temperature and CRI might well be important factors, possibly the most important factors. If comparing two sources of similar CRI and colour temp. (ie with apparently similar spectra), then it doesn't much matter what units are used for intensity, so lumens will do as well as anything.

If comparing a white source and a monochromatic one, it's likely that intensity isn't going to be the sole priority, and possibly not the major one.
 
As many of you well know, the lumen and lux units are human adjusted to take into account our sensitivity to the various wavelengths.

However, perhaps what isn't taken into account so much is how the sensitivity of vision reacts when these wavelengths are mixed. For example, blue/violet is dull alone, but if it is mixed with green wavelengths, then the blue component in a sense becomes greater.
How much blue/violet, and green are we talking about (in watts radiated)?

Adding a wider range of colors improves contrast. It's not that our photoreceptors are more sensitive to the colors when they are mixed, but that with a wider range of colors, our eyes can better distinguish differences in the light being reflected back. That has nothing to do with how bright that light is though.

There are many situations where a 100 lumen flashlight would probably be more useful than a 10,000 lumen sodium vapor light. That doesn't mean the flashlight is brighter.

Evidence has shown that human vision responds better to white light than to monochromatic light at the same lumen level (see, for example, street lighting on Wikipedia, and how sodium lamps may be very efficient, yet lower lumen levels of white light can be more effective).
If I were to light up, say, a room with a really bright low pressure sodium lamp at say 10,000 lumens (amber), or a relatively dim white LED flashlight at 100, I could probably see better with the white one. That doesn't make it brighter than the LPS, which is all the lumen scale is measuring. True, lumens aren't an end-all measure of how well one can see. But such a measure as a single unit would be impossible, especially considering how, depending on the surrounding environment, certain colors might create more helpful contrast.

Below is an image I knocked up. The black curve represents the standard luminosity function that lumens/lux use. The white line represents the "all wavelengths are equal" function (i.e. flat), and my 'proposed' function is the red line, where sensitivity tails off at the edges.

vision.png


What do other people think? Intuitively, it seems as if the lumen scale should lean at least slightly more towards this type of curve than it currently does.
You can't just redefine a quantitative function, which is based on specific experiments, on the basis of conjecture.

For example, I have next to me a red 5mW laser, and a green 5mW laser. Your curve would suggest that the green should appear only slightly dimmer, when in fact there is a staggering difference, similar to what the original function suggests.



Now there are a few special cases where the lumen scale does not apply. That is because lumens refer to photopic vision, the centre of the field of vision in good light. So in very dark environments (scotopic vision is fully dark-adapted; mesopic is a blend), or in your peripheral vision, your eyes will be more sensitive to blue light than this scale suggests.
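The laser comparison above can be checked roughly on paper. This sketch assumes typical wavelengths (532 nm for the green, 650 nm for the red; neither is stated in the posts) and approximate V(λ) table values:

```python
# Rough lumen check for the 5 mW laser comparison. Wavelengths (532 nm green,
# 650 nm red) are assumed typical values; V entries are approximate.
KM = 683.0  # lm/W at the 555 nm peak of the photopic curve
V = {532: 0.86, 650: 0.107}

def mono_lumens(power_w, wl_nm):
    """Lumens of a monochromatic source of given radiant power."""
    return KM * V[wl_nm] * power_w

green_lm = mono_lumens(0.005, 532)  # about 2.9 lm
red_lm = mono_lumens(0.005, 650)    # about 0.37 lm
print(green_lm / red_lm)            # roughly 8x, matching the "staggering difference"
```

At equal radiant power, the standard curve predicts the green beam looks around eight times brighter, which fits the observation and not a flat weighting.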
 
For example, I have next to me a red 5mW laser, and a green 5mW laser. Your curve would suggest that the green should appear only slightly dimmer, when in fact there is a staggering difference, similar to what the original function suggests.

Are you sure that's not because the green laser is more efficient than the red? If you compare red and green on monitor screens, the red is almost as bright as the green component.

The reason for my initial post was to obtain reasonable metrics for white light.

There are basically two ways we can make white light:

a: Mix a pure red wavelength, with a pure blue/violet wavelength with a pure green (like what monitor screens do).

b: Mix wavelengths throughout the entire spectrum (like what the sun does).

Using my system of lumens (the thick red curve in the diagram), the two ways of producing the same brightness of white light will have similar "my-proposed-type-of-lumen" levels. However, using ordinary lumens, types A and B will have different lumen levels at the same perceived brightness (type A will have a higher lumen level than type B because of the disproportionate weighting towards green/yellow).
 
If you compare red and green on monitor screens, the red is almost as bright as the green component.
Red, green and blue levels on a monitor will be tuned such that 100% R,G,B gives a decent white (though that doesn't mean the R,G,B levels are equal in lumens).
There are basically two ways we can make white light:
a: Mix a pure red wavelength, with a pure blue/violet wavelength with a pure green (like what monitor screens do).

b: Mix wavelengths throughout the entire spectrum (like what the sun does).
With most white LEDs, we don't use either method a) or b) - we use a narrow-spectrum blue LED and a wide spectrum yellow phosphor, and get a pretty usable white much of the time.
Then there are fluorescent lights, which tend to use multiple phosphors.
Using my system of lumens (the thick red curve in the diagram), the two ways of producing the same brightness of white light will have similar "my-proposed-type-of-lumen" levels. However, using ordinary lumens, types A and B will have different lumen levels at the same perceived brightness (type A will have a higher lumen level than type B because of the disproportionate weighting towards green/yellow).
What do you base your assertions on?
Have you experimented with 'spiky' and broad-spectrum light, and found a real deviation from the lumen function for people balancing the two types of light?
 
With most white LEDs, we don't use either method a) or b) - we use a narrow-spectrum blue LED and a wide spectrum yellow phosphor, and get a pretty usable white much of the time.
Then there are fluorescent lights, which tend to use multiple phosphors.

Yes, okay, there are those too. But our eyes respond to monochromatic yellow light by activating both the red and green cones, which is perhaps why I listed it as more 'basic'.

What do you base your assertions on?
Have you experimented with 'spiky' and broad-spectrum light, and found a real deviation from the lumen function for people balancing the two types of light?

Well, I have no figures to back up my claim, but I find it highly unlikely that, assuming a perceptually similar brightness, spikes at red, green and blue (or just yellow and blue if you like) would equal the traditional lumen level of an even wavelength distribution across the spectrum. It would surely be less, or maybe more (I take back my earlier claim that the R+G+B monochromatic wavelengths necessarily give the higher lumen level). I could be wrong, but it would have to be quite a coincidence.

I'll try and dig up some numbers.
 
Isn't the point of a perceptually derived (eye-pigment-absorption) system like the lumen that it is based on how the eye absorbs light, and that within the absorption spectrum of a given receptor, spiky and smooth spectra of similar lumen rating should give similar subjective brightness?
 
Are you sure that's not because the green laser is more efficient than the red? If you compare red and green on monitor screens, the red is almost as bright as the green component.
Efficiency is irrelevant -- 5mW is the output power in both cases. (In fact, 5mW green lasers are far LESS efficient than red lasers.) Lumens are a unit weighting optical output power by apparent brightness.

In the case of the monitor, power levels of each of the colors are intentionally calibrated so that they appear the same brightness/produce the same number of lumens. They are not inherently equal in the amount of power they are radiating.

There are basically two ways we can make white light:

a: Mix a pure red wavelength, with a pure blue/violet wavelength with a pure green (like what monitor screens do).

b: Mix wavelengths throughout the entire spectrum (like what the sun does).

Using my system of lumens (the thick red curve in the diagram), the two ways of producing the same brightness of white light will have similar "my-proposed-type-of-lumen" levels. However, using ordinary lumens, types A and B will have different lumen levels at the same perceived brightness
lumen levels = perceived brightness, except in the exceptions I mentioned (peripheral vision, and extremely dark conditions)

(type A will have a higher lumen level than type B because of the disproportionate weighting towards green/yellow).
This is true. RGB white light sources can theoretically approach higher lumens/watt efficiencies than broad light sources for this reason. If you were to compare a 1watt-output RGB source, to a 1watt-output broad source at the same color temperature the RGB will appear brighter. The broad source will offer better color rendition, but that's a separate issue.
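A back-of-envelope version of this lumens-per-radiant-watt comparison (the spike wavelengths and V(λ) entries below are illustrative approximations, not taken from the posts) shows why spiky RGB sources can come out ahead, and also how sensitive the result is to where the spikes sit:

```python
# Lumens per radiant watt: three RGB spikes vs the same power spread evenly.
# All V(lambda) entries are approximate CIE table values; the spike
# wavelengths are illustrative choices.
KM = 683.0
V = {450: 0.038, 460: 0.060, 500: 0.323, 550: 0.995, 600: 0.631,
     630: 0.265, 650: 0.107}

def lumens(allocation_w):
    """allocation_w maps wavelength (nm) -> radiant power (W)."""
    return KM * sum(V[wl] * w for wl, w in allocation_w.items())

rgb = lumens({460: 1/3, 550: 1/3, 630: 1/3})                   # spiky white
broad = lumens({wl: 1/5 for wl in (450, 500, 550, 600, 650)})  # even spread
print(rgb, broad)  # with these spike choices, the RGB source comes out ahead
```

Move the red spike out to 650 nm, though, and the ordering flips, so "RGB is more efficient" holds only when the spikes stay clear of the deep-red and deep-blue tails.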
 
This is true. RGB white light sources can theoretically approach higher lumens/watt efficiencies than broad light sources for this reason. If you were to compare a 1watt-output RGB source, to a 1watt-output broad source at the same color temperature the RGB will appear brighter. The broad source will offer better color rendition, but that's a separate issue.
But Twinbee is talking lumens, not Watts, and is asserting that an N lumen wide-spectrum source has inherently different luminance than an N lumen RGB source.
 
But Twinbee is talking lumens, not Watts, and is asserting that an N lumen wide-spectrum source has inherently different luminance than an N lumen RGB source.
The definition of lumens is based on watts. Lumens are the radiant power (in watts) at each wavelength, multiplied by a sensitivity factor for that wavelength (e.g. 683 lm/W at 555nm), summed over the spectrum.

Twinbee's argument is that the sensitivity curve should be adjusted to a "flat" frequency response for all colors between 700 and 400nm, rather than having a sharp sensitivity "peak" around 555nm. My point in bringing up the lasers, which are devices of known power and wavelength, was to show that the way humans perceive light in that case agrees with the original curve, not the proposed one.
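The definition described above can be written out numerically: luminous flux is 683 lm/W times the integral of V(λ) against spectral radiant power. A coarse sketch (50 nm sampling, approximate V values):

```python
# Numerical form of the lumen definition: 683 * integral of V(lambda) times
# spectral radiant power. Sampling here is a coarse 50 nm grid with
# approximate V(lambda) table values.
KM = 683.0
V = {450: 0.038, 500: 0.323, 550: 0.995, 600: 0.631, 650: 0.107}

def luminous_flux(spectrum_w_per_nm, step_nm=50):
    """spectrum maps wavelength (nm) -> spectral radiant power (W/nm)."""
    return KM * sum(V[wl] * p * step_nm for wl, p in spectrum_w_per_nm.items())

# 1 W of radiant power spread evenly: 0.004 W/nm * 50 nm * 5 bands = 1 W
flat_watt = {wl: 0.004 for wl in V}
print(luminous_flux(flat_watt))  # a few hundred lumens per radiant watt
```

The same function applied to a single 555 nm spike of 1 W gives the theoretical maximum of 683 lm, which is why concentrating power near the peak raises the lumen count.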
 
The definition of lumens is based on watts. If units weren't defined, talking about mixing "amounts" of different colors would have no meaning at all.
But (especially for an RGB source), watts are not really useful units, since depending on what R,G,B sources were chosen, the lumen value could vary greatly for a given total power.
It's still the case that Twinbee is saying that 'spiky lumens' should differ from 'smooth lumens' in apparent brightness, and that avoids any issue of the efficiency of light sources.

Twinbee's argument is that the sensitivity curve should be adjusted to have a "flat" frequency response for all colors between 700 and 400nm, rather than having a sharp "peak" around 555nm.
Indeed.
And I'm still trying to work out what the basis for the suggestion is, beyond the examples of:
a) Blue light with added green being better (brighter?) than just blue light, which could have various explanations quite independent of anything to do with lumens.
b) White light sometimes being more 'effective' than monochromatic light (and blue and red light being useful for blue and red objects), which I doubt many people would disagree with, but which really relates to CRI-like issues rather than apparent brightness.
 
I'm not an expert on this or anything, but surely when you stop limiting the light to just one type of receptor in the eye, you will be able to see better?

Then again, that would mean that colours like yellow, cyan and purple are the colours we can see best with... like having two sets of receptors in the eye contribute to the image, rather than just one...
 
I'm not an expert on this or anything, but surely when you stop limiting the light to just one type of receptor in the eye, you will be able to see better?

Then again, that would mean that colours like yellow, cyan and purple are the colours we can see best with... like having two sets of receptors in the eye contribute to the image, rather than just one...
You are also correct. The following are the response curves of the three types of cones, S (short) M (medium) and L (long):

Cones_SMJ2_E.svg


Unfortunately the curves on this graph are normalized, but in reality the M and L responses are each more dominant than S, at least for the centre of the field of vision. 555nm, the sensitivity peak, sits between the M and L peaks, so it stimulates both. The luminosity function shown earlier is a weighted sum of these responses.

As you can see, the region between the M and S peaks isn't as strong, as the M response drops off sharply at wavelengths shorter than 500nm.

Note: the rod cells, which only take effect during night vision, are weighted more toward the short wavelengths. The lumen scale does not apply to these.
 
Then again, that would mean that colours like yellow, cyan and purple are the colours we can see best with... like having two sets of receptors in the eye contribute to the image, rather than just one...
As the graphs show, over much of the spectrum, both the M and L receptors are active. Much colour perception comes about by taking some very subtle differences between signals from different receptors - just because something looks 'green' doesn't mean that the L receptor isn't being activated, just that the M receptor is somewhat more active.

There's also some odd perceptual stuff happening with colours.
Though red, green and blue are primary colours, yellow also feels like a primary colour, in the sense that yellow in no way looks like it's made up from redness and greenness, unlike cyan and magenta, which do look to be made up from green-and-blue and red-and-blue, respectively.
 
There's also some odd perceptual stuff happening with colours. Though red, green and blue are primary colours, yellow also feels like a primary colour, in the sense that yellow in no way looks like it's made up from redness and greenness, unlike cyan and magenta, which do look to be made up from green-and-blue and red-and-blue, respectively.

Not only that, but monochromatic yellow light isn't physically the same as a mixture of red and green light. The human eye will see them as equivalent on a white background, but reflections from colored surfaces can be different for monochromatic yellow versus a red-green mixture.
 