I'm confused about lumens

FlameOn
Enlightened
Joined: Dec 7, 2002
Messages: 266
I had an Arc AAA and it was rated at 10.5 lumens... this light was extremely bright up to the day it was lost (or, most likely, stolen)... now I see AAA lights like the ITP A3 that boast 80 lumens... I can't see it being 8 times brighter... what am I missing here?
 
Not a thing, that's just the march of technological progress. You could theoretically see 200+ lumens from a 1xAAA powered by a li-ion.
 
I don't know the specifics of the Arc light (what emitter it used, how old it was, etc) but 80 lumens isn't hard to do at all.

The only reason it is limited to 80 lumens in the A3 is for longer run times.

If the Arc had a modern (efficient) emitter and was limited to 10 lumens then that was done for longer run times.

At 80 lumens the A3 is supposed to run for 45 minutes or so. There is a 3-mode version with 80/20/1.5 lumens. At 20 lumens (medium mode) it runs for 4 hours.
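As a rough sanity check on that trade-off, here is a minimal sketch, assuming the cell delivers roughly a fixed number of lumen-hours regardless of level (real LEDs do a bit better at low drive currents, so the low modes usually beat this estimate):

```python
# Rough runtime scaling for the quoted A3 modes.
# Assumption: the cell delivers a roughly constant "lumen-hour" budget,
# so runtime scales inversely with output.

modes_lm = [80, 20, 1.5]          # quoted output levels
ref_lm, ref_hours = 80, 0.75      # quoted: 80 lumens for about 45 minutes
budget_lm_h = ref_lm * ref_hours  # ~60 lumen-hours from one AAA

for lm in modes_lm:
    est_hours = budget_lm_h / lm
    print(f"{lm:>5} lm -> ~{est_hours:.1f} h estimated")

# 20 lm scales to ~3 h; the quoted 4 h is plausible because LED efficacy
# improves at the lower drive current.
```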

I'm not sure if I've answered your question?
 
I had an Arc AAA and it was rated at 10.5 lumens... this light was extremely bright up to the day it was lost (or, most likely, stolen)... now I see AAA lights like the ITP A3 that boast 80 lumens... I can't see it being 8 times brighter... what am I missing here?

The ITP-A3 is 60 lumens OTF as measured by bc.
Whether or not your eyes will actually perceive a 6x lumen increase is anybody's guess.
 
Not a thing, that's just the march of technological progress. You could theoretically see 200+ lumens from a 1xAAA powered by a li-ion.

You could well see 300+ if it's an XP-G with a high bin and low Vf.
 
The light from an XP-G R5 powered by a LiIon 10440 in a Peak Eiger at a high current draw does appear at least ten times brighter than that from the (dinosaur) 5mm LED used in the Arc AAA. But because of limited battery capacity you must be willing to sacrifice runtime to get such output, and heat can quickly become a problem in lights not designed to handle it.

No doubt about it, new emitters in tiny lights with LiIons can produce an amazing "Wow" effect, and at times such performance is really handy. But if you choose, you can simply elect to have two or three or four times the output of the old Arc AAA for much longer runtimes.
 
Lumens is the total output, but how bright a light looks depends on the shape of the beam. 80 lumens of all flood may not look as bright as 10 lumens focused into a small spot. One way to compare output is to bounce the lights off a light-colored (or white) ceiling. As you turn each light off and on, look at something on the floor and you should have an easier time noticing the difference in output.
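To put a number on why a tight spot can look brighter than a big flood, here is a minimal sketch; the beam angles and the 3 m distance are picked purely for illustration:

```python
import math

def hotspot_lux(lumens, beam_angle_deg, distance_m):
    """Approximate illuminance if all the lumens were spread evenly
    over a cone of the given full angle (ignores spill and losses)."""
    half_angle = math.radians(beam_angle_deg / 2)
    solid_angle = 2 * math.pi * (1 - math.cos(half_angle))  # steradians
    return lumens / (solid_angle * distance_m ** 2)

print(hotspot_lux(80, 60, 3))   # ~10.6 lux: 80 lm spread over a 60 degree flood
print(hotspot_lux(10, 10, 3))   # ~46.5 lux: 10 lm focused into a 10 degree spot
```

Which is exactly why the ceiling-bounce trick works: it throws away the beam shape and lets you compare total output instead.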
 
The Arc uses a 5mm LED, the kind often chosen for better runtime rather than higher output.

The ITP and many similar lights use a power LED, such as a Cree XP-E. They are designed for high output above all else, and may use circuitry to offer multiple output levels in order to also boast good runtimes. It would not be surprising, though, if the ITP is overestimating its output by 30% or even more, due to the way they define their output.
 
Lumens is the total output, but how bright a light looks depends on the shape of the beam. 80 lumens of all flood may not look as bright as 10 lumens focused into a small spot. One way to compare output is to bounce the lights off a light-colored (or white) ceiling. As you turn each light off and on, look at something on the floor and you should have an easier time noticing the difference in output.
I thought it was generally agreed that flood lights appear brighter than throw/focused lights, at least at the same close-to-mid-range distances?

It's because of how large an increase in lumen output it takes before our eyes perceive a light as being that many times brighter. As a rule of thumb it takes about 4x the lumens for twice the perceived brightness.

So the 80 lumen light should look roughly three times as bright as the 10 lumen one, not eight times.

Between my SST-50, my Malkoff M60, and my DeCree XP-G, the SST-50 typically wins out, even at a distance.
 
I think he is saying: can you perceive that the light is 8x brighter? Like, if I have a 10 lumen flashlight and a 100 lumen one, does the 10 lumen one look 1/10 as bright as the 100 lumen one? I remember reading a while ago that it has something to do with logarithms. You also need to remember that lumens rated by the manufacturer are a ballpark figure. I have an "1800 lumen" x5 Q5 flashlight that actually seems a little dimmer than my P7 Mag, which would be 900 lumens if I got a perfect P7, which is highly improbable, so it's probably around 600 or 700 lumens. Moral of the story: lumen ratings don't really mean anything on their own; they're a ballpark figure and usually exaggerated.
 
The Arc uses a 5mm LED, the kind often chosen for better runtime rather than higher output.

The ITP and many similar lights use a power LED, such as a Cree XP-E. They are designed for high output above all else, and may use circuitry to offer multiple output levels in order to also boast good runtimes. It would not be surprising, though, if the ITP is overestimating its output by 30% or even more, due to the way they define their output.
The 10.5 lumen Arc uses a Nichia GS with two 0.25 mm × 0.25 mm dies: total LED area 0.125 mm².
The XRE/XPE LED die is 1 mm × 1 mm: total LED area 1 mm², or 8x the area of the Nichia. 8x more light from 8x more LED surface area is normal.

Not all LEDs are equal. LEDs range from the 3 lumens of a Nichia BS to the 2200 lumens of an SST-90.
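A quick check of that die-area arithmetic, as a minimal sketch (the 8x-light-from-8x-area conclusion assumes both emitters run at a similar current density, which is only roughly true):

```python
# Die areas quoted above
nichia_dies = 2
nichia_area = nichia_dies * 0.25 * 0.25   # mm^2 -> 0.125
xpe_area = 1.0 * 1.0                      # mm^2 -> 1.0

ratio = xpe_area / nichia_area            # 8.0
print(f"Nichia GS: {nichia_area} mm^2, XP-E: {xpe_area} mm^2, ratio {ratio:.0f}x")
# 8x the emitting area at a similar current density gives roughly 8x the lumens,
# which lines up with the ~10 lumen Arc vs the ~80 lumen class AAA lights.
```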
 
Lumens operate on something like a logarithmic scale for humans, or a power law, or something along those lines.

Basically, humans perceive brightness in such a way that it takes a big percentage increase in lumens for us to notice a difference.

For example, I have a flashlight that has 3 lumen, 30 lumen, and 200 lumen modes. Each mode appears to be about 4 times as bright as the previous one, but it does NOT appear 10 times as bright, as the lumen numbers would suggest.


This is why the 80 lumen light will only appear to be a few times brighter than the 10 lumen one.


Other factors include:
1) False specs. Some say 80 lumens, but this is based on the emitter's theoretical output, not the actual out-the-front output.
2) Throw/spill factor. A light with a really tight beam (high throw) will appear brighter than one with a more spread-out hotspot.
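Here is the "4x the lumens for twice the perceived brightness" rule of thumb from earlier in the thread as a minimal sketch; it just encodes that one rule (perceived brightness scaling with the square root of lumens), not any particular psychophysics model:

```python
def perceived_ratio(lumens_a, lumens_b, exponent=0.5):
    """Rough perceived-brightness ratio between two lights.
    exponent=0.5 encodes the '4x lumens looks 2x as bright' rule."""
    return (lumens_a / lumens_b) ** exponent

print(perceived_ratio(80, 10.5))   # ~2.8x: the 80 lm AAA vs the old 10.5 lm Arc
print(perceived_ratio(30, 3))      # ~3.2x: low to medium on the 3/30/200 lm light
print(perceived_ratio(200, 30))    # ~2.6x: medium to high, nowhere near 6.7x
```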
 
I thought it was generally agreed that flood lights appear brighter than throw/focused lights, at least at the same close-to-mid-range distances?

:thinking: Never heard that before.

The more you spread out the beam, the less bright it will appear. When I want my mag to light up something more (brighter) I focus the beam into a tighter hot spot. I don't adjust it to flood to increase the illumination. :poke:
 
:thinking: Never heard that before.

The more you spread out the beam, the less bright it will appear. When I want my mag to light up something more (brighter) I focus the beam into a tighter hot spot. I don't adjust it to flood to increase the illumination. :poke:
I can understand that with a focusable light, where the lumen output itself remains the same.

I'm just saying it's why my Malkoff M60 or DeCree XP-G looks dimmer with an optic to focus the beam, as opposed to my SST-50 with a reflector. It's because my SST-50 has nearly twice the output OTF.
 
Thanks guys... I did get that light in '04, so I guess I was ready to update anyway... :)
 
For example, I have a flashlight that has 3 lumen, 30 lumen, and 200 lumen modes. Each mode appears to be about 4 times as bright as the previous one, but it does NOT appear 10 times as bright, as the lumen numbers would suggest.

According to my personal experience, that's a very good description!

Regards, Patric
 
Lumens operate on something like a logarithmic scale for humans, or a power law, or something along those lines.

Basically, humans perceive brightness in such a way that it takes a big percentage increase in lumens for us to notice a difference.

For example, I have a flashlight that has 3 lumen, 30 lumen, and 200 lumen modes. Each mode appears to be about 4 times as bright as the previous one, but it does NOT appear 10 times as bright, as the lumen numbers would suggest.

Our vision does indeed have a logarithmic response to brightness. Astronomers have known this for a long time - they rate stars according to magnitude, a measure of the relative brightness of stars. Magnitudes are arranged in the manner the ancients liked - so a first magnitude star is noticeably brighter than a second magnitude star, which is in turn brighter than a third magnitude star. The ancient Greeks weren't especially scientific about this - the really bright stars were first magnitude, the next tier down were second magnitude, and so on.

The difference in brightness between stellar magnitudes is a factor of about 2.512 - so a first magnitude star is about 2.5x brighter than a star of the second magnitude. The magnitudes are spaced such that they are obvious to the human eye - a first magnitude star is CLEARLY brighter than a 2nd magnitude star. The factor of 2.512x means that a difference of 5 magnitudes is a difference in brightness of a factor of 100x. Folks who are VERY experienced can estimate a magnitude to about 0.1 - so if you are really practiced you can see a ~10% difference in the brightness of a star. (You can't be anywhere near this precise without another nearby star of known magnitude to use as a reference.)

Keep in mind that since I'm talking about stars it's purely a point source of light - the area illuminated by a flashlight changes this discussion a bit (makes it worse, actually - your eyes work on DIFFERENCE, so a point source is the best case for something you can see), but the general principle applies: a LARGE linear difference in the relative brightness of a light source has a much smaller apparent difference to your eyes - 5 magnitudes of stellar brightness is a factor of 100x in brightness. The sun, btw, is magnitude -26 - in apparent brightness it is about 25 billion times brighter than a very bright (magnitude 0) star.

And that tells you the reason our eyes have a logarithmic response to light - our eyes can handle an absolutely HUGE range of brightness, from starlight to sunlight.
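The magnitude arithmetic above as a minimal sketch, using the standard relation that 5 magnitudes is exactly a factor of 100 in brightness (so one magnitude is 100^(1/5) ≈ 2.512):

```python
def brightness_ratio(mag_faint, mag_bright):
    """Brightness ratio implied by a difference in astronomical magnitudes.
    Smaller (more negative) magnitude means brighter; 5 magnitudes = 100x."""
    return 100 ** ((mag_faint - mag_bright) / 5)

print(brightness_ratio(6, 1))      # 100x: a 1st vs a 6th magnitude star
print(brightness_ratio(0, -26))    # ~2.5e10: a very bright star vs the sun
print(brightness_ratio(1.0, 0.9))  # ~1.10: the ~10% step a practiced eye can judge
```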
 
Our vision does indeed have a logarithmic response to brightness. Astronomers have known this for a long time - they rate stars according to magnitude, a measure of the relative brightness of stars. Magnitudes are arranged in the manner the ancients liked - so a first magnitude star is noticeably brighter than a second magnitude star, which is in turn brighter than a third magnitude star. The ancient Greeks weren't especially scientific about this - the really bright stars were first magnitude, the next tier down were second magnitude, and so on.

The difference in brightness between stellar magnitudes is a factor of about 2.512 - so a first magnitude star is about 2.5x brighter than a star of the second magnitude. The magnitudes are spaced such that they are obvious to the human eye - a first magnitude star is CLEARLY brighter than a 2nd magnitude star. The factor of 2.512x means that a difference of 5 magnitudes is a difference in brightness of a factor of 100x. Folks who are VERY experienced can estimate a magnitude to about 0.1 - so if you are really practiced you can see a ~10% difference in the brightness of a star. (You can't be anywhere near this precise without another nearby star of known magnitude to use as a reference.)

Keep in mind that since I'm talking about stars it's purely a point source of light - the area illuminated by a flashlight changes this discussion a bit (makes it worse, actually - your eyes work on DIFFERENCE, so a point source is the best case for something you can see), but the general principle applies: a LARGE linear difference in the relative brightness of a light source has a much smaller apparent difference to your eyes - 5 magnitudes of stellar brightness is a factor of 100x in brightness. The sun, btw, is magnitude -26 - in apparent brightness it is about 25 billion times brighter than a very bright (magnitude 0) star.

And that tells you the reason our eyes have a logarithmic response to light - our eyes can handle an absolutely HUGE range of brightness, from starlight to sunlight.

As a binoholic and amateur astronomer I have thought a bit about these things. Yes, the perceived brightness difference between light sources isn't linear. But I am sure that a big contributing factor to creating the magnitude scale is the same as with the decibel scale: using a linear scale would make it harder to handle for practical use. A difference between 50 and 150 decibels is a difference of roughly 1024 times in perceived loudness (by the rule of thumb that +10 dB sounds about twice as loud), for example. The difference between the lowest sound we can notice and 150 dB (which I read can cause immediate deafness) is FAR higher.
When it comes to magnitudes, I calculated that the brightness difference between the sun on a sunny day and the faintest star a human can normally see on a clear night is around 13 thousand billion times... It is much easier to handle with the magnitude scale.
The brightness difference between the sun and the full moon is around half a million times. But before I knew this fact I never could have guesstimated it. I thought it might be 1000 times or so, but never around half a million...
This shows how difficult it is for us to estimate brightness differences, especially when the difference is very big.
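Those numbers check out against the magnitude scale. A minimal sketch, taking the commonly quoted values of about -26.7 for the sun, about +6 for the naked-eye limit, and about -12.7 for the full moon:

```python
def brightness_ratio(mag_faint, mag_bright):
    # 5 magnitudes = a factor of 100 in brightness
    return 100 ** ((mag_faint - mag_bright) / 5)

print(f"{brightness_ratio(6.0, -26.7):.1e}")    # ~1.2e13: sun vs faintest naked-eye star
print(f"{brightness_ratio(-12.7, -26.7):.1e}")  # ~4.0e5: sun vs full moon, about half a million
```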

Regards, Patric
 
Henry's article "LED Flashlight White Paper" is very informative regarding human perception of light. It can be found on the HDS Systems website here: http://www.hdssystems.com/?id=Articles

This article, and the science behind it, has saved me hundreds of dollars, as the visual difference between 100 lumens and 200 lumens is negligible. lovecpf
 
Henry's article "LED Flashlight White Paper" is very informative regarding human perception of light. It can be found on the HDS Systems website here: http://www.hdssystems.com/?id=Articles

This article, and the science behind it, has saved me hundreds of dollars, as the visual difference between 100 lumens and 200 lumens is negligible. lovecpf

I wouldn't call it negligible. The difference is greater than that between 500 lumens and 900 lumens.

Your point is well taken, however, regarding the money savings and not sweating the Q5 vs. R2 differences.
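For what it's worth, the square-root rule of thumb from earlier in the thread backs that up, as a minimal sketch with the same caveat that the exact exponent is debatable:

```python
perceived = lambda hi, lo: (hi / lo) ** 0.5  # '4x lumens looks 2x as bright'

print(perceived(200, 100))  # ~1.41x: 100 vs 200 lumens
print(perceived(900, 500))  # ~1.34x: 500 vs 900 lumens, a slightly smaller jump
```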
 