The idea of logarithmic sensitivity is often misunderstood on CPF.
The human eye is sensitive to small changes in brightness, far smaller than factors of 2 or 10, but the exact threshold is irrelevant to whether the appropriate scale is logarithmic. The real question is how the eye and brain perceive successive changes in brightness.
Suppose that you are designing a light with a maximum output of 100 lumens, and you want it to have 6 brightness levels. To simplify, suppose you have two choices for how to set up the scaling:
Scale A (linear): 100, 80.5, 61, 41.5, 22, 2.5
Scale B (logarithmic): 100, 50, 25, 12.5, 6.25, 3.125
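A short sketch of how the two scales above are generated (function names are my own, chosen for illustration; the values match the example):

```python
def linear_levels(maximum, minimum, n):
    """n evenly spaced levels from maximum down to minimum."""
    step = (maximum - minimum) / (n - 1)
    return [maximum - k * step for k in range(n)]

def log_levels(maximum, ratio, n):
    """n levels where each is the previous one divided by a fixed ratio."""
    return [maximum / ratio**k for k in range(n)]

print(linear_levels(100, 2.5, 6))  # Scale A: [100.0, 80.5, 61.0, 41.5, 22.0, 2.5]
print(log_levels(100, 2, 6))       # Scale B: [100.0, 50.0, 25.0, 12.5, 6.25, 3.125]
```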
If you build a multi-level light using scale A, where every interval is the same 19.5 lumens, you will find that the differences in perceived brightness at the high end seem rather small. You will notice them, but the degree of dimming won't seem very useful. At the low end, the differences will be very noticeable, and the jumps at the very bottom may seem too large. (In some situations, you might wish there were a 5 lumen or 10 lumen level.)
Scale B is closer to what the Surefire U2 uses. Here, the brightness changes by a factor of 2 between successive levels. With this scale, you will find that the steps between adjacent levels are noticeable and useful at both the high and low ends of the scale.
I should point out that there is no single "correct" base for a logarithmic scale, since logarithms in different bases are inter-convertible by a constant factor. If you had more levels and a smaller percentage change between steps, the scale would still be logarithmic.
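To illustrate that point: a geometric scale with more levels between the same endpoints is still logarithmic, only the per-step ratio shrinks. This is a sketch with an illustrative helper (the name `ratio_for` is mine, not from the post):

```python
def ratio_for(maximum, minimum, n):
    """Per-step ratio so that n geometric levels run from maximum down to minimum."""
    return (maximum / minimum) ** (1 / (n - 1))

# 6 levels from 100 down to 3.125 lumens need a ratio of 2 (Scale B);
# 11 levels over the same range need a smaller ratio per step
# (about 1.414, i.e. sqrt(2)), yet the scale remains logarithmic.
print(ratio_for(100, 3.125, 6))
print(ratio_for(100, 3.125, 11))
```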