Hi there JollyRodger,
To power any two-terminal device (including light bulbs, LEDs, etc.)
you have to look at the device's characteristic voltage-current curve
in order to decide the best way to drive it. You choose to drive
the device along whichever dimension it is least sensitive to.
The reason for this lies in the fact that however you drive it, you
will end up with errors that enter the system one way or another,
and if those errors end up blowing up the device, your part will fail.
If you choose the least sensitive way, you will most likely succeed
in powering the device safely.
In general, there are two approaches to powering any two terminal device:
1. Provide a set voltage across the device.
2. Provide a set current through the device.
Although the LED has a particular characteristic voltage, this voltage is
not something that can be designed around, because for small changes in
voltage the LED current changes drastically. On the other hand, if we
provide a constant current through the device, we find that for large
changes in current the voltage changes very little. This in itself
is enough to conclude that the LED should be driven with a current, rather
than a voltage.
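The sensitivity argument above can be sketched numerically with the standard Shockley diode equation, I = Is * exp(V / (n * Vt)). The saturation current and ideality factor below are illustrative assumptions, not values from any particular LED datasheet:

```python
import math

Is = 1e-18      # saturation current (A) -- assumed, for illustration only
n = 2.0         # ideality factor -- assumed
Vt = 0.026      # thermal voltage at room temperature (V)

def led_current(v):
    """Current through the diode at forward voltage v (Shockley model)."""
    return Is * math.exp(v / (n * Vt))

def led_voltage(i):
    """Forward voltage at current i (inverse of the model above)."""
    return n * Vt * math.log(i / Is)

# A small 5% bump in drive voltage multiplies the current enormously...
v1 = 3.30
v2 = v1 * 1.05
print(led_current(v2) / led_current(v1))   # roughly a 24x current increase

# ...but a 5% bump in drive current barely moves the voltage:
i1 = 0.350
i2 = i1 * 1.05
print(led_voltage(i2) - led_voltage(i1))   # only a few millivolts
```

Whatever the exact parameters of a real LED, the exponential shape of the curve gives this same asymmetry: current drive is the forgiving dimension.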
In other words, the goal is never to provide a very accurate voltage
across the device, but rather simply to provide a certain set current
flowing through the device. If we can't achieve a set current,
then we accept any current that maxes out at the maximum rating of the
device, and any current that falls below that level, because at least
we achieve safe operation of the device, which is the primary concern.
Another way of looking at this is to think of the LED as a zener diode,
because it behaves much like that except that it also emits light.
You provide a zener diode with a current (usually a resistor
in series with a voltage source) and you get whatever voltage the zener
is rated at. The LS type LED is rated at approximately 3.3 V, so it looks
like a zener diode rated at 3.3 V. You can put various currents through it,
but you still get very close to 3.3 V, just like a zener diode.
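As a rough sketch of the zener-style biasing just described, here is the usual series-resistor calculation. The supply voltage and target current are made-up example numbers, not recommendations for any specific LED:

```python
# Set the LED current with a series resistor, just as you would bias a zener.
v_supply = 6.0    # V, example source -- assumed
v_led = 3.3       # V, approximate LED forward voltage
i_target = 0.020  # A, desired LED current -- assumed

# Ohm's law on the resistor: it drops the leftover voltage at the target current.
r = (v_supply - v_led) / i_target
print(r)  # 135 ohms

# Even if the actual forward voltage varies by +/-0.2 V,
# the current stays within a safe band around the target:
for v_f in (3.1, 3.3, 3.5):
    print((v_supply - v_f) / r)
```

The bigger the voltage dropped across the resistor, the less the LED's forward-voltage variation matters, at the cost of wasted power in the resistor.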
Another compelling reason to drive the LED with a current is that the
manufacturer's recommendations specify a certain maximum current at which
to operate the device. If you use a voltage to drive it, how do you
know what the current is? And if you don't know what the current is,
how do you know it isn't too high?
So how do we understand what happens with a zener diode in parallel with
an LED?
When we put a current through the LED, the voltage goes to whatever it wants
to go to, and we must allow this because the voltage spec isn't that tight.
The current spec, on the other hand, is very well defined. We must observe
the current spec very carefully and make sure the device gets the right current.
If we satisfy all the current specs, we automatically get the correct operation.
It's almost impossible to try to observe the voltage spec, because it's just
not there. The data sheet says something like "typically 3.6 volts"
but this is subject to very wide variation, so if we try to drive it with
a set voltage we very well might burn it up quickly.
Connecting a zener diode in parallel with an LED might have its merits,
but none of them have anything to do with efficiency. If anything,
it would be to protect the rest of the circuit if the LED became disconnected.
COUNTER POINTS
#1
If we choose the zener rating carefully so that we find that
the zener has some limiting effect on the LED current we seem to get
something that looks like proper operation.
This is true; it does work to some extent, and if it weren't for the
LED's voltage variation with temperature and the efficiency loss, we might
have a workable solution. Unfortunately, it's harder to predict the
change in current with temperature when operating from a set voltage
source than it is to simply drive the LED with a set current, even
when the voltage source (or limiter) is hand-chosen per LED.
Also, when the input voltage is high enough to overdrive the LED,
we end up with a 'shunt regulator' type of operation, which really
kills average efficiency of the whole setup.
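A quick back-of-the-envelope sketch of that shunt-mode efficiency loss. All numbers are purely illustrative assumptions; the point is that the series resistor always passes the full clamp-level current, and whatever the LED doesn't take is burned in the zener:

```python
v_in = 6.0       # V, example supply -- assumed
v_clamp = 3.6    # V, zener/LED clamp voltage -- assumed
r_series = 12.0  # ohms, chosen here for 200 mA total current

# The full current flows through the resistor whether the LED needs it or not.
i_total = (v_in - v_clamp) / r_series      # 0.2 A, always
i_led = 0.150                              # A actually used by the LED -- assumed

p_in = v_in * i_total       # power drawn from the supply
p_led = v_clamp * i_led     # power actually delivered to the LED
print(f"efficiency ~{p_led / p_in:.0%}")   # well under half
```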
In short, it's safer and more efficient to drive the LED with a current.
If you do need regulation and you don't want to use a switcher, then
the next best choice is a linear series regulator. With a small
input voltage range, you can realize very high efficiencies.
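As a rough illustration of that last point: a linear series regulator wastes only the headroom above the LED voltage, so its efficiency is roughly V_led / V_in. The input voltages below are assumed examples:

```python
v_led = 3.6   # V, nominal LED voltage -- assumed

# Efficiency of an ideal linear series regulator at a few input voltages:
# everything above v_led is dropped (and wasted) in the pass element.
effs = {v_in: v_led / v_in for v_in in (4.0, 4.5, 6.0)}
for v_in, eff in effs.items():
    print(f"Vin={v_in:.1f} V -> efficiency ~{eff:.0%}")
```

Keeping the input voltage only slightly above the LED voltage is what makes the "very high efficiencies" mentioned above possible.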
#2
If we choose the source type (batteries) such that we get a very high current
with fresh batteries, we accept any and all decrease in the life span of the
LED in exchange for ultimate simplicity. This works very well with the
small Nichia type LEDs, but doesn't appear to be acceptable with the larger
high-powered devices such as the LS's, because these devices appear to have
a stricter maximum current requirement (subject to future experimentation).
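That fresh-battery "direct drive" scenario can be sketched as follows. The batteries' internal resistance is what limits the current, so the LED sees a big overdrive when the cells are fresh and a taper as they deplete. Every value below is an assumed example, not a measurement of any particular battery or LED:

```python
v_fresh = 4.8   # V, e.g. three fresh small cells -- assumed
v_used = 4.2    # V, the same cells partly depleted -- assumed
r_int = 10.0    # ohms, total internal + wiring resistance -- assumed
v_led = 3.6     # V, nominal LED forward voltage -- assumed

# Current is set by the headroom over the LED voltage divided by
# the internal resistance; it falls as the batteries sag.
currents = {v: (v - v_led) / r_int for v in (v_fresh, v_used)}
for v_batt, i in currents.items():
    print(f"Vbatt={v_batt:.1f} V -> I={i * 1000:.0f} mA")
```

With a small 20 mA class LED, the fresh-battery figure here would be a heavy overdrive, which is exactly the lifespan trade-off described above.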
SUMMING UP
Simply put, the LED voltage will never be predictable enough that we can use
a set voltage to drive the device, or use a voltage-limiting device to achieve
proper operation without serious drawbacks. On the other hand, if we use a
set current or a range of currents, we find that we get correct operation every
time while maintaining the rather high efficiency we would expect.
Even more simply put:
It's easier to guarantee correct and efficient operation when driving with a
current than with a voltage or voltage limiter.
Good luck with your LED circuits,
Al