The main problem with standard RGB is that certain colors will be left out by necessity.
See this diagram for reference.
The outside edge of the CIE diagram represents pure wavelengths. Inside this curve are colors that we see when wavelengths are mixed. Any particular color that we see is a function of which particular wavelengths (components) are used, and the relative intensity of each component.
If we were to mix two components, the resulting range of possible perceived colors for that combination -- the gamut -- would be represented on this diagram by a straight line linking the wavelengths of the two components. For example, if you want to see the color range of a 635nm red source and a 505nm bluish-green source, locate these two wavelengths on the curve and draw a straight line linking them across the CIE diagram. The line shows you the set of all possible colors that can be realized with that particular pair of components.
Side note: this is how you can detect "metamerism", i.e. combinations of different components that look the same to the human eye. Simply draw the gamut lines of your component combinations; anywhere those lines intersect is a color where the human eye cannot distinguish between the two mixtures. This occurs because using a few discrete wavelengths to stand in for a whole spectrum is a sampling process, subject to all the errors inherent in sampling.
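If you want to check this numerically rather than with a ruler, here's a rough Python sketch. The chromaticity values are rounded from the standard CIE 1931 table (the 635nm and 505nm points interpolated from the 10nm entries), so treat the result as approximate:

```python
# Sketch: find where two two-component gamut lines cross on the CIE 1931
# xy diagram. Where the segments intersect, both component pairs can
# produce the same perceived color -- a metamer pair.

def intersect(p1, p2, p3, p4):
    """Intersection point of segments p1-p2 and p3-p4, or None if they miss."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:           # parallel lines never cross
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    return None

red_635   = (0.7135, 0.2865)   # ~635nm, rounded CIE 1931 xy
cyan_505  = (0.0111, 0.6443)   # ~505nm
amber_590 = (0.5752, 0.4242)   # 590nm
blue_470  = (0.1241, 0.0578)   # 470nm

match = intersect(red_635, cyan_505, amber_590, blue_470)
print(match)   # the two gamut lines cross at roughly (0.52, 0.38)
```

Both the red/cyan pair and the amber/blue pair can mix to that orange-ish chromaticity, even though the underlying spectra are completely different.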
That is also how you can determine which combinations of two wavelengths will result in "white" to the human eye: any line that crosses the midpoint "E" in the diagram includes "white" in its gamut. So, if you wanted to determine what second wavelength you need to make "white" with 635nm red, draw a line from 635nm through "E" to the other side of the curve, which it hits at about 495nm.
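Here's the same idea as a quick Python sketch: extend the red-through-E line and walk the short-wavelength side of the locus (rounded 10nm CIE 1931 values) to see where they cross. Linear interpolation on a coarse table, so expect a couple of nm of slop:

```python
# Sketch: find the wavelength complementary to ~635nm red through the
# white point "E" = (1/3, 1/3). Locus points are rounded CIE 1931 xy
# chromaticities at 10nm steps.

LOCUS = [
    (470, (0.1241, 0.0578)),
    (480, (0.0913, 0.1327)),
    (490, (0.0454, 0.2950)),
    (500, (0.0082, 0.5384)),
    (510, (0.0139, 0.7502)),
]

E = (1/3, 1/3)
red = (0.7135, 0.2865)   # ~635nm

# Extend the red->E line well past E so it must reach the locus.
far = (red[0] + 5 * (E[0] - red[0]), red[1] + 5 * (E[1] - red[1]))

def seg_cross(p1, p2, p3, p4):
    """Parameters (t, u) where segments p1-p2 and p3-p4 cross, or None."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (t, u)
    return None

complement = None
for (w1, a), (w2, b) in zip(LOCUS, LOCUS[1:]):
    hit = seg_cross(red, far, a, b)
    if hit is not None:
        t, u = hit
        complement = w1 + u * (w2 - w1)   # interpolate wavelength

print(complement)   # lands in the low-490s, close to the ~495nm eyeballed above
```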
Conversely, you can link up two standard wavelengths to see their gamut; if you link your standard 590nm amber and 470nm blue, the line does not cross "E", but gets closest to it somewhere to the "southwest" of it, which is in the pink-purple zone (And sure enough, that's the color I see when I mix those).
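A quick numeric check of that, using rounded CIE 1931 chromaticities: project "E" onto the amber-blue segment to find its closest approach.

```python
# Sketch: how close does the 590nm amber / 470nm blue gamut line get to
# "E"? The closest point on the segment is the least-saturated color
# that pair can make.

amber = (0.5752, 0.4242)   # 590nm, rounded CIE 1931 xy
blue  = (0.1241, 0.0578)   # 470nm
E = (1/3, 1/3)

dx, dy = blue[0] - amber[0], blue[1] - amber[1]
t = ((E[0] - amber[0]) * dx + (E[1] - amber[1]) * dy) / (dx * dx + dy * dy)
t = max(0.0, min(1.0, t))                     # clamp to the segment
closest = (amber[0] + t * dx, amber[1] + t * dy)
miss = ((closest[0] - E[0]) ** 2 + (closest[1] - E[1]) ** 2) ** 0.5
print(closest, miss)   # misses E by a clear margin, below it on the pinkish side
```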
Now, as you can see, using only two components comes nowhere near the possible gamut available to us (in fact, using two components approximates the color gamut of red-green colorblindness). In order to cover an area of the possible colors, we need to use three components.
The gamut of three-component systems can be visualized on the CIE diagram in the same manner -- except now, you get a triangle instead of a line segment. This is what we see in the diagram.
As you can see, a triangle is not a perfect fit; some possible colors are necessarily left out of the possible gamut.
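For anyone who wants to test specific colors against the triangle, a simple same-side (barycentric sign) test works. The sRGB primary chromaticities below are the standard published ones; the 500nm emerald point is a rounded CIE 1931 locus value:

```python
# Sketch: is a given xy chromaticity inside the sRGB gamut triangle?

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)   # standard sRGB primaries

def in_gamut(p):
    """True if chromaticity p lies inside the R-G-B triangle."""
    def cross(o, a, b):
        # z-component of (a-o) x (b-o): which side of edge o->a is b on?
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    s1, s2, s3 = cross(R, G, p), cross(G, B, p), cross(B, R, p)
    return (s1 >= 0) == (s2 >= 0) == (s3 >= 0)   # same side of all three edges

white = (1/3, 1/3)              # "E"
emerald_500 = (0.0082, 0.5384)  # pure 500nm spectral green

print(in_gamut(white), in_gamut(emerald_500))   # True False
```

White is comfortably inside; the pure spectral emerald falls outside the triangle, exactly the left-out region the diagram shows.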
Now, note that the saturation of any particular hue is a function of its distance from the white point "E". This means that for any three-component combination whose gamut surrounds and includes "E" (i.e. one intended to reproduce all possible hues), the out-of-gamut colors will always be the most saturated ones.
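As a crude illustration, here's straight-line distance from "E" used as a saturation measure (a simplification of what colorimetry calls excitation purity, but good enough to make the point), comparing the sRGB green primary against a pure spectral green:

```python
# Sketch: distance from "E" as a rough stand-in for saturation.
# Chromaticities are the standard sRGB green primary and a rounded
# CIE 1931 value for pure 520nm light.

from math import hypot

E = (1/3, 1/3)
srgb_green   = (0.30, 0.60)
spectral_520 = (0.0743, 0.8338)

def sat(p):
    """Distance from the white point -- farther out means more saturated."""
    return hypot(p[0] - E[0], p[1] - E[1])

print(sat(srgb_green), sat(spectral_520))   # the spectral green is much farther out
```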
In the case of the sRGB gamut, presented in the diagram, the colors that can't be reached are the very saturated emerald bluish-greens. (The Sharp TV adds yellow, rather than emerald green; that suggests to me that their green component may not be sRGB green, but perhaps in the 510nm zone.) For sRGB, displaying those colors represents a conundrum: if you display the most accurate possible hue of emerald, it cannot be as saturated as it looked in reality, nor as saturated relative to the other colors the display can reproduce. The only way to improve saturation -- get further away from "E" -- is to change the hue, towards green or blue.
And sure enough, as any of you who have cyan LED's or argon lasers in the 480-510nm range can attest, it can be a real bear to get beamshots that look right.
By adding a fourth component, Sharp is seeking to expand the display gamut by adding another "corner", so it can cover a larger area and thus "reach" more saturated colors further away from "E". Described another way, they are taking more "samples" in color space to increase its resolution.
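You can put a rough number on that extra area with the shoelace formula. Note that the yellow primary below is purely my guess for illustration -- Sharp hasn't published theirs, as far as I know -- placed just outside the sRGB red-green edge:

```python
# Sketch: compare the area of the three-primary sRGB triangle with a
# four-primary polygon on the xy diagram. sRGB primaries are standard;
# the yellow point is a hypothetical fourth primary.

def area(pts):
    """Shoelace formula; pts listed in order around the polygon."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
Y = (0.46, 0.53)            # hypothetical yellow primary (my assumption)

rgb  = area([R, G, B])
rgby = area([R, Y, G, B])   # vertices in counterclockwise order
print(rgb, rgby)            # the four-primary polygon covers more of the diagram
```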
Now that I've explained that, I need to tackle a few loose ends to round things out.
First: the diagram assumes completely pure wavelengths. In reality, the RGB sources we use, be they phosphors on CRT's, LED's, or filtered sources, are not in fact pure, or completely saturated. This means that they are closer to "E" than pure wavelengths; the gamut therefore shrinks inward on the diagram. (The purest practical light sources we have are lasers; this is one reason why laser TV's and projectors are a hot area of development at the moment.)
For those of you wondering why we don't simply move the green point up to the 520nm area: I suspect that this is a legacy of phosphor tech on CRT's which were dominant when the standards were defined. There may be more to the story; I understand that there are green diode lasers in development that lase around 515nm, which may address this.