Typical use, in my mind, is use that more closely reflects the load placed on batteries in devices like flashlights, radios, headlights, and other relatively high-current, intermittent-use devices, along with more typical charging regimens like 1A or 2A and then sitting for a day or a week before use. It doesn't have to be spot on, just CLOSER than a 0.2C discharge within an hour of a 16-hour dumb charge.
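For context, "C" expresses current as a multiple of a cell's rated capacity, so the numbers above translate directly into currents and times. The sketch below is only a rough illustration, assuming a hypothetical 2000 mAh AA NiMH cell (a figure not taken from this thread); exact numbers will vary with the cell in question.

```python
# Rough C-rate arithmetic for a hypothetical 2000 mAh AA NiMH cell.
# "xC" means a current equal to x times the rated capacity per hour.

capacity_mah = 2000  # assumed rated capacity, not from the thread


def c_rate_to_current_ma(c_rate, capacity_mah=capacity_mah):
    """Current (mA) corresponding to a given C-rate."""
    return c_rate * capacity_mah


# Industry-style rating condition mentioned above: 0.2C discharge.
discharge_ma = c_rate_to_current_ma(0.2)       # 400 mA
ideal_runtime_h = capacity_mah / discharge_ma  # ~5 hours to empty

# A 16-hour "dumb" charge is typically done at roughly 0.1C.
dumb_charge_ma = c_rate_to_current_ma(0.1)     # 200 mA

print(f"0.2C discharge = {discharge_ma:.0f} mA, about {ideal_runtime_h:.1f} h to empty")
print(f"16-hour dumb charge = roughly {dumb_charge_ma:.0f} mA (0.1C)")
```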
Radios are low-medium drain.
David is doing constant-run testing, not intermittent testing. Most people use flashlights intermittently.
Most AA-powered flashlights do NOT pull 2A or 4A.
Most people do not use 1A or 2A chargers; rather, 'most' people are using 0.2-0.8A chargers.
Perhaps your point is that not everybody uses their batteries in the same way? If that's the case, no testing regimen is more accurate than any other; all you can do is find one that more closely resembles your pattern of usage. I suppose if you only use dumb chargers for a 16-hour charge and run a TK40 with 32 AA batteries to keep the discharge down to 0.2C, the industry-standard ratings will give you the information you need. My point is that if everyone only tests batteries in one regimen, we'll never really have a good idea how they'll function in our various devices, and it's OK to say battery X sucks because it's rated as Y mAh but only delivers Z mAh at 3A.
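To put the "Z mAh at 3A" example in perspective, the sketch below (again assuming a hypothetical 2000 mAh cell, purely for illustration) just converts a 3A draw into a C-rate, which shows how far such a load sits above the 0.2C condition the rating was measured at.

```python
# Express a heavy flashlight-style draw as a C-rate for a hypothetical
# 2000 mAh cell, to show how far it is from the 0.2C rating condition.

capacity_mah = 2000    # assumed rated capacity, illustrative only
heavy_draw_ma = 3000   # the 3A figure from the post above

c_rate = heavy_draw_ma / capacity_mah   # 1.5C
ratio_vs_rating = c_rate / 0.2          # 7.5x the 0.2C rating current

print(f"3 A on a {capacity_mah} mAh cell is {c_rate:.1f}C, "
      f"or {ratio_vs_rating:.1f}x the 0.2C rating condition")
```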
Standards are made accurate or relevant by general application/use.
It is NOT OK to say "battery X sucks because it's rated as Y mAh but only delivers Z mAh at 3A"; that is exactly what is problematic.
I don't have a problem with David making up his own methodology, as Tom/Silverfox has already done, but the issue I have raised repeatedly is his using his OWN standard to declare batteries real or fake based upon his own (frankly flawed) methodology and results.