When the subject is photography, Ken Rockwell is the #1 Internet celebrity. He's known, loved and hated for his opinionated, hyperbolic and reductionist essays. Personally, I like his texts and love his motto:
Do you need XYZ? If you have to ask, the answer is 'no'.
And his most polemical article is certainly the RAW versus JPEG discussion. Ken Rockwell claims to shoot JPEG only. To add insult to injury, he even advocates using low-resolution, high-compression settings!
But is he right?
The "easiest" argument in favor of RAW is that every RAW pixel has 12-14 bits of information (4096-16384 distinct values), while JPEG is 8 bits per pixel and color channel (just 256 values). This argument has two weak points.
First off, every RAW pixel is monochromatic, while JPEG pixels have three color channels, making 24 bits per pixel (3x8). That is, JPEG has 10-12 bits per pixel to spare in relation to RAW! When the Bayer "mosaic" is converted to a picture, every JPEG pixel inherits information from the neighboring RAW pixels. The superior dynamic range of a RAW pixel is only fully realizable in monochromatic pictures.
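To see what that inheritance looks like in practice, here is a minimal sketch of bilinear demosaicing for an RGGB mosaic, assuming NumPy and SciPy are available. Real converters use far more sophisticated interpolation, but the principle is the same: each output pixel fills in its missing channels from the surrounding photosites.

    import numpy as np
    from scipy.signal import convolve2d  # assumption: SciPy is available

    def demosaic_bilinear(mosaic):
        # Naive demosaic of an RGGB Bayer mosaic, for illustration only.
        # Each output pixel fills its missing channels by averaging the nearby
        # photosites that actually sampled that color -- which is how a JPEG
        # pixel "inherits" information from neighboring RAW pixels.
        h, w = mosaic.shape
        r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        kernel = np.ones((3, 3))
        rgb = np.zeros((h, w, 3))
        for channel, mask in enumerate((r_mask, g_mask, b_mask)):
            known = np.where(mask, mosaic, 0.0)
            total = convolve2d(known, kernel, mode="same")
            count = convolve2d(mask.astype(float), kernel, mode="same")
            rgb[..., channel] = total / np.maximum(count, 1e-9)
        return rgb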
Another, more important, weak point is that RAW uses a linear scale, while JPEG normally uses a color profile that implies a logarithmic curve, more often called "gamma-corrected". A log scale of 8 bits is perceptually as good as 12 or 13 linear bits, in audio as well as in imaging. This puts JPEG and RAW on a similar level, at least until 16-bit sensors hit the market.
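A rough way to convince yourself of this is to quantize the same linear luminances both ways and compare the error in the deep shadows, where the eye is most sensitive. The sketch below uses a pure 2.2 power law as a stand-in for a real sRGB curve; the numbers are illustrative only.

    import numpy as np

    # Quantize the same linear luminances with 8-bit gamma coding (a pure 2.2
    # power law, standing in for a real sRGB curve), with 12-bit linear coding,
    # and with plain 8-bit linear coding for contrast.
    gamma = 2.2
    lum = np.linspace(0.005, 1.0, 100_000)          # linear scene luminance

    dec8g = (np.round(lum ** (1 / gamma) * 255) / 255) ** gamma  # 8-bit gamma
    dec12 = np.round(lum * 4095) / 4095                          # 12-bit linear
    dec8l = np.round(lum * 255) / 255                            # 8-bit linear

    shadows = lum < 0.02                             # deep-shadow tones
    for name, dec in (("8-bit gamma ", dec8g), ("12-bit linear", dec12),
                      ("8-bit linear", dec8l)):
        worst = np.max(np.abs(dec[shadows] - lum[shadows]) / lum[shadows])
        print(f"{name}: worst relative shadow error = {worst:.3f}")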
JPEG makes use of lossy compression, but it is possible to modulate the compression level, and in "JPEG fine" mode the losses are negligible. Some RAW formats also do lossy compression. So the compression argument often touted against JPEG does not hold either; it is even weaker than the bit-depth issue.
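If you want to see how little a "fine" setting actually loses, a quick round-trip test like the one below does the job. It assumes Pillow is installed, and the test file name is hypothetical.

    import io
    import numpy as np
    from PIL import Image  # assumption: Pillow is installed

    def jpeg_round_trip(img, quality):
        # Save as JPEG at the given quality, reload, and measure the damage.
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        size_kb = buf.tell() / 1024
        buf.seek(0)
        decoded = np.asarray(Image.open(buf), dtype=np.float64)
        original = np.asarray(img.convert("RGB"), dtype=np.float64)
        rmse = np.sqrt(np.mean((original - decoded) ** 2))   # on the 0-255 scale
        return size_kb, rmse

    photo = Image.open("test_shot.tif")                      # hypothetical test file
    for q in (95, 75, 50):
        size_kb, rmse = jpeg_round_trip(photo, q)
        print(f"quality={q}: {size_kb:.0f} kB, RMSE={rmse:.2f} / 255")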
An even worse derivative argument: "When a JPEG is altered and saved many times, the compression degradation accumulates." This happens only if the photographer is stupid enough to save the new JPEG over the original. Admittedly, this problem cannot happen with RAW, because RAW software writes the manipulations to a separate file. The point is, preserving the "digital negative" depends on the workflow; it is not a problem inherent to the format.
Another weak argument against JPEG is the alleged impossibility of color correction. Color temperature adjustments are very subtle; they are perfectly absorbed by JPEG's 24 bits per pixel.
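As an illustration, a color-temperature tweak boils down to small per-channel gains, applied here in approximately linear light. The gains, gamma value and file names below are hypothetical, and real converters use better color models, but it shows how little of the 8-bit range such a correction consumes.

    import numpy as np
    from PIL import Image  # assumption: Pillow is installed

    def warm_up(src, dst, r_gain=1.06, b_gain=0.94, gamma=2.2):
        # Apply a subtle color-temperature tweak to an 8-bit JPEG.  The gains
        # are hypothetical; the point is that a shift this small moves each
        # channel by only a few of its 256 levels.
        rgb = np.asarray(Image.open(src).convert("RGB"), dtype=np.float64) / 255
        linear = rgb ** gamma                        # undo gamma, approximately
        linear[..., 0] *= r_gain                     # warm the reds a touch
        linear[..., 2] *= b_gain                     # pull back the blues
        out = np.clip(linear, 0.0, 1.0) ** (1 / gamma)
        Image.fromarray((out * 255 + 0.5).astype(np.uint8)).save(dst, quality=95)

    warm_up("original.jpg", "corrected.jpg")         # hypothetical file names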
The RAW format depends on software processing to become a visible picture, on screen or on paper. And then the dilemma begins: which software to use? There are many, many options, from free to very expensive ones.
And the choice must be well-made. It is not uncommon that the camera-generated JPEG looks better than the software-generated image.
As a general rule, the best bet is to use the manufacturer's RAW converter, at least in the first step: convert RAW to TIFF and then manipulate the TIFF in Photoshop. Each camera has "n" proprietary and/or undocumented features in its RAW, and even the finest generic software won't support every such feature.
Camera manufacturers are not software enterprises; they don't employ the best practices when developing software. They make many mistakes. They use proprietary and ever-changing RAW formats — different for every camera, so brand-new models go unsupported for months. They release crappy software — slow, buggy and with terrible usability. They offer weak software intending to charge a lot for the "premium" version...
This is less of a problem nowadays than it was in 2008 when Ken Rockwell wrote the famous article. Still, RAW processing is an unpleasant distraction and photographers not that acquainted with computers must suffer, I think.
There are literally hundreds of RAW formats, with per-manufacturer and per-camera variations. This creates an enormous problem: nobody guarantees that these RAWs will still be readable in the future, say 40 years from now.
It is true that, nowadays, this problem is mitigated in two ways. First, it is possible to convert to DNG, Adobe's open and public RAW format. Some cameras (e.g. Leica) have even adopted DNG as their native RAW format. Second, some RAW conversion software is free software, and its open source code serves as a kind of public documentation of the RAW formats.
The RAW format is basically the unprocessed data read from the camera's sensor. In theory, taking pictures in RAW format avoids an expensive processing step within the camera. But in practice, cameras are extremely optimized for JPEG processing.
The great bottleneck is actually writing data onto the memory card. Since RAW files are much bigger, they take longer to write, which translates to slowness and/or a faster-filling buffer during burst shots.
Compounding this disadvantage, most cameras can only save RAWs at the sensor's native resolution, which is often enormous these days (24MP APS-C, 36MP full-frame, there is even a cell phone with a 41MP sensor!). On the other hand, we can always choose the JPEG resolution, and 6MP pictures are good enough most of the time. By saving smaller JPEGs, burst shots get extra breathing room.
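A back-of-the-envelope calculation makes the difference concrete. Every number below is an assumption chosen for illustration, not any camera's spec sheet.

    # All numbers are assumptions for illustration, not any camera's specs.
    sensor_mp = 24                               # 24MP APS-C sensor
    raw_mb = sensor_mp * 1e6 * 14 / 8 / 1e6      # uncompressed 14-bit RAW: ~42 MB
    jpeg_mb = 2.5                                # a fine-quality 6MP JPEG, roughly

    card_mb_s = 40                               # sustained write speed of a middling card
    buffer_mb = 200                              # hypothetical in-camera buffer

    for name, size in (("RAW ", raw_mb), ("JPEG", jpeg_mb)):
        print(f"{name}: ~{size:.1f} MB per shot, "
              f"{card_mb_s / size:.1f} frames/s sustained, "
              f"buffer holds ~{buffer_mb / size:.0f} shots")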
Some cameras do save reduced-resolution RAWs to mitigate this particular problem, but only the most expensive ones can do that. Such cameras also have huge burst shot buffers, they accept fast CompactFlash cards... they get all the nicest features. Manufacturers need to do their sales pitch for the high-end models, after all.
Compressed RAW formats are another way to mitigate this problem but they also discard some information.
Assuming that the picture was well-exposed, JPEG has the big advantage of emerging "ready" from the camera, in the desired resolution and quality, packed in a small file. This facilitates the immediate use of the picture, as well as sending it over the Internet. (There are even WiFi-enabled SD cards that allow live access to the picture files, and sending them to the cloud, without touching the camera.)
Up to this point, the balance is in favor of JPEG. After all, are there any advantages in RAW?
One argument in favor of RAW that is strong and valid: if the camera has crappy firmware and/or there is an expectation of improvement in rendering or noise-reduction algorithms, a RAW picture can profit from these advances in technology.
A JPEG picture is tied to yesterday's status quo. For example, in 2008 we didn't have powerful noise-reduction software like Noise Ninja. It is possible to apply such techniques to a JPEG image (or to any other format), but it is not as effective as on RAW.
We said earlier that a gamma-corrected 8-bit scale is equivalent to 13 linear bits. This is only true when the signal is well-conditioned, like audio at the correct volume, or a well-exposed picture. When the signal is too strong or too weak, "fixing" it will lose some bits.
Suppose a heavy manipulation that "eats" 4 bits. A JPEG with 8 bits per color will lose half of its information, while a 12-bit RAW is still left with 8 bits, enough to fill the final JPEG's dynamic range. (Professional audio processing uses 24 bits for the same reason: to have 16 bits of dynamic range left in the final result, in order to completely fill the capacity of CD and MP3 formats.)
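The effect is easy to simulate: confine a signal to the bottom four stops of the range and count how many distinct code values each format still has available for a +4-stop push. The gamma curve softens the blow compared with the naive bit count, but the 12-bit RAW still has several times more tonal levels to stretch.

    import numpy as np

    # A signal confined to the lowest 1/16 of the linear range, as if
    # underexposed by 4 stops and about to be pushed back up.
    gamma = 2.2
    linear = np.linspace(0.0, 1.0 / 16, 200_000)

    jpeg_codes = np.unique(np.round(linear ** (1 / gamma) * 255))  # 8-bit, gamma-encoded
    raw_codes = np.unique(np.round(linear * 4095))                 # 12-bit, linear

    print("8-bit JPEG distinct levels:", len(jpeg_codes))   # about 70
    print("12-bit RAW distinct levels:", len(raw_codes))    # about 256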
Situations in which heavy picture manipulation would be necessary:
In a very quick-and-dirty test I did, JPEG images accepted "invisible" corrections of up to one stop only, while RAW accepted two stops. "Usable" corrections in JPEG went up to two stops, while RAW accepted up to four stops. A stop is twice (or half) the light, and it is equivalent to one linear bit.
If the camera is correctly configured, it is unlikely to shoot a bad picture. If anything goes wrong, the LCD review will show it. This instant feedback reduces the need for major exposure corrections to save the picture, so this particular RAW advantage is not exercised very often. In any case, to guarantee that some unique, unrepeatable picture is usable, shooting RAW is indeed a good measure.
In my opinion, the biggest issue of shooting JPEG is the irreversible sharpening that is applied onto the image.
Sharpening is an artificial increase in sharpness, or perceived resolution. The conversion from Bayer mosaic to the final image always robs sharpness; for example, a 20MP Bayer sensor delivers roughly 12MP of effective resolution. To mask this loss and improve overall subjective quality, JPEGs are processed by a sharpening algorithm within the camera, so they look sharper.
This procedure is not a cheat. It is necessary to get images of good quality.
Even the (seemingly trivial) act of resizing an image on a computer needs sharpening to deliver a visually acceptable result. Images shrunk by different software really do look different! Using the best available software is important, even for a simple resize. (Interestingly, Mac's Preview delivers excellent results, and it comes for free if you have a Mac.)
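A typical shrink-then-sharpen step can be sketched with Pillow; the filter parameters and file names below are just hypothetical starting points, not a recipe.

    from PIL import Image, ImageFilter  # assumption: Pillow is installed

    def shrink_and_sharpen(src, dst, width=1600):
        img = Image.open(src)
        height = round(img.height * width / img.width)
        small = img.resize((width, height), Image.LANCZOS)   # high-quality resample
        # A gentle unsharp mask restores the apparent crispness lost in the
        # resize; radius/percent/threshold are just starting points to tune.
        sharp = small.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=2))
        sharp.save(dst, quality=92)

    shrink_and_sharpen("full_size.jpg", "web_size.jpg")      # hypothetical file names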
The problem is, while it is easy to adjust color temperature or correct a small exposure error, sharpening is irreversible. Normally the camera does a good job, but some pictures might end up under- or over-sharpened.
Shooting RAW avoids these problems, because the sharpening is carried out in the computer, so the photographer can choose among many algorithms and settings.
Hand in hand with sharpening, there is noise reduction. Both algorithms need to be considered as a whole, because they hinder each other when applied separately (sharpening exaggerates noise, and noise reduction robs sharpness).
High-ISO pictures are RAW territory, given the degree of adjustment (often with manual touch-ups) needed to get acceptable results. Noise also interacts badly with JPEG compression, since random information is not compressible; this translates to bigger files than usual and/or compression artifacts.
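On the computer, the usual compromise is to denoise first and then sharpen more gently, with both steps tuned together. A minimal Pillow sketch, with hypothetical parameters and file names:

    from PIL import Image, ImageFilter  # assumption: Pillow is installed

    img = Image.open("high_iso_shot.jpg")                    # hypothetical noisy file
    # Denoise first, then sharpen gently: sharpening raw noise amplifies it,
    # and denoising an already-sharpened image eats the detail just gained.
    denoised = img.filter(ImageFilter.MedianFilter(size=3))
    result = denoised.filter(ImageFilter.UnsharpMask(radius=1.2, percent=60, threshold=3))
    result.save("high_iso_cleaned.jpg", quality=92)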
Every time I have gone back from a JPEG to the original RAW, it was because I wanted to manipulate the sharpness. That is the major reason I shoot RAW: to have more freedom to manipulate sharpness, which matters even more if the picture is intended to be B&W.