Why is this? I only have 16-bit/44.1 kHz "normal" CDs, and I notice some terrible mastering jobs; e.g., RHCP's Stadium Arcadium loses the snare on the first track and is like listening to minutes of distortion, yet apparently the vinyl was mastered really well.
Is there a technical reason why the mastering is so different for the two mediums, CD versus vinyl?
This is true, to a certain extent. But it doesn't mean that masters for vinyl always have more dynamics left intact.
More often than not these days, the same compressed master is used for the vinyl. To combat the groove-jumping problem (a cut that's too hot can make the stylus skip out of the groove), the overall level is simply dropped.
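To make that concrete, here's a minimal sketch (Python with NumPy; the function name is mine) of what "simply dropping the level" amounts to: a uniform gain change, which leaves the dynamics of the compressed master exactly as they were.

```python
import numpy as np

def apply_gain_db(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a signal by a gain given in decibels (negative = quieter)."""
    return samples * 10.0 ** (gain_db / 20.0)

# e.g. knock 6 dB off a brickwalled master before cutting the lacquer;
# the crest factor (peak-to-RMS ratio) is unchanged, so no dynamics return.
master = np.random.uniform(-1.0, 1.0, 44100)  # stand-in for real audio
vinyl_premaster = apply_gain_db(master, -6.0)
```

The point is that attenuation only changes the absolute level; it doesn't undo the compression, which is why the vinyl isn't automatically more dynamic.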
I've seen it argued that multi-disc setups are where the loudness wars started.
When you can switch between CDs quickly, you start comparing how they sound. If the volume knob is left alone, the CD that is mastered louder is going to sound louder, and thus better.
Thus, people started mastering CDs for loudness.
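For what it's worth, here's a rough sketch of that effect in numbers (Python/NumPy; plain RMS is a crude stand-in for perceived loudness, since real loudness meters such as LUFS also apply frequency weighting):

```python
import numpy as np

def rms_db(samples: np.ndarray) -> float:
    """RMS level in dB relative to full scale (a crude loudness proxy)."""
    return 20.0 * np.log10(np.sqrt(np.mean(samples ** 2)))

t = np.linspace(0.0, 1.0, 44100)           # one second at CD sample rate
quiet_master = 0.3 * np.sin(2 * np.pi * 440 * t)
loud_master = 0.9 * np.sin(2 * np.pi * 440 * t)
print(rms_db(quiet_master), rms_db(loud_master))  # ~ -13.5 dB vs ~ -3.9 dB
```

At a fixed volume-knob setting, the second "master" simply plays back louder, and in a quick A/B that reads as "better".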
An alternative idea of mine is simpler. Music that's too loud is considered worse than music that's too quiet (too-quiet music sounds weak, but too-loud music sounds bad and also bothers other people). So, when you need to pick a single volume setting for your collection, you bias towards setting it lower, so the really loud discs don't become unbearable. Thus, the quieter CDs are annoying because they always sound too quiet, whilst the loud CDs sound about right, because your volume setting is much more suitable for them.
Assuming equivalent quality of CD playback equipment and LP playback equipment, LPs provide superior sound quality. LPs are analog, so what reaches your ears is undiluted sound. CDs are digital, so there are gaps in the sound, and there's the analog-to-digital conversion that must happen to record to a digital/CD format, and the digital-to-analog conversion that must happen before you can hear the sound from a speaker. Both conversions reduce fidelity.
As ever, this difference can be impossible to detect if the equipment and environment aren't of sufficient quality.
What kind of gaps? There are no gaps. By the Nyquist-Shannon sampling theorem, a sampled signal perfectly represents the original wave for all frequencies up to half the sampling rate. Analog systems are inevitably limited in their frequency response as well, so, given the same bandwidth, there would be no difference at all.
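To make the "no gaps" point concrete, here's a small sketch (Python/NumPy, my own toy example) of Whittaker-Shannon sinc reconstruction: a tone sampled well below the Nyquist limit can be evaluated exactly between two samples and matches the original waveform.

```python
import numpy as np

fs = 44100.0            # sampling rate (Hz); Nyquist limit is 22050 Hz
f = 1000.0              # test tone, well below Nyquist
n = np.arange(2048)
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t_sec: float) -> float:
    """Whittaker-Shannon interpolation: sum of samples weighted by sinc."""
    return float(np.sum(samples * np.sinc(t_sec * fs - n)))

t = 1000.5 / fs         # a point exactly halfway between two samples
print(reconstruct(t), np.sin(2 * np.pi * f * t))
# The two values agree closely; the small residual comes only from
# truncating what is, in theory, an infinite sum of sinc terms.
```

A real D/A converter does the equivalent job with a reconstruction filter rather than an explicit sinc sum, but the principle is the same: between-sample values are fully determined, not missing.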
In the real world, imperfect A/D and D/A conversions are typically still far less destructive than all the mechanical and electromagnetic sources of noise that affect analog systems. You can't consider one but not the other.
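As a back-of-envelope illustration of that comparison (my numbers, using the standard ideal-quantizer formula): the theoretical signal-to-noise ratio of N-bit uniform quantization for a full-scale sine is about 6.02 × N + 1.76 dB, which comfortably exceeds the figures typically quoted for vinyl playback (roughly 60 to 70 dB, limited by surface noise and the cutting/playback chain).

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical SNR of ideal uniform quantization of a full-scale sine."""
    return 6.02 * bits + 1.76

print(quantization_snr_db(16))  # ~98 dB for 16-bit CD audio
```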
I think you're right that recording equipment has a long way to go, though; regardless of format I think people can relatively easily distinguish real acoustical instruments from recordings.