All dynamic lighting, dynamic shadows, no lightmaps. I guess it was a couple generations too early for that to work.
Maybe lightmaps are dying off now because game worlds are getting bigger and denser, and storage isn't getting faster as fast as GPUs are, so it's better to throw together something approximate and dynamic that can be rendered in 2 or 3 frames, than something perfect that can never move and has limited resolution at bake time.
I would say it's a few things at once. GPUs now offer essentially general-purpose compute, and they're typically memory-bandwidth bound rather than compute bound. So, as the paper and article briefly discuss, you can use temporal reprojection/accumulation to gather enough samples across frames (see figure 3 in the paper), and then spend the "spare" compute on filters that use the information already available locally to increase image quality, at effectively no performance cost thanks to the memory-bandwidth limitation.
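To make the accumulation idea concrete, here's a toy sketch (my own illustration, not code from the paper): each frame contributes one cheap, noisy sample, and an exponential moving average over the history converges toward the true value. Real engines also reproject the history through motion vectors and reject stale pixels; this skips all of that and just shows why averaging over frames buys you effective sample counts for free.

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    # Blend this frame's noisy sample into the running history.
    # Each frame contributes a fraction `alpha`, so variance shrinks
    # over time while the per-frame cost stays at one sample.
    return (1.0 - alpha) * history + alpha * current

rng = np.random.default_rng(0)
truth = 0.5                 # the "converged" AO value for one pixel
history = rng.random()      # arbitrary starting estimate
for _ in range(200):
    noisy_sample = truth + rng.normal(0.0, 0.2)  # one jittered sample/frame
    history = temporal_accumulate(history, noisy_sample)
print(round(history, 2))    # lands close to 0.5 despite very noisy input
```

The tradeoff is exactly what you see in that CP2077 footage: lower `alpha` means less noise but more ghosting when the scene changes.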
You can see what this accumulation looks like for CP2077's SSAO in this Gamers Nexus video: https://youtu.be/zUVhfD3jpFE?t=960 He attributes it to TAA, but to me it looks like a clear example of exactly the temporal accumulation technique the GTAO paper describes for gathering samples efficiently.
This ("free" compute due to memory bottlenecks) is true for the last-gen consoles as well as modern cards.
So rather than a storage limitation, it's more a limitation of getting the data from system memory (or even on-card GPU memory) to the GPU cores fast enough, since the cores now process it so quickly. The general increase in texture resolution and the move toward 1440p and 4K displays haven't helped with that.
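A quick back-of-the-envelope calculation (hypothetical but typical numbers) shows why bandwidth dominates: even one read of a single 4K half-float render target per frame already costs gigabytes per second, and a real deferred pipeline touches many such targets several times each.

```python
# One 4K RGBA16F render target, read once per frame at 60 fps
width, height = 3840, 2160
bytes_per_pixel = 8                 # 4 channels * 2 bytes (half float)
fps = 60

one_target = width * height * bytes_per_pixel   # ~66 MB per frame
per_second = one_target * fps / 1e9             # GB/s for that single read
print(round(per_second, 1))                     # -> 4.0 GB/s

# Multiply by G-buffer layers, history buffers, shadow maps, and
# multiple reads/writes per pixel, and you quickly approach the
# few-hundred-GB/s budget of a typical GPU memory bus.
```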
Going along with that is a general trend towards PBR (physically based rendering) in all fields that use computer graphics - film/tv, 3D modelling, design, games.
The PBR philosophy generally rejects tricks and hacks where possible. The reasons why are the same reasons PBR has been so widely (almost universally) adopted: it makes it easy for artists to create consistent, robust results using understandable methods that mostly behave the way you'd intuitively expect, and the results automatically look good and are interoperable, because there is a clear target: emulating real light transport, materials and so on.
It's simply not feasible (or at the very least extremely costly in artist/dev labour) to manually design bespoke tricks to get each individual scene to look right. It's a "let the computer do the work" type of philosophy, which Cyberpunk seems to follow, since they didn't even seem to include manually placed occlusion planes, for example (going by the original article).
So it's a combination of the hardware now being capable of it, gradual software/algorithmic improvements toward better approximations of correct lighting, and a relatively wholesale overhaul of the entire graphics pipeline toward PBR, due to its substantial benefits. Then, since PBR is inherently compatible with ray tracing, ray tracing is the cherry on top, for now still only usable on top-end hardware for realtime games.
Couldn't most textures have a lot of procedurally generated elements? That way you wouldn't need to store all the information.
Artists would just need to specify the procedural part, ie their editors would need to have that as well.
I think the way artists currently work, they already use a lot of layers, some of which are procedurally generated. People at least used to use something like Photoshop's procedurally generated "Clouds" to apply effects to a texture, like wear or staining. But in the end they "bake" the texture down to bitmaps with far fewer layers. Maybe that could instead be done on the GPU. Then the base texture becomes mostly flat and easily compressible, and the clouds can be procedurally generated, requiring no storage at all. It can still be deterministic; you just store the seed.
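As a toy illustration of the seed idea (my sketch, not any engine's actual implementation): a crude stand-in for a Photoshop-style "Clouds" layer, built by summing random grids at several scales. The only thing worth storing is the seed; the bitmap regenerates identically on demand.

```python
import numpy as np

def clouds(seed, size=64, octaves=4):
    # Deterministic "clouds" layer: sum of coarse random grids,
    # each upscaled (nearest-neighbour here; a real implementation
    # would interpolate smoothly) and halved in amplitude.
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amp, cell = 1.0, size
    for _ in range(octaves):
        grid = rng.random((size // cell + 1, size // cell + 1))
        out += amp * np.kron(grid[:-1, :-1], np.ones((cell, cell)))[:size, :size]
        amp *= 0.5
        cell //= 2
    return out / out.max()   # normalised to [0, 1]

a = clouds(seed=42)
b = clouds(seed=42)
assert np.array_equal(a, b)  # same seed -> identical texture, zero storage
```

The same determinism is why this composes with compression nicely: the flat base texture compresses well, and the high-frequency detail costs one integer.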
That's essentially what PBR game engines do. They take in base textures/resources (NB: a "texture" doesn't necessarily correspond in any straightforward way to the final image, and shouldn't be thought of as an "image" except in the mathematical sense) and apply all sorts of processes to get a good looking final result.
Not everything can be procedurally generated on the fly (at least with a small algorithm that would run quickly enough on a GPU). At a certain point just providing compressed bitmaps becomes more efficient, especially since GPUs are highly specialised to work with exactly those. Everything is a tradeoff in real time graphics.