It helps to give some context to 90s game coding by looking at its predecessors. On the earliest, most RAM-starved systems you couldn't, for the most part, afford memory-intensive algorithms. The game state was correspondingly simple: typically some global variables describing a fixed number of slots for player and NPC data, with the bulk of the interesting stuff actually being static (graphical assets and behaviors stored in lookup tables) and often compressed (tilemap data would use large meta-tile chunks or be composited from premade shapes, and was often streamed off ROM on cartridge systems). Those approaches, plus tight assembly coding, get you to something like Mario 3: lots of different visuals and behaviors, but not all of them at the same time, and not in a generalized-algorithm sense.
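The meta-tile trick can be sketched in a few lines. This is an illustrative layout, not any specific game's format: each stored byte names a 2x2 block of raw tile indices, so the level data in ROM is a quarter the size of the expanded tilemap.

```python
# Sketch of meta-tile decompression, cartridge-era style.
# META_TILES stands in for a static table baked into ROM;
# the indices and block contents here are made up for illustration.

META_TILES = {
    0: [[1, 1],
        [1, 1]],   # e.g. solid ground
    1: [[2, 3],
        [4, 5]],   # e.g. a decorated block
}

def expand_row(meta_row):
    """Expand one row of meta-tile indices into two rows of raw tile indices."""
    top, bottom = [], []
    for m in meta_row:
        block = META_TILES[m]
        top.extend(block[0])    # upper half of each 2x2 block
        bottom.extend(block[1]) # lower half
    return [top, bottom]

# A strip of level stored as 4 bytes expands to 16 tile indices:
print(expand_row([0, 1, 1, 0]))
# -> [[1, 1, 2, 3, 2, 3, 1, 1], [1, 1, 4, 5, 4, 5, 1, 1]]
```

Real engines pushed this further, nesting meta-tiles into larger "screens" or "chunks" for even better ratios.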
What changed with the shift to 16- and 32-bit platforms was the opening up of more general approaches: bigger simulations, or more elaborate approaches to real-time rendering. Games on the computers available circa 1990, like Carrier Command, Midwinter, Powermonger, and Elite II: Frontier, were examples of where things could be taken by combining simple 3D rasterizers with more in-depth simulation.
But in each case there was an element of knowing that you could fall back on the old tricks: instead of actually simulating the thing, make more elements global, rely on some scripting and lookup tables, let the AI be dumb but cheat, and call a dedicated piece of rendering code for your wall/floor/ceiling and bake that limit into the design instead of generalizing it. SimCity pulled off one of the greatest sleights of hand by making the map data describe a cellular automaton, which therefore behaved in complex ways without allocating anything to agent-based AI.
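The cellular-automaton idea can be shown with a toy pass in that spirit: each cell drifts toward the average of its neighbors, so something like pollution or land value "spreads" across the map with no agents at all. The rule and field name here are illustrative, not SimCity's actual tables.

```python
# Toy cellular-automaton pass: the map data IS the simulation.
# Integer math only - cheap on period hardware, and deterministic.

def step(grid):
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            neighbors = [
                grid[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w
            ]
            # Move each cell halfway toward its neighborhood average.
            out[y][x] = (grid[y][x] + sum(neighbors) // len(neighbors)) // 2
    return out

pollution = [[0, 0, 0],
             [0, 8, 0],
             [0, 0, 0]]
print(step(pollution))
# -> [[0, 1, 0], [1, 4, 1], [0, 1, 0]]  (the point source diffuses outward)
```

Run repeatedly per frame or per game tick, a handful of rules like this produces behavior that looks planned without any per-entity state.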
So what was happening by the mid-90s was the crossing of a threshold into an era where you could attempt to generalize more of these things without tanking your framerate or memory budget. This is the era where both real-time strategy and texture-mapped 3D arose. There was still a ton of compromise in fidelity - most things were still 256-color, with assets using a subset of that palette - and plenty of gameplay compromises in level size and complexity.
Can you be that efficient now? Yes and no. You can write something literally the same, but you give up lots of features in the process: it will not be "efficient Dwarf Fortress" but "braindead Dwarf Fortress". And you can write it for a modern environment, but the 64-bit memory model alone inflates your runtime sizes (both executable binary and allocated memory). You can render 3 million tiles more cheaply, but you have to give up on actually tracking all of them and do some kind of approximation instead. And so on.
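The inflation is easy to put numbers on. This is back-of-envelope arithmetic under an assumed layout, not a measurement of any particular engine: a 90s-style map stores one byte per tile into a static table, while a naive modern design holds one 8-byte reference per tile before counting the objects those references point at.

```python
from array import array

TILES = 3_000_000

# 1990s-style map: one byte per tile, indexing into a static tile table.
packed = array('B', bytes(TILES))
per_tile_bytes = packed.itemsize          # 1 byte each, ~3 MB total payload

# Naive 64-bit design: one pointer-sized reference per tile,
# before counting whatever each reference points at.
POINTER_BYTES = 8
naive_payload = TILES * POINTER_BYTES     # 24 MB just for the references

print(per_tile_bytes, TILES * per_tile_bytes, naive_payload)
```

An 8x blowup on references alone, which is why modern large-map engines end up reinventing the packed-index layouts anyway.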