https://danluu.com/risc-definition/
START QUOTE
A: there is a very specific set of characteristics shared by most machines labeled RISCs, most of which are not shared by most CISCs.
The RISC characteristics:
a) Are aimed at more performance from current compiler technology (e.g., enough registers).
OR
b) Are aimed at fast pipelining in a virtual-memory environment with the ability to still survive exceptions without inextricably increasing the number of gate delays (notice that I say gate delays, NOT just how many gates).
Even though various RISCs have made various decisions, most of them have been very careful to omit those things that CPU designers have found difficult and/or expensive to implement, and especially, things that are painful, for relatively little gain.
I would claim, that even as RISCs evolve, they may have certain baggage that they'd wish weren't there ... but not very much. In particular, there are a bunch of objective characteristics shared by RISC ARCHITECTURES that clearly distinguish them from CISC architectures.
I'll give a few examples, followed by the detailed analysis:
MOST RISCs:
3a) Have 1 size of instruction in an instruction stream
3b) And that size is 4 bytes
3c) Have a handful (1-4) of addressing modes (it is VERY hard to count these things; will discuss later).
3d) Have NO indirect addressing in any form (i.e., where you need one memory access to get the address of another operand in memory)
4a) Have NO operations that combine load/store with arithmetic, i.e., like add from memory, or add to memory. (note: this means especially avoiding operations that use the value of a load as input to an ALU operation, especially when that operation can cause an exception. Loads/stores with address modification can often be OK as they don't have some of the bad effects)
4b) Have no more than 1 memory-addressed operand per instruction
5a) Do NOT support arbitrary alignment of data for loads/stores
5b) Use an MMU for a data address no more than once per instruction
6a) Have >=5 bits per integer register specifier
6b) Have >= 4 bits per FP register specifier
END QUOTE
Not having a hardware division opcode isn't on the list; in fact, the MIPS chips had hardware division, but it was odd: it used the dedicated hi and lo registers, and it had an architecturally visible latency, so the compiler (or the human) was encouraged to schedule instructions so as not to stall the pipeline by reading the result of a division immediately after the divide opcode had issued.
https://devblogs.microsoft.com/oldnewthing/20180404-00/?p=98...
The divide opcode also didn't raise a divide-by-zero exception. The point is that MIPS, like a lot of RISC designs, prioritized ease of pipelining over convenient assembly-language behavior, and expected compilers and humans to pick up the slack and emit code to implement what, in a CISC design, would have been implemented in microcode.