The CPU

Why did you discard microprogramming?
I designed the machine around SSI/MSI integrated circuits, no LSI, so ROM chips were automatically discarded. With no ROM for storing microcode, the microprogramming approach is ruled out almost by itself.
There is a philosophical argument too: according to the Heritage/1 culture, software is something "external" to the computer, something that has to be loaded at a later time, not while the computer is still waking up. Microcode is not strictly "software" but it is "soft", some kind of ghost doing its magic inside the CPU. I wanted logic functions to be done with logic circuits, not with "ghost logic".
I also find hardwired logic clearer: when I say that the Op Code Fetch sequence takes 3 clock cycles, I mean exactly that: 750 nanoseconds (assuming a 4 MHz clock frequency), not 3 microinstruction cycles, each of which takes in turn some other number of nanoseconds to complete.
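
In rough C, that arithmetic looks like this. The 4 MHz clock and the 3-cycle fetch are the figures quoted above; the little program itself is only an illustration, not part of the design:

    #include <stdio.h>

    /* Illustration only: convert clock cycles to nanoseconds.
       The figures (4 MHz clock, 3-cycle Op Code Fetch) are the ones quoted above. */
    int main(void)
    {
        const double clock_mhz    = 4.0;                 /* assumed clock frequency */
        const double period_ns    = 1000.0 / clock_mhz;  /* one cycle = 250 ns      */
        const int    fetch_cycles = 3;                   /* Op Code Fetch sequence  */

        printf("Op Code Fetch: %d cycles x %.0f ns = %.0f ns\n",
               fetch_cycles, period_ns, fetch_cycles * period_ns);
        return 0;
    }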
You are not implementing the so-called MDR and MAR registers in this computer...
I am new to this matter and, yes, I've read about the so-called MDR and MAR registers and I've seen how other homebrew designers have implemented them, but (I may be wrong) I haven't seen the need for them so far in my design.
I read the instruction operation code from memory directly into the IR register (a single transfer: 2 clock periods) and it stays there for the instruction's lifetime. Similarly, I read the operand (if any) into the Operand Register (OR) while the IR is still providing the operation code to the instruction-decoding circuitry. I don't see the need for any intermediate register.
If the operand happens to be a direct address, I open the Address Buffer of the OR register to provide the address to the bus. If it happens to be an immediate value, I open the Data Buffer instead. Maybe I've replaced the so-called MAR register with the double-buffering design of my registers... I don't really know, but again, I don't see the need for intermediate registers.
What I guess is that not having intermediate transfers will make my CPU work faster... I guess...
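
A rough software model of that fetch path, just to make the idea concrete: IR, OR and the OR's two output buffers come from the description above; the memory array, bus widths and every other name are invented here purely for illustration, not taken from the actual design.

    #include <stdint.h>

    /* Toy model of the fetch path described above. IR, OR and the two OR
       output buffers come from the text; the memory array, bus variables
       and widths are assumptions made only for this sketch. */
    static uint8_t  memory[65536];
    static uint16_t address_bus;
    static uint8_t  data_bus;

    static uint8_t  IR;   /* Instruction Register: holds the op code for the whole instruction */
    static uint16_t OR;   /* Operand Register: holds the operand, if any                       */

    enum or_buffer { OR_ADDRESS_BUFFER, OR_DATA_BUFFER };

    /* Op code goes straight from memory into IR -- no intermediate MDR. */
    static void fetch_opcode(uint16_t pc)
    {
        address_bus = pc;
        IR = memory[address_bus];
    }

    /* Operand is read into OR while IR keeps feeding the instruction decoder. */
    static void fetch_operand(uint16_t pc)
    {
        address_bus = pc;
        OR = memory[address_bus] | (memory[(uint16_t)(address_bus + 1)] << 8);
    }

    /* The "double buffering": one of the OR's two output buffers is opened,
       so OR drives either the address bus or the data bus -- no separate MAR. */
    static void drive_bus_from_OR(enum or_buffer which)
    {
        if (which == OR_ADDRESS_BUFFER)
            address_bus = OR;           /* operand is a direct address   */
        else
            data_bus = (uint8_t)OR;     /* operand is an immediate value */
    }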
How did you get to such a bizarre interrupt architecture?
It is not bizarre. The idea came to me from the DEC PDP-11 Manual. I just simplified it to the minimum: a single IRQ/IAK pair plus a third signal that I had to add: Interrupt Service End (ISE). I'm not sure if it's going to work. It works on paper, at least.
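
Just to make the handshake concrete, here is a toy model of the three-signal protocol. IRQ, IAK and ISE are the signals named above; everything else (the function names, the way the pulses are represented) is invented for this sketch and says nothing about the real circuitry:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of the three-signal interrupt handshake described above.
       IRQ, IAK and ISE come from the text; the rest is illustration only. */
    static bool IRQ;   /* device -> CPU: interrupt request     */
    static bool IAK;   /* CPU -> device: interrupt acknowledge */
    static bool ISE;   /* CPU -> device: interrupt service end */

    static void device_raises_request(void) { IRQ = true; }

    static void cpu_acknowledges(void)
    {
        if (IRQ) {
            IAK = true;      /* CPU grants the request...           */
            IRQ = false;     /* ...and the device drops its request */
        }
    }

    static void cpu_finishes_service(void)
    {
        IAK = false;
        ISE = true;          /* pulse telling the device the service routine is done */
        ISE = false;
    }

    int main(void)
    {
        device_raises_request();
        cpu_acknowledges();
        cpu_finishes_service();
        printf("handshake complete: IRQ=%d IAK=%d ISE=%d\n", IRQ, IAK, ISE);
        return 0;
    }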