
Parallel Execution

Modern CPUs can process multiple commands and parts of commands in parallel, known as parallel execution. Early processors had to do everything in a strict, linear fashion. The CPUs accomplish this parallelism through multiple pipelines, dedicated cache, and the capability to work with multiple threads or programs at one time. To understand the mighty leap in efficiency gained from parallel execution, you need insight into the processing stages.

Pipelining To get a command from the data bus, do the calculation, and then send the answer back out onto the data bus, a CPU takes at least four steps, each of which is called a stage (a simple sketch of the stages follows the list):

1. Fetch Get the data from the EDB.
2. Decode Figure out what type of command needs to be executed.
3. Execute Perform the calculation.
4. Write Send the data back onto the EDB.
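To make the four stages concrete, here is a minimal sketch in Python (a toy model, not how real silicon works; the command format of an opcode plus two numbers is invented for illustration) that walks each command through fetch, decode, execute, and write in strict order, the way early CPUs did:

    # Toy model of the four stages. Real CPUs work on binary machine code
    # delivered over the external data bus (EDB); the tuples here are invented.

    def fetch(incoming):
        # Stage 1: get the next command off the (simulated) data bus.
        return incoming.pop(0)

    def decode(raw):
        # Stage 2: figure out what type of command needs to be executed.
        opcode, a, b = raw
        return opcode, a, b

    def execute(opcode, a, b):
        # Stage 3: perform the calculation.
        if opcode == "ADD":
            return a + b
        if opcode == "MUL":
            return a * b
        raise ValueError("unknown opcode: " + opcode)

    def write(answer, outgoing):
        # Stage 4: send the answer back onto the data bus.
        outgoing.append(answer)

    incoming = [("ADD", 2, 3), ("MUL", 4, 5)]
    outgoing = []
    while incoming:
        raw = fetch(incoming)
        opcode, a, b = decode(raw)
        answer = execute(opcode, a, b)
        write(answer, outgoing)
    print(outgoing)  # [5, 20] -- one command finishes before the next begins

Each pass through the loop stands in for the four clock cycles an early CPU needed per command; nothing overlaps.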

Smart, discrete circuits inside the CPU handle each of these stages. In early CPUs, when a command was placed on the data bus, each stage did its job and the CPU handed back the answer before starting the next command, requiring at least four clock cycles to process a command. In every clock cycle, three of the four circuits sat idle. Today, the circuits are organized in a conveyor-belt fashion called a pipeline. With pipelining, each stage does its job with each clock-cycle pulse, creating a much more efficient process. The CPU has multiple circuits doing multiple jobs, so let's add pipelining to the Man in the Box analogy. Now, it's Men in the Box (see Figure 3-24)!
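The payoff from pipelining is easiest to see by counting clock cycles. The sketch below (again a simplification; the four-stage model and the one-cycle-per-stage assumption are just for illustration) compares the old one-command-at-a-time approach with a pipeline that starts a new command on every clock pulse:

    STAGES = ["Fetch", "Decode", "Execute", "Write"]

    def cycles_without_pipeline(num_commands):
        # Each command occupies the CPU for four full cycles, so three of the
        # four circuits sit idle during any given cycle.
        return num_commands * len(STAGES)

    def cycles_with_pipeline(num_commands):
        # Once the pipeline fills (four cycles), one command finishes on every
        # clock pulse after that.
        return len(STAGES) + (num_commands - 1)

    for n in (1, 4, 100):
        print(n, "commands:",
              cycles_without_pipeline(n), "cycles unpipelined vs",
              cycles_with_pipeline(n), "cycles pipelined")

For a single command the two approaches tie, but at 100 commands the pipelined CPU needs 103 cycles instead of 400, close to a fourfold improvement with the same four circuits.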
