Module 2: Computer Memory

Introduction:

This is the second module on computer architecture.  In the last module we covered the CPU and learned that it is the part of the computer that executes instructions.  This time we are going to look at where the CPU pulls those instructions from.  There are two kinds of memory we will discuss in this module: Random Access Memory (RAM) and cache memory.

For this module, all we need to understand is that a program has to be stored somewhere so the CPU can access the information in it.

Keep It Simple:

Computer memory is where a program is stored during execution.  Imagine a piece of lined notebook paper.  We can number each line of the paper, starting with 0 on the first line and continuing all the way down to the last line of the page.  Remember the black box example from module 1?  This time we are going to look at the instructions for adding two numbers.  They could go something like this:

  1. Accept first number
  2. Store first number
  3. Accept second number
  4. Add first and second number
  5. Store new number.
  6. Display new number.

We have six instructions and we need a place to put them so that the CPU can execute them.  We can start by writing the first instruction on line 0 of the page and continue until line 5 of the page contains instruction six.

We can also store more than just instructions on the page.  We have plenty of room to store the three numbers in our computation as well.  We could change instruction two to something like: store the first number at line seven.  Then whenever we need the first number, we can just tell the CPU to get the value at line seven.
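
To make the analogy concrete, here is a minimal sketch in C that treats memory as a small array of numbered lines.  The specific line numbers (7, 8, and 9) are just illustrative choices, and C is standing in for the hand-written instructions on lines 0 through 5.

    #include <stdio.h>

    int main(void) {
        /* The "notebook": a small block of memory with numbered lines 0-9. */
        /* Lines 0-5 would hold the six instructions; here we only model    */
        /* the data lines, since C compiles the instructions for us.        */
        int memory[10] = {0};

        memory[7] = 5;                       /* store first number at line 7  */
        memory[8] = 3;                       /* store second number at line 8 */
        memory[9] = memory[7] + memory[8];   /* add them, store at line 9     */

        printf("line 9 holds %d\n", memory[9]);  /* display new number: 8 */
        return 0;
    }

Compiling and running this (for example with gcc notebook.c && ./a.out) should report that line 9 holds 8.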

The Computer's Notebook:

RAM is what the notebook paper represents in the example.  A program is stored there during execution so the CPU has access to the program's information.  There are many ways of organizing and accessing RAM.  We aren't going to discuss those in this post, but you can learn more in a good operating systems book.  There are different types and speeds of RAM that affect how it performs.  But even the fastest RAM has a high cost of access for the CPU: RAM is located outside the CPU, and it takes many CPU cycles to fetch information stored there.

Sometimes we need to reuse information frequently.  It's like working on a car: if you know you're going to need a 10mm socket again, you can keep it in a tray right next to you instead of walking it back to the toolbox every time you need it.  There is a similar concept in computer memory called cache memory.

There are three levels of cache memory that can be built into a CPU: L1, L2, and L3.  Each level has a different access cost for the CPU, measured in CPU cycles.  L1 is the fastest and L3 is the slowest.
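
If you are curious what caches your own machine has, here is a small sketch that reads the cache information Linux exposes through sysfs.  This assumes a Linux system; the index layout (index0/index1 for the L1 data and instruction caches, index2 for L2, index3 for L3) is typical but varies by CPU.

    #include <stdio.h>

    int main(void) {
        /* On Linux, per-CPU cache details live under sysfs.  Each indexN */
        /* directory describes one cache; we read its level and size.     */
        for (int i = 0; i < 4; i++) {
            char path[128], level[16] = "", size[16] = "";
            FILE *f;

            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cache/index%d/level", i);
            if ((f = fopen(path, "r"))) { fscanf(f, "%15s", level); fclose(f); }

            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cache/index%d/size", i);
            if ((f = fopen(path, "r"))) { fscanf(f, "%15s", size); fclose(f); }

            if (level[0] && size[0])
                printf("cache index%d: L%s, %s\n", i, level, size);
        }
        return 0;
    }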

Address Space:

Address space here refers to the RAM address space and doesn't include cache memory.  This is where the 32-bit vs 64-bit part of the architecture is important (or 8-bit and 16-bit).  On a 32-bit system the addresses are 32 bits wide.  That means there are 32 binary entries in each address, for a total of 2^32 (roughly 4 billion) addresses.  Since each address typically refers to one byte, that is why a 32-bit machine can't use more than 4GB of memory without the help of features like Physical Address Extension (PAE).  64-bit systems have 2^64 addresses available, allowing them to use a much larger amount of memory.
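
As a quick sanity check, here is a small C sketch that computes the size of a 32-bit address space, assuming each address names one byte (the usual arrangement on modern machines):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* A 32-bit address has 2^32 possible values. */
        uint64_t addresses = 1ULL << 32;

        /* Assuming each address names one byte, this is the amount of */
        /* memory a plain 32-bit machine can address.                  */
        printf("2^32 = %llu addresses\n", (unsigned long long)addresses);
        printf("     = %llu GB (using 1 GB = 2^30 bytes)\n",
               (unsigned long long)(addresses >> 30));
        return 0;
    }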

Let's look at what that really means.  In a binary number, each entry is a bit.  As an example, the number 1100 has four bits.  The number 1100 in binary represents the number 12 in decimal, and the number C in hexadecimal.  (If all of this is confusing, we will go over numbering systems in an assembly language module.)
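
Here is a tiny C example that prints that same value in decimal and hexadecimal.  The 0b binary literal is a GCC/Clang extension (standardized only in C23), so treat it as a convenience here.

    #include <stdio.h>

    int main(void) {
        /* 0b binary literals are a GCC/Clang extension (standard in C23). */
        unsigned n = 0b1100;

        printf("binary 1100 is %u in decimal\n", n);     /* prints 12 */
        printf("binary 1100 is %X in hexadecimal\n", n); /* prints C  */
        return 0;
    }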

We can compute the number of possible addresses by counting the possible entries for each bit.  For the first bit there are two choices, 1 and 0, so if we had a one-bit address space there would be two addresses.  Adding another bit gives us two choices for that entry as well, 1 and 0 again.  Two choices for the first entry times two choices for the second entry gives us 2 * 2 = 4 possible addresses.  Continuing in this way, we multiply by two for each successive address bit.  That's how we get 2^32 addresses for a 32-bit architecture.
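
The same doubling argument can be written as a short loop.  This sketch just multiplies by two once per address bit and prints a few of the intermediate counts along with the final one.

    #include <stdio.h>

    int main(void) {
        /* Each additional address bit doubles the number of possible addresses. */
        unsigned long long addresses = 1;

        for (int bits = 1; bits <= 32; bits++) {
            addresses *= 2;
            if (bits <= 4 || bits == 32)  /* show the first few steps and the final count */
                printf("%2d bit(s): %llu addresses\n", bits, addresses);
        }
        return 0;
    }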

Let's look at an example address.  There are 32 bits, so we have 0000 0000 0000 0000 0000 0000 0000 0000 as the entries.  I grouped them into fours for a reason.  Let's pick some entries at random to turn into 1s: 1100 0101 0001 1000 0011 1001 0001 1111.  That's not a horrible representation, but we can get a better one.  This is where hexadecimal notation comes into play.  It takes four bits to represent one hexadecimal digit, which is why I grouped the bits the way I did.  This same address has the representation 0xC518391F in hexadecimal, which is a bit easier to read than the binary version.  When we see memory addresses in debuggers, they are represented in hexadecimal notation.
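
To see the four-bits-per-hex-digit grouping in action, here is a short sketch that takes the example address from above and prints each hexadecimal digit next to the four bits it stands for.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The example address from the text. */
        uint32_t addr = 0xC518391F;

        /* Walk the address four bits at a time, from the most significant */
        /* group down, printing each hex digit next to its four bits.      */
        for (int shift = 28; shift >= 0; shift -= 4) {
            unsigned digit = (addr >> shift) & 0xF;
            printf("%X -> %u%u%u%u\n", digit,
                   (digit >> 3) & 1, (digit >> 2) & 1,
                   (digit >> 1) & 1, digit & 1);
        }

        printf("full address: 0x%08X\n", addr);
        return 0;
    }

The first line of output is "C -> 1100", and the last line prints the whole address the way a debugger would show it.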

Conclusion:

We now know a little bit about computer memory.  RAM is located outside the CPU and has the slowest access speed of the memory we discussed.  RAM is where a program is stored during execution.  Cache memory is located inside the CPU and is used for storing information the CPU needs to access frequently.  The three levels of cache memory are all faster to access than RAM.

There are a lot of ways that RAM can be organized and accessed.  Those are topics we will cover as they become important to what we are doing.  If you want to learn more now, a good operating systems book will go into the details.  For now, the notebook is a decent analogy: we can access information based on which page contains the lines the CPU needs.
