Inside a Computer and Binary Representation

Building on our previous study of computing systems, last week we learned about what is inside a computer, including the CPU's components and their respective functions, as well as binary representation. One notable thing about the week's classes was the number of hands-on activities, which, personally speaking, made the content much more enjoyable and comprehensible.

How Does a Computer/CPU Work?

In order to better understand what’s inside a computer and how it works, we first watched a video that shrank us down to the size of electrons and led us on “a tour” inside a computer. The animation vividly showed that the process between a mouse click and the final display of a video is actually extremely complicated, even though all of it usually happens within milliseconds, an interval too short for human eyes to perceive. This applies to other situations in life as well: something considered easy and ordinary can actually require a great deal of hidden work and ingenuity.

Functions and Main Components of CPU

Numerous parts keep a computer working, and among them the central processing unit (CPU), or microprocessor, is so important that it can be considered the “brain” of the computer. It is responsible for executing a sequence of stored instructions called a program, which takes input from an input device, processes it in some way, and sends the results to an output device.

An Illustration of a CPU

There are three essential components in a CPU: the control unit, the arithmetic logic unit, and the registers. The control unit (CU) makes decisions and sends the appropriate signals down its lines to other parts of the CPU, controlling the timing of operations and the instructions sent to the processor and the peripheral devices. The arithmetic logic unit (ALU) carries out the arithmetic (calculation) and logic (decision) functions. The registers provide temporary memory storage locations within the CPU.

Roleplaying Activity

To better understand the function of each component within the CPU, we did a roleplaying activity. We were divided into four groups of five, with each group member representing a component of the CPU. Student CU (control unit) was given the program to run and was responsible for telling the other components (students) what to do; student ALU/register (Arithmetic/Logic Unit and memory) kept track of the current values of x and y and performed any math operations requested by the CU; student display responded to commands from the CU by plotting the (x, y) values on the display grid; student CPU bus carried the messages between the other components; and student CPU clock kept a record of the total working time of all the other components.

Add 4 to x

Add 6 to y

Plot (x, y)

Examples of Instructions Given by CU
Two Final Displays of the Activity

Through this simplified demonstration of the CPU, we got a better idea of how it works and some of its characteristics. Firstly, the computer doesn’t “understand” what it’s doing; it simply follows the instructions, receiving an input, processing it, and giving an output. Secondly, once a program begins, there’s no way to stop it or make changes or corrections halfway through. Even if there is a mistake in the program, the computer will still keep going and try to execute the program as written.
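The roleplay above can be sketched in a few lines of Python. This is a hypothetical simulation of the activity, not the actual classroom material: a `run_program` function plays the CU, a dictionary of registers plays the ALU/register student, and a list plays the display.

```python
def run_program(instructions):
    """A toy 'CPU' that blindly executes a list of instructions."""
    registers = {"x": 0, "y": 0}    # student ALU/register holds x and y
    display = []                    # student display records plotted points

    for op, *args in instructions:  # student CU dispatches each instruction
        if op == "add":
            reg, amount = args
            registers[reg] += amount             # the ALU does the math
        elif op == "plot":
            display.append((registers["x"], registers["y"]))
        # Like a real CPU, this loop never "understands" the program:
        # it just executes whatever it is given, mistakes included.
    return display

program = [("add", "x", 4), ("add", "y", 6), ("plot",)]
print(run_program(program))  # [(4, 6)]
```

Note how there is no error checking anywhere: if the program contained a wrong instruction, the loop would still march on and execute it as written, which is exactly the second characteristic observed in class.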

Binary Representation

Data & Information

Before actually digging into binary representation, two terms need clarifying: data and information. Data is the raw material, the numbers that computers work with, facts that still need to be processed. Information includes words, numbers, and pictures that can be understood; it is usually processed data. So the relationship between the two is that data is processed, or converted, by a computer into information.

Binary Number System

Most of us are probably familiar with the fact that data in computers is stored and transmitted as a series of zeros (0) and ones (1). But how can all the various words and numbers be represented using just these two symbols?

Let’s start with transforming binary to and from decimal.

First, how to convert decimal to binary:

  1. List all the powers of 2 that are not larger than the decimal number, from the largest to the smallest, left to right, either in your head or on paper. Note that the numbers are placed in order of decreasing magnitude!
  2. Work out which of these powers of 2, when added together, equal the decimal number you want to represent. Each power of 2 is either used once or not at all.
  3. Put a “1” under each power of 2 you need and a “0” under each one you don’t.
  4. Write down the resulting series of “0”s and “1”s (note that the series always starts with a “1”); this is the binary representation of the decimal number.

Despite the rough summary above, here’s an actual example. When converting the number ten from decimal to binary, list all the powers of 2 not larger than ten: 8, 4, 2, and 1. Through simple addition, we find that 10 = 8 + 2. Only the numbers 8 and 2 are used, so a “1” goes under 8, a “0” under 4, a “1” under 2, and a “0” under 1. The final binary representation we get is 1010.

Converting 10 to Binary: 10 (10) = 1010 (2)
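The steps above translate almost directly into code. Here is a small Python sketch of the same procedure (written from scratch for illustration; Python’s built-in `bin()` would do the job in one call):

```python
def decimal_to_binary(n):
    """Convert a positive integer to a binary string, following the
    listed steps rather than using Python's built-in bin()."""
    # Step 1: list the powers of 2 not larger than n, largest first.
    powers = []
    p = 1
    while p <= n:
        powers.append(p)
        p *= 2
    powers.reverse()

    # Steps 2-4: mark each power as used ("1") or not used ("0").
    digits = ""
    for power in powers:
        if power <= n:
            digits += "1"
            n -= power      # this power of 2 is "used once"
        else:
            digits += "0"   # this power of 2 is "not used at all"
    return digits

print(decimal_to_binary(10))  # "1010"
```

Because the list starts at the largest power of 2 that fits, the output always begins with a “1”, matching the note in step 4.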

Second, how to convert binary to decimal:

  1. List all the powers of 2, from the smallest (1) to the largest, right to left, with each number matching a digit of the binary number.
  2. Multiply each listed power of 2 by its corresponding “1” or “0”, and add the products together. The resulting sum is the decimal equivalent of the binary number.

Again, take the binary number 1010 as an example. List the powers of 2, which in this case are 1, 2, 4, and 8, from right to left. Carry out the simple calculation: 0*1 + 1*2 + 0*4 + 1*8 = 10. So the decimal representation of binary 1010 is 10.
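The reverse direction is even shorter in code. This sketch pairs each digit with its power of 2, right to left, exactly as the two steps describe (Python’s built-in `int(bits, 2)` is the one-line alternative):

```python
def binary_to_decimal(bits):
    """Convert a binary string to a decimal integer by summing
    digit * power-of-2 for each digit, right to left."""
    total = 0
    power = 1                       # the rightmost digit matches 2**0 == 1
    for digit in reversed(bits):    # walk the digits right to left
        total += int(digit) * power
        power *= 2                  # next digit matches the next power of 2
    return total

print(binary_to_decimal("1010"))  # 10
```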

These two conversion processes may seem complicated at first, but things always get better with practice, which was probably why we were given an assignment on this after class.

The Assignment on Converting Decimal to Binary

ASCII & UNICODE

Other than numbers, computers store and transmit everything else in binary form as well, including words, pictures and videos. So how is text represented? Two encoding standards are widely used today: ASCII and UNICODE.

ASCII is the abbreviated form of American Standard Code for Information Interchange. It’s a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices as well. Most modern character-encoding schemes are based on ASCII, although they support many additional characters.

UNICODE is a computing industry standard whose goal is to provide a single character set and character encoding containing all the characters used in the world, along with rules for storing these characters as bytes in memory or on physical media. Unicode characters are represented as sequences of bytes using character encodings.

UNICODE Table for Different Languages

The major differences between ASCII and UNICODE are as follows:

  1. ASCII is a fixed 7-bit encoding (commonly stored in 8-bit bytes), while Unicode uses variable-length encodings such as UTF-8.
  2. Both are standards, but ASCII has been essentially frozen for decades, while Unicode is actively maintained and extended.
  3. Unicode can represent most written languages in the world, while ASCII cannot.
  4. Every ASCII character keeps the same code point in Unicode, so ASCII is effectively a subset of Unicode.
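The relationship between the two standards is easy to see with Python’s built-in `ord()`, `chr()`, and `str.encode()` functions:

```python
# "A" has code 65 in ASCII, and the very same code point in Unicode,
# because ASCII's 128 characters are the first 128 Unicode code points.
print(ord("A"))                   # 65
print(chr(65))                    # A

# In UTF-8 (a variable-length Unicode encoding), ASCII characters
# still take one byte each, so plain ASCII text is already valid UTF-8.
print(len("A".encode("utf-8")))   # 1 byte

# Characters outside ASCII need more bytes in UTF-8.
print(len("€".encode("utf-8")))   # 3 bytes
print(len("中".encode("utf-8")))  # 3 bytes
```

This shows both the subset relationship (point 4) and the variable-length nature of Unicode encodings (point 1) in a few lines.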

Through our further study of the computer and its inner world, I feel like I am finally starting to understand the machine we deal with every day, something I hadn’t done until recent weeks. Its hidden wonders, and all the wisdom and effort behind it, truly fascinate me. The long journey through computer science for the remainder of this semester is yet to come...
