
Representation of information in a computer

Have you ever wondered what ancient peoples, modern humans, and computers have in common? Despite the differences, there is something: the rock carvings of primitive man, the written records of our contemporaries, and the binary code inside computers are all ways of representing information, or, more precisely, just a few of its many forms. Now that computers are firmly part of everyday life, everyone needs to understand at least the basic terms and concepts in order to keep up with the times.

Since their inception, computing systems have gone through several generations: first mechanical calculating machines, then vacuum-tube models, and finally their semiconductor electronic successors. Interestingly, the basic principles of data encoding have remained unchanged since the earliest days of computing. In other words, information is represented in a computer on the same principles as in those mechanical devices; what changed is the implementation, not the principle. Everyone knows that information in a computer is represented in binary form: this is taught in the very first computer-science lessons at school. But what is hidden behind the term "binary number system"?

Let's count to ten: 0, 1, 2, 3 ... 9, 10. The first ten entries in this row are single digits, and "10" itself is not a new digit, since it is composed of the two simpler digits "1" and "0". A computer represents information differently: it uses only the first two digits, and not as written symbols but as electrical states. A transistor, the "brick" of modern electronic circuits, can be in one of two positions, closed or open. If a blocking voltage is applied to its base, the element does not conduct current, and vice versa; one state is read as a logical zero, the other as a logical one. Of course, in practice the representation of information in a computer relies on more complex conventions: depending on the scheme, "1" can stand for either the presence or the absence of a signal. And such signals do not merely set the state of a single transistor; they drive the "AND" and "OR" logic gates from which circuits are built.
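The contrast between the familiar decimal row and the two-digit binary row can be shown in a few lines. This is a minimal Python sketch, not part of any particular system; it simply prints each value in both notations:

```python
# Print the numbers 0..10 in decimal and in binary.
# Binary uses only the digits 0 and 1, so decimal 10 becomes "1010".
for n in range(11):
    print(f"{n:2d} -> {n:b}")
```

The last line of output shows that decimal 10 is written in binary as 1010, four digits built entirely from ones and zeros.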

Logical "0" and "1" are called bits (from "binary digit"). A group of eight (not ten!) bits is a byte. By combining sequences of bits, you can encode any character, which is why the byte is usually treated as the smallest addressable unit of information (the bit itself being the smallest unit of all). In turn, by arranging sequences of bytes, you can encode (represent in digital form) any information. This encoding is performed both by dedicated hardware and by computer programs. For example, when we speak into a microphone "via Skype", the analog electrical signal (a wave) is converted by the sound card into a stream of logical zeros and ones; these are transferred to the program on the interlocutor's side, where the reverse transformation is performed, back into a wave sent to the sound-reproducing device. Similarly, by pressing any key on the keyboard, the user sends the program the corresponding binary code, although for convenience the intended symbol is displayed on the screen.
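The step from characters to bytes to bits can be made concrete with a short Python sketch (the sample text and the ASCII encoding here are arbitrary choices for illustration):

```python
# Encode a short string: each ASCII character becomes one byte,
# and each byte is a group of exactly eight bits.
text = "Hi"
data = text.encode("ascii")                # b'Hi' -> bytes 72 and 105
bits = [format(b, "08b") for b in data]    # each byte as 8 binary digits

print(list(data))   # [72, 105]
print(bits)         # ['01001000', '01101001']
```

So the letter "H" travels through the machine as the byte 72, i.e. the bit pattern 01001000; the reverse transformation (bytes back to characters) is what lets the symbol reappear on the screen.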

The methods of representing information in a computer, as already noted, allow us to encode anything. For example, to digitize an image the following approach is used: since any picture can be represented as a set of points, each characterized by its coordinates on the plane, its brightness, and its color, it is enough to turn all of these data into a computer-readable sequence of ones and zeros. Then, to view such an electronic copy on the monitor, a program sends the data for each point to the output device, and the picture is reconstructed from them.
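The point-by-point scheme described above can be sketched in Python. Everything here is illustrative: a hypothetical 2x2 image where each point has plane coordinates and an RGB color, flattened into a plain byte sequence a program could send to an output device:

```python
# A tiny 2x2 "image": coordinates on the plane -> (R, G, B) color.
pixels = {
    (0, 0): (255, 0, 0),      # red
    (0, 1): (0, 255, 0),      # green
    (1, 0): (0, 0, 255),      # blue
    (1, 1): (255, 255, 255),  # white
}

# Serialize the points in row order: every pixel becomes three bytes,
# one per color channel, giving a sequence of ones and zeros underneath.
raw = bytes(channel for xy in sorted(pixels) for channel in pixels[xy])

print(len(raw))  # 12 bytes: 4 points * 3 color channels
```

Reconstructing the picture is the reverse walk: read three bytes at a time and light the corresponding point with that color.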

The advantage of the binary number system over others lies in its simplicity and in how naturally it maps onto the control of electronic switches. This was in large part the main reason for its adoption in modern computing systems.
