
Units of information in informatics. The minimum unit of information

Our high-tech century is distinguished by its wide capabilities. With the development of electronic computers, people have opened up amazing horizons: any interesting news can now be found on the global network for free, without leaving home. This is a breakthrough in the field of technology. But how much data can a computer store in its memory, process and transmit over long distances? What units of information measurement exist in computer science, and how do we work with them? Today not only people directly involved in writing computer programs, but also ordinary students, should know the answers to these questions. After all, this is the basis of everything.

Definition of information in computer science

We are used to thinking of information as any knowledge conveyed to us, but in computer science this word has a slightly different definition. Information is the basic component of the whole science of electronic computers. Why basic, or fundamental? Because computer technology processes data, stores it and delivers it to people. The minimum unit of information is the bit. Information is kept on the computer until the user wants to view it.

In everyday speech, information is a unit of language. That is true, but computer science uses another definition: information is data about the state, properties and parameters of the objects of our environment. It is quite clear that the more we learn about an object or phenomenon, the more we realize how scanty our view of it was. Thanks to the huge amount of absolutely free and accessible material from all over the world, it is now much easier to learn, make new acquaintances, work, and relax while reading books or watching movies.

Alphabetical aspect of measuring the amount of information

When printing documents for work, writing articles for websites or managing a personal blog on the Internet, we do not think about how data is exchanged between the user and the computer itself. How can a machine understand commands, and in what form does it store all its files? In informatics the bit is taken as the unit of measurement of information: a bit stores one binary digit, a zero or a one. The essence of the alphabetical approach is to measure a text as a sequence of signs; do not confuse the alphabetical approach with the content of the text, for these are completely different things. The volume of such data is proportional to the number of characters entered. It follows that the information weight of a sign from the binary alphabet is equal to one bit. Units of information in informatics differ, like any other measure, and the bit is the minimum value.
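The alphabetical approach described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original article: the function names are invented for the example. The information weight of one sign from an alphabet of power N is the number of bits needed to distinguish N signs, and a text's volume is that weight times its length.

```python
import math

def bits_per_symbol(alphabet_power: int) -> int:
    # Information weight of one sign from an alphabet of N signs:
    # the smallest i such that 2**i >= N. For the binary alphabet, 1 bit.
    return math.ceil(math.log2(alphabet_power))

def text_volume_bits(text: str, alphabet_power: int) -> int:
    # The volume of the text is proportional to the number of signs.
    return len(text) * bits_per_symbol(alphabet_power)

print(bits_per_symbol(2))             # binary alphabet: 1 bit per sign
print(text_volume_bits("101010", 2))  # 6 signs * 1 bit = 6 bits
```

With a 256-sign alphabet the same functions give 8 bits per sign, which is why one text character is commonly stored in one byte.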

The content aspect of calculating the amount of information

This measurement of information is based on probability theory. Here the question is how much data is contained in a message received by a person, and theorems of discrete mathematics come into play. To calculate the volume of material, two different formulas are used, depending on whether the possible outcomes of the event are equally probable. The units of measurement of information remain the same, but calculating the amount of information in symbols and graphics is much more complicated in the meaningful approach than in the alphabetical one.
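For the case of equally probable outcomes, the formula in question is Hartley's: a message about an event with probability p carries I = log2(1/p) bits. A short sketch (the function name is my own for illustration):

```python
import math

def information_bits(p: float) -> float:
    # Hartley's formula: a message about an event of probability p
    # carries log2(1/p) bits of information.
    return math.log2(1.0 / p)

# Learning which of 8 equally likely balls was drawn: p = 1/8, so 3 bits.
print(information_bits(1 / 8))  # 3.0
```

A coin flip (p = 1/2) carries exactly one bit, which again shows why the bit is the natural minimum unit.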

Types of information processes

There are three main types of processes performed in an electronic computer:

  1. Data processing. How does this process go? Through input devices, whether a keyboard, an optical mouse, a scanner or another device, the computer receives information. It then converts the data into binary code and writes it to the hard disk in bits, bytes and megabytes. To convert any unit of information into another, there is a table from which you can calculate, for example, how many bits there are in one megabyte; the computer performs all such conversions automatically.
  2. Storage of files and data in the device's memory. The computer remembers everything in binary form, as a code consisting of zeros and ones.
  3. The third main process occurring in an electronic computer is data transmission. It is also implemented in binary form, but on the monitor screen the information is displayed in symbolic or another form familiar to our perception.
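The round trip described by these three processes can be shown with a single character: a symbol entered by the user is stored as bits and later displayed back as a symbol. This is only an illustrative sketch of the idea:

```python
# Input: a symbol is turned into its numeric code.
code = ord("A")

# Storage: the code is kept in binary form, a string of zeros and ones.
bits = format(code, "08b")

# Output: the bits are turned back into a symbol for the screen.
shown = chr(int(bits, 2))

print(bits, shown)  # 01000001 A
```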

Coding information and the measure of its measurement

The bit is used as the unit of measurement of information, and it is easy to work with because it can hold only the value 0 or 1. How does the computer encode ordinary decimal numbers in binary? Consider a small example that explains the principle of encoding information by computer technology.

Suppose we have a number in the usual decimal system: 233. To convert it into binary form, we divide it by 2 repeatedly, writing down the remainder at each step, until the quotient becomes smaller than the divisor itself (in our case, 2).

  1. We begin the division: 233 / 2 = 116 with remainder 1. The remainders, written down separately, will become the digits of the resulting binary code.
  2. The second step: 116 / 2 = 58 with remainder 0, again recorded separately.
  3. 58 / 2 = 29 with remainder 0. Do not forget to write down this 0: losing even one digit gives a completely different value. This code will later be stored on the computer's hard drive as bits, the minimum units of information in computer science.
  4. 29 / 2 = 14 with remainder 1, which is written next to the binary digits already obtained.
  5. 14 / 2 = 7 with remainder 0.
  6. A little more, and the binary code will be ready: 7 / 2 = 3 with remainder 1, which we add to the future answer.
  7. 3 / 2 = 1 with remainder 1. Hence we write two ones in the answer: one as the remainder, the other as the final quotient, which is no longer divisible by 2.

Remember that the answer is written in reverse order: the remainder from the first step becomes the last digit, the remainder from the second step the penultimate one, and so on. Our final answer is 11101001.
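The seven division steps above can be sketched as a short function (a minimal illustration; the function name is my own):

```python
def to_binary(n: int) -> str:
    # Repeatedly divide by 2, collecting remainders; the remainders
    # read in reverse order form the binary code.
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))
        n //= 2
    return "".join(reversed(remainders)) or "0"

print(to_binary(233))  # 11101001
```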

This binary number is kept in the computer's memory in exactly this form until the user wants to view it on the monitor screen. Bit, byte, megabyte, gigabyte: these are the units of information measurement in computer science, and it is in these quantities that binary data is stored in the computer.

Reverse translation of a number from binary to decimal

To carry out the reverse translation from a binary value to the decimal system, we use a formula. Count the digits of the binary value, numbering positions from the right starting with 0; in our case there are 8 digits, so the positions run from 0 up to 7. Now multiply every digit of the code by 2 raised to the power of its position (7, 6, 5, ..., 0) and add up the results.

1 * 2^7 + 1 * 2^6 + 1 * 2^5 + 0 * 2^4 + 1 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0 = 233. Here is our initial number, the one we had before the translation into binary code.
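The same positional formula in code (again a minimal sketch with an invented function name):

```python
def to_decimal(binary: str) -> int:
    # Multiply every digit by 2 raised to its positional power,
    # counting positions from the right starting at 0, and sum.
    total = 0
    for power, digit in enumerate(reversed(binary)):
        total += int(digit) * 2 ** power
    return total

print(to_decimal("11101001"))  # 233
```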

Now you know the essence of information coding by a computer device and the minimum measure of information storage.

Minimum unit of information: description

As mentioned above, the smallest unit of information measurement is the bit. The word is of English origin, short for "binary digit". Looked at from another angle, a bit is a memory cell in an electronic computer that stores either 0 or 1. Bits can be converted into bytes, megabytes and even larger quantities of information; the electronic computer itself performs this conversion when it stores binary code in the memory cells of the hard drive.

Some computer users may want to convert measures of digital information from one unit to another quickly. For such purposes online calculators have been developed: they perform in a second an operation that would otherwise take a lot of time by hand.

Units of information in informatics: a table of quantities

Computers, flash drives and other devices for storing information differ from each other in memory capacity, which is usually measured in gigabytes. Look at the basic table of values to see how the units of information compare, each one ascending from the previous.

Table №1. Conversion of units of measurement into the minimum value

Name of amount of information | Factor relative to the byte | Symbol
Byte                          | 1024^0                      | B
Kilobyte                      | 1024^1                      | Kb
Megabyte                      | 1024^2                      | MB
Gigabyte                      | 1024^3                      | GB
Terabyte                      | 1024^4                      | TB
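The table lends itself to a tiny converter like the online calculators mentioned earlier. This is a sketch under the table's convention that each step up the scale is a factor of 1024; the function names are invented for the example.

```python
# Unit names in ascending order, matching the table above.
UNITS = ["B", "Kb", "MB", "GB", "TB"]

def to_bytes(value: float, unit: str) -> float:
    # Each unit is 1024 times the previous one.
    return value * 1024 ** UNITS.index(unit)

def convert(value: float, src: str, dst: str) -> float:
    # Go through bytes as the common minimum value.
    return to_bytes(value, src) / 1024 ** UNITS.index(dst)

print(convert(1, "MB", "Kb"))     # 1024.0
print(convert(2048, "Kb", "MB"))  # 2.0
```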

Using the maximum unit of information

In our time the maximum measure of the amount of information, called the yottabyte, is planned for use by the national security agency to store all audio and video materials obtained from public places where video cameras and microphones are installed. At present yottabytes are the largest units of information in computer science. Is this the limit? Hardly anyone can give an exact answer now.


Copyright © 2018 en.atomiyme.com. Theme powered by WordPress.