Who created the first computers

The word computer was first applied to people: as far back as 1613 it referred to someone who performed all calculations in their head. Later, with the advent of the 19th century, mankind began to realize that machines for computing could do this work faster and would not require rest.

It is believed that the very first computer in the world was designed by the English mathematician Charles Babbage. His machine is recognized as the first device capable of automatically performing calculations and printing the results on paper. But due to financial problems, the scientist was never able to complete the final version.

The first computers of the twentieth century


Although that first design appeared long ago, a full-fledged computer was assembled only in 1938, when the first electromechanical binary machine was presented to the world. It was invented by Konrad Zuse, a scientist from Germany.

He called this computer the Z1. Around the same time (in 1936), another scientist, Alan Turing, described a device that emulated human computation by following a definite algorithm of instructions, with all the results recorded on a paper tape. This theoretical device became known as the Turing machine.

The very first computer officially recognized


The officially registered first computer is considered to be the Mark-1 programmable machine. Initially, its main purpose was to serve the military, and after a series of successful tests the computer was put into operation in 1944.

It was created by IBM engineers and the Harvard mathematician Howard Aiken. They took Charles Babbage's designs as a basis and assembled the machine at Harvard itself.

It was presented to the world as the first, and also the most expensive, computer: its price was 500 thousand dollars. The device consisted of more than 760,000 parts, was 17 m long, and its body was made of glass and stainless steel. Given its height of 2.5 meters, a separate room was allocated for it.

The rest of the characteristics of this first (electromechanical) computer were as follows:

  1. Weight - more than 4.5 tons.
  2. Total length of internal cables - 800 km.
  3. Length of the synchronizing shaft - 15 meters.
  4. Power of the electric motor that drove the machine - 5 kW.

Some saw the first computer as little more than a large, powerful adding machine; this opinion was shared by those who believed it was the ENIAC that gave the impetus to all further computer development. Still, the Mark-1 is generally considered the ancestor of modern computers, thanks to its ability to perform the required tasks automatically.

Note!

Working from punched tape, the first automatic device required practically no human intervention.

The main advantage of the Mark-1 was its performance on the following tasks:

  1. Division - 15 sec.
  2. Addition and subtraction - 0.33 sec.
  3. Multiplication - 6 sec.
  4. Storage capacity - 72 numbers.

But the computer's characteristics soon failed to meet customers' growing requirements, so Howard Aiken proposed building computers of a more powerful and modern design. Scientists went on to release three more versions of this early computer, the last of which was created in 1952.

ENIAC


All early computers were invented for roughly the same purposes, and neither in characteristics nor in appearance did they differ radically (you can compare the Mark-1 and ENIAC in numerous photos on the Internet). The computer created in 1945, however, was already distinguished by multitasking and an increased level of capability. But since the war ended that year, the machine never saw military use.

It was decided to use the machine for other purposes, such as simulating the detonation of a hydrogen bomb. Although the device was assembled later than its predecessor, the computer was just as huge, its price was slightly lower, and its design included more than 17,000 vacuum tubes. The world-famous electronics engineers John Mauchly and his partner John Presper Eckert worked on the creation of this giant.

To protect the design from failures and increase reliability, it was decided to apply the same principle used at the time in electronic musical organs. This markedly reduced breakdowns: of the huge number of tubes, only 2-3 would fail within 7 days.

At the time it was the best computing device yet invented, with the following characteristics:

  • total cost of the machine - $487,000;
  • weight - 27 tons;
  • memory capacity - 20 ten-digit numbers;
  • multiplication speed - 357 operations per second;
  • addition speed - 5,000 operations per second.

Assembling this most advanced computer of its time took 200,000 man-hours.

Before ENIAC, no computer had used punched cards for entering and outputting data. The only significant drawbacks of the device were its huge size and weight: it exceeded the Mark-1 several times in weight and twice in size.

EDVAC


Soon, Eckert and Mauchly set about their next invention, EDVAC. The engineers worked out how to build the first machine that performed calculations not only from punched cards but also from a program stored in its memory.

These possibilities opened up with the creation and use of mercury delay-line tubes. The binary system, in turn, reduced the huge number of tubes required and simplified the calculation circuitry.

Thus, the computer era advanced one step further. The device was assembled from the following elements:

  1. Timer.
  2. Devices for storing information and performing complex calculations.
  3. A device for receiving signals and passing them on to the computing modules.
  4. A device for reading information from magnetic tape.
  5. An oscilloscope for monitoring the operation of the computer.
  6. Temporary registers - roughly what we would now call buffers.

When EDVAC appeared, its predecessors no longer looked fast. Addition, multiplication and division took it mere fractions of a second, although it still occupied a fairly large area of about 46 square meters. Compared with ENIAC, the number of tubes dropped by 14,000, and power consumption fell to about 50 kW.

Interesting fact!

A little more time passed, and the first computer game in the world appeared. It was called Spacewar, and its essence was a battle between two spaceships firing missiles at each other.

The quantum computer


By now everyone has a rough idea of when the first computer appeared, but what about a quantum device? A pleasant surprise came from a recent development by Russian and American scientists: for the first time in the world, they were able to assemble a quantum computer and test it successfully.

At the moment, a more complex quantum device has not yet been created. Such a large-scale achievement allowed Russian scientists to become leaders in the race to create a full-fledged quantum machine.

Such devices are special computing machines built from qubits (51 of them in this case) alongside standard computing modules; unlike ordinary bits, qubits can hold values ranging between 0 and 1.
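The idea that a qubit holds "values between 0 and 1" can be illustrated with a toy simulation (not from the article; the function names here are invented for illustration): a qubit's state is a pair of amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared amplitudes.

```python
# A minimal sketch of a qubit restricted to real amplitudes:
# state = cos(theta)|0> + sin(theta)|1>, a continuum between pure 0 and pure 1.
import math
import random

def make_qubit(theta):
    """Return the amplitude pair (amp0, amp1) for the angle theta."""
    return (math.cos(theta), math.sin(theta))

def measure(qubit, rng=random.Random(0)):
    """Collapse to 0 or 1; the probability of 1 is the squared amplitude of |1>."""
    _, amp1 = qubit
    return 1 if rng.random() < amp1 ** 2 else 0

q = make_qubit(math.pi / 4)          # equal superposition of |0> and |1>
samples = [measure(q) for _ in range(10000)]
print(sum(samples) / len(samples))   # close to 0.5
```

A real quantum computer works with complex amplitudes and entangled multi-qubit states; this sketch only shows why a qubit is richer than a classical bit.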

Several other teams from different countries were also preparing to build such a quantum computer. The one believed to be closest to success was Google's representative, John Martinis.

But Russian scientists, together with the Americans, managed to get ahead of everyone. They observed that sets of atoms held in laser traps at very low temperatures can be used as qubits. It was the use of this technology in a quantum device that could bring these scientists into leading positions on the world market.

It is not yet known how the cost of such a machine will turn out, but given its capabilities and complexity, a low price should not be expected.

Conclusion


The first computers to go on mass sale were devices from Apple, and they bear little resemblance to a modern computer. But it is thanks to all the developments of that era that almost anyone can now afford a computer.

Very soon, with the advent of quantum computers, humanity will be one step closer to stunning discoveries and the creation of an ideal computer.

One of the first devices (5th-4th centuries BC) from which the history of computing can be traced was a special counting board, later called the abacus. Calculations on it were carried out by moving pebbles or beads in the grooves of boards made of bronze, stone, ivory and the like. In Greece the abacus existed as early as the 5th century BC; the Japanese called it the soroban, the Chinese the suanpan. In Ancient Russia a similar device, the "counting board", was used for arithmetic; in the 17th century it took the form of the familiar Russian schoty (bead frame).

Abacus (5th-4th centuries BC)

In 1642 the French mathematician and philosopher Blaise Pascal created the first machine, named the Pascaline in honor of its creator. This mechanical device, a box with many gears, performed subtraction as well as addition. Data was entered by turning dials corresponding to the digits 0 to 9; the answer appeared at the top of the metal case.


Pascaline

In 1673, Gottfried Wilhelm Leibniz created a mechanical calculating device (the Leibniz stepped calculator) that for the first time not only added and subtracted but also multiplied, divided and extracted square roots. Subsequently, the Leibniz wheel became the prototype for mass-produced calculating devices - adding machines.


Leibniz step calculator model

The English mathematician Charles Babbage developed a device that not only performed arithmetic operations but also printed the results immediately. In 1832, a ten-fold reduced model was built from two thousand brass parts; it weighed three tons yet could perform arithmetic to an accuracy of six decimal places and calculate second-order differences. This computer became the prototype of real computers and was called the Difference Engine.

The Difference Engine

A summing apparatus with continuous transmission of tens was created by the Russian mathematician and mechanic Pafnuty Lvovich Chebyshev. It achieved automation of all arithmetic operations, and in 1881 an attachment for multiplication and division was created for it. The principle of continuous transmission of tens was later widely used in various counters and computers.


Chebyshev summing apparatus

Automated data processing appeared at the end of the 19th century in the United States, when Herman Hollerith created the Hollerith tabulator, a device in which data punched onto cards was read by electric current.

Hollerith tabulator

In 1936, a young Cambridge scientist, Alan Turing, conceived a calculating machine that existed only on paper. His "smart machine" acted according to a predetermined algorithm, and depending on the algorithm the imaginary machine could be used for a wide variety of purposes. At the time these were purely theoretical considerations, but the scheme served as a prototype of the programmable computer: a computing device that processes data according to a definite sequence of commands.
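A machine of the kind Turing described, a rule table driving a read/write head over a tape, is easy to sketch in a few lines of modern code. The rule table below is a hypothetical example (binary increment), not one of Turing's own:

```python
# A minimal Turing machine sketch: rules map (state, symbol) to
# (new_state, new_symbol, head_move), exactly the "sequence of commands" idea.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Execute the rule table until the machine halts; return the final tape."""
    tape = dict(enumerate(tape))          # sparse tape; empty cells read as ' '
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    # Render the written portion of the tape back into a string
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules for binary increment: walk right to the end, then add 1 with carry.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", " "): ("carry", " ", "L"),
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry -> 0, carry continues
    ("carry", "0"): ("halt",  "1", "L"),   # 0 + carry -> 1, done
    ("carry", " "): ("halt",  "1", "L"),   # overflow: write a new leading 1
}

print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

Swapping in a different rule table makes the same machine do a different job, which is precisely what made the idea a prototype of the programmable computer.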

Information revolutions in history

In the history of the development of civilization, there have been several information revolutions - transformations of social relations due to changes in the processing, storage and transmission of information.

The first revolution is associated with the invention of writing, which led to a gigantic qualitative and quantitative leap for civilization: knowledge could now be passed from generation to generation.

The second revolution (mid-16th century) was caused by the invention of printing, which radically changed society, culture, and the organization of activity.

The third revolution (late 19th century) came with discoveries in the field of electricity, thanks to which the telegraph, telephone and radio appeared: devices that made it possible to transmit and accumulate information quickly and in any volume.

The fourth revolution (since the 1970s) is associated with the invention of microprocessor technology and the advent of the personal computer. Computers and data transmission systems (information communications) came to be built on microprocessors and integrated circuits.

This period is characterized by three fundamental innovations:

  • the transition from mechanical and electrical to electronic means of converting information;
  • the miniaturization of all components, devices, instruments and machines;
  • the creation of software-controlled devices and processes.

History of the development of computer technology

The human need to store, convert and transmit information appeared long before the telegraph, the first telephone exchange and the electronic computer were created. In fact, all the experience and knowledge accumulated by mankind contributed, one way or another, to the emergence of computer technology. The history of the creation of computers - the general name for electronic machines that perform calculations - begins far in the past and is bound up with the development of almost all aspects of human life and activity. For as long as human civilization has existed, some form of automated calculation has been in use.

The history of computer technology spans about five decades, during which several generations of computers have succeeded one another. Each generation was distinguished by new elements (vacuum tubes, transistors, integrated circuits) whose manufacturing technology was fundamentally different. There is now a generally accepted classification of computer generations:

  • First generation (1946 - early 1950s). Element base: vacuum tubes. These computers were distinguished by large dimensions, high energy consumption, low speed, low reliability, and programming in machine codes.
  • Second generation (late 1950s - early 1960s). Element base: semiconductors. Almost all specifications improved compared with the previous generation. Algorithmic languages were used for programming.
  • Third generation (late 1960s - late 1970s). Element base: integrated circuits and multilayer printed circuit boards. A sharp decrease in computer dimensions, increased reliability and productivity, and access from remote terminals.
  • Fourth generation (mid-1970s to late 1980s). Element base: microprocessors and large-scale integrated circuits. Specifications improved further, and personal computers entered mass production. Directions of development: powerful high-performance multiprocessor computing systems and cheap microcomputers.
  • Fifth generation (since the mid-1980s). Development began on intelligent computers, so far without success. Computer networks spread into all areas and were interconnected; distributed data processing and computer information technologies came into widespread use.

Along with the change of computer generations, the nature of their use also changed. If at first they were created and used mainly to solve computational problems, their scope later expanded to include information processing, the automation of production, technological and scientific processes, and much more.

Konrad Zuse's principles of how computers work

The idea of building an automated calculating machine occurred to the German engineer Konrad Zuse, and in 1934 he formulated the basic principles on which future computers should work:

  • binary number system;
  • the use of devices operating on the principle of "yes / no" (logical 1 / 0);
  • fully automated operation of the calculator;
  • software control of the computing process;
  • support for floating point arithmetic;
  • use of large capacity memory.

Zuse was the first in the world to determine that data processing begins with the bit (he called the bit "yes/no status" and the formulas of binary algebra "conditional propositions"), the first to introduce the term "machine word" (Word), and the first to combine arithmetic and logical operations in one calculator, noting that "the elementary operation of a computer is checking two binary numbers for equality. The result is again a binary number with two values (equal, not equal)."
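Zuse's "elementary operation" - comparing two binary numbers and getting back a binary answer - can be sketched in a few lines (a toy illustration; the helper names are invented, not Zuse's):

```python
# Represent numbers as fixed-width bit lists and compare them bit by bit,
# producing a single yes/no (1/0) result, as in Zuse's elementary operation.

def to_bits(n, width=8):
    """Represent a non-negative integer as a fixed-width list of bits."""
    return [(n >> i) & 1 for i in reversed(range(width))]

def bits_equal(a_bits, b_bits):
    """Return 1 if all bit positions match, else 0 (a binary-valued result)."""
    result = 1
    for a, b in zip(a_bits, b_bits):
        # XOR is 1 exactly when two bits differ; any difference clears the flag
        if a ^ b:
            result = 0
    return result

print(bits_equal(to_bits(42), to_bits(42)))  # 1 (equal)
print(bits_equal(to_bits(42), to_bits(43)))  # 0 (not equal)
```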

First generation - computers with vacuum tubes

Colossus I, the first tube-based computer, was created by the British in 1943 to decode German military ciphers; it consisted of 1,800 vacuum tubes and was one of the first programmable electronic digital computers.

ENIAC was created to calculate artillery ballistics tables. This computer weighed 30 tons, occupied about 300 m² of floor space and consumed up to 174 kW of electricity. It contained 17,468 vacuum tubes of sixteen types, 7,200 crystal diodes and 4,100 magnetic elements, housed in cabinets with a total volume of about 100 m³. ENIAC performed 5,000 operations per second, and the total cost of the machine was $750,000.


ENIAC - a device for calculating artillery ballistics tables

Another representative of the first generation worth noting is EDVAC (Electronic Discrete Variable Computer). EDVAC is interesting in that it attempted to store programs electronically in so-called "ultrasonic delay lines" made of mercury tubes. In 126 such lines it was possible to store 1,024 lines of four-digit binary numbers; this was the "fast" memory. As "slow" memory, it was planned to record numbers and commands on magnetic wire, but this method proved unreliable, and they had to return to teletype tapes. EDVAC was faster than its predecessor, adding in 1 µs and dividing in 3 µs. It contained only 3.5 thousand vacuum tubes and occupied 13 m² of floor space.

UNIVAC (Universal Automatic Computer) was an electronic device whose programs were stored in memory and loaded not from punched cards but from magnetic tape; this provided high read and write speeds and, consequently, higher overall machine speed. One tape could hold a million characters in binary form, and tapes could store both programs and intermediate data.


Representatives of the 1st generation of computers: 1) Electronic Discrete Variable Computer; 2) Universal Automatic Computer

Second generation - computers on transistors

Transistors replaced vacuum tubes in the early 1960s. Transistors (which act like electrical switches) consume less electricity, generate less heat and take up less space. Combining several transistor circuits on one substrate gives an integrated circuit (a "chip", literally a sliver or plate). Transistors work as binary counters: they register two states, current present or absent, and thereby process information presented to them in binary form.

In 1953, William Shockley invented the p-n junction transistor. A transistor replaces a vacuum tube while operating at higher speed, emitting very little heat and consuming almost no electricity. In parallel with the replacement of tubes by transistors, information storage methods improved: magnetic cores and magnetic drums came into use as memory devices, and by the 1960s storage on disks had become widespread.

One of the first transistorized computers, the Atlas Guidance Computer, was launched in 1957 and was used to control the launch of the Atlas rocket.

The RAMAC, created in 1957, was an inexpensive computer with modular external disk storage combined with magnetic-core random-access memory and drums. Although it was not yet fully transistorized, it was highly serviceable and easy to maintain and was in great demand in the office automation market. A "large" RAMAC (the IBM 305) was therefore quickly released for corporate customers; to hold 5 MB of data the RAMAC system needed fifty 24-inch disks. The information system built on this model smoothly processed streams of requests in 10 languages.

In 1959, IBM created its first fully transistorized large computer, the 7090, capable of 229,000 operations per second: a true transistorized mainframe. In 1964, based on two 7090 mainframes, American Airlines launched SABRE, a pioneering automated ticket sale and reservation system covering 65 cities.

In 1960, DEC introduced the world's first minicomputer, the PDP-1 (Programmed Data Processor), a computer with a monitor and keyboard that became one of the most notable products on the market. It could perform 100,000 operations per second, and the machine itself occupied only 1.5 m² of floor space. The PDP-1 became, in fact, the world's first gaming platform thanks to the MIT student Steve Russell, who wrote the computer game Spacewar! for it.


Representatives of the second generation of computers: 1) RAMAC; 2) PDP-1

In 1968, Digital began the first mass production of minicomputers, the PDP-8: its price was about $10,000, and the model was the size of a refrigerator. It was this PDP-8 that laboratories, universities and small businesses could afford to buy.

Soviet computers of that time can be characterized as follows: in architectural, circuit and functional solutions they matched their era, but their capabilities were limited by an imperfect production and element base. The most popular machines were the BESM series. Serial production, though modest, began with the Ural-2 computer (1958), followed by the BESM-2, Minsk-1 and Ural-3 (all in 1959). In 1960 the M-20 and Ural-4 went into series production. At the end of 1960 the M-20 (4,500 tubes, 35 thousand semiconductor diodes, memory for 4,096 cells) had the highest performance: 20 thousand operations per second. The first computers based on semiconductor elements (Razdan-2, Minsk-2, M-220 and Dnepr) were still in development.

Third generation - small-sized computers on integrated circuits

In the 1950s and 60s, assembling electronic equipment was a labor-intensive process slowed by the increasing complexity of electronic circuits. For example, the CDC 1604 computer (1960, Control Data Corp.) contained about 100,000 diodes and 25,000 transistors.

In 1959, the Americans Jack St. Clair Kilby (Texas Instruments) and Robert N. Noyce (Fairchild Semiconductor) independently invented the integrated circuit (IC): a collection of transistors and other components placed on a single silicon chip inside one package.

Producing computers on ICs (which later came to be called microcircuits) was much cheaper than producing them on transistors. Thanks to this, many organizations were able to acquire and master such machines, which in turn increased demand for general-purpose computers designed to solve a variety of problems. In these years, computer production reached an industrial scale.

At the same time, semiconductor memory appeared, which is still used in personal computers to this day.


Representative of the third generation of computers - ES-1022

Fourth generation - personal computers on microprocessors

The forerunners of the IBM PC were the Apple II, Radio Shack TRS-80, Atari 400 and 800, Commodore 64 and Commodore PET.

The birth of the personal computer (PC) is rightfully associated with Intel processors. The corporation was founded in 1968 and has since become the world's largest manufacturer of microprocessors, with over 64,000 employees. Intel's original goal was to create semiconductor memory, and to survive the company also took third-party orders for the development of semiconductor devices.

In 1971, Intel received an order to develop a set of 12 chips for programmable calculators, but creating 12 specialized chips seemed cumbersome and inefficient to Intel's engineers. They solved the problem of reducing the chip count by pairing semiconductor memory with an execution unit capable of working on commands stored in that memory. This was a breakthrough in computing philosophy: a universal logic device in the form of the 4-bit central processing unit i4004, later called the first microprocessor. It was a set of 4 chips, including one chip controlled by commands stored in internal semiconductor memory.

As a commercial product, the microcomputer (as the chip was then called) appeared on the market on November 11, 1971 under the name 4004: 4-bit, containing 2,300 transistors, with a clock frequency of 60 kHz and a price of $200. In 1972 Intel released the eight-bit 8008 microprocessor, and in 1974 its improved version, the Intel 8080, which by the end of the 70s had become the standard for the microcomputer industry. As early as 1973 the first computer built on an Intel microprocessor, the Micral, appeared in France. For various reasons this processor was not successful in America (in the Soviet Union it was copied and produced for a long time under the name 580VM80).

At around the same time, a group of engineers left Intel and formed Zilog. Its most famous product, the Z80, extended the 8080 instruction set and needed only a single 5 V supply voltage, which made it a commercial success in consumer devices. The ZX Spectrum computer (sometimes called the Sinclair, after its creator), which practically became the prototype of the home PC of the mid-80s, was built on it. In 1978 Intel released the 16-bit 8086 processor, followed by the 8088, an analogue of the 8086 except for its external 8-bit data bus (all peripherals were still 8-bit at that time).

Unlike many rivals, Apple's Apple II computer was not a completely finished device: the user had some freedom to extend it, installing additional interface boards, memory boards and so on. This feature, which later became known as "open architecture", became its main advantage. Two more innovations, developed in 1978, contributed to the Apple II's success: an inexpensive floppy disk drive, and the first commercial spreadsheet program, VisiCalc.

The Altair-8800, built around the Intel 8080 processor, was very popular in the 70s. Although its capabilities were rather limited (only 4 KB of RAM, no keyboard and no screen), its appearance was met with great enthusiasm. It went on sale in 1975, and several thousand kits were sold in the first months.


Representatives of the 4th generation of computers: a) Micral; b) Apple II

This computer, designed by MITS, was sold by mail order as a DIY kit. The complete kit cost $397, while the Intel processor alone sold for $360.

The spread of PCs by the end of the 70s led to a slight decline in demand for mainframes and minicomputers, prompting IBM to develop its own PC based on the 8088 processor. The software of the early 80s was focused on word processing and simple spreadsheets, and the very idea that a "microcomputer" could become a familiar and necessary device at work and at home still seemed incredible.

On August 12, 1981, IBM introduced the Personal Computer (PC), which, in combination with software from Microsoft, became the standard for the entire PC fleet of the modern world. An IBM PC with a monochrome display cost about $3,000; with a color display, $6,000. The configuration: an Intel 8088 processor at 4.77 MHz with 29 thousand transistors, 64 KB of RAM, one 160 KB floppy drive and a simple built-in speaker. Launching and working with applications was a real pain at the time: with no hard drive, you had to swap floppy disks constantly; there was no mouse, no graphical windowed user interface, and no exact correspondence between the image on the screen and the final result (WYSIWYG). Color graphics were extremely primitive, and three-dimensional animation or photo processing was out of the question. But the history of the personal computer began with this model.

In 1984, IBM introduced two more novelties. First came the PCjr, a model for home users based on the 8088 processor and equipped with virtually the first wireless keyboard; this model did not succeed in the market.

The second novelty was the IBM PC AT. Its most important feature was the move to higher-end microprocessors (the 80286, with the 80287 math coprocessor) while maintaining compatibility with previous models. This computer set the trend for many years to come: it was the first to introduce the 16-bit expansion bus (which long remained the standard) and EGA graphics adapters with a resolution of 640x350 and 16 colors.

1984 also saw the release of the first Macintosh computers, with a graphical interface, a mouse and many other user-interface features that modern desktop computers cannot do without. The new interface left users far from indifferent, but the revolutionary computer was compatible with neither existing programs nor existing hardware components. In the corporations of the time, WordPerfect and Lotus 1-2-3 had already become normal working tools, and users had grown accustomed to the character-based DOS interface; from their point of view the Macintosh even looked somewhat frivolous.

Fifth generation of computers (from 1985 to the present)

Distinctive features of the 5th generation:

  1. New production technologies.
  2. Rejection of traditional programming languages such as Cobol and Fortran in favor of languages with enhanced symbol manipulation and elements of logic programming (Prolog and Lisp).
  3. Emphasis on new architectures (for example, dataflow architecture).
  4. New user-friendly input/output methods (e.g. speech and image recognition, speech synthesis, natural-language message processing).
  5. Artificial intelligence (that is, automation of problem solving, inference and knowledge manipulation).

It was at the turn of the 80s and 90s that the Windows-Intel alliance took shape. When Intel released the 486 microprocessor in early 1989, computer manufacturers did not wait for an example from IBM or Compaq; a race began that dozens of firms entered. But all the new computers were extremely similar to one another, united by compatibility with Windows and Intel processors.

In 1989, the i486 processor was released, with a built-in math coprocessor, a pipeline and a built-in first-level cache.

Directions for the development of computers

Neurocomputers can be attributed to the sixth generation of computers. Although the practical use of neural networks began relatively recently, neurocomputing as a scientific field is in its seventh decade: the first neurocomputer was built in 1958. Its developer was Frank Rosenblatt, who gave his brainchild the name Mark I.

The theory of neural networks was first laid out in the 1943 work of McCulloch and Pitts, who showed that any arithmetic or logical function can be implemented by a simple neural network. Interest in neurocomputing flared up again in the early 1980s, fueled by new work on multilayer perceptrons and parallel computing.

Neurocomputers are machines consisting of many simple computing elements, called neurons, working in parallel. The neurons form so-called neural networks, and it is precisely their huge number that gives neurocomputers their high speed. Neurocomputers follow a biological principle: the nervous system of the human brain consists of individual cells, neurons, whose number in the brain reaches about 10^12, even though the response time of a single neuron is about 3 ms. Each neuron performs fairly simple functions, but since each is connected on average to 1,000-10,000 other neurons, this collective successfully keeps the human brain working.
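The "simple computing element" described above can be illustrated in a few lines of code. The sketch below is a minimal McCulloch-Pitts-style neuron with hand-picked weights (not a trained Rosenblatt perceptron): it fires when the weighted sum of its inputs reaches a threshold, here wired to compute logical AND.

```python
# A minimal artificial neuron: it sums weighted inputs and "fires"
# (outputs 1) when the sum reaches a threshold. The weights and the
# threshold below are chosen by hand to realize logical AND; a real
# perceptron would learn them from examples.

def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Fires only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], 2.0))
```

Networks of such elements, connected to one another in layers, are exactly the "neural networks" the paragraph describes.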

A representative of the sixth generation of computers: the Mark I

In optoelectronic computers, the information carrier is a light flux. Electrical signals are converted into optical signals and back. Optical radiation as an information carrier has a number of potential advantages over electrical signals:

  • Light beams, unlike electric currents, can cross one another;
  • Light beams can be confined transversely to nanometer dimensions and transmitted through free space;
  • The interaction of light beams with nonlinear media is distributed throughout the medium, which gives new degrees of freedom in organizing communication and in building parallel architectures.

Developments are currently under way to create computers consisting entirely of optical information-processing devices. Today this is the most intriguing direction.

An optical computer promises unprecedented performance and a completely different architecture from an electronic one: in a single clock cycle of less than a nanosecond (corresponding to a clock frequency above 1 GHz), an optical computer could process a data array of about a megabyte or more. To date, individual components of optical computers have already been created and optimized.

An optical computer the size of a laptop could give its user the ability to store in it almost all the information about the world, while solving problems of almost any complexity.

Biological computers are ordinary PCs in concept, but based on DNA computing. There are so few truly demonstrative works in this area that it is too early to speak of significant results.

Molecular computers are machines whose operating principle is based on changes in the properties of molecules during photosynthesis. In the course of photosynthesis a molecule assumes different states, so scientists need only assign a logical value, "0" or "1", to each state. Using certain molecules, scientists have found a photocycle consisting of just two states, which can be "switched" by changing the acid-base balance of the environment, and the latter is very easy to do with an electrical signal. Modern technologies already make it possible to create entire chains of molecules organized in this way. Thus it is quite possible that molecular computers await us "just around the corner".

The history of computer development is not over yet: alongside the improvement of old technologies, entirely new ones are being developed. An example is quantum computers, devices that operate on the basis of quantum mechanics. A full-scale quantum computer is still a hypothetical device; the possibility of building one depends on serious advances in many-particle quantum theory and on complex experiments, work that lies at the forefront of modern physics. Experimental quantum computers already exist, and elements of quantum computing can be used to increase the efficiency of calculations on the existing instrument base.

Eight years before Apple: a Soviet engineer creates a personal computer and receives a patent for it!

Did you know that the world's first personal computer was created not by Steve Jobs and Steve Wozniak in a Palo Alto garage, but by an ordinary Soviet designer, Arseniy Anatolyevich Gorokhov, at the Omsk Research Institute of Aviation Technologies?

Let's rewind time.

The 1950s. Computers are huge, bulky, expensive. The Soviet "Whirlwind" of 1951, the first machine with data output to a screen, has only 512 bytes of memory and occupies a two-story house. Its American peer, UNIVAC, has a magnetic metal tape drive and a high-speed printer, but weighs 13 tons and costs about $1.5 million. The Bendix G-15, released in 1956, is called a minicomputer, yet it still weighs 450 kg and costs at least $50,000. Not a single machine can be called personal.

The 1960s. Computers are getting faster, more powerful, smaller. The first commercial computer equipped with a keyboard and monitor, the PDP-1, is released in the USA. The new apparatus is the size of three refrigerators, at a price roughly ten times lower than that of an ordinary large computer. A big step forward, but not enough for widespread adoption: in total, only about 50 units were sold.

The first claimant to the title of "home" computer is the Honeywell Kitchen Computer, introduced in the USA in 1969. It weighed about 65 kg, cost $10,600, and was a pedestal with a built-in cutting board and a panel of lights and buttons. It performed only one function: storing recipes. Working with the "kitchen computer" required a two-week course, because the recipes were displayed on the screen in binary code. No one willing to buy such an expensive "cookbook" was ever found.

The 1970s. With the creation of the first microprocessor, the era of personal computers begins. Inventors compete to build their own models. The American entrepreneur Edward Roberts is the first to grasp the potential of the 8-bit Intel 8080 microprocessor, released in 1974, and creates a microcomputer based on it: the Altair 8800. Thanks to a deal with Intel for the wholesale purchase of microprocessors ($75 apiece against a retail price of $360), Roberts sets a record low price for his invention: only 397 "bucks"! An advertisement on the cover of the respected magazine Popular Electronics in 1975 does its job: in the first month, the developers sell several thousand Altair 8800s. The arriving order, however, comes as a surprise to buyers: the kit consists of a set of parts and a box for the case. Users have to solder, test, and write machine-language programs themselves. (Which, of course, is not all bad: it is on the Altair 8800 that Microsoft founders Bill Gates and Paul Allen test their famous BASIC interpreter.)

Be that as it may, Roberts' computer is a godsend for tinkerers, while "mere mortals" are still left without the technology. To the rescue, in 1976, come Steve Wozniak and Steve Jobs, who decide to sell their Apple I, assembled for personal use in a Palo Alto, California garage. The new computer costs $666.66. Its main advantage: unlike the Altair 8800 and many other machines of the time, the Apple I is sold already assembled. All you need to get to work is a case, a keyboard, and a monitor. These, too, would be included two years later, with the serial production of the color, sound-equipped Apple II. Such is the history of the personal computer.

Stop, stop, stop... But what about the Soviet scientist and the Aviation Technology Research Institute?!

Oh yes! I completely forgot. The history of the personal computer also has a dark page.

Here is how it was. In the distant year 1968, eight years before the first "apple", the Soviet electrical engineer Arseny Anatolyevich Gorokhov invented a machine called the "Device for setting the program for reproducing the contour of a part". So, at any rate, it is named in the patent, copyright certificate No. 383005, dated May 18, 1968. The name is no accident: the apparatus was intended, first of all, for creating complex engineering drawings. The inventor himself prefers to call the device a "programmable intellectual device".

According to the drawings, the "intellector" had a monitor, a separate system unit with a hard drive, a device for solving autonomous tasks and for personal communication with the computer, a motherboard, memory, a video card, and more; everything, in fact, except a computer mouse.

Forty-five years ago, Omsk electrical engineer Arseny Gorokhov invented a device that today would be called a personal computer

According to the Omsk Time website, it is, alas, impossible to see the world's first personal computer today: the institution where it was created, the closed "mailbox" facility of the Omsk Research Institute of Aviation Technologies, shut down several years ago. The author of the invention still has the patent describing the "programmable intellectual device" and an entry in the Russian book of records DIVO: 45 years ago, in 1968, the Omsk electrical engineer Arseniy Gorokhov invented a device that is now called the personal computer.

Today Gorokhov uses his own personal computer mainly as a typewriter. By his account, it was new five years ago, and an "upgrade", that is, a modernization, is expensive; his pension would not cover it.

The components of a modern computer - a monitor, a system unit, a keyboard - were all present in Gorokhov's "intellector", though under other names. The device was intended, first of all, for creating complex engineering drawings. Gorokhov also developed his own "software": a way of conversing with the machine without thick stacks of punched cards and a team of programmers. But things went no further than the All-Union patent; the invention was never given the "green light", and in 1975 the world learned that the term "personal computer" had been given to it by the American company Apple.

Forty author's certificates and patents over three decades brought Arseny Gorokhov only moral satisfaction. The material traces remain in the patent paperwork: 20 rubles for each invention that never went into series production. Had a novelty managed to break into "series", the author would have received a thousand times more. But the inventor did not always manage to crack the mysterious "law of luck". Gorokhov now reckons his probable profits the other way around: not "how much they earned", but "how much they failed to earn".

"It is not oil that is Russia's future, but inventors" is the leitmotif of another article by Gorokhov, "A system for the accelerated development of inventions", published in the last, 12th, 2003 issue of the journal Intellectual Property. It is a pity, he notes, that Russia has no practice like that of the USA, where the President meets twice a year with the head of the Patent Office. Increasingly, instead of a sense of pride, one has to resort to irony, the author says. Prospects are slipping away.

Now the inventor has on his desk a new form of the periodic table and a draft design for spatial television. But apart from the occasional visiting journalist, no one has shown, or shows, any interest in these ideas.


Today, using personal computers from Apple, Samsung, HP, Dell, and other manufacturers seems entirely natural to us. Yet less than a century ago the average person had no idea about computing technology, and every development that is now taken for granted on every device was once a real breakthrough in the industry.

In this article, we will talk about what the very first computers in the world were like, who developed them and why, what their capabilities were, and how much they contributed to the development of technology.

Building the very first computers

The very first computers in the world occupied dozens of square meters, and their weight was measured in tons. Yet it was they that allowed humanity to arrive at the compact and convenient devices we use today. Unfortunately, there is no definitive answer to the question of which computer was truly the very first. There are, however, several candidate answers, which we consider below.

Computer "Mark 1"

The Mark 1, also known as the ASCC (Automatic Sequence Controlled Calculator), was designed and built in 1941. The US Navy acted as the customer, and IBM as the general contractor. Five engineers were directly involved in developing the device, led by Harvard mathematician Howard Aiken. As the basis for the project, the developers took the Analytical Engine conceived by the famous British inventor Charles Babbage.

At its core, the Mark 1 was an advanced adding machine that could be programmed and did not require human intervention during the calculations themselves. The developers did not take advantage of the binary number system used by most modern computers, and instead made the machine operate on decimal numbers.

Information was entered into the device using punched tape. The Mark 1 could not perform conditional jumps, so the code of each program was very long and cumbersome. There was also no software means of creating loops: to loop the code, the punched tape literally had to be closed into a ring by joining its beginning and end.

Physically, ASCC looked like this:

  • length about 17 m;
  • height over 2.5 m;
  • weight about 4.5 tons;
  • 765,000 parts;
  • 800 km of connecting wires;
  • 15-meter shaft that provides synchronization of the main computing elements;
  • 4 kW electric motor.

At the insistence of IBM CEO Thomas Watson, the computer was placed in a stainless steel and glass case, while Howard Aiken insisted on a transparent case to leave the "innards" of the computer visible.

The Mark 1 could work with numbers up to 23 digits long. Addition and subtraction took only 0.3 seconds, multiplication 6 seconds, division 15.3 seconds, and trigonometric functions and logarithms more than a minute. For the time this was astonishing speed: calculations that would previously have taken six months could now be done in a day. Accordingly, in the final stage of the Second World War the device was used quite successfully by the American Navy, after which it worked for about 15 more years at Harvard University.

The debate about who created the very first computer in the world, and when, has not subsided to this day. As one might guess, in the USA the Mark 1 is considered the first "ancestor" of modern PCs. In reality, however, it began working about two years after the German engineer Konrad Zuse had developed his Z3 computer, presented to the public in that same year, 1941. Moreover, Zuse used fundamentally more advanced technologies (at the very least, the binary number system), while the Mark 1, according to some researchers, was obsolete even before it was built.

Or was it Konrad Zuse's Z3?

Konrad Zuse is one of the most important figures in the history of computer engineering, although he worked for the benefit of the Third Reich. Zuse, for his part, cited the Anglo-American bombing of Dresden and other German cities, where a predominantly civilian population remained, as the main motivation for his work. Konrad began working on his computers back in the 1930s, while studying at the Berlin Polytechnic.

His work was based on several revolutionary ideas at the time:

  • Memory must be divided: one part allocated to control data, the other to computed data.
  • Numbers must be represented in the binary system.
  • The machine must be able to work with floating-point numbers (whereas the Mark 1 worked only with fixed-point numbers). Notably, the scheme Zuse devised for this, which he called "semilogarithmic notation", is similar to the floating-point representation used in modern computers.
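Zuse's "semilogarithmic notation" anticipated the sign/mantissa/exponent split of modern floating point. As a rough modern illustration (this is today's binary floating point, not Zuse's actual 22-bit word format), a number can be decomposed and reassembled like this:

```python
import math

# Split a number into sign, mantissa and binary exponent, so that
# x = sign * mantissa * 2**exponent with 0.5 <= mantissa < 1.
# This mirrors the idea behind Zuse's "semilogarithmic notation",
# though the Z3 used its own word layout, not this exact scheme.

def decompose(x):
    sign = -1 if x < 0 else 1
    mantissa, exponent = math.frexp(abs(x))
    return sign, mantissa, exponent

sign, mantissa, exponent = decompose(-6.5)
print(sign, mantissa, exponent)   # -1 0.8125 3, since 6.5 = 0.8125 * 2**3
assert sign * mantissa * 2 ** exponent == -6.5
```

Storing the exponent separately is what lets such a machine handle very large and very small numbers with the same fixed-width hardware.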

Data was entered into the Z3 using punched tape. The instructions the machine could execute fell into three groups: arithmetic, memory, and input/output. Instructions could be placed anywhere on the tape, and two special commands, Ld and Lu, were provided for displaying information and for reading from the keyboard, respectively.

Both of these instructions halted the machine so that the operator could write down a result or enter a required number. The computer did not support conditional jumps, and loops, as with the Mark 1, had to be implemented by joining the beginning and end of the punched tape.

The main characteristics of the machine are as follows:

  • the addition operation was performed in 0.7 seconds;
  • multiplication and division operations lasted 3 seconds;
  • the device consisted of 2600 telephone relays;
  • the clock frequency of the Z3 was approximately 5.33 Hz;
  • the device consumed 4 kW of energy;
  • its size was about two times smaller than the dimensions of the "Mark 1";
  • its weight was 1 ton.

The machine existed until 1944 and helped the Third Reich perform complex calculations for its aviation. In 1944 the computer burned down, along with the project documentation, in one of the regular air raids. However, Konrad Zuse soon created the Z4, and the Z3 was reconstructed in 1960 by his company, Zuse KG. But that is a different story.

Unbiased critics agree that the status of the world's first freely programmable, working computer rightfully belongs to the Z3, and that attempts to refute this are pseudo-patriotic speculation by representatives of individual countries. These discussions are unlikely ever to end, but one thing can be said definitely: if the Mark 1 was obsolete even before its release, the Z3 embodied many of the technologies and principles applied in the computers of the future.

The first electronic computer in the USSR and continental Europe

The first computer on the territory of the USSR and continental Europe is considered to be a development called MESM, which stands for "Small Electronic Computing Machine". The device was created in Ukraine, in the computer technology laboratory of the Kiev Institute of Electrical Engineering, under the leadership of academician Sergei Lebedev.

Sergei Alekseevich, like Zuse, began thinking about creating a computer back in the 1930s. He was able to take up the work in earnest only after the war, however, and even then not in the best conditions: the Institute of Electrical Engineering was allotted the premises of a dilapidated monastery hotel in Feofaniya, about 10 km from Kiev.

Nevertheless, the engineers managed to more or less repair the building and, in just three years, to build and commission MESM. Only 12 engineers worked on the project, assisted as needed by 15 fitters and technicians. The machine had the following characteristics:

  • it occupied a room of about 60 square meters;
  • it could perform 3,000 operations per minute, an incredible figure at the time;
  • it ran on 6,000 vacuum tubes, which consumed 25 kW;
  • it could perform addition, subtraction, multiplication, division, and shifts, as well as comparison by absolute value, sign comparison, transfer of numbers from the magnetic drum, transfer of control, and command addition.

As you might guess, 6,000 tubes created an almost tropical climate in the room. Nevertheless, until 1957 MESM was successfully used in a great deal of scientific research: in the fields of space flight, thermonuclear processes, mechanics, long-distance power transmission lines, and so on.

Other early systems

The Mark 1 and Z3 are not the only contenders for the title of the world's very first computer. Given that in the middle of the twentieth century computing technology began to develop exponentially, with machines acquiring more and more features of modern computers, many researchers give first place in this kind of "rating" to the systems discussed below.

The ENIAC calculator

Development of the ENIAC electronic digital computer began in 1943 and was completed in 1945. Scientists from the University of Pennsylvania, John Eckert and John Mauchly, worked on its creation. ENIAC was commissioned by the US Army, which needed a device for the accurate calculation of firing tables. But because the computer was finished only toward the end of the war, its purpose had to change: from 1947 to 1955 it was used by the US Army's Ballistic Research Laboratory, which performed various calculations on ENIAC in the development of thermonuclear weapons. Notably, the first programmers of this computer were six women.

The first commercial UNIVAC machines

The first computer of the UNIVAC series (UNIVersal Automatic Computer I) is conventionally considered the first commercial computer in the United States, and the third in the world. It was developed by the same John Eckert and John Mauchly, commissioned by the US Air Force and the US Army in cooperation with the Census Bureau, between 1947 and 1951. The first machine of the series was officially delivered to the Bureau; several dozen more units went to private corporations, government agencies, and three American universities. UNIVAC I used binary-coded decimal (BCD) arithmetic and 5,200 vacuum tubes consuming 125 kW of electricity, and it weighed 13 tons. It could perform 1,905 operations per second and required a room of 35.5 square meters.
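The BCD arithmetic mentioned above stores each decimal digit in its own four-bit group instead of converting the whole number to pure binary. Here is a simplified sketch of the encoding (plain 8421 BCD for illustration; UNIVAC I actually used a related excess-3 character code):

```python
# Plain binary-coded decimal: every decimal digit becomes its own
# 4-bit group. This sketch shows only the basic digit-by-digit idea,
# not UNIVAC I's exact excess-3 code.

def to_bcd(n):
    """Encode a non-negative integer as a list of 4-bit digit groups."""
    return [format(int(d), "04b") for d in str(n)]

def from_bcd(groups):
    """Decode a list of 4-bit groups back into an integer."""
    return int("".join(str(int(g, 2)) for g in groups))

print(to_bcd(1905))   # ['0001', '1001', '0000', '0101']
assert from_bcd(to_bcd(1905)) == 1905
```

The appeal of BCD for early commercial machines was that decimal quantities (money, census counts) could be stored and printed without binary-to-decimal conversion errors.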

Apple's first computer

The first computer from the eminent "apple" brand was called the Apple I and was released in 1976. Its key innovation was the ability to enter information from a keyboard and see it instantly displayed on the screen. The presentation of the device showcased the oratorical and entrepreneurial talent of Steve Jobs, while his shy friend Steve Wozniak had done the actual development of the Apple I. The computer was assembled entirely on a single circuit board of about thirty chips, which is why it is sometimes called the world's very first full-fledged PC.

The price of the very first computer

The cost of developing the first computers was far higher than current prices for mid-range machines. About $500,000 was invested in creating the Mark 1. The Z3 cost the Third Reich 50,000 Reichsmarks, about $20,000 at the exchange rate of the time. For ENIAC, the developers requested $61,700. And to fulfill the first order for the Apple I, placed by Paul Terrell, Jobs and Wozniak needed $15,000, while the first models of the "apple" computer sold at $666.66 apiece.


All the information provided above was taken from open sources, mainly from the free encyclopedia Wikipedia.

Portable computing devices, when they first appeared, were met with great skepticism. The most famous of the early machines was created by American developers after the Second World War and presented on February 14, 1946. It was extremely massive, consisted of a great many parts, and in its software and technical capabilities it did not go far beyond a calculator.

Building the very first ENIAC computer

The creators of ENIAC worked long and hard, and their research was multifaceted. But attempts to build a computer had been made even before them: prototypes similar to the multi-ton ENIAC were tested earlier, but because of technical flaws they were never completed.

Scientists around the world were preoccupied with building the very first computer, and development was completed in 1946. On February 14 of that year, the ENIAC computer was presented to the public in the USA. In size it resembled a small house: it weighed about 30 tons, and its 18,000 vacuum tubes could have lit up a small town.

A little about the first computer

For all its huge dimensions, its computing power was 5,000 operations per second. ENIAC worked for a little over 9 years and was then scrapped. The giant was created by a group of five engineers. As with Internet technology later, the creation of the very first computer was ordered by the military; after development and preliminary testing, the finished product was handed over to the US armed forces.

The computer stretched seventeen meters in length and consisted of some 765,000 parts; its development cost about half a million dollars. The machine stood about 2.5 meters tall and was housed at Harvard, with the formal date of creation falling in 1944, when it was first tested. (Strictly speaking, these figures describe not ENIAC but its contemporary, the Harvard Mark 1.)

Parameters of the American-style device

As noted earlier, the computer of the 1946 model did not reach the level of today's portable machines. Its main parameters and characteristics were as follows:

  1. The computer weighed over 4.5 tons.
  2. The total length of the wiring in its body was 800 kilometers.
  3. The shaft synchronizing the computing modules was 15 meters long.
  4. The simplest mathematical operations (addition and subtraction) took the computer 0.33 seconds.
  5. Division took 15.3 seconds; multiplication was a little faster, at just 6 seconds.

Enormous resources were spent on creating the very first computer; the year of this event was 1946.

The very first attempts to create primitive electronic computing devices

In 1912, A. Krylov, a scientist of the Russian Empire, developed the first machine for integrating differential equations. Fifteen years later, in 1927, developers in America tested the first

Computers were being developed even in Nazi Germany. A year before the outbreak of World War II, in 1938, the German scientist Konrad Zuse created a digital, programmable computer design, which was named the Z1. By 1941 the "Z one" had gone through a series of upgrades culminating in the Z3, a model far more like a modern computer.

Refinement of the ABC prototype

In 1942, the American developer John Atanasoff led the creation of the ABC computer. But he was drafted into the army, and the work stopped for a time. His model was then studied by another group of developers led by John Mauchly, who as a result went on to conduct his own work on the ENIAC computer.

The ABC was among the first machines to use the binary number system, which our PCs still use to this day. The original purpose of such computers was to help the military solve specific problems: they automated firing and bombing calculations for the artillery and the air force.

Creation of the first computer in the USSR

The Soviet Union did not lag behind world trends. In the laboratory of S. A. Lebedev, the first computer in all of Eurasia was developed. The first success of Soviet electronic computing was followed by others, less loud, but extremely useful for science.

Soviet scientists developed and tested the Small Electronic Calculating Machine, abbreviated MESM. It was a working prototype of a larger computing machine.