Analogue computers, which process data as continuous signals, were widely used through the 1970s to interpret analogue input and execute complicated calculations. Engineers fed them values such as temperature, pressure, voltage and speed, and their components, including comparators, multipliers and function generators, processed this data.
Analogue computers operate on continuous signals rather than discrete logic, performing operations directly on real numbers, which makes them better suited than digital systems to complex, continuous computations. They are also more tolerant of small errors than their digital equivalents.
So why was the analogue approach ultimately discarded?
To make the computers execute different tasks, engineers had to alter the circuit physically, repositioning patch cables, operational amplifiers and multipliers. In addition, the voltage and pulse-frequency inputs were vulnerable to error and susceptible to environmental interference. The inputs were also inherently noisy, compounding the system's already substantial inaccuracies.
The rise of digital computers, with their superior precision and efficiency, was a major factor in the downfall of analogue systems, so the ascendancy of digital systems over analogue ones is unsurprising.
Why, then, do they keep resurfacing?
Although digital computers with mouse-and-keyboard input are ubiquitous, analogue systems are undergoing a resurgence. This is because digital computers built on the von Neumann architecture can suffer memory bottlenecks, caused by the vast quantities of data that today's computers produce and handle.
A bottleneck can occur at the memory-processor interface because incoming data must be converted to binary before the processor can handle it.
Analogue computers process information directly in memory, without converting it to a machine representation such as binary. They accomplish this by using resistors to perform calculations, digitising the output only after processing.
As a result, the procedure is significantly faster, and the results are better because fewer analogue-to-digital conversions (ADCs) are required. Moreover, analogue configurations are not forced to perform every calculation in a single cycle; they can carry out a sequence of intermediate computations that are ultimately combined into the final result, which considerably improves the system's efficiency.
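As a rough illustration of the idea, here is a minimal NumPy sketch (all names and values are hypothetical) of a resistive crossbar performing a multiply-accumulate in one parallel analogue step, with digitisation applied only once, to the final result:

```python
import numpy as np

def crossbar_mvm(conductances, voltages, adc_levels=256):
    """Analogue multiply-accumulate: each output is the sum of
    G[i][j] * V[i] (Ohm's law plus Kirchhoff's current law), produced
    by the physics of the array rather than by sequential digital logic."""
    currents = voltages @ conductances          # one parallel analogue step
    # Digitise only the final result: a single ADC pass over the outputs.
    lo, hi = currents.min(), currents.max()
    step = (hi - lo) / (adc_levels - 1) or 1.0
    return np.round((currents - lo) / step) * step + lo

G = np.array([[0.1, 0.2],
              [0.3, 0.4]])        # programmed conductances (the "weights")
V = np.array([1.0, 0.5])          # input voltages
print(crossbar_mvm(G, V))         # ≈ [0.25, 0.40]
```

The matrix product here stands in for what the physical array computes for free; only the final `np.round` models the single ADC pass mentioned above.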
Analogue devices have a considerably long Mean Time Between Failures (MTBF). For instance, a device that operates for 300,000 hours and experiences ten failures over that period has an MTBF of 30,000 hours. In certain situations analogue systems can also be faster than their digital counterparts, making them suitable for analysing extensive waveform collections, such as those produced by nuclear pulses and supercollider events, which can overwhelm digital computers.
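The MTBF figures quoted above are consistent if we assume ten failures were observed over the 300,000 operating hours (an assumption, since the failure count is not stated): MTBF is simply total uptime divided by the number of failures. A one-line sketch:

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures: total uptime divided by failures observed."""
    return total_operating_hours / failure_count

print(mtbf(300_000, 10))  # 30000.0 hours, matching the figure in the text
```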
Engineers now program analogue systems through integrated-circuit crossbars. Unlike earlier systems, which could accomplish only a single task, present-day analogue systems can perform multiple operations by rearranging the crossbars, and they can carry out differential and other high-level scientific computations without manual patching.
Applications of Analogue Computing in the Corporate Sphere
Numerous companies are now utilising neural networks and deep learning algorithms to obtain valuable insights from their data. As a first step, businesses made the transition from CPUs to GPUs for modelling and standardising data.
Training models on GPUs can be a time-consuming process. The Tensor Processing Unit (TPU) is the latest hardware innovation for processing neural networks. This integrated circuit has been crafted specifically for the purpose of training neural networks.
Despite the incorporation of TPUs, modelling continues to be excessively sluggish. Consequently, companies are investigating the feasibility of utilising analogue computers for neural network modelling, which has the ability to outpace TPUs in terms of speed and efficiency on specific workloads. They are particularly valuable in handling large datasets, such as those in image recognition and speech processing. However, programming analogue systems is difficult, and they are more susceptible to noise errors.
Neural networks need only a modest amount of accuracy to operate, which makes them well suited to analogue systems and their inherent imprecision.
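To see why modest precision can suffice, consider a toy, hand-built classifier layer (purely illustrative, not any real model) whose prediction survives roughly 5% multiplicative noise on its weights, the kind of imprecision an analogue array might introduce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" layer: 4 inputs -> 3 classes, weights chosen by hand.
W = np.array([[ 0.9, -0.2,  0.1],
              [-0.3,  0.8, -0.1],
              [ 0.2, -0.4,  0.7],
              [ 0.1,  0.3, -0.5]])
x = np.array([1.0, 0.2, -0.5, 0.3])

exact = np.argmax(x @ W)
# Emulate analogue imprecision: ~5% multiplicative noise on every weight.
noisy = np.argmax(x @ (W * (1 + 0.05 * rng.standard_normal(W.shape))))
print(exact, noisy)  # the predicted class is the same despite the noise
```

The class scores here are separated by a wide margin, so a few percent of weight noise leaves the argmax unchanged; that slack is exactly what analogue hardware exploits.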
The majority of contemporary sensors are analogue in nature and call for analogue systems to process their output. Memristors are frequently employed because they retain a programmed resistance value. An analogue-to-digital converter (ADC) is used only for the final output. Firms can employ these technologies to build robots and machines that perform basic tasks independently.
Sophisticated computations necessitate the use of supercomputers; however, they have a significant energy requirement. For instance, Tianhe-1A, a supercomputer with 2.5 petaflops of capacity, can consume up to 4.04 megawatts of electricity. In contrast, an average American household uses 1,800 kilowatt-hours of energy per year, highlighting the massive amount of power needed by a supercomputer.
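Using only the figures quoted above, taken at face value, the annual energy gap works out as follows:

```python
# Back-of-envelope comparison using the article's figures (assumed accurate):
# Tianhe-1A draws 4.04 MW; one household uses 1,800 kWh per year.
power_mw = 4.04
hours_per_year = 24 * 365
supercomputer_kwh_per_year = power_mw * 1000 * hours_per_year  # MW -> kW
households_equivalent = supercomputer_kwh_per_year / 1800
print(round(supercomputer_kwh_per_year))   # 35390400 kWh per year
print(round(households_equivalent))        # about 19661 households
```

In other words, at the quoted figures one year of continuous operation matches the annual consumption of nearly twenty thousand such households.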
Therefore, even though supercomputers can perform intricate computations promptly, the trade-off between time and energy is too costly for them to be practical everywhere.
The design of analogue computers is influenced by the architecture of the human brain: they operate on data within analogue chips instead of reading it from memory. Moreover, analogue systems can employ a 1-bit analogue multiplier instead of a 32-bit digital multiplier, which improves efficiency and reduces energy consumption.
Switching functions still requires manual adjustments to the circuit, but the system scales conveniently in both performance and power (given that its parallel computing components can also be scaled). Analogue computing is advantageous for any problem that benefits from parallel processing, and analogue technologies can be used to address the power demands of supercomputers.
Lately, companies have been adopting hybrid systems that combine digital computers with analogue relays. Such systems can process both continuous and discrete data. Furthermore, the electric analogues are produced through physical interconnections rather than conventional programming elements such as loops and algorithms.
These approaches are highly effective and efficient, and they deliver quick response times. Numerous hybrid systems, including fuel processors and heart monitors, are already in use; however, their full potential remains largely untapped. There is a noteworthy reason for this.
The reintroduction of analogue computing presents a significant architectural and economic obstacle. Most businesses are not prepared to make the substantial investments required to modernise their existing infrastructure. Because analogue systems have been absent from the market for so long, businesses would also need to invest in training and accreditation.
It is expected that significant time will be needed to develop solutions for the extensive current and potential market for analogue computing. As we steer towards hybrid systems, we have already begun to shift away from purely digital ones. Ultimately, a fundamental change in how we understand computing is likely, for we live in an analogue reality, not a digital one.