Powered by artificial intelligence, today’s computers can engage in persuasive conversations with people, compose songs, create paintings, play chess and Go, and diagnose diseases, to name just a few examples of their technological prowess.
These achievements can be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what powers a computer.
There are two aspects to the power of a computer: the number of operations its hardware can perform per second and the efficiency of the algorithms it runs. Hardware speed is limited by the laws of physics. Algorithms, in general sets of instructions, are written by humans and translated into sequences of operations that computer hardware can perform. Even if computer speed reaches the physical limit, computational constraints will remain because of the limitations of algorithms.
These obstacles include problems that are impossible for computers to solve and problems that are theoretically solvable but in practice are beyond the capabilities of even the most powerful computers imaginable today. Mathematicians and computer scientists try to determine whether a problem can be solved by testing it on an imaginary machine.
An Imaginary Computing Machine
The modern idea of an algorithm, known as a Turing machine, was developed in 1936 by British mathematician Alan Turing. It is an imaginary device that simulates how arithmetic calculations are performed with a pencil on paper. The Turing machine is the template on which all computers today are based.
To accommodate calculations that would require more paper if done manually, the supply of imaginary paper in a Turing machine is assumed to be unlimited. It takes the form of an unlimited imaginary ribbon, or “tape,” of squares, each of which is blank or contains a symbol.
The machine is controlled by a finite set of rules and starts with an initial sequence of symbols on the tape. The operations the machine can perform are moving to a neighboring square, erasing a symbol, and writing a symbol on a blank square. The machine computes by performing a sequence of these operations. When the machine finishes, or “halts,” the symbols remaining on the tape are the output or result.
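The tape, head, and rule table described above can be sketched in a few lines of code. This is a minimal illustrative simulator, not part of the article; the function and rule names are invented for the example. The rules map a (state, symbol) pair to the symbol to write, the direction to move, and the next state.

```python
# A minimal sketch of a Turing machine simulator (names are illustrative).
# rules: maps (state, symbol) -> (symbol to write, move "L"/"R", next state).
# The special state "halt" stops the machine.

def run_turing_machine(rules, tape, state="start", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: blank squares default to " "
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # The symbols left on the tape are the output.
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, " ") for i in span).strip()

# Example: unary successor. Scan right past the 1s, write one more 1, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # -> "1111"
```

Even this toy machine shows the essential idea: a finite rule table plus an unlimited tape is enough to carry out any computation a modern computer can.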
Computing is often about decisions with yes or no answers. By analogy, a medical test (the problem type) examines whether a patient’s specimen (a problem instance) has a specific disease indicator (the yes or no answer). An instance, represented in digital form, is the initial sequence of symbols on the tape.
A problem is considered “solvable” if a Turing machine can be designed that halts on every instance, whether positive or negative, and correctly determines which answer the instance yields.
Not All Problems Can Be Solved
Many problems are solvable with a Turing machine and therefore solvable on a computer, while many others are not. For example, the domino problem, a variation of the tiling problem developed by Chinese American mathematician Hao Wang in 1961, is unsolvable.
The task is to use a set of dominoes to cover an entire grid and, following the rules of most domino games, match the number of pips on the ends of abutting dominoes. It turns out that there is no algorithm that can examine any set of dominoes and determine whether the set will completely cover the grid.
Keep it Reasonable
A number of solvable problems can be solved by algorithms that halt in a reasonable amount of time. These “polynomial-time algorithms” are efficient algorithms, meaning it is practical to use computers to solve their instances.
Thousands of other solvable problems are not known to have polynomial-time algorithms, despite ongoing intensive efforts to find such algorithms. These include the traveling salesman problem.
The traveling salesman problem asks whether a set of points with some points directly connected, called a graph, has a path that starts from any point, passes through every other point exactly once, and returns to the original point. Imagine that a salesman wants to find a route that passes through all the households in a neighborhood exactly once and returns to the starting point.
These problems are called NP-complete. The concept was independently developed and shown to exist in the early 1970s by two computer scientists, American Canadian Stephen Cook and Ukrainian American Leonid Levin. Cook, whose work came first, was awarded the 1982 Turing Award, the highest honor in computer science, for this work.
The Cost of Knowing Exactly
The best-known algorithms for NP-complete problems essentially search for a solution among all possible answers. On a graph of a few hundred points, the traveling salesman problem would take years to run on a supercomputer. Such algorithms are inefficient, meaning there are no known mathematical shortcuts.
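The search-through-all-answers approach can be sketched as a brute-force check for the traveling salesman problem described above. This is an illustrative sketch, not an optimized solver; the number of orderings to try grows factorially with the number of points, which is why this approach becomes hopeless on graphs of a few hundred points.

```python
# Brute-force sketch: try every ordering of the points and check whether
# consecutive points on the route are directly connected. With n points
# there are (n-1)! orderings to try -- factorial growth, not polynomial.
from itertools import permutations

def has_round_trip(graph):
    """graph: dict mapping each point to the set of points it touches."""
    points = list(graph)
    first, rest = points[0], points[1:]
    for order in permutations(rest):      # fix the start to skip rotations
        route = [first, *order, first]    # ...and return to the original point
        if all(b in graph[a] for a, b in zip(route, route[1:])):
            return True
    return False

square = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(has_round_trip(square))  # -> True: a-b-c-d-a works
```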
Practical algorithms that address these problems in the real world can only offer approximations, though the approximations are improving. The question of whether efficient polynomial-time algorithms can solve NP-complete problems is one of the seven millennium open problems posted by the Clay Mathematics Institute at the turn of the 21st century, each carrying a prize of one million dollars.
Beyond Turing
Could there be a new form of computation beyond Turing’s framework? In 1982, American physicist Richard Feynman, a Nobel laureate, put forth the idea of computation based on quantum mechanics.
In 1995, Peter Shor, an American applied mathematician, presented a quantum algorithm to factor integers in polynomial time. Mathematicians believe that this is beyond the reach of polynomial-time algorithms in Turing’s framework. Factoring an integer means finding a smaller integer greater than one that divides it. For example, the integer 688,826,081 is divisible by the smaller integer 25,253, because 688,826,081 = 25,253 x 27,277.
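What factoring asks can be seen with trial division, the simplest classical method. To be clear, this is not Shor’s quantum algorithm, just an illustration of the problem: its running time grows with the square root of the integer itself, which is exponential in the number of digits, and that growth is what makes factoring large integers hard on classical computers.

```python
# Trial division (NOT Shor's algorithm): find the smallest factor greater
# than one by testing divisors up to the square root of n. The work grows
# exponentially in the number of digits of n.
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime: it has no smaller factor greater than one

# The article's example: 688,826,081 = 25,253 x 27,277.
print(smallest_factor(688_826_081))  # -> 25253
```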
The RSA algorithm, widely used in securing network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, if it becomes a reality, will change the landscape of cybersecurity.
Can a full-scale quantum computer be developed to factor integers and solve other problems? Some scientists believe it can happen. Several groups of scientists around the world are working to build one, and some have already built small quantum computers.
However, as with all novel technologies ever invented, issues with quantum computation will almost certainly emerge that will impose new limitations.
This article was republished from The Conversation under a Creative Commons license. Read the original article.
Photo Credit: Laura Ockel / Unsplash