Computing Advances

It seems to me….

“It’s hardware that makes a machine fast. It’s software that makes a fast machine slow.” ~ Craig Bruce.

Very few people care how their computing or communication devices actually work at the component level – just as long as they work, are fast, and are inexpensive. Still, an occasional look under the hood of these devices can be quite interesting. Much is happening that will shape what we will depend on in the future. I’ll keep this non-technical, so stay with me.

Moore’s Law, formulated by Gordon Moore in 1965, stated that the number of components on a computer chip would double every two years. While his law has remained relatively accurate over the subsequent decades, it is now pushing the limits of physical science. Chip manufacturers are able to etch features onto a silicon wafer as small as 14 nanometers – for comparison, a human red blood cell is about 6–8 micrometers across, several hundred times larger – and feature sizes are rapidly approaching atomic dimensions. While silicon foundries continue to invest in fundamentally new computing architectures and processor designs in an attempt to enable even smaller features, chip designers began encountering problems around the year 2000: features smaller than 65 nanometers can “leak” electrons due to quantum effects, and circuits can generate enough heat to melt if switched more rapidly than about four billion times per second.
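
For readers who like to see the arithmetic, here is a back-of-the-envelope sketch in Python of what a two-year doubling implies. The starting point of roughly 2,300 transistors on Intel’s 4004 in 1971 is a commonly cited figure; the projection itself is only illustrative.

# Project a two-year doubling forward from a commonly cited 1971 figure.
transistors, year = 2300, 1971   # approximate transistor count of the Intel 4004
while year < 2015:
    transistors *= 2             # one doubling...
    year += 2                    # ...every two years
print(f"{year}: ~{transistors:,} transistors")
# Prints roughly 9.6 billion - the same order of magnitude as today's largest chips.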

Engineers have found creative ways around these problems, though the workarounds have not produced faster, less expensive computers: so-called tri-gates reduce electron leakage by controlling electron flow from three sides instead of only one, and multicore CPUs (several 3 GHz processors rather than a single 10 GHz one) keep heating within limits.

While tri-gate architectures are relatively successful at controlling electron leakage at current component line widths, multicore CPUs are limited in how much additional speed they can deliver. Amdahl’s Law states that the possible speedup of a program using multiple processors in parallel is limited by the time needed for the sequential fraction of the program – each new processor added to a system contributes less usable power than the previous one.
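
A minimal sketch in Python makes the bound concrete (the 90% parallel fraction here is an arbitrary illustration, not a measured figure):

def amdahl_speedup(p, n):
    """Amdahl's Law: speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> speedup {amdahl_speedup(0.9, n):5.2f}x")

Even with 90% of the work parallelizable, no number of cores can push the speedup past 10x – the sequential tenth always remains.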

An old illustration makes the point: if one carpenter can build a house in seven days and seven carpenters can build it in one, it does not follow that, because one ship can sail across the ocean in seven days, seven ships could make the crossing in a single day. The voyage is a bounded problem – adding resources does not improve the outcome. Many types of computation are fundamentally sequential, including most functions in current computer operating systems, so adding additional cores does not improve their throughput.

Fortunately, other problem sets, such as the carpenters building houses, are essentially unbounded: given the materials, an arbitrarily large number of carpenters could theoretically build an arbitrarily large number of houses in that single day. This is an example of Gustafson’s Law, under which computations involving arbitrarily large data sets can be efficiently parallelized. Rather than limiting the size of problems so the available equipment can solve them within a practical fixed time, if faster (more parallel) equipment is available, larger problems can be solved in the same amount of time. Where Amdahl’s Law is bounded, Gustafson’s Law is not.
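
The contrast with the Amdahl sketch above is easy to see in the same style. Under Gustafson’s scaled-speedup formula, S(n) = n - (n - 1)s for serial fraction s, the speedup grows without bound as processors are added (again, the 10% serial fraction is only illustrative):

def gustafson_speedup(s, n):
    """Gustafson's Law: scaled speedup with serial fraction s on n processors."""
    return n - (n - 1) * s

for n in (2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> scaled speedup {gustafson_speedup(0.1, n):7.1f}x")

Where Amdahl’s fixed-size problem capped out near 10x, the scaled workload keeps climbing – past 900x at 1,024 cores.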

Most problems can be solved sequentially, though it might be impractical to do so; the set of problems with bounded solutions exceeds the set of those with known unbounded solutions: if one woman can have a child in nine months, how many women would it take to… never mind; you get the idea.

Fortunately, many types of computations – weather prediction, aircraft design, even computer chip layout – can be performed in an unbounded form. The difficulty lies in recognizing the type of problem and rewriting it in that form. The Navier–Stokes equations, an example of this type of problem, describe the motion of viscous fluids and are used to compute the physics of many things of scientific and engineering interest, such as modeling the weather, ocean currents, water flow in a pipe, or airflow around a wing. The equations figure in the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and much else.
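
For the curious, the incompressible form of the equations is commonly written (in standard notation) as

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,

where u is the fluid velocity, p the pressure, ρ the density, μ the viscosity, and f any body forces such as gravity. Simulations discretize space into millions of small cells, and since each cell’s update depends mostly on its immediate neighbors, the cells can be computed largely in parallel – exactly the unbounded structure that Gustafson’s Law rewards.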

It is still too early to predict which technology will supersede today’s designs, but specialized technologies will probably be used to perform specific tasks currently assigned to a single processor[1]. Several companies have rediscovered methods used forty or more years ago to increase processing speed and are once again developing special-purpose processors for certain processing-intensive tasks such as display output or data storage.

HP is attempting to develop a so-called memristor (memory resistor) able to eliminate the processor’s high-speed cache and collapse other layers of the storage hierarchy into a form of non-volatile universal memory. The company also hopes to use photonics (light) rather than electricity for processor communications. Photonics provides only a very small increase in processing speed, since only extremely short communication paths are normally involved, but it permits different types of circuit connections: photons, being massless and electrically neutral, do not interact as electrons do, so multiple beams of light can pass through one another without interference.

The substitution of germanium for silicon has been predicted since the initial development of solid-state components, but engineering innovation has always triumphed: silicon is less expensive and less brittle than germanium. Some companies, such as IBM, now hope to use carbon, in the form of either graphene or nanotubes, in place of silicon, but they are experiencing many of the same difficulties as with germanium.

Computers are extremely good at arithmetic computations – the entire purpose for which they were initially developed. Their basic design, called a von Neumann architecture after the mathematician John von Neumann, who first formalized it in 1945, is essentially optimized for performing symbolic instructions sequentially. Much of what we want computers to do today, however, goes well beyond arithmetic: pattern recognition, autonomous operation, speech understanding…. Computer scientists are very creative and have made significant progress in all these areas, but where biological organisms, especially humans, perform these types of operations easily, computers, limited by their inherent organizational structure, can extract actionable information from an environment only with great difficulty.
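
That one-instruction-at-a-time character is easy to see in a toy sketch. The mini-machine below is my own illustration, not any real instruction set, but it captures the von Neumann essentials: program and data share one memory, and a single processor steps through a fetch-decode-execute cycle.

# Program and data share one memory; a lone CPU steps through it sequentially.
memory = [("LOAD", 7), ("ADD", 8), ("STORE", 9), ("HALT", 0),  # the program
          0, 0, 0, 2, 3, 0]                                    # the data

pc, acc = 0, 0                       # program counter and accumulator
while True:
    op, addr = memory[pc]            # fetch the next instruction
    pc += 1
    if op == "LOAD":                 # decode and execute, one at a time
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[9])                     # prints 5 (2 + 3), computed strictly in sequence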

To better address these types of tasks, many researchers are attempting to develop specialized processors modeled on a biological brain rather than on traditional architectures. IBM recently demonstrated a computer chip containing more than five billion transistors arranged into 4,096 neurosynaptic cores that model one million neurons and 256 million synaptic connections[2].
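
To give a flavor of what “modeling a neuron” means – this is a generic textbook model, not IBM’s actual design, and the parameter values are arbitrary illustrations – here is a minimal leaky integrate-and-fire unit:

leak, threshold = 0.9, 1.0                  # illustrative constants
potential = 0.0
inputs = [0.3, 0.4, 0.5, 0.0, 0.6, 0.7]     # incoming synaptic current per time step

for t, current in enumerate(inputs):
    potential = potential * leak + current  # integrate input, with decay ("leak")
    if potential >= threshold:              # fire a spike and reset
        print(f"spike at step {t}")
        potential = 0.0

Thousands of such units running side by side, communicating only through spikes, behave very differently from a von Neumann machine stepping through one instruction at a time.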

It will probably be several years before any processors using such alternative designs find significant application, since even if they are successfully developed, applications will still require additional effort to fully exploit any processing advantage they provide. Until then, the computing and communications industries will need to make substantial evolutionary advances quickly just to meet currently anticipated developments.

It is estimated that the Internet of Things[3] (IoT) will comprise nearly 26 billion devices by 2020[4]. Most of these will involve machine-to-machine (M2M) communication in a worldwide network of interconnected objects, interacting anywhere and at any time and able to interoperate and collaborate across a heterogeneous range of entities. Sensors and gadgets – for example, the radio-frequency identification (RFID) tags increasingly replacing familiar supermarket-style barcodes – will gather exponentially escalating amounts of data. Handling that flood will require extremely small, inexpensive processors and will seriously strain communications networks.

Meeting the anticipated requirements of soaring numbers of mobile devices – smartphones, tablets, and the like – along with cloud-based storage will place even greater demands on already stressed networks. The convergence of the entertainment industry from current cable and satellite systems onto the Web will strain communications further, depending on how quickly that shift occurs.

As has frequently been true in the past, technological change is occurring at a rate difficult for product providers to keep pace with. For users, it should be a great, though sometimes frustrating, time – so just hold on and enjoy the ride. No one knows where the path is leading, but it should be fun.

That’s what I think, what about you?

[1] Pavlus, John. “The Search for a New Machine,” Scientific American, May 2015, pp. 58–63.

[2] Modha, Dharmendra S. “Introducing a Brain-inspired Computer,” IBM Research, http://www.research.ibm.com/articles/brain-chip.shtml, August 2014.

[3] Bornmann, Lewis J. “Internet of Things,” WordPress, https://lewbornmann.wordpress.com/2013/12/10/internet-of-things/, 10 December 2013.

[4] “Gartner Says the Internet of Things Installed Base Will Grow to 26 Billion Units by 2020,” Gartner, http://www.gartner.com/newsroom/id/2636073, 12 December 2013.
