It seems to me….
“I’m a physicist, and we have something called Moore’s Law, which says computer power doubles every 18 months. So every Christmas, we more or less assume that our toys and appliances are more or less twice as powerful as the previous Christmas.” ~ Michio Kaku.
Roughly nine in ten American adults (92 percent) now own a mobile phone of some kind, and 77 percent have a smartphone. Ownership of traditional computers has remained stable: some 73 percent of U.S. adults own a desktop or laptop, and though this percentage has fluctuated over the years, it has stayed roughly level for the past decade (slightly down from a high of 80 percent in 2012, when Americans were last asked). Much of this has been made possible by advances in electronic components.
Integrated circuit components, along with most other aspects of computing technology, have adhered to predictions of exponential development, most commonly referred to as Moore’s Law. Moore’s Law is a special case of the learning curve, which has been well known for over one hundred years. It was first stated by Gordon Moore, co-founder of both Fairchild Semiconductor and Intel Corporation, at a meeting of the Electrochemical Society in late 1964 and then published in April 1965. In practice, it has meant a transistor cost reduction of more than 30 percent per year while the number of transistors per chip doubles about every two years. This exponential growth has occurred simultaneously at all levels of the computing ecosystem: chips, systems, and adopting communities.
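The doubling rule above compounds quickly. As a quick sanity check (an illustrative sketch, not from the source; the Intel 4004 starting point of roughly 2,300 transistors in 1971 is my own added example), here is the arithmetic:

```python
def transistors(start_count, start_year, end_year, doubling_period=2.0):
    """Projected transistor count under Moore's Law:
    the count doubles once per doubling_period (about two years)."""
    return start_count * 2 ** ((end_year - start_year) / doubling_period)

# Starting from the Intel 4004 (~2,300 transistors, 1971),
# project fifty years forward, i.e. 25 doublings:
count_2021 = transistors(2_300, 1971, 2021)
print(f"{count_2021:,.0f}")  # → 77,175,193,600
```

Twenty-five doublings turn a few thousand transistors into tens of billions, which is the right order of magnitude for today’s largest chips.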
It is difficult to predict how the changing nature of devices (phones, watches, wearables, and more) will affect future computer ownership. While steady advances in computing power and reductions in size can be anticipated, continued development will be constrained by basic physical limits. Countless industries have been upended by digital disruption, but now, after five decades, the end of Moore’s law is in sight. The ways we have built software in the past are starting to break down; the concept of digital transformation encapsulates advances in many different directions.
Making transistors smaller no longer guarantees that they will be cheaper or faster, though progress can be expected to continue for at least the next few decades. Chips will still get better, but at a slower pace, and the future of computing will be defined by improvements in three areas beyond raw hardware performance: software, storage, and architecture. Though the process of cramming more transistors onto silicon wafers is indeed slowing down, developers are finding other ways to speed up overall performance, such as quantum computing, neuromorphic chips, and 3D stacking. Slowing hardware progress will provide stronger incentives to find performance gains through new software algorithms. New computing architectures built from specialized chips optimized for particular tasks, such as cloud computing, neural-network processing, and computer vision, will be necessary.
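Software-level gains of the kind mentioned above need no new hardware at all. A minimal sketch (my own illustrative example, not from the source): the same lookup problem solved by two algorithms, where the better algorithm touches about 20 elements instead of up to a million.

```python
import bisect

def linear_search(sorted_items, target):
    """O(n): examine items one by one until the target is found."""
    for i, item in enumerate(sorted_items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the remaining search space at each step."""
    i = bisect.bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# Both return the same answer; the binary search needs ~20 comparisons
# where the linear scan may need up to a million.
assert linear_search(data, 765_432) == binary_search(data, 765_432) == 765_432
```

The hardware is identical in both cases; the speedup comes entirely from the algorithm, which is exactly the kind of improvement a post-Moore world rewards.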
User interfaces can take advantage of flexible Web technologies, bringing responsive design to all platforms. Computing is now associated with a new set of endpoints: not just the familiar PCs and smartphones, but wearable devices, wall screens, and a whole host of IoT hardware, from Amazon’s Echo to Apple’s Watch to the screens in your car. It no longer matters where an application is running. Thanks to virtualized containers, the same code can run on a phone, a PC, or a cloud server, and now it can run in the network as well, thanks to container support in the latest core routers and switches. Computing and storage are not the only things that have been virtualized: virtualized networks are at the heart of modern clouds.
Using 3D printing technology, production facilities can service numerous product requests from a single location, saving the time and money of constructing dedicated factories with separate fabrication and assembly lines. Sharing information in real time allows production facilities to adapt to changes in the production schedule, deal with downtime, avoid shutdowns, and better manage inventory. Rapid prototyping could enable faster continuous innovation than previously possible.
The computing field is still comparatively young. While it may at times be difficult to envision continued progress at current rates, or what directions that progress will take, we can be assured it will continue to amaze us with spectacular developments for quite some time to come.
That’s what I think, what about you?
 Michio Kaku is an American theoretical physicist, futurist, and popularizer of science.
 Smith, Aaron. Record Shares of Americans Now Own Smartphones, Have Home Broadband, Pew Research Center, http://www.pewresearch.org/fact-tank/2017/01/12/evolution-of-technology/, 12 January 2017.
 Denning, Peter J., and Ted G. Lewis. Exponential Laws of Computing Growth, Communications of the ACM, http://cacm.acm.org/magazines/2017/1/211094-exponential-laws-of-computing-growth/fulltext, January 2017, pp. 54–65.
 The Future of Computing, The Economist, http://www.economist.com/news/leaders/21694528-era-predictable-improvement-computer-hardware-ending-what-comes-next-future, 12 March 2016.
 The “cloud” consists of networks of data centers that deliver services over the internet, drawing on combinations of interconnected, powerful, or specialized processors as required.