It seems to me….
“Big Data is just that – big. But, it’s a term that is largely misunderstood and difficult to explain.” ~ Rick Smolan.
Computing is a powerful general-purpose technology, responsible for creative and destructive transformations in production organization, product design, and business models; it is constantly recasting a significant portion of the world economy. Productivity moves in jumps as new forms of organization and innovative technologies combine to reach new plateaus, and each jump to a new level implies both the reorganization of production and new forms of work and work organization.
In recent years, there has been an increasing demand to store and process extremely large amounts of data in domains such as finance, science, and government.
Though the deployment of technology is as crucial to productivity as the technology itself, the productivity frontier is constantly being pushed outward while best practices are not implemented equally throughout the economy. The leading 10 percent of global firms in each sector have seen significant and steady productivity increases in the 21st century while the other 90 percent continue to lag. The problem for society is therefore one of deployment and diffusion, business practices, and structural policy, not the inherent possibilities of the technology.
Data processing and analysis have become increasingly dependent on so-called “big data” and “cloud computing,” enabling some of the most disruptive technologies to have recently come into use. Big data and the cloud make for a perfect combination: together they provide a scalable and cost-effective infrastructure that can support computational analytics.
Cloud computing is the commodification of computing time and data storage by means of standardized technologies. It is a powerful technology capable of massive-scale and complex computing that eliminates much of the need to maintain expensive hardware, dedicated space, and software. The result has been massive growth in the use of data, especially big data, which is itself frequently generated through cloud computing. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful processing and analysis.
Cloud computing has matured to deliver computing services (data storage, computation, and networking) to users at the time and location, and in the quantity, they wish to consume, with costs based solely on the resources used. Powerful computing resources can be assembled, orchestrated, and deployed as needed. For those purchasing cloud computing as a service, the data center is no longer a capital cost; it has become simply another variable operating cost.
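That shift from capital to variable operating cost can be sketched with a small calculation. All of the figures below are hypothetical, chosen only to illustrate the comparison, not taken from any real pricing:

```python
# Hypothetical numbers, purely illustrative: comparing an owned data center
# (up-front capital cost) with pay-as-you-go cloud pricing (variable cost).

def owned_cost(hardware_capex, yearly_opex, years):
    """Total cost of owning hardware: up-front capex plus fixed yearly running costs,
    paid whether or not the machines are busy."""
    return hardware_capex + yearly_opex * years

def cloud_cost(hourly_rate, hours_used):
    """Cloud cost: pay only for the hours actually consumed."""
    return hourly_rate * hours_used

# A workload that runs 500 hours per year for 3 years:
print(owned_cost(100_000.0, 10_000.0, 3))  # 130000.0
print(cloud_cost(2.50, 500 * 3))           # 3750.0
```

The point is not the specific numbers but the structure of the cost: the owned data center charges for capacity, the cloud charges for consumption.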
The term “big data” describes the increasing volume, velocity, and variety of data collected by organizations. The developing concept and application of big data represents an epistemic shift in data availability where data is now considered to be infinitely networkable, indefinitely reusable, and significantly divorced from the context of collection.
The term arose from the extremely rapid increase in global data: technology can now store and process large and varied volumes of data, giving both enterprises and science deep insight into their clients and processes. Cloud computing provides a reliable, fault-tolerant, available, and scalable environment in which to host big data distributed management systems.
This type of processing is implemented using digital platforms incorporating algorithms and software structures that run in the cloud and operate on the data. Platforms themselves facilitate aggregation and analysis of data with the intent of controlling systems and actions.
The analytical techniques developed to analyze such data, collectively known as “data analytics”, generally involve the use of computer-based quantitative models. Data analytics uses algorithms to manipulate data sets and extract meaningful information. (An algorithm is a detailed description of all the steps required to complete a task.) While an appropriate algorithm applied to a particular kind of data can be very beneficial, algorithms that improve themselves as they encounter data can be exponentially more useful.
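The distinction between a fixed algorithm and one that improves with data can be made concrete with a minimal sketch (my own illustration, not drawn from the article):

```python
# Illustrative sketch: a fixed algorithm vs. one that refines itself
# as it encounters data.

def fixed_threshold_classifier(value, threshold=50.0):
    """A fixed algorithm: the rule never changes, however much data it sees."""
    return "high" if value >= threshold else "low"

class OnlineMeanPredictor:
    """A self-improving algorithm: each new observation refines its estimate,
    with no need to store the past data."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, value):
        # Incremental (running) mean update.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def predict(self):
        return self.mean

predictor = OnlineMeanPredictor()
for x in [10.0, 20.0, 30.0]:
    predictor.observe(x)
print(predictor.predict())  # 20.0
```

The threshold classifier performs the same computation forever; the online predictor is a toy instance of the second category, where encountering more data changes the algorithm's future behavior.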
The information in these data sets can be used in either backward- or forward-looking analysis. An organization's data might include, for example, information about its products and services, internal processes, market conditions and competitors, supply chain, trends in consumer preferences, individual consumer preferences, and specific interactions between consumers and its products, services, and online portals.
Backward-looking methods are sometimes described as descriptive (analyzing data and developing summaries and visual depictions of important trends) or diagnostic (looking at past data to determine what went wrong). Forward-looking methods can be predictive (using past trends to predict future trends) or, at best, prescriptive (predicting future trends and suggesting organizational strategies to maximize performance according to certain measures).
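The backward-looking and predictive styles can be sketched on a toy data set (the series and function names below are my own, purely for illustration):

```python
# Illustrative sketch: descriptive vs. predictive analytics on a
# hypothetical monthly-sales series.

monthly_sales = [100.0, 110.0, 125.0, 130.0, 145.0]  # invented data

def describe(series):
    """Descriptive: summarize what happened."""
    return {"min": min(series), "max": max(series),
            "mean": sum(series) / len(series)}

def predict_next(series):
    """Predictive: fit a least-squares line to past points
    and extrapolate one step into the future."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # value at the next time step

print(describe(monthly_sales)["mean"])  # 122.0
print(predict_next(monthly_sales))      # 155.0
```

A diagnostic analysis would ask *why* a month dipped, and a prescriptive one would pair the forecast with a recommended action; both build on these two primitives.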
Big data and big data analytics are viewed by both business and scientific areas as a way to correlate data, find patterns, and predict new trends. Although big data is mostly associated with the storage of extremely large data volumes, it also concerns ways to process and extract knowledge from it.
The world will increasingly be organized through the interplay of algorithms and data. Much will depend on how intelligent tools, including big data analytics, artificial intelligence, robotics, and sensors coalesce into systems that appear to be nearly autonomous.
It is difficult to predict how these developments will affect employment opportunities and compensation.
The politics of 21st-century growth will at times involve deep dislocations, even in prosperous, well-organized societies, and these will remain politically difficult. Capturing the promise of the technology is as much a political problem as a narrowly economic constraint, suggesting policy and political action rather than descent into economic pessimism. The impact of intelligent tools on productivity will depend not just on technological advances but on the capacity to deploy and diffuse them. It is almost certain that sustainable productivity increases will be a necessary, though likely insufficient, condition for increasing employment and wages.
Broad swaths of work consist of standard, routine tasks, arguably the bulk of work today, and these are directly vulnerable to displacement by intelligent tools. While computation can augment human intelligence and capabilities, that outcome is not guaranteed.
Any discussion of work and jobs must consider how production of goods and services will be reorganized as ever more sophisticated processing methodologies are introduced. It is impossible to predict what new work will arise as the economy changes. Labor markets will be created and transformed by platforms and intelligent tools based on the character and organization of work.
It is unclear whether the continued movement of computation to digital platforms will provide real and rising incomes, with levels of equality comparable to today's, as demand for developer involvement may decrease. The goal of firms could be simply to displace work and remove human intelligence from work tasks. Alternatively, intelligent tools could augment intelligence and capabilities, supporting rather than displacing workforce abilities.
Several labor-market studies broadly focused on the consequences of automation suggest the current digital revolution will generate a world of greater unemployment, more unskilled workers, and greater inequality. If society invests in technologies, business models, and companies on the belief that intelligent tools will inevitably displace work, with investment after investment made to substitute capital for labor, then a dystopian outcome is inevitable, and with it a road toward digital displacement on a mass scale. If the continuing progress of intelligent tools simply displaces work, absent retraining and the creation of new employment opportunities, it will create significant social upheaval.
Many studies highlight concerns about the destruction and devaluation of work and skills, but their conclusions about employment are less clear. Differences depend on varied judgments about what can be automated and what is economically feasible to automate, on the data sources used to estimate possible changes, and on the timeframe of the structural changes being observed. The innovation dynamic itself can never be totally automated, remaining for the foreseeable future a domain of human inventiveness and initiative.
There are numerous technological innovations in addition to the cloud and big data; all of which potentially could substantially alter societal organization. All are dependent upon considerable investment in education, training/retraining, and research. We can only hope we pursue our options wisely.
That’s what I think, what about you?
 Rick Smolan is the co-founder of the America 24/7 and Day in the Life photography series – and a natural storyteller in many media.
 Zysman, John, and Martin Kenney. “The Next Phase in the Digital Revolution: Intelligent Tools, Platforms, Growth, Employment.” Communications of the ACM, February 2018, pp. 54-63. https://cacm.acm.org/magazines/2018/2/224635-the-next-phase-in-the-digital-revolution/fulltext