“Classical” Literature

It seems to me….

“Change is the principal feature of our age and literature should explore how people deal with it. The best science fiction does that, head-on.” ~ David Brin[1].

I’ve always enjoyed reading but have never adapted to reading books in electronic form. While I accept the many advantages e-books have over hardbound books, there is something about the feel of an actual book that I find more comforting. Digital media’s advantages over static print will only increase in the future, so our generation might be the last to prefer its books in the form of physical matter rather than electronic bits.

When young, my preferred genre was fiction – especially science fiction: Arthur C. Clarke, Isaac Asimov, Ray Bradbury, Robert A. Heinlein, Marion Zimmer Bradley, Larry Niven, and too many others to mention. It was a time prior to the launching of the very first satellites into orbit, a time when paperback novels cost only $0.25. The future remained full of possibilities and I would escape current reality to dwell vicariously in some unknown future. We might have had Flash Gordon but nothing comparable to Star Wars or Star Trek. While much of what I read is still considered totally impossible, nothing back then seemed beyond imagination. I found the world that might be preferable to the one that actually was.

The subject matter of reading assignments preferred by high school teachers and college professors, however, was oriented toward the past: static, fixed, unimaginative – boring. I tolerated but rarely enjoyed them, considering them a type of literature that could possibly be appreciated only by literature majors.

My primary undergraduate major was physics (though I changed majors several times and my B.S. was in mathematics). My graduate degrees were in computer science (again with several other majors thrown in). Consequently, most of the reading in the academic and employed stages of my life became technical material related to whatever I was working on at the time. While not forgotten, too little time remained for escape into imaginative worlds. My primary motivation was to make that world of which I dreamed a possibility.

Now that I am retired and once again free to indulge in whatever genre appeals, much of the science fiction currently being written seems either to lack the imaginative quality prevalent in the past, or it is I who have changed. Space flight is now a reality with plans for permanent colonies on Mars. My awareness of what is currently considered feasible has also increased, perhaps jading my once-ready acceptance of whatever greatly exceeds all valid scientific possibility. The science fiction genre now also includes fantasy, which in my youth comprised only a very limited number of books about vampires or werewolves.

I, like the times, have definitely changed. Though I read as much as ever, my interests are heavily skewed toward current events and basic technical material. Perhaps I’ve also gotten lazy, as I now normally prefer more general material devoid of the higher-level math I read throughout much of my working life.

Some time back I decided to reread some of the supposedly “classic” literature favored as assignments while in high school and college: Plato, Horace, Swift, Boccaccio, Keats, Tennyson, Browning, Balzac, Whitman…. My conclusion after reading dozens of those pieces should perhaps not be surprising: while some are fairly good, the majority are as bad now as I found them to be back then.

University literature professors who publish anthologies to be inflicted upon English-101 students apparently select primarily works of interest only to others in their profession, namely other professors of literature. Their selections are mostly dry and of little interest to anyone outside their field. It should be no surprise that most students required to take these classes consider them totally boring and normally never read those selections again.

Granted, essentially all the material I read is either a translation or extremely dated (and I read it as literature, not history), so it is somewhat unfair to express such a blanket judgement, but I believe it is correct that the vast majority of what is now considered a “classic” would never be read today if it appeared newly written on a bookstore shelf. Even Beowulf is boring – the plot is weak and without sufficient character development – as is equally true of The Nibelungenlied.

This is not to imply that I did not find some of the material enjoyable. Some works written in ancient Greece (e.g., Homer’s Iliad, Aeschylus’s Agamemnon) were quite good, as was Dante’s The Divine Comedy, but then there is Voltaire’s Candide, which is not only boring but also poorly written. And who really cares about a monotonous run-on diatribe over someone stealing a clipping of a woman’s hair in Pope’s The Rape of the Lock? Many other authors fail to fare any better: Cervantes’ Don Quixote, Milton’s Paradise Lost, Spenser’s The Faerie Queene….

Other authors get a very mixed review: Shakespeare, for example. Everyone deservedly admires Portia’s speech “The quality of mercy is not strained…” from The Merchant of Venice or the lines “To be or not to be…” from Hamlet. Several others of his works (e.g., The Comedy of Errors or The Two Gentlemen of Verona) I would not recommend to anyone.

Perhaps it is not only me that has changed. Much about the entire world is now very different. Perhaps much classical literature no longer can be appreciated in an age of accelerating discovery. We live in an exciting age; perhaps our literary preferences must reflect life as we know it. The volume of literature being published today greatly exceeds publication rates of the past. Given that volume, not only are there vast quantities of mediocre publications but also an increasing quantity of what in the past would have been considered exceptional.

Though much classical literature actually is very good, so is much of the literature currently being written. My favorite work of fiction remains The Lord of the Rings by J. R. R. Tolkien, which I have read several times. My early preference for space has not been totally forgotten: The Martian by Andy Weir was excellent fiction, while the non-fiction Failure Is Not an Option by Gene Kranz is a reminder of how far we have progressed. If I were to recommend any non-fiction, it probably would be Guns, Germs, and Steel by Jared Diamond, but given the many exceptional options, it would be difficult to choose. While I personally might prefer the realm of “what might be”, I’ll still settle for “what is” rather than “what was”.

I readily admit to fairly plebeian preferences, favoring Gilbert & Sullivan over most opera or ballet (and, given the option, I would greatly prefer walking through some remote wilderness area). It probably is me, and anyone is welcome to disagree, but there undoubtedly are many others with similar literary tastes.

Reading remains enjoyable and I will always look forward to reading all types of literature, including many of the classics I’ve just denigrated, and I encourage others to do the same. There is so much in life and the world that cannot be experienced in any other way.

That’s what I think, what about you?

[1] Glen David Brin is an American scientist and award-winning author of science fiction.

Crime And Punishment

It seems to me….

“Money will determine whether the accused goes to prison or walks out of the courtroom a free man.” ~ Johnnie Cochran[1].

Prison population reduction needs to become a major U.S. priority. The federal prison system, especially, is too large, too costly, and in dire need of comprehensive reform. Though the federal prison system is by far the nation’s single largest jailer, the majority of those imprisoned are collectively incarcerated by the states.

President Obama named criminal justice reform a main policy priority for the end of his second term, and after decades of growth the federal prison population declined in 2013 and 2014. While half of those housed in state prisons have been convicted of a violent offense, fewer than 5 percent of federal prisoners’ most serious convictions were for similar crimes. In the federal system, drug offenses are the primary conviction for almost half (49 percent) of the prison population; immigration and weapons offenses account for another 25 percent.

There also is a racial disparity within our legal system. Reported crime data concentrated in historically heavily policed areas skew statistics to over-represent poor or minority neighborhoods. Caucasian college students who can afford legal representation when charged with drug use, rape, or assault are convicted of felonies at a much lower rate than African-Americans facing similar charges.

There are areas, primarily within predominantly African-American sections of large cities, where the majority of the young – teenage to twenties – are either incarcerated, on probation or parole, or facing an outstanding warrant. About 60 percent of African-American men who do not complete high school spend time in prison before reaching their mid-twenties.

The gulf between urban cores and isolated neighborhoods of concentrated poverty has contributed to political and social turbulence manifested in everything from the populist upheavals in recent mayoral elections in New York and Chicago to the violent street protests in Baltimore. Economically, socially, and politically, these disparate events suggest such uneven patterns of growth may increasingly destabilize even cities where the overall wellbeing is improving.

Crime statistics database analysis that considers such factors as antisocial behavior, prior convictions, and age of first arrest has provided judges, corrections workers, and parole officers with risk assessment probabilities able to determine prospects for recidivism fairly accurately[2]. Some social activists object to this statistical risk determination, claiming it perpetuates existing biases disproportionately affecting African-Americans, who typically are sentenced to prison terms 20 percent longer than those of offenders of other ethnic origins. Preliminary results of data-driven criminal justice have demonstrated its effectiveness, but it probably should also consider additional factors including geography and criminal history.

The number of violent crimes committed per 100,000 people in 2013 (368) was less than half that of 1991 (758)[3]. This in part resulted from risk assessment algorithms that weigh a variety of factors related to recidivism. Predictive policing uses data analytics and algorithms to better predict where and when a crime might occur, enabling police to be deployed more effectively, but both approaches have been criticized on moral, logistical, and political grounds as to their fairness and ethics[4]. While risk assessment systems have been shown to be effective, there is concern they may penalize racial minorities by overpredicting recidivism in those categories, since predictions rely on past statistics gathered from areas historically over-policed. While the algorithms might not be biased, the data might be.
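
A minimal sketch of that last point, using entirely made-up numbers rather than any real crime data: if two neighborhoods have the same underlying offense rate but different levels of police presence, the recorded statistics differ, and any risk score fit to those records inherits the difference.

```python
# Toy illustration (hypothetical numbers): identical true offense rates,
# different policing intensity, therefore different *recorded* rates.
import random

random.seed(42)

TRUE_OFFENSE_RATE = 0.10          # same underlying rate in both neighborhoods
DETECTION_RATE = {"A": 0.60,      # heavily policed neighborhood
                  "B": 0.20}      # lightly policed neighborhood
POPULATION = 10_000

def simulate(neighborhood):
    """Return the fraction of residents with a *recorded* offense."""
    recorded = 0
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENSE_RATE
        detected = offended and random.random() < DETECTION_RATE[neighborhood]
        recorded += detected
    return recorded / POPULATION

for n in ("A", "B"):
    print(f"Neighborhood {n}: recorded offense rate = {simulate(n):.3f}")

# Both neighborhoods offend at 10 percent, but the records show roughly
# 6 percent for A and 2 percent for B.  A risk score fit to these records
# would rate residents of A as about three times riskier: the algorithm
# is neutral, the data are not.
```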

The sentencing of Dzhokhar Tsarnaev, the convicted Boston Marathon bomber, to death had a public approval of 63 percent, though support for capital punishment continues to decline. Capital punishment is increasingly perceived as counter to sanctity-of-life beliefs, along with acknowledgment of the high costs required to actually execute someone. It also is rarely carried out: while 60 federal prisoners are currently on death row, only 3 federal executions have taken place in the last 50 years and none in the last 12. Judges, lawyers, and politicians increasingly oppose it. Most western democracies no longer use capital punishment, leaving us grouped with China and Iran among the major nations where it is still used.

There never can be absolute assurance of guilt. The somewhat absurd dichotomy is illustrated by the saying “Why do we kill people who kill people to prove killing people is wrong?”[5]

In reality, very few of those sentenced to death are ever actually executed. The death penalty as it currently exists can more accurately be described as “life in prison with the remote possibility of death” along with repeated expensive appearances in court. It makes considerably more sense to invest in jobs and education than to spend billions of dollars on jails and incarceration.

That’s what I think, what about you?

[1] Johnnie L. Cochran, Jr. was an American lawyer best known for his defense of celebrity clients such as Michael Jackson and O. J. Simpson.

[2] Calabresi, Massimo. Prison Math, Time, 13 Aug 2014, p. 10.

[3] FBI report.

[4] Kirkpatrick, Keith. It’s Not The Data, It’s The Algorithm, Communications of the ACM, February 2017, pp. 21-23.

[5] Drehle, David von. Death Of The Death Penalty, Time, 6 June 2015, pp. 26-33.

Employment And Automation

It seems to me….

“The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have little.” ~ Franklin D. Roosevelt[1].

Major traditional industries have been lost throughout history, and the resulting effect on employment has always been worker displacement, whether the cause was technological advancement or globalization.

There is something familiar about fears that new machines will take everyone’s jobs benefiting only a select few and upending society[2]. Such concerns sparked furious arguments two centuries ago as industrialization took hold in Britain. People at the time did not talk of an “industrial revolution” but of the “machinery question”. First posed by the economist David Ricardo in 1821, it concerned the “influence of machinery on the interests of the different classes of society”, and in particular the “opinion entertained by the laboring class, that the employment of machinery is frequently detrimental to their interests”. Thomas Carlyle, writing in 1839, railed against the “demon of mechanism” whose disruptive power was guilty of “oversetting whole multitudes of workmen”.

The Luddites’ protest against the mechanization of textile manufacturing, when automated looms were introduced in the early 19th century at the beginning of the industrial revolution, is only one of numerous similar examples: entire industries were decimated by the introduction of the automobile; agricultural employment totally changed as a result of mechanized farming equipment…. Advances in computing have created and then eliminated positions for keypunch operators, data-entry clerks, and computer operators.

While appearing similar to what has occurred in the past, this time could be different. When job categories were lost in the past, workers were able simply to transition into employment with somewhat comparable foundational skill requirements in another category. Agricultural workers sought employment in the factories. Blacksmiths and stable employees became automotive mechanics. Now, manufacturing jobs in a variety of industries are being lost and there is nothing with which to replace them; those industries are gone and will never come back. Just as the role of horses was eventually eliminated from agricultural production by the introduction of machinery, so shall general dependence on human labor diminish in an increasing number of employment segments. The future belongs to the next generation of industries, whether nanotech, biology … whatever.

Technological innovation is not a silver bullet for achieving broad-based prosperity: dramatic improvements in technology, especially software, do not translate easily into wage increases for the average worker. For most of the second half of the twentieth century the economic value generated in the U.S. – the country’s productivity – grew hand-in-hand with the number of workers. But around 2000 the two measures began to diverge: a gap opened between productivity and total employment. By 2011, that delta had widened significantly, reflecting continued economic growth but with no associated increase in job creation. Throughout advanced economies, the share of national income paid out as wages has dropped precipitously since the information technology and automation revolutions began in the mid-1970s, the first time this has happened in modern history. Unlike much of the 20th century, we are now seeing a falling ratio of employment to population, something that deserves reasonable concern.

Production in this second machine age depends less on physical equipment and structures and more on the four categories of intangible assets: intellectual property, organizational capital, user-generated content, and human capital. The Nobel Prize-winning economist Wassily Leontief[3] stated in 1983 that “the role of humans as the most important factor of production is bound to diminish in the same way that the role of horses in agricultural production was first diminished and then eliminated by the introduction of tractors”.

While computerization and offshoring were not the sole causes of declining opportunities for high school graduates, both factors played an important role. Diminishing fortunes of high school educated workers had two important consequences: (1) Many people faced downward economic mobility earning less real income than their parents had earned; (2) Education moved from being one source of upward mobility (along with generally rising earnings) to the main source of upward mobility. Even as the economy has improved, jobs and wages for a large segment of workers, particularly men without college degrees doing manual labor, have not recovered. Even in the best case, automation leaves the first generation of workers it displaces at a disadvantage as they usually do not have the skills to perform the innovative and more complex tasks required by new employment opportunities.

Computerization, automation, and artificial intelligence (AI) have ratcheted up the definition of foundational skills. Jobs are being disaggregated: the more highly skilled portions of any job require ever greater skill (with higher remuneration) while the relatively routine portions will either be automated or pay minimum wage. As knowledge becomes more abstract, the average person’s earnings have become increasingly correlated with educational attainment. In 1980 the average 40-year-old male with only a bachelor’s degree had weekly earnings 26 percent higher than the average 40-year-old male whose education stopped at high school graduation; by 2009 the gap had grown to 84 percent. Technology has increased competition not only for corporations but also for individuals; there are countless additional displaced applicants for a decreasing number of less-skilled positions. This competition for employment increasingly results directly from computerization and automation.
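
To put those percentages in dollar terms, a quick back-of-the-envelope calculation; the $700 weekly baseline wage is hypothetical, and only the 26 and 84 percent premiums come from the paragraph above.

```python
# Hypothetical weekly wage for a 40-year-old whose education stopped at
# high school graduation; the premiums are those cited in the text.
high_school_wage = 700.00

for year, premium in [(1980, 0.26), (2009, 0.84)]:
    college_wage = high_school_wage * (1 + premium)
    gap = college_wage - high_school_wage
    print(f"{year}: bachelor's degree earns ${college_wage:,.0f}/week, "
          f"${gap:,.0f} more than the high school graduate")

# At the same baseline, the weekly premium more than triples,
# from $182 in 1980 to $588 in 2009.
```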

Skills-biased technical change has increased relative demand for highly educated workers while reducing demand for less educated workers whose jobs frequently involve routine cognitive and manual tasks. Skills-biased technical changes are resulting in a worsening of economic conditions and prospects not only for families of non-college graduates but also the children of those families. As the demand for labor decreases, so do wages for the relatively less skilled.

An estimated 47 percent of U.S. jobs are vulnerable to automation[4]. The proportion of threatened jobs is much greater in poorer countries: 69 percent in India, 77 percent in China, and as high as 85 percent in Ethiopia. The cheapness of labor in relation to capital affects the rate of automation; passing laws that make it less costly to hire and fire workers is likely to slow its advance. Scale also matters: farms in many poor countries are often too small to benefit from machines that have been around for decades. Consumer preferences are a third barrier.

When considering employment categories as candidates for automation, the most probable differentiator would seem to be between human and digital labor – allowing people to focus on those tasks where they have a comparative advantage over the computer and allowing computers to do the work for which they are best suited. While generally true – computers are very good at math but not at pattern recognition; computers cannot perform many tasks which even a child can do without difficulty – recent advances are changing expectations. Increasingly, tasks considered extremely difficult or even impossible for computers to perform are being accomplished: autonomous vehicles, advanced pattern recognition…. People who previously felt relatively secure have found that some high-level reasoning requires little computation while basic sensorimotor skills entail considerable computational resources[5].

Bank of America Merrill Lynch predicted that by 2025 the “annual creative disruption impact” from AI could amount to $14 trillion-$33 trillion, including a $9 trillion reduction in employment costs thanks to AI-enabled automation of knowledge work; cost reductions of $8 trillion in manufacturing and health care; and $2 trillion in efficiency gains from the deployment of self-driving cars and drones.

AI, which will inevitably cost (and create) jobs as it automates various tasks, is going to be a contentious issue for decades to come. There are some things at which machines are simply better than humans – but humans still have plenty going for them. AI will be hailed and vilified in equal doses; there will be AI for good as well as evil. Still, AI is going to result in the elimination of jobs[6].

While productivity and employment traditionally tracked together, they became decoupled in the 1990s: productivity has continued to climb since then, but the employment-to-population ratio, as well as the real income of the median worker, is now lower than at any point since the decoupling began.

Record wealth creation has resulted from capital-based technological changes that substitute physical capital for labor, increasing the capital owner’s profits and reducing the share of profits going to labor. Wealth distribution in the U.S. (as well as in much of the rest of the world) has therefore changed from a normal bell-curve distribution to a power-law (Pareto curve) distribution as wealth increasingly flows to a decreasing percentage of recipients, essentially eliminating the middle class. The result is that the combined net worth of the wealthy has significantly increased while the median U.S. household income has precipitously decreased. Economic inequality, if permitted to result in corresponding political inequality, enables the privileged to gain further empowerment and still greater economic advantage in a self-serving destructive spiral.
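
A small illustrative sketch of what that change in distribution shape implies; all numbers below are invented for illustration and are not actual wealth statistics. Under a bell curve the mean and median track each other, while under a Pareto-type power law the same mean coexists with a much lower median and a heavy concentration at the top.

```python
# Illustrative only: compare mean vs. median under a bell-curve and a
# Pareto-like distribution scaled to have the same overall mean.
import random
import statistics

random.seed(0)
N = 100_000

# Bell curve: mean household wealth 100 (arbitrary units), modest spread.
bell = [max(0.0, random.gauss(100, 25)) for _ in range(N)]

# Pareto (power-law) wealth with shape alpha, rescaled to the same mean.
alpha = 1.5                              # heavier tail as alpha approaches 1
pareto = [random.paretovariate(alpha) for _ in range(N)]
scale = statistics.mean(bell) / statistics.mean(pareto)
pareto = [w * scale for w in pareto]

for name, data in [("bell curve", bell), ("Pareto", pareto)]:
    data_sorted = sorted(data)
    top1_share = sum(data_sorted[-N // 100:]) / sum(data_sorted)
    print(f"{name:10s}  mean={statistics.mean(data):6.1f}  "
          f"median={statistics.median(data):6.1f}  "
          f"top-1% share={top1_share:5.1%}")

# Same mean in both cases, but under the power law the median falls well
# below the mean and a small fraction of households holds a large share
# of the total -- the pattern the paragraph above describes.
```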

The industry most affected by automation is manufacturing. For every industrial robot per thousand workers, up to six workers lost their jobs and wages fell by as much as three-fourths of a percent[7]. Even if overall employment and wages recover, there will be losers in the process, and it’s going to take a very long time for these communities to recover. Robots are to blame for up to 670,000 lost manufacturing jobs between 1990 and 2007 and that number will rise as the installed base of industrial robots is expected to quadruple.

While it is tempting to blame employment decreases on globalization, as many politicians attempt to do, this hypothesis is not completely supported by the facts: labor markets throughout the world are experiencing similar declines as automation and digitization reduce demand for less educated workers. The overall effect of price equalization is also offset by locating manufacturing closer to the point of sale – closer to customers, engineers, designers, and adequately trained workers – in order to reduce product transit time.

Automation, more than other factors like trade and offshoring that President Trump campaigned on, has been the more significant long-term threat to blue-collar jobs. Researchers determined that the “large and robust negative effects of robots on employment and wages” remained strong even after controlling for imports, offshoring, software that displaces jobs, worker demographics, and the type of industry.

Unrestrained globalization results in factor price equalization; competition will bid the factors, labor or capital, to a single common price. People who work in the parts of the country most affected by imports generally have greater unemployment and reduced income for the rest of their lives. Over time, automation has had a more significant effect than globalization and would eventually have eliminated the offshored jobs anyway; only 13 percent of manufacturing job losses resulted from offshoring while the remainder resulted from enhanced productivity due to automation. Apparel making was hit hardest by trade while computer and electronics manufacturing were primarily affected by technological advances.

Accelerating technological change necessitates more rapid response and adjustment by displaced workers and institutions, but governments have not formulated any plausible approach to the massive social upheaval that economic dislocation resulting from computerization and automation is likely to cause. Worker displacement is also a political problem, for if jobs are eliminated, displaced workers no longer pay taxes. Neither companies nor industrial robots pay sufficient taxes to compensate for what is lost from all the employees who are displaced.

Labor economists say there are ways to ease the transition for workers whose jobs have been eliminated by automation including retraining programs, stronger unions, more public-sector jobs, a higher minimum wage, a bigger earned-income tax credit, and free access to higher education and vocational programs for the next generation of workers. Additionally, without a substantial investment in research, there is no future.

That’s what I think, what about you?

[1] Franklin Delano Roosevelt was an American statesman and political leader who served a record four terms as the 32nd President of the United States, from 1933 until his death in 1945, emerging as a central figure in world events while directing the U.S. government through most of the Great Depression and World War II.

[2] The Future of Work, The Economist, http://learnmore.economist.com/story/57ad9e19c55e9f1a609c6bb4, 9 September 2016.

[3] Wassily Wassilyevich Leontief, was an American economist of half Russian-Jewish descent notable for his research on how changes in one economic sector may affect other sectors.

[4] Frey, Carl Benedikt, and Michael Osborne. From a study published while at Oxford University, 2013.

[5] Moravec’s Paradox: the discovery by artificial intelligence and robotics researchers that contrary to traditional assumptions, high-level reasoning requires very little computation but low-level sensorimotor skills require enormous computational resources.

[6] Heath, Nick. Why AI Could Destroy More Jobs Than It Creates, And How To Save Them, TechRepublic, http://www.techrepublic.com/article/ai-is-destroying-more-jobs-than-it-creates-what-it-means-and-how-we-can-stop-it/, 1 November 2016.

[7] Acemoglu, Daron, and Pascual Restrepo. The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment, Massachusetts Institute of Technology, http://economics.mit.edu/files/11512, May 2016.

The State Of AI

It seems to me….

“The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code. The key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end – as soon as it tilts even a little, it quickly falls the rest of the way.” ~ Eliezer Yudkowsky[1].

Artificial Intelligence (AI) supposedly was my area of concentration in graduate school in the 1960s. This was the height of “expert system” development – directly preceding the start of the so-called AI winter. I hung around the Stanford AI Lab in the early 1970s and did some work on speech recognition (without success): my AI credentials are extremely weak. As one of the co-chairs of a large international AI-centric conference in San Francisco (ACM-84) in 1984, I strongly believed at the time in the feasibility of the program goals established by the Japanese. It appeared as if we might finally be on our way.

The Japanese Fifth Generation Computer Systems (FGCS) artificial intelligence project proposed to build a massively parallel computing system over a ten-year period: 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982, the Japanese government decided to go ahead with the project and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies. The term “fifth generation” was intended to convey that the system would be a leap beyond the computer architecture of the time, but it was quickly surpassed in speed by less specialized hardware (e.g., Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers, but after the FGCS project the Japanese Ministry of International Trade and Industry (MITI) stopped funding large-scale computer research projects and the research momentum the project had developed dissipated.

Many of us who believed in the basic premise of AI became disillusioned with what we perceived to be the slow rate of progress. Initial application success could not be sustained; so-called “expert systems” proved inadequate; hoped-for breakthroughs in areas such as theorem proving, probabilistic reasoning, and speech understanding never materialized. We began to appreciate just how difficult it was to duplicate seemingly easy human capabilities. During the ensuing AI winter, research visibility diminished but work continued with much more realistic and achievable goals. Now, AI once again is gaining public attention as intelligence increasingly is incorporated into seemingly every device and application.

Defining AI is difficult and, while current researchers might object, to many outside the field anything demonstrating improved performance over time seems to qualify as AI. This was the only requirement in the past and, in a general sense, still remains sufficient. How an algorithm is implemented is immaterial. In the 1960s we implemented expert systems by continually adding additional rules and it was considered AI – but times have changed. Gomper’s checker-playing program was considered AI, but now chess applications on cell phones can defeat most humans and they are thought of only as games, not AI.
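
For readers who never saw one, here is a minimal sketch of the add-another-rule style of system described above; the rules themselves are invented purely for illustration. “Intelligence” was simply an ever-growing table of condition/conclusion pairs, and improving the system meant appending more of them.

```python
# A toy rule-based "expert system" in the style described above.
# Each rule is a set of conditions over known facts plus a conclusion;
# the engine keeps applying rules until nothing new can be inferred.
# The rules below are invented purely for illustration.

RULES = [
    ({"has_fever", "has_rash"},             "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_isolation"),
    ({"has_fever"},                         "recommend_fluids"),
]

def infer(facts):
    """Forward-chain over RULES until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_rash", "not_vaccinated"}))
# "Improving" such a system meant appending more (conditions, conclusion)
# pairs to RULES -- no learning, no statistics, just accumulated rules.
```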

Recent advances in AI are somewhat alarming since many of the people working in the field do not seem to fully appreciate the potential negative consequences. Full general AI surpassing human-level intelligence is inevitable – it WILL happen. We just haven’t any idea when. For humans, it encompasses our greatest hope and possibly the only way for us to survive as a species. Unfortunately, the dark side is that there are an unlimited number of ways it could go wrong. No, I believe the Hollywood scenarios are impossible, but we have to understand that machine intelligence never will equate to human intelligence.

AI is one of humankind’s truly revolutionary endeavors. Artificial intelligence has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach. It will transform our economies, our society, and our position at the center of this world. If we get this right, the world will be a much better place. We’ll all be healthier, wealthier, and happier.

The human brain is the most complex known system in the universe. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no known physical laws that would prevent us from reproducing or exceeding its capabilities.

If there can be artificial intelligence, then there can be super-intelligent artificial intelligences. There doesn’t seem to be any reason why entities other than human beings could not be intelligent. Nor does there seem to be any reason to think the highest human IQ represents the upper limit on intelligence.

For the time being, superintelligent machines that pose an existential threat to humanity are the least of our worries. The more immediate concern is how to prevent robots or machines with rudimentary language and AI capabilities from inadvertently harming people, property, the environment, or themselves. There is a danger they will interpret commands from a human too literally or without any deliberation about the consequences. Overconfidence in the moral or social capabilities of robots is also dangerous. The increasing tendency to anthropomorphize social robots and for people to establish one-sided emotional bonds with them could have serious consequences.

Human fallibility poses greater risk and challenges than general level artificial intelligence as artificial entities become increasingly autonomous and ubiquitous. Humans are fallible; they make mistakes; they frequently give faulty, incomplete, or ambiguous instructions; they are inattentive, nefarious, and self-serving.

An oft-cited thought experiment[2] illustrating why the potential threat of general AI is considered such an extreme danger is that, if given even a very narrowly defined goal that is totally benign, an entity not sharing human values could adopt unforeseen hazardous instrumental sub-goals posing an existential risk to humanity. To even logically evaluate such a potential threat requires a reasonable analysis, an estimate of the risk, and an understanding of the underlying technological phenomena necessary to formulate a response. We tend to assume that emotion, ethics, and intelligence are somehow interdependent and not actually possible in one form without the others, but there isn’t any substantiation for this belief.

Researchers in the field are aware of the potential negative consequences of future development. Recent conferences[3],[4] brought together AI researchers from academia and industry along with thought leaders in economics, law, ethics, and philosophy to address and formulate principles of beneficial AI; people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps that can be taken to ensure resulting technology is beneficial. A number of guiding principles resulted that have been endorsed by many in the field[5].

Assuming the perfect superintelligent AI entity could be developed without any negative consequences, what would be the long-term effects on humanity? It would be able to perform any work more efficiently, faster, and more economically than humans can. Not only would it end all human drudgery, it also would be the end of most intellectual work. It is unclear what the long-term effects of such leisure might be, but given current expectations, it could be abject poverty.

It is unknown how much time remains before the creation of such an entity. Today, the common implementation of AI seems to have evolved into a dependence on deep learning algorithms primarily based on distributed representations. Numerous machine-based analysis techniques, all broadly labeled as artificial intelligence, have been developed, including neural networks, multilinear subspace learning, random forests, deep Boltzmann machines, deep Q-networks, etc. Numerous approaches have been pursued in the past without success; it remains to be seen what the limitations of these approaches will be, but their potential has only begun to be exploited.
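
By way of contrast with the rule-based sketch earlier, here is a minimal, self-contained illustration of what learning a distributed representation means in practice. The tiny network, the XOR task, and every parameter below are chosen purely for illustration and are not drawn from the essay: nothing is hand-coded except the architecture and the learning rule, and the resulting “knowledge” ends up spread across the weights rather than written as explicit rules.

```python
# A deliberately tiny neural network (numpy only) trained on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())
# With most random initializations the output approaches [0, 1, 1, 0];
# the mapping is encoded implicitly across W1, b1, W2, and b2 rather
# than in any human-readable rule.
```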

One of the major limitations is that computers are not yet capable of ideation – original recombinant innovation; there still isn’t any system that is creative, entrepreneurial, or innovative. Computers can be programmed to generate new combinations of pre-existing elements but are unable to originate new ideas or concepts.

While the existential threat of general-level AI is real, it currently is sufficiently far in the future that, while it deserves consideration, other concerns assume greater importance, e.g., global warming. Near term, AI will become the operating system of all our connected devices. It will be the way we interact with our smart phones, cars, fridges, central heating systems, and front doors.

Even current limited AI poses challenges. It is highly probable that AI system development will be used mainly to further enrich the wealthy and to entrench the influence of the powerful. Robotic weapons could make it easier for governments to start wars because they hold out the illusion of being able to fight without taking any casualties. They could increase the risk of accidental war as militaries deploy unmanned systems in high-threat environments where it would be too risky to place a human being, such as just outside a potential enemy’s airspace or deep-sea ports.

Related advances in computing and automation present an immediate challenge in some areas such as employment as they progressively subsume an increasing number of job categories. The reasons for replacing people with machines are not simply based upon available technology; the predominant factor is actually the business case and the social attitudes and behavior of people in particular markets. How we cope with this change is a question not for technologists but for society as a whole. History would suggest that protectionism is unlikely to work.

In the past, displaced agricultural workers found employment in the factories. Now the factory jobs are being eliminated but a rapidly increasing percentage of newer employment opportunities are higher on the intelligence scale necessitating advanced education which exceeds the ability of a significant percentage of potential available workers. Automation and computerization will result in expanded employment opportunities but many of those positions will be in research and development.

It is necessary to ensure there is an educated workforce able to adapt to the new jobs created by technology; people need to enter the workforce with skills for jobs that will exist in a couple of decades’ time, when the technologies for those jobs have been invented. If people are educated for flexibility rather than to fit into a particular job, they will be able to cope with the dislocation.

Advanced, affordable education must be available to all, both to alleviate unemployment and to address the shortage of qualified employees for those new positions. For those for whom higher education is not an option, other alternatives must be found or society as a whole will suffer the consequences.

That’s what I think, what about you?

[1] Eliezer Shlomo Yudkowsky is an American AI researcher and writer best known for popularizing the idea of friendly artificial intelligence.

[2] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.

[3] AI Safety conference in Puerto Rico, Future of Life Institute, https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/, 12 October 2015.

[4] Beneficial AI 2017, Future of Life Institute, https://futureoflife.org/bai-2017/, 5 January 2017.

[5] Asilomar AI Principles, https://futureoflife.org/ai-principles/.

The Anthropocene Epoch

It seems to me….

“Imagine our descendants in the year 2200 or 2500. They might liken us to aliens who have treated the Earth as if it were a mere stopover for refueling, or even worse, characterize us as barbarians who would ransack their own home.” ~ Paul J. Crutzen[1].

In 1543 Nicolaus Copernicus published his model of the universe placing the Sun rather than the Earth at its center. In 1859 Charles Darwin showed that humans are just another species of animal. In the 20th century geologists established that, while all of human recorded history began only around the 4th or 5th century BCE with the Hellenic Greeks, our planet is 4.6 billion years old. We now know we live on an average planet circling an average star far out on a spiral arm of an average galaxy that is only one of an estimated 10 trillion other galaxies, each of which contains about 100 billion stars.

Still, one thing is unique: our small, rather insignificant planet is our home and the only one on which we live. As this for now is our only option as a place to live, we should be taking better care of it. Life depends upon our planet maintaining a delicate equilibrium among its atmosphere (oxygen, carbon dioxide…), flora (forests, grasslands, savannahs…), water (oceans, lakes, rivers, streams, ice…), and other life forms (fauna, aquatic, microscopic…) – everything is interrelated and interdependent. We are not being very good caretakers.

It is obvious that humans have profoundly altered a significant proportion of our environment, but has human activity affected enough aspects of our planet that, if all life suddenly terminated, any archeological evidence would remain of our existence? Artifacts would decay and crumble; nature would relatively quickly reclaim the planet, undoing even our greatest constructs and accomplishments.

It is now believed that industrialized humanity has changed the composition of Earth’s atmosphere and oceans and modified the landscape and biosphere to such an extent that the geological change is sufficiently pervasive to represent a distinct unit in our planet’s history similar to other epochs such as the Jurassic, Cretaceous, Pleistocene, or Holocene. The Anthropocene would be but the latest such unit on the geologic time scale.

There are valid arguments for the end of the Holocene, the present geological epoch which has lasted for 12,000 years, and recognizing that Earth has entered a new one: the Anthropocene[2]. It basically acknowledges that humans, far from being merely another species on the planet’s surface, now fundamentally affect the way it works.

The Anthropocene is a proposed epoch that begins when human activities started to have a significant global impact on Earth’s geology and ecosystems. Neither the International Commission on Stratigraphy nor the International Union of Geological Sciences (IUGS) has yet officially approved the term as a recognized subdivision of geological time.

Humans are the Earth’s dominant predator: we take about a quarter of the planet’s total biological production, constitute about a third of the biomass of all land vertebrates, and are responsible for the extermination of enough species that the loss of the planet’s biodiversity can be considered catastrophic. Even if all humans were to suddenly disappear, our effects on the planet – species extermination, climate change, sea-level rise… – will continue well into the future. Now the emergent technosphere, itself an outgrowth of the biosphere, represents the possibility of even greater future planetary change[3].

The question is what clearly differentiates this new epoch from the previous one; geologists traditionally demand clear and sudden change visible in rocks. While the exchange of species between the New and Old Worlds in the 1600s or the appearance of plastics in the 1950s might seem sufficient, most of the Anthropocene Working Group’s (AWG) members preferred the high point of nuclear-weapons testing in 1964. Fallout from those tests scattered plutonium, an element vanishingly rare in nature, far and wide across the planet. Future geologists, depending on precisely how much time has passed and therefore how much radioactive decay has occurred, will be able to see a layer of plutonium, uranium, or (eventually) lead in the rocks. At their congress, the AWG’s members voted for this “bomb spike” to be the marker.

A recent paper[4] summarized research that identifies major ways in which Holocene conditions no longer exist.

  • Atmospheric concentrations of CO2 have exceeded Holocene levels since at least 1850, and from 1999 to 2010 they rose about 100 times faster than the increase that ended the last ice age. Methane concentrations have risen even further and faster.
  • For thousands of years, until 1800, global average temperatures were slowly falling, a result of small cyclical changes in the Earth’s orbit. Since then, increased greenhouse gases have caused the planet to warm abnormally rapidly overriding the orbital climate cycle.
  • Between 1906 and 2005, the average global temperature increased by up to 0.9°C, and over the past 50 years the rate of change has doubled.
  • Average global sea levels began rising above Holocene levels between 1905 and 1945. They are now at their highest level in about 115,000 years and the rate of increase is accelerating.
  • Species extinction rates are far above normal background rates. If current trends of habitat loss and overexploitation continue, 75 percent of species could die out in the next few centuries. This would be the Earth’s sixth mass extinction event and equivalent to the extinction of the dinosaurs 65 million years ago.

It is extremely unfortunate that essentially everything differentiating the proposed Anthropocene epoch from the preceding Holocene is negative. This planet will be our home for the foreseeable future and we should be taking better care of it. If we do not, humans might very likely also be headed for extinction.

That’s what I think, what about you?

[1] Paul Jozef Crutzen is a Dutch, Nobel Prize-winning atmospheric chemist.

[2] Worland, Justin. The Anthropocene Should Bring Awe-and Act As a Warning, Time, http://time.com/4475604/the-anthropocene-should-bring-awe-and-act-as-a-warning/?iid=sr-link6, 1 September 2016.

[3] Zalasiewicz, Jan. A History In Layers, Scientific American, September 2016, pp. 30-37.

[4] Yes, A New Epoch Has Begun, Peak Oil, http://peakoil.com/enviroment/yes-a-new-epoch-has-begun/comment-page-1, 12 January 2016.

Income, Wages, And Poverty

It seems to me….

“Inequality causes problems by creating fissures in societies, leaving those at the bottom feeling marginalized or disenfranchised.” ~ Nicholas Kristof[1].

Even with the economy continuing to expand and unemployment at its lowest point in ten years, an increasing number of U.S. workers are losing their jobs every year as a result of automation, outsourcing, and skill-set changes. Over a two-year period, half of all households with a 401(k) plan experienced some job loss or other form of economic shock. This led to a 24 percent increase in 2015 in employees purchasing private wage insurance policies.

The good news: wages, long stagnant, finally appear to be growing again. In September 2015, when the Fed considered raising interest rates, average hourly pay had just grown by 2 percent, on an annualized basis, over the prior three months[2]. In 2016, that had risen to 2.8 percent. By one measure, wages grew by fully 4 percent in the third quarter of 2015. Accelerating pay suggests that slack in the labor market was almost gone. The pick-up in wages, however, might not be sustained. Many workers left the labor force following the 2007 recession: labor-force participation among 25- to 54-year-olds in the third quarter of 2015 was lower than at any time since 1984. If some can be tempted back to work, growth in wages will again slow.

The bad news is that income inequality continues to increase. While productivity and total income have grown in the overall economy, the increase is now going to a smaller percentage of recipients. Real wages of unskilled workers in the U.S. and elsewhere have declined nearly 1 percent since 1999. In 2012, for the first time since before 1929, half of all income went to just the top 10 percent of Americans, almost a quarter of all income to the top 1 percent, and almost 6 percent to the top 0.01 percent.

In the past, new income was spread more evenly across the economic ladder than it is today, when a disproportionate share of the country’s gains is going to the very richest Americans. Inflation-adjusted wages have only inched up for most workers since the 1980s, but the country’s highest earners have seen their pay balloon by 35 percent, according to a 2015 report by President Obama’s Council of Economic Advisers.

On average, about 80 percent of people in their early 30s would earn more than their parents today if income growth were distributed as evenly as it was in 1940. Making growth more equal would help middle-class people the most but it would also deeply affect wealthy Americans. People whose parents are among the top earners in this country would see their likelihood of making that much money increase by more than 30 percentage points if growth were more balanced.

This inequality has many associated negative social effects; things are not working for a significant percentage of Americans. While life expectancy in general continues to rise, it has fallen for those who have not completed high school. It is likely that falling real earnings for male high school graduates have even contributed to their declining rates of marriage and family formation. In 1970, a child living with a high-school-graduate mother and a child living with a college-graduate mother had the same chance – nine in ten – of living with two parents. In 2008, a child with a college-graduate mother still had a nine-in-ten chance of living with two parents, but the chance for a child with a high-school-graduate mother had fallen to seven in ten. One consequence of more female-headed families was an increase in the share of children living in poverty from 15 percent in 1980 to 20 percent in 2009. The spread of computerized work is increasing the importance of education even as it undermines many children’s opportunity to acquire the foundational skills needed for post-secondary education.

Since the 1940s, it has become less and less likely that children will grow up to earn more than their parents[3]. Research shows children born in 1940 have a 92 percent chance of taking home more income than their parents. By contrast, someone born in 1984, who is 32 years old today, has just a 50 percent likelihood of making more than his or her parents.

This loss of absolute mobility is shared among all economic classes as incomes are stagnating for everyone, not just the poor. Upper-middle-class Americans saw their chances of earning more than their parents decline the most of any group born from 1940 to 1980.

Part of the reason for the stagnation is that the country’s economy isn’t expanding as fast as it previously was. U.S. gross domestic product (GDP) often grew at more than 5 percent in the postwar years and hit 7.3 percent in 1984. Annual growth hasn’t reached 5 percent since then; it was 2.6 percent in 2015. Slower growth means there’s less new wealth to divide among the people who live and work here.

Unfortunately, one-third of Americans have not saved anything toward retirement; 56 percent have saved less than $10,000. About 27 percent of Americans, or 66 million people, do not even have an emergency fund; just 28 percent of Americans have saved six months’ worth of expenses[4]. Those aged 71 or older had the greatest likelihood, at 47 percent, of having saved at least six months’ worth of anticipated expenses. More than a quarter of younger people, so-called “millennials” between the ages of 20 and 30, are unable to cover a $3,000 emergency either through personal savings or by borrowing from friends. They live from day to day knowing that just one mistake or accident could mean financial ruin.

Even among American households that earned $100,000 a year or more, 27 percent lacked a rainy-day fund able to meet three months’ worth of expenses. Additionally, a 2015 study found that about a third of Americans who earn $75,000 or more a year at times live from paycheck to paycheck.

That’s what I think, what about you?

[1] Nicholas Donabet Kristof is an American journalist, author, liberal/progressive op-ed columnist, and a winner of two Pulitzer Prizes.

[2] Buckle Up: Interest Rates In America, The Economist, http://www.economist.com/news/finance-and-economics/21679806-first-three-pieces-federal-reserves-imminent-interest-rate-decision?cid1=cust/ednew/n/bl/n/20151210n/owned/n/n/nwl/n/n/NA/n, 12 December 2015.

[3] Hendren, Nathaniel. A working paper authored by researchers from Stanford and Harvard universities and the University of California, Berkeley, 8 December 2016.

[4] Bell, Claes. Financial Security Index, Bankrate, http://www.bankrate.com/finance/consumer-index/financial-security-charts-0621.aspx, 21 June 2016.

Moving Outward

It seems to me….

“The United States was not built by those who waited and rested and wished to look behind them. This country was conquered by those who moved forward, and so will space.” ~ John F. Kennedy[1].

The U.S. needs someone with the same scope and boldness as President Kennedy challenging us to dream and dare – the very attributes that as a nation we now seem to have lost. It is difficult to understand the foolishness and shortsightedness of the politically motivated decisions to abandon the manned exploration of space almost a half-century ago. While there still are plans and discussions of a return to the Moon or a flight to Mars, they remain only talk with little apparent motivation or national priority. Given the experience and rapid development of the early space program in the 1960s, it is difficult to believe how everything accomplished up to then was abandoned and what by now could have been achieved: a permanent manned Lunar base by 1980, a manned Mars landing by 1990, a permanent base on Mars by 2000…. All thrown away by visionless politicians unable to dream of greater possibilities. The greatest opportunities were open to us – and they shut the door.

In a recent conversation, a neighbor, who also is a friend and usually in political agreement with me, mentioned that he considers any investment in manned missions to Mars a waste of money. I could not disagree more. Somewhat surprised, I responded that I considered it extremely important. Not only is continued space exploration imperative, it seems easy to justify an expansion of such programs regardless of the metrics used.

Most importantly, humans are an exploring species: it is a basic part of our nature. With few frontiers remaining here on Earth, only space remains. Some people travel for the sake of discovery and adventure but regardless of the reason, travel seems to be a human compulsion; a defining element of what is a distinctly human identity that will never rest at any frontier whether terrestrial or extra-terrestrial.

As humans, we are driven to explore the unknown, discover new worlds, push the boundaries of our scientific and technical limits, and then push the envelope even further. This intangible desire to explore and challenge the limitations of what we know and where we have been has proven beneficial to humanity from our very origin.

Space exploration helps to address fundamental questions about our place in the universe and the history of our solar system. Through addressing the challenges related to human space exploration we expand technology, create new industries, and help foster a peaceful connection with other nations. Curiosity and exploration are vital to the human spirit as is accepting the challenge of going deeper into space.

Mars has always been a source of inspiration for explorers and scientists. A mission to our nearest planetary neighbor provides the best opportunity to demonstrate that humans can live for extended, even permanent, stays beyond low Earth orbit. Sending scientists with proper instrumentation rather than robots would broaden the range of science and produce discoveries much more quickly. The technology and space systems required to transport and sustain explorers will drive innovation and encourage creative ways to address challenges. As previous space endeavors have demonstrated, the resulting ingenuity and technologies will have long lasting benefits and applications.

The U.S. manned space program at its very height cost each taxpayer about $0.25 a year, but the estimated return on the investment was several times that amount. It still is difficult to accept the ignorance and stupidity of the politicians responsible for those decisions.

The U.S. economy and technology allow us to accomplish anything we determine to be a priority yet we have stood with our feet firmly affixed to the ground since December 1972 rather than continuing our manned exploration of the universe. Everyone who has ever listened to the gnat-like calls of the distant stars cannot help but feel the frustration of what has been lost. Humanity, by our very nature, must become a multi-planet species.

We, as a nation, have reached a junction where we must either renew our commitment to push farther out, to build on our successes, to keep doing the increasingly difficult – or to lower our sights and compromise our goals. We, as humans, either are an exploring species or we slowly die never having realized our true destiny.

It is time for Homo sapiens to escape the earthly bonds that throughout our existence have constrained us to a single solitary rock in the vastness of the universe and to move outward realizing our destiny as an exploring species; to finally take our first steps on our way to the stars.

That’s what I think, what about you?

[1] John Fitzgerald “Jack” Kennedy was an American politician who served as the 35th U.S. President from January 1961 until his assassination in November 1963.
