J. Other issues: Business models
(RFI question 11)

Last updated July 28, 2016

In market-driven economies, progress also crucially depends on the creation of new business models that reward more effective outcomes and broader benefits to society.

Acceptance of a technology as far-reaching and influential as AI will require a high level of public trust. This trust will not come overnight, and there will be setbacks if and when AI systems fail – no matter where the fault lies. A failure may stem from misuse or from something inherent in the tool itself, but social acceptance of AI will require transparency, careful assessment of risk, and a public understanding of the benefits strong enough that people judge them worth the mistakes and persevere.

Business Models: The technological capabilities of AI systems are advancing rapidly, while regulatory policies change much more slowly. The adoption of AI systems is determined in part by business models. So far, a trusted economic model (the open API economy and its business models) for secure cognitive transaction platforms (blockchain, crypto-ledgers) is not in place. This is a people-centered systems-redesign challenge that depends on trust to succeed. Once such a trusted economic platform is in place, people will be better able to build, understand, and work with cognitive systems; cognitive systems will at last have become somewhat like trusted social software organisms, with social intelligence rising to higher levels, comparable to the capabilities of dogs and horses. Trusted, well-working cognitive engines, like working steam engines before them, will transform and dramatically grow the US and world economy.

According to a 1994 book by cognitive psychologist Don Norman, knowledge, technology, and organizations are three ways people augment themselves to become smarter. The engineering, social, and managerial sciences together have an important role to play going forward. From cognitive tool to assistant to collaborator to coach to mediator: this progression reflects growing trust as much as advancing technological capability. It is worth pursuing. In a complex world where data is the new natural resource, cognitive assistants for all occupations and cognitive mediators for all people, that is, augmented intellect available to all, are the goal of people-centered system redesign in the cognitive era of advanced AI technologies.

Trust takes time, and there will be mistakes: In the 19th century, people did not trust the first steam engines and boilers, because they were prone to exploding without warning. Over time, design and engineering improved, trust grew, and the huge economic potential of steam was realized. For example, a 2011 article by economist Brian Arthur noted: “In 1850, a decade before the Civil War, the United States’ economy was small—it wasn’t much bigger than Italy’s. Forty years later, it was the largest economy in the world. What happened in between was the railroads.”

A further example, where trust was never fully gained, is arguably nuclear power, which is now in retreat despite its potential to help curb greenhouse gas emissions. The economic case for nuclear power has certainly proved hard to make, but one has to suspect that after accidents in the UK, US, Ukraine, and Japan, and after years of dispute in many countries over where to store nuclear waste, the appetite to make that case has considerably diminished; a number of countries are now reducing or removing their nuclear power capacity altogether.

At present, our belief is that people are increasingly inclined to distrust many forms of AI, whether out of fear of their social impact or under the influence of stories such as the recent fatal accident involving a Tesla car in self-drive mode, or, before that, an accident involving a Google car that was determined to be the car’s “fault”. The fact that the Tesla was being used in a manner contrary to the company’s recommendations, and the fact that the Google car had been in a number of previous accidents all determined to be the fault of the human drivers around it, are irrelevant from a trust point of view. Trust is a social construct, and if the sensational publicity around these events has predisposed people not to trust these forms of AI, then self-driving cars face an uphill struggle for acceptance. As the Thomas theorem has it, “if [people] define situations as real, they are real in their consequences”.

The path to trust for all forms of AI requires some of the factors defined elsewhere in this response: algorithmic transparency, openness, clear governance frameworks, ethical application of the technology, and careful management of risk. Ensuring these qualities will in turn require concerted collaboration within the technology industry, and with universities and governments.
