Image from https://beta.dreamstudio.ai/dream
“But the overall pattern is clear: in case after case, when a model can be created and tested, it tends to perform as well as, or better than, human experts making similar decisions. Too often, we continue to rely on human judgment when machines can do better.” — Andrew McAfee, Erik Brynjolfsson [1]
The writer David Foster Wallace tells the story, in “This Is Water”, of two young fish who do not know what water is. He makes the point that the most important realities are often completely invisible to us (and will remain that way if we let them). [2] The famous mathematician and philosopher Bertrand Russell offered an answer, “what science cannot discover, mankind cannot know”, but he was wrong. Fritjof Capra and Pier Luigi Luisi provide one example of how Russell was wrong:
“Emergence results in the creation of novelty, and this novelty is often qualitatively different from the phenomena out of which it emerged.” [3]
Another proof that Russell was wrong is the rapidly evolving field of Generative AI. The remainder of this article discusses this form of artificial intelligence (AI) and what it means for the future…of mankind.
In the last two weeks, three staggering articles showed up in my reading. The first was from the renowned venture capital firm Sequoia Capital, in which they gave notice that the business model was yet again being redefined by AI. To quote Sequoia:
“The best Generative AI companies can generate a sustainable competitive advantage by executing relentlessly on the flywheel between user engagement/data and model performance.” [4] — Sequoia Capital
BCG’s four models were no longer the way to think about current business models. [5] Generative AI was changing the basis of competition!
The next article was from the international consulting firm McKinsey, an interview with Stanford Professor and MacArthur Genius Daphne Koller. In the article Koller talks about how Generative AI is enabling researchers to take medicine to a whole new level of fundamental science: the Generative AI is producing emergent results that scientists have not seen before. Effectively, we have reached the point where the machines are producing creative insights not previously documented by humans. [6][7] Koller says that this ability to abstract from reality will change the understanding and practice of medicine.
The third writing was from a group of Canadian academic economists. In their book, Power and Prediction: The Disruptive Economics of Artificial Intelligence, Professors Agrawal, Gans and Goldfarb document their thinking that “every problem is an information problem”. The professors illustrate their conclusion by examining most governments’ approach to COVID as a “health problem”, an approach that created a huge economic and mental health cost. Had governments built new simulations of the spread of COVID early on, using the latest AI technology, the key disease sources would have been identified and isolated earlier, and the spread of the disease would have been reduced faster. Using new models rather than dated technology would have produced more insight and probably resulted in only the disease carriers being home bound.
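As a toy illustration of the economists’ point, here is a minimal sketch of what such a simulation looks like: a classic SIR compartment model in Python, with invented parameters, far simpler than the AI-driven models the professors have in mind, but enough to show how modeling the spread of a disease turns a “health problem” into an information problem.

```python
# A minimal, illustrative SIR epidemic simulation. All parameter values
# here are invented for demonstration; real policy models are far richer.

def simulate_sir(population=1_000_000, infected=100.0,
                 beta=0.3, gamma=0.1, days=180):
    """Discrete-time SIR model: beta = infection rate, gamma = recovery rate."""
    s, i, r = population - infected, infected, 0.0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

# The infection peak tells a policymaker when hospital demand will crest.
peak_day, _, peak_infected, _ = max(simulate_sir(), key=lambda row: row[2])
print(f"Infections peak around day {peak_day}: {peak_infected:,.0f} people")
```

Even this toy version shows the shift in mindset: instead of waiting for empirical case counts, the model generates the information needed to target interventions.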
Microsoft explains this new approach well. [8]
· “The data that is used to train the neural networks [AI] itself comes from numerical solution of the fundamental equations of science rather than from empirical observation.
· We can view the numerical solutions of scientific equations as simulators of the natural world that can be used…to compute quantities of interest in applications.”
To summarize, “the machine is generating something new rather than analyzing something that already exists”. [9]
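A minimal sketch of that “simulator as data source” idea, with the equation and the learning method chosen purely for illustration (actual pipelines like Microsoft’s are far more sophisticated): numerically solve a simple physical equation, here a damped harmonic oscillator, then fit a model to the solver’s output instead of to empirical measurements.

```python
import numpy as np

# Generate training data from the numerical solution of a physical
# equation (x'' = -k*x - c*v), rather than from empirical observation.
def simulate(x0=1.0, v0=0.0, k=4.0, c=0.4, dt=0.01, steps=2000):
    """Forward-Euler integration of a damped harmonic oscillator."""
    states, x, v = [(x0, v0)], x0, v0
    for _ in range(steps):
        a = -k * x - c * v
        x, v = x + v * dt, v + a * dt
        states.append((x, v))
    return np.array(states)

states = simulate()
X, Y = states[:-1], states[1:]   # learn the one-step transition map

# A linear least-squares fit stands in for a neural network here;
# for this linear ODE, a linear map is exactly the right hypothesis class.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("Learned transition matrix:\n", W.T)
```

The trained model can then answer questions about the system directly, which is the sense in which the simulator, not observation, supplies the data.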
The remainder of this article talks about how we got to Generative AI, “every problem is an information problem” and what it means for the future. To understand how the VCs, consultants and academics all simultaneously arrived at the same realizations about Generative AI and problem solving, we first need to review some history. Specifically, we need to review the contributions of Claude Shannon, John Wheeler and Brian Arthur.
Claude Shannon was probably the most famous researcher to work at Bell Labs. Shannon developed Information Theory, the mathematical, scientific and engineering basis for the Digital Age. “Information theory is the scientific study of the quantification, storage, and communication of information…and involves the application of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering.” [10] Shannon drew on the 2nd Law of Thermodynamics to show the relationship between information and uncertainty. The universe is moving toward increasing disorder and uncertainty, which we call entropy. Negative entropy, the reduction of uncertainty, represents information, both at the microscopic level of sub-atomic particles and at the macroscopic level we perceive, such as temperature, force or volume. What this means is that energy and matter, at both levels, can be understood as information.

Viewing reality at all levels as information frees us from the constraints of our organic cultural upbringing and makes possible this current age of transdisciplinary, non-linear, networked connectivity. This fundamental transition, from a reality shaped by energy and matter to a reality explained in terms of information, was the single foundational principle of the Digital Age that began in the late 1950s. Thank you, Claude Shannon.
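Shannon’s central quantity, entropy, is simple enough to compute in a few lines. A quick illustration in Python of the formula H = Σ p · log2(1/p), which measures uncertainty in bits:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: H = sum of p * log2(1/p)."""
    counts = Counter(message)
    total = len(message)
    return sum((n / total) * math.log2(total / n) for n in counts.values())

# A predictable message carries no information; a varied one carries more.
print(shannon_entropy("aaaaaaaa"))   # 0.0 bits: no uncertainty at all
print(shannon_entropy("abcdefgh"))   # 3.0 bits: the maximum for 8 symbols
```

Receiving a symbol from the second message resolves three bits of uncertainty; receiving one from the first resolves none. That reduction of uncertainty is what Shannon meant by information.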
In 1989, the famous physicist John Wheeler published a short essay titled “Information, Physics, Quantum: The Search for Links”. Its purpose was to explain quantum mechanics, information theory and existence, a modest undertaking. In the essay Wheeler coins the now famous expression “It from Bit”: the concept that reality (It) can be explained through the fundamental binary framework of the bit (0/1, or yes/no) popularized in computer science and, long before that, by Aristotle. All reality, therefore, is simply information. Wheeler’s essay explained, perhaps more accessibly, Shannon’s point that reality can be understood in terms of information.
If the seminal contributions of Shannon [1948] and Wheeler [1989] were recognized, why did it take another 30+ years to realize that “all problems are information problems”? The short answer is that for at least 40,000 years our instincts and culture have reinforced the idea that knowledge and problem solving are based in our empirical data, our perception of reality. Fortunately, we have the retired Stanford economics professor Brian Arthur to explain why it took another 30 years after John Wheeler’s essay to change the dated Cartesian epistemology that evolution provided us.
Brian Arthur was a founder of the Santa Fe Institute, perhaps the leading research institute in the U.S. on the application of complexity to the physical, natural and social sciences (including economics). Arthur’s research showed that technology appears to solve the problems of its times, and usually it is a pairing of several technologies in a paradigm. So, what was the technology paradigm required to solve problems in a reality defined by information? It was the coincident combination of Artificial Intelligence, Cloud Computing and the Internet of Things (IoT). This combination enabled us to capture data, store it and extract from it for use with AI at a scale originally measured in petabytes and now in exabytes (a 1 followed by 18 zeros). The original developers of AI thought the limiting factor was computing power. It turned out we needed to capture more data, store it efficiently (and securely), and then process it effectively. The technology arrived around 2005–2006, perhaps with the launch of the AWS cloud service, and was in wide use by about 2015.
What I have hopefully introduced is the idea that reality is an abstract, logical, computational system processing information, and that Generative AI has given us new tools to understand this reality. You may not recall that Galileo, Kurt Gödel, John von Neumann and, more recently, the physicist Max Tegmark, to name a few luminaries, all held a similar view. I will refrain from saying that Generative AI is the basis for a third school of epistemology, but I am tempted.
Philosophy and physics are intriguing, but neither field is considered very practical. We should focus on the question of how Generative AI is going to shape the future and what skills will be required in this new world. The Sequoia quote at the beginning of the article gives us much guidance on the application of this AI, whether we work in government, non-profits, academia or the private sector. To state the guidance from Sequoia again:
“The best Generative AI companies can generate a sustainable competitive advantage by executing relentlessly on the flywheel between user engagement/data and model performance.” [11] — Sequoia Capital
The lessons might be:
First, we should not let the technology push us further toward losing sight of a human-centric focus (user engagement). The AI is not responsible for the human consequences of the technology’s emergent findings. We, the humans, are. Don’t blame the AI; blame the people. We need more courses and training in the ethical questions surrounding AI as we shape the customer experience and the interaction between humans and Generative AI.
Second, data should be considered a resource like farmland or capital. We need to acquire data intentionally and thoughtfully, clean and organize it, and store it in the cloud for easy access. Datasets are becoming increasingly valuable. Some commentators say that Microsoft bought LinkedIn and Elon Musk bought Twitter to acquire large consumer datasets, a strategy called “Cloud Capital” to illustrate the importance of large datasets. The National Science Foundation (NSF) and the National Institutes of Health (NIH) also realize the scientific and social value of large datasets and are making significant efforts to organize open-source datasets to support research and commercialization (and rapid response). To manage such datasets well, training in data management needs to begin at the same age as training in computer programming. Data structures, network theory, graph theory, complexity and the principles of cloud computing should be taught in high school rather than treated as esoteric advanced subjects. Datasets should be considered like water: fundamental to life for everyone.
Third, “model performance” points to improving the algorithms themselves. That improvement requires the in-depth study of advanced math, statistics and computer science, and this training also needs to begin well before university given the importance of the subject matter.
Fourth, “competitive advantage” will come from picking better problems (opportunities). What does that mean? AI is going to provide much of the “insight” and creative solution through the emergent process that Capra and Luisi described at the beginning of the article, so the value will lie increasingly in problem selection. The researcher Neri Oxman’s description of creativity includes four domains: science, engineering, design and art. [12] Since the start of the Industrial Revolution, value creation has been based in science and engineering. Today, with AI no longer limited by the empirical data available, value creation will come more and more from design and art. Design here is used in the way that Herbert Simon defined it, [13] as problem solving, and at the heart of problem solving is picking the problem or reframing it. The art produced by Generative AI is fantastic, nearly indistinguishable from human work. Do not be depressed; use this art to tell your stories and sell your ideas more effectively. The venture capital firm Lightspeed puts it well:
“Our thesis for generative AI starts with the belief that storytelling, whether it be about a person, business, or idea, is fundamentally what makes us human…Today, the process of content creation remains manual and difficult…Generative AI has the power to reduce much of this “manual” work and make it more accessible to all.”
What art, design, math and now computing all do is abstract from reality and make it more understandable. Generative AI is a tool for abstraction more powerful than any seen before in human history. We need to change our thinking, education system and values to harness this technology for the betterment of mankind.
I have said for several years that mankind is on the verge of a 2nd Renaissance. Generative AI probably makes that statement true. To help you remember, the 2nd Renaissance will be defined in terms of data science, computer models, abstraction, emergence and design.
[1] Andrew McAfee and Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future
[2] Lessons from David Foster Wallace’s “This Is Water”
[3] Fritjof Capra and Pier Luigi Luisi, The Systems View of Life
[4] Generative AI: A Creative New World (Sequoia Capital)
[5] The Right Time for Deep Tech (BCG)
[6] “It will be a paradigm shift”: Daphne Koller on machine learning in drug discovery (McKinsey)
[7] The article “Deep Learning” by Yann LeCun and Yoshua Bengio explains the foundational concepts of Generative AI
[8] “It will be a paradigm shift”: Daphne Koller on machine learning in drug discovery (McKinsey)
[9] Generative AI: A Creative New World (Sequoia Capital)
[10] Re-examination of Fundamental Concepts of Heat, Work, Energy, Entropy, and Information Based on NGST
[11] Generative AI: A Creative New World (Sequoia Capital)
[12] Neri Oxman, Age of Entanglement
[13] Herbert Simon, The Sciences of the Artificial
This article was originally published on Medium and re-published to TOPBOTS with permission from the author.