Chip Wars and the Race for Artificial Intelligence

Radical transformation is coming in many fields of human culture and endeavor.  No, this is not just boilerplate praise of ongoing incremental technological innovation.  The only way to grasp the magnitude and significance of these incipient changes is to imagine yourself living during the time when steam and then electricity opened continents, shrank the world, and destroyed and rebuilt entire societies and ways of life.  With that imaginative exercise we can at least glimpse the transformation that has already begun, but has not yet become fully apparent.

We’ve written about a few aspects of this revolution in the past months: about the acquisition of machine-learning firm DeepMind by Alphabet, Inc. [NASDAQ:  GOOG]; about Elon Musk’s new venture to explore the incorporation of physical computer hardware into the human brain; about an inflection point in the arrival of autonomous, self-driving cars; and about unexpected innovations that are pushing Chinese social media and e-commerce companies ahead of their Western peers.  All these facets of the coming transformation point in one direction: artificial intelligence (AI).

          Here Comes the Singularity

There are many definitions of AI.  (Some of the best explorations of AI issues have come through science fiction.  We’re looking forward to the upcoming Blade Runner sequel, even though we think no one could do justice to Rutger Hauer’s performance as the replicant leader Roy.)  However, there is one definition that reveals why AI represents a change as momentous as steam power or electricity, or perhaps more so: the definition that revolves around “machine learning.”

Futurist Ray Kurzweil — hired by GOOG in 2012 to direct work on machine learning and natural language understanding — refers to the arrival of a “technological singularity.”  He’s borrowing a term from astrophysics for the heart of a black hole, whose gravitation is so strong that nothing crossing the surrounding event horizon can escape its pull.  Go across that threshold, and there’s no return — just an inescapable, exponential acceleration into the unknown.

To Kurzweil, the technological singularity will be brought about by the arrival of computers that can redesign and reprogram themselves.  Once they have that ability, Kurzweil believes, their development will no longer be constrained by a human pace of innovation.  Each cycle of improvement will accelerate the next, potentially creating a runaway sequence of self-improvement.  Machine superintelligence could rapidly transcend human intelligence and become autonomous and probably incomprehensible to us — in much the same way that human intelligence is incomprehensible to animals.  Many figures — from tech entrepreneurs like Elon Musk to world-class physicists like Stephen Hawking — warn that the consequences of AI could be ruinous, pointing out that there’s no way to know beforehand whether the singularity would usher in a utopia or spell the end of the human race as we know it.  Would our new robot overlords be benevolent, malevolent, or indifferent to us?

          Yes, But Is All That Real?

Kurzweil is a real scientist with a real role in a real company operating on the frontiers of technological development.  Nevertheless, there are many reasons to doubt that the singularity, if it arrives, will unfold the way he imagines.  For one thing, exponential growth is a characteristic of mathematical models that rarely persists in the physical world; constraints of energy, materials, and physics usually emerge to bring exponential curves back to earth.

While Kurzweil’s singularity may not happen as he anticipates, the central development that could drive it — self-programming computers — is already here.

          Deep Learning and Neural Networks

Laypeople usually think of computer programming as an activity in which the programmer gives instructions to the computer, and the computer then executes those instructions.  Machine learning turns that around.  The programmer gives the computer a set of sample data and a desired outcome, and the computer generates from those data its own algorithm, which it can then apply to data it has never seen.
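
To make that reversal concrete, here is a minimal sketch in Python using the open-source scikit-learn library.  The tiny data set is invented purely for illustration; the point is simply that the programmer supplies examples and outcomes, and the machine induces the rules.

```python
# A minimal sketch of the machine-learning paradigm described above,
# using the open-source scikit-learn library.  The toy data set is
# hypothetical, invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Sample data: each row is [hours of daylight, inches of rain];
# the desired outcome is whether a picnic was held (1) or not (0).
samples = [[12, 0], [14, 1], [9, 4], [8, 6], [13, 0], [10, 5]]
outcomes = [1, 1, 0, 0, 1, 0]

# The programmer supplies data and desired outcomes; the computer
# induces its own decision rules from them ("training").
model = DecisionTreeClassifier()
model.fit(samples, outcomes)

# The induced rules can then be applied to data never seen before.
print(model.predict([[11, 2]]))  # the model's own inference, e.g. [1]
```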

While the idea of machine learning has been around since the 1960s and ’70s, the tipping point came with the emergence of extremely large data sets.  These inspired the development of “deep learning” techniques — machine learning carried out by many-layered networks trained on massive data collections.  Deep learning — self-programming machines learning through the study of vast data sets — has given us dramatically better voice and image recognition, responsive search algorithms, the first self-driving cars, and disease diagnosis that exceeds the performance of human doctors, among many other feats.  One of the most promising deep learning technologies loosely models the structure and function of the human brain in software — so-called “artificial neural networks.”
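
For readers who want to see the principle rather than take it on faith, here is a toy artificial neural network written in plain Python with the NumPy library.  It is a deliberately tiny sketch; real deep learning systems apply the same idea across millions of neurons and vast data sets, and every number in it is arbitrary, chosen only so the example runs.

```python
# A toy artificial neural network in plain NumPy, sketched to show how
# layered "neurons" adjust their own connection weights.  Real deep
# learning systems apply the same principle at vastly larger scale.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: signals flow through the layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: the error propagates back, and the network
    # reprograms its own weights -- no human writes the final rules.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(output.round(3))  # approaches [[0], [1], [1], [0]]
```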

With the rollout of incredibly cheap commodity data collectors in virtually all everyday objects, tools, and machines — the so-called “internet of things” — the supply of training data is exploding, and deep learning algorithms are already exceeding human capacities in many disciplines.

What’s remarkable (and somewhat disquieting) is that deep learning is by its nature opaque to human scrutiny of its internal processes.  Deep neural networks are so complex that there may be no comprehensible answer to the question, “Why did the system produce that result?”

The MIT Technology Review notes:

“… by its nature, deep learning is a particularly dark black box.  You can’t just look inside a deep neural network to see how it works.  A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers…  Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does…  If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it.”

Still, the results are so compelling in terms of the value they add that we believe artificial neural networks and deep learning will proceed inexorably, transforming every field of human activity — if perhaps not in the cataclysmic fashion predicted by Ray Kurzweil and feared by Elon Musk.

Here’s an example.  In 2015, researchers at Mount Sinai Hospital in New York applied deep learning to the records of 700,000 patients.  With no hand-coded rules and no guidance from medical experts, “Deep Patient” became much better than any other method at identifying patients who were at risk for a wide variety of ailments, including liver cancer.  The lead investigator said, “We can build these models, but we don’t know how they work.”

In short, not only will self-driving cars drive better than you do, but according to Deep Patient’s investigators, your robot doctor may eventually be better than your human doctor at diagnosing you (at the very least, you as a consumer will demand that your human doctor have a robot consultant).  And in neither case will we necessarily ever be able to say how and why.

          The Components of AI

AI, then, has three basic components: the data, the hardware, and the training systems.  These three components are a fundamental guide for investors who want to know what opportunities the arrival of AI and deep learning will present.

          The Data

Knowing that enormous data sets are the basic raw material for AI deep learning systems explains a lot of otherwise inexplicable behavior.  Ride sharing services?  Their first interest is not ride sharing; it’s the collection of data that deep learning systems will be able to use to create AI self-driving cars.  The vast data troves of personal preferences and online behavior accumulated by the likes of Google [NASDAQ:  GOOG], Facebook [NASDAQ:  FB], and Amazon [NASDAQ:  AMZN], not to mention Chinese giants such as Alibaba [NYSE:  BABA], will serve similar purposes for e-commerce.

Private data sets — for example, medical records — will become increasingly valuable, as will public data sets compiled by governments.  Both will become the focus of citizen activism: private data sets by groups that want to protect privacy, and public data sets by groups that don’t want government-funded data collection to fuel private profits for tech firms.  In turn, those firms will argue, successfully we believe, that the potential benefits to health and welfare that AI will provide mean that they should have data access.  So a key element in evaluating a large tech company’s prospects will be the quality of its relationships with the governments where it operates and its maintenance of public goodwill.  (Uber’s current troubles are a cautionary example.)  If there is a future groundswell of public resentment against the tech giants, this factor could become crucial.

In sum, a basic analytical question in evaluating any company in any industry will be “What is the value of any proprietary data it possesses?”  Find companies with underappreciated data troves.

          The Hardware

The unquestioned king of AI hardware is Nvidia [NASDAQ:  NVDA].  NVDA began as a company specializing in chips for video games.  That specialty ultimately proved to be an unexpected gold mine, as its GPUs (graphics processing units) turned out to be superior to standard chips for AI applications.  NVDA GPUs are the hardware behind the execution of AI in Tesla’s [NASDAQ:  TSLA] autopilot technology, the first step toward its self-driving cars.  They were also the foundation for GOOG’s AlphaGo, the deep learning program that defeated the world’s reigning champion of the Asian strategy game Go.

But let investors not forget that the semiconductor business is merciless.  NVDA’s co-founder and CEO Jensen Huang is careful always to link “AI” and “GPU” together when speaking to the public, because NVDA GPUs are the current state of the art.  The next state of the art, though, is already taking shape.

Nearly all chips in mass production today share the same basic architecture, named after the mathematician John von Neumann, who first described it in 1945.  But von Neumann architecture is not intrinsically suited to deep learning.  New “neuromorphic” chips under development by companies such as IBM [NYSE:  IBM], Qualcomm [NASDAQ:  QCOM], and Intel [NASDAQ:  INTC] mimic human brain structures and processes not just algorithmically, but physically: they are purpose-built for the kinds of computation deep learning requires, and they are far more energy efficient.  They represent one of the first fundamental changes in computing architecture in some 70 years.  IBM’s TrueNorth chip is an example, and the company is working to put it in the hands of researchers and academics — no small feat, since it requires a completely different programming ecosystem.  The winners here will be the companies that succeed in encouraging mass adoption of their chips and are able to harness the resulting network effects.
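
To give a flavor of what “physically mimicking the brain” means, here is a software simulation of a spiking neuron, the kind of unit that neuromorphic designs implement directly in silicon rather than emulating on a conventional CPU.  This is only an illustrative sketch in Python; the model and all its parameters are our own simplification, not taken from TrueNorth or any actual chip.

```python
# A sketch of a "leaky integrate-and-fire" spiking neuron -- the style of
# unit neuromorphic chips build in hardware instead of simulating on a
# von Neumann CPU.  All parameters are illustrative, not from a real chip.
import numpy as np

leak, threshold, potential = 0.9, 1.0, 0.0
rng = np.random.default_rng(1)
spike_times = []

for t in range(50):
    current = rng.uniform(0.0, 0.3)         # random incoming signal
    potential = leak * potential + current  # membrane charge decays, then accumulates
    if potential >= threshold:              # the neuron fires only when charged enough
        spike_times.append(t)
        potential = 0.0                     # reset after firing

print("spike times:", spike_times)
# Such hardware is event-driven: energy is consumed mainly when spikes
# occur, one reason neuromorphic designs can be far more power efficient.
```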

In short, companies such as NVDA may be on top today because GPUs came at an opportune moment in the development of machine learning — but neuromorphic chips will be the real workhorses of the AI future.

          The Training Systems

Every deep learning system has to be trained — it must do its equivalent of human learning to develop algorithms from the data sets it’s given.  Here GOOG is following its usual playbook.  It has released TensorFlow, its deep learning framework, as open source — and it is making cloud access available to the custom chips it calls Tensor Processing Units (TPUs), which are built to run TensorFlow workloads.  It’s the same strategy GOOG used with Android: use open source to drive customers to higher value-added products.  In a world of rapidly inflecting AI adoption, this could be a game-changer for GOOG’s cloud presence.  GOOG’s new TPUs train neural networks several times faster than existing systems, which could help the company challenge AMZN’s cloud dominance.
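
For a sense of what “training” looks like in practice, here is a minimal sketch using the open-source TensorFlow library through its Keras interface.  The data are random numbers with a made-up label, purely to show the train-then-predict workflow; a real application would substitute a genuine data set (and, on GOOG’s cloud, could target TPUs).

```python
# A minimal sketch of training a model with the open-source TensorFlow
# library (via its Keras interface).  The data are random noise with a
# made-up label, purely to demonstrate the workflow.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")  # 1,000 toy samples
y = (X.sum(axis=1) > 10).astype("float32")      # an invented binary label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32)  # the "training" step
print(model.predict(X[:3]))               # apply the learned model
```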

Investment implications:  Artificial intelligence is coming, and it will change everything.  The application of AI requires three basic components.  First, deep learning and artificial neural networks require data for the learning process by which they train themselves to generate algorithms: so in a world of AI inflection, access to data — public, private, or proprietary — will become a key economic variable for company performance.  Companies with data and the capacity to generate it, as well as companies with the political savvy to make use of externally generated public and private data, will benefit.  The second necessary component is hardware.  Today’s chip leaders will likely not be tomorrow’s.  The key is the arrival of neuromorphic chips, which discard legacy chip architecture in favor of new designs intrinsically suited to deep learning.  Look not just for manufacturers, but for those companies that can harness network effects to win dominance in the adoption of their chips and their programming ecosystems.  The final component is the training systems; look for companies able to implement fast and cheap training systems at scale.
