Today, we’re thrilled to announce our investment in Graphcore and its two co-founders, Nigel Toon and Simon Knowles. Graphcore secures the lead in the global AI chip race with $200 million in new capital from leading financial and strategic investors including Atomico, BMW i Ventures, Microsoft, Merian Chrysalis, Pitango, Sequoia and Sofina.
Graphcore is a UK-based private company, founded in 2016, that is developing an Intelligence Processing Unit (IPU) which can improve the performance of machine intelligence training and inference by 10x to 100x compared to currently available solutions. The company is in a stage of rapid global growth, having tripled the size of its team and opened new offices in London, Palo Alto and Beijing in 2018.
AI is advancing at a phenomenal rate, driven by deep/machine learning applications such as speech and image recognition, video surveillance, customer service, network monitoring and automotive. Annual worldwide AI software revenue is forecast to grow from $5.4bn in 2017 to $108.8bn by 2025, according to the Tractica Artificial Intelligence Market Forecast. AI compute is estimated to account for 1-2% of public cloud service sales today, growing to about 30-40% of that revenue by 2025 [Tractica].
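As a back-of-envelope sanity check, the forecast figures above imply a compound annual growth rate (CAGR) in the mid-forties. A minimal sketch, using only the numbers cited from the Tractica forecast:

```python
# Implied compound annual growth rate (CAGR) of the AI software market,
# from the Tractica figures quoted above: $5.4bn (2017) to $108.8bn (2025).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction."""
    return (end / start) ** (1 / years) - 1

growth = cagr(5.4, 108.8, 2025 - 2017)
print(f"Implied CAGR: {growth:.1%}")  # roughly 45.6% per year
```

A market compounding at roughly 45% per year is the backdrop for the hardware demand discussed next.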
Driven by the compute needs of AI software, a surge in demand for specialized hardware accelerators is imminent. The global AI chip data center revenue currently generated by GPUs, CPUs and FPGAs (Field Programmable Gate Arrays) amounts to over $4bn. This volume is expected to expand to more than $11bn by 2025 (BMW i Ventures estimate), driven mostly by sales of purpose-built AI chips such as Graphcore’s IPU, posing a threat to Nvidia’s dominance.
The market is expected to double by 2021, so now is the time to release products and gain market share. 2019 and 2020 are likely the years when deep learning chipset volumes will ramp up and “winners” will begin to emerge.
A “winner” in the chipset market can rise very rapidly. Looking back to the year 2000, when Intel launched a data center offensive with its x86 CPU architecture, there was a clear dominant technology in the market for server CPUs: IBM’s “Power” family of processors, which owned 40%+ market share, corresponding to more than $2bn of annual sales. The other leading technology was Sun Microsystems’ “Sparc”.
Jumping to 2018, “Power” and “Sparc” together hold less than 5% of the data center CPU market, while Intel’s x86 processor family has emerged as the clear leader (see the Intel x86 data center market share development below).
For reference, there is also a distinct dominant technology in 2018: Nvidia’s family of GPUs, which owns 70%+ of the data center AI chip market, corresponding to approximately $3bn of sales expected in 2018. From 2000 to 2005, Intel grew at a phenomenal rate from about 0% to 57% market share for data centers, with revenue totaling well over $1bn [BMW i Ventures estimate]. While Graphcore’s story may unfold with even higher velocity due to the unique characteristics of today’s much faster growing AI chip market, Intel’s path shows how quickly the adoption of a new architecture can evolve in the processor world.
A decade ago, the overall CPU data center market was growing at a much slower pace than today’s AI chip market, which meant that high sales growth also translated into high market share gains. While we may not observe market share gains at quite that rate, we expect a similar pattern to develop in the market for purpose-built AI processors, with Graphcore well positioned to lead thanks to an exceptional combination of superior technology, team and timing.
We believe Graphcore has the right team and product at the right time to succeed in an emerging market. The company has developed a novel AI accelerator chip (IPU) that is tailored to enable the machine intelligence of tomorrow. Unlike currently available CPUs, GPUs and FPGAs, Graphcore’s IPU is purpose-built for machine/deep learning algorithms and delivers an order of magnitude better performance. IPUs provide more compute per watt than CPUs and GPUs, and have also been designed to be extremely efficient at mini-batch training.
An important differentiator of Graphcore’s IPU is its innovative approach to memory. Computing architectures have traditionally relied on external RAM (Random-Access Memory) for program and data storage. This means the variables for a computation have to be fetched from external RAM, which is very energy-intensive and makes the memory interface between processor and RAM the bottleneck that limits overall system performance. Graphcore’s team decided to store the variables (i.e. the weights of a neural net) on the processor itself rather than in external RAM. This results in a 100x+ increase in memory bandwidth compared to traditional architectures.
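To see why keeping weights on-chip matters so much, consider the energy cost of moving data. The sketch below uses illustrative order-of-magnitude figures commonly cited for this class of silicon, not Graphcore’s published numbers; the model size and per-access energies are assumptions for the sake of the arithmetic:

```python
# Illustrative (not Graphcore-published) energy figures: an off-chip DRAM
# access is commonly estimated to cost on the order of 100x more energy
# than an on-chip SRAM access.
ONCHIP_SRAM_PJ = 10.0      # assumed energy per on-chip access, picojoules
OFFCHIP_DRAM_PJ = 1300.0   # assumed energy per off-chip access, picojoules

def fetch_energy_joules(num_weights: int, picojoules_per_access: float) -> float:
    """Energy to read every weight once, in joules."""
    return num_weights * picojoules_per_access * 1e-12

weights = 100_000_000  # hypothetical 100M-parameter model

offchip = fetch_energy_joules(weights, OFFCHIP_DRAM_PJ)
onchip = fetch_energy_joules(weights, ONCHIP_SRAM_PJ)
print(f"off-chip: {offchip:.3f} J per pass, on-chip: {onchip:.4f} J per pass")
print(f"energy ratio: {offchip / onchip:.0f}x")
```

Whatever the exact constants, the ratio is what matters: every weight fetch that stays on-chip avoids a two-orders-of-magnitude energy penalty, which is the intuition behind the in-processor memory design described above.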
CPUs and GPUs typically have tens to hundreds of separate processor cores. Graphcore’s first IPU processor, Colossus, has 1,216 independent processor cores on each chip. The IPU’s low power consumption allows two IPU chips to fit onto a single 300W PCIe card, resulting in over 2,400 independent processor cores capable of running more than 14,000 independent IPU program threads, all working in parallel with minimal latency because the knowledge models are held in in-processor memory.
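The quoted figures are easy to cross-check. Using only the numbers stated above:

```python
# Quick arithmetic on the Colossus IPU figures quoted above.
CORES_PER_CHIP = 1_216     # independent processor cores per chip
CHIPS_PER_CARD = 2         # IPU chips on a single 300W PCIe card
THREADS_PER_CARD = 14_000  # "more than 14,000" program threads per card

cores_per_card = CORES_PER_CHIP * CHIPS_PER_CARD
print(cores_per_card)  # 2432, i.e. "over 2,400" cores per card

# This implies each core runs several hardware threads concurrently:
print(round(THREADS_PER_CARD / cores_per_card, 1))  # about 5.8 threads per core
```

So the thread count is not marketing inflation: it follows directly from each core multiplexing a handful of hardware threads.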
To make use of these levels of data parallelism, Graphcore has developed its proprietary software, Poplar™. It determines how the thousands of individual processors on the chip communicate with each other, ensuring that data is moved across the chip efficiently and at the right time and that all available processors are utilized. This leads to significant performance improvements, enabling Graphcore to achieve processing speed gains of 10x or more compared to today’s highest-performance GPUs.
Our Graphcore Investment
The market is driven by the increasing number of artificial intelligence and machine learning applications while being limited by the performance boundaries of traditional compute architectures. This creates large demand for scalable, optimized processing units. Being first to market with a production-ready AI chip and a unique, future-proof design, Graphcore offers an unparalleled opportunity to invest in the potential winner in that space.
So far there are limited high-performance options for central AI computation, especially among companies that have systems ready for evaluation and deployment on the market. Graphcore is selling IPUs to data centers and cloud providers, and the company also has a strong product pipeline aimed at smaller structures and automotive solutions. The combination of using Graphcore’s IPU in a data center and the possibility of utilizing the same architecture in a vehicle is very attractive. In the data center, IPUs can be used for simulation and training, while the same architecture (certified for automotive) can be used in the car for inference, resulting in quicker turnaround times and less complexity during the development phase. Furthermore, the ability to run different kinds of neural nets with equally high performance on an IPU is beneficial: flexible hardware makes it possible to commit to a hardware design early while continuing to innovate on the algorithms.
This latest round of funding will allow Graphcore to execute on its product roadmap, accelerate scaling and expand the company’s international footprint. It is a further step towards fulfilling the team’s ambition to build an independent global technology company, focused on the new and fast-growing machine intelligence market.
The race for the dominant architecture in the AI chip market has just begun and there is no clear winner yet. Graphcore is first to market with a production-ready dedicated chipset for today’s and tomorrow’s AI computing needs.
Graphcore’s chipsets are computationally powerful and optimized for machine learning, allowing high throughput at very low latency. With its versatility and flexibility, Graphcore’s IPU, which supports multiple machine learning techniques with high efficiency, is well-suited to a wide variety of applications, from intelligent assistants to self-driving vehicles. It is equally good at training and inference, allowing its use in a data center as well as in a vehicle.
Graphcore has the potential to become the winner in this space. We look forward to supporting Nigel, Simon and the Graphcore team in building a major global technology company that can help innovators in AI create the next generation of machine intelligence.
P.S.: Special thanks to our interns Marco Linner and Marie Tai for their help with the diligence.