One hot topic recently is that Apple released its new ARM chip -- the M1. It is not the first time Apple has designed chips -- Apple has successfully designed chips for its iPhones and iPads. Nor is it the first time Apple has used non-Intel chips in its Macs -- Macs have only run on Intel chips since 2006.
Then why is it important? In short, this is a declaration of war from Apple on Intel and a game-changer for the Reduced Instruction Set Computer (RISC) in performance-sensitive applications.
What are Instruction Sets?
Developers use chip instruction sets to communicate with computer chips. Metaphorically, chip instruction sets are similar to the alphabets of human languages.
There are only twenty-six characters in English, but more than three thousand in Chinese. Similarly, the size of chip instruction sets varies. Reduced Instruction Set Computer (RISC) refers to building chips around a small instruction set of simple operations. In contrast, Complex Instruction Set Computer (CISC) refers to designs that use an extensive instruction set. (Please see here for more descriptions)
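To make the distinction concrete, here is a toy Python sketch of the two styles. The instruction names (`cisc_add_mem`, `load`, `add`, `store`) are illustrative, not real x86 or ARM mnemonics: a CISC design can fold a memory-to-memory add into one complex instruction, while a RISC design expresses the same work as a short sequence of simple register-based steps.

```python
# Toy model only: hypothetical instructions, not real ISA semantics.
memory = {0x10: 7, 0x14: 5}       # a tiny byte-addressed memory
registers = {"r0": 0, "r1": 0}    # two general-purpose registers

# CISC style: a single complex instruction reads and writes memory directly.
def cisc_add_mem(dst_addr, src_addr):
    """One instruction: memory[dst] += memory[src]."""
    memory[dst_addr] = memory[dst_addr] + memory[src_addr]

# RISC style: only load/store touch memory; arithmetic is
# register-to-register, so the same work takes several simple instructions.
def load(reg, addr):
    registers[reg] = memory[addr]

def add(dst, a, b):
    registers[dst] = registers[a] + registers[b]

def store(reg, addr):
    memory[addr] = registers[reg]

# CISC: the whole operation is one instruction.
cisc_add_mem(0x10, 0x14)          # memory[0x10] becomes 12

# RISC: reset and redo the same operation as four simple instructions.
memory[0x10] = 7
load("r0", 0x10)
load("r1", 0x14)
add("r0", "r0", "r1")
store("r0", 0x10)
print(memory[0x10])               # 12 again, via load/add/store
```

The trade-off this hints at: the CISC chip must decode and execute a more complicated instruction, while the RISC chip executes more, but simpler, instructions -- which are easier to pipeline and cheaper in energy.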
A little history
Early computer chips were all CISC, and most were designed by Intel. In the 1980s, there was a movement toward reducing the instruction set. ARM was founded in this period, and the Apple–IBM–Motorola alliance built the PowerPC chips for Macintosh computers.
On the other side of the table, the Windows-Intel alliance (a.k.a. "WinTel") kept investing heavily in CISC. The rest is history: WinTel crushed Apple computers in personal computing, and Apple had to switch to Intel chips in 2006. ARM survived only in the then-niche market of embedded and IoT devices, thanks to its energy efficiency.
Then came the mobile Internet era, ushered in by Apple's iPhone. ARM was appealing for those applications because people care about the battery life of smartphones. As a result, ARM captured 90% of the market share for mobile processors. Intel lost the mobile war because it suffered from the Innovator's Dilemma and wasn't willing to risk upsetting its existing CISC business.
Despite ARM's success in mobile phones, Intel still holds the crown for applications that require high performance. Many people think this is due to CISC's inherent superiority in high-performance computation, and that Intel is therefore safe in those fields.
With the release of the M1, Apple declared this assumption wrong. Intel maintained CISC's advantage in high-performance applications through massive investment, and until now there was no significant player who could compete.
Except for Apple. Some early users report that the M1's performance could be comparable to NVIDIA's popular 1080Ti GPU. The TensorFlow team has also shown that the new M1 chips could outperform many workstations on AI applications, which have the highest computation requirements.
What's more, Apple has a great track record of disrupting industries. Many ARM manufacturers will follow Apple's path and optimize ARM for high-performance applications, and they are eager to do so, given that the mobile phone market is saturating.
Besides, NVIDIA has agreed to acquire ARM. The merger would give both companies an edge in the age of AI. The road ahead for Intel is not rosy. Will the aging titan be able to hold its position? It's hard to say. But one thing is sure: more competition in the field is a great thing for companies in downstream areas like Cloud and AI, which would benefit from increased computing power and reduced costs.
Note: There is an interesting podcast from A16z about Apple Silicon. 16 Minutes #46: Apple Silicon — A Long Game, Changing the Game
Note: Although the M1 is very promising, if you are an ML researcher, please wait a few months before you decide to upgrade to Big Sur or an M1 chip. Many libraries are not yet compatible with the new system (tweet)