New processor competition in Apple M1 and Amazon's AWS Inferentia
posted Saturday Nov 14, 2020 by Scott Ertz
Over the past year, Intel's place as THE processor maker has continued to slip away as its innovation has stalled. Consumer devices have become less computer-focused and moved to phones and tablets, both generally powered by ARM chips. Even for those looking for a laptop, Windows now supports the ARM architecture through a partnership with Qualcomm. And in the traditional PC space, AMD has continued to creep up and steal market share with its Ryzen line, which has begun to outperform Intel's chips for less money while using less power.
AI workloads run better on GPUs than CPUs, putting NVIDIA in the lead for that industry and winning it a large chunk of new server construction. But NVIDIA is at heart a video card company and has struggled to make the transition to AI in a meaningful way, leaving the door open there for new competition.
But these are not the only challengers to the traditional processor business. Apple announced this year that it would begin migrating all of its computers from Intel processors to its own Apple Silicon. Even Amazon is getting in on the game, developing its own server chips for AWS.
Apple is no stranger to shaking up its processor infrastructure for the Mac. Original-era Macs ran on the Motorola 68000 series. Those were replaced by the second-era PowerPC processor line, built by Motorola and IBM (the irony there was palpable). The third era has been powered by Intel chips, dating back to the Core Solo and Core Duo lines.
This year, Apple will launch the fourth era of Macs, powered for the first time by its own Apple Silicon. The company has worked to become a contender in this space by building the chips that power the other products in its primary business - the iPhone, iPad, and Apple Watch. That learning has led to the Apple M1, which will power the first generation of new Macs, including the MacBook Air, MacBook Pro, and Mac Mini.
These new chips, siblings of the A14 chips in Apple's other products, are built on an ARM architecture license. The M1 contains eight cores, but not eight identical ones: it is effectively two 4-core clusters in one package. One cluster is designed for high-draw workloads, while the other is designed for casual, low-power tasks. This heterogeneous design, similar to the big.LITTLE approach ARM-based mobile processors have used for years, dedicates some cores to low-draw work with the intention of extending battery life. Apple claims 15 hours of web browsing and 18 hours of video playback, but we'll wait to see what LAPTOP Mag says.
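To make the split concrete, here is a toy Python sketch of how work might be routed between the two clusters. This is a conceptual illustration only, not Apple's actual scheduler; the quality-of-service labels and routing rule are simplified stand-ins.

```python
# Toy illustration of heterogeneous (performance + efficiency) core scheduling.
# NOT Apple's scheduler - the QoS labels and routing rule are simplified.

PERFORMANCE_CORES = 4  # high-draw cluster in the M1
EFFICIENCY_CORES = 4   # low-draw cluster in the M1

def assign_cluster(task_qos: str) -> str:
    """Route a task to a cluster based on a coarse quality-of-service label."""
    if task_qos in ("user-interactive", "user-initiated"):
        return "performance"  # latency-sensitive work goes to the fast cores
    return "efficiency"       # background work runs on the low-power cores

tasks = {
    "video-export": "user-initiated",
    "mail-fetch": "background",
    "ui-render": "user-interactive",
    "spotlight-index": "background",
}

placement = {name: assign_cluster(qos) for name, qos in tasks.items()}
for name, cluster in placement.items():
    print(f"{name} -> {cluster} cluster")
```

The payoff of this design is that background chores like mail fetching never spin up the power-hungry cores at all, which is where the battery-life gains come from.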
The M1 also contains an 8-core GPU, hopefully bringing some real graphics capability to the Mac. Apple's portable products have long suffered from less than impressive graphics performance, leading game developers to often skip the Mac entirely. Unfortunately, Apple has shown no focus on gaming for these new M1-powered computers, so don't expect the problem to get better in this generation - in fact, expect it to get worse.
The M1 also has dedicated processing capabilities for AI, which could allow Siri to become more useful on the new era of Macs.
But do not let Apple's marketing fool you - these new Macs are not more powerful than 98% of Windows laptops. You knew that was a lie when they said it, and so did everyone else, including Apple. The company has refused to publish any information about how it arrived at that claim, including the tests run, the benchmarks used, and the computers compared against. This is not to say the M1 Macs aren't a huge improvement over the current Macs, but making such a wild and baseless claim is ridiculous.
Amazon AWS Inferentia
While the Apple M1 adds dedicated silicon for AI, Amazon is going a step further. Its new AWS Inferentia chips are designed specifically for its own AI inference workloads. The chip is an Application-Specific Integrated Circuit (ASIC), a chip built for a very specific task load rather than something more universal like a standard GPU. That means no cycles are wasted on general-purpose overhead, making the chips more efficient for their purpose.
Amazon looks to use these chips to reduce its reliance on NVIDIA for AWS infrastructure. NVIDIA's AI accelerators are generally modified GPUs, meaning there is overhead in these tasks, though they are far more efficient than a standard CPU. The rollout will initially target Alexa workloads. The goal is to speed up the already impressively fast responses from Alexa, which are almost entirely processed at the server level - in fact, only the wake word is processed on-device. Sébastien Stormacq, an AWS developer advocate, calculates a 30 percent cost reduction and a 25 percent performance increase using the new chips.
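Taken at face value, those two percentages compound when measured as cost per inference. Here is a quick back-of-the-envelope calculation; the baseline dollar and throughput figures are made up for illustration, and only the two percentages come from the claim above.

```python
# Back-of-the-envelope effect of a 30% cost cut plus a 25% throughput gain
# on cost per inference. Baseline figures are hypothetical, not AWS pricing.

baseline_hourly_cost = 1.00   # hypothetical $/hour on GPU-backed instances
baseline_throughput = 1000    # hypothetical inferences per second

new_hourly_cost = baseline_hourly_cost * (1 - 0.30)  # 30% cheaper
new_throughput = baseline_throughput * (1 + 0.25)    # 25% faster

# Cost per one million inferences, before and after
cost_per_million_old = baseline_hourly_cost / (baseline_throughput * 3600) * 1e6
cost_per_million_new = new_hourly_cost / (new_throughput * 3600) * 1e6

savings = 1 - cost_per_million_new / cost_per_million_old
print(f"old: ${cost_per_million_old:.4f} per million inferences")
print(f"new: ${cost_per_million_new:.4f} per million inferences")
print(f"combined savings per inference: {savings:.0%}")
```

Because the cheaper chip also does more work per hour, the two improvements multiply: a 30 percent cost cut on 25 percent more throughput works out to roughly a 44 percent drop in cost per inference, not just 30.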
Amazon will also allow developers to access the increased performance of its new chips through the AWS Neuron SDK, bringing the power of Inferentia to models built in machine learning frameworks like MXNet, PyTorch, and TensorFlow.