Artificial intelligence is all the rage these days, and for good reason. It has the potential to change the world in ways we can't even imagine. However, there are a few challenges that need to be addressed before AI can reach its full potential. Syntiant is leading the way in addressing one of these challenges: deploying artificial intelligence at the edge.
Simply put, edge computing is the deployment of computing resources at the periphery of a network. This can be done in a variety of ways, but the most common scenario is to place compute and storage closer to the devices that are generating or consuming data. This means that devices are not required to individually communicate with the outside world and can instead communicate with a local system. That system can then combine the data from all of the internal devices and push it to the outside world in a single packet.
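As a minimal sketch of that aggregation pattern (the device names and payload format here are hypothetical illustrations, not Syntiant's API), a local gateway might buffer readings from several nearby sensors and forward them upstream as a single message:

```python
import json

class EdgeGateway:
    """Hypothetical local aggregator: devices report to it locally,
    and it pushes one combined packet upstream instead of each
    device contacting the cloud individually."""

    def __init__(self):
        self.buffer = []  # readings collected since the last upload

    def collect(self, device_id, reading):
        """Called locally by each device; no wide-area traffic needed."""
        self.buffer.append({"device": device_id, "value": reading})

    def flush(self):
        """Combine all buffered readings into a single packet."""
        packet = json.dumps({"readings": self.buffer})
        self.buffer = []
        return packet  # in practice, sent upstream in one request

gateway = EdgeGateway()
gateway.collect("temp-01", 21.5)
gateway.collect("temp-02", 22.1)
print(gateway.flush())
```

Batching this way is one of the reasons edge deployments cut bandwidth usage: many small device-to-cloud messages become one periodic upload.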
Moving this type of processing from the cloud to the edge can help in a variety of ways, including increased responsiveness, decreased latency, reduced infrastructure costs, increased privacy and security, and lower bandwidth usage.
There are many benefits to deploying artificial intelligence at the edge, but some of the most important ones are increased battery life, decreased latency, and improved privacy and security.
One of the biggest benefits of edge AI is that it allows devices to use less power. Syntiant's technology brings powerful machine learning from the cloud into even the tiniest devices, thereby greatly increasing battery life while reducing infrastructure costs.
In addition, by deploying AI at the edge, Syntiant is able to reduce latency by up to 100x. This is a huge boon for applications that require real-time responses, such as self-driving cars and industrial automation. By keeping data closer to the devices that generate it, Syntiant is able to improve privacy and security. This is because less data needs to be sent to the cloud, where it is more vulnerable to attack.
Syntiant's breakthrough silicon architecture, combined with an edge-optimized training pipeline and data platform, is enabling edge AI deployments everywhere. Their technology greatly increases battery life, privacy, and responsiveness while simultaneously decreasing infrastructure costs.
The company has focused on making the technology as physically small as possible. This means it can be built not only into items such as temperature sensors, but also into consumer products. Now you can build a pair of earbuds with proper AI built in, meaning they can accomplish a lot of processing on the device. Imagine wearing earbuds and still having access to Alexa, Google, or Siri skills without internet access while on a run.
If you're looking to deploy artificial intelligence at the edge, Syntiant is the company to watch. They're leading the way in making AI more efficient and secure, and we're just getting started. To learn more about their technology and how it can help you build your next product, check out their website.
Scott is a developer who has worked on projects of varying sizes, including all of the PLUGHITZ Corporation properties. He is also known in the gaming world for his time supporting the rhythm game community, through DDRLover and hosting tournaments throughout the Tampa Bay Area. Currently, when he is not working on software projects or hosting F5 Live: Refreshing Technology, Scott can often be found returning to his high school days working with the Foundation for Inspiration and Recognition of Science and Technology (FIRST), mentoring teams and helping with ROBOTICON Tampa Bay. He has also helped found a student software learning group, the ASCII Warriors, currently housed at AMRoC Fab Lab.
All right, so we're going to introduce Kurt Busch with Syntiant Corporation. Welcome to our coverage here at CES. So Kurt, tell us all about Syntiant. Give us your elevator pitch.
So thank you for having us today. We're super excited to be here at CES. Syntiant was started in 2017 with the idea that the world needs a new type of processor. We had seen that deep learning, or artificial intelligence, transformed compute in the data center, and we are bringing about that same transformation in edge devices. Syntiant has over 20 million units deployed in the field today, doing things like voice recognition, some sensor work, and some image recognition. Basically, what AI at the edge does is allow your machines to listen, to see, to feel, and to speak. That is what Syntiant is doing: we are making AI a truly pervasive technology.
So talk to me then about some of the actual, practical applications. Maybe folks don't even know there's a Syntiant chip in a device or item that they have. What types of things are you embedding in now?
Sure. So we're designed today into things as small as earbuds and as large as automobiles. Syntiant's claim to fame is that we bring a large amount of deep learning performance into an extremely small power envelope, so we can add AI to any type of battery-powered device. It's quite obvious in earbuds, because you don't have a very big battery to add a voice interface to an earbud. But also in an automobile, where you want to do always-on AI, perhaps sitting on the 12-volt car battery for two weeks in an airport parking lot. Anytime you have some kind of energy constraint, the Syntiant chip is the right solution to bring AI to that device.
So as far as processing power, it used to be easy. Back in the early days, you had the 80286 and that real easy progression of chips, so you could kind of understand where you were. Today, if someone told me the Intel chip and whatever version it was, and how many cores it has, I don't even know anymore. So what are you guys actually looking at in your chip technology? It's got to be small if it can go in an earbud, obviously. I worked in electronics for many years, specifically in radar and that type of stuff, and those chips were not small. That technology was big and could take a lot of power. So what are we talking about? A dime? A pinhead-sized chip?
So our first chip is approximately 1.4 by 1.6 millimeters. So quite small.
And we are scaling up to larger and larger chips. Our second generation is about four times that size, but still small enough to be put into small battery-powered devices. And you touched on an interesting point: we really haven't figured out how to judge performance. In the old days, we could look at Intel processors and say, okay, I went from a 286 to a 386, from one clock speed to another. But in deep learning, you really look at the specific workload. Are you doing voice recognition? Are you doing an image? Are you doing a sensor? Can I do something like glass-break detection for a year on a coin cell battery?
So you look at those types of-
And it's not like it's being used as a computer would be, for multiple purposes. It has a distinct job to do, a distinct subroutine, so the processing power really doesn't have to be extraordinary; it just has to do the job.
Exactly, it has to do the job. And, well, there is no substitute for power. One of our claims to fame is that we bring approximately 100 times the performance within the same energy envelope as a microprocessor or a microcontroller.
So if you have a 100-times increase in performance, you can really build a bigger network.

That's right. And typically in deep learning, you look at the number of parameters. A microprocessor or microcontroller may run a 10,000 to 20,000 parameter network, while a Syntiant chip may be running a multi-million parameter network. So you can build much bigger networks and much more accurate applications. But you used the microprocessor analogy, and I'd like to throw it out there: we often think of our gen-two processor, the Syntiant Core 2, as our 386. I'm showing my age, but-
Sure, sure, sure.
I fit in here.
Don't worry, I remember the 386 too.
Yeah, so, well, if we do a history lesson: when the 386 came out, it was the first mass-market processor that could run multiple tasks at the same time.
And it really allowed us to have the Windows generation.
And the Syntiant Core 2 allows us to run multiple heterogeneous neural networks at the same time, so we view that as our 386. This is the chip where you can run one network for image recognition, one for voice, and one for sensors, all acting in parallel.
You know, with my car, I have the ability to get on an app and get a status check when it's been parked for a week or two. But after a couple of weeks parked in a parking garage, it starts to shut off some of that reporting, just because the vehicle hasn't been started, probably to limit the energy consumption. So I'm assuming that major car manufacturers are probably looking at you to extend how long they can provide those types of reports and monitoring, like you said, glass break or whatever it may be. So that even if the car hasn't been started in two weeks, then-
It's still, it's still checking in.
Yeah, with the Syntiant devices, you could run sort of the equivalent of sentry mode for several weeks without really impacting your 12-volt battery. And we see a lot of excitement in that.
So what are you guys looking to accomplish here at the show? Just education, or are you looking to connect with industry clients?
So CES has always been a great show for us. The last couple of years, we've gotten a couple of Best of CES Innovation Awards. This CES, we have several partners that have announced products with us. We have a TWS reference design that we announced, which helps customers add voice to earbuds in a very simple manner. We have a wireless energy solution with Energous that was announced. We have this really exciting medallion with Zinfanite that effectively brings your smart speaker to something you carry around with you. It's your digital buddy; you'd have an always-on smart speaker with you at any given time. And then we had an announcement with CeramicSpeed around using our device in industrial IoT to measure the lifespan of bearings. So you can see, really, from consumer applications to kind of-
All the way up to industrial
Industrial. AI is really being deployed in all of these types of things.
In the world we live in. Well, Kurt, I want to thank you for coming on with Syntiant. Is it Syntiant.com? Is that the best way to reach you?
So S-Y-N-T-I-A-N-T.com.

Everyone, go check them out. Thanks for coming on and sharing the exciting stuff you guys are doing. It's Syntiant. Thank you so much.
Thank you very much. I'm excited to be here.
Yup, Thank you so much, Kurt.
TPN CES 2022 coverage is executive produced by Michele Mendez. Technical Directors are Kurt Corless and Adam Barker. Associate producers are Nancy Ertz and Maurice McCoy. Interviews are edited by Jo Mini. Hosts are Marlo Anderson, Todd Cochrane, Scott Ertz, Christopher Jordan, Daniele Mendez, and Allante Sparks. Las Vegas studio provided by HC Productions. Remote studio provided by PLUGHITZ Productions. This has been a Tech Podcasts Network Production, copyright 2022.