If you're like most business owners, you're always on the lookout for new and innovative ways to increase revenue and stay ahead of the competition. One of the most promising areas of growth in today's market is artificial intelligence (AI). But implementing AI solutions can be expensive, especially if you need to purchase custom silicon chips. ArchiTek has developed a new chip called the ArchiTek Intelligence Pixel Engine (aIPE), which helps to solve this problem.
ArchiTek is a semiconductor company that designs and manufactures custom silicon chips, with a mission to provide the most innovative and cost-effective solutions for its customers' needs. The company was founded in 2016 by a team of experienced engineers from Samsung, LG, and SK Hynix, and offers a variety of products, including the ArchiTek Intelligence Pixel Engine (aIPE) and the ArchiTek Machine Learning Processor (aMLP).
The aIPE is a programmable LSI engine dedicated to image processing. It combines a custom hardware architecture, optimized for low power consumption and low latency, with software flexibility through the company's virtual engine technology. This makes the aIPE versatile and affordable for AI solutions, from computer vision to deep learning.
The aMLP is an application-specific integrated circuit (ASIC) designed for machine learning applications. It features an efficient data path and low-latency processing, as well as support for a wide range of algorithms.
The aIPE is ideal for any business that wants to implement AI solutions but doesn't want to break the bank. It's perfect for small and medium-sized businesses, as well as start-ups. The aIPE can be used in a variety of applications, including computer vision, deep learning, machine learning, and robotics.
The ArchiTek Intelligence Pixel Engine (aIPE) is unique in that it offers both programmability and flexibility. Other AI chips are either fixed or require significant programming effort for customization. The aIPE also has lower power consumption and latency than most other chips on the market, making it an ideal choice for embedded systems.
The aIPE can be used in a number of different ways. For example, it can be used to develop new algorithms or to optimize existing ones. It can also be used to improve the accuracy of predictions and the speed of decision-making.
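To make the division of labor concrete, a host application paired with an on-device detector like the aIPE might consume per-frame detections and turn them into named objects. The sketch below is purely illustrative: the `Detection` structure, the `read_detections_from_chip` function, and the label map are hypothetical stand-ins, not ArchiTek's actual SDK, and the class IDs simply follow the common COCO labeling convention.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    class_id: int                      # index into the trained model's label set
    score: float                       # detection confidence, 0.0 to 1.0
    box: Tuple[int, int, int, int]     # (x, y, width, height) in pixels

# Hypothetical stand-in for the chip's output: in a real system the
# accelerator would produce detections for each camera frame.
def read_detections_from_chip() -> List[Detection]:
    return [Detection(class_id=39, score=0.91, box=(120, 80, 60, 140)),
            Detection(class_id=0, score=0.87, box=(300, 40, 90, 220))]

# The host application owns the label map; the trained model defines it.
# (Class 0 = "person" and 39 = "bottle" follow the COCO convention.)
LABELS = {0: "person", 39: "bottle"}

def identify(detections: List[Detection], threshold: float = 0.5):
    """Second stage: turn raw detections into named objects."""
    return [(LABELS.get(d.class_id, "unknown"), d.box)
            for d in detections if d.score >= threshold]

print(identify(read_detections_from_chip()))
```

This mirrors the pipeline described in the interview: the chip handles the computationally heavy detection step, while a separate, easily changed piece of software decides what each detection means and what to do about it.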
In the example shown during the interview, the company demonstrated a live camera feed with native object identification. While a cool demo, this particular example shows a lot of promise. A non-connected camera could use the chip to locate and identify people to help with autofocus. It could also be integrated into cars, trucks, and bicycles to help with autonomous driving, braking, and more.
ArchiTek's aIPE is versatile and affordable silicon for all AI solutions - from computer vision to deep learning. If you're looking for an AI solution that won't break the bank, the aIPE is a perfect choice. ArchiTek's mission is to provide the most innovative and cost-effective solutions for their customers' needs, and the aIPE is just one example of their commitment to this goal. To learn more about the company and its products, head to their website.
Scott is a developer who has worked on projects of varying sizes, including all of the PLUGHITZ Corporation properties. He is also known in the gaming world for his time supporting the rhythm game community, through DDRLover and hosting tournaments throughout the Tampa Bay Area. Currently, when he is not working on software projects or hosting F5 Live: Refreshing Technology, Scott can often be found returning to his high school days working with the Foundation for Inspiration and Recognition of Science and Technology (FIRST), mentoring teams and helping with ROBOTICON Tampa Bay. He has also helped found a student software learning group, the ASCII Warriors, currently housed at AMRoC Fab Lab.
Help support our coverage using Blubrry, the community that gives creators the ability to make money, get detailed audience measurements, and host their audio and video. Get 30 days to try out the service using promo code BLUBRRY004. That's B-L-U-B-R-R-Y-0-0-4.
Alright, everyone. So this is one of our first demos we've had, believe it or not here at the show. So I want to welcome Hassan from ArchiTek. He's the CEO of the company. Go ahead, Hassan and tell us a little bit about the company and what we're gonna see here. Oh, we gotta get him mic'd up. Okay, go ahead.
Yeah, so it's ArchiTek.
So the first word is from architecture.
And the last word is from technology. And it's combined, you know, so architecture and technology. So what we're actually doing is we're making a chip. It's a small LSI. And it goes to power, like, intelligent AI applications running on battery.
And handheld devices. So just think of, you know, your cell phone or a mobile phone, which is intelligent enough to identify objects.
Something like that. And, yeah, I have a demo. And I think that's the easiest way to
See what we are offering.
Yep, go ahead and talk about what we're gonna see here.
Yeah. So can you see this demo?
So he's the CEO of ArchiTek.
And he has this small camera over here.
And this has the chip, the brains, inside.
So it's a very small piece of equipment.
And this is being powered by a battery pack.
It is actually running an AI application, which can actually detect people or objects.
If you maybe want to put your water bottle in front, it will show your water bottle on this monitor.
Oh, yes, it does.
We can bring up our PTZ cam over there. We should be able to zoom into that. See what it looks like up on the screen because that's pretty nifty. Now, what are the applications for this technology? What are you looking at?
Yeah. So many other people are actually doing something similar, but they're mostly doing it in the cloud, using, you know, servers, very powerful servers. So we have a handheld solution, which, again, uses very little power. So this can be used for many different applications. One application that comes to mind is drones, you know, where you're running on battery. And a drone doesn't have too much battery, either, because its supply is limited.
So it doesn't have much power to spare to actually detect objects, but without that it can bump into anything.
So if you have a self-flying drone, you want it not to bump into things. So this is one way you can avoid obstacles like that.
So this actually does the identification? Does it tell you that's a water bottle, that's a person or just does the detection only of the object?
Well, detection is the most difficult thing. Because it's a camera, it doesn't really know what a person is or what a car is.
So detection is the most difficult part. It requires a lot of computation power. So that's what we do. Later on, knowing how to avoid the object, that can be done in applications. It's just software related. It's not such a big thing.
But it's the actual detection within the camera, the camera being able to focus in like what we see here, folks. As the camera is showing the water bottle, we can see it, it's circled up on it. And basically, then you would take that information in a different application. And then the different application would say, Okay, that's a water bottle.
So in other words, with what we've got here, you could probably, within the software as well, take a snap at this point of that particular object and put it in, you know, if you're running an additional software application,
You could do the snap and say, "Okay, we've captured that, let's identify it later." That's very interesting. So what are the, I guess for a better word, what are the limitations of object detection?
Of course, one of the limitations, and it goes for any AI application, is that you have to train the AI engine. So this one, someone has actually trained to detect approximately 90 different objects. So if you need more than that, and in the world, you have so many objects.
So you have to train for that and you have to build the model. And that requires a lot of effort.
So depending upon the customer, we can, of course, try to build that model. But we normally leave that to the customers. We don't go into that area. What we focus on is, if we have the model, then we can infer from that.
I get you.
And tell them the results.
I see. So if the model is basically, I need to see people, I need to see bags, I need to see whatever the list is, then you train the system to see those particular objects. So that's really cool. Because you think about it: you could almost train this to see a knife, see a handgun.
Absolutely. Facial recognition in the crowd.
See different types of things that you're looking for, that we want to catch. Or maybe it's something you have to actually physically see. So wow, there's a whole host of things. That's very, very cool. And the best part is, it's just running on a chip, so again, this could go in a very small application, like a vest you're wearing, or a camera pointing at a hallway or something.
Like you may get an idea it could be used for the police force.
Something, you know, they wear something similar.
So and they can, as you said, you know, they can detect guns or dangerous items.
So then they don't have to keep on looking; the camera is looking for them.
It relieves their burden on that point.
I was just about to say, specifically, the Secret Service and people like that, who come in and patrol the same place every day. And they're looking for suspicious objects that weren't there the day before.
What's the size limit then? What's the detection limit from a size standpoint?
So that I think goes again into the area of the camera's capacity.
So we have provided a very economical solution. So we have not used a very high quality camera, but even then you can see it detects a lot of objects at this range.
So if you have a better camera, a camera which can detect at a greater distance, that's a different thing. We are actually concentrating on the processing power, not on the camera.
Yes. So it's just like anything else. Can you see the moon better with a telescope or with your eyes? So it all depends on the optics that you're attaching to the processor.
Very, very, very cool. So ArchiTek. A-R-C-H-I-T-E-K.com?
.ai. And you guys are looking for B2B partners integration? What's the goal?
Well. Yeah. B2B because we make the chip and it's a device.
So it doesn't run by itself. We of course provide the software to run it.
So but it has to be put into products. It could be put into security cameras, as I said, or it could be even put on drones. It could even be used for automotive use, autonomous vehicles. We actually can even run an application which can take your car and tell it where it is at every moment and make a map around it.
Where's the company based out of?
We are based in Japan actually.
So we are here from Japan.
Okay, fantastic. All right, everyone. ArchiTek, and we've got Hassan here, and of course the CEO, Shuichi Takada, doing the demo. So domo arigato. Thank you very much for coming.
Thank you.
All right. Thank you so much.
TPN CES 2022 coverage is executive produced by Michele Mendez. Technical directors are Kurt Corless and Adam Barker. Associate producers are Nancy Ertz and Maurice McCoy. Interviews are edited by Jo Mini. Hosts are Marlo Anderson, Todd Cochrane, Scott Ertz, Christopher Jordan, Daniele Mendez, and Allante Sparks. Las Vegas studio provided by HC Productions. Remote studio provided by PLUGHITZ Productions. This has been a Tech Podcasts Network production, copyright 2022.