ArchiTek: Versatile and affordable silicon for AI solutions @ CES 2022 - Show Notes

Monday Feb 28, 2022 (00:09:02)


If you're like most business owners, you're always on the lookout for new and innovative ways to increase revenue and stay ahead of the competition. One of the most promising areas of growth in today's market is artificial intelligence (AI). But implementing AI solutions can be expensive, especially if you need to purchase custom silicon chips. ArchiTek has developed a new chip, the ArchiTek Intelligence Pixel Engine (aIPE), that helps solve this problem.

Who is ArchiTek?

ArchiTek is a Japan-based semiconductor company that designs custom silicon chips. ArchiTek's mission is to provide the most innovative and cost-effective solutions for its customers' needs. The company offers a variety of products, including the ArchiTek Intelligence Pixel Engine (aIPE) and the ArchiTek Machine Learning Processor (aMLP).

ArchiTek Intelligence Pixel Engine (aIPE)

The aIPE is a programmable, dedicated image-processing LSI engine that combines a custom hardware architecture, optimized for low power consumption and low latency, with software flexibility through the company's virtual engine technology. This makes the aIPE versatile and affordable across AI solutions, from computer vision to deep learning.

ArchiTek Machine Learning Processor (aMLP)

The aMLP is an application-specific integrated circuit (ASIC) designed for machine learning applications. It features an efficient data path and low-latency processing, as well as support for a wide range of algorithms.

Who can use the aIPE?

The aIPE is ideal for any business that wants to implement AI solutions but doesn't want to break the bank. It's perfect for small and medium-sized businesses, as well as start-ups. The aIPE can be used in a variety of applications, including computer vision, deep learning, machine learning, and robotics.

The ArchiTek Intelligence Pixel Engine (aIPE) is unique in that it offers both programmability and flexibility. Other AI chips are either fixed or require significant programming effort for customization. The aIPE also has lower power consumption and latency than most other chips on the market, making it an ideal choice for embedded systems.

What are some uses for the aIPE?

The aIPE can be used in a number of different ways. For example, it can be used to develop new algorithms or to optimize existing ones. It can also be used to improve the accuracy of predictions and the speed of decision-making.

In the example device shown during the interview, the company demonstrated a live camera feed with on-device object identification. Beyond being a cool demo, this particular example shows a lot of promise. A non-connected camera could use the chip to locate and identify people to help with autofocus. It could also be integrated into cars, trucks, and bicycles to help with autonomous driving, braking, and more.
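ArchiTek's actual SDK is not described in the interview, but conceptually a chip like the aIPE emits a stream of per-frame detection results (class label, confidence score, bounding box) that application software then interprets. A minimal, hypothetical sketch of that application-side step, with made-up names (`Detection`, `filter_detections`) standing in for whatever the real API provides:

```python
from dataclasses import dataclass

# Hypothetical shape of one result from an on-device detection
# engine: a class label, a confidence score, and a bounding box
# in pixel coordinates (x, y, width, height).
@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple

def filter_detections(detections, wanted_labels, min_confidence=0.5):
    """Keep only detections for the classes the application cares
    about (e.g. 'person', 'bottle') above a confidence threshold."""
    return [d for d in detections
            if d.label in wanted_labels and d.confidence >= min_confidence]

# Simulated output for one frame of a live feed like the demo's.
frame = [
    Detection("person", 0.92, (40, 10, 120, 300)),
    Detection("bottle", 0.81, (200, 150, 40, 90)),
    Detection("chair", 0.30, (300, 200, 80, 80)),
]

hits = filter_detections(frame, {"person", "bottle"})
for d in hits:
    print(d.label, d.confidence)
```

This mirrors the division of labor Hassan describes in the interview: the chip does the computation-heavy detection, while deciding what to do with a detected object is "just software related."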


ArchiTek's aIPE is versatile and affordable silicon for all AI solutions - from computer vision to deep learning. If you're looking for an AI solution that won't break the bank, the aIPE is a perfect choice. ArchiTek's mission is to provide the most innovative and cost-effective solutions for their customers' needs, and the aIPE is just one example of their commitment to this goal. To learn more about the company and its products, head to their website.

Interview by Todd Cochrane of Geek News Central and Christopher Jordan of The Talking Sound.

Sponsored by:
Get $5 to protect your credit card information online with Privacy.
Amazon Prime gives you more than just free shipping. Get free music, TV shows, movies, videogames and more.
The most flexible tools for podcasting. Get a 30-day free trial of storage and statistics.


Scott Ertz

Episode Author

Scott is a developer who has worked on projects of varying sizes, including all of the PLUGHITZ Corporation properties. He is also known in the gaming world for his time supporting the rhythm game community, through DDRLover and hosting tournaments throughout the Tampa Bay Area. Currently, when he is not working on software projects or hosting F5 Live: Refreshing Technology, Scott can often be found returning to his high school days working with the Foundation for Inspiration and Recognition of Science and Technology (FIRST), mentoring teams and helping with ROBOTICON Tampa Bay. He has also helped found a student software learning group, the ASCII Warriors, currently housed at AMRoC Fab Lab.



Erin Hurst (00:07)

Help support our coverage using Blubrry, the community that gives creators the ability to make money, get detailed audience measurements, and host their audio and video. Get 30 days to try out the service using promo code BLUBRRY004. That's B-L-U-B-R-R-Y-0-0-4.

Todd Cochrane (00:27)

Alright, everyone. So this is one of our first demos we've had, believe it or not here at the show. So I want to welcome Hassan from ArchiTek. He's the CEO of the company. Go ahead, Hassan and tell us a little bit about the company and what we're gonna see here. Oh, we gotta get him mic'd up. Okay, go ahead.

Hassan Toorabally (00:48)

Yeah, so it's ArchiTek.

Hassan Toorabally (00:51)

So the first word is from architecture.

Hassan Toorabally (00:54)

And the last word is technology. And it's combined, you know, so architecture plus technology. So what we're actually doing is we're making a chip. It's a small LSI. And it goes to power, like, intelligent AI applications running on battery.

Hassan Toorabally (01:13)

And handheld devices. So just think of, you know, your cell phone or a mobile phone, which is intelligent enough to identify objects.

Hassan Toorabally (01:20)

Something like that. And, yeah, I have a demo. And I think that's the easiest way to

Hassan Toorabally (01:25)

See what we are offering.

Todd Cochrane (01:27)

Yep, go ahead and talk about what we're gonna see here.

Hassan Toorabally (01:30)

Yeah. So can you see this demo?

Hassan Toorabally (01:34)

So he's the CEO of ArchiTek.

Hassan Toorabally (01:38)

And he has this small camera over here.

Hassan Toorabally (01:43)

And this has power in the chip, the brains inside.

Hassan Toorabally (01:47)

So it's a very small piece of equipment.

Hassan Toorabally (01:50)

And this is being powered by a battery pack.

Hassan Toorabally (01:54)

It is actually running an AI application, which can actually detect people or objects.

Hassan Toorabally (02:01)

If you maybe want to put your water bottle in front, it will show your water bottle on this monitor.

Todd Cochrane (02:08)

Oh, yes, it does.

Christopher Jordan (02:10)

We can bring up our PTZ cam over there. We should be able to zoom into that. See what it looks like up on the screen because that's pretty nifty. Now, what are the applications for this technology? What are you looking at?

Hassan Toorabally (02:21)

Yeah. So many other people are actually doing something similar, but they're mostly doing it in the cloud, using, you know, very powerful servers. So we have a handheld solution, which, again, uses very little power. So this can be used for many different applications. One application that comes to mind is drones, you know, where you're running on battery. And it doesn't have too much battery, either, because of the limited supply.

Hassan Toorabally (02:47)

It doesn't have much power to spare to actually detect objects. So it can bump into anything.

Hassan Toorabally (02:52)

So if you have a self-flying drone, you want it not to bump into things. So this is one way you can avoid obstacles like that.

Todd Cochrane (02:59)

So this actually does the identification? Does it tell you that's a water bottle, that's a person, or does it just do detection of the object?

Hassan Toorabally (03:09)

Well, detection is the most difficult thing. Because it's a camera, it doesn't really know.

Hassan Toorabally (03:15)

What a person is or a car is.

Hassan Toorabally (03:17)

So detection is the most difficult part. It requires a lot of computation power. So that's what we do. Later on, how to avoid it, that can be done using applications. It's just software-related. It's not such a big thing.

Todd Cochrane (03:30)

But it's the actual detection within the camera, the camera being able to focus in, like what we see here, folks. As the camera is showing the water bottle, we can see it's circled. And basically, you would then take that information into a different application. And then the different application would say, okay, that's a water bottle.

Todd Cochrane (03:54)

So in other words, with what we've got here, you could probably, within the software as well, take a snap at this point of that particular object and put it in, you know, if you're running an additional software application,

Todd Cochrane (04:06)

You could do the snap and say, "Okay, we've captured that, let's identify it later." That's very interesting. So what are the, I guess for a better word, what are the limitations of object detection?

Hassan Toorabally (04:22)

Of course, one of the limitations is, and this goes for any AI application, that you have to train the AI engine. So this one has actually been trained to detect approximately 90 different objects. And if you need more than that, well, in the world you have so many objects.

Hassan Toorabally (04:39)

So you have to train for that and you have to build the model. And that requires a lot of effort.

Hassan Toorabally (04:45)

So depending upon the customer, we can, of course, try to build that model. But we normally leave that to the customers. We don't go into that area. What we focus on is, if we have the model, then we can infer from that.

Todd Cochrane (05:00)

I get you.

Hassan Toorabally (05:01)

And tell them the results.

Todd Cochrane (05:02)

I see. So if the model is basically, I need to see people, I need to see bags, I need to see whatever the list is, then you train the system to see those particular objects. So that's really cool. Because, you think about it, you could almost train this to see a knife, see a handgun.

Christopher Jordan (05:27)

Absolutely. Facial recognition in the crowd.

Todd Cochrane (05:28)

See different types of things that you're looking for, that we want to catch, where maybe otherwise you have to actually physically see it. So wow, there's a whole host of things. That's very, very cool. And the best part is, it's just running on a chip, so, again, while you're wearing a vest, this could go in a very small application with the camera pointing at a hallway or something.

Hassan Toorabally (06:00)

Like you may get an idea it could be used for the police force.

Hassan Toorabally (06:04)

Something, you know, where they wear something similar.

Hassan Toorabally (06:06)

So and they can, as you said, you know, they can detect guns or dangerous items.

Hassan Toorabally (06:11)

So then they don't have to keep on looking. The camera is looking for them.

Hassan Toorabally (06:15)

It relieves their burden on that point.

Christopher Jordan (06:18)

I was just about to say, specifically, the Secret Service and people like that, who come in and patrol the same place every day. And they're looking for suspicious objects that weren't there the day before.

Todd Cochrane (06:29)

What's the size limit then? What's the detection limit from a size standpoint?

Hassan Toorabally (06:36)

So that I think goes again into the area of the camera's capacity.

Hassan Toorabally (06:41)

So we have provided a very economical solution. So we have not used a very high quality camera, but even then you can see it detects a lot of objects at this range.

Hassan Toorabally (06:52)

So if you have a better camera, a camera which can detect at a greater distance.

Todd Cochrane (06:56)

Right. Right.

Hassan Toorabally (06:58)

That's a different thing. We are actually concentrating on the processing power, not on the camera.

Todd Cochrane (07:03)

Yes. So it's just like anything else. Can you see the moon better with a telescope or with your eyes? So it all depends on the optics that you're attaching to the processor.

Todd Cochrane (07:13)

Very, very, very cool. So ArchiTek.

Todd Cochrane (07:22)

.ai. And you guys are looking for B2B partners, integration? What's the goal?

Hassan Toorabally (07:29)

Well. Yeah. B2B because we make the chip and it's a device.

Hassan Toorabally (07:33)

So it doesn't run by itself. We of course provide the software to run it.

Hassan Toorabally (07:38)

So but it has to be put into products. It could be put into security cameras, as I said, or it could be even put on drones. It could even be used for automotive use, autonomous vehicles. We actually can even run an application which can take your car and tell it where it is at every moment and make a map around it.

Todd Cochrane (08:00)

Where's the company based out of?

Hassan Toorabally (08:01)

We are based in Japan actually.

Todd Cochrane (08:03)

Japan. Oh.

Hassan Toorabally (08:04)

So we are here from Japan.

Todd Cochrane (08:05)

Okay, fantastic. All right, everyone. ArchiTek, and we've got Hassan here, and of course the CEO, Shuichi Takada, is doing the demo. So domo arigato. Thank you very much for coming.

Shuichi Takada (08:15)

Thank you.

Todd Cochrane (08:20)

All right. Thank you so much.

Erin Hurst (08:25)

TPN CES 2022 coverage is executive produced by Michele Mendez. Technical Directors are Kurt Corless and Adam Barker. Associate producers are Nancy Ertz and Maurice McCoy. Interviews are edited by Jo Mini. Hosts are Marlo Anderson, Todd Cochrane, Scott Ertz, Christopher Jordan, Daniele Mendez, and Allante Sparks. Las Vegas studio provided by HC Productions. Remote studio provided by PLUGHITZ Productions. This has been a Tech Podcasts Network production, copyright 2022.
