Using AI technology to revolutionize e-commerce @ CES 2022 - Show Notes

Thursday Jan 27, 2022 (00:10:23)


One of the more difficult aspects of running an online retailer is the imagery, and one of the most difficult industries to photograph is fashion. Every product has to have so many photos, and you can only go into so much detail with the number of models at your disposal. While some industries, such as the automotive industry, have gone nearly entirely to computer-generated imagery, fashion has not been able to follow suit. Now, with the work done by Lalaland.ai, that is all changing.

What is Lalaland.ai? Lalaland.ai is an award-winning fashion tech start-up that uses innovative AI technology to revolutionize e-commerce for retailers and consumers alike. The firm's virtual fashion models allow brands and retailers to create personalized shopping experiences for their customers, saving time and money in the process. Lalaland.ai is changing the way businesses think about e-commerce and is quickly becoming a leading authority in the space!

How does Lalaland.ai work? Lalaland.ai creates entirely AI-generated human models to show off a company's clothing. The company's virtual fashion models are powered by innovative AI technology that allows retailers to create personalized shopping experiences for their consumers.

The firm's virtual models are able to wear any outfit imaginable, giving businesses a new way to diversify their brands and appeal to a wider range of consumers. The company is changing the face of e-commerce and is quickly becoming a top player in the industry.

Why should I use Lalaland.ai?

If you run an online fashion brand, there are a number of reasons why you might engage with the company's technology. The first is that the time from the initial design of a product until it can be online is greatly reduced. This is because the product doesn't have to be manufactured before the images can be worked on. The system is able to import the 3D models from the design process and put them right onto the virtual models.

In addition to not having to wait for production to complete, you also don't have to go through the process of hiring a team. There are no photographers, assistants, or models involved. You also don't have to secure a photo studio in which to take all of the photos. This makes it both quicker and less expensive to get the images for the website. You also don't have to worry about licensing the photos, especially if you're planning to market internationally.

The real benefit, however, is that you can show a wide range of examples without a large additional cost. You can show off a virtual model of every shape and size wearing the various sizes of clothing. You can even adjust your models to the nation in which they are being displayed. In Japan, you can show a model with a Japanese look, while showing a model with a Russian look in Russia. This can make it more natural for the local population to see how someone like them looks in the clothing.


The company has been implementing its technology with a number of clients and is looking to expand its reach. To learn more about Lalaland.ai and how it can help your business, head over to their website.

Interview by Todd Cochrane of Geek News Central.

Sponsored by:
Get $5 to protect your credit card information online with Privacy.
Amazon Prime gives you more than just free shipping. Get free music, TV shows, movies, videogames and more.
The most flexible tools for podcasting. Get a 30 day free trial of storage and statistics.


Scott Ertz

Episode Author

Scott is a developer who has worked on projects of varying sizes, including all of the PLUGHITZ Corporation properties. He is also known in the gaming world for his time supporting the rhythm game community, through DDRLover and hosting tournaments throughout the Tampa Bay Area. Currently, when he is not working on software projects or hosting F5 Live: Refreshing Technology, Scott can often be found returning to his high school days working with the Foundation for Inspiration and Recognition of Science and Technology (FIRST), mentoring teams and helping with ROBOTICON Tampa Bay. He has also helped found a student software learning group, the ASCII Warriors, currently housed at AMRoC Fab Lab.


Powered by Privacy



Erin Hurst (0:07)

Help support our coverage using Blubrry. The community that gives creators the ability to make money, get detailed audience measurements, and host their audio and video. Get 30 days to try out the service using promo code BLUBRRY004. That's B-L-U-B-R-R-Y 004.

Todd Cochrane (0:27)

So what I want to do is introduce Harold Smeeman. I hope I pronounced that correctly. Did I get it right?

Ugnius Rimsa (0:35)


Todd Cochrane (0:35)

Okay, sorry, go ahead.

Ugnius Rimsa (0:36)

I'm the other co-founder. So Harold actually went to the airport to pick up our lost luggage. The airline lost all our marketing material, so we've been trying to get it back for the past day, basically.

Todd Cochrane (0:46)

Ah, you know, don't you hate it when the airline does that? Well, we've got a chance here to let people know what you guys do. So it's la la, la la. Okay. Introduce the company. Go ahead. I'm sorry.

Ugnius Rimsa (1:01)

So yeah, so the company is Lalaland.ai. I'm one of the co-founders, the chief data officer, and what we do is create synthetic models, people that don't exist, for the e-commerce industry. So we use generative AI to generate photorealistic human models, and then offer that to fashion brands and e-commerce brands to basically speed up the whole process.

Todd Cochrane (1:19)

So how does that actually work? What is the process? Because when people hear AI now, they're kind of like, "Okay, I've heard AI a lot." What does that process look like when you work with a customer?

Ugnius Rimsa (1:31)

From the customer side it's more like, “Oh, let's kind of wind it back here”.

Todd Cochrane (1:35)


Ugnius Rimsa (1:35)

Let me quickly explain the tech in a short way. So you can imagine it as two neural networks: one's a detective and one's a forger. The detective is the one that actually sees the real models. It's basically a classifier that can tell whether that's a real person or not. The forger has never actually seen a human. It starts off with random noise, and over millions and millions of iterations, it reaches a point where the detective basically can't tell anymore if it's a forgery or real. That's the point of photorealism, and that's how we actually create these models. Then, from the brand perspective, like a fashion brand, we offer them a complete portfolio of already generated, complete models. Then, for example, say you need a middle-aged, plus-size model; that's what they select on our platform. They select the hairstyle, whether they want makeup or not. And then the next step is they upload the garments. So for, let's say, a t-shirt, they specify whether they want it to be a tight fit or a loose fit. And then within 72 hours, we deliver the finished product, ready to be published on the website.
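The detective-and-forger framing Ugnius gives is the classic generative adversarial network (GAN) setup. As a loose numerical caricature of that two-player dynamic (not Lalaland.ai's actual system, which uses deep generative networks), one can sketch a one-parameter "forger" learning to push random noise toward the real data, against a threshold "detective"; all the numbers below are illustrative assumptions:

```python
import numpy as np

# Toy sketch of the GAN "detective vs. forger" dynamic.
# The forger learns a single shift applied to noise; the detective is a
# threshold classifier ("real if x > threshold") placed between the means.

rng = np.random.default_rng(0)

REAL_MEAN = 4.0   # mean of the "real" distribution the detective sees

gen_shift = 0.0   # forger's only parameter: starts at pure noise
threshold = 0.0   # detective's decision boundary

for step in range(2000):
    real = rng.normal(loc=REAL_MEAN, scale=0.5, size=32)
    fake = rng.normal(loc=0.0, scale=0.5, size=32) + gen_shift

    # Detective update: place the boundary between real and fake means.
    threshold += 0.05 * ((real.mean() + fake.mean()) / 2.0 - threshold)

    # Forger update: move its output toward the detective's boundary.
    gen_shift += 0.05 * (threshold - fake.mean())

# At equilibrium the fakes overlap the real distribution, and the
# threshold no longer separates them.
print(f"forger shift ~ {gen_shift:.2f} (real mean is {REAL_MEAN})")
```

In a real GAN both players are deep networks trained by gradient descent on a classification loss; the point here is only the dynamic he describes: the forger starts from noise and ends up statistically indistinguishable from the real examples.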

Todd Cochrane (2:33)

Really! So it's a complete design of- that's amazing. So they upload, probably, the initial base design of the t-shirt, and then you guys do the fitting? Or do you guys actually do the design?

Ugnius Rimsa (2:36)

No. So they upload something that's already designed, what we call pack shots. What they do is called the ghost mannequin effect. It's like a green-screen mannequin where they put the t-shirt on, so it creates this floating effect, and that's what we get as input, from different poses, basically. And that's what we use to map to the model selection from the previous step. Most of the time, brands already have this before they go into mass production. They do these pack shots so they can show retailers, see if they're actually interested or not, before they go manufacture thousands of copies of that shirt. Also, what we're seeing nowadays is a lot of brands moving to fully 3D design. So they are basically already designing the clothing in 3D software, and that's also good input for our software; we can use that to map to the model.

Todd Cochrane (3:38)

So you deliver to them an image that they can put on their e-commerce site as well, or where is that data used mostly by the end client?

Ugnius Rimsa (3:48)

So primarily, it's on the product detail page on their e-commerce websites, and the idea is, basically, we can offer different skin complexions, different body types. For example, when you change from S to M, you see a different model appear. You can also try to match it to your skin color so you can have a better idea of whether the color suits you. This way, we're trying to reduce return rates. But yeah, that's kind of what they do with it on the website. It's up to them. Right now we deliver an image, basically, and then they can use it for marketing, for the product detail page, and so on.

Todd Cochrane (4:18)

So the actual generated image is not a real person, though, right? It is an AI-generated image of a person. Right?

Ugnius Rimsa (4:25)

Exactly. So this person, you wouldn't find them anywhere in the real world. And for a lot of brands that work in different countries, that's really good from a copyright perspective, because they don't have to manage 30 different copyright licenses for different regions, which kind of simplifies the whole process.

Todd Cochrane (4:41)

Can you see examples of the end product on your website?

Ugnius Rimsa (4:45)

Yeah, if you go to the website, basically all the models on our website are our generated models. So you guys can play around there as well; there's a little demo.

Todd Cochrane (4:56)

So how long have you guys been working on this and are you ready to come to market? Are you working with a number of clients already?

Ugnius Rimsa (5:03)

So we've been doing this for roughly two years, and we do have clients. Beforehand, we had quite a lot of POCs, and this is kind of the starting-off point; now we're aiming for recurring revenue, because we're coming out of the R&D phase, where we spent a year and a half actually getting the generation to work to a high enough level. And now we're moving to expand, getting brands on board.

Todd Cochrane (5:27)

So is your goal at the show to do that, to get brands on board?

Ugnius Rimsa (5:30)

Yeah. So we're trying to kind of put our foot into the US market, talk to American brands as well.

Todd Cochrane (5:36)

Where's the company based out of?

Ugnius Rimsa (5:37)

We're actually based in Amsterdam, in the Netherlands. That's where we started. And now we're kind of trying to see a more global picture.

Todd Cochrane (5:45)

Yeah, that's awesome. You know, I think we're getting to the point where they're able to do audio reproduction of voices, which is kind of scary in and of itself. But now you've got AI-generated humans, essentially. From a realism standpoint, where do you think it is? If I look at that image, can I tell that it is not a real person?

Ugnius Rimsa (6:10)

So, in our slide deck, we would sometimes throw in a real model in between the fake models to see if people were able to differentiate, and most of them can't. Sometimes they even point out the real model and start to look for problems, where they say, "Okay, I can tell that she's fake because of this and that." Whereas actually they're pointing out the real model, "okay, her nose is a bit off," but it's actually a real person that they're, how to say, criticizing, rather than a generated model. So in the 2D sphere, or in a non-video environment, it's pretty hard to tell. Video, I think, needs some time to catch up in terms of generation; you can find little glitches here and there. But with 2D images, it's pretty hard to tell.

Todd Cochrane (6:49)

That is amazing. You know, I guess too, if they can make an artificially generated human, and I don't know if I'm using the right word, as a digital image, I guess they could do a pretty good job making... not that I need one. I'd actually need a real clone, not a digital picture of me. But that is pretty cool. If I think about that for a second: where else can this go?

Ugnius Rimsa (7:20)

So for now we focus on e-commerce, but we've had people from the gaming industry reach out, basically about generating a diverse population of characters. Another sphere is marketing: for example, generating slightly different advertisements based on the actual person viewing. So they would see different people in an advertisement based on, you know...

Todd Cochrane (7:38)

Based on the targeting of the magazine, or wherever that may be.

Ugnius Rimsa (7:41)

Yeah, exactly. So there are still quite a lot of things that we need to explore, to see where we can apply this. For now, we still have a pretty small team, so at the moment we are going to focus on the fashion industry. But, you know, in the future, you never know where it's going to go.

Todd Cochrane (7:53)

So, let me just ask an ethical question. Do you have to disclose that this is a digitally rendered model? That's a question, because people are going to see the models and they're not going to know. Are we at a point where we're going to have to designate that something is digitally rendered versus real?

Ugnius Rimsa (8:18)

In my opinion, well, it kind of depends on the brand, whether they really want to let their customers know, if they think that's maybe a risk of, you know, losing potential buyers in the future. From our side, since you can't differentiate it, I think it's not really an issue. But some brands have expressed, "Okay, maybe our clients wouldn't really want to think this is real when it's actually fake." And then maybe a little sticker or something saying it's synthetic might also be pretty cool in terms of-

Todd Cochrane (8:46)

Most people don't know it, but most car commercials today are completely CG. They very rarely even use a real vehicle; it's all digitally rendered. So we're seeing a lot of digitally rendered stuff now for the, I guess, non-physical-flesh world. So that's very, very interesting. Very, very cool. So folks, if you want to find out more information, go check this out. I'm definitely going to go check it out because I want to see the images myself. You go over to Lalaland. I think I pronounced this right: L-A-L-A-L-A-N-D, Lalaland AI. So yeah, Lalaland.ai, and definitely check it out. I hope you guys get your marketing materials. I know how aggravating that can be.

Ugnius Rimsa (9:34)

Yeah, fingers crossed but I think we'll be fine. The best thing is that we're here.

Todd Cochrane (9:36)

That's the main thing, too, right? You have the show, the website, and everything else. Thank you so much for coming on. Appreciate it. Have a great show.

Ugnius Rimsa (9:42)

Thank you. You too.

Erin Hurst (9:46)

TPN CES 2022 coverage is executive produced by Michele Mendez. Technical directors are Kurt Corless and Adam Barker. Associate producers are Nancy Ertz and Maurice McCoy. Interviews are edited by Jo Mini. Hosts are Marlo Anderson, Todd Cochrane, Scott Ertz, Christopher Jordan, Daniele Mendez, and Allante Sparks. Las Vegas studio provided by HC Productions. Remote studio provided by PLUGHITZ Productions. This has been a Tech Podcasts Network production, copyright 2022.
