Fashion is moving faster than ever. A designer wants to see how a jacket behaves in leather, wool and silk without cutting three prototypes. A game developer wants digital clothing that folds, stretches and swings like the real thing. An online shopper stares at a sweater on her phone, wondering how it will drape on her body instead of a model’s.
In all three cases, the problem is the same: Texture doesn’t translate. And solving that problem is where Jesus Aguilar is focusing his research.
Aguilar is an undergraduate computer science student in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University. He works at an unusual intersection of computer vision, artificial intelligence, or AI, and fashion. His goal is to create AI-powered algorithms that understand not just how clothing looks, but how it behaves.
Screens are excellent at showing silhouette and color. But they are far less convincing when it comes to weight, stiffness, drape and surface detail — the tactile qualities that determine whether a garment feels luxurious or flimsy, structured or fluid. Most garment-digitization tools today stop at shape.
“Right now, the AI systems I tested can convert images into 3D models of garments,” Aguilar says. “But if we talk about textures, it’s not something that’s covered and that’s what we wanted to work on.”
That limitation has real consequences. For designers, it means physical prototyping: cutting, sewing and discarding samples to test fabrics. For consumers, it fuels uncertainty, returns and waste. And for digital creators in gaming and entertainment, it results in clothing that looks right until it moves.
Aguilar’s project centers on testing and extending a model called ChatGarment, which uses computer vision to convert images — and in some cases text or video — into 3D digital garments. The tool performs well on simple designs like dresses and gowns, which dominate its original training data. However, Aguilar pushed it further, deliberately feeding it more complex garments with layered construction and intricate details.
“I mainly tested how this model would perform on garments outside of the data set they used originally,” he says. “For more complex details, it wasn’t well-trained to detect different patterns.”
As part of the Fulton Forge Student Research Expo, Aguilar worked under the supervision of Pavan Turaga, director of The GAME School and a professor of electrical engineering at ASU, to assemble a dataset of roughly 50 textures, photographing fabric types that challenge current models. The goal was not just to improve visuals, but to teach systems to recognize the fabric properties that influence how clothing moves and feels.
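To picture what such a dataset involves, consider how each texture photo might be paired with the labels a model needs. The manifest below is a minimal Python sketch; its file names, fields and property values are illustrative assumptions, not Aguilar's actual schema.

```python
import json
from pathlib import Path

# Hypothetical manifest for a small fabric-texture dataset. Each entry pairs
# a photo with a material label, a capture angle and coarse physical
# properties. Field names and values are illustrative, not the real schema.
MANIFEST = [
    {"image": "textures/denim_front.jpg", "material": "denim",
     "view_angle_deg": 0, "stiffness": "high", "sheen": "low"},
    {"image": "textures/silk_side.jpg", "material": "silk",
     "view_angle_deg": 45, "stiffness": "low", "sheen": "high"},
]

def load_samples(manifest, root="."):
    """Yield (image path, label dict) pairs, skipping any missing files."""
    for entry in manifest:
        path = Path(root) / entry["image"]
        if path.exists():
            labels = {key: value for key, value in entry.items() if key != "image"}
            yield path, labels

if __name__ == "__main__":
    for path, labels in load_samples(MANIFEST):
        print(path, json.dumps(labels))
```

Pairing photos with physical properties, rather than appearance alone, is what would let a model connect what fabric looks like to how it behaves.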

Where code meets cloth
The work grew out of the Fulton Schools Grand Challenges Scholars Program, where Aguilar first encountered research that applied computer vision to unconventional problems. Fashion was not an obvious destination.
“I’m not a fashion-inclined person,” he says. “So, this was something new for me and I think that’s part of why I decided to take it.”
That adventurous inclination led him into a collaboration with Galina Mihaleva, a celebrated fashion technologist and artist who is an associate professor at ASU FIDM, where she teaches FSH 344 Fashion Design and Wearable Technology. Her class blurs the boundaries between design, engineering and social impact, encouraging students to embed sensors, microcontrollers and responsive materials into garments that communicate messages about health, the environment and identity.
“I teach a synergy between different disciplines,” Mihaleva says. “Designers have to negotiate between aesthetics and technology. Engineers have to think about bodies, materials and meaning.”
Last November, Aguilar supported Mihaleva’s work at the Scottsdale Waterfront Wearable Technology Fashion Showcase by helping students implement interactive elements through code. In one project, moisture sensors triggered water pumps that activated hydrochromic paint, allowing garments to visibly respond to environmental conditions. His role ranged from troubleshooting hardware to writing software that synchronized movement, color and data.
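The showcase's exact hardware and firmware aren't described here, but the sensor-to-pump logic is simple to sketch. The following Python sketch assumes, purely for illustration, a Raspberry Pi running the RPi.GPIO library with a digital moisture sensor and a relay-driven pump; the pin numbers and the sensor's active-low polarity are assumptions.

```python
import time
import RPi.GPIO as GPIO  # assumes a Raspberry Pi stand-in for the show's microcontrollers

MOISTURE_PIN = 17  # digital output of a moisture sensor module (pin numbers assumed)
PUMP_PIN = 27      # relay channel driving a small water pump

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOISTURE_PIN, GPIO.IN)
GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        # Many hobby moisture modules pull their output low when wet.
        # When moisture is detected, pulse the pump so water reaches the
        # hydrochromic paint and visibly shifts the garment's color.
        if GPIO.input(MOISTURE_PIN) == GPIO.LOW:
            GPIO.output(PUMP_PIN, GPIO.HIGH)
            time.sleep(2)        # brief pulse rather than a continuous flow
            GPIO.output(PUMP_PIN, GPIO.LOW)
        time.sleep(0.5)          # poll the sensor twice per second
finally:
    GPIO.cleanup()               # release the pins on exit
```

Synchronizing several such garments with movement, color and data, as Aguilar did, layers timing and communication on top of this basic loop.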
“Jesus brought extraordinary dedication, intelligence and generosity to our fashion show,” Mihaleva says. “His guidance elevated the entire production, and his professionalism and creativity left a lasting impression on the team.”
That studio experience reinforced Aguilar’s view of AI as a creative partner rather than a replacement for human judgment.
“I didn’t want to do something just for the sake of making things fast,” he says. “It’s not about eliminating tasks that are vital to the creative process. It’s about giving designers tools that let them experiment.”
In practical terms, that means letting designers test textures digitally, seeing how a change of material affects drape and structure, before committing to physical fabric. It also opens the door to more sustainable workflows, reducing waste by limiting unnecessary samples and unsold inventory.
The implications extend beyond fashion retail. In gaming, film and virtual reality, realistic clothing motion is one of the hardest details to simulate convincingly. Texture-aware garment models could make digital characters seem more human, while reducing the time artists spend manually tweaking animations.

Tailored for the future
Ross Maciejewski, director of the School of Computing and Augmented Intelligence, says Aguilar’s work is emblematic of a broader shift in computer science education.
“Computer science is everywhere,” Maciejewski says. “From how we shop and design clothes to how we care for the vulnerable and protect the planet, future computer scientists like Jesus will make contributions to all aspects of life.”
Teaching machines to understand texture is not trivial. It requires richer training data, multiple viewing angles, material labels and new ways to translate visual cues into physical behavior. Aguilar is clear-eyed about the challenges.
“One of the things that surprised me was how few studies and reliable tools there are in this area,” he says. “There’s a lot of work to be done.”
For now, Aguilar continues to expand his dataset, refine model performance and collaborate across disciplines. His future path remains open.
“If I had told myself before college that I’d be working on computer vision and fashion, I would have said, ‘What are you doing?’” he says.
Still, his interest in computer vision has only grown.
As fashion becomes increasingly digital — from online shopping to virtual runways — the ability to translate texture may prove as important as translating shape. Aguilar’s work suggests a future where garments can be tested, worn and understood before they ever exist physically. A future where texture finally makes the leap from fabric to screen and where the gap between what clothing looks like and how it feels begins to close.
