One of Filippo Nassetti’s AI-generated eyewear concepts Image: Filippo Nassetti
You might not consider glamorous fashion eyewear and the large data sets of artificial intelligence (AI) to be particularly well-suited bedfellows, and to a certain extent you would be correct. But the emerging technology’s reach extends further than you might think.
While AI is having a significant impact on clinical technology and lens manufacturing, its use in eyewear design is more tentative, at least for the moment. Platforms like ChatGPT may have grabbed the headlines for their ability to create human-like text based on context and past conversations, but text-to-image AI software such as Dezgo, OpenAI and DeepAI has caught the attention of designers inside and outside the eyewear sector.
While very much an exploration of concept and potential, architect and computational designer Filippo Nassetti’s AI Eyewear project highlights a new way in which designers are finding inspiration. Nassetti’s conceptual collection has been realised using the text-to-image tool Midjourney and is part of a longer project that has also explored headwear and masks. The collection is inspired by biomorphic patterns and textures found in biological and mineral structures; each frame is created by asking Midjourney to compose it from natural forms, such as the growth of fibrous structures or the corrosion of metal.
‘While I still value the physical object as the main outcome of my process, I enjoy leaning into the production of concepts with AI,’ says Nassetti. ‘These tools are allowing the designer to widely expand the initial pool of concepts, out of which to eventually select something to be developed.’
Nassetti acknowledges the disconnect between his concepts and both the functional requirements of eyewear products and the methods needed to manufacture them, but says this can still be an important part of the creative process.
‘This disconnect can also become a way to explore new ideas that would otherwise be left behind. For instance, in eyewear design, a distinction between lenses and frame is generally axiomatic: you start the design by thinking of them as separate.
‘AI productions blur the boundaries; patterns appear that beautifully merge parts in a continuous whole. Instead of reading it as a mistake, this can be a starting point for a radical re-thinking of the product.’
Nassetti says the speed at which designers can work through ideas with AI tools like Midjourney can expand their natural imagination. He concludes: ‘The challenge I see for the future will be how to match such velocity with the workflows of design development, from 3D modelling to the production and refinement of prototypes.’
On the fence
Freelance eyewear designer Andy Sweet has seen some of the designs created by platforms like Midjourney, but is yet to be convinced that their use would speed up his own processes. As such, Sweet does not use AI in any of his eyewear projects.
‘I’ve seen AI create very cool images of frames, but in some cases those frames would clearly come up against quite serious manufacturing challenges, with potentially significant cost implications, or would be impossible to wear,’ says Sweet.
‘I think in its current guise AI could be useful for coming up with numerous design concepts quickly; however, the challenge would then be to translate these into anatomically correct, wearable frames that are cost-effective to manufacture. I could see myself using AI for repetitive and non-creative tasks, for example creating technical drawings from designs, generating mood boards either for clients or for personal inspiration, rendering designs, and so on.’
The emergence of AI and its potential power has caused concern across many industries and even though use in the eyewear sector is limited, there are still use cases that could cause problems.
‘A potential issue with using AI is copyright infringement deriving from the software scraping the web as part of its “inspiration”,’ says Sweet. ‘If AI incorporates a motif or design feature that is copyrighted without the designer or client being aware of its origin, then this could have serious legal implications, which may not come to light until after the frames go on sale.’
He adds that another issue could come from showing attention-grabbing images to clients because they would want that exact frame despite the manufacturing and wearability pitfalls that come with it.
TD Tom Davies managing director Tom Davies also has concerns about AI, especially when it comes to knowledge-sharing. ‘I was asked to work with Ai-Da and while it did interest me, I also felt slightly alarmed teaching an AI platform 23 years of bespoke eyewear production knowledge,’ says Davies.
‘I suspect there will be an app for it in a few years and there will be a market for it, but it will be at the low end of things,’ he predicts. ‘3D printing will continue to get cheaper and younger people will get more adventurous, with creative apps driven by AI dominating the direct-to-consumer eyewear market. This won’t be at the high end, though. People will crave the personal touch and opticians will still need to take care of eye health with enhanced clinical testing that the smartphone-based systems in the pipeline won’t be able to match. That said, I don’t think it will be a quick seismic shift, it will happen over a decade or so and evolve just as our industry always has.’
The right fit
With the growth of online retail in eyewear over the past decade, the need for reliable virtual try-on (VTO) systems has increased exponentially. Cast your mind back to the VTO systems of seven or eight years ago and you will probably remember images or live views featuring frames with no base curve that looked like they had floated on to the subject’s face, then disappeared at the slightest movement of the head.
Today’s VTOs are much more realistic and reliable thanks to artificial intelligence and, in particular, the branch known as machine learning, in which data and algorithms are used to imitate the way humans learn, gradually improving the system’s accuracy over time.
Fittingbox (pictured right), which lists Marchon, Alain Afflelou, Essilor and Fielmann among its clients, was one of the first to launch the ‘virtual mirror’ concept, which allows consumers to try glasses on via a desktop or mobile device, in real time, through augmented reality.
The company’s use of machine learning and deep learning has elevated its VTO platform to include diminished reality functionality that overcomes the issue of myopic patients not being able to clearly see the mobile device or screen on which they are using the virtual mirror.
The Frame Removal tool’s deep learning algorithms detect each pixel belonging to the glasses worn by the user in a process known as semantic segmentation. Using a DeepLabV3+ algorithm, the pixels are classified into three categories: background, lenses and frame. Once this is complete, the frame is removed from view.
The data set used by the Frame Removal algorithm has already been captured by the company’s own existing VTO software, which generates images of glasses in augmented reality based on certain ‘ground truths’, such as facial features and landmarks. The ability to learn which pixels belong to frames, and then determine that they are not ground truths, facilitates recognition of the physical frame so it can be removed in real time from the VTO view without the user having to take their glasses off.
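The pixel-labelling step described above can be illustrated in miniature. The sketch below is a hypothetical simplification, not Fittingbox’s implementation: it assumes a segmentation model (such as DeepLabV3+) has already assigned each pixel one of the three classes the article names, and simply shows how a frame mask could then be used to blank out frame pixels.

```python
import numpy as np

# The three pixel classes described in the article. In a real system,
# a trained segmentation network (e.g. DeepLabV3+) would produce these
# labels; here we fabricate a tiny label map by hand for illustration.
BACKGROUND, LENSES, FRAME = 0, 1, 2

def remove_frame(image, labels, fill_value=0):
    """Blank out every pixel classified as frame.

    image  -- H x W x 3 array of RGB values
    labels -- H x W array of per-pixel class labels
    A production system would inpaint the masked region rather than
    fill it with a flat value.
    """
    result = image.copy()
    result[labels == FRAME] = fill_value
    return result

# Toy 2x3 'photo' and a hand-made label map
image = np.arange(18, dtype=np.uint8).reshape(2, 3, 3)
labels = np.array([[BACKGROUND, FRAME, LENSES],
                   [FRAME, LENSES, BACKGROUND]])

cleaned = remove_frame(image, labels)
```

After this step, only background and lens pixels survive untouched; filling (or, realistically, inpainting) the frame pixels is what lets the virtual mirror show a different pair of glasses on a face that is physically still wearing one.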