Sketches and models are an essential stage in the engineering design process. However, 2D computer-aided design models do not always have the precision needed to solve complex problems.
For several years, engineers worldwide have increasingly used 3D modeling to enhance both the effectiveness and the aesthetics of their designs. By letting designers manipulate polygons, edges, and vertices in a virtual environment, 3D modeling brings a design to life in three-dimensional space.
Because human imagination plays such a vital role in design, 3D modeling is a complex skill that takes a lot of time and patience to master. But recent advances in AI by two major research teams could soon pave the way for generating 3D models from 2D images, which could revolutionize the design process not just for engineers, but for architects, animators, and video game designers too.
Google
A team of Google researchers has created a technology that combines thousands of tourist photos into an accurate 3D rendering that lets you explore the entire scene from different angles.
This is known as "NeRF in the Wild" or "NeRF-W" because it takes an existing idea called Neural Radiance Fields (NeRF) from previous research by Google and applies it to thousands of images found on websites such as Flickr.
NeRF-W is an advanced, neural-network-driven platform that extracts geometric scene information while eliminating transient elements, such as people or cars, from the photos. It can also smooth out the lighting variations that come from analyzing images taken at different times of day.
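To make the underlying idea concrete, here is a minimal sketch of NeRF's core mechanism, not Google's implementation: a radiance field maps every 3D point to a density and a color, and a pixel's color is composited from samples taken along its camera ray. The `field_fn` below is a hand-made stand-in for the neural network that NeRF would actually train from photographs.

```python
import numpy as np

def field_fn(points):
    """Stand-in radiance field: density peaks near the point (0, 0, 2)."""
    sigma = np.exp(-np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1))
    color = np.clip(points * 0.25 + 0.5, 0.0, 1.0)  # arbitrary smooth coloring
    return sigma, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Composite one pixel's color along a ray (NeRF's volume-rendering rule)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # sample points on the ray
    sigma, color = field_fn(points)
    delta = (far - near) / n_samples                  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)              # opacity of each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving so far
    weights = alpha * trans                           # contribution of each sample
    return (weights[:, None] * color).sum(axis=0)     # final RGB for the pixel

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))  # one camera ray
```

Training the real network amounts to adjusting `field_fn` so that rendered pixels match the tourist photos; NeRF-W additionally learns to explain away per-photo lighting and transient objects.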
Nvidia
Nvidia researchers recently developed an AI system capable of generating a full 3D model from a single 2D image.
Named "DIB-R," the algorithm takes a 2D picture of an object, such as a car, and predicts what it would look like in 3D. DIB-R stands for 'differentiable interpolation-based renderer': it takes the 2D image and makes inferences based on a 3D understanding of the world. This is remarkably similar to how the 2D feedback we get from our eyes is transformed into a 3D mental picture in our minds.
Unlike the Google research outlined above, this technology also accounts for the image's texture and depth. One day, the team hopes, the technology will allow AI to build fully immersive 3D worlds in milliseconds from nothing more than a bank of 2D images.
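The word "differentiable" is the key to DIB-R. The toy example below illustrates the principle, not Nvidia's algorithm: render a shape softly, so that every pixel is a smooth function of the shape's parameters, and gradients of an image-space loss can then adjust those parameters. Here the "shape" is just a circle's radius; in DIB-R it would be a full 3D mesh.

```python
import numpy as np

def render_soft(radius, size=32, sharpness=10.0):
    """Soft coverage mask of a centered circle; differentiable in `radius`."""
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.hypot(xs - size / 2, ys - size / 2)
    z = np.clip(sharpness * (radius - dist), -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))               # sigmoid edge instead of a hard one

target = render_soft(10.0)                        # the "photograph" we want to match
radius = 4.0                                      # initial shape guess
loss = lambda r: ((render_soft(r) - target) ** 2).mean()
for _ in range(400):
    eps = 1e-3                                    # numerical gradient, for brevity
    grad = (loss(radius + eps) - loss(radius - eps)) / (2 * eps)
    radius -= 3.0 * grad                          # gradient step on the shape parameter
```

After a few hundred steps the radius settles near the target value. DIB-R applies the same idea to mesh vertex positions, colors, textures, and lighting, with the gradients computed analytically rather than numerically.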
Why this is good news for engineers
Using machine learning to create 3D models could streamline the engineering design process from concept to finished product. The research featured in this piece could one day give engineers the design tools they need to finish projects faster, more efficiently, and closer to budget than ever before.