Sketches and models are an essential stage in the engineering design process. However, 2D computer-aided design models do not always have the precision needed to solve complex problems.
For several years, engineers worldwide have been increasingly using 3D modeling to enhance the efficacy and aesthetics of their designs. By manipulating polygons, edges, and vertices in a 3D environment, 3D modeling brings designs to life in a three-dimensional space.
Because the human imagination plays such a vital role in design, 3D modeling is a complex skill that takes a lot of time and patience to master. But recent advances in AI by two major research teams could soon pave the way for generating 3D models from 2D images, which could revolutionize the design process not just for engineers, but for architects, animators, and video game designers too.
A team of Google researchers has created a technology that can combine thousands of tourist photos into accurate 3D renderings that let you explore the entire scene from different angles.
This is known as "NeRF in the Wild" or "NeRF-W" because it takes an existing idea called Neural Radiance Fields (NeRF) from previous research by Google and applies it to thousands of images found on websites such as Flickr.
NeRF-W is an advanced, neural-network-driven system that recovers the geometry of a scene while filtering out transient elements in the photos, such as passers-by or cars. It can also smooth out the lighting variations that arise when the source images were taken at different times of day.
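Neither paper's code is reproduced here, but the core NeRF idea can be sketched in a few lines: the scene is represented as a function that maps a 3D point to a colour and a density, and a pixel is rendered by marching along its camera ray and compositing the samples. The toy sketch below (my own illustration, not the researchers' code; the learned neural network is replaced by a hand-written `red_sphere` function, and all names are invented for this example) shows that volume-rendering step:

```python
import math

def render_ray(field, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Numerically integrate colour along one camera ray.

    `field(point)` returns (rgb, density) at a 3D point. Each sample
    contributes its colour weighted by its opacity and by the
    transmittance (how much light survives the samples in front of it).
    """
    step = (far - near) / n_samples
    transmittance = 1.0
    colour = [0.0, 0.0, 0.0]
    for i in range(n_samples):
        t = near + (i + 0.5) * step
        point = [o + t * d for o, d in zip(origin, direction)]
        rgb, density = field(point)
        alpha = 1.0 - math.exp(-density * step)   # opacity of this segment
        weight = transmittance * alpha
        colour = [c + weight * ch for c, ch in zip(colour, rgb)]
        transmittance *= 1.0 - alpha              # light blocked so far
    return colour

def red_sphere(point):
    """Toy scene standing in for the trained network: a solid red ball of
    radius 1 at the origin, empty space everywhere else."""
    inside = math.hypot(*point) < 1.0
    return ([1.0, 0.0, 0.0], 20.0) if inside else ([0.0, 0.0, 0.0], 0.0)
```

A ray aimed at the ball, `render_ray(red_sphere, [0, 0, -2], [0, 0, 1])`, accumulates almost pure red, while a ray that misses stays black. In the real system the scene function is a trained neural network queried millions of times per image, and NeRF-W additionally conditions it on per-image appearance information so that the lighting differences between photos are absorbed rather than baked into the geometry.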
Nvidia researchers recently developed an AI system capable of generating a full 3D model from any 2D image.
Named "DIB-R," the algorithm takes a 2D picture of an object, such as a car, and predicts what it would look like in three dimensions. DIB-R stands for 'differentiable interpolation-based renderer,' meaning it takes the flat image and fills in the missing shape using assumptions grounded in a 3D understanding of the world. This is remarkably similar to how the 2D feedback we get from our eyes is transformed into a 3D mental picture in our minds.
Unlike the Google research outlined above, this technology also accounts for the image's texture and depth. One day, the team hopes, it will allow AI to build completely immersive 3D worlds in milliseconds from nothing more than a bank of 2D images.
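The word "differentiable" is doing the heavy lifting here: because every pixel of the rendered image is a smooth function of the 3D parameters, the renderer can be run "backwards," nudging the 3D shape by gradient descent until its rendering matches the 2D photo. A minimal one-parameter sketch of that principle (my own illustration, not Nvidia's code: a soft-edged disc whose radius is recovered from its rendered silhouette):

```python
import math

def soft_rasterize(radius, size=32, tau=1.5):
    """Render a soft-edged disc: each pixel's intensity is a sigmoid of its
    distance to the circle boundary, so the image is differentiable with
    respect to the radius (a hard edge would give zero gradient)."""
    c = (size - 1) / 2.0
    return [[1.0 / (1.0 + math.exp((math.hypot(x - c, y - c) - radius) / tau))
             for x in range(size)]
            for y in range(size)]

def fit_radius(target, radius=4.0, lr=0.05, steps=200, size=32, tau=1.5):
    """Recover the radius of the disc shown in `target` by gradient descent
    on a pixel-wise squared error, using the sigmoid's analytic gradient."""
    c = (size - 1) / 2.0
    for _ in range(steps):
        grad = 0.0
        for y in range(size):
            for x in range(size):
                d = math.hypot(x - c, y - c)
                s = 1.0 / (1.0 + math.exp((d - radius) / tau))
                # d(pixel)/d(radius) = s * (1 - s) / tau
                grad += 2.0 * (s - target[y][x]) * s * (1.0 - s) / tau
        radius -= lr * grad
    return radius
```

Starting from a wrong guess of 4, `fit_radius(soft_rasterize(10.0))` recovers a radius close to 10 purely by comparing rendered pixels with the target image. DIB-R applies the same principle at scale, optimizing full meshes along with their texture and lighting instead of a single radius.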
Why this is good news for engineers
Using machine learning to create 3D models could improve the engineering design process in the following ways:
- It is more representative of your final product.
Although experienced engineers can easily envision a completed design from a 2D drawing alone, your customers or less experienced colleagues may struggle to imagine what the final design will look like. A 3D model goes some way toward helping stakeholders visualize your idea clearly.
- It's easier to sell your idea.
Since 3D models offer a better representation of a design, they are more persuasive than 2D drawings when you need to impress a client or help them picture a project. Suppose, for example, that you're trying to sell your design to a manufacturer. A 3D model will get the quality of your design across far more easily than any two-dimensional drawing could.
- It's easier to find potential errors in your design.
3D models could protect your business from expensive mistakes by helping you and others catch design errors before production begins. A 2D drawing is more abstract, and errors that slip through unnoticed can result in substantial costs once manufacturing is under way.
The research featured in this piece could one day give engineers the design tools they need to finish projects faster, more efficiently, and closer to budget than ever before.