Embracing the Text-to-CAD Revolution: An Insightful Reader’s View

By Kerry Stevenson on May 18th, 2023 in Ideas, news


“Pig wearing backpack” AI-generated 3D model [Source: Sketchup]

A reader posted their thoughts on the upcoming Text-to-CAD revolution.

One of our readers recently shared their opinions on the exciting, imminent Text-to-CAD, or Text-to-3D, revolution. This cutting-edge AI concept remains relatively unexplored but promises to be a game changer.

It’s a quantum leap from the well-established “Text-to-Image” services, which have the power to spawn stunning visuals from simple text prompts. While these services have empowered users to create high-quality imagery in an instant for a plethora of applications, they’ve also sparked some controversy. Critics argue that Text-to-Image services constitute a form of theft, as the AI algorithms are primarily trained on human-created images.

The debate over this issue is a conversation for another day. However, it’s worth noting that similar methods can be employed for generating 3D CAD files, which are essentially just another type of digital information, not unlike an image. If prompts can be used to conjure images, they can certainly be leveraged to create 3D CAD models.

The Journey Towards Text-to-CAD: Progress and Predictions

Several enterprises are currently investing their resources into developing this technology. While it’s true that progress in this area lags behind the advancements made by image-based services, it’s essential not to lose sight of the bigger picture. Technological breakthroughs are occurring at an unprecedented pace, and it’s more than likely that we’ll see powerful 3D model generators in the not-too-distant future.

Reginald Raye, one of our readers, was so inspired by our coverage of Text-to-CAD that he penned his own “Risks and Opportunities” piece. Raye and I concur that the emergence of this technology is inevitable. However, he highlights several potential hazards associated with its adoption:

  • Mass production of weapons or offensive content
  • A deluge of low-quality models flooding model repositories
  • Infringement of copyright protections
  • A dominance of western design principles over non-western traditions, a form of "digital colonialism"

While these issues indeed pose risks for Text-to-CAD, they mirror problems already present today, albeit on a potentially larger scale due to increased speed. AI, much like atomic energy, is a potent tool that can be wielded for both good and ill.

Shaping the Future of Text-to-CAD: Suggestions for a Robust System

Raye suggests four strategies that could aid in the creation of a more beneficial and effective Text-to-CAD system:

Curation: Prioritize training with high-quality 3D models, rather than the deluge of substandard models often found in open-access online repositories.

Parameterization: Ensure the generated 3D models are parameterized, so that users can specify exact dimensions for the part rather than accepting fixed geometry.

Design patterns: Allow standard, practical features to be invoked by name, such as an ergonomic grip handle that can be applied across many different models.

Filtering: Implement safeguards in generators to prevent the creation of inappropriate or harmful content.
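The parameterization idea can be illustrated with a toy sketch. Instead of emitting fixed geometry, a generator could return a template whose key dimensions remain adjustable, with sanity checks enforced before export. Everything below is hypothetical, a minimal Python illustration of the concept, not any real Text-to-CAD system's API:

```python
from dataclasses import dataclass


@dataclass
class ParametricCylinder:
    """A hypothetical parameterized output from a Text-to-CAD generator.

    Rather than fixed vertices, the model exposes named dimensions that a
    user can adjust after generation without re-running the AI.
    """
    radius_mm: float = 10.0
    height_mm: float = 25.0
    wall_mm: float = 2.0

    def validate(self) -> None:
        # Basic sanity checks a generator might enforce before export.
        if self.wall_mm <= 0 or self.radius_mm <= self.wall_mm:
            raise ValueError("wall must be positive and thinner than the radius")

    def inner_radius_mm(self) -> float:
        # Derived dimensions update automatically when parameters change.
        return self.radius_mm - self.wall_mm


# The user resizes the generated part by changing parameters:
cup = ParametricCylinder(radius_mm=40.0, height_mm=90.0, wall_mm=3.0)
cup.validate()
print(cup.inner_radius_mm())  # 37.0
```

The point of the sketch is the separation of concerns: the AI proposes the shape and its parameter set, while exact measurements stay in the user's hands.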

The advent of Text-to-CAD technology is imminent, whether we’re ready for it or not. It’s heartening to see more people pondering the implications of this transformative tech. I recommend delving into Raye’s comprehensive post on this subject.

Via Tomo

By Kerry Stevenson

Kerry Stevenson, aka "General Fabb" has written over 8,000 stories on 3D printing at Fabbaloo since he launched the venture in 2007, with an intention to promote and grow the incredible technology of 3D printing across the world. So far, it seems to be working!

2 comments

  1. Part 2, with regards to the strategies, I mostly agree. I want to add that curation can be automated. There are also tools that can improve the quality of models, both on the input and output sides. You could train an AI to identify different flaws in models, and either reject the models or offer corrections. I agree with parameterization and design patterns. My stance on filtering is covered in the “offensive content” point of my Part 1 comment. I agree that harmful content should be filtered at least in public systems, but let’s not get over-zealous with the censorship. Let people make what they want for their own private use. Filtering would be more useful in the curating step, removing low-quality models.
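The automated-curation idea in the comment above can be sketched as a simple quality gate: models that fail basic checks are rejected (or routed to repair) before ever reaching the training set. The field names and thresholds here are purely illustrative; a real pipeline would use geometric analysis or a learned flaw classifier rather than flat dictionaries:

```python
def curate(models, min_triangles=100, max_triangles=2_000_000):
    """Toy automated-curation pass over candidate training models.

    Keeps models that clear basic quality gates and rejects the rest.
    The boolean flags stand in for checks a geometry library or a
    trained flaw-detection model could compute.
    """
    kept, rejected = [], []
    for m in models:
        ok = (
            min_triangles <= m["triangles"] <= max_triangles
            and m["watertight"]            # printable solids must be closed
            and not m["self_intersects"]   # a common flaw a detector could flag
        )
        (kept if ok else rejected).append(m)
    return kept, rejected
```

Rejected models need not be discarded outright; as the comment notes, a second stage could attempt automated corrections and resubmit them to the same gate.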

  2. Part 1, addressing the potential hazards:

    – Weapons: AI could make the modeling step easier, but unless it is trained specifically on weapon parts, it would need to be guided step-by-step to create a useful geometry. That’s not something an amateur could do. Even after the modeling step, it’s still limited by the manufacturing step, which text-to-CAD wouldn’t address. Someone who knows what they’re doing can already make weapons without the AI (and with non-AM tools). Someone who doesn’t know what they’re doing isn’t likely to be helped much by text-to-CAD.
    – Offensive content: Different people find different things offensive and non-offensive. People who want to consume certain types of content don’t find that content offensive. If it’s for their own private use, I don’t think anyone has the right to restrict them.
    – Low quality models flooding repositories: The whole concept of repositories becomes obsolete with text-to-3D. You’ll just generate what you need when you need it. You would need to make sure that the generated models are of good enough quality, but there are ways to do that.
    – Copyright infringement: Even if the AI is trained on copyrighted materials, it is not copying it as is. It is generating new models, so I, for one, don’t see it as infringement. It may fall under “derivative work”, depending on how closely it resembles the originals. But IMO, derivative works should not be subject to copyright restrictions in general, because restrictions on derivative works limit innovation.
    – Dominance of western design: These systems are gradually becoming available for custom deployments, so you could train your own copy of the AI on eastern samples if you wanted eastern design. It’s also likely that as these systems proliferate, there will be demand for different design styles, so we will see AIs trained on more diverse datasets.
