
Researchers have developed a benchmark for measuring the effectiveness of AI tools in additive manufacturing.
AI technology is popping up everywhere these days, and certainly in the 3D print world. There are several areas of possible application, ranging from advanced slicing processes to spaghetti detection.
One area of increasing concern as the number of end-use production parts grows is quality control. Production parts must meet a series of engineering tolerances, and whether they do is largely determined by the choice of print parameters.
The problem is that there are so many print parameters to tweak, and the optimum values vary depending on the specific part being considered. Several parties are now using AI tech to develop tools that attempt to optimize the print parameters.
That's a worthwhile objective, but how well do these tools actually work? It's a complex question, because the AI systems must handle widely varying part geometries and 3D printing systems.
The researchers found that prior work has explored a variety of AI applications in the 3D print space, but typically in a narrow fashion. For example, most studies used ChatGPT, even though many other AI models exist.
The researchers sought to find a way to measure the effectiveness of these tools. They explain:
“This study aims to evaluate the effectiveness of LLMs on a range of FDM-specific tasks by establishing a comprehensive benchmark dataset, referred to as FDM-Bench. FDM-Bench addresses two critical areas: user query response and G-code anomaly detection, both of which are essential for improving print quality and providing effective support for FDM users.
To ensure a thorough evaluation, FDM-Bench includes queries representing a broad spectrum of user expertise, from beginners to advanced researchers in FDM technology. Additionally, we generate G-codes with various types of anomalies and create multiple samples for each type by adjusting different parameters. This approach enables a comprehensive assessment of the models’ capabilities in accurately detecting these defects.”
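The paper's anomaly-generation process isn't spelled out in this article, but the core idea — injecting a controlled, labeled defect into otherwise valid G-code — is easy to picture. Here's a minimal sketch of one such injection, a thermal anomaly created by rewriting hotend temperature commands. The function name, the chosen temperature, and the filenames are all illustrative assumptions, not the FDM-Bench authors' actual code:

```python
import re

def inject_temp_anomaly(gcode_lines, bad_temp=150):
    """Rewrite hotend temperature commands (M104 / M109) to a value far
    below a typical working range, yielding one labeled 'thermal anomaly'
    sample. Hypothetical sketch, not the FDM-Bench generation code."""
    out = []
    for line in gcode_lines:
        if line.startswith(("M104", "M109")):  # set / set-and-wait hotend temp
            line = re.sub(r"S\d+(\.\d+)?", f"S{bad_temp}", line)
        out.append(line)
    return out

# Hypothetical usage: derive an anomalous sample from a clean print file.
with open("benchy.gcode") as f:
    clean = f.read().splitlines()

anomalous = inject_temp_anomaly(clean, bad_temp=150)
with open("benchy_thermal_anomaly.gcode", "w") as f:
    f.write("\n".join(anomalous))
```

Varying the injected parameter (here, the temperature) produces multiple samples per anomaly type, which is what allows the benchmark to probe how reliably a model detects each defect class.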
The new benchmark was applied to four current state-of-the-art LLMs: GPT-4o, Claude 3.5 Sonnet, Llama 3.1-70B, and Llama 3.1-405B. The researchers found that the larger models tended to detect problems in G-code better than the smaller open-source models.
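The evaluation harness itself isn't shown here, but the heart of such a test is straightforward: hand each model a G-code excerpt and ask it to classify any anomaly. A minimal sketch using the OpenAI Python client to query GPT-4o might look like the following; the prompt wording, the category list, and the snippet truncation are assumptions, not FDM-Bench's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed label set for illustration; the paper defines its own anomaly types.
CATEGORIES = ["none", "thermal", "under-extrusion", "layer shift"]

def classify_gcode(snippet: str) -> str:
    """Ask the model whether a G-code excerpt contains an anomaly.
    Illustrative prompt; the benchmark's real prompts may differ."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are an FDM 3D printing expert. Classify the "
                        f"G-code below as one of: {', '.join(CATEGORIES)}. "
                        "Answer with the category name only."},
            {"role": "user", "content": snippet},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Hypothetical usage with the anomalous file generated above.
with open("benchy_thermal_anomaly.gcode") as f:
    print(classify_gcode(f.read()[:8000]))  # truncate to stay within context limits
```

Scoring is then just a matter of comparing the model's answer against the known injected label across many samples and anomaly types.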
The work is comprehensive and could be the first of a line of AI benchmarks for 3D printing. AI is still very new to the industry, and it’s not even clear where the application of the technology will provide the most benefits.
Benchmarks will eventually become important in this area. As AI tech increasingly drives 3D printing, there must be a way to determine how well it works. Otherwise, we’ll have to rely on marketing statements, which is never a good idea.
Via arXiv