Competition in AI is no longer just about building smarter models. It is also about how reliable those models are on real-world tasks. Recently, a senior AI leader at AMD made headlines by openly criticizing Claude Code, saying it “cannot be trusted to perform complex engineering tasks.”
This kind of public criticism is rare, especially between major players in the tech industry. But it highlights a growing concern: can AI tools truly handle high-stakes, real-world engineering work?
What Sparked the Criticism?
According to reports, the comment followed months of internal frustration at AMD, where engineers working with AI coding tools repeatedly ran into limitations on complex, multi-step engineering problems.
While AI coding assistants have improved rapidly, they still struggle with:
- Deep technical accuracy
- Long, multi-step reasoning
- Understanding complex system dependencies
This is where the gap between “helpful assistant” and “reliable engineer” becomes clear.
What Is Claude Code?
Claude Code is a developer-focused AI tool built by Anthropic. It is designed to:
- Help write and debug code
- Assist with software development tasks
- Improve developer productivity
For many developers, tools like Claude Code are already useful for:
- Writing boilerplate code
- Explaining code snippets
- Speeding up simple tasks
But when it comes to complex engineering workflows, expectations are much higher.
Why This Matters
This criticism is not just about one tool. It reflects a bigger issue in the AI industry.
1. AI vs Real Engineering Work
AI tools are great at generating code quickly. But real engineering involves:
- Understanding system architecture
- Handling edge cases
- Ensuring reliability and safety
AI still struggles with these areas.
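To make the edge-case point concrete, here is a hypothetical Python sketch (not taken from AMD's code or any specific model output): a plausible quick first draft next to the reviewed version a human engineer would insist on.

```python
def average_naive(values):
    """A typical fast-generated draft: correct on the happy path only."""
    return sum(values) / len(values)  # raises ZeroDivisionError on an empty list


def average_safe(values):
    """The reviewed version: the empty-input edge case is handled explicitly."""
    if not values:
        return 0.0  # or raise ValueError, depending on the agreed contract
    return sum(values) / len(values)


print(average_safe([]))      # 0.0
print(average_safe([2, 4]))  # 3.0
```

Both functions look correct at a glance; the difference only shows up on an input the draft never considered, which is exactly the kind of gap code review is meant to catch.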
2. Trust Is the Real Challenge
For AI to be widely adopted in engineering, it must be trusted.
If developers feel that:
- AI outputs are unreliable
- Mistakes are hard to detect
- Results require constant verification
then AI becomes more of a helper than a replacement.
3. High Stakes Require High Accuracy
In industries like semiconductors, errors are costly. A small mistake in code or design can lead to:
- Hardware failures
- Financial losses
- Delays in product development
This is why companies like AMD are cautious about relying too heavily on AI tools.
The Bigger AI Tooling Battle
The AI coding space is becoming highly competitive. Companies like OpenAI and Anthropic are building tools aimed squarely at developers.
The focus is shifting from “Can AI write code?” to “Can AI write code you can trust?”
This is a much harder problem.
What Developers Should Take Away
If you are using AI coding tools today, here is the practical takeaway:
- Use AI for speed, not final decisions
- Always review and test AI-generated code
- Avoid relying on AI for critical system design
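“Always review and test” can be as lightweight as a few assertions. Here is a minimal sketch, assuming a hypothetical AI-generated `slugify` helper: before merging it, you pin down the behavior you actually need, including edge cases the model may not have considered.

```python
import re


def slugify(text):
    """Hypothetical AI-generated helper: lowercase, hyphen-separated slug."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


# Review step: encode your expectations as tests instead of trusting the output.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced  out  ") == "spaced-out"
assert slugify("") == ""  # empty input should not crash
print("all checks passed")
```

The tests take minutes to write, and they turn “the AI said this works” into something you have actually verified.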
AI is a powerful assistant, but it is not yet a fully reliable engineer.
What Comes Next?
Criticism like this will likely push AI companies to improve their tools faster. We can expect:
- Better reasoning capabilities
- Improved accuracy in complex tasks
- Stronger validation and testing features
The goal is clear: move from “helpful” to “dependable.”
Final Thoughts
AMD’s public criticism of Claude Code is a reminder that AI still has limits. While the technology is advancing quickly, trust and reliability remain key challenges.
The future of AI in engineering will depend not just on how fast it can generate code, but on how well it can get things right.
