
Coplay vs. Unity AI Assistant
May 15, 2025
Jos van der Westhuizen
The team at Unity has done a solid job upgrading Muse into their latest AI Assistant within Unity. We've taken it for a spin, and here are our thoughts on how it currently compares to Coplay.
Available Models
First and foremost, the models you can choose from are crucial. In Coplay, you can select any of the latest AI models, which is vital for performance—and we've seen Claude 3.7 outshine older models like GPT-4o by a significant margin. Unity keeps its model under wraps and doesn't offer users the ability to switch models themselves.

Bring Your Own Key
Unity doesn't seem to support a bring-your-own-key plan—they utilize a points system instead. Coplay, on the other hand, allows you to input your API key from Anthropic to pay only for what you use.

Modes
Unity’s AI Assistant features three modes: code, run, and ask. In code mode, it writes scripts for game interaction (e.g., “Could you help me write a script that saves high scores locally on the player’s device?”); in run mode, it executes specific actions or generates code (e.g., “Replace all objects named 'Tree' with the prefab in the attachment”); in ask mode, it answers basic questions about Unity.

Conversely, Coplay has four modes and continually updates them. Coplay automatically detects whether it should plan, chat, or act on a user’s request. The modes distinguish between the approaches you can take for any task: for example, Agent mode is more thorough and takes careful steps on complex tasks, whereas UI mode generates UI in a single AI step.

In the near future, Coplay will automatically switch between different modes during execution, acting like a team of specialized agents working together. Each agent handles its area of expertise and streams the results back to the main thread of your task.
Agent vs Chat
Coplay adopts an agentic approach—responses are more conversational, and users can steer the AI via natural language corrections. The agent assesses the context within your project and takes actions accordingly. Many users run long, background tasks—sometimes covering 250+ turns—while refining their game design decisions.

Unity uses the more classic chat-prompt approach, in which each thread with the AI covers a smaller task and favors fewer steps.

Context
Based on actions taken in our experiments, Unity seems to have functions that allow them to pull in the relevant context at the right time.
Coplay takes a similar approach. We also combine this with a specialized RAG approach for your Unity project. We parse all assets and scripts in your project to create relevant metadata (including visual analyses) and then embed these for fast lookup. In addition, our agents have visual access [link to post/video of visual feedback] to your project to ensure that the changes they make are what were intended.
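To make the retrieval idea concrete, here is a minimal, self-contained sketch of the general pattern—index project metadata, embed it, and look up the most relevant entries for a query. This is illustrative only, not Coplay's actual pipeline: the toy bag-of-words embedding, the function names, and the sample metadata are all assumptions; a real system would use a learned embedding model and a vector store.

```python
# Illustrative RAG-style lookup over Unity project metadata (not Coplay's
# real implementation): toy bag-of-words embeddings + cosine similarity.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts. A production system would
    use a learned embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(assets: dict) -> dict:
    """Map each asset path to an embedding of its metadata description."""
    return {path: embed(meta) for path, meta in assets.items()}

def lookup(index: dict, query: str, k: int = 2) -> list:
    """Return the k asset paths whose metadata best matches the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(index[p], q), reverse=True)
    return ranked[:k]

# Hypothetical metadata a project parser might produce.
project = {
    "Assets/Scripts/HighScore.cs": "C# script saving high scores to PlayerPrefs",
    "Assets/Prefabs/Tree.prefab": "prefab of a pine tree with LOD group",
    "Assets/Materials/Water.mat": "transparent water material with shader",
}
index = build_index(project)
print(lookup(index, "replace all tree objects with the prefab"))
# → ['Assets/Prefabs/Tree.prefab', 'Assets/Materials/Water.mat']
```

The key design point is that embeddings are computed once at index time, so each agent query only pays for embedding the query itself plus a fast similarity scan.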

Autocompletion and Pipelines
A common studio need is predicting upcoming actions and automating repetitive weekly tasks. Coplay addresses this with a feature enabling developers to record their in-Unity actions and replay them with natural language tweaks. It’s perfect for live operations—importing assets, creating prefabs, adjusting scriptable object properties, and more.
For quick, short-term tasks, Coplay’s tab-complete feature predicts subsequent changes—like name or position updates—letting you zip through edits with a simple keystroke.

Unity’s AI assistant does not offer these automation features.
Transparency
Early user feedback highlighted the importance of understanding what the AI is thinking and doing, much as DeepSeek pushed OpenAI to show model reasoning. With Coplay, you can view all reasoning steps and raw action results: no secrets. Unity’s assistant is less open in this regard.
Coplay also has a .coplayrules file, which lets users change the system prompt to better suit their needs and specific game-development habits. We’re working on a self-learning system that would continually improve Coplay for your own use cases as you use it, and the learned tricks would be fully visible to users so they can validate and edit them as they wish.
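As an illustration only (the exact format and these example rules are hypothetical, not taken from Coplay's documentation), a .coplayrules file might hold plain-language instructions that get folded into the system prompt:

```text
# .coplayrules — hypothetical example contents
Always place new scripts in the MyStudio.Gameplay namespace.
Prefer ScriptableObjects over hard-coded configuration values.
Never modify scenes under Assets/Scenes/Shipped/.
```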

Generators (UI and scene vs. animation and sound)
Unity has excellent new tools called generators that make it easy to produce specific assets for your game: textures, animations, and sounds. They allow fine-grained control over what you generate, and the AI Assistant can call them as needed from the main thread. In the future, we hope to access these generator tools from within Coplay as well.
Coplay, on the other hand, has equivalent tools for creating UI in Unity and for generating scenes. One nice bonus is that you can generate the UI or scene using existing assets and art from your project. Coplay also partners with popular AI tools such as Meshy, Hunyuan, and Tripo to enable generation of textures and 3D models.

Capabilities
Here’s a quick rundown of what each platform can do in the editor:
| Capability | Coplay | Unity AI Assistant |
| --- | --- | --- |
| Add components | ✅ | ❌ |
| Edit component properties | ✅ | ❌ |
| Create, edit, and place prefabs | ✅ | ✅ |
| Create, edit, and place game objects | ✅ | ✅ |
| Change project settings | ✅ | ❌ |
Conclusion
We’re excited about the Unity team’s launch of their AI Assistant and think it marks both a notable upgrade from Muse and a meaningful advance for the AI gaming space more broadly. Everything is progressing very fast right now, and their approach is different from ours: this version of AI Assistant leans toward a one-shot approach, where you give an instruction and it executes or responds, while Coplay offers a conversational agent that continually works in the background, adjusting as you steer it.
We’re collaborating closely with the Unity team to explore synergies between our tools, and the future looks promising. These are early days still and we can’t wait to see what Coplay can help unlock within Unity soon! If you want to check out what we’re building, please join us on Discord.