Canadian AI startup Cohere, cofounded by one of the authors of the original 2017 transformer paper that kickstarted the large language model (LLM) revolution, today unveiled Command A, its latest generative AI model designed for enterprise applications.
As the successor to Command-R, which debuted in March 2024, and the Command R+ that followed it, Command A builds on Cohere's focus on retrieval-augmented generation (RAG), external tool use and enterprise AI efficiency, especially with regard to compute and the speed at which it serves up answers.
That should make it an attractive option for enterprises looking to gain an AI advantage without breaking the bank, and for applications where prompt responses are needed, such as finance, health care, medicine, science and law.
With faster speeds, lower hardware requirements and expanded multilingual capabilities, Command A positions itself as a strong alternative to models such as GPT-4o and DeepSeek-V3 (classical LLMs, not the new reasoning models that have taken the AI industry by storm lately).
Unlike its predecessor, which supported a context length of 128,000 tokens (the amount of information the LLM can handle in a single input/output exchange, roughly equal to a 300-page novel), Command A doubles the context length to 256,000 tokens (equal to 600 pages of text) while improving overall efficiency and enterprise readiness.
It also comes on the heels of Cohere for AI, the company's non-profit research subsidiary, releasing Aya Vision, an open-source (for research use only) multilingual vision model, earlier this month.
A step up from Command-R
When Command-R launched in early 2024, it introduced key innovations like optimized RAG performance, better knowledge retrieval and lower-cost AI deployments.
It gained traction with enterprises, integrating into business solutions from companies like Oracle, Notion, Scale AI, Accenture and McKinsey, though a November 2024 report from Menlo Ventures surveying enterprise adoption put Cohere's market share among enterprises at a slim 3%, far below OpenAI (34%), Anthropic (24%) and even smaller startups like Mistral (5%).

Now, in a bid to become a bigger enterprise draw, Command A pushes these capabilities even further. According to Cohere, it:
- Matches or outperforms OpenAI's GPT-4o and DeepSeek-V3 on business, STEM and coding tasks
- Operates on just two GPUs (A100s or H100s), a major efficiency improvement compared to models that require as many as 32 GPUs
- Achieves faster token generation, producing 156 tokens per second, 1.75x faster than GPT-4o and 2.4x faster than DeepSeek-V3
- Reduces latency, with a 6,500 ms time-to-first-token, compared to 7,460 ms for GPT-4o and 14,740 ms for DeepSeek-V3
- Strengthens multilingual AI capabilities, with improved Arabic dialect matching and expanded support for 23 global languages.
Cohere notes in its developer documentation online that: "Command A is chatty. By default, the model is interactive and optimized for conversation, meaning it is verbose and uses markdown to highlight code. To override this behavior, developers should use a preamble which asks the model to simply provide the answer and to not use markdown or code block markers."
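For illustration, a preamble of that kind might be passed like this, a minimal sketch assuming the Cohere Python SDK's chat endpoint; the model ID and preamble wording are placeholder assumptions, not values from Cohere's announcement:

```python
# Minimal sketch of overriding Command A's chatty default with a preamble,
# assuming the Cohere Python SDK (pip install cohere). The model ID and
# preamble text below are illustrative assumptions.
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

response = co.chat(
    model="command-a-03-2025",  # assumed model ID for Command A
    preamble=(
        "Provide only the answer. Do not use markdown formatting "
        "or code block markers."
    ),
    message="List three common uses of retrieval-augmented generation.",
)
print(response.text)
```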
Built for the enterprise
Cohere has continued its enterprise-first strategy with Command A, ensuring that it integrates seamlessly into business environments. Key features include:
- Advanced retrieval-augmented generation (RAG): Enables verifiable, high-accuracy responses for business applications (see the sketch after this list)
- Agentic tool use: Supports complex workflows by integrating with enterprise tools
- North AI platform integration: Works with Cohere's North AI platform, allowing businesses to automate tasks using secure, enterprise-grade AI agents
- Scalability and cost efficiency: Private deployments are up to 50% cheaper than API-based access.
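As a rough illustration of the RAG feature above, a grounded request might look like the following sketch, which assumes the Cohere Python SDK's documents parameter; the model ID and document contents are placeholders, not examples from Cohere:

```python
# Rough sketch of a grounded (RAG) request: documents are passed alongside
# the question so the answer can be checked against them. Assumes the
# Cohere Python SDK; model ID and documents are placeholder assumptions.
import cohere

co = cohere.Client(api_key="YOUR_API_KEY")

docs = [
    {"title": "Q3 revenue report", "snippet": "Q3 revenue rose 12% year over year to $48M."},
    {"title": "Q3 expenses memo", "snippet": "Operating expenses held flat at $31M in Q3."},
]

response = co.chat(
    model="command-a-03-2025",  # assumed model ID for Command A
    message="Summarize Q3 financial performance in two sentences.",
    documents=docs,
)
print(response.text)       # grounded answer
print(response.citations)  # spans of the answer tied back to the documents
```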
Multilingual and highly performant in Arabic
A standout feature of Command A is its ability to generate accurate responses across 23 of the most widely spoken languages in the world, including improved handling of Arabic dialects. The supported languages (according to the developer documentation on Cohere's website) are:
- English
- French
- Spanish
- Italian
- German
- Portuguese
- Japanese
- Korean
- Chinese
- Arabic
- Russian
- Polish
- Turkish
- Vietnamese
- Dutch
- Czech
- Indonesian
- Ukrainian
- Romanian
- Greek
- Hindi
- Hebrew
- Persian
In benchmark evaluations:
- Command A scored 98.2% accuracy when responding in Arabic to English prompts, higher than both DeepSeek-V3 (94.9%) and GPT-4o (92.2%).
- It significantly outperformed rivals on dialect consistency, achieving an ADI2 score of 24.7, compared to 15.9 (GPT-4o) and 15.7 (DeepSeek-V3).

Built for speed and efficiency
Speed is a critical factor for enterprise AI deployment, and Command A has been engineered to deliver results faster than many of its rivals.
- Token streaming speed for 100K-context requests: 73 tokens/sec (compared to 38/sec for GPT-4o and 32/sec for DeepSeek-V3)
- Faster first-token generation: Reduces response time significantly compared to other large-scale models
Pricing and availability
Command A is now available on the Cohere platform, and with open weights for research use only on Hugging Face under a Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0) license, with broader cloud provider support coming soon.
- Input tokens: $2.50 per million
- Output tokens: $10.00 per million
Private and on-premises deployments are available upon request.
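To put that API pricing in perspective, here is a back-of-the-envelope cost estimate; the workload figures are assumptions for illustration, not numbers from Cohere:

```python
# Back-of-the-envelope cost estimate at Cohere's listed Command A API pricing.
# The traffic and token counts below are illustrative assumptions.
INPUT_PRICE_PER_M = 2.50    # USD per million input tokens
OUTPUT_PRICE_PER_M = 10.00  # USD per million output tokens

requests_per_day = 10_000           # assumed daily traffic
input_tokens_per_request = 2_000    # assumed prompt + retrieved context
output_tokens_per_request = 500     # assumed response length

daily_input = requests_per_day * input_tokens_per_request    # 20M tokens
daily_output = requests_per_day * output_tokens_per_request  # 5M tokens

daily_cost = (daily_input / 1e6) * INPUT_PRICE_PER_M + (daily_output / 1e6) * OUTPUT_PRICE_PER_M
print(f"Estimated daily cost: ${daily_cost:,.2f}")  # -> Estimated daily cost: $100.00
```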
Industry reactions
Several AI researchers and Cohere team members have shared their enthusiasm for Command A.
Dwaraknath Ganesan, who works on pretraining at Cohere, commented on X: "Extremely excited to reveal what we have been working on for the past few months! Command A is amazing. Can be deployed on just 2 H100 GPUs! 256K context length, expanded multilingual support, agentic tool use… very proud of this one."
Pierre Richemond, AI researcher at Cohere, added: "Command A is our new GPT-4o/DeepSeek v3 level, open-weights 111B model sporting a 256K context length that has been optimized for efficiency in enterprise use cases."
Building on the foundation of Command-R, Cohere's Command A represents the next step in scalable, cost-efficient enterprise AI.
With faster speeds, a larger context window, improved multilingual handling and lower deployment costs, it offers businesses a strong alternative to existing AI models.