Midjourney is best known as one of the leading AI image generators, with nearly 20 million users on its Discord channel according to third-party trackers and presumably more on top of that on its website, but its ambitions are beginning to expand.
Following the news in late summer 2024 that it was building its own computing and AI hardware, the company this week released a new research paper alongside machine learning experts at New York University (NYU) on training text-based large language models (LLMs) such as Meta's open source Llama and Mistral's eponymous source models to write more creatively.
The collaboration, documented in a new research paper published on the AI code community Hugging Face, introduces two new techniques, Diversified Direct Preference Optimization (DDPO) and Diversified Odds Ratio Preference Optimization (DORPO), designed to expand the range of possible outputs while maintaining coherence and readability.
For a company best known for its diffusion-based AI image generation models, Midjourney's new approach to rethinking creativity in text-based LLMs shows that it isn't limiting its ambitions to visuals, and that a picture may not actually be worth a thousand words.
Could a Midjourney-native LLM or a fine-tuned version of an existing LLM be in the cards from the small, bootstrapped startup? I reached out to Midjourney founder David Holz but have yet to hear back.
Regardless of whether a first-party Midjourney LLM offering emerges, the implications of its new research go beyond academic exercises and could help fuel a new wave of LLM training among enterprise AI teams, product developers, and content creators looking to improve AI-generated text.
It also shows that despite the recent interest and investment among AI model providers in new multimodal and reasoning language models, there is still plenty of juice left to be squeezed, cognitively and performance-wise, from classic Transformer-based, text-focused LLMs.
The problem: AI-generated writing collapses around homogeneous outputs
In domains such as fact-based Q&A or coding assistance, LLMs are expected to generate a single best response.
However, creative writing is inherently open-ended, meaning there are many valid responses to a single prompt.
For an example provided by the Midjourney researchers, given a prompt like "Write a story about a dog on the moon," the LLM could explore multiple distinct paths, such as:
- An astronaut's pet dog accidentally left behind after a lunar mission.
- A dog who finds itself in a futuristic canine space colony.
- A stranded dog that befriends an alien species.
Despite this range of possibilities, instruction-tuned LLMs often converge on similar storylines and themes. This happens because:
- Post-training techniques prioritize user preference over originality, reinforcing popular but repetitive responses.
- Instruction tuning often smooths out variation, making models favor "safe" responses over unique ones.
- Existing diversity-promoting techniques (such as temperature tuning) operate only at inference time, rather than being baked into the model's learning process.
This leads to homogenized storytelling, where AI-generated creative writing feels repetitive and lacks surprise or depth.
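To see why that last point matters, consider how temperature tuning works: it only rescales an already-trained model's output distribution at sampling time, as in this minimal PyTorch sketch (toy values, for illustration only):

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Sample one token id from raw logits rescaled by temperature."""
    # Higher temperature flattens the distribution (more surprising picks);
    # lower temperature sharpens it (more repetitive picks). Either way, the
    # model's learned preferences are untouched: only the sampling changes.
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Toy logits over four candidate continuations of "a dog on the moon..."
logits = torch.tensor([3.0, 1.5, 0.5, 0.1])
print(sample_next_token(logits, temperature=0.7))  # usually picks index 0
print(sample_next_token(logits, temperature=1.5))  # picks vary far more
```

If the underlying distribution has already collapsed toward a few safe storylines during training, no amount of temperature adjustment will recover the lost variety, which is the gap the new techniques aim to close.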
The solution: modifying post-training methods to prioritize diversity
To overcome these limitations, the researchers introduced DDPO and DORPO, two extensions of existing preference optimization methods. The core innovation in these approaches is the use of deviation, a measure of how much a response differs from others, to guide training.
Here's how it works:
- During training, the model is given a writing prompt and multiple possible responses.
- Each response is compared to the others for the same prompt, and a deviation score is calculated.
- Rare but high-quality responses are weighted more heavily in training, encouraging the model to learn from diverse examples.
By incorporating deviation into Direct Preference Optimization (DPO) and Odds Ratio Preference Optimization (ORPO), the model learns to produce high-quality but more varied responses.
This method ensures that AI-generated stories don't converge on a single predictable structure, but instead explore a wider range of characters, settings, and themes, just as a human writer might.
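The paper defines the exact weighting scheme, but the description above corresponds to a simple modification of the standard DPO loss. Here is a minimal, illustrative PyTorch sketch in which an assumed pre-computed `deviation` score (roughly normalized to [0, 1]) scales each preference pair's contribution; this is a sketch of the idea, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def deviation_weighted_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                                ref_chosen_logps, ref_rejected_logps,
                                deviation, beta=0.1):
    # Standard DPO: push the policy's implicit reward margin between the
    # chosen and rejected response above the reference model's margin.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    dpo_loss = -F.logsigmoid(beta * (chosen_margin - rejected_margin))
    # Deviation weighting: pairs whose chosen response is rarer (further
    # from other responses to the same prompt) contribute more, pulling
    # the policy toward unusual-but-preferred outputs.
    return (deviation * dpo_loss).mean()

# Toy batch of two preference pairs; the second chosen story is the rarer one.
loss = deviation_weighted_dpo_loss(
    policy_chosen_logps=torch.tensor([-12.0, -15.0]),
    policy_rejected_logps=torch.tensor([-14.0, -14.5]),
    ref_chosen_logps=torch.tensor([-12.5, -15.5]),
    ref_rejected_logps=torch.tensor([-13.5, -14.0]),
    deviation=torch.tensor([0.2, 0.9]),  # rarer example weighted more
)
print(loss.item())
```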
What Midjourney's researchers did to achieve this
The study involved training LLMs on creative writing tasks using a dataset from the subreddit r/writingPrompts, a Reddit community where users post prompts and respond with short stories.
The researchers used two base models for their training:
- Meta's Llama-3.1-8B (an 8-billion-parameter model from the Llama 3 series).
- Mistral-7B-v0.3 (a 7-billion-parameter model from Mistral AI).
Then, they took these models through the following processes (a setup sketch follows this list):
- Supervised Fine-Tuning (SFT): The models were first fine-tuned using LoRA (Low-Rank Adaptation) to adjust parameters efficiently.
- Preference Optimization:
- DPO and ORPO were used as baselines; these standard methods focus on improving response quality based on user preference signals.
- DDPO and DORPO were then applied, introducing deviation-based weighting to encourage more unique responses.
- Evaluation:
- Automated evaluation: measured semantic and stylistic diversity using embedding-based methods (a minimal sketch of one such metric appears after the key findings below).
- Human evaluation: judges assessed whether outputs were diverse and engaging compared to GPT-4o and Claude 3.5.
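For a rough sense of what the LoRA fine-tuning step looks like in practice, here is a minimal setup sketch using Hugging Face's transformers and peft libraries; the hyperparameters and target modules are illustrative assumptions, not the paper's reported configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # one of the paper's two base models
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank updates (assumed)
    lora_alpha=32,                        # scaling factor for the updates (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
# From here, run standard supervised fine-tuning on the r/writingPrompts pairs.
```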
Key training findings:
- DDPO significantly outperformed standard DPO in terms of output diversity while maintaining quality.
- Llama-3.1-8B with DDPO achieved the best balance of quality and diversity, producing responses that were more varied than GPT-4o's while maintaining coherence.
- When dataset size was reduced, DDPO models still maintained diversity, though they required a certain number of diverse training samples to be fully effective.
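The diversity gains above are the kind measured by the automated evaluation's embedding-based methods. A minimal sketch of one such metric (mean pairwise cosine distance between story embeddings), assuming the sentence-transformers and scikit-learn libraries; the paper's exact metric may differ:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

def semantic_diversity(stories: list[str]) -> float:
    """Mean pairwise cosine distance; higher means more semantic spread."""
    embeddings = encoder.encode(stories)
    sims = cosine_similarity(embeddings)
    pairwise = sims[np.triu_indices(len(stories), k=1)]  # each pair counted once
    return float(1.0 - pairwise.mean())

# Three candidate stories for the same "dog on the moon" prompt.
stories = [
    "An astronaut's dog is accidentally left behind after a lunar mission...",
    "A colony of dogs builds a futuristic city on the moon...",
    "A stranded dog befriends a curious alien species...",
]
print(semantic_diversity(stories))
```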
Enterprise implications: what does it mean for those using AI to produce creative responses, such as in marketing copywriting, corporate storytelling, and film/TV/video game scripting?
For AI teams managing LLM deployment, improving output diversity while maintaining quality is a critical challenge. These findings have significant implications for organizations that rely on AI-generated content in applications such as:
- Conversational AI and chatbots (ensuring varied and engaging responses).
- Content marketing and storytelling tools (preventing repetitive AI-generated copy).
- Game development and narrative design (creating diverse dialogue and branching storylines).
For professionals responsible for fine-tuning and deploying models in an enterprise setting, this research provides:
- A new approach to LLM post-training that enhances creativity without sacrificing quality.
- A practical alternative to inference-time diversity tuning (such as temperature adjustments), integrating diversity into the learning process itself.
- The potential to develop more engaging AI applications, from AI-assisted writing tools to virtual assistants that can adapt their responses dynamically.
For those handling AI model orchestration and automation, this research highlights:
- The importance of tuning models at the training stage, reducing the need for post-processing adjustments at deployment.
- A way to introduce adaptive storytelling into AI-driven applications, ensuring variability while keeping content quality high.
- A method for making LLM outputs more human-like, which is crucial for applications requiring interactive storytelling, customer engagement, or dynamic content creation.
The future of AI-generated creative projects looks bright
The success of DDPO and DORPO demonstrates that training LLMs with diversity-focused objectives can yield significant improvements in creative writing. Some ideas include:
- Integrating deviation-based learning into enterprise AI models to enhance response diversity in customer-facing applications.
- Exploring how these methods apply to other generative tasks, such as AI-powered poetry, screenwriting, or game storytelling.
- Developing hybrid training approaches that balance diversity and instruction-following capabilities for AI assistants.
For those interested in applying these techniques, the researchers plan to make their code publicly available on this GitHub repository.
Whether you're fine-tuning LLMs for enterprise applications or optimizing large-scale AI orchestration, this study provides actionable insights into how models can become more dynamic, engaging, and responsive to creative tasks.
By adopting these techniques, AI teams can move beyond rigid, formulaic outputs, building AI systems that are not only smart but also genuinely imaginative.