Choosing AI models is as much a technical decision as it is a strategic one. But open, closed, and hybrid models each come with trade-offs.
Speaking at this year's VB Transform, model architecture experts from General Motors, Zoom and IBM discussed how their companies and customers approach AI model selection.
Barak Turovsky, who in March became GM's first chief AI officer, said there's a lot of noise with every new model release and every time the leaderboard changes. Long before leaderboards were a mainstream debate, Turovsky helped launch the first large language model (LLM), and he recalled the ways open-sourcing AI model weights and training data led to major breakthroughs.
"That was frankly probably one of the biggest breakthroughs that helped OpenAI and others to start launching," Turovsky said. "So it's actually a funny anecdote: Open source actually helped create something that went closed and now maybe is back to being open."
The factors behind these decisions vary and include cost, performance, trust and safety. Turovsky said enterprises sometimes prefer a mixed strategy: using an open model for internal work and a closed model for production and customer-facing use, or vice versa.
IBM's AI strategy
Armand Ruiz, IBM's VP of AI platform, said IBM initially started its platform with its own LLMs, but then realized that wouldn't be enough, especially as more powerful models arrived on the market. The company then expanded to offer integrations with platforms like Hugging Face so customers could pick any open-source model. (The company recently debuted a new model gateway that gives enterprises an API for switching between LLMs.)
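For a concrete picture of the pattern, the sketch below shows a minimal model gateway: one interface in front of several providers, so switching models becomes a configuration change rather than an application rewrite. The class and provider names here are illustrative assumptions, not IBM's actual gateway API.

```python
# Minimal sketch of a model-gateway pattern (hypothetical, not IBM's API):
# one interface in front of several LLM providers, selected by a config key.
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenSourceModel:
    name: str  # e.g. a Hugging Face model id served in-house

    def complete(self, prompt: str) -> str:
        # A real implementation would call a self-hosted open-weights model here.
        return f"[{self.name}] response to: {prompt}"


@dataclass
class HostedModel:
    name: str  # e.g. a closed, vendor-hosted model

    def complete(self, prompt: str) -> str:
        # A real implementation would call a vendor API here.
        return f"[{self.name}] response to: {prompt}"


class ModelGateway:
    """Routes each request to whichever registered model the caller selects."""

    def __init__(self) -> None:
        self._models: dict[str, ChatModel] = {}

    def register(self, key: str, model: ChatModel) -> None:
        self._models[key] = model

    def complete(self, model_key: str, prompt: str) -> str:
        return self._models[model_key].complete(prompt)


gateway = ModelGateway()
gateway.register("internal", OpenSourceModel("an-open-weights-model"))
gateway.register("production", HostedModel("a-closed-hosted-model"))

# Swapping models is a one-line routing change, not a code change in the app.
print(gateway.complete("internal", "Summarize this meeting transcript."))
```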
More enterprises are choosing to buy models from multiple vendors. When Andreessen Horowitz surveyed 100 CIOs, 37% of respondents said they were using five or more models; last year, only 29% used that many.
Choice is key, but sometimes too much choice creates confusion, said Ruiz. To help customers with their approach, IBM doesn't worry too much about which LLM they use during the proof-of-concept or pilot phase; the main goal is feasibility. Only later does it look at whether to distill a model or customize one based on a customer's needs.
"First we try to simplify all that analysis paralysis with all those options and focus on the use case," Ruiz said. "Then we figure out what is the best path for production."
How Zoom approaches AI
Zoom's customers can choose between two configurations for its AI Companion, said Zoom CTO Xuedong Huang. One involves federating the company's own LLM with other, larger foundation models. The other lets customers who are wary of using too many models rely on Zoom's model alone. (The company also recently partnered with Google Cloud to adopt an agent-to-agent protocol for AI Companion for enterprise workflows.)
The company built its own small language model (SLM) without using customer data, Huang said. At 2 billion parameters, the LLM is actually very small, but it can still outperform other industry-specific models. The SLM works best on complex tasks when working alongside a larger model.
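One common way to wire up that kind of pairing is sketched below: the small model answers first, and anything it can't handle confidently is escalated to a larger foundation model. The confidence heuristic and model classes are assumptions for illustration, not Zoom's implementation.

```python
# Hypothetical sketch of a hybrid small-model / large-model setup.
from dataclasses import dataclass


@dataclass
class SmallModel:
    def answer(self, prompt: str) -> tuple[str, float]:
        # A real SLM would return a response plus some confidence or quality signal;
        # here, long prompts stand in for "complex" requests.
        confidence = 0.4 if len(prompt.split()) > 50 else 0.9
        return f"[slm] {prompt[:40]}...", confidence


@dataclass
class LargeModel:
    def answer(self, prompt: str) -> str:
        # The larger foundation model handles the requests the SLM defers.
        return f"[foundation-model] detailed answer for: {prompt[:40]}..."


def hybrid_answer(prompt: str, slm: SmallModel, llm: LargeModel,
                  threshold: float = 0.8) -> str:
    draft, confidence = slm.answer(prompt)
    if confidence >= threshold:
        return draft               # the small model's answer is good enough
    return llm.answer(prompt)      # escalate complex tasks to the bigger model


print(hybrid_answer("Schedule a follow-up meeting for Tuesday.",
                    SmallModel(), LargeModel()))
```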
"This is really the power of a hybrid approach," Huang said. "Our philosophy is very straightforward. Our company is leading the way very much like Mickey Mouse and the elephant dancing together. The small model will perform a very specific task. We're not saying a small model will be good enough…The Mickey Mouse and elephant will be working together as one team."