BharatGen has introduced Param2, a 17-billion-parameter multilingual MoE model optimised for Indic languages, strengthening India's sovereign AI capabilities and digital mission ...
Unveiled at the India AI Impact Summit 2026 and built in collaboration with NVIDIA, Param2 advances multilingual AI and supports India's digital transformation.
In a bid to encourage ecosystem participation, BharatGen will also release the model, documentation and post-training workflows through its Hugging Face repository.
If Nvidia integrates Groq's technology, it solves the "waiting for the robot to think" problem and preserves the magic of AI. Just as it moved from rendering pixels (gaming) to rendering ...
Here is Grok 4.20 analyzing the MacroHard emulated digital human business. xAI's internal project — codenamed MacroHard (a ...
MiniMax M2.5 delivers elite coding performance and agentic capabilities at a fraction of the cost. Explore the architecture, ...
AI isn’t just cutting labor; it’s generating revenue, and Nvidia’s earnings call made it clear. Click here for more ...
Q4 2026 Earnings Call, February 25, 2026, 5:00 PM EST. Company Participants: Toshiya Hari - Vice President of Investor Relations ...
Sarvam AI launches two advanced LLMs, at 30B and 105B parameters, that outperform competitors on key benchmarks, with a focus on Indian language support.
Havells India is using Ansys Fluent powered by CUDA-X for fluid simulations, achieving sixfold faster results and reducing time to market. Larsen & Toubro Semiconductor is running Cadence Spectre X on ...
JBT Marel's dozens of CIC facilities around the world enable customers to work side by side with experts to run their products through JBT Marel equipment and solve efficiency, quality, and regulatory ...
Users running a quantized 7B model on a laptop expect 40+ tokens per second. A 30B MoE model on a high-end mobile device ...
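Expectations like these follow from a simple rule of thumb: autoregressive decoding is usually memory-bandwidth-bound, since every generated token streams the (active) weights through memory once. The sketch below estimates tokens per second from that rule; the specific bandwidth and quantization figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope decode-throughput estimate for memory-bandwidth-bound LLM inference.
# Assumption: each generated token reads all active weights once, so
# throughput ~ memory bandwidth / bytes of active weights.

def decode_tokens_per_sec(active_params_b: float, bits_per_weight: int,
                          bandwidth_gbs: float) -> float:
    """Estimate decode speed (tokens/s).

    active_params_b : parameters read per token, in billions
                      (for an MoE model, only the routed experts count)
    bits_per_weight : quantization width (e.g. 4 for 4-bit)
    bandwidth_gbs   : sustained memory bandwidth in GB/s (assumed figure)
    """
    weight_bytes = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / weight_bytes

# A 7B model at 4-bit (~3.5 GB of weights) on a laptop with ~100 GB/s memory:
print(f"{decode_tokens_per_sec(7, 4, 100):.1f} tok/s")   # roughly 28.6 tok/s

# A 30B MoE with ~3B active parameters per token on a ~60 GB/s mobile SoC:
print(f"{decode_tokens_per_sec(3, 4, 60):.1f} tok/s")    # roughly 40.0 tok/s
```

This is also why MoE models are attractive on-device: the second call shows that a 30B-total model with only ~3B active parameters per token can decode faster on weaker hardware than a dense 7B model on a laptop, because far fewer bytes move per token.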