DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI

Chinese AI startup DeepSeek has quietly released a new large language model that’s already sending ripples through the artificial intelligence industry — not just for its capabilities, but for how it’s being deployed. The 641-gigabyte model, dubbed DeepSeek-V3-0324, appeared on AI repository Hugging Face today with virtually no announcement, continuing the company’s pattern of low-key but impactful releases.

What makes this launch particularly notable is the model’s MIT license — making it freely available for commercial use — and early reports that it can run directly on consumer-grade hardware, specifically Apple’s Mac Studio with M3 Ultra chip.

“The new DeepSeek-V3-0324 in 4-bit runs at > 20 tokens/second on a 512GB M3 Ultra with mlx-lm!” wrote AI researcher Awni Hannun on social media. While the $9,499 Mac Studio might stretch the definition of “consumer hardware,” the ability to run such a massive model locally is a major departure from the data center requirements typically associated with state-of-the-art AI.

DeepSeek’s stealth launch strategy disrupts AI market expectations

The 685-billion-parameter model arrived with no accompanying whitepaper, blog post, or marketing push — just an empty README file and the model weights themselves. This approach contrasts sharply with the carefully orchestrated product launches typical of Western AI companies, where months of hype often precede actual releases.

Early testers report significant improvements over the previous version. AI researcher Xeophon proclaimed in a post on X.com: “Tested the new DeepSeek V3 on my internal bench and it has a huge jump in all metrics on all tests. It is now the best non-reasoning model, dethroning Sonnet 3.5.”

This claim, if validated by broader testing, would position DeepSeek’s new model above Claude Sonnet 3.5 from Anthropic, one of the most respected commercial AI systems. And unlike Sonnet, which requires a subscription, DeepSeek-V3-0324’s weights are freely available for anyone to download and use.

How DeepSeek V3-0324’s breakthrough architecture achieves unmatched efficiency

DeepSeek-V3-0324 employs a mixture-of-experts (MoE) architecture that fundamentally reimagines how large language models operate. Traditional dense models activate their entire parameter count for every token they process, but DeepSeek’s approach activates only about 37 billion of its 685 billion parameters for any given token.

This selective activation represents a paradigm shift in model efficiency. By activating only the most relevant “expert” parameters for each specific task, DeepSeek achieves performance comparable to much larger fully-activated models while drastically reducing computational demands.
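The routing idea can be sketched in a few lines of pure Python. This is an illustrative toy, not DeepSeek's actual implementation: the gate here is a softmax over per-expert scores, and the function names, the scalar "experts," and the choice of two active experts are all assumptions made for demonstration.

```python
import math

def top_k_route(gate_logits, k):
    """Pick the k highest-scoring experts and renormalize their
    softmax weights over just the selected subset."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: gate_logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(gate_logits[i]) for i in chosen]
    total = sum(exps)
    return chosen, [e / total for e in exps]

def moe_layer(x, experts, gate_logits, k=2):
    """Run only the k selected experts and mix their outputs by
    gate weight; the unselected experts cost nothing this step."""
    chosen, weights = top_k_route(gate_logits, k)
    return sum(w * experts[i](x) for i, w in zip(chosen, weights))

# Four tiny "experts"; the router activates only two of them per input.
experts = [lambda x: 1 * x, lambda x: 2 * x, lambda x: 3 * x, lambda x: 4 * x]
print(moe_layer(10.0, experts, gate_logits=[2.0, 0.5, 1.0, -1.0], k=2))
```

In a real MoE transformer this selection happens per token at each MoE layer, which is how only a fraction of the 685 billion parameters is touched for any single token.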

The model incorporates two additional breakthrough technologies: Multi-Head Latent Attention (MLA) and Multi-Token Prediction (MTP). MLA enhances the model’s ability to maintain context across long passages of text, while MTP generates multiple tokens per step instead of the usual one-at-a-time approach. Together, these innovations boost output speed by nearly 80%.
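The roughly 80% figure is consistent with a simple back-of-the-envelope model of multi-token decoding. The sketch below is an assumption-laden toy, not DeepSeek's MTP: it supposes each extra drafted token in a step is kept only if all earlier drafts in that step were kept, with a fixed acceptance rate.

```python
def tokens_per_step(extra_drafts: int, acceptance_rate: float) -> float:
    """Expected tokens emitted per decode step: the first token is
    always kept; draft i survives only if drafts 1..i-1 did too."""
    return 1.0 + sum(acceptance_rate ** i for i in range(1, extra_drafts + 1))

# One extra drafted token kept 80% of the time -> 1.8 tokens per step,
# i.e. roughly the "nearly 80%" throughput gain reported.
print(tokens_per_step(1, 0.8))
```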

Simon Willison, a developer tools creator, noted in a blog post that a 4-bit quantized version reduces the storage footprint to 352GB, making it feasible to run on high-end consumer hardware like the Mac Studio with M3 Ultra chip.
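That 352GB figure is easy to sanity-check. A minimal sketch, assuming all 685 billion weights are stored at 4 bits each; the extra ~10GB in the reported number would plausibly come from quantization metadata such as per-group scales, plus any layers kept at higher precision.

```python
def raw_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Raw weight storage in decimal gigabytes (metadata excluded)."""
    return n_params * bits_per_weight / 8 / 1e9

print(raw_weight_gb(685e9, 4))  # 342.5 GB -- close to the reported 352GB
```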

This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.
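The efficiency point can be made concrete with the two numbers actually reported: roughly 200 watts and roughly 20 tokens per second on the Mac Studio. The calculation below uses only those figures; a fair comparison to a multi-GPU rack would need measured power and throughput numbers not given here.

```python
def joules_per_token(watts: float, tokens_per_second: float) -> float:
    """Energy drawn per generated token (power divided by throughput)."""
    return watts / tokens_per_second

print(joules_per_token(200, 20))  # 10 joules per token on the M3 Ultra setup
```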

China’s open source AI revolution challenges Silicon Valley’s closed garden model

DeepSeek’s release strategy exemplifies a fundamental divergence in AI business philosophy between Chinese and Western companies. While U.S. leaders like OpenAI and Anthropic keep their models behind paywalls, Chinese AI companies increasingly embrace permissive open-source licensing.

This approach is rapidly transforming China’s AI ecosystem. The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

The business logic behind this strategy reflects market realities in China. With multiple well-funded competitors, maintaining a proprietary approach becomes increasingly difficult when competitors offer similar capabilities for free. Open-sourcing creates alternative value pathways through ecosystem leadership, API services, and enterprise solutions built atop freely available foundation models.

Even established Chinese tech giants have recognized this shift. Baidu announced plans to make its Ernie 4.5 model series open-source by June, while Alibaba and Tencent have released open-source AI models with specialized capabilities. This movement stands in stark contrast to the API-centric strategy employed by Western leaders.

The open-source approach also addresses unique challenges faced by Chinese AI companies. With restrictions on access to cutting-edge Nvidia chips, Chinese firms have emphasized efficiency and optimization to achieve competitive performance with more limited computational resources. This necessity-driven innovation has now become a potential competitive advantage.

DeepSeek V3-0324: The foundation for an AI reasoning revolution

The timing and characteristics of DeepSeek-V3-0324 strongly suggest it will serve as the foundation for DeepSeek-R2, an improved reasoning-focused model expected within the next two months. This follows DeepSeek’s established pattern, where its base models precede specialized reasoning models by several weeks.

“This lines up with how they released V3 around Christmas followed by R1 a few weeks later. R2 is rumored for April so this could be it,” noted Reddit user mxforest.

The implications of an advanced open-source reasoning model cannot be overstated. Current reasoning models like OpenAI’s o1 and DeepSeek’s R1 represent the cutting edge of AI capabilities, demonstrating unprecedented problem-solving abilities in domains from mathematics to coding. Making this technology freely available would democratize access to AI systems currently limited to those with substantial budgets.

The potential R2 model arrives amid significant revelations about reasoning models’ computational demands. Nvidia CEO Jensen Huang recently noted that DeepSeek’s R1 model “consumes 100 times more compute than a non-reasoning AI,” contradicting earlier industry assumptions about efficiency. This reveals the remarkable achievement behind DeepSeek’s models, which deliver competitive performance while operating under greater resource constraints than their Western counterparts.

If DeepSeek-R2 follows the trajectory set by R1, it could present a direct challenge to GPT-5, OpenAI’s next flagship model rumored for release in coming months. The contrast between OpenAI’s closed, heavily-funded approach and DeepSeek’s open, resource-efficient strategy represents two competing visions for AI’s future.

How to experience DeepSeek V3-0324: A complete guide for developers and users

For those eager to experiment with DeepSeek-V3-0324, several pathways exist depending on technical needs and resources. The complete model weights are available from Hugging Face, though the 641GB size makes direct download practical only for those with substantial storage and computational resources.

For most users, cloud-based options offer the most accessible entry point. OpenRouter provides free API access to the model, with a user-friendly chat interface. Simply select DeepSeek V3 0324 as the model to begin experimenting.

DeepSeek’s own chat interface at chat.deepseek.com has likely been updated to the new version as well, though the company hasn’t explicitly confirmed this. Early users report the model is accessible through this platform with improved performance over previous versions.

Developers looking to integrate the model into applications can access it through various inference providers. Hyperbolic Labs announced immediate availability as “the first inference provider serving this model on Hugging Face,” while OpenRouter offers API access compatible with the OpenAI SDK.
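Because OpenRouter exposes an OpenAI-compatible chat completions endpoint, calling the model takes only a standard HTTPS POST. Below is a minimal sketch using the Python standard library; the model slug `deepseek/deepseek-chat-v3-0324` and the `OPENROUTER_API_KEY` environment variable name are assumptions to verify against OpenRouter's documentation.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "deepseek/deepseek-chat-v3-0324"  # assumed OpenRouter slug

def build_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """POST the payload to OpenRouter; requires OPENROUTER_API_KEY."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Inspect the payload without making a network call.
print(json.dumps(build_request("Say hello"), indent=2))
```

The same payload works through the official OpenAI SDK by pointing its `base_url` at `https://openrouter.ai/api/v1`, which is the compatibility OpenRouter advertises.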

DeepSeek’s new model prioritizes technical precision over conversational warmth

Early users have reported a noticeable shift in the model’s communication style. While previous DeepSeek models were praised for their conversational, human-like tone, “V3-0324” presents a more formal, technically oriented persona.

“Is it only me or does this version feel less human like?” asked Reddit user nother_level. “For me the thing that set apart deepseek v3 from others were the fact that it felt more like human. Like the tone the words and such it was not robotic sounding like other llm’s but now with this version its like other llms sounding robotic af.”

Another user, AppearanceHeavy6724, added: “Yeah, it lost its aloof charm for sure, it feels too intellectual for its own good.”

This personality shift likely reflects deliberate design choices by DeepSeek’s engineers. The move toward a more precise, analytical communication style suggests a strategic repositioning of the model for professional and technical applications rather than casual conversation. This aligns with broader industry trends, as AI developers increasingly recognize that different use cases benefit from different interaction styles.

For developers building specialized applications, this more precise communication style may actually represent an advantage, providing clearer and more consistent outputs for integration into professional workflows. However, it may limit the model’s appeal for customer-facing applications where warmth and approachability are valued.

How DeepSeek’s open source strategy is redrawing the global AI landscape

DeepSeek’s approach to AI development and distribution represents more than a technical achievement — it embodies a fundamentally different vision for how advanced technology should propagate through society. By making cutting-edge AI freely available under permissive licensing, DeepSeek enables exponential innovation that closed models inherently constrain.

This philosophy is rapidly closing the perceived AI gap between China and the United States. Just months ago, most analysts estimated China lagged 1-2 years behind U.S. AI capabilities. Today, that gap has narrowed dramatically to perhaps 3-6 months, with some areas approaching parity or even Chinese leadership.

The parallels to Android’s impact on the mobile ecosystem are striking. Google’s decision to make Android freely available created a platform that ultimately achieved dominant global market share. Similarly, open-source AI models may outcompete closed systems through sheer ubiquity and the collective innovation of thousands of contributors.

The implications extend beyond market competition to fundamental questions about technology access. Western AI leaders increasingly face criticism for concentrating advanced capabilities among well-resourced corporations and individuals. DeepSeek’s approach distributes these capabilities more broadly, potentially accelerating global AI adoption.

As DeepSeek-V3-0324 finds its way into research labs and developer workstations worldwide, the competition is no longer simply about building the most powerful AI, but about enabling the most people to build with AI. In that race, DeepSeek’s quiet release speaks volumes about the future of artificial intelligence. The company that shares its technology most freely may ultimately wield the greatest influence over how AI reshapes our world.


Author: Michael Nuñez
Source: VentureBeat
Reviewed By: Editorial Team
