{"id":1896,"date":"2026-02-27T15:57:35","date_gmt":"2026-02-27T05:57:35","guid":{"rendered":"https:\/\/rhinoeasy.com\/?p=1896"},"modified":"2026-02-27T15:57:35","modified_gmt":"2026-02-27T05:57:35","slug":"exclusive-startup-aiming-to-break-nvidias-strangehold-on-ai-data-center-workloads-raises-10-25-million","status":"publish","type":"post","link":"https:\/\/rhinoeasy.com\/?p=1896","title":{"rendered":"Exclusive: Startup aiming to break Nvidia\u2019s strangehold on AI data center workloads raises $10.25 million"},"content":{"rendered":"<p>A London-based startup founded by two Cambridge-trained neuroscientists has raised $10.25 million for their startup Callosum, which is building software that orchestrates AI workloads across a mix of different chip types\u2014challenging the industry\u2019s dependence on running ever bigger models on banks of identical Nvidia GPUs.<\/p>\n<p>The company also announced it is receiving research funding from the U.K. government, which is looking for ways to build so-called sovereign cloud infrastructure for AI that would be independent, or at least not solely reliant, on U.S. technology providers.<\/p>\n<p>Callosum cofounders Danyal Akarca and Jascha Achterberg, who met during their PhD studies at Cambridge around 2019, have software that can distribute AI tasks across chips from different manufacturers\u2014be it Nvidia GPUs, AMD processors, Amazon Web Services\u2019 custom Trainium and Inferentia silicon, or newer designs from startups like Cerebras and SambaNova\u2014extracting performance advantages from each.<\/p>\n<p>The funding round was led by Plural, the European early-stage venture fund cofounded by Wise\u2019s Taavet Hinrikus and Ian Hogarth, who also served as the first chair of the U.K.\u2019s AI Safety Institute. Angel investors including Charlie Songhurst, Stan Boland of FiveAI, and John Lazar of the Royal Academy of Engineering also participated. Separately, the U.K. 
government\u2019s Advanced Research and Invention Agency (ARIA) is providing grant funding to the company to accelerate R&#038;D on integrating novel chip technologies into its platform\u2014though ARIA is not an investor in the round itself, Akarca said in an interview with Fortune.<\/p>\n<p>The company\u2019s thesis is rooted in the cofounders\u2019 academic research at the intersection of neuroscience and computing: The human brain doesn\u2019t achieve intelligence by copying one type of neuron billions of times, but by combining many different specialized cell types and circuits that work together. They believe AI computing should follow the same principle.<\/p>\n<p>\u201cBig labs are currently betting that one model will rule them all. We think that\u2019s wrong, and our work proves this,\u201d Akarca said. \u201cNature shows that real intelligence emerges from many systems working together.\u201d<\/p>\n<p>Callosum enters a market undergoing a profound structural shift. After years in which AI spending was dominated by training massive foundation models on racks of identical Nvidia GPUs, the industry is now pivoting toward inference\u2014the process of actually running trained models to produce outputs. Deloitte has estimated that inference workloads will account for roughly two-thirds of all AI compute in 2026, up from a third in 2023, and that the market for inference-optimized chips will grow to more than $50 billion this year. That shift is creating openings for a diverse array of chipmakers to challenge Nvidia\u2019s dominance.<\/p>\n<p>Callosum is betting it can be the software layer that ties this increasingly fragmented hardware landscape together. Its platform works across multiple cloud providers, including AWS, Google Cloud, and Microsoft Azure, and is designed so that customers don\u2019t have to re-architect their existing cloud setups to use it. 
\u201cIt\u2019s a software product which takes your AI workload and orchestrates it across the different multi-cloud setup that you might use,\u201d Akarca said.<\/p>\n<p>The cofounders argue the approach yields large gains on complex, real-world tasks that involve many different types of decisions\u2014such as automating computer use or processing enterprise workflows. For tasks like these, Callosum says, its system can deliver twice the accuracy, sevenfold faster performance, and fourfold lower cost compared with running the same workloads on identical hardware.<\/p>\n<p>Achterberg explained that the accuracy gains come from the nature of the problems being solved. \u201cSimple problems, single models are perfectly fine,\u201d he said. But complex enterprise tasks are a different matter. \u201cAutomating how computers are used, automating payments, for example\u2014these are problems that we focus on. They are inherently heterogeneous,\u201d Achterberg said. \u201cThere\u2019s actually many, many, many steps involved in solving the problem, and a single model actually isn\u2019t always optimal.\u201d<\/p>\n<p>Different parts of a complex workflow may require different things: Some steps need very fast, cheap models that can iterate rapidly through trial and error, while others require larger, more capable reasoning models. By matching each subtask to the right model running on the right hardware, Callosum says it can outperform the conventional approach of throwing one powerful model at the entire problem.<\/p>\n<p>Callosum is targeting two types of customers: companies building multi-agent AI systems that need superior performance across complex workflows, and emerging chip manufacturers that want to demonstrate their hardware\u2019s capabilities at scale. 
\u201cWhat we want is that all these new chip technologies, which are amazing, have amazing performance, amazing benefits, find a way into the market where we can actually realize them,\u201d Achterberg said.<\/p>\n<p>The company is also collaborating with firms working on new ways to connect racks of AI chips within data centers\u2014known as \u201cinterconnect\u201d\u2014including those developing networking based on photonics, technology that transmits data using light instead of electrical pulses. These technologies are designed to address bottlenecks that come from having to shuffle data around within a data center\u2014a challenge that grows more complex as different chip types need to communicate with one another.<\/p>\n<p>Looking ahead, the cofounders say they plan to use the funding to expand their London-based team, begin scaling into the U.S., and start building out their own complementary hardware infrastructure. Their longer-term ambition extends beyond software to fundamentally rethinking data center design itself.<\/p>\n<p>\u201cEveryone assumed chip diversity was a disadvantage to be managed. We saw the opposite, that it\u2019s an advantage to be exploited,\u201d Achterberg said. \u201cWe\u2019re not optimizing one algorithm on top of the existing stack. We\u2019re using software to control all the levers across the entire system, extracting benefits from diversity that others dismiss.\u201d<\/p>\n<p>Hogarth, a partner at Plural, said in a statement: \u201c[Callosum\u2019s] vision for a multi-model, multi-chip future could be transformative and positions them to compete with the world\u2019s biggest chip and model makers. 
These are serious founders tackling a serious mission.\u201d<\/p>\n<p><strong>Source: RhinoEasy News<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A London-based startup founded by two Cambridge-trained neuroscientists has raised $10.25 million for their startup Callosum, which is building software<\/p>\n","protected":false},"author":1,"featured_media":1895,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"colormag_page_container_layout":"default_layout","colormag_page_sidebar_layout":"default_layout","footnotes":""},"categories":[9],"tags":[],"class_list":["post-1896","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-finance"],"_links":{"self":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts\/1896","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1896"}],"version-history":[{"count":0,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/posts\/1896\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=\/wp\/v2\/media\/1895"}],"wp:attachment":[{"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1896"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1896"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rhinoeasy.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1896"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}