I think Taalas has a bright future and could end up owning edge AI chips, potentially putting small models into laptops or workstations and becoming the hard-coded brain of cars and robots. But I also have to imagine that if Cerebras purchased Taalas or licensed their tech in a partnership, they could produce hard-coded wafers with Cerebras's redundant pathways and wafer cooling. You could see frontier models running air-cooled with a tiny energy footprint and absolutely unimaginable speed.
u/Asgard_Heima
I keep hearing the CUDA moat called out by financial analysts with no real-world experience and no data scientist to talk to.
Data scientists and AI engineers use PyTorch and TensorFlow for 99%+ of accelerator hardware work. The major users of CUDA are the model builders, such as OpenAI, which just signed up with Cerebras, and they only use it because Nvidia's overly complex distributed compute architecture requires it. Cerebras removes the need to deal with kernel optimizations and network latency entirely, with a single massive WSE system in a box. CUDA is actually a competitive disadvantage once you weigh its complexity against the less performant results vs Cerebras.
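That framework-level abstraction is the whole point: a typical PyTorch workflow never touches CUDA code directly, so swapping the backend underneath doesn't change the model code. A minimal sketch using standard PyTorch APIs (nothing Cerebras-specific, just the usual device-agnostic pattern):

```python
import torch

# Pick whatever accelerator is available; the model code below is
# identical either way. No hand-written CUDA kernels anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # move weights to the backend
x = torch.randn(3, 4, device=device)       # allocate input on the backend
y = model(x)                               # framework dispatches the kernels

print(y.shape)  # torch.Size([3, 2])
```

The practitioner writes against `torch.nn`, and the framework (or the vendor's compiler stack, like CSoft) handles kernel selection, which is exactly why the moat argument falls apart at the application layer.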
Lastly, even if a transition were required for some edge-case uses, you can use AI to port your code with near-zero effort. This will become obvious quickly as the number of users running on Cerebras hardware increases. The places getting deep into the weeds of CSoft, like OpenAI and AWS, are going to be building the best models going forward.