Tensormesh raises $4.5M to squeeze more inference out of AI server loads; also, Palantir enters $200M partnership with telco Lumen
Oct 24, 2025
06:35
Tensormesh uses an expanded form of KV caching to make inference workloads as much as ten times more efficient.
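For context, standard KV caching stores the key/value projections of already-processed tokens so each decoding step only computes attention for the newest token instead of reprocessing the whole sequence. The sketch below is a minimal, hypothetical illustration of that baseline idea (all names and dimensions are invented); Tensormesh's system builds on and extends this technique, not shown here.

```python
import numpy as np

# Minimal sketch of KV caching in autoregressive attention (illustrative only).
d = 8  # hidden size (hypothetical)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention of one query against all cached keys/values.
    scores = q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
outputs = []
for t in range(4):  # decode 4 tokens
    x = rng.standard_normal(d)          # embedding of the newest token
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    K_cache = np.vstack([K_cache, k])   # append one row instead of recomputing all K/V
    V_cache = np.vstack([V_cache, v])
    outputs.append(attend(q, K_cache, V_cache))

print(len(outputs), K_cache.shape)  # cache grows by one row per decoded token
```

Per step, the cached approach does O(t) work against stored keys rather than O(t²) recomputation, which is where the inference savings come from.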
Plus, Palantir said on Thursday it had struck a partnership with Lumen Technologies under which the telecommunications company will use Palantir's AI and data-management software to build capabilities supporting enterprise AI services.