The Challenges of Running a Machine Learning Workload on a Device
The first challenge comes down to model size. The second challenge comes down to inference latency. Addressing both requires full-stack research and optimization: from the model and algorithms, through the software stack, and eventually down to the hardware silicon processor, everything has to work together very nicely.
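The model-size challenge is commonly attacked with weight quantization: storing 32-bit float weights as 8-bit integers plus a scale factor, roughly a 4x size reduction. As a hedged illustration (not the speaker's actual pipeline), here is a minimal sketch of symmetric int8 quantization in plain Python; the function names and example weights are invented for this example:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [q * scale for q in quantized]

# Hypothetical weights standing in for a real model tensor.
weights = [0.5, -1.2, 0.03, 0.9]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)

# float32 stores each weight in 4 bytes; int8 needs 1 byte,
# so the quantized tensor is roughly 4x smaller.
fp32_bytes = 4 * len(weights)
int8_bytes = 1 * len(weights)
```

The trade-off is accuracy: each restored weight differs from the original by at most half a quantization step (scale / 2), which is why quantization is typically validated end to end against model quality before shipping on device.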