The ghost-text Copilot is using a smaller model, right? It's using Codex. Part of the trade-off is that with great power comes super long latency. The more powerful models also stream in their responses, like one token at a time. And we saw that completion acceptance rates in regions outside of the US were way lower.
In this supper club episode of Syntax, Wes and Scott talk with Matt Rothenberg and Idan Gazit from GitHub about GitHub Next, Copilot, AI-based projects at GitHub, and what the future holds for developers with AI.
Show Notes
Shameless Plugs
Tweet us your tasty treats