The Probabilistic Alignment Grammar
The model that I devised here was based on a PhD thesis by Carl de Marcken at MIT in 1996, long before I started doing research. The idea is that you do binary merges of characters, based largely on frequency, until you find words; if you're familiar with byte pair encoding (BPE), it's basically the same idea. My contribution on top of that was largely to apply it to the multilingual setting: instead of doing binary merges of characters in one language, you jointly do binary merging of characters in two languages. It also incorporated the kind of non-parametric Bayesian statistics that I talked about before. There's an O(n⁶) sampling algorithm included in learning this.
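To make the binary-merge idea concrete, here is a minimal BPE-style sketch in Python. It covers only the monolingual, frequency-based case; the function names and toy corpus are illustrative assumptions, and the speaker's actual model additionally does joint merging across two languages with non-parametric Bayesian inference, which is not shown here.

```python
# Minimal sketch of frequency-based binary merging (BPE-style), for illustration only.
# It repeatedly merges the most frequent adjacent pair of symbols, starting from
# individual characters, until a fixed number of merges has been applied.
from collections import Counter


def most_frequent_pair(sequences):
    """Count adjacent symbol pairs across all sequences and return the most common one."""
    pair_counts = Counter()
    for seq in sequences:
        pair_counts.update(zip(seq, seq[1:]))
    return pair_counts.most_common(1)[0][0] if pair_counts else None


def merge_pair(sequences, pair):
    """Replace every occurrence of `pair` with a single concatenated symbol."""
    merged_symbol = pair[0] + pair[1]
    new_sequences = []
    for seq in sequences:
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(merged_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        new_sequences.append(out)
    return new_sequences


def learn_merges(corpus, num_merges=10):
    """Start from characters and apply `num_merges` greedy frequency-based binary merges."""
    sequences = [list(line) for line in corpus]
    for _ in range(num_merges):
        pair = most_frequent_pair(sequences)
        if pair is None:
            break
        sequences = merge_pair(sequences, pair)
    return sequences


if __name__ == "__main__":
    # Toy example: frequent character pairs gradually merge into word-like units.
    corpus = ["lowlowlower", "newestlowest"]
    print(learn_merges(corpus, num_merges=5))
```

In the multilingual version described above, the same kind of merge step would be scored jointly over aligned text in two languages rather than over a single corpus.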