AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
Google DeepMind's Alpha Fold 3 paper extends beyond Alpha Fold 2 by predicting interactions of proteins with various molecules beyond just proteins alone. This advancement is essential for fields like drug discovery where understanding how proteins interact with other molecules is crucial. Alpha Fold 3's architecture shares similarities with its predecessor but can now predict diverse molecule structures. The model's ability to take in general molecules and predict their structures marks a significant leap from Alpha Fold 2.
Alpha Fold 3 introduces a diffusion model designed to delve into predicting how proteins interact with a wide array of molecules beyond isolated protein structures. This innovation extends its utility to simulating various molecular interactions, including molecular modifications, interactions with DNA, RNA, and potential drug molecules. The model's diffusion foundation broadens its applicability and underscores progress in accurately modeling complex molecular structures.
Despite its advancements, Alpha Fold 3 faces constraints such as a 5,000 token limit that influences the complexity of structures it can effectively model. For larger molecular systems like nucleic acids and organelles, the model's performance may not be fully discernible due to these limitations. The model's server accessibility for scientists offers a research tool with notable capabilities for studying molecular structures but presents challenges for more intricate structures.
Alpha Fold 3's release as a research tool provides scientists with valuable insights into predicting molecular structures. Its versatility in handling diverse molecules and interactions signals promising applications in various fields, including pharmaceuticals and biochemistry. As the model evolves and addresses challenges like token limitations and scalability for complex structures, its contributions to molecular biology and drug discovery are anticipated to grow significantly.
ByteDance releases a new high-quality ID customization tool called PULID, offering a next-level approach to ID customization without the need for extensive tuning. This tool allows users to paste an image of their face and receive impressive customized results. It introduces a tuning-free ID customization approach, revolutionizing the creation of personalized IDs.
Udo, a leading AI audio tool, presents the 'InPainting' feature, enabling users to customize sections of generated songs by modifying lyrics or other elements. This advanced capability enhances the creativity and personalization options for audio content creators.
Open UI, a pioneering no-code web design tool, redefines UI creation with instant and intuitive design capabilities. Users can effortlessly generate UI components by pasting a screenshot and requesting alterations, making web design more accessible and efficient.
Open UI's future vision includes publishing a roadmap to engage users in planning and development. Interested individuals can collaborate, provide feedback, and contribute to this innovative tool's growth by joining the Open UI community.
Recent advancements in AI diffusion methods, such as 'PU LID' by ByteDance, and innovations in ID customization tools like 'PULID' represent cutting-edge applications of AI technology, promising enhanced levels of customization and adaptability in various fields.
Udo's introduction of the 'InPainting' feature signifies a pivotal advancement in audio editing, enabling users to seamlessly customize song segments in real-time by modifying lyrics and other audio components. This groundbreaking capability offers creative possibilities and flexibility in music production.
ByteDance launches 'PU LID', a cutting-edge ID customization tool that sets a new standard for customization without the need for extensive tuning. This tool allows users to personalize their IDs by simply uploading an image, streamlining the ID customization process.
Open UI revolutionizes web design by offering instant UI creation tools that empower users to generate UI components dynamically, simplifying the web design process. With its no-code approach and intuitive design features, Open UI sets a new standard for efficient UI development.
Open UI's future plans include roadmap publication and user engagement initiatives to foster collaboration and feedback from the community. Individuals can contribute to Open UI's growth by providing input, ideas, and participation in shaping the tool's evolution.
Recent advancements in the AI landscape, including innovative features like 'InPainting' in Udo for audio editing and the 'PU LID' by ByteDance for ID customization, showcase the continuous evolution of AI technologies for enhanced customization and adaptability across various applications.
Udo's introduction of the 'InPainting' feature represents a significant leap in audio editing capabilities, enabling users to customize select segments of generated songs by altering lyrics and audio elements. This innovative functionality opens up new avenues for creative expression and personalization in music production.
ByteDance's latest release, 'PULID', redefines the ID customization landscape by offering a high-quality, tuning-free approach to personalized ID generation. By enabling users to upload their image for customization, PULID streamlines the process and enhances the level of customization available.
The current wave of AI advancements, such as innovative offerings like the 'InPainting' feature in Udo and 'PU LID' by ByteDance for ID customization, showcases the transformative potential of AI in offering advanced customization options. These tools pave the way for future design and personalization trends with cutting-edge capabilities.
Hey π (show notes and links a bit below)
This week has been a great AI week, however, it does feel like a bit "quiet before the storm" with Google I/O on Tuesday next week (which I'll be covering from the ground in Shoreline!) and rumors that OpenAI is not just going to let Google have all the spotlight!
Early this week, we got 2 new models on LMsys, im-a-good-gpt2-chatbot and im-also-a-good-gpt2-chatbot, and we've now confirmed that they are from OpenAI, and folks have been testing them with logic puzzles, role play and have been saying great things, so maybe that's what we'll get from OpenAI soon?
Also on the show today, we had a BUNCH of guests, and as you know, I love chatting with the folks who make the news, so we've been honored to host Xingyao Wang and Graham Neubig core maintainers of Open Devin (which just broke SOTA on Swe-Bench this week!) and then we had friends of the pod Tanishq Abraham and Parmita Mishra dive deep into AlphaFold 3 from Google (both are medical / bio experts).
Also this week, OpenUI from Chris Van Pelt (Co-founder & CIO at Weights & Biases) has been blowing up, taking #1 Github trending spot, and I had the pleasure to invite Chris and chat about it on the show!
Let's delve into this (yes, this is I, Alex the human, using Delve as a joke, don't get triggered π)
TL;DR of all topics covered (trying something new, my Raw notes with all the links and bulletpoints are at the end of the newsletter)
* Open Source LLMs
* OpenDevin getting SOTA on Swe-Bench with 21% (X, Blog)
* DeepSeek V2 - 236B (21B Active) MoE (X, Try It)
* Weights & Biases OpenUI blows over 11K stars (X, Github, Try It)
* LLama-3 120B Chonker Merge from Maxime Labonne (X, HF)
* Alignment Lab open sources Buzz - 31M rows training dataset (X, HF)
* xLSTM - new transformer alternative (X, Paper, Critique)
* Benchmarks & Eval updates
* LLama-3 still in 6th place (LMsys analysis)
* Reka Core gets awesome 7th place and Qwen-Max breaks top 10 (X)
* No upsets in LLM leaderboard
* Big CO LLMs + APIs
* Google DeepMind announces AlphaFold-3 (Paper, Announcement)
* OpenAI publishes their Model Spec (Spec)
* OpenAI tests 2 models on LMsys (im-also-a-good-gpt2-chatbot & im-a-good-gpt2-chatbot)
* OpenAI joins Coalition for Content Provenance and Authenticity (Blog)
* Voice & Audio
* Udio adds in-painting - change parts of songs (X)
* 11Labs joins the AI Audio race (X)
* AI Art & Diffusion & 3D
* ByteDance PuLID - new high quality ID customization (Demo, Github, Paper)
* Tools & Hardware
* Went to the Museum with Rabbit R1 (My Thread)
* Co-Hosts and Guests
* Graham Neubig (@gneubig) & Xingyao Wang (@xingyaow_) from Open Devin
* Chris Van Pelt (@vanpelt) from Weights & Biases
* Nisten Tahiraj (@nisten) - Cohost
* Tanishq Abraham (@iScienceLuvr)
* Parmita Mishra (@prmshra)
* Wolfram Ravenwolf (@WolframRvnwlf)
* Ryan Carson (@ryancarson)
Open Source LLMs
Open Devin getting a whopping 21% on SWE-Bench (X, Blog)
Open Devin started as a tweet from our friend Junyang Lin (on the Qwen team at Alibaba) to get an open source alternative to the very popular Devin code agent from Cognition Lab (recently valued at $2B π€―) and 8 weeks later, with tons of open source contributions, >100 contributors, they have almost 25K stars on Github, and now claim a State of the Art score on the very hard Swe-Bench Lite benchmark beating Devin and Swe-Agent (with 18%)
They have done so by using the CodeAct framework developed by Xingyao, and it's honestly incredible to see how an open source can catch up and beat a very well funded AI lab, within 8 weeks! Kudos to the OpenDevin folks for the organization, and amazing results!
DeepSeek v2 - huge MoE with 236B (21B active) parameters (X, Try It)
The folks at DeepSeek is releasing this huge MoE (the biggest we've seen in terms of experts) with 160 experts, and 6 experts activated per forward pass. A similar trend from the Snowflake team, just extended even longer. They also introduce a lot of technical details and optimizations to the KV cache.
With benchmark results getting close to GPT-4, Deepseek wants to take the crown in being the cheapest smartest model you can run, not only in open source btw, they are now offering this model at an incredible .28/1M tokens, that's 28 cents per 1M tokens!
The cheapest closest model in price was Haiku at $.25 and GPT3.5 at $0.5. This is quite an incredible deal for a model with 32K (128 in open source) context and these metrics.
Also notable is the training cost, they claim that it took them 1/5 the price of what Llama-3 cost Meta, which is also incredible. Unfortunately, running this model locally a nogo for most of us π
I would mention here that metrics are not everything, as this model fails quite humorously on my basic logic tests
LLama-3 120B chonker Merge from Maxime LaBonne (X, HF)
We're covered Merges before, and we've had the awesome Maxime Labonne talk to us at length about model merging on ThursdAI but I've been waiting for Llama-3 merges, and Maxime did NOT dissapoint!
A whopping 120B llama (Maxime added 50 layers to the 70B Llama3) is doing the rounds, and folks are claiming that Maxime achieved AGI π It's really funny, this model, is... something else.
Here just one example that Maxime shared, as it goes into an existential crisis about a very simple logic question. A question that Llama-3 answers ok with some help, but this... I've never seen this. Don't forget that merging has no additional training, it's mixing layers from the same model so... we still have no idea what Merging does to a model but... some brain damange definitely is occuring.
Oh and also it comes up with words!
ThursdAI - Recaps of the most high signal AI weekly spaces is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Big CO LLMs + APIs
Open AI publishes Model Spec (X, Spec, Blog)
OpenAI publishes and invites engagement and feedback for their internal set of rules for how their models should behave. Anthropic has something similar with Constitution AI.
I specifically liked the new chain of command (Platform > Developer > User > Tool) rebranding they added to the models, making OpenAI the Platform, changing "system" prompts to "developer" and having user be the user. Very welcome renaming and clarifications (h/t Swyx for his analysis)
Here are a summarized version of OpenAI's new rules of robotics (thanks to Ethan Mollic)
* follow the chain of command: Platform > Developer > User > Tool
* Comply with applicable laws
* Don't provide info hazards
* Protect people's privacy
* Don't respond with NSFW contents
Very welcome effort from OpenAI, showing this spec in the open and inviting feedback is greately appreciated!
This comes on top of a pretty big week for OpenAI, announcing an integration with Stack Overflow, Joining the Coalition for Content Provenance and Authenticity + embedding watermarks in SORA and DALL-e images, telling us they have built a classifier that detects AI images with 96% certainty!
im-a-good-gpt2-chatbot and im-also-a-good-gpt2-chatbot
Following last week gpt2-chat mystery, Sam Altman trolled us with this tweet
And then we got 2 new models on LMSys, im-a-good-gpt2-chatbot and im-also-a-good-gpt2-chatbot, and the timeline exploded with folks trying all their best logic puzzles on these two models trying to understand what they are, are they GPT5? GPT4.5? Maybe a smaller version of GPT2 that's pretrained on tons of new tokens?
I think we may see the answer soon, but it's clear that both these models are really good, doing well on logic (better than Llama-70B, and sometimes Claude Opus as well)
And the speculation is pretty much over, we know OpenAI is behind them after seeing this oopsie on the Arena π
you can try these models as well, they seem to be very favored in the random selection of models, but they show up only in battle mode so you have to try a few times https://chat.lmsys.org/
Google DeepMind announces AlphaFold3 (Paper, Announcement)
Developed by DeepMind and IsomorphicLabs, AlphaFold has previously predicted the structure of every molecule known to science, and now AlphaFold 3 was announced which can now predict the structure of other biological complexes as well, paving the way for new drugs and treatments.
What's new here, is that they are using diffusion, yes, like Stable Diffusion, starting with noise and then denoising to get a structure, and this method is 50% more accurate than existing methods.
If you'd like more info about this very important paper, look no further than the awesome 2 minute paper youtube, who did a thorough analysis here, and listen to the Isomorphic Labs podcast with Weights & Biases CEO Lukas on Gradient Dissent
They also released AlphaFold server, a free research tool allowing scientists to access these capabilities and predict structures for non commercial use, however it seems that it's somewhat limited (from a conversation we had with a researcher on stage)
This weeks Buzz (What I learned with WandB this week)
This week, was amazing for Open Source and Weights & Biases, not every week a side project from a CIO blows up on... well everywhere. #1 trending on Github for Typescript and 6 overall, OpenUI (Github) has passed 12K stars as people are super excited about being able to build UIs with LLms, but in the open source.
I had the awesome pleasure to host Chris on the show as he talked about the inspiration and future plans, and he gave everyone his email to send him feedback (a decision which I hope he doesn't regret π) so definitely check out the last part of the show for that.
Meanwhile here's my quick tutorial and reaction about OpenUI, but just give it a try here and build something cool!
Vision
I was shared some news but respecting the team I decided not to include it in the newsletter ahead of time, but expect open source to come close to GPT4-V next week π
Voice & Audio
11 Labs joins the AI music race (X)
Breaking news from 11Labs, that happened during the show (but we didn't notice) is that they are stepping into the AI Music scene and it sounds pretty good!)
Udio adds Audio Inpainting (X, Udio)
This is really exciting, Udio decided to prove their investment and ship something novel!
Inpainting has been around in diffusion models, and now selecting a piece of a song on Udio and having Udio reword it is so seamless it will definitely come to every other AI music, given how powerful this is!
Udio also announced their pricing tiers this week, and it seems that this is the first feature that requires subscription
AI Art & Diffusion
ByteDance PuLID for no train ID Customization (Demo, Github, Paper)
It used to take a LONG time to finetune something like Stable Diffusion to generate an image of your face using DreamBooth, then things like LoRA started making this much easier but still needed training.
The latest crop of approaches for AI art customization is called ID Customization and ByteDance just released a novel, training free version called PuLID which works very very fast with very decent results! (really, try it on your own face), previous works like InstantID an IPAdapter are also worth calling out, however PuLID seems to be the state of the art here! π₯
And that's it for the week, well who am I kidding, there's so much more we covered and I just didn't have the space to go deep into everything, but definitely check out the podcast episode for the whole conversation. See you next week, it's going to be π₯ because of IO and ... other things π
Listen to all your favourite podcasts with AI-powered features
Listen to the best highlights from the podcasts you love and dive into the full episode
Hear something you like? Tap your headphones to save it with AI-generated key takeaways
Send highlights to Twitter, WhatsApp or export them to Notion, Readwise & more
Listen to all your favourite podcasts with AI-powered features
Listen to the best highlights from the podcasts you love and dive into the full episode