AI-powered podcast player
Listen to all your favourite podcasts with AI-powered features
LARC is a new multi-modal foundation model that takes both audio and language inputs to generate captions, analyze music, and produce predictions for diverse time series data.
TimeGPT-1 is a new foundation model focused on time series forecasting. It aims to generate accurate predictions for diverse datasets not seen during training, showcasing its potential for zero-shot forecasting across a variety of tasks.
This paper explores theory-of-mind capabilities in language models, specifically their ability to predict the beliefs and thoughts of different actors in a given scenario. The research shows that current language models, such as GPT-4 and PaLM 2, struggle to consistently demonstrate theory-of-mind abilities when applied to concrete tasks.
This paper introduces HyperAttention, a mechanism that addresses the computational bottleneck of the standard attention mechanism in transformer models. HyperAttention achieves near-linear time complexity, enabling more efficient processing of long-context inputs.
Southeast Asian countries are adopting a more business-friendly approach to AI regulation, in contrast to the EU's strict regulations. The Association of Southeast Asian Nations (ASEAN) is working on a draft guide for AI ethics and governance, which emphasizes considering cultural differences and does not prescribe specific risk categories.
China is targeting a 50% increase in domestic computing power by 2025 as the AI race with the US intensifies. The plan includes increasing computing power from 197 exaflops to 300 exaflops and aims to drive economic output by investing in computing infrastructure.
The US is tightening export controls on computer chips to China and expanding curbs to other countries. The move affects companies like Nvidia and TSMC and aims to prevent advanced chip technology from reaching China where it could be used for military purposes.
AI image detectors have been used to claim that a photograph of the burnt corpse of a baby killed in Hamas's attack on Israel was AI-generated, discrediting the real human consequences of the conflict. Such claims show how accusations of AI generation can themselves spread misinformation, undermining efforts to document the truth and advocate for human rights. The accuracy of AI image detectors is questionable, so claims that an image is AI-generated should be treated with skepticism and checked against reliable sources.
Our 141st episode with a summary and discussion of last week's big AI news, now back with the usual hosts!
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there’s a video version on YouTube.
Timestamps + Links: