#141 - Adobe AI upgrades, Ernie 4.0, TimeGPT, No Fakes Act, AI drones
Oct 26, 2023
Adobe is enhancing its AI tools, pushing generative features in Photoshop and Illustrator. Baidu's Ernie 4.0 emerges as a rival to GPT-4, sparking debates about AI capabilities. The No Fakes Act aims to protect performers from unauthorized AI replicas. Tensions rise in AI hardware as Arm China staff form a new startup amidst industry shifts. Microsoft dives into AI vulnerabilities with a lucrative bug bounty program, while the ChatGPT app faces slowing growth amid fierce competition. Ethical concerns also loom as AI advances in military applications.
Adobe is upgrading its generative AI models for Photoshop, Illustrator, and Express.
Character.AI introduces group chats where people and multiple AIs can communicate.
Meta's AI celebrity faces resistance as Kendall Jenner becomes 'Billie'.
Dropbox releases AI-powered Dash in open beta along with a web interface redesign.
Google Cloud offers customers legal indemnification for generative AI outputs.
Baidu unveils Ernie 4.0, an AI model it claims rivals GPT-4.
Deep dives
LARC: A Multi-Modal Foundation Model for Music
LARC is a new multi-modal foundation model that combines an audio input and a language input to generate captions and answer questions about music.
TimeGPT-1: A Foundation Model for Time Series
TimeGPT-1 is a new foundation model for time series forecasting. It is trained to produce accurate zero-shot predictions on diverse datasets not seen during training, showcasing its potential across a wide range of forecasting tasks.
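Zero-shot forecasters like TimeGPT-1 are typically benchmarked against simple statistical baselines. A minimal seasonal-naive baseline (our own illustrative sketch, not code from the paper) looks like this:

```python
def seasonal_naive_forecast(history, horizon, season_length):
    """Forecast by repeating the last observed seasonal cycle.

    A standard baseline: the prediction for each future step is the
    value observed exactly one season earlier.
    """
    if len(history) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]
```

A zero-shot foundation model earns its keep only if it beats baselines like this on series it has never seen.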
How Far are Language Models from Agents with Theory of Mind?
This paper probes theory-of-mind capabilities in language models: their ability to infer the beliefs and intentions of different actors in a given scenario. The results show that current models, including GPT-4 and PaLM 2, struggle to apply theory of mind consistently when it is embedded in concrete tasks.
HyperAttention: Long-Context Attention in Near-Linear Time
This paper introduces HyperAttention, an attention mechanism that addresses the quadratic computational bottleneck of standard transformer attention. By achieving near-linear time complexity, HyperAttention enables more efficient processing of long-context inputs.
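To see why sub-quadratic attention matters, here is a toy sliding-window attention in plain Python. It illustrates the general idea of restricting which positions attend to which, cutting cost from O(n²) to O(n·w); it is not HyperAttention's actual algorithm, which instead relies on hashing and sampling to approximate full attention.

```python
import math

def windowed_attention(q, k, v, window):
    """Toy 1-D attention: each position attends only to its `window`
    most recent keys, so total cost is O(n * window) rather than the
    O(n^2) of full attention. q, k, v are lists of scalars."""
    n = len(q)
    out = []
    for i in range(n):
        start = max(0, i - window + 1)
        scores = [q[i] * k[j] for j in range(start, i + 1)]
        m = max(scores)  # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        out.append(sum(w * v[start + j] for j, w in enumerate(weights)) / z)
    return out
```

With uniform queries and keys, each output is just the mean of the values in its window, which makes the locality restriction easy to see.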
Southeast Asia takes business-friendly approach to AI regulation
Southeast Asian countries are adopting a more business-friendly approach to AI regulation, in contrast to the EU's strict regulations. The Association of Southeast Asian Nations (ASEAN) is working on a draft guide for AI ethics and governance, which emphasizes considering cultural differences and does not prescribe specific risk categories.
China aims to increase domestic computing power
China is targeting a 50% increase in domestic computing power by 2025 as the AI race with the US intensifies. The plan includes increasing computing power from 197 exaflops to 300 exaflops and aims to drive economic output by investing in computing infrastructure.
US tightens export controls on computer chips to China
The US is tightening export controls on computer chips to China and expanding curbs to other countries. The move affects companies like Nvidia and TSMC and aims to prevent advanced chip technology from reaching China where it could be used for military purposes.
AI image detectors used to discredit real horrors of war
AI image detectors have been used to claim that a photograph of a burnt corpse of a baby killed in Hamas's attack on Israel was generated by AI, discrediting the real human consequences of the conflict. The use of AI-generated misinformation can undermine efforts to show the truth and advocate for human rights. The accuracy of AI image detectors is questionable and should be approached with skepticism.
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there’s a video version on YouTube.