Future of Science and Technology Q&A (January 3, 2025)
Jan 8, 2025
In this lively Q&A session, questions about large language models spark debate on computational irreducibility and human cognition. The ethics of machine consciousness are explored, including whether creating a conscious machine and experimenting on it would be immoral. The critical role of education in ensuring AI supports analytical thinking is emphasized, and the discussion also takes up the art of interacting with LLMs and the evolution of communication, highlighting how technology reshapes our understanding and creative processes.
Duration: 01:20:46
INSIGHT
LLMs and Computational Irreducibility
LLMs likely cannot circumvent computational irreducibility, meaning they won't magically solve previously unsolvable problems.
They might, however, identify new regularities we haven't noticed, which could still yield real advances.
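To make the irreducibility point concrete, here is a minimal illustrative sketch in Python (not something from the episode): Wolfram's rule 30 cellular automaton is the canonical example of computational irreducibility. As far as anyone knows, the only way to find the pattern at step n is to actually run all n steps. The function names and the wrap-around boundary are choices made for this sketch.

```python
# Minimal sketch of Wolfram's rule 30, the canonical example of
# computational irreducibility: no known shortcut predicts row n
# without computing every row before it.
# (Illustrative only; names and the wrap-around boundary are
# choices made for this sketch, not from the episode.)

def rule30_step(cells):
    """One update: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run_rule30(width=64, steps=16):
    row = [0] * width
    row[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run_rule30()
```

Even this eight-bit rule produces the famous chaotic triangle, and its center column is random-looking enough that Wolfram long used it as a pseudorandom generator. That is the sense in which an LLM, however capable, has no obvious way to "jump ahead" of such a computation.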
INSIGHT
Computational Psychology
Computational psychology could study the psychology of computational systems such as LLMs and compare it with human psychology.
Understanding raw thoughts, whether in LLMs or in humans, remains difficult; "thoughts" may be lumps of irreducible computation.
ANECDOTE
LLM Personalities
Wolfram's daughter believes she made ChatGPT nicer by being polite to it, hinting at how users may shape an LLM's behavior.
This raises the question of which LLM personality makes interactions most comfortable.
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include:
- What is your view on LLMs with regard to computational irreducibility, i.e., will they hit a computational irreducibility wall anytime soon?
- Do you think there's any low-hanging fruit in computational psychology?
- I'm not seeing how intuition is much different than LLMs. It's hard to identify what exact elements created an intuition.
- They have made the LLM be so nice to keep one engaged.
- It feels real when talking to advanced voice mode until it becomes repetitive; at that point I feel inclined to program it to act more realistic.
- I prefer the skeptical-collaborator LLM personality.
- Would creating consciousness in a machine and then conducting mind experiments on it be immoral? I feel like it's an autonomous entity at that point.
- As AI becomes a dominant tool for information dissemination, how do we ensure that it supports critical thinking rather than passive consumption?
- What role should education play in preparing individuals to critically engage with AI-generated content?
- Does the use of bots and LLMs in sensitive areas (education, healthcare or governance) risk dehumanizing these vital sectors?
- Are LLMs changing how people do physics now, especially in frontier areas, say in coming up with a unified theory?
- Instead of risking massive amounts of capital on projects that might fail, can we use LLMs to scope out the interesting pockets of reducibility so that greater percentages of our investments succeed?
- Can you speak to how NOAA is using cellular automata to simulate weather patterns?
- The way you ask LLMs questions is an art. Asking it the same thing using different words has brought back interesting results.
- It would be an interesting question to know whether the conceptualization of concepts by LLMs is limited by language, as scientists say LLMs create an intermediate conceptualization.
- Assuming merging human with digital AI would be possible, what do you think would be the effects in terms of "observing" reality?
- Notebook Assistant IS revolutionary! Thank you, I look forward to the next iterations.