Monocle and Reverie are personal search engines that index and search over a collection of personal data points, such as journals, articles, bookmarks, contacts, tweets, and social media posts. Monocle is a full-text search engine for names, locations, and keywords, while Reverie is a semantic search engine that finds related content given an article or phrase. Both projects were built in the Ink programming language and focus on providing a useful tool for efficiently searching personal data, demonstrating how customized tools can enhance personal productivity and organization.
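The distinction between the two approaches can be sketched in miniature. The snippet below is a toy illustration, not how Monocle or Reverie are actually implemented (both are written in Ink, and real semantic search uses learned embeddings): full-text search is modeled as an inverted index with exact keyword lookup, and semantic search as a similarity ranking, with bag-of-words cosine similarity standing in for embeddings. The document IDs and texts are invented for the example.

```python
from collections import Counter, defaultdict
import math

# Hypothetical personal data points (journals, bookmarks, tweets).
docs = {
    "journal-1": "walked around the city and wrote in my journal",
    "bookmark-2": "an article about building search engines in Ink",
    "tweet-3": "thinking about notation and how we represent ideas",
}

# Full-text search (Monocle-style): an inverted index maps each token
# to the documents containing it, so keyword queries are exact lookups.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def keyword_search(query):
    """Return documents containing every query token."""
    hits = (index.get(t, set()) for t in query.lower().split())
    return sorted(set.intersection(*hits))

# Semantic search (Reverie-style): rank all documents by similarity to
# the query instead of requiring exact token matches.
def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def semantic_search(query):
    """Return all documents, most similar to the query first."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])),
                  reverse=True)

print(keyword_search("search engines"))        # exact keyword match
print(semantic_search("writing in a journal")) # similarity ranking
```

The contrast is the point: the keyword query only returns documents containing both tokens, while the semantic query ranks every document, surfacing the journal entry even though it shares no query phrasing beyond incidental words.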
The episode covers the creation of two programming languages, Ink and Oak. Ink originated as an experiment during an undergraduate computer science course and was developed further over the summer into a personal interpreter. Oak was then designed as a more refined version of Ink, with syntax and stylistic choices tuned to the creator's taste and aesthetic preferences. While inspired by languages such as JavaScript, Lua, and Python, both reflect the creator's own choices and adjustments, exemplifying the process of building and refining programming languages to suit personal needs.
The episode explores the advantages of personal search engines such as Monocle and Reverie in organizing and retrieving personal data. While the amount of data people generate might not always be substantial, these search engines prove useful for individuals who actively produce data points through activities like writing journals, collecting articles, and bookmarking web pages. Indexing and searching personal data enables efficient retrieval of information specific to an individual's interests and needs. The episode also draws a distinction between intentionally produced data, like notes and blog posts, and unintentionally produced data, such as browsing history, highlighting the potential for deriving insights from both.
The creator adopts a strategy of seeking regular private feedback through demo sessions while engaging in public discussions on platforms like Twitter to expand their network and gather diverse perspectives. Private feedback sessions offer deeper conversations with individuals who have expertise in relevant areas, providing valuable insights and inspiration for the creator's projects. On the other hand, public discussions facilitate the discovery of new sources of information, accelerate innovation cycles, and aid in the exploration of broader fields. The projects emphasize finding a balance between depth and breadth of feedback to foster continuous development and improvement.
Browsers and search engines are powerful ways to explore the web, but finding interesting and relevant content remains a challenge. The podcast discusses the limitations of browsing and the need for effective noise filters to separate valuable information from the overwhelming volume of content available. Extensions like Pocket and Curious offer convenient ways to save and organize content, making it easier to access and review highlights, and represent innovative approaches to navigating the vast sea of information on the internet.
The podcast delves into the concept of the web browser as a powerful tool for augmenting intelligence. The speaker emphasizes that web browsers can serve as a user agent, enhancing the user's experience of working with information. By building intelligence augmenting tools directly into browsers, such as summarization or content curation features, users can benefit from improved information processing and analysis. The browser is seen as an ideal starting point for building these augmentations, with potential to extend into the real world in the future.
The episode highlights the significance of notational intelligence in representing ideas effectively. Notation, used in various domains like mathematics and music, allows for better communication and manipulation of abstract concepts. The speaker proposes the idea of creating new notations, both physical and digital, that enable more precise and intuitive representation of thoughts. These new notations have the potential to revolutionize how ideas are expressed and understood, offering more flexibility and control in the process.
The podcast delves into the interface between humans and AI systems, aiming to create more effective and mutually aligned interactions. The speaker discusses the limitations of existing Turing machine-based computing models for simulating neural networks and proposes the idea of developing more direct manipulation interfaces. These interfaces would enable humans to interact with AI models through tools and operations, rather than relying solely on natural language instructions. The goal is to create a shared vocabulary and a more intuitive interface that allows for greater control and understanding in working with AI systems.
The podcast episode discusses the fascinating idea that computing is not limited to silicon circuits and can be extended to various materials and substrates. This opens up the possibility for computers to become more diverse in their capabilities and tasks, allowing for a future where different kinds of computation can be performed by different types of hardware. The speaker highlights the importance of exploring this possibility and the potential it holds for expanding the concept of computing.
The episode delves into the relationship between machine learning algorithms and available hardware. It references an article by Sara Hooker called "The Hardware Lottery," which argues that certain ML algorithms gained dominance not only due to their superiority but also because they were a good fit for the existing hardware, such as transformers exploiting the parallelism of GPUs. The episode suggests that a more diverse and easily accessible range of hardware backends could encourage greater algorithmic diversity, reducing bias and presenting new opportunities in the AI space.
The podcast explores the balance between external validation and pursuing personal interests in the tech industry. It acknowledges the pressure to achieve certain goals and metrics, such as getting on the front page of platforms like Hacker News, but emphasizes that it's okay to care about external validation. The speaker shares their own experience of achieving some of these goals, but also highlights the importance of staying true to personal interests and building things that are genuinely interesting. The episode encourages embracing the natural process of learning, exploration, and attracting like-minded individuals in the tech community.
The podcast touches on the value of writing and storytelling as a creative process. It suggests that writing serves as a means to mold and freeze ideas, enabling better understanding and communication. The episode highlights the importance of authenticity in writing, emphasizing that there are no rules except for effective communication. The speaker shares their own writing workflow, including the practice of writing drafts, reading aloud, and the iterative process of refining ideas. Additionally, the episode discusses the power of language and notations, and the potential for their transformation in computing and other fields.
The podcast addresses the challenge of navigating goals and external validation in the tech industry. It recognizes the dissonance between messages of pursuing passion and the reality of tangible metrics and titles. The episode acknowledges the natural desire for recognition, such as getting on Product Hunt or starting a company, and asserts that it's okay to care about these goals. However, the speaker suggests that true validation comes from building things genuinely interesting to oneself and attracting like-minded individuals, and encourages aspiring tech professionals to pursue their interests while understanding that external validation is a byproduct of that process.
In episode 56 of The Gradient Podcast, Daniel Bashir speaks to Linus Lee.
Linus is an independent researcher interested in the future of knowledge representation and creative work aided by machine understanding of language. He builds interfaces and knowledge tools that expand the domain of thoughts we can think and qualia we can feel. Linus has been writing online since 2014; his blog boasts half a million words, and he has built well over 100 side projects. He has also spent time as a software engineer at Replit, Hack Club, and Spensa, and was most recently a Researcher in Residence at Betaworks in New York.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Linus’s background and interests, vision-language models
* (07:45) Embodiment and limits for text-image
* (11:35) Ways of experiencing the world
* (16:55) Origins of the handle “thesephist”, languages
* (25:00) Math notation, reading papers
* (29:20) Operations on ideas
* (32:45) Overview of Linus’s research and current work
* (41:30) The Oak and Ink languages, programming languages
* (49:30) Personal search engines: Monocle and Reverie, what you can learn from personal data
* (55:55) Web browsers as mediums for thought
* (1:01:30) This AI Does Not Exist
* (1:03:05) Knowledge representation and notational intelligence
* Notation vs language
* (1:07:00) What notation can/should be
* (1:16:00) Inventing better notations and expanding human intelligence
* (1:23:30) Better interfaces between humans and LMs to provide precise control, the inefficiency of prompt engineering
* (1:33:00) Inexpressible experiences
* (1:35:42) Linus’s current work using latent space models
* (1:40:00) Ideas as things you can hold
* (1:44:55) Neural nets and cognitive computing
* (1:49:30) Relation to Hardware Lottery and AI accelerators
* (1:53:00) Taylor Swift Appreciation Session, mastery and virtuosity
* (1:59:30) Mastery/virtuosity and interfaces / learning curves
* (2:03:30) Linus’s stories, the work of fiction
* (2:09:00) Linus’s thoughts on writing
* (2:14:20) A piece of writing should be focused
* (2:16:15) On proving yourself
* (2:28:00) Outro