Prompting techniques have evolved to optimize for instruction-following and generate more reliable responses. The introduction of instruct models allows for a more user-friendly experience, reducing the mental effort required to craft prompts. The focus has shifted from leading the model with complex prompts to giving simple, clear instructions, enabling users to get the outputs they want with ease.
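The shift can be illustrated with two hypothetical prompt styles: a completion-era prompt that leads the model by pattern, versus a plain instruction for an instruct model. Both strings below are made up for illustration, not taken from the episode.

```python
# Completion era: lead the model with a few-shot pattern it must continue.
completion_prompt = (
    "English: Hello\nFrench: Bonjour\n"
    "English: Thank you\nFrench: Merci\n"
    "English: Good night\nFrench:"
)

# Instruction era: simply state the task.
instruction_prompt = "Translate 'Good night' into French."
```

The few-shot version works on base models but forces the user to design the pattern; the instruction version relies on the model having been tuned to follow directions.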
During training, models go through several stages, including fine-tuning and post-training optimization. Checkpoints are used for testing and evaluation, and user feedback helps refine and improve the models. Models are continuously enhanced, addressing limitations and optimizing for specific tasks to ensure better performance and reliability.
Prompt design has become more sophisticated, allowing for broader generalization and fine-tuning capabilities. Larger datasets and reinforcement learning techniques have improved model performance on specific tasks. Furthermore, retrieval-augmented generation, as seen in products like ChatGPT Plus, extends prompt design by letting users upload documents and incorporate retrieval-based search into the generation process.
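As a rough sketch of the retrieval-augmented idea: pick the most relevant stored documents for a query and prepend them to the prompt before generation. This toy version uses naive word-overlap scoring rather than real embeddings, and the document store and wording are hypothetical.

```python
def score(query, doc):
    """Count how many query words appear in the document (naive relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_rag_prompt(query, documents, top_k=1):
    """Prepend the top_k most relevant documents as context for the model."""
    ranked = sorted(documents, key=lambda doc: score(query, doc), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "GPT-4 was released by OpenAI in March 2023.",
    "DALL-E 3 generates images from text prompts.",
]
prompt = build_rag_prompt("When was GPT-4 released?", docs)
```

A production system would embed documents and queries, rank by vector similarity, and send the assembled prompt to the model; the prompt-construction step, though, looks much like this.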
GPT-4 Vision has the potential to revolutionize industries such as quality control, assistance for blind users, and agriculture. On the writing side, advances in prompt engineering and the underlying technology may eventually enable GPT models to write good novels with convincing emotional reactions and other qualitative characteristics, though that will require training on larger amounts of text to capture the essence of an author's style and develop stories incrementally.
GPT-4 will eventually be capable of writing good novels that capture an author's style and narrative flow. It will require training on larger amounts of text and annotations to understand and develop the intricacies of an author's storytelling. Success will hinge on the model's ability to evolve stories incrementally and effectively convey the author's vision and intent.
Jailbreaking, while interesting, is seen as a gimmick and not a significant achievement. Currently, the focus is on harnessing the capabilities of GPT models through improved prompt engineering and training on larger text inputs. The next era of prompt engineering will likely involve leveraging GPT for capabilities like vision and audio to enhance the language model's understanding and generation of text.
Prompt engineers need to understand the capabilities and limitations of the model they are working with. They must analyze where the model excels and where it falls short by closely observing its performance on different prompts. It is important to identify patterns and keywords that trigger desired outcomes and to iterate on prompt design based on the model's behavior. Prompt optimization techniques can be useful in finding the best prompts for specific tasks. Prompt engineers should focus on leveraging the model's knowledge and providing clear and specific instructions to achieve desired results.
Prompt engineers should continuously iterate on prompts to refine and improve the performance of the model. They can start with descriptive prompts and gradually optimize them to achieve better outcomes. It is important to experiment with different prompt structures, explore variations, and analyze the model's responses. By honing their intuition and understanding the model's strengths, prompt engineers can create effective prompts that elicit the desired information or behavior from the model.
While prompt engineers may not have access to the specific training data used by models, it is still possible to create effective prompts by leveraging available data sources and understanding the model's knowledge and capabilities. Automated prompt optimization techniques can assist in finding the best prompts by utilizing optimization algorithms to fine-tune the prompts for desired outcomes. Prompt engineers should focus on generating prompts that align with the model's past training data and capitalize on its ability to recognize patterns and contexts.
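A toy version of the automated prompt optimization mentioned above: greedy hill-climbing over a small set of candidate prompt edits, keeping whichever variant scores best. `mock_model` and the scoring rule are stand-ins invented for this sketch; a real optimizer would call an actual LLM and use a task-specific metric.

```python
def mock_model(prompt):
    """Pretend model: shows reasoning only when explicitly asked for steps."""
    if "step by step" in prompt:
        return "Step 1: ... Step 2: ... Final answer: 42"
    return "42"

def score(response):
    """Reward responses that show reasoning before the answer."""
    return response.count("Step")

EDITS = [" Explain step by step.", " Be concise.", " Answer in JSON."]

def optimize(base_prompt, rounds=3):
    """Greedily append the edit that most improves the scored response."""
    best, best_score = base_prompt, score(mock_model(base_prompt))
    for _ in range(rounds):
        improved = False
        for edit in EDITS:
            candidate = best + edit
            s = score(mock_model(candidate))
            if s > best_score:
                best, best_score = candidate, s
                improved = True
        if not improved:
            break  # local optimum reached
    return best, best_score

best, best_score = optimize("Solve the puzzle.")
```

Even this trivial loop captures the core idea: treat the prompt as a parameter, the model's response quality as the objective, and search over edits.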
The podcast episode emphasizes the importance of having a process to learn and improve. The speaker shares personal experiences from writing novels to learning how to code, highlighting the benefits of constantly making things and not letting self-doubt or limitations define one's potential. They encourage creatives, entrepreneurs, and prompt engineers to have a growth mindset, seek feedback, and have a systematic approach to honing their skills and improving their capabilities.
The podcast dives into the challenge of predicting the future capabilities of AI models. The speaker highlights the difficulty in foreseeing the specific capabilities that will emerge as models scale up. They discuss how the training process can result in unexpected outcomes and capabilities, and how different approaches can yield varying results. They also emphasize the importance of continuous testing, evaluation, and feedback to uncover and understand the true potential of AI models. Overall, they suggest that curiosity, open-mindedness, and experimentation are key to discovering and leveraging AI capabilities effectively.
Discussing Prompt Engineering and recent OpenAI developments with ex-OpenAI Creative Apps and Scientific Communicator Andrew Mayne
Timestamps:
00:00:00 - Teaser Reel Intro
00:01:01 - Intro / Andrew's background
00:02:49 - What was it like working at OpenAI when you first joined?
00:12:59 - Was Andrew basically one of the earliest Prompt Engineers?
00:14:04 - How Andrew Hacked his way into a tech job at OpenAI
00:17:08 - Parallels between Hollywood and Tech jobs
00:20:58 - Parallels between the world of Magic and working at OpenAI
00:25:00 - What was OpenAI like in the Early Days?
00:30:24 - Why it was hard promoting GPT-3 early on
00:31:00 - How would you describe the current 'instruction age' of prompt design?
00:35:22 - What was GPT-4 like freshly trained?
00:39:00 - Is there anything different about the raw base model without RLHF?
00:42:00 - Optimizations that go into Language models like GPT-4
00:43:30 - What was it like using DALL-E 3 very early on?
00:44:38 - Do you know who came up with the 'armchair in the shape of an avocado' prompt at OpenAI?
00:45:48 - Did you experience 'DALL-E Dreams' as a part of the DALL-E 2 beta?
00:47:16 - How else has prompt design changed?
00:49:27 - How has prompt design changed because of ChatGPT?
00:52:40 - How to get ChatGPT to mimic and emulate personalities better?
00:54:30 - Mimicking Personalities II (How to do Style with ChatGPT)
00:56:40 - Fine Tuning ChatGPT for Mimicking Elon Musk
00:59:44 - How do you get ChatGPT to come up with novel and brilliant ideas?
01:02:40 - How do you get ChatGPT to get away from conventional answers?
01:05:14 - Will we ever get single-shot, real true novelty from LLMs?
01:10:05 - Prompting for ChatGPT Voice Mode
01:12:20 - Possibilities and Prompting for GPT-4 Vision
01:15:45 - GPT-4 Vision Use Cases/Startup Ideas
01:21:37 - Does multimodality make language models better or are the benefits marginal?
01:24:00 - Intuitively, has multimodality improved the world model of LLMs like GPT-4?
01:25:33 - What would it take for ChatGPT to write half of your next book?
01:29:10 - Qualitatively, what would it take to convince you about a book written by AI? What are the characteristics?
01:31:30 - Could an LLM mimic Andrew Mayne's writing style?
01:37:49 - Jailbreaking ChatGPT
01:41:12 - What's the next era of prompt engineering?
01:45:50 - How have custom instructions changed the game?
01:54:41 - How far do you think we are from asking a model how to make 10 million dollars and getting back a legit answer?
02:01:07 - Part II - Making Money with LLMs
02:11:32 - How do you make a chat bot more reliable and safe?
02:12:12 - How do you get ChatGPT to consistently remember criteria and work within constraints?
02:12:45 - What about DALL-E? How do you get it to better create within constraints?
02:14:14 - What's your prompt practice like?
02:15:10 - Do you intentionally sit down and practice writing prompts?
02:16:45 - How do you build an intuition around prompt design for an LLM?
02:20:00 - How do you like to iterate on prompts? Do you have a process?
02:21:45 - How do you know when you've hit the ceiling with a prompt?
02:24:00 - How do you know a single-line prompt has room to improve?
02:26:40 - Do you actually need to know OpenAI's training data? What are some ways to mitigate this?
02:30:40 - What are your thoughts on automated prompt writing/optimization?
02:33:20 - How do you get a job as a prompt engineer? What makes a top tier prompt engineer different from an everyday user?
02:37:20 - How do you think about scaling laws as a prompt engineer?
02:39:00 - Effortless Prompt Design
02:40:52 - What are some research areas that would get you a job at OpenAI?
02:43:30 - The Research Possibilities of Optimization & Inference
02:45:59 - If you had to guess future capabilities of GPT-5 what would they be?
02:50:16 - What are some capabilities that got trained out of GPT-4 for ChatGPT?
02:51:10 - Is there any specific capability you could imagine for GPT-5? Why is it so hard to predict them?
02:56:06 - Why is it hard to predict future LLM capabilities? (Part II)
02:59:47 - What made you want to leave OpenAI and start your own consulting practice?
03:05:29 - Any remaining advice for creatives, entrepreneurs, prompt engineers?
03:09:25 - Closing
Subscribe to the Multimodal By Bakz T. Future Podcast!
Spotify - https://open.spotify.com/show/7qrWSE7ZxFXYe8uoH8NIFV
Apple Podcasts - https://podcasts.apple.com/us/podcast/multimodal-by-bakz-t-future/id1564576820
Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jha3p0ZnV0dXJlL2ZlZWQueG1s
Stitcher - https://www.stitcher.com/show/multimodal-by-bakz-t-future
Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml
Connect with me:
YouTube - https://www.youtube.com/bakztfuture
Substack Newsletter - https://bakztfuture.substack.com
Twitter - https://www.twitter.com/bakztfuture
Instagram - https://www.instagram.com/bakztfuture
GitHub - https://www.github.com/bakztfuture