This episode explores the trustworthiness of machine learning in critical applications, emphasizing the need for explainability and careful data practices to mitigate risk. It examines the societal impact of biased training data, how human roles in customer service are shifting as AI absorbs routine tasks, and why that shift can create new jobs. Throughout, the conversation underscores the responsibility of AI developers to build systems that produce just and equitable outcomes.
1:04 Jason intros Alexandr
2:19 Alexandr shares his personal startup history
5:17 How & why did Scale start?
8:26 What is the best example of Scale in practice? What problem are they solving?
10:44 Video demo of Scale's platform
15:34 Acquiring the scale.com domain name & insights on the unique spelling of Alexandr
17:31 How does Scale deal with data-sharing between customers?
21:34 LIDAR vs. non-LIDAR... or both?
32:29 When will we have capable self-driving vehicles from Palo Alto to San Francisco? Over/under 2030? How will gov't regulations affect self-driving?
36:03 China vs. US in the race of self-driving
41:22 Explainability in ML
47:26 Does it matter that we sometimes can't explain how ML systems reach their answers?
51:39 Should explainability have to be proven in ML?
55:13 How should inherently biased datasets (like the US justice system's) be handled in ML?
1:00:00 Importance of focusing on higher-value work
1:02:41 Are dangers of AI overblown?
1:08:50 Will "General AI" happen in our lifetime?
1:12:26 What's the next major AI trend after self-driving?
1:23:46 Does Alexandr remember a time before the Internet?
1:26:35 Jason plays "good tweet/bad tweet" with Alexandr