
Alexander Mattick

A leading voice in Yannic Kilcher's Discord community, known for his technical depth in AI.

Top 3 podcast episodes with Alexander Mattick

Ranked by the Snipd community
22 snips
Nov 8, 2022 • 2h 10min

#79 Consciousness and the Chinese Room [Special Edition] (CHOLLET, BISHOP, CHALMERS, BACH)

Francois Chollet, an AI researcher at Google Brain and creator of Keras, joins a panel featuring philosopher David Chalmers and cognitive scientists to delve into the Chinese Room argument. They explore whether machines can genuinely understand language or only simulate it. The discussion challenges conventional views on consciousness, emphasizing that true understanding stems from complex interactions rather than mere rule-following. Insights into syntax versus semantics reveal the deeper philosophical implications of AI and the nature of consciousness.
16 snips
Oct 23, 2022 • 32min

Neural Networks are Decision Trees (w/ Alexander Mattick)

Alexander Mattick joins me to discuss the paper "Neural Networks are Decision Trees", which has generated a lot of hype on social media. We ask: has this paper solved one of the great mysteries of deep learning and opened black-box neural networks up to interpretability?

OUTLINE:
0:00 - Introduction
2:20 - Aren't neural networks non-linear?
5:20 - What does it all mean?
8:00 - How large do these trees get?
11:50 - Decision trees vs. neural networks
17:15 - Is this paper new?
22:20 - Experimental results
27:30 - Can trees and networks work together?

Paper: https://arxiv.org/abs/2210.05189
Author: Caglar Aytekin

Abstract: In this manuscript, we show that any feedforward neural network with piecewise-linear activation functions can be represented as a decision tree. The representation is an equivalence, not an approximation, so the accuracy of the neural network is preserved exactly. We believe this work paves the way to tackling the black-box nature of neural networks. We share equivalent trees of some neural networks and show that, besides providing interpretability, the tree representation can also achieve some computational advantages. The analysis holds for both fully connected and convolutional networks, which may or may not include skip connections and/or normalizations.
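To make the episode's central claim concrete, here is a minimal Python sketch (my own illustration, not code from the paper): for a one-hidden-layer ReLU network, the sign of each hidden pre-activation acts as a binary split in a decision tree, and once the full sign pattern (the path to a leaf) is fixed, the network reduces to an exact affine map. The network sizes and names below are arbitrary choices for the demo.

```python
import numpy as np

# Toy one-hidden-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2.
# Shapes (2 inputs, 3 hidden units, 1 output) are arbitrary demo choices.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def tree(x):
    # Internal nodes: one sign test per hidden unit, i.e. a depth-3 path.
    z = W1 @ x + b1
    pattern = (z >= 0.0).astype(float)   # which branch to take at each node
    # Leaf: with the pattern fixed, relu(z) == pattern * z, so the network
    # collapses to the affine map (W2 * pattern) @ z + b2.
    return (W2 * pattern) @ z + b2

for _ in range(5):
    x = rng.normal(size=2)
    assert np.allclose(net(x), tree(x))  # equivalence is exact, not approximate
print("tree representation matches the network on all samples")
```

Note the catch discussed in the episode ("How large do these trees get?"): with n hidden ReLUs the tree has up to 2^n leaves, so the exact tree can become astronomically large even for modest networks.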
Feb 28, 2022 • 51min

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]

Join Alexander Mattick, a prominent voice in Yannic's Discord community and an AI aficionado, as he dives deep into the inner workings of neural networks. He shares insights on the spline theory of neural networks and on why learning good abstractions is hard in machine learning. The discussion also touches on the balance between exploration and control in knowledge acquisition, and on the philosophical implications of causality and of discrete versus continuous modeling. Alex champions the value of a broad knowledge base, illustrating how diverse insights can sharpen problem-solving.