Topics covered in this podcast episode include the recommended use of Psycopg 3 for new projects and its native asyncio support, the features of the dacite library for creating data classes from dictionaries, a new Rust implementation of pip with some exciting features, the causes of flaky tests such as concurrency issues, and discussions about data testing and experiences with email clients.
Psycopg 3 is recommended for new projects, while Psycopg 2 is still maintained for legacy projects.
dacite simplifies the creation of data classes from dictionaries, supporting nested structures, type checking, and custom type hooks.
Deep dives
Psycopg 3 is the new present and Psycopg 2 is the past
Psycopg, the popular PostgreSQL adapter for Python, has released Psycopg 3 as its latest major version. The announcement frames Psycopg 3 as the present and the recommended choice for new projects, while Psycopg 2 remains maintained but is considered the past. Psycopg 3 brings new features such as native asyncio support, support for more Python types like enums, more PostgreSQL types including multirange, and improved parameter binding. The community encourages developers to consider Psycopg 3 for new projects and provides resources comparing it with Psycopg 2.
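As a minimal sketch of the asyncio support (the `dbname=test` connection string is a placeholder, not from the episode; adjust it for your own server), a query with Psycopg 3 might look like this:

```python
import asyncio
import psycopg  # Psycopg 3 installs as the "psycopg" package

async def main() -> None:
    # Placeholder connection string; point this at a real database.
    async with await psycopg.AsyncConnection.connect("dbname=test") as conn:
        async with conn.cursor() as cur:
            # Parameters are bound rather than interpolated into the SQL string.
            await cur.execute("SELECT %s::int, %s::text", (42, "hello"))
            print(await cur.fetchone())

asyncio.run(main())
```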
Simplifying data class setup with dacite
Python's built-in data classes offer a convenient way to define classes with predefined attributes. However, building them from complex, JSON-like dictionaries can be cumbersome. dacite, a Python library, simplifies this by automating the creation of data classes from dictionaries. It supports nested structures, type checking, optional fields, union types, forward references, collections, and custom type hooks. While dacite is not a data validation library, it streamlines the process of converting dictionaries into complex and properly typed data classes.
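A minimal sketch of what that looks like, using hypothetical `User` and `Address` classes to show nested structures and a custom type hook:

```python
from dataclasses import dataclass
from datetime import date
from dacite import from_dict, Config

@dataclass
class Address:
    street: str
    city: str

@dataclass
class User:
    name: str
    address: Address  # nested data class, built recursively from the dict
    signup: date      # converted by the type hook below

data = {
    "name": "Ada",
    "address": {"street": "1 Analytical Way", "city": "London"},
    "signup": "2023-11-01",
}

user = from_dict(
    data_class=User,
    data=data,
    # Type hooks map a target type to a callable applied to the raw value.
    config=Config(type_hooks={date: date.fromisoformat}),
)
print(user.address.city, user.signup)  # London 2023-11-01
```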
Identifying and Handling Flaky Tests
The podcast episode discusses flaky tests and offers insights on handling them effectively. Flaky tests produce varying results across repeated runs, making them unreliable indicators of code quality. The episode highlights common causes, such as concurrency issues, floating-point rounding, unpredictable external systems, and unexpected interactions with Python's global interpreter lock (GIL). Suggested strategies include using locks correctly, setting explicit timeouts for external systems, and comparing floating-point results within a tolerance rather than for exact equality. The episode also recommends testing plugins like pytest-repeat and property-based testing with Hypothesis to flush out flaky cases. By following these approaches, developers can improve the reliability and accuracy of their test suites.
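As a sketch of two of those tactics, the snippet below compares floats within a tolerance using `pytest.approx` and, assuming the pytest-repeat plugin is installed, reruns a suspect test many times via its `repeat` marker (the test names here are illustrative):

```python
import pytest

def test_float_sum():
    # Exact equality on floats is a classic source of flakiness;
    # pytest.approx compares within a tolerance instead.
    assert 0.1 + 0.2 == pytest.approx(0.3)

# With pytest-repeat installed, a suspect test can be hammered
# repeatedly to surface intermittent failures.
@pytest.mark.repeat(100)
def test_suspected_flaky_behavior():
    ...
```

pytest-repeat can also repeat an entire run from the command line with `pytest --count=100`.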