
Test & Code

Latest episodes

Oct 5, 2020 • 29min

133: Major League Hacking - Jon Gottfried

Hackathons have been spreading around the world, many of them on university campuses, and Major League Hacking (MLH) has been encouraging and supporting them. Hacking can be thought of as tinkering: taking things apart and putting them back together is an interesting experience, and there has always been some of that in software culture. The people at Major League Hacking have taken it to a whole new level, bringing together tech creators who enjoy playing around with and creating new technology, on campuses and now in virtual spaces, all over the world. Jon Gottfried, one of the co-founders of Major League Hacking, joins the show to talk about:

hacker meetups and events
hackathons
what it's like to go to a hackathon
how to help out with hackathons as an experienced engineer, even virtually as a mentor
hackathons continuing virtually during the pandemic
internships and fellowships on open source projects to help students gain experience, even during the pandemic
the MLH approach to internships: giving interns a support group including peers, mentors, project maintainers, and MLH itself

Special Guest: Jon Gottfried.
Sponsored By:
Datadog: Modern monitoring & security. See inside any stack, any app, at any scale, anywhere. Visit testandcode.com/datadog to get started.
monday.com: Creating a monday.com app can help thousands of people and win you prizes. Maybe even a Tesla or a MacBook.
Links:
Major League Hacking
★ Support this podcast on Patreon ★
Sep 28, 2020 • 41min

132: mocking in Python - Anna-Lena Popkes

Using mock objects during testing in Python. Anna-Lena joins the podcast to teach us about mocks and using unittest.mock objects during testing. We discuss:

the different styles of using mocks
pros and cons of mocks
dependency injection
adapter pattern
mock hell
Magical Universe
and much more

Special Guest: Anna-Lena Popkes.
Sponsored By:
Talk Python Training: Online video courses for Python developers
PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE22
HoneyBadger: When bad things happen, it's nice to know that Honeybadger has your back. 30% off for the first 6 months when you mention the Test & Code Podcast when signing up.
Links:
Personal webpage of Anna-Lena Popkes
Magical Universe - Awesome Python features explained using the world of magic
Test & Code 102: Cosmic Python, TDD, testing and external dependencies - The episode where Harry Percival discusses mocking.
Talk: Harry Percival - Stop Using Mocks (for a while) - YouTube
unittest.mock Autospeccing
Mock Hell Talk (45 min version) - Edwin Jung - PyCon 2019
Mock Hell Talk (30 min version) - Edwin Jung - PyConDE / PyCon Estonia
KI macht Schule!
Talk Python #186: 100 Days of Python in a Magical Universe
★ Support this podcast on Patreon ★
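For a flavor of the mock styles discussed, here is a minimal, self-contained sketch using unittest.mock together with dependency injection. The currency-conversion example is hypothetical, not code from the episode:

```python
# A minimal sketch of using unittest.mock with dependency injection.
# The "exchange rate service" here is a hypothetical example.
from unittest import mock


def convert(amount, fetch_rate):
    """Code under test: depends on an injected rate-fetching callable."""
    return amount * fetch_rate("USD", "EUR")


def test_convert_with_a_mock_rate_service():
    # A Mock stands in for the real external service.
    fake_fetch = mock.Mock(return_value=0.5)
    assert convert(100, fake_fetch) == 50.0
    # Verify how the dependency was used.
    fake_fetch.assert_called_once_with("USD", "EUR")
```

Because the dependency is injected rather than patched in place, the test needs no knowledge of where the real service lives, which is one way to avoid the "mock hell" mentioned above.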
Sep 21, 2020 • 9min

131: Test Smarter, Not Harder

Some people avoid writing tests. Some drudge through it painfully. There is a better way. In this episode, I'm going to share some advice from Luke Plant on how to "Test Smarter, Not Harder".

Sponsored By:
Talk Python Training: Online video courses for Python developers
Datadog: Modern monitoring & security. See inside any stack, any app, at any scale, anywhere. Visit testandcode.com/datadog to get started.
Links:
Test smarter, not harder - lukeplant.me.uk - The original article by Luke
★ Support this podcast on Patreon ★
Sep 13, 2020 • 36min

130: virtualenv activation prompt consistency across shells - an open source dev and test adventure - Brian Skinn

virtualenv supports six shells: bash, csh, fish, xonsh, cmd, and posh. Each handles prompts slightly differently. Although virtualenv's custom prompt behavior should be the same across shells, Brian Skinn noticed inconsistencies and set out to fix them. That was the start of an adventure in open source collaboration, shell prompt internals, difficult test problems, and continuous integration quirks.

Brian initially noticed that on Windows cmd, a space was added between a prefix defined by --prompt and the rest of the prompt, whereas on bash no space was added. For reference, there were/are three nominal virtualenv prompt modification behaviors, all of which apply to the prompt changes made at the time of virtualenv activation:

If the environment variable VIRTUAL_ENV_DISABLE_PROMPT is defined and non-empty at activation time, do not modify the prompt at all. Otherwise:
If the --prompt argument was supplied at creation time, use that argument as the prefix to apply to the prompt; or,
If the --prompt argument was not supplied at creation time, use the default prefix of "(<env folder name>) " (the environment folder name surrounded by parentheses, with a trailing space after the closing paren).

Special Guest: Brian Skinn.
Sponsored By:
Talk Python Training: Online video courses for Python developers
HoneyBadger: When bad things happen, it's nice to know that Honeybadger has your back. 30% off for the first 6 months when you mention the Test & Code Podcast when signing up.
PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE22
Links:
virtualenv
Initial issue that started the adventure
Final PR
pent: pent Extracts Numerical Text - mini-language-driven parser for structured numerical data
Lightning talk on pent
★ Support this podcast on Patreon ★
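For illustration only, here is a rough Python sketch of the three activation-time behaviors described above. It is not virtualenv's actual activation-script code, and the function name build_prompt_prefix is hypothetical:

```python
# Illustrative sketch of the three prompt-modification behaviors described
# above. Not virtualenv's real activation code; build_prompt_prefix is a
# hypothetical name used only for this example.
import os
from pathlib import Path
from typing import Optional


def build_prompt_prefix(env_dir: str, custom_prompt: Optional[str]) -> str:
    """Return the prefix to prepend to the shell prompt, or '' for no change."""
    # Behavior 1: the opt-out environment variable disables prompt changes.
    if os.environ.get("VIRTUAL_ENV_DISABLE_PROMPT"):
        return ""
    # Behavior 2: a --prompt value supplied at creation time is used as the prefix.
    if custom_prompt is not None:
        return custom_prompt
    # Behavior 3: default to "(<env folder name>) " with a trailing space.
    return f"({Path(env_dir).name}) "


print(repr(build_prompt_prefix("/home/user/.venvs/proj", None)))  # '(proj) '
```

The inconsistency Brian chased was essentially about whether the shells applied behavior 2 verbatim or quietly inserted an extra space.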
Sep 7, 2020 • 42min

129: How to Test Anything - David Lord

I asked people on Twitter to fill in "How do I test _____?" to find out what people want to know how to test. There were lots of responses. David Lord agreed to answer them with me. In the process, we come up with lots of great general advice on how to test just about anything.

Specific questions people asked:

What makes a good test?
How do you test web app performance?
How do you test cookie cutter templates?
How do I test my test framework?
How do I test permission management?
How do I test SQLAlchemy models and pydantic schemas in a FastAPI app?
How do I test warehouse ETL code?
How do I test and mock GPIO pins on hardware for code running MicroPython on a device?
How do I test PyQt apps?
How do I test web scrapers?
Is it best practice to put static HTML in your test directory, or just snippets stored in string variables?
What's the best way to test server-client API contracts?
How do I test a monitoring tool?

We also talk about:

What is the Flask testing philosophy?
What do Flask tests look like?
Flask and Pallets using pytest
Code coverage

Some of the resulting testing strategies (a short pytest sketch illustrating two of these follows the links below):

Set up some preconditions. Run the function. Get the result.
Don't test external services. Do test external service failures.
Don't test the frameworks you are using. Do test your use of a framework.
Use open source projects to learn how something similar to your project tests things.
Focus on your code. Focus on testing your new code.
Try to architect your application such that actual GUI testing is minimal.
Split up a large problem into smaller parts that are easier to test.
Nail down as many parts as you can.

Special Guest: David Lord.
Sponsored By:
Datadog: Modern monitoring & security. See inside any stack, any app, at any scale, anywhere. Visit testandcode.com/datadog to get started.
Talk Python Training: Online video courses for Python developers
★ Support this podcast on Patreon ★
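Here is the sketch mentioned above: a hedged pytest example of two of the strategies, testing your use of an external service through a fake and testing how your code handles that service failing. The weather_summary function and its fetch parameter are hypothetical, not code discussed on the show:

```python
# A hedged sketch of two strategies from this episode: don't call the real
# external service in tests, and do test how your code handles its failures.
# weather_summary and its injected fetch callable are hypothetical examples.


def weather_summary(city, fetch):
    """Code under test: formats data returned by an injected fetch callable."""
    try:
        data = fetch(city)
    except ConnectionError:
        return "weather unavailable"
    return f"{city}: {data['temp_c']}C"


def test_formats_successful_response():
    def fake_fetch(city):
        # Precondition: a fake service stands in for the real one.
        return {"temp_c": 21}

    # Run the function, check the result.
    assert weather_summary("Portland", fake_fetch) == "Portland: 21C"


def test_reports_service_failure():
    def failing_fetch(city):
        raise ConnectionError("service down")

    assert weather_summary("Portland", failing_fetch) == "weather unavailable"
```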
Aug 28, 2020 • 18min

128: pytest-randomly - Adam Johnson

Software tests should be order-independent. That means you should be able to run them in any order, or run them in isolation, and get the same result. However, system state often gets in the way, and order dependence can creep into a test suite. One way to fight order dependence is to randomize test order, and with pytest we recommend the plugin pytest-randomly to do that for you. The developer who started pytest-randomly and continues to support it is Adam Johnson, who joins us today to discuss pytest-randomly and another plugin he wrote, pytest-reverse.

Special Guest: Adam Johnson.
Sponsored By:
HoneyBadger: When bad things happen, it's nice to know that Honeybadger has your back. 30% off for the first 6 months when you mention the Test & Code Podcast when signing up.
PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE22
Talk Python Training: Online video courses for Python developers
Links:
pytest-randomly: pytest plugin to randomly order tests and control random.seed
pytest-reverse: pytest plugin to reverse test order
Empirically revisiting the test independence assumption
pytest-xdist
factory_boy
Faker
NumPy
Hyrum's Law
★ Support this podcast on Patreon ★
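As a small illustration of the problem pytest-randomly helps expose, here is a hypothetical test pair (not from the episode) that passes in declaration order but fails under a randomized order because of shared module state:

```python
# A hedged sketch of hidden order dependence that randomizing test order
# (e.g. with pytest-randomly) tends to expose. The cart example is hypothetical.
CART = []  # module-level state shared between tests


def add_item(item):
    CART.append(item)
    return len(CART)


def test_add_first_item():
    assert add_item("book") == 1


def test_add_second_item():
    # Passes only if test_add_first_item ran first and left state behind.
    # Under a randomized order this fails, revealing the shared-state dependence.
    assert add_item("pen") == 2
```

Because the plugin reports the seed it used for a run, a failing ordering can be reproduced while you fix the shared state.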
Aug 24, 2020 • 42min

127: WFH, WTF? - Tips and Tricks for Working From Home - Reuven Lerner & Julian Sequeira

Many people who are not used to working from home are now doing it, or at least working from home more than they ever did before. That's definitely true for me. Even though I've been working from home since March, I wanted some tips from people who have been doing it longer. Julian Sequeira, of PyBites fame, has been working from home for about a year. Reuven Lerner, an amazing Python trainer, has been working from home for much longer. We originally had a big list of WFH topics, but we had so much fun with the tips and tricks that they became pretty much the whole episode. There are lots of great tips and tricks, so I'm glad we focused on that.

Special Guests: Julian Sequeira and Reuven Lerner.
Sponsored By:
Talk Python Training: Online video courses for Python developers
Datadog: Modern monitoring & security. See inside any stack, any app, at any scale, anywhere. Visit testandcode.com/datadog to get started.
Links:
PyBites - Julian's site for teaching Python
Teaching Python and data science around the world - Reuven Lerner
Bonbon - Wikipedia
Test & Code Mailing List - Join for your chance to win a free course from Talk Python Training. One course given away every week for 6 weeks.
★ Support this podcast on Patreon ★
Aug 17, 2020 • 32min

126: Data Science and Software Engineering Practices (and Fizz Buzz) - Joel Grus

Researchers and others using data science and software need to follow solid software engineering practices. This is a message that Joel Grus has been promoting for some time. Joel joins the show this week to talk about data science, software engineering, and even Fizz Buzz.

Topics include:

software engineering practices and data science
difficulties with Jupyter notebooks
code reviews on experiment code
unit tests on experiment code
finding bugs before doing experiments
tests for data pipelines
tests for deep learning models
showing researchers the value of tests by showing the bugs found that wouldn't have been found without them
the "Data Science from Scratch" book
showing testing while teaching data science
the "Ten Essays on Fizz Buzz" book: meditations on Python, mathematics, science, engineering, and design
testing Fizz Buzz: different algorithms and solutions to an age-old interview question
if not Fizz Buzz, what makes a decent coding interview question
pytest
hypothesis
math requirements for data science

Special Guest: Joel Grus.
Sponsored By:
PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE22
Links:
Ten Essays on Fizz Buzz (with discount) by Joel Grus
I don't like notebooks. (presentation)
★ Support this podcast on Patreon ★
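Since testing Fizz Buzz comes up in the episode, here is a generic pytest sketch of a straightforward fizzbuzz implementation and a parametrized test for it. It is an illustration only, not Joel's code from the book:

```python
# A small, self-contained illustration of testing Fizz Buzz with pytest.
# This is a generic sketch, not the implementation from "Ten Essays on Fizz Buzz".
import pytest


def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)


@pytest.mark.parametrize(
    "n, expected",
    [(1, "1"), (3, "fizz"), (5, "buzz"), (7, "7"), (15, "fizzbuzz")],
)
def test_fizzbuzz(n, expected):
    assert fizzbuzz(n) == expected
```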
Aug 7, 2020 • 1h

125: pytest 6 - Anthony Sottile

pytest 6 is out. Specifically, 6.0.1, as of July 31. And there's lots to be excited about. Anthony Sottile joins the show to discuss features, improvements, documentation updates, and more. (Full release notes / changelog)

Some of what we talk about:

How to update (at least, how I do it):
Run your test suite with 5.4.3, or whatever the last version you were using.
Update to 6.
Run again. Same output? Probably good.
If there are any warnings, maybe fix those. You can also run with pytest -W error to turn warnings into errors.
Then find out all the cool stuff you can do now.

New Features:
pytest now supports pyproject.toml files for configuration. But remember, TOML syntax is different from ini files; mostly, quotes are needed.
pytest now includes inline type annotations and exposes them to user programs. Most of the user-facing API is covered, as well as internal code.
New command-line flags --no-header and --no-summary.
A warning is now shown when an unknown key is read from a config INI file. The --strict-config flag has been added to treat these warnings as errors.
The new required_plugins configuration option allows the user to specify a list of plugins, including version information, that are required for pytest to run. An error is raised if any required plugins are not found when running pytest.

Improvements:
You can now pipe output to things like less and head that close the pipe passed to them. Thank you!!!
Improved precision of test duration measurement. Use --durations=10 -vv to capture and show durations.
Rich comparison for dataclasses and attrs classes is now recursive.
pytest --version now displays just the pytest version, while pytest --version --version displays more verbose information including plugins.
--junitxml now includes the exception cause in the message XML attribute for failures during setup and teardown.

Improved Documentation:
A note about --strict and --strict-markers, and the preference for the latter.
An explanation of indirect parametrization and markers for fixtures.

Also: Bug Fixes, Deprecations, and Trivial/Internal Changes.

Breaking Changes you might need to care about before upgrading:
PytestDeprecationWarnings are now errors by default. Check the deprecations and removals page if you are curious.
-k and -m internals were rewritten to stop using eval(); this results in a few slight changes but overall makes them much more consistent.
testdir.run().parseoutcomes() now always returns the parsed nouns in plural form. I'd say that's an improvement.

Special Guest: Anthony Sottile.
Sponsored By:
Datadog: Modern monitoring & security. See inside any stack, any app, at any scale, anywhere. Visit testandcode.com/datadog to get started.
Links:
pytest Changelog / Release Notes
Deprecations and Removals - pytest documentation
★ Support this podcast on Patreon ★
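One of the items above is that pytest 6 ships inline type annotations. As a rough, generic illustration (not code from the episode), a type checker such as mypy can now verify test code that annotates pytest's built-in fixtures:

```python
# A generic sketch of type-annotated test code that pytest 6's bundled type
# information lets a type checker verify. Not code from the episode.
import pathlib


def test_config_round_trip(tmp_path: pathlib.Path) -> None:
    # tmp_path is pytest's built-in temporary-directory fixture, a pathlib.Path,
    # so the attribute access below can be checked statically.
    cfg = tmp_path / "settings.ini"
    cfg.write_text("[app]\ndebug = true\n")
    assert cfg.read_text().startswith("[app]")
```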
Aug 3, 2020 • 44min

124: pip dependency resolver changes

pip is the package installer for Python. Often when you run pip, especially for the first time in a new virtual environment, you will see something like:

WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

And you should, because 20.2 has a new dependency resolver. Get in the habit, until October, of replacing pip install with pip install --use-feature=2020-resolver. This flag is new in the 20.2 release.

The new pip dependency resolver is the result of a lot of work. Five of the people involved with that work join the show today: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung.

We talk about:

pip dependency resolver changes
user experience research and testing
crafting good error messages
efforts to improve the test suite
testing pip with pytest
some of the difficulties with testing pip
working with a team on a large project
working with a large code base
bringing new developers into a large project

Special Guests: Bernard Tyers, Nicole Harris, Paul Moore, Pradyun Gedam, and Tzu-ping Chung.
Sponsored By:
PyCharm Professional: Try PyCharm Pro for 4 months and learn how PyCharm will save you time. Promo Code: TESTANDCODE22
Links:
Changelog - pip 20.2 documentation - including --use-feature=2020-resolver
pypa/pip: The Python package installer - GitHub repo
Testing pip - documentation
pip - The Python Package Installer - pip 20.2 documentation
Changes to the pip dependency resolver in 20.2
★ Support this podcast on Patreon ★
