
Weekly Dev Tips

Latest episodes

Jan 8, 2018 • 9min

Maintain Legacy Code with New Code

Many developers work in legacy codebases, which are notoriously difficult to test and maintain. One way to address these issues is to maximize the use of new, better-designed constructs in the code you add to the system.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Legacy code can be difficult to work with. Michael Feathers defines legacy code in his book, Working Effectively with Legacy Code, as "code without tests", and frequently it's true that legacy codebases are difficult to test. They're often tightly coupled, overly complex, and weren't written with a modern understanding of good design principles in mind. Whether you're working with a legacy codebase you've inherited, or one you wrote yourself over some period of time, you've probably experienced the pain involved in trying to change a large, complex system that carries a fair bit of technical debt and lacks the safety net of tests.

There are several common approaches to working with such codebases. One simple approach, which can be appropriate in many scenarios, is to do as little as possible to the code. The business is running on it, none of the original authors are still with the company, nobody understands it, so just keep your distance and hope it doesn't break on your watch. Maybe in the meantime someone is working on a replacement, but you have no idea if or when that might ever ship, and anyway you have other things to work on that are less likely to keep you at work late or bring you in on the weekends. I don't have any solid numbers on how much software falls into this category, but I suspect it's a lot.

The second approach is also common, and usually takes place when the first one isn't an option because business requirements won't wait for a rewrite of the current system. In this case, developers must spend time working with the legacy system in order to add or change functionality. Because it's big, complex, and probably untestable, changes and deployments are stressful and error-prone, and a lot of manual testing effort is required. Regression bugs are common, since tight coupling within the system means changes in one area affect other areas in often inexplicable and unpredictable ways. This is where I think the largest amount of maintenance software development takes place, since, let's face it, most software running today was written without tests but still needs to be updated to meet changing business needs.

A third approach some forward-thinking companies take, understanding the risks and costs involved in full application rewrites, is to invest in refactoring the legacy system to improve its quality. This can take the form of dedicated effort focused on refactoring, as opposed to adding features or fixing bugs. Or it can be a commitment to follow the Boy Scout Rule, such that every new change to the system also improves the system's quality by improving its design (and, ideally, adding tests). Some initial steps teams often take when adopting this approach are to ensure source control is being used effectively and to set up a continuous integration server if none is in place. An initial assessment using static analysis tools can establish baseline quality metrics for the application, and the build server can track these metrics to help the team measure progress over time.
This approach works well for systems that are mission-critical and aren't yet so far gone into technical debt that it's better to just declare "technical bankruptcy" and rewrite them. I've had success working with several companies using this approach - let me know if you have questions about how to do it with your application.

Now let's stop for a moment and think about why working with legacy code is so expensive and stressful. Yes, there's the lack of tests, which limits our confidence that changes to the code don't break things unintentionally, but that's based on a root assumption: that we're changing existing code, and therefore other code that depends on it might break unexpectedly. What if we break down that assumption, and instead minimize the amount of existing code we touch in favor of writing new code? Yes, there's still some risk that the changes required to incorporate our new code might cause problems, but outside of that, we're able to operate in the liberating zone of green field development, at least on a small scale.

When I say write new code, I don't mean go into a method, add a new if statement or else clause, and start writing new statements in that method. That's the traditional approach that tends to increase complexity and technical debt. What I'm proposing instead is that you write new classes. You put new functionality into types and methods that didn't exist before. Since you're writing brand new classes, you know that no other code in the system currently has any dependencies on the code you're writing. You're also free to unit test your new classes and methods, since you're able to write them in a way that ensures they're loosely coupled and follow SOLID principles.

So, what does this look like in practice? Frequently, the first step will be some kind of refactoring in order to accommodate the use of a new class. Let's say you've identified a big, complex method that currently does the work you need to change, and in a certain case you need it to do something different. The de facto approach would be to dive into the nested conditional statements, find the right place to add an else clause, and add the new behavior there. The alternative approach is to put the new behavior into a new method, ideally in a new type, so that it's completely separate from any existing structures. A very basic first step could be to do exactly what you were going to do, but instead of putting the actual code into the else clause, instantiate your new type and call your new method there instead, passing any parameters it might require. This works well if what you're adding is fairly complex, since now you have a much easier way to test that complex code rather than going through an already big and complex method to reach it.

Depending on the conditions that dictate when your new behavior should run, you might be able to avoid using the existing big, complex method at all. Let's say the existing method is called BigMethod. Move BigMethod into a new class called Original, and wherever you had code calling BigMethod, change it to call new Original().BigMethod(). This is one of those cases where you're forced to change the existing code in order to prepare it for your new code, so you'll want to be very careful and do a lot of testing. If there are a lot of global or static dependencies running through BigMethod, this approach might not work well, so keep that in mind.
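To make these steps concrete, here's a minimal C# sketch of the full sequence this episode describes: extracting BigMethod into Original, adding a parallel BetterDesign implementation, and choosing between them. The Original, BetterDesign, and BigMethod names come from the episode; the IOrderProcessor interface, the Order type, and the IsRushOrder condition are hypothetical details added for illustration.

```csharp
// Hypothetical supporting types for this sketch.
public class Order
{
    public bool IsRushOrder { get; set; }
}

public interface IOrderProcessor
{
    decimal BigMethod(Order order);
}

// Holds the legacy logic, moved out of its original home unchanged.
public class Original : IOrderProcessor
{
    public decimal BigMethod(Order order)
    {
        // ...the big, complex legacy implementation, copied as-is...
        return 0m;
    }
}

// New, small, testable class covering only the new requirement.
public class BetterDesign : IOrderProcessor
{
    public decimal BigMethod(Order order)
    {
        // ...only what the new requirement needs...
        return 0m;
    }
}

// Once the Original-vs-BetterDesign decision shows up in more than a few
// places, centralize it in a simple factory.
public static class OrderProcessorFactory
{
    public static IOrderProcessor Create(Order order) =>
        order.IsRushOrder ? (IOrderProcessor)new BetterDesign() : new Original();
}
```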
However, assuming you're able to pull BigMethod into its own class that you then call as needed, the next step is to create another new class for your new implementation. We'll call the new class BetterDesign, and we'll keep the method named BigMethod for now so that, if we want, we can use polymorphism via inheritance or an interface. Copy BigMethod from the Original class to your BetterDesign class and modify it so it does only what your new requirements need. It should be much smaller and simpler than what's in Original. Now, find all the places where you're instantiating Original and add a conditional statement there so you'll instantiate BetterDesign instead, in the appropriate circumstances. At this point you should be able to add the behavior you need, in a new and testable class, without breaking anything that previously depended on BigMethod. If you have more than a few places where you need to decide whether to create Original or BetterDesign, look at using the Factory design pattern.

By adjusting the way we maintain legacy systems to maximize how much new behavior we add through new classes and methods, we can minimize the likelihood of introducing regressions. This improves the code quality over time, increases team productivity, and makes the code more enjoyable to work with. If you have experience working with legacy code, please share it in this show's comments at www.weeklydevtips.com/015.

Show Resources and Links

Working Effectively with Legacy Code
Technical Debt
Refactoring
Boy Scout Rule
SOLID Principles
Dec 11, 2017 • 5min

Smarter Enumerations

Enumerations are a very primitive type that is frequently overused. In many scenarios, actual objects are a better choice.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Enums are an extremely common construct in applications. They provide a simple way to give labels to numeric values. They're especially useful for efficiently capturing a set of flag values by using binary AND and OR operations on values set to powers of 2. However, as primitive value types, they can't attach behavior to the values they represent, and this often results in a particular flavor of the primitive obsession code smell that I discussed in episode 12.

One of the first signs that you're stretching the limits of an enum in C# is finding that you want to display the names associated with the values to the user, and some of the names should have spaces in them when displayed. For example, you might have a Roles enum that includes a SalesRepresentative name. If you display that in a dropdown list in the UI, you'll want a space between Sales and Representative. There are a few hacky ways to achieve this. The first is to parse the name of the enum and insert spaces wherever you find capital letters in the middle of the string. Another common one is to add an attribute that contains the user-friendly version of the enum's name and, if this attribute is present, use it when displaying the enum's name. Both of these can work, but they're not ideal. They both require more code outside of the enum, making it harder to work with and scattering logic related to the enum into other types.

While we're on the topic of displaying enum values to end users, another fairly common requirement in this area is to control which enum options are displayed to the user. Once again, you can use attributes to control this behavior, or maybe even some kind of naming convention for the enum labels (perhaps add a Visible or Hidden suffix and then strip off the suffix when displaying the name). As you can guess, both of these approaches just lead you further down the path of cluttering up your non-enum code to accommodate the lack of behavior within the enums themselves. What you really need is a better abstraction.

Enumeration Classes

The pattern I favor is the SmartEnum class, also known as the Strongly Typed Enum Class. With this pattern, you start with a class definition that includes the basic capabilities of an enum type, such as having a simple name and value. Then, you define the set of available options as static properties on the class. For example, if you were creating a Roles enumeration class, you would add static properties on the Roles class for things like Administrator or SalesRepresentative. These static properties would be of type Roles (or Role, as you prefer). Working with these static instances mirrors working with enums. You can simply type Roles (dot) and your IDE will show you the set of static properties that represent the possible options, just the same as an enum. Since you're representing your options as a class, you now have the ability to add any behavior you require. If you need to display the value in a certain way, you can add a property or method to do so. If you need to add metadata that will determine when or whether a particular option is visible or available to a given user, you can add this as well.
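Here's a minimal, hand-rolled sketch of such an enumeration class, assuming illustrative Role options and DisplayName/IsVisible members (the Ardalis.SmartEnum package mentioned below provides a richer, reusable base type):

```csharp
using System.Collections.Generic;

public class Role
{
    // The fixed set of options, analogous to an enum's members.
    public static readonly Role Administrator = new Role(1, "Administrator", isVisible: true);
    public static readonly Role SalesRepresentative = new Role(2, "Sales Representative", isVisible: true);
    public static readonly Role SystemAccount = new Role(3, "System Account", isVisible: false);

    public int Value { get; }
    public string DisplayName { get; }  // user-friendly name, spaces included
    public bool IsVisible { get; }      // metadata a plain enum can't carry

    private Role(int value, string displayName, bool isVisible)
    {
        Value = value;
        DisplayName = displayName;
        IsVisible = isVisible;
    }

    public static IEnumerable<Role> List() =>
        new[] { Administrator, SalesRepresentative, SystemAccount };

    public override string ToString() => DisplayName;
}

// Example usage (with System.Linq): populate a dropdown with only the
// options that should be shown to the user.
// var options = Role.List().Where(r => r.IsVisible);
```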
When you do, the business logic you're adding is encapsulated within the enumeration class, rather than spread throughout your user interface code. If you're looking to get started with this approach, I've created a GitHub repo and NuGet package at Ardalis.SmartEnum. I've also written several articles over the years on this topic that I'll add to the show notes for this episode, which you'll find at weeklydevtips.com/014.

Show Resources and Links

SmartEnum (GitHub)
SmartEnum (NuGet)
Listing Strongly Typed Enum Options in C#
Enum Alternatives in C#
Dec 4, 2017 • 7min

Be Thankful and Show Gratitude

It's highly unlikely that you're a software developer who works in a vacuum. Here are a few tips for showing your gratitude to the people, companies, products, and tools that help you to be successful.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Last year around Thanksgiving I published an article about showing gratitude as a software developer. I'll link to it in the show notes, and I encourage you to read it if you find this topic interesting. The topic of showing gratitude and being thankful, specifically as software developers, remains relevant today, so I thought it worth revisiting.

Since you're listening to this podcast, I'm going to go out on a limb and assume you're a software developer. A programmer. A coder. Maybe that's not your title, and maybe it's not even your main responsibility, but you've built software. In building that software, regardless of your platform or language of choice, you've almost certainly leveraged a wide variety of resources that helped you along the way. You may not even realize, or perhaps you've taken for granted, some of the things that helped you. As the saying goes, sometimes you don't know how much you miss something until it's gone. Many of the most valuable resources we have available to us are provided freely by others. If those others feel unappreciated, they may take their passion and energy elsewhere, so don't assume that just because someone isn't charging you money for their efforts, they wouldn't value the non-monetary things you can do in return.

Let's consider a few simple examples to highlight this point. One is StackOverflow. You've probably used it, since it's the de facto standard question and answer site for software development. When you find that answer you were looking for, try to give it an upvote. And while you're at it, vote the question up, too, since someone had to ask it in order for you to get the answer you needed.

Some publications, like Medium, provide a way for you to show appreciation by liking or clapping for an article. Be sure to show your support for content you find valuable by taking advantage of these features. In addition, you can share content you find interesting on social media with a quick tweet or post on Facebook or your own blog (thus producing some additional content of your own).

Of course, for a podcast like this one, leaving a review in iTunes or Stitcher is highly appreciated (assuming it's a good review). Reviews help your favorite podcasts get discovered by more people, and also encourage publishers to keep producing content. It can be difficult sometimes to record content in a vacuum and send it out to the Internet, not knowing who is actually listening to it, or how they're feeling about it. It's very different from public speaking because of this lack of feedback. Reviews, as well as comments on individual show pages, are one way you can let publishers know they're being heard and appreciated.

You're probably using some open source tools as part of your development. Most open source projects I work with today are hosted on GitHub. If you find a particular project helpful or interesting, see if you can help support it. In GitHub, starred repositories are easier for you to find later. In addition, from their docs, "Starring a repository also shows appreciation to the repository maintainer for their work. Many of GitHub's repository rankings depend on the number of stars a repository has. For example, repositories can be sorted and searched based on their star count." Of course, you can also take to social media or any of the other things I mentioned to show support, as well as offering to help by adding issues, fixing issues via pull requests, or offering to help document the project. Often end users can provide extremely valuable documentation, since the maintainer of the project may not realize the ways in which many developers use their library or tools.

By showing appreciation for the tools and resources you use to be successful, you're doing a few things. You're helping to ensure these (generally free) resources continue to exist. This is obviously good for you. You're also setting an example for others, who may do the same, which magnifies your own contributions to further help support these resources. Again, good for you. You're also potentially developing positive relationships within the developer community. Who knows which tweet, comment, or pull request of yours that expresses gratitude will lead to a connection that culminates in a new contract or job opportunity. People get invited to help support projects they support. People want to work with supportive, helpful people. Aside from "being nice" or it being "the right thing to do", actively showing gratitude within your professional community costs you nearly nothing but can yield tangible benefits in your career.

If you found this particular episode helpful, please consider leaving a comment on the show notes page. If there's a way you like to show gratitude, or if you're someone who offers their time for free and there's a way you like to get positive encouragement from your users or audience, please share it.

Additional Ways To Show Gratitude (add yours in the comments):

Follow the person/project on Twitter
Like their Facebook page
Follow the individual/project on GitHub

Show Resources and Links

Be a Thankful Developer (Medium - revised 2017)
Be a Thankful Developer (original 2016 ardalis.com version)
About GitHub Stars
Nov 20, 2017 • 7min

Primitive Obsession

Primitive Obsession describes code in which the design relies too heavily on primitive types, rather than solution-specific abstractions. It often results in more verbose code with more duplication of logic, since logic cannot be embedded with the primitive types used.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode!

Show Notes / Transcript

Primitives refer to built-in types, like bool, int, string, etc. The primitive obsession code smell refers to overuse of primitive types to represent concepts that aren't a perfect fit, because the primitive supports values that don't make sense for the element they're representing. For example, it's not unusual to use a string to represent a ZIP Code value or a Social Security Number. Many systems will use an int to represent a value that cannot be negative, such as the number of items in a shopping basket. In such a case, if the system even bothers to enforce the invariant stating that shopping basket quantity must be positive, it must do so somewhere other than in the type representing the quantity. Ideally, the shopping basket or basket item type would enforce this, but in many designs the shopping basket item quantity is simply a property that can be set to anything, in which case any service, UI call, etc. that manipulates a basket item must first ensure it is being set properly. This can result in a great deal of duplicate code, with the usual technical debt that arises when you violate the Don't Repeat Yourself principle. In some places, someone will forget to perform the checks, or they'll perform them differently, and bugs will creep in. Or the rules will be updated, but not everywhere, which results in the same inconsistent behavior. When you work with too primitive an abstraction, you end up having to code around this deficiency every time you work with the type.

Encapsulation

I've talked about encapsulation before - it's obviously an important concept in software design. By choosing to represent a concept with a primitive, you give up the ability to leverage encapsulation when working with this concept in your solution. The biggest problem with primitive obsession is that it results in a lot of behavior being added around the types in question, rather than encapsulated within them. Instead of having to check, probably in many places, that Quantity is positive or that a string represents a valid ZIP Code, it's far better to create a type to represent the concept in question, along with its rules.

Such types should typically be immutable value objects that cannot be created in an invalid state (and thus need not be validated where they are passed in as parameters). It's useful to have easy ways to cast primitives to and from these value objects, but this should be done only at the edges of the application (user input/output, persistence). Try to use the value object as much as possible within your actual business logic or domain model, rather than a primitive representation of the type.

You can make working with your new type about as easy as working with the primitive it's replacing by making sure you override its ToString method. You can also handle comparisons and equality, and configure implicit and explicit casting operators. Jimmy Bogard wrote an article about 10 years ago that describes how to do exactly this for a simple ZIP Code type in C# - there's a link in the show notes.
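Here's a minimal sketch of such a value object, assuming a simple five-digit US ZIP Code rule for illustration (Jimmy Bogard's linked article shows a fuller version):

```csharp
using System;
using System.Text.RegularExpressions;

public sealed class ZipCode : IEquatable<ZipCode>
{
    private readonly string _value;

    public ZipCode(string value)
    {
        // The invariant lives here, so a ZipCode instance is always valid.
        if (value == null || !Regex.IsMatch(value, @"^\d{5}$"))
            throw new ArgumentException("ZIP Code must be exactly five digits.", nameof(value));
        _value = value;
    }

    public override string ToString() => _value;
    public bool Equals(ZipCode other) => other != null && _value == other._value;
    public override bool Equals(object obj) => Equals(obj as ZipCode);
    public override int GetHashCode() => _value.GetHashCode();

    // Casting helpers for the edges of the application (input, persistence).
    public static explicit operator ZipCode(string value) => new ZipCode(value);
    public static implicit operator string(ZipCode zipCode) => zipCode._value;
}
```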
Yes, you'll end up with a dozen or so lines of code in your ZIP Code class instead of just using a string, but any logic that relates to ZIP Codes will also live in this class, rather than being scattered throughout your application.

When you represent a concept in your system with a primitive type, you're asserting that the concept can be represented by any value that type can hold. If you expose method signatures that accept primitive values, the only clue you might offer to clients of that method could be the names of the parameters. Invalid values might not be immediately discovered, or if they are, the related errors might be buried within the behavior of the method, rather than immediately apparent. If instead you use a separate value object to represent a concept, a method that accepts parameters using this type will be much easier for clients to work with. If there are exceptions related to type conversion, they will be discovered immediately when the client attempts to create an instance of the value object, and this behavior will be consistent everywhere, unlike different methods that may or may not perform validity checks on their inputs.

You can learn more about the primitive obsession code smell, and literally dozens of others, along with how to refactor them, in my Pluralsight course, Refactoring Fundamentals.

Show Resources and Links

Encapsulation
Don't Repeat Yourself
Refactoring for C# Developers
Refactoring Fundamentals
Dealing with Primitive Obsession - Jimmy Bogard
Design Smell: Primitive Obsession - Mark Seemann
Nov 13, 2017 • 6min

Encapsulating Collection Properties

Encapsulation is a key aspect of object-oriented programming and software engineering. Unfortunately, many systems fail to properly encapsulate collection properties, resulting in reduced quality.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Encapsulation essentially means hiding the inner workings of something and exposing a limited public interface. It helps promote more modular code that is more reliable, since verifying the public interface's behavior provides a high degree of confidence that the object will interact properly with collaborators in a system. One area in which encapsulation often isn't properly followed is collection properties.

Collection Properties

Any time you have an object that has a collection of related or child objects, you may find this represented as a collection property. If you're using .NET and Entity Framework, this property is often referred to as a navigation property. Client code can fetch the parent object from persistence, specify to EF that it should load the related entities, and then navigate from the parent object to its related objects by iterating over an exposed collection property. For example, a Customer object might have a set of Orders they've placed previously. This could be represented most simply by having a public List property on the Customer class. This property must expose a getter, and in many designs it exposes a public setter as well. In that case, any code in the system would be able to set a Customer's order collection to any list of Orders, or to null. This could obviously result in undesired behavior. Some developers might offer token resistance to this total lack of encapsulation by removing the setter (or making it private), but the damage is done as long as the property exposes a List data type, with all of its mutable functionality. This kind of design exposes too much functionality from the Customer, since it inherently allows any client code that works with a Customer to:

Directly add or remove an order to/from the Customer
Clear all orders from the Customer

In these cases, the Customer object in question has no way of controlling, preventing, or even detecting these changes to its Orders collection. Why is this important? Well, there is probably a decent amount of workflow involved in placing a new order for a customer. It's probably not sufficient to simply add a new order without any additional work. Now, you can argue that somewhere there's a service that does all the required work, but how does the object model enforce the use of said service? If any client code can instantiate an order and add it to a customer, how is the design of the system leading developers toward doing the right thing (using a service, in this case)? On the other hand, if there is no way to directly add an order to a customer, developers will probably quickly discover that there is a service for this purpose, and it's more likely that this service will provide the only way of adding new orders to customers. In most cases, there are only certain operations on related collections that an object should expose, and these it probably wants to have direct control over. If Customer collaborators shouldn't be able to directly delete all of a customer's orders, don't expose the collection as a List. Instead, expose a ReadOnlyCollection or an IEnumerable.
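A minimal sketch of what that looks like for the Customer example above (the AddOrder method and its rules are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;

public class Order { }

public class Customer
{
    private readonly List<Order> _orders = new List<Order>();

    // Collaborators can enumerate orders but can't add, remove, or clear.
    public IReadOnlyCollection<Order> Orders => _orders.AsReadOnly();

    // The only path for adding an order, so the Customer (or a service
    // calling it) controls whatever workflow must accompany the change.
    public void AddOrder(Order order)
    {
        if (order == null) throw new ArgumentNullException(nameof(order));
        _orders.Add(order);
    }
}
```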
Both EF 6 and EF Core support properly encapsulating collection navigation properties, so don't feel like you have to expose List types in order to keep EF happy. Check out the links in the show notes at WeeklyDevTips.com/011 to see how to configure EF to support proper collection encapsulation.

Show Resources and Links

Encapsulated Collections in EF Core
Exposing Private Collection Properties to Entity Framework
Encapsulation
Exposing Collection Properties
Nov 6, 2017 • 4min

Pain Driven Development

Pain Driven Development, or PDD, is the practice of writing software in such a way that you only "fix" problems when they are causing pain, rather than trying to preempt every possible issue.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode!

Show Notes / Transcript

Many of you have probably heard of various "DD" approaches to writing software. There's TDD, or Test Driven Development. There's BDD, for Behavior Driven Development. In this tip, I want to introduce you to another one, PDD: Pain Driven Development.

Pain Driven Development

Software development is full of principles, patterns, and best practices. It can be tempting, especially when you've recently learned about a new way of doing things, to want to apply it widely to maximize its benefits. Some time ago, when XML was a new thing, for instance, Microsoft went all-in with it. They decided to "XML ALL THE THINGS", and in some places this was great. And in many cases, not so much. In my own experience, I find this is often the case when I'm learning a new design pattern or trying to fully understand a particular principle. It can be easy, when you're constantly on the lookout for applications of recent knowledge, to find excuses to apply these techniques.

One particular set of principles that many object-oriented programmers know is the SOLID principles. I have a course on SOLID on Pluralsight that I encourage you to check out, which covers these principles in depth. One thing worth remembering, though, is that you shouldn't, and honestly can't, apply all of the principles to every aspect of your software. You need to pick your battles. You need to actually ship working software. You don't know when you begin a project where extension is going to be necessary, so you can't anticipate every way in which you might support the Open-Closed Principle for every class or method in your program. Build and ship working software, and let feedback and new requirements guide you when it comes to applying iterative design improvements to your code. When you're back in the same method for the Nth time in the last month because yet another requirement has changed how it's supposed to work, that's when you should recognize the pain your current design is causing you. That's where Pain Driven Development comes into play. Refactor your code so that the pain you're experiencing as a result of its current design is abated.

Extreme Programming introduced the concept of YAGNI, or You Ain't Gonna Need It. PDD is closely aligned with this concept. YAGNI cautions against building things you might need in the application, and instead favors building only what's required today (but in a responsible manner, so you can revise the design in the future). PDD offers similar guidance, but from a different perspective. The message with PDD is: follow YAGNI and build only what is required today, but recognize when you'll "need it" by the pain the current design causes you as you try to work around or with it.

Well-designed code is enjoyable to work with. If you frequently find yourself frustrated with the code you're working with, see if you can identify the source(s) of the pain, and apply refactoring techniques to alleviate the problem.

Show Resources and Links

Pain Driven Development (PDD)
SOLID Principles of OOD
Refactoring Fundamentals
Oct 16, 2017 • 7min

Data Transfer Objects (part 2)

Data Transfer Object Tips (Part 2)

One classification of objects in many applications is the Data Transfer Object, or DTO. Here are some more tips that may help you avoid problems when using these objects.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Last week we talked about the definition of a DTO and how they're typically used. This week we'll cover a few more common problems with them and offer some dos and don'ts.

Mapping and Factories

It's fairly common to need to map between a DTO and another type, such as an entity. If you're doing this in several places, it's a good idea to consolidate the mapping code in one place. A static factory method on the DTO is a common approach to this. Note that this isn't adding behavior to the DTO, but rather is just a static helper method that we're putting on the DTO type for organizational purposes. I usually name such methods with a From prefix, such as FromCustomer(Customer customer) for a CustomerDTO type. There's a simple example in the show notes for episode 8.

```csharp
public class CustomerDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public static CustomerDTO FromCustomer(Customer customer)
    {
        return new CustomerDTO()
        {
            FirstName = customer.FirstName,
            LastName = customer.LastName
        };
    }
}
```

You can also use a tool like AutoMapper, which will eliminate the need to write such static factory methods. I usually move to AutoMapper quickly if I have more than a couple of these methods to write myself.

What about attributes?

It's common in ASP.NET MVC apps to use attributes from the System.ComponentModel.DataAnnotations namespace to decorate model types for validation purposes. For example, you can add a Required attribute to a property, and if that property isn't supplied during model binding, an error will be added to a collection of validation errors. Since these attributes don't impact your ability to work with the class as a DTO, and since typically the DTO is tailor-made for the purpose of doing this binding, I think it's perfectly reasonable to use these attributes for this purpose. You can rethink this decision if at some point the attributes start to cause you pain. Follow Pain Driven Development (PDD): if something hurts, take a moment to analyze and correct the problem. Otherwise, keep on delivering value to your customers. If you're not a fan of attribute-based validation, you can use Fluent Validation and define your validation logic using a fluent interface. You'll find a link in the show notes.

Keeping DTOs Pure

Avoid referencing non-DTO, non-primitive types from your DTOs. Doing so can pull in dependencies that make it difficult to secure your DTO. In some cases, it can introduce security vulnerabilities, such as when you have methods accepting input as DTOs, and these DTOs reference entities that your app is directly updating in the database. An attacker could guess at the structure of the entity, and perhaps its navigation properties, and could add or update data outside the bounds of what you thought you were accepting. Take care in your update operations to only update specific fields, rather than model binding an entity object from external input and then saving it.
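To make that last point concrete, here's a hypothetical sketch of a field-by-field update. The CustomerService, ICustomerRepository, and Customer entity names are illustrative assumptions; CustomerDTO is the type from the mapping example above.

```csharp
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Update(Customer customer);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void UpdateCustomer(int customerId, CustomerDTO dto)
    {
        var customer = _repository.GetById(customerId);

        // Copy only the fields this operation is allowed to change;
        // nothing else in the incoming payload can touch the entity.
        customer.FirstName = dto.FirstName;
        customer.LastName = dto.LastName;

        _repository.Update(customer);
    }
}
```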
DTO Dos and Don'ts

Let's wrap up with some quick dos and don'ts for Data Transfer Objects:

Don't hide the default constructor
Do make properties available via public get and set methods
Don't validate inputs to a DTO
Don't add instance methods to your DTO
Do consolidate mapping logic into static factories
Do consider moving to AutoMapper if you have more than a few such factory methods
Do feel free to use attributes to help with model validation
Don't reference non-DTO types, such as entities, from DTOs

Show Resources and Links

AutoMapper
Pain Driven Development (PDD)
Fluent Validation
Oct 9, 2017 • 4min

Data Transfer Objects (part 1)

Data Transfer Object Tips (Part 1)

One classification of objects in many applications is the Data Transfer Object, or DTO. Here are some tips that may help you avoid problems when using these objects.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Data Transfer Objects, or DTOs, are, as the name suggests, objects whose main purpose is to transfer data. In Object Oriented Programming, we typically think about objects as encapsulating state, or data, together with behavior. DTOs have the distinction of being all about the data side of things, without the behavior.

Why do we need DTOs?

DTOs are used as messages. They transfer information from one part of an application to another. Depending on where and how they transfer information, they might have different names. Often, they're simply referred to as DTOs. In some cases, you may see them characterized as View Models, API Models, or Binding Models. Not all view models in MVC apps are DTOs, but many can and probably should be. For instance, in an ASP.NET MVC application, you typically don't want any behavior in the ViewModel type that you pass from a controller action to a view. It's just data that you want to pass to the view in a strongly typed fashion. If you're following the MVVM pattern to build apps using WPF or something similar, then your ViewModel in that scenario is supposed to have behavior, not be a DTO. Ideally we'll come up with a better name for ViewModels in MVC apps, but obvious choices like ViewData are already overloaded.

Why shouldn't DTOs have behavior?

DTOs don't have behavior because if they did, they wouldn't be DTOs. Their entire purpose is to transfer data, not to have behavior. And because they are purely data objects, they can easily be serialized and deserialized into JSON, XML, etc. Your DTO's data schema can be published, and external systems can send data to your system in a wire format that your system can translate into an instance of your DTO. If your DTO has behavior on it, for instance to ensure its properties are only set to valid values, this behavior won't exist in the string representation of the object. Furthermore, depending on how you coded it, you might not be able to deserialize objects coming from external sources. They might not follow constraints you've set, or you might not have provided a default public constructor, for instance. The goal of DTOs is simply to hold some state, so you can set it in one place and access it in another. To that end, the properties on a DTO should all have public get and set methods. There's no need to try to implement encapsulation or data hiding in a DTO.

That's it for this week. Next week I'll talk some more about DTOs and provide a list of dos and don'ts.
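To illustrate the serialization point above, here's a minimal round-trip sketch. It uses Json.NET (Newtonsoft.Json) as the serializer and a hypothetical CustomerDTO shape; any serializer that relies on public getters, setters, and a default constructor behaves similarly.

```csharp
using System;
using Newtonsoft.Json;

public class CustomerDTO
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var dto = new CustomerDTO { FirstName = "Steve", LastName = "Smith" };

        // Serialize to a wire format any external system can consume...
        string json = JsonConvert.SerializeObject(dto);

        // ...and back again. Public get/set properties and a default
        // constructor are all the serializer needs.
        var roundTripped = JsonConvert.DeserializeObject<CustomerDTO>(json);
        Console.WriteLine(roundTripped.FirstName);
    }
}
```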
Sep 25, 2017 • 12min

Prefer Custom Exceptions

Low-level built-in exception types offer little context and are much harder to diagnose than custom exceptions that can use the language of the model or application.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode!

Show Notes / Transcript

Given the choice, avoid throwing basic exception types like Exception, ApplicationException, and SystemException from your application code. Instead, create your own exception types that inherit from System.Exception. You can also catch common but difficult-to-diagnose exceptions like NullReferenceException and wrap them in your own application-specific exceptions. You should think about your application exceptions as being part of your domain model. They represent known bad states that your system can find itself in or have to deal with. You should be able to use your ubiquitous language to discuss these exceptions and their sources within the system with your non-technical domain experts and stakeholders. Let's talk about a few different examples.

Throwing Low-Level Exceptions

Consider some code that does the following:

```csharp
public decimal CalculateShipping(string zipCode)
{
    var area = GetAreaFromZipCode(zipCode);
    if (area == null)
    {
        throw new Exception("Unknown ZIP Code");
    }
    // perform shipping calculation
}
```

The problem with this kind of code is that client code attempting to catch exceptions resulting from the shipping calculation is forced to catch generic Exception instances, instead of a more specific exception type. It takes very little code to create a custom exception type for application-specific exceptions like this one:

```csharp
public class UnknownZipCodeException : Exception
{
    public string ZipCode { get; private set; }

    public UnknownZipCodeException(string message, string zipCode) : base(message)
    {
        ZipCode = zipCode;
    }
}
```

In fact, in many cases you can create an overload that sets a standard default exception message, so you're consistent and your code is more expressive with fewer magic strings. Add this overload to the above exception, for instance:

```csharp
public UnknownZipCodeException(string zipCode) : this("Unknown ZIP Code", zipCode)
{
}
```

And now the original code can change to:

```csharp
public decimal CalculateShipping(string zipCode)
{
    var area = GetAreaFromZipCode(zipCode);
    if (area == null)
    {
        throw new UnknownZipCodeException(zipCode);
    }
    // perform shipping calculation
}
```

Now client code can easily catch and handle the UnknownZipCodeException type, resulting in a more robust and intuitive design.

Replace Framework Exceptions with Custom Exceptions

An easy way to make your software easier to work with, both for your users and for developers, is to use higher-level custom exceptions instead of low-level exceptions. Low-level exceptions like NullReferenceException should rarely be thrown from business-level classes, where most of your custom logic should reside. By using custom exceptions, you make it much more clear to everybody involved what the actual problem is. You're working at a higher abstraction level, using the language of the business domain.

For example, let's say you're writing an application that works with a database. Perhaps it's an ASP.NET Core application in the medical or insurance industry, and it references individual customers as Subjects. Within some business logic dedicated to creating an invoice, recording a prescription, or filing a claim, there's a reference to a Subject Id that is invalid.
When your data layer makes the request and returns from the database, the result is empty.

```csharp
var subject = GetSubject(subjectId);
subject.DoSomething();
```

Obviously in this code, if subject is null, the last line is going to throw an exception (you can avoid this by using the Null Object Pattern). Let's further assume that we can't handle this exception here - if the subject id is incorrect, there's nothing else for this method to do but throw an exception, since it was going to return the subject otherwise. The current behavior for a user, tester, or developer is this:

Unhandled Exception:
System.NullReferenceException: Object reference not set to an instance of an object.

One of the most annoying things about the NullReferenceException is that it is so vague. It never actually specifies which reference, exactly, was not set to an instance of an object. This can make debugging, or reporting problems, much more difficult. In the above example, we're not specifically throwing any exception, but we are allowing a NullReferenceException to be thrown in the event that we're unsuccessful in looking up a Subject for a given ID. It's still a part of our design to rely on NullReferenceException, though in this case it's implicit. What if instead of returning null from GetSubject we threw a SubjectNotFoundException? Or, if we weren't sure that an exception made sense in every scenario, what if we checked for null and then threw a better exception before moving on to work with the returned subject, like in this example:

```csharp
var subject = GetSubject(subjectId);
if (subject == null) throw new SubjectNotFoundException(subjectId);
subject.DoSomething();
```

If we don't follow this approach, and instead we let the NullReferenceException propagate up the stack, it's likely (if the application doesn't simply show a Yellow Screen of Death or a default Oops page) that we will try to catch NullReferenceException and inform the user of what might be the problem. But by then we might be so far removed from the exception that even we can't know for sure what might have been null and resulted in the exception being thrown. It's also possible that this exception might be thrown in the middle of a long multi-line LINQ statement or object initializer, making it difficult to know what, exactly, was null. Raising a more specific, higher-level exception makes our own exception handlers much easier to write.

Writing Custom Exceptions

As I described earlier, it's very easy to write a custom exception for the case where no Subject exists for a given Subject ID. You should name it something very specific, and end the name with the Exception suffix. In this case, we're going to call it SubjectDoesNotExistException (or maybe SubjectNotFoundException), since that seems very clear to me. You can create a class that inherits from Exception and use constructor chaining to pass some information to the base Exception constructor, like this:

```csharp
public class SubjectDoesNotExistException : Exception
{
    public SubjectDoesNotExistException(int subjectId)
        : base($"Subject with ID \"{subjectId}\" does not exist.")
    {
    }
}
```

(You'll find code samples in the show notes at www.weeklydevtips.com/007.)

Now in the example above, with no error handling in place, the user will get a message stating "Subject with ID 123 does not exist." instead of "Object reference not set to an instance of an object.", which is far more useful for debugging or reporting purposes. In general, you should avoid putting custom logic into your custom exceptions.
In most scenarios, custom exceptions should consist only of a class definition and one or more constructors that chain to the base Exception constructor.

If you follow domain-driven design, I recommend placing most of your business-logic-related exceptions in your Core project, within your domain model. You should be able to easily unit test that these exceptions are thrown when you expect them to be from your entities and services.

Show Resources and Links

Prefer Custom Exceptions to Framework Exceptions
Domain-Driven Design Fundamentals
Null Object Pattern
Don't Repeat Yourself (DRY)
Sep 18, 2017 • 5min

Make It Work. Make It Right. Make It Fast.

Make It (Work|Right|Fast)

Don't fall into the premature optimization trap. Follow this sequence when developing new features.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode!

Show Notes / Transcript

There's a three-step process that I first heard of from Kent Beck. Following these steps when implementing a new feature can help you remain focused on getting the work done, and can help you avoid falling into the trap of premature optimization.

The First Step: Make it work

The first step is to make it work. Since we're talking about software, there is no cost of materials. You can make the code do what it's supposed to do in whatever ugly, messy manner you want, so long as it works. Don't waste time worrying about whether your approach is ideal, your code elegant, or your design patterns perfect. If you can see multiple ways to do something, and you're not sure which is best, pick one and go with it. You can leave a TODO comment or make a note in the notebook you keep with you as you code if you think it's important enough to revisit. Otherwise, when you're done, be sure it works, and works repeatably. You should have some kind of automated tests that demonstrate that it works. I should probably also note that this process works best with small units of work. In fact, kanban demonstrates that your overall process will be improved if you work on the smallest-scoped work items you can. You should be able to follow all three of these steps multiple times per day. If you're spending days or longer just trying to make it work, you need to come up with a smaller "it" and get the reduced-scope item to work first. Then move on to steps two and three before continuing with the larger-scoped work.

The Second Step: Make it right

Once you have a working solution, and an inexpensive way to ensure it remains working while you modify it, follow the refactoring fundamentals to improve your code's design. Look for code smells. Follow software principles. Make sure it's good enough that when you return to it, you'll be able to understand and change it without too much effort (or someone else will be able to do so). Tests serve as a great form of documentation, especially if you name them well. If you think you need more tests, or you need to better organize your tests, this is the time to do so. But stop when you have enough tests that, when they're green, you're confident your code does what it should. Don't chase some arbitrary metric beyond this point, when you could be delivering more value in the form of more features or bug fixes.

The Third Step: Make it fast

If it's not fast enough already (in terms of performance), now is the time to measure and tune the application's performance. Performance characteristics of the system should be described just like other system requirements, and effort should be spent improving performance only until these measurable requirements are met (otherwise, how will you know when you're done?). For some applications, there is great ROI in every small bit of performance improvement. This is true of large, public ecommerce sites like Amazon.com, which has measured customer cart abandonment increasing with milliseconds of additional latency. However, most applications have less stringent requirements, and in many cases users have no choice but to use the system for their job. In such cases, you want to provide the user with good enough performance, but remember that beyond good enough is waste.
If users don't really notice the difference between 1-second page load times and 800ms page load times, you probably don't need to spend several hours trying to trim 200ms when that time could have been spent fixing a bug that's been plaguing users for weeks.

Summary

Your key takeaways from this episode should be:

Work on small pieces of work. For each piece:
  Make it work
  Make it right
  Make it fast

Stop working on the code as soon as it works. Stop cleaning it up and adding tests as soon as you're confident it works and is clean enough to maintain the next time someone needs to touch it. Stop tuning its performance as soon as it's good enough. If you follow these steps, you'll stay as productive as possible, you'll ship quality software, and you won't get mired in analysis paralysis or gold-plating your code. Check the show notes at weeklydevtips.com/006 for a bunch of links to more information on many of the things I mentioned in this episode.

Show Resources and Links

Kanban: Getting Started
Refactoring Fundamentals
List of Code Smells
List of Software Principles
Unit Test Naming Convention
Measuring and Tuning Web Performance
Beyond Good Enough is Waste

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app