
Weekly Dev Tips

Latest episodes

Dec 24, 2018 • 6min

Why is Immutability Desirable?

Why Immutability is Desirable

This week's tip is on the topic of immutability, and why it's often considered a good thing for your data structures. I'll share my thoughts on the topic in a moment, but first a quick note from this week's sponsor.

Sponsor - devBetter Group Career Coaching for Developers

If you're not advancing as quickly in your career as you'd like, you may find value in joining a semi-formal career and technical coaching program like devBetter.com. I launched devBetter a few months ago and so far we have a small group of motivated developers meeting every week or two. I answer questions, review code, suggest areas in which to improve, and occasionally assign homework. Interested? Learn more at devBetter.com.

Show Notes / Transcript

Let's talk about immutability. This topic is on my mind because I just wrote an article about getting language support for immutability and, more broadly, DDD value objects, in C#. Value objects are a DDD pattern, and one of their defining characteristics is that they're immutable. I keep using that word, so I should probably define it. An immutable data structure (an object in C#) is one that, once created, cannot have its state modified. I mention wanting language support for this feature in C# - that's not to imply that you can't create immutable objects today. It's just a lot of manual work, and easy to get wrong or screw up in the future because nothing enforces immutability at the class level. Typically in C# to create an immutable object you create a class with properties that lack setters, and then you assign values to these properties in the class constructor. Short of some reflection trickery, instances of this type cannot have their properties modified once they've been instantiated.

Why might we want this? The biggest advantage I get from immutability in objects is the knowledge that instances of these types are always in a valid state. That means any method using such types doesn't have to waste effort trying to verify they're in a valid state. Here's a common example. Imagine your system needs to work with date ranges that include a start and end date. You might have many methods that take in two DateTime types, and in these methods you always expect the end date to be later than the start date. So, being a good programmer, you write a guard clause to ensure the start date precedes the end date. This logic ends up scattered all over the place, and maybe sometimes you forget or don't bother with it, so it's not even enforced consistently, allowing bugs to creep in.

What if instead you created an immutable DateRange class, passed the start and end dates into its constructor, and ensured they were valid there? If not, you'd throw an appropriate exception. Now, any method that was accepting a start and end date can just take in a DateRange instead, shortening these methods' parameter lists. And they can remove all of their validation on start date and end date because that's now done in the DateRange class. Why can you trust that it was done? Because of immutability. If it was valid when it was created, it must still be valid now since it couldn't be changed in the meantime. Your validation logic only has to be performed in one place, and immutability gives you guarantees that it will be applied, so you don't have to defensively code for it everywhere.

Another advantage immutability offers is thread safety. You don't have to worry about race conditions or synchronization issues between different threads when they work with immutable objects.
Why not? Simply because the objects can't change. Operating on immutable objects may produce new instances of objects, but this typically doesn't pose an issue for multi-threaded applications. The issue is more commonly something like an instance that two threads are referencing, and each thread tells the instance to increment a counter at the same time. The end result may be unexpected due to how the operations may be interleaved. This is typically overcome through the use of locks, but you can pass around immutable objects between threads all you want and never have to worry about this issue.

Have you ever passed an object to a method, and then found yourself surprised when the method modified the object? I generally dislike this kind of thing, and C# even has a new keyword, in, that will ensure this doesn't happen. I'll link to more on the in keyword in the show notes, but it has some restrictions and hasn't seen widespread adoption yet. Another way to ensure that the state of an instance you pass to a method isn't modified within that method is to use an immutable object. This makes it much easier to reason about and debug your code, and can have performance benefits since objects that won't be modified can always be passed by reference, without copying them to the stack.

Immutable types are much easier to test than other types, and for this as well as the above reasons, using them appropriately can lead to better, more maintainable code. Eliminate the primitive obsession code smell, better encapsulate concepts that tie together several values, and force validation (and other business rules) to live with these values instead of in the types that use them. You should find over time that your domain model becomes much cleaner as a result.

Show Resources and Links

devBetter
5 Benefits of Immutable Objects
Guard Clauses
The C# in parameter modifier
Refactoring and Code Smells
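Here's a minimal sketch of the immutable DateRange value object described above. The class name comes from the episode; the exact members (including the ExtendBy helper) are illustrative assumptions:

```csharp
using System;

public class DateRange
{
    // Get-only properties: no setters, so state can't change after construction.
    public DateTime Start { get; }
    public DateTime End { get; }

    public DateRange(DateTime start, DateTime end)
    {
        // Validate once, here; every consumer can then trust the invariant.
        if (end <= start)
            throw new ArgumentException("End date must be later than start date.");
        Start = start;
        End = end;
    }

    // Operations return new instances rather than mutating this one.
    public DateRange ExtendBy(TimeSpan amount) => new DateRange(Start, End + amount);
}
```

Any method that accepts a DateRange can now skip its own start/end guard clauses, because an invalid instance can never exist.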
Dec 17, 2018 • 6min

Avoid Lazy Loading in ASP.NET Apps

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis. This is episode 34, in which we'll talk about lazy loading in ASP.NET and ASP.NET Core apps, and why it's evil. If you’re enjoying these tips, please leave a comment or rating in your podcast app, tell a friend about the podcast, or follow us on twitter and retweet our episode announcements so we can increase our audience. I really appreciate it.

Avoid Lazy Loading in ASP.NET (Core) Apps

This week's tip is on the topic of lazy loading using Entity Framework or EF Core in ASP.NET and ASP.NET Core apps. Spoiler alert: don't do it. Keep listening to hear why.

Sponsor - devBetter Group Career Coaching for Developers

Last week I announced my new developer career coaching program, devBetter. If you're not advancing as quickly in your career as you'd like, and you could use someone in your corner pushing you to succeed and opening up doors to new opportunities, consider joining a handful of like-minded developers at devbetter.com.

Show Notes / Transcript

Lazy loading is a feature that EF6 has had for a long time and EF Core only recently added with version 2.1. The way it works is, entities are fetched without their related entities, and these related entities are loaded "just in time" as code references them. This seems to follow the best practice of deferred execution, but unfortunately the downsides far outweigh the benefits the vast majority of the time in this case. I recommend disabling lazy loading in all ASP.NET and ASP.NET Core apps. Let's look at why.

On any given web request, the goal should be to return a response to the client as quickly as possible. The fewer out-of-process calls the request needs to make before it can return a response, the faster it will be, all things being equal. If a round-trip to the database takes 20ms and processing the request requires 10 database calls, then assuming they can't be made in parallel the minimum time for this is 200ms. If the same data could be fetched in a single round-trip to the database, it would cut page load time by 180ms, not counting the time to execute the queries themselves, which might also be faster if done in one batch.

When you use lazy loading, your code will make more calls to the database than if you had used eager loading. It's also deceptively easy to write code that will result in lazy loading being done within some kind of loop, resulting in dozens or hundreds of database calls. This can be difficult to detect in development and even in testing, but in production, where usually there are more users and larger sets of data in use, the problem can have huge performance implications.

I have a GitHub repo that demonstrates lazy loading using ASP.NET MVC 5 and EF 6 and also ASP.NET Core with EF Core. I encourage you to download it and run it yourself. It demonstrates the problem using a conference web site as its sample data. There are conference sessions. Each session has one or more speakers presenting it. Each session can have one or more tags. For sample data I have 2 sessions with 2 speakers and 3 tags total. Displaying the page shows each session and its speakers, tags, and description, all done with some simple razor code in the view. The initial query just pulls in the sessions - the speakers and tags are lazy loaded. How many queries do you think this page makes to the database? Let's think about how many it should make.
Assuming the site's just loaded and has no cache in place, it should be able to load the data for this page using a single query. At worst a couple of queries. This kind of data is also highly cacheable, so after the first load the page should render with 0 queries. For this reason I like to say "caching hides many sins" because even if you do use lazy loading and have way too many queries, if you add caching it'll be the rare user who has to suffer for it.

Coming back to the sample, with 2 sessions, 2 speakers, and 3 tags, the page makes 22 database queries to render the page using lazy loading. It should be clear that this number is going to grow rapidly as the number of sessions, speakers, and tags increases. Most conferences have more than 2 sessions, after all, but during development maybe only a couple are used and only one user is hitting the page, so the performance impact might not be felt until the worst possible time: the day of the conference. At which point it may be too late to fix and redeploy the code.

Lazy loading is a tool that makes sense in certain situations. It's especially effective when the application and the database are colocated and there's just one user. If you're writing bookkeeping software that runs locally and communicates with a local database, it might make sense to use lazy loading as the user navigates around the system rather than trying to eager load all of the data. But in the context of processing a single web request, when every extra trip to the database slows the page down further, and where it can be easy to inadvertently add dozens or more requests, you should avoid it.

Show Resources and Links

devBetter
Avoid Lazy Loading Entities in ASP.NET Apps
Lazy Loading GitHub Sample
How to Disable Lazy Loading in EF

That’s it for this week. Thank you for subscribing to Weekly Dev Tips, and we’ll see you next week with another great developer tip.
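As a sketch of the eager-loading alternative discussed in this episode - the Session/Speakers/Tags names mirror the sample described above, but the exact model is an assumption:

```csharp
// EF Core: fetch sessions with their speakers and tags in one round trip,
// instead of issuing one extra query per lazy-loaded navigation property.
var sessions = await _context.Sessions
    .Include(s => s.Speakers)
    .Include(s => s.Tags)
    .AsNoTracking()   // read-only page, so skip change tracking
    .ToListAsync();

// EF6: lazy loading is on by default; disable it in your DbContext constructor:
// this.Configuration.LazyLoadingEnabled = false;
```

In EF Core 2.1, lazy loading is opt-in (via the UseLazyLoadingProxies extension), so simply not opting in gets you the behavior recommended here.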
Dec 10, 2018 • 9min

Use the Right Object Lifetime

Use the right object lifetime

This week we talk about object lifetimes, why they matter, and how to choose the right one. We'll focus a little bit on Entity Framework since it's very popular and also very frequently misconfigured.

Sponsor - devBetter Group Career Coaching for Developers

This week I'm announcing my new developer career coaching program, devBetter. If you're not advancing as quickly in your career as you'd like, and you could use someone in your corner pushing you to succeed and opening up doors to new opportunities, check it out at devbetter.com.

Show Notes / Transcript

If you're not using dependency injection or following the dependency inversion principle in your code, you probably don't care much about object lifetimes. You can probably just instantiate new instances anywhere you need them and then let them be destroyed when they go out of scope. In this case, you probably have no use for an IoC or DI container. However, your code is probably also very tightly coupled, making it more difficult to test and reconfigure in the future. If it's working for you, keep at it, but if you're feeling pain from the coupling, I encourage you to check out my SOLID principles and Refactoring courses on Pluralsight to learn a different way to compose things.

If you are using DI and containers, like most developers using ASP.NET Core where it's built-in, or even ASP.NET MVC, you've probably encountered the concept of object lifetimes before. There's some variety in the nomenclature for some of the options, but using the terminology of ASP.NET Core's container, there are three main kinds of object lifetimes: transient, scoped, and singleton. Let me cover these briefly - apologies if you're already well-versed in this topic.

Transient lifetime refers to objects that are created any time they're requested. If you request an instance of a type from the container, and that type's lifetime is transient, you're getting a brand new instance. If you ask for a type and that type has a constructor parameter that's configured to be transient, that constructor parameter is going to be a brand new instance, every time.

Scoped lifetime refers to objects that, once created, live on for the duration of a given HTTP request. If you ask the container for an instance multiple times within an HTTP request, you'll get the same instance every time. Regardless of where the request to the DI container comes from within the web request, the same object instance is returned from the DI container. The first call to get an instance of the type will get a new instance; every subsequent request for that type will get this same instance.

Singleton lifetime is simple - there's only one instance. The first time an instance is requested, it's created, or you can create it during startup and add it to the container then. After that, the same instance is used everywhere.

So, which one should you use? Well, naturally, it depends. Since we don't have a lot of time, let's just look at a couple of scenarios that involve Entity Framework or EF Core, which I'll refer to collectively as EF. EF should be set up with a scoped lifetime, so that within a given web request, exactly one instance of an EF DbContext is used. In ASP.NET Core when configuring EF Core, the helper methods take care of this for you, so you never have to make a decision about what lifetime to use. In EF 6, you had to figure it out yourself. And either way, if you're using the repository pattern, you have to make the right choice for your repository instances, too.
It's important that the right choice is used for EF DbContexts, specifically because they track the entities they work with. As such, you can't have an entity that is tracked by multiple DbContext instances - that will cause an exception. You also typically don't want to share entity instances between requests - that can cause bugs when two requests are making changes to an entity they both think they have exclusive access to.

So let's say you configure your repository instances to be transient. That means if you have two different classes within a request, like a controller and a service, that both need the same kind of repository, they'll each get a different, newly created instance. And assuming nothing else needs a DbContext, the first instance of the repository will get a new DbContext, and the second instance of the repository will reuse that same DbContext, since the DbContext itself is scoped. There's probably no need to have two separate repository instances in this case, but there shouldn't be any bugs.

Now let's say you configure your repository instances to behave as singletons. Consider the same scenario in which a given web request needs the repository first in a controller and later in a service. The very first request to the web server will result in a newly created repository instance (which will be reused) and a newly created DbContext that's then passed to the controller. Then in that same request, when the service is created, it is passed this same repository instance, which still has the same DbContext associated with it. The request completes just fine. Now a subsequent request comes in. It will once again use the same repository instance, which still has a reference to the same DbContext. But that instance was scoped to a web request that has completed, so it's not going to work for this request.

Or consider another scenario, in which two requests are occurring at the same time. Both will share the repository, and its DbContext, so any entities created and tracked will be shared between the two requests. If one request makes a partial update, and the other request calls SaveChanges, the update will occur immediately, perhaps resulting in an error due to database constraints. This same thing can happen if you configure your DbContext to be a singleton.

So, in the case of EF DbContexts and Repositories, the key takeaway is that their lifetimes should match, and their lifetimes in web applications should be Scoped. For other kinds of services, especially other ORMs like NHibernate, it's important you understand exactly how these types should be configured when it comes to their object lifetimes for web scenarios.

Show Resources and Links

devBetter
Dependency Injection
Dependency Inversion Principle
SOLID principles
Refactoring
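A minimal sketch of how these lifetimes get wired up in ASP.NET Core's ConfigureServices; the repository and service names here are hypothetical placeholders:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // AddDbContext registers the context with a Scoped lifetime by default,
    // so each web request gets exactly one DbContext instance.
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Default")));

    // Match the repository's lifetime to the DbContext it wraps.
    services.AddScoped<IOrderRepository, EfOrderRepository>();

    // Stateless helpers are safe as transient; truly shared,
    // thread-safe state can be a singleton.
    services.AddTransient<IEmailSender, SmtpEmailSender>();
    services.AddSingleton<IClock, SystemClock>();
}
```

If the repository above were registered as transient or singleton instead of scoped, you'd hit exactly the mismatched-lifetime bugs described in this episode.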
Nov 12, 2018 • 9min

How much do you make?

Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis. This is episode 32, in which we'll talk a little about money, salaries, and workplace taboos on sharing details about such things. If you’re enjoying these tips, please leave a comment or rating in your podcast app, tell a friend about the podcast, or follow us on twitter and retweet our episode announcements so we can increase our audience. I really appreciate it.

How much do you make?

This week we talk about money. Specifically, how do you feel about discussing your salary with your coworkers and peers? Why do you feel the way you do?

Sponsor - Ardalis Services

Does your development team need a force multiplier to level up their quality? Contact Ardalis Services to see how we can help.

Show Notes / Transcript

I suspect this episode will be both interesting and perhaps somewhat uncomfortable for many listeners. A lot of people have an innate aversion to discussing salary numbers, money, etc. Let's start by examining this, and then move on to look at ways you can maximize what you get in exchange for the value you provide to your employer or customer.

Let's talk salary. I'll start, though since I've been self-employed for a while now, I'm going to cheat a bit and provide a quick story from early on in my career. When I got my first job in the late 90s, when the dot-com bubble was still expanding, I took an offer with a salary of $37,000. The consulting company I was working for was expanding rapidly. I was hired in August; another batch of new hires started in January. In speaking with them, I learned that they had been given offers of $42,000! I was understandably annoyed by this. Here I was, with six months' more experience and already providing value at a billable client, and I was making 88% of what these new hires were making on day one.

Let's stop here for a moment. What are you thinking as I relate this story? Did I break some unwritten rules by discussing salary with these new associates of mine? Was it my fault that I was annoyed by this perceived inequality? Maybe you're thinking, "Sure, you just demonstrated why it's a bad idea to talk salary with your coworkers, Steve. All it does is create bad blood and drama." If this is where you're coming from, I'm warning you now that I simply don't understand this position. I'll try to empathize, but at the end of the day I just don't get it, as will become clear in just a moment. If, after hearing the rest of the story, you want to help me understand why you still think keeping salary a secret is better for you, please leave me a note in the comments.

So, why did I want to discuss salary with my coworkers? The answer is simple: I wanted data. I wanted intel. I wanted to be able to make informed decisions about my career. I had aggregate data at my fingertips in the form of annual surveys conducted by my school's career placement office, but I wanted to know what someone in the exact same position at the exact same company was making, and I was able to easily acquire this information by simply striking up a friendly conversation.

Now, given this situation, what would you do? Regardless of whether you would ever actively inquire about salary, let's say your new team member volunteers their salary to you over lunch before you're able to stop them. The cat's out of the bag. You have the information. They're making a significant amount more than you, despite having less experience and having been hired into the same role.
I imagine the following options are available:

- Do nothing. Forget about it. It's none of your business.
- Hold onto the information. Maybe consider it when you next get a pay raise or promotion, and use it to consider whether to ask for more.
- Start sending out resumes to other companies who might pay the same or more than what your company is now paying. If you get an offer, maybe your company will match it to keep you.
- Go to your manager and demand an increase.

I'm probably leaving out some other options - feel free to add them in the show comments.

Back to my story. After learning I was making $5k/year less than the batch of new hires, I went to my manager. I explained, respectfully, that I didn't think it was equitable for me to be making significantly less than the folks they'd just hired. My manager very quickly agreed to immediately adjust my salary to match, but asked me not to advertise to my coworkers that he was doing so. I agreed. In hindsight, perhaps I shouldn't have been so quick to agree to this, but at the time it seemed a small price to pay to get the immediate pay increase, and in any case it was just a verbal request, not a legal document I was asked to sign.

So, who won and who lost in this story, if there were in fact any winners or losers? I'm happy with the outcome, since it meant I was able to keep up with the rapidly rising salaries of the time without having to change companies. I really liked where I worked and wasn't contemplating looking elsewhere. If I'd been ambivalent or hostile toward my current employer, I'd likely have taken a different approach.

Did the new hires lose anything by sharing information with me? Not that I can tell. Some companies try to enact gag order policies that may go so far as to threaten employees with termination if they share compensation details, but most of these clauses are unenforceable in my experience. I am not a lawyer, so do your own research before acting on this opinion, though. In any case, I wasn't asked how I knew what I knew, so no individual was called out as a result of my acting on the information I acquired.

That leaves my employer. They "lost" in that they now had to pay one more of their employees the same rate they were paying others in the same position. Their profit margin shrank slightly. But they also won in that they retained a valuable employee who was almost always billing, even while the market grew even tighter for software developers. I stayed there for another 4 years because they continued to grow my compensation and I continued to enjoy working there. If after a year or two I'd found myself underpaid by 20% or more there, I'd very likely have jumped to another position, costing them a highly billable consultant, which is how they earned all their revenue.

Could this have gone badly? Perhaps. It's easy to say in hindsight that it was a good move, but what if instead of bumping my pay my manager had fired me for breaking the company's rules about discussing compensation (we didn't have any, but say we did)? This was a risk, but I had risk tolerance and I felt the risk likelihood was small. I hadn't yet really expanded my lifestyle and expenses from that of a college student, so my expenses were small relative to my income, and I didn't yet have children. The market was also great, and I'm sure if I'd been let go I'd have gotten another offer within a few weeks. Like today in 2018, everybody seemed to be hiring.
Would I have made the same choices if unemployment were high, layoffs were going on everywhere, and I didn't know how I'd make ends meet if I lost my job? Maybe not. But I'd still want as much accurate intel as I could get so that I could make the best decision for me given whatever the circumstances.

Let's wrap up by considering who stands to gain from keeping salary details secret. For many listeners I suspect you can't imagine working somewhere that had transparent compensation details, but as a former Army officer I can tell you it's not a big deal. Everybody in the military, and in government service, knows exactly how much everybody else is making. You can check out the pay scale any time you want. It's not an issue. So, it shouldn't be assumed that secret compensation is somehow the only way to do things.

It should by now be obvious that the ones who stand to gain the most from keeping salaries secret from one another are the company's owners. By paying different amounts for potentially the same work, they're able to increase profit margins. Pay differences can be warranted, and whether they are or not, they can seed discontent and hurt morale. Or they can empower employees to ask for what they think they're worth, as in my case, which can end up costing the company more in payroll. If companies pay different amounts to different individuals, and this is transparent, they need a way to justify this decision. This can require more communication. Just as it gives the employee more information and freedom to make decisions, it limits the company's freedom to negotiate from a position of having more information than the other party. In negotiations and economics in general, when one party has better information than another, they can use this to their advantage and get themselves a better deal. By sharing information, employees aren't gaining an unfair advantage, they're merely eliminating an unfair advantage their employer previously held.

Show Resources and Links

What Color is Your Parachute (book) - Great tips on job hunting and career; I read an earlier edition many years ago
2018 US Military Pay Scale
Oct 22, 2018 • 5min

Breaking Bad Coding Habits

Breaking Bad Coding Habits

This week guest Joe Zack talks about how to apply the power of habit to break bad coding habits. Joe is a software developer based in Central Florida. He is a host of the Coding Blocks podcast and is particularly excited about Search Engines and the JAMStack these days.

Sponsor - Ardalis Services

Does your development team need a force multiplier to level up their quality? Contact Ardalis Services to see how we can help.

Show Notes / Transcript

Hello, my name is Joe Zack, and I’m a long time developer and podcaster over at Coding Blocks. I’m also a huge sucker for the Business-y PopSci Self-Help kind of books that you see on Top Seller lists. I like to take the lessons from those books and try to apply them to my programming. One book I particularly enjoyed was "The Power of Habit" by Charles Duhigg; check the show notes for a link.

This book describes the building of habits as the process of taking explicit, deliberate actions and turning them into implicit, automatic ones. And hey, that sounds kinda like programming to me! We programmers figure out precisely what operations need to occur to fulfill our requirements, and we write programs to automate those operations so that we can deal with things at a higher level of abstraction. That enables us to combine and compose these programs to solve even bigger problems without getting tripped up on tiny little details.

"The Power of Habit" book promises that building good habits is much like building a SOLID API. You spend the time up front building good habits, and then you get a multiple of that time back with reduced maintenance costs over time. But what if your API isn’t so SOLID? What if you’ve developed some bad habits that you would like to change? Well then, you’re in luck - because the book spends a lot of time looking at how habits can be changed, and I’m here to share some of that with you.

An example of a bad programming habit that I have is only considering the "happy path" operations that need to happen to meet a requirement. I tend to focus too much on how to make something work, and not enough on how to handle problems that might arise in the real world. In writing about changing habits, author Charles Duhigg encourages me to determine the cue, routine, reward, and craving in this bad habit. If I can determine those 4 aspects of the habit, then I can figure out how best to change it.

In this example, I can look back and see that my undesirable behavior most often occurs when I’m estimating tickets, so that is my “cue”. The “routine” is my act of imagining the work that needs to happen to fulfill the requirements of the ticket and then estimating how long that will take. My “reward” is that I can get back to programming, which is an activity that I enjoy a lot more than estimating. In fact, the consequences for my bad estimates are typically deferred until I actually start working on those tickets. The final aspect of a habit is the “craving”. This is the anticipation of the reward. Knowing that I can get back to “real work” once I complete my estimates provides an incentive for working quickly, rather than accurately.

According to the book, the trick to changing habits is to recognize the cue, craving and reward - and to replace the routine. In my example, a good tactic would be to replace my current imaginative process with a more disciplined approach.
Perhaps adding together a separate estimate for the happy path and one for dealing with exceptions would encourage me to look at the bigger picture and would lead to a more accurate result. I’m going to give this a shot, and see how it goes.

In the meantime I hope that you take a moment to consider how healthy your programming habits are. If there are any habits that you are unhappy with, then remember that you can change them by recognizing the cue, craving and reward - and changing the routine that you perform in response to those stimuli. Keep doing the right thing, and eventually you’ll codify that habit into your mental muscle memory so that the good behaviors flow without you having to think explicitly about them, and you can operate efficiently at a higher level of abstraction.

Thanks for having me on the show Steve!

Show Resources and Links

The Power of Habit: Why We Do What We Do in Life and Business
On Audible
Power of Habit Summary
Oct 15, 2018 • 4min

On Code Smells

I've talked quite a bit about code smells over the course of my career. My Refactoring Fundamentals and Azure Refactoring courses on Pluralsight both discuss the topic, with the former going into great depth and covering literally dozens of code smells. The course is over 8 hours long, but it not only demonstrates tons of code smells but also shows how to refactor to improve your code in response to them.

It's important to note that code smells represent things in your code that are potentially bad. They should catch your attention, and you should think about whether, in context, the smell in question is acceptable or not. Sometimes, it's perfectly fine, or it's not worth the effort to refactor to a different design. If you've never heard of the term code smell, I encourage you to look into it. There are some links in the show notes for this episode.

One benefit of learning about code smells mirrors a benefit of learning about design patterns, which is that these named smells allow you to identify and communicate concepts quickly with other developers. For example, if you're discussing some code and mention it seems to have 'primitive obsession', that term refers to a specific code smell which is well-documented and which has certain known refactoring approaches. By using this term, you convey a lot of information in just two words that otherwise might have required a great deal more explanation.

It can be useful as well to learn about different categories of code smells. These categories include things like Bloaters, Obfuscators, and Couplers, as well as smells specific to kinds of code, like testing smells. These categories help as you're learning about code smells because they let you see a variety of smells that all have similar impacts on the code. Bloaters tend to result in code becoming larger than necessary. Couplers introduce unnecessary coupling into the application. Obfuscators make it more difficult to quickly understand how some part of your application works. And test smells make tests more difficult to write and maintain, or less reliable when run.

Some code smells you can identify with static code analysis tools, like NDepend. For instance, you can easily write a query in NDepend to return all methods over a certain number of lines of code (there's a sketch of such a query after these notes). These kinds of tools can help you identify potential problem areas in your code so you can better direct your refactoring efforts.

I may dive into some different code smells, and how to correct them, in future tips. In the meantime, if you want to get up to speed the best resource I can recommend is my Refactoring Fundamentals course, on Pluralsight.

Show Resources and Links

Refactoring Fundamentals
Azure Developer: Refactoring Code
Code Smells
Refactoring Book (classic 1999)
Refactoring Book (2nd Ed.) (Available 31 Dec 2018)
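Here's roughly what such a long-method query looks like in NDepend's CQLinq; the 30-line threshold is an arbitrary assumption, and the exact query shape may differ from the tool's built-in rule templates:

```csharp
// CQLinq rule: flag methods longer than 30 lines of code
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```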
Oct 8, 2018 • 5min

Shared Kernel as a Package

Shared Kernel as a Package

Code shared between applications within an organization is typically referred to as a shared kernel in domain-driven design. This week's tip discusses this approach and how best to do the sharing.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

If you've written more than one application, or worked for a company that has more than one, you've probably shared code between the applications. There are a variety of approaches to this, one of the most awful being the One Solution To Rule Them All approach, in which every bit of code ever written is added to a single code repository and a single solution file. One or more projects in this solution become the shared projects used by many different applications. The benefit of this approach is that developers can easily view and even debug all of the code possibly used by anything. Changes that might break dependent projects are often discovered quickly. However, if a single project requires an update to shared code, it's not easy to have one project depend on a different version of the shared library than another. Even if you use more than one solution, if you're sharing code between multiple solutions at the file system level, you're probably in this boat.

In Domain-Driven Design, the Shared Kernel is code that more than one bounded context depends on. The contract between the shared kernel code and its dependents is that the shared kernel code doesn't change unless all downstream dependencies agree with the change. Often it's one team maintaining the shared kernel and its dependent projects, in which case this is pretty easy, but in larger organizations there may be an approval process involving several teams. When updates do occur, they should be decoupled from dependents, such that each downstream dependency can pull in the update when it's ready. This enables updating the shared kernel code without having to test and update every downstream dependency immediately.

In .NET, one way to gain the ability to have dependent projects pull in the latest updates to the shared kernel whenever they're ready is to use a Nuget package. Any time an update is made to the shared kernel, its package should be rebuilt and its version incremented. For example, you might initially have Acme.SharedKernel version 1.0.0, which two projects reference. Project A needs additional functionality, and it's agreed to place it in the shared kernel. A new package is published, with version 1.0.1. Project A updates its version of the package to require 1.0.1 and is able to be deployed. Project B continues to depend on version 1.0.0 and can continue with development and/or remain deployed using this version. Project B can choose when and how often to update which version of the shared kernel package it uses.

If you follow this approach, there are a few things that you may find helpful. First, use continuous integration for your shared kernel library. When you make updates to it, the automated build should compile it, run tests (yes, it should have tests), update its version number, and publish it. This ensures you have a consistent process, which is important especially when we're talking about deploying versioned packages. Next, you'll want to have a way to share the package between your developers and build machines.
One nice thing about Nuget is that any file share can serve as a Nuget server, so at a minimum you can simply drop versioned nupkg files into a particular file share. Alternately, you can use an actual Nuget server, such as one built into Jetbrains TeamCity or VSTS/Azure DevOps. You can use a cloud-based solution like myget, if you prefer. In any case, you simply need a way to distribute your shared, versioned packages.

With these fairly small pieces in place, you should find that you're able to decouple your shared kernel package from its dependents such that you can make updates to it as required and pull in those updates only as needed by each dependency. You should also find that, being a separate solution with a separate automated build, it's less likely that developers will make cavalier changes to the shared kernel, so it should become more stable by default and should only be updated when truly needed by its downstream dependencies. And of course, you should do whatever you can to minimize the things your shared kernel code depends on, since it's going to be depended on by most of your applications. Keep it lightweight and don't depend on anything from it that you can avoid.

Show Resources and Links

Domain-Driven Design
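To make the versioning mechanics concrete, here's a sketch using the Acme.SharedKernel name from the episode; the pack/push commands and feed location are illustrative assumptions about your setup:

```xml
<!-- CI publishes each shared kernel update as a new package version, e.g.:
       dotnet pack -c Release /p:Version=1.0.1
       dotnet nuget push Acme.SharedKernel.1.0.1.nupkg -s <your feed or file share>
-->
<ItemGroup>
  <!-- Project A's csproj opts into the new version... -->
  <PackageReference Include="Acme.SharedKernel" Version="1.0.1" />
  <!-- ...while Project B's csproj keeps Version="1.0.0" until it's ready. -->
</ItemGroup>
```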
Oct 1, 2018 • 9min

Applying Pain Driven Development to Patterns

Applying Pain Driven Development to Patterns

This week we talk about specific ways you can apply my strategy of Pain Driven Development to the use of design patterns. This is an excerpt from my Design Pattern Mastery presentation that goes into more detail on design patterns.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

I talked about Pain Driven Development, or PDD, in episode 10 - check out that episode first if you're not familiar with the practice. I've recently been focusing a bit on some design patterns. An easy trap to fall into with design patterns is trying to apply them too frequently or too soon. PDD suggests waiting to experience pain while trying to work with the application's current design before you attempt to refactor to improve its design by applying a design pattern. In this tip, I'll walk through a few common steps where applying a specific pattern may be helpful.

To begin, let's assume we have a very simple web application. Let's say it's using MVC, and there's a controller that needs to be used to return some data fetched from a database. It could be an API endpoint or a view-based page - the UI format isn't important in this case. The absolute simplest thing you can do in this situation is hard code your data access code into your controller. So, assuming you're using ASP.NET Core and Entity Framework Core, you could instantiate a DbContext in the controller and use that to fetch the data. This works and meets the immediate requirement, so you ship this version.

A little bit later, your application has grown more complex. You have some filters that also use data, along with other services. You start to notice occasional errors from EF and realize that you've introduced a bug. By instantiating a new DbContext in each controller, but occasionally passing around entities between parts of the application, EF gets in a state where entities are tracked by one instance but you're trying to operate on them with another instance of DbContext. You need to use a single EF Core DbContext per web request, which is to say it should have a "Scoped" lifetime. Fortunately, ASP.NET Core makes it very easy to achieve this by configuring your DbContext inside of ConfigureServices. In fact, if you don't read the docs, you probably don't even know what lifetime EF Core is using, because it's hidden within an extension method.

In any case, once you configure DbContext in ConfigureServices, you need a way to get it into your Controller(s). To do this requires the Strategy pattern, covered in episode 19. If you're familiar with dependency injection, you've used the Strategy pattern. Add a constructor to your Controller, pass in the DbContext, and set a private local field with the value passed into the constructor. Do this anywhere you're otherwise newing up the DbContext. Remind yourself 'new is glue'. You just fixed an issue with too tight of coupling to the instantiation process by using the service collection built into ASP.NET Core, an IOC container, essentially a factory on steroids. Your EF Core lifetime bug is now fixed, so you ship the code.

Some more time passes, the application has grown, and now there are a bunch of controllers and other places that all have DbContext injected into them. You've noticed some duplication in how code works with the DbContext.
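Here's a minimal sketch of that constructor injection step; the controller and context names are hypothetical:

```csharp
public class SessionsController : Controller
{
    private readonly AppDbContext _dbContext;

    // The container supplies the scoped DbContext configured in
    // ConfigureServices; the controller no longer news one up ('new is glue').
    public SessionsController(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task<IActionResult> Index()
    {
        var sessions = await _dbContext.Sessions.ToListAsync();
        return View(sessions);
    }
}
```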
You've also found that it's tough to unit test your classes that have a real DbContext injected, except by configuring EF Core to use its In Memory data store. This works, but you'd prefer it if your unit tests truly had no dependencies so you could just test behavior, not low-level data access libraries. You decide that you can solve both of these problems by introducing the Repository pattern, which is just a fancy name for an abstraction used to encapsulate the low level details of your data access. You create a few such interfaces, implement them with DbContext, and make sure your Controllers and other classes that were directly using DbContext now have an interface injected instead. Along the way you fix a couple of bugs you discovered that had grown due to duplicate code that had evolved differently, but which should have remained consistent. When you're done, the only types that know about DbContext directly are your concrete Repository implementations.

Your application is growing more popular now, and some of the pages are really hammering the database. Their data doesn't change very often, so you decide to add some caching. Initially you start putting the caching logic directly in your data access code in your repository implementations that use EF Core, but you quickly find that there is a lot of duplication, and your once-simple repositories are now growing cluttered with a lot of caching logic. What's more, changing the details of what is cached, and how, requires you to touch and re-touch the repository types again and again. Your current approach is obviously violating both the Single Responsibility and Open-Closed principles, two of the SOLID principles. You recognize that you can apply the Decorator (or Proxy) pattern by moving the caching logic into a CachedRepository type, which you can choose when and where to use on a per-entity basis simply by adjusting the type mapping in your application's ConfigureServices method. With this in place, you're able to quickly apply caching where appropriate, and ship a better performing version of your application.

Over time, as you built out your repositories, you kept basic methods for creating, reading, updating, and deleting entities in one place. Maybe you implemented a generic repository, or used a base class. You were careful not to expose IQueryable interfaces from your Repositories, so their query details didn't leak throughout your application. However, to support many different kinds of queries, with different filters and including different amounts of data from related types, you found that you needed to add many additional methods and overloads. In addition to a simple List method on your Order repository, you needed ListByCustomer, ListByProduct, ListByCompany, not to mention ListWithOrderDetails and other variations. Some of your repositories were growing quite large, and included quite a bit of complex query logic, which wasn't always easy to reuse even between methods in the same repository.

To address this pain, you applied the Specification pattern, which treats each unique query as its own type. Using this approach, you were able to create specifications like OrdersByCustomer, OrdersByProduct, and OrdersByCompany, which included the appropriate OrderDetails if desired, or included an option to specify whether to include it. Your Repository implementations dropped down to just simple CRUD methods, with the List method now taking in a Specification as a parameter.
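As a sketch of what one of those specifications might look like - the interface shape and the minimal Order model here are illustrative assumptions, not the exact code from the episode:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }          // filter for Where()
    List<Expression<Func<T, object>>> Includes { get; }  // navigations to eager-load
}

public class Order
{
    public int CustomerId { get; set; }
    public List<OrderDetail> OrderDetails { get; set; }
}
public class OrderDetail { }

public class OrdersByCustomer : ISpecification<Order>
{
    public OrdersByCustomer(int customerId, bool includeDetails = false)
    {
        Criteria = o => o.CustomerId == customerId;
        if (includeDetails)
            Includes.Add(o => o.OrderDetails);
    }

    public Expression<Func<Order, bool>> Criteria { get; }
    public List<Expression<Func<Order, object>>> Includes { get; } =
        new List<Expression<Func<Order, object>>>();
}
```

The repository's List method can then apply Criteria via Where() and loop over Includes with EF's Include(), so each new query becomes a small new class instead of another repository method.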
Hopefully this helps you see how you can recognize a certain kind of pain, and respond to that pain by refactoring to use a specific design pattern. If you keep your code clean and simple, it's fairly easy to do this kind of refactoring as you need it, so there's no need to try and use every pattern you know speculatively as you begin a project.

Show Resources and Links

Design Pattern Library
Refactoring Fundamentals
Pain Driven Development
SOLID Principles
Sep 10, 2018 • 6min

How Do You Even Know This Crap?

How Do You Even Know This Crap?

This week we have a special guest offering a dev tip - please welcome Scott Hanselman, who blogs at Hanselman.com and has a great long-running podcast, Hanselminutes. Scott's going to share with us some tips on how you can leverage your experience to know when a problem you're facing should already have a solution somewhere. Here's Scott.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

You can view Scott's article on this topic, called How Do You Even Know This Crap, on his site.

Show Resources and Links

How Do You Even Know This Crap
Scott's Blog
Hanselminutes Podcast
Aug 27, 2018 • 7min

Layering Patterns on Repositories

Layering Patterns on Repositories

This week we're sticking to the patterns and repositories theme. I started down the design patterns path with Episode 17, so start at least from there if you want to listen to the sequence more-or-less in order. In this episode, we'll look at some combinations with other patterns that make using the Repository pattern even more attractive.

Sponsor - DevIQ

Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.

Show Notes / Transcript

Last week I mentioned how the Repository pattern works well with the Strategy pattern, so that you can use dependency injection with it. This is the first of many ways in which you can combine Repository with other patterns, and is probably the most powerful of them all (though probably taken for granted by many). I described the strategy pattern in episode 19. Let's look at two other patterns now that we can combine with Repository.

First, let's talk about a common pain point with repositories: custom queries. I talked about the need to encapsulate query logic so it doesn't leak out of your repository in episode 18. However, I saved a powerful technique for doing so until this tip. (Sorry, there's only so much I can put into each of these and keep them short.) If you follow my earlier advice and you don't leak query logic outside of your repositories, it's likely your repository implementations have a bunch of custom query methods. Maybe you have standard GetById and List methods, but then you also have ListByState, ListByModel, ListByOwner, etc. Maybe you have methods that correlate directly to business use cases or even UI concerns, like ListPremiumCustomers or ListForSearchScreen. The point is, you may find yourself with the code smell of too many one-off custom query methods on your repositories. This is pretty common, and the worse it gets the more cumbersome it becomes to work with the repositories.

The solution to this problem is to introduce another pattern. The Specification pattern is designed to encapsulate a query within an object. I mentioned it briefly in episode 24. It's especially useful when you're using an ORM tool like EF or EF Core because not only can you encapsulate filter expressions, but you can also specify which properties to eager load. Thus, you can create a specification for a shopping basket type that might be called BasketWithItemsByCustomerId or something similar. A typical specification I use will include the filter expression (to be used with a Where LINQ expression) and will let me specify which properties and subproperties I want the query to return with it.

What are the benefits of using this pattern? First, you eliminate duplication of query logic if you were previously letting client code create queries on the fly. Second, you establish a library of known queries that your development team can review, reuse, and discuss. These should be organized so they're extremely discoverable, so there's minimal need to try and reinvent the wheel when someone needs a particular query that already exists. They also help clean up your repositories, eliminating most of the scenarios where you would need non-generic repository methods, and thus dramatically reducing how many repository implementations you need to write and maintain. Your repository code will better follow the Single Responsibility Principle and the Open/Closed Principle, and you won't need a bunch of custom IWhateverRepository interfaces.
Finally, you can easily unit test your specifications' filter logic to ensure it's correct and provide examples for the team.

Another useful pattern you can use with repository is the proxy or decorator pattern to add caching. I call this the CachedRepository pattern and I've written a number of articles about it. I mention both patterns because they're functionally the same, but differ based on intent. A proxy controls access to something. A decorator adds behavior to something. A CachedRepository controls access to the real, underlying repository, exposing it only when the result isn't in the cache. In this way, it's a proxy. But it also is responsible for adding caching behavior to any repository. In this way, it's a decorator. Either way, it's an extremely useful pattern.

Most applications make a lot of queries to their database for results that don't change frequently. A lot of applications use a database to define some or all of their navigation, or the contents of common dropdown lists on forms. These and other common results are great candidates for caching, but often this behavior isn't added because of the work and complexity involved. Adding caching to a method in a data access repository isn't ideal, since it couples two unrelated concerns and breaks the single responsibility principle. It's also not very reusable. A better approach is to create a generic CachedRepository that can be used for any type that would benefit from caching. Determining whether or not to use this caching functionality can be controlled centrally for the application wherever its services are configured.

Circling back around to the specification pattern, you can combine it with the CachedRepository to help with key generation. Every cache entry needs to have a unique key, and you need to take care when constructing keys that you take into account any variables or parameters that were used for a particular query. Your specification objects know exactly which parameters they require, and can easily expose a cache key property that can be used by your CachedRepository. You can also add a property to toggle whether certain specifications should be cached at all, if that's helpful.

If you'd like to see what this looks like in a simple sample application, check out the eShopOnWeb sample on GitHub. I have a link in the show notes. There's also a free 110-page eBook that goes along with the sample that I encourage you to check out. I developed both the book and the sample for Microsoft as a free resource and they're both up-to-date with .NET Core 2.1 as of July 2018.

Do you think your team or application could be improved by better use of design patterns? I offer remote and onsite workshops guaranteed to improve your coding skills and application code quality. Contact me at ardalis.com and let's see how I can help.

Show Resources and Links

Repository Pattern
SOLID Principles
Specification Pattern
Building a CachedRepository in ASP.NET Core
Introducing the CachedRepository Pattern
Building a CachedRepository via Strategy Pattern
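Here's a minimal sketch of the CachedRepository idea combined with a specification-provided cache key; the type names, the CacheKey property, and the choice of IMemoryCache are illustrative assumptions (see the articles linked above for the author's full implementations):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// IOrderRepository, Order, and ISpecification<T> (with a CacheKey property)
// are assumed to be defined elsewhere in the application.
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner; // the real EF-backed repository
    private readonly IMemoryCache _cache;

    public CachedOrderRepository(IOrderRepository inner, IMemoryCache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public Task<List<Order>> ListAsync(ISpecification<Order> spec)
    {
        // The specification knows its own parameters, so it exposes the key.
        return _cache.GetOrCreateAsync(spec.CacheKey, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _inner.ListAsync(spec); // only hit the database on a miss
        });
    }
}
```

Whether a given entity gets the cached wrapper or the plain EF repository is then just a type-mapping decision where the application's services are configured.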
