We use DDD at the company I currently work for, and to be honest, I detest it so much that it sometimes makes me wonder whether I even want to stay in the programming space (I've been at it for 20 years).
Don't get me wrong, DDD has meaning and purpose, but some companies apply it as a badge to be earned instead of pondering the question: do you really need to rewrite everything following DDD?
In our case, simple CRUD APIs that in "regular programming" might take a couple of 200-line files have turned into unmanageable nightmares in DDD that take at least a couple of days of really intensive investigation to understand, because each one has been divided into more than 25 files that hold 3 or 4 lines of code at most, with so many abstraction layers that it's impossible for the best of us to follow in one go.
Now, you could make the argument "You Are Doing It Wrong(tm)" but since I'm just a drone in this specific scheme and there's no wiggle room for anything (the team is quite inflexible on this) I have to follow it to the letter.
Just giving my two cents. Again, I'm not disparaging DDD; it has its purpose, but in my opinion it's for very specific projects.
Tactical/technical DDD patterns should only be used for parts of the code where there is a lot of business agility required, so the behavior of your code changes a lot, and you have a tight feedback loop with your business unit.
Your story sounds like they implemented a "technical DDD top-level architecture" (TM), whatever that may be. (I'd assume layers of abstractions coupled with logic spread all over the place, without any added benefit.)
You see this a lot when people read some stuff about DDD and start experimenting with the technical/tactical patterns, because that is the aspect that makes the most sense to a technical audience.
In reality the tactical/technical DDD patterns should only be applied in the core part of your business (i.e. the thing that gives you a strategic advantage over your competitors), because that typically needs to change a lot, so having a common language/model with the business tends to be worth the extra upkeep required when opting for more flexible models.
Identifying what the core part of your business is (most likely it's not authentication, billing, invoicing, content management, ...) is one of the more important (and most difficult) aspects of DDD.
Right, communicating with the business is actually the core message of DDD. It could almost be called “anthropological design”, since “domain experts” is a synonym for the non-technical users of your software (the domain is the business domain, i.e. whatever your software is trying to do for them). The message is that you have to observe your users in their natural habitat.
Let me put it this way, when Twitter started out, they did not have tweets. They had posts, and the act of posting to Twitter was called twittering. They were not associated with birds (actually more with whales lol). The idea of birds and tweeting actually came later with a third-party client interacting with their API.
Eric Evans, back in the early aughts, makes a big splash with this outrageous statement, where many of us graybeards would instead say “if it ain't broke don't fix it”: Eric Evans would recommend that the posts table in the database be renamed to the tweets table. Version 2 of the API should not reference “posts” or post_ids, but rather tweets and tweet_ids.
Why?! Those sorts of migrations are painful and clumsy! Yes, Eric says. (He is not stupid.) Maybe it's a lost cause. But, Eric remarks on two things:
1. There is no reason to believe, given software’s previous performance, that any amount of upfront planning is going to generate the most consistent useful model before the software is built and we can interact with it. So you're going to want to iterate. What are the systematic obstacles to renaming the table and the API, and can we overcome them so that we can do lots of little experiments?
2. Something else that is clumsy and painful is when your users come to you reporting a problem with Widgets or whatever, and you go off and fix the FactoryService to add some new functionality to widgets, tell the user that their problem is fixed, and they go and do the thing again and run into the same problem they ran into: “it's not fixed yet!”. Why did this happen? One big reason is that the word “widget” means something different in the database versus the backend, or in the backend versus the frontend, or in the frontend versus the real world. Say Twitter gets some other notion of “topics”, rolls it out, and everyone starts calling them “posts”: now the topics table holds posts and the posts table holds tweets, and you're forever looking for “posts” in the wrong table.
So, you should rename the table because first, this should be a possible thing for you to do, and building up that sort of leverage is going to pay dividends later; and second, reducing the friction between the way we developers speak and the way our users speak is going to pay dividends too.
This anthropology is kind of the core part of DDD. I don't understand why people treat DDD as a set of design patterns rather than saying that it's the users who unwittingly dictate the design, as we redesign around them to reduce friction.
It's similar to, I don't understand why people find it hard to draw context boundaries in DDD. So bounded contexts are a programming idea, in programming we call them namespaces, they exist to disambiguate between two names that are otherwise the same. DDD says that we need to do this because different parts of the business will use the same word to refer to different things, and trying to get either side of the business to use some different word is error-prone and a losing proposition. So instead we need namespaces, so that both of our domain experts can speak in their own language and we can understand them both because in this context we use this namespace, in that context we use that namespace. So: where do you draw the boundary? In other words how big should your modules be? (Or these days, for “module” read “microservice”.)
Simple: you partition users into groups, based on the sorts of things that they seem to care about when they are interacting with the system, and the different ways that they talk about the world. The bounded context is not an “entity” or a “strong entity” or a service-discovery threshold, rather it is an anthropological construct just like everything else in DDD. “The people in shipping care about this for one reason, the people in billing care about it for another, they don't usually talk to each other, but I guess sometimes they do...” sounds like you've got a shipping module/microservice and a billing module/microservice. The boundary is the human boundary.
Similarly for “should I use events or RPC?” ... Does someone from shipping ever come up to the billing department and say “The delivery costs a ton more because XYZ, the customer said they preferred to pay more rather than cancel the order, I am gonna stay here in billing until this critical task is complete,” or whatever, or would they prefer an asynchronous process like email, “we will just put it on the shelf until we can pay to safely ship it.” Different industries would have different standards here! If it's something that has no shelf life, that delivery does not want to keep in the shelves for one second longer than it has to, then that drives the different behavior. Only way you can know is by observing your users in their natural habitat.
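To make the namespace idea concrete, here's a toy Python sketch (all the names are invented) where “Order” legitimately means two different things in two bounded contexts, and neither side has to give up its word:

```python
from dataclasses import dataclass

# Each bounded context gets its own namespace (here simulated with
# container classes; in a real codebase these would be separate modules
# or services). Both contexts use the word "Order" in their own language.

class shipping:
    @dataclass
    class Order:
        order_id: str
        weight_kg: float      # shipping cares about the physical parcel
        destination: str

class billing:
    @dataclass
    class Order:
        order_id: str
        amount_due_cents: int  # billing cares about money, not weight
        currency: str

# The same real-world order, seen through two different models:
parcel = shipping.Order("o-1", 2.5, "Buenos Aires")
charge = billing.Order("o-1", 120_000, "ARS")
```

The shared `order_id` is the only point of contact between the two models; everything else is local to its context.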
I'd advise people new to DDD to involve themselves with understanding the Strategic Design parts first: the rationale, the pros and cons of using it, etc. And stay well away from the tactical patterns until you know the role of DDD wrt the objectives you want to achieve. Strategic vs. tactical are completely different concerns. The problem with a lot of DDD information is that people tend to dive into the tactical way too early and introduce all kinds of architecture that is not needed. Note that the result of strategic analysis may well be the conclusion that a simple CRUD design is the best way forward (for a subdomain or even the entire project). The non-technical domain understanding is most important, and it also helps heaps in keeping your (non-technical) stakeholders in the loop throughout the development process.
I found this book very good in that regard. The first half of the book is on strategy and emphasizes its importance. It also makes it clear that only a subset of your system is suitable for DDD (for example, not the CRUD bits). I also found it much clearer and less verbose than the Evans book.
> On the other hand, many of them have little to do with the basic premise of DDD.
Honestly, I see no relation at all. Those technical patterns can be applied the same way whether you model your code after the domain or not, and you don't need them at all to model your code after the domain.
Good post. The technical patterns are nigh useless unless you're going all in on event sourcing and/or CQRS - and perhaps even then. KISS > DDD.
To me, what matters are the strategic patterns, i.e. how you think and talk about your domain. What a lot of (microservice) software gets wrong is bounded contexts and APIs, which can be improved through event storming (discussing what kind of things happen in your domain) and context mapping (how that [sub]domain interacts with others). And then there's ubiquitous language, or calling the thing what it is to the business, not some programmer gobbledygook.
I'm an Argentine Spanish-English bilingual programmer. I try to program exclusively in English, but when developing integrations to Argentine systems I often have to fall back to spanglish code.
In general it's preferable to use only English, because the code is more readable (the language's keywords and APIs are in English), but also to maintain consistency with noun-adjective order, verb tenses, etc. Some local concepts can be easily translated (or at least it would seem so), like invoice-factura ... but then inevitably I arrive at concepts for which there is no obvious translation (I could make one up, but it would not be clear enough for others) or for which the translation doesn't exactly line up.
I think this is common all around the world. For example, the more my teams get close to money and regulations the more we have to use words of our own language. It would be pointless trying to invent a translation of very precise legal words for concepts that exist only in our own country.
Any tips on how to persuade a stakeholder, senior leader, or your team lead that their area of focus is not a core part of the business, without them feeling defensive about their status within the company?
Any tips on how to do DDD when every area is considered a core priority of the business because uncomfortable conversations are hard?
"There are some benefits to what you're proposing. My primary concern is cost. I see no technical difficulties with doing it <this other way>, and one of the benefits of that is that you'd have it ready in a tenth of the time. If I'm wrong, we can always change our minds and build it your way. That would barely take any additional time at all, since what I'm proposing is so simple to begin with.
"So, what do you say? Would you like to try to get it done by November, or should we labour over it until next June? If I understand the business figures right, if we can get it out in November, we'll make $6,000,000 more – and it would all be because you made the right technical decision here."
> "So, what do you say? Would you like to try to get it done by November, or should we labour over it until next June? If I understand the business figures right, if we can get it out in November, we'll make $6,000,000 more – and it would all be because you made the right technical decision here."
... <End of Year Review>
"We were able to get this out by the end of November, thanks to your good technical decision. $6,000,000 saved! Congrats and great job. Accordingly, you've qualified for an annual bonus of a $100 Amazon gift card and the default 1.5% raise."
> Any tips on to persuade a stakeholder , senior leader, or your team lead that their area of focus is not a core part of the business without them feeling defensive about their status within the company?
To get buy-in you have to provide value for them that aligns with their needs. If they desire a delusional feeling of outsized importance within an organization then you need to be quite creative. It's more likely that their needs are simpler though. They need to feel they are getting value from you, even if the other parts of the business hold more of the cards.
Try to find simple things to fix for them, choose to build out features with their input, and when building new features for the main stakeholders try and prioritize items which help multiple stakeholders. This is good practice in any case because in the long-term things can change quite significantly within organizations and you don't want to be perceived as only an ally of some within the business.
UPDATE: Beware of quantifying too much. It really impresses some people but will make others feel very small. You may completely lose the connection with one of the smaller stakeholders by quantifying every detail and calling out big numbers around the large stakeholders. For them you need to work qualitatively for the most part. If you want to use numbers, pilot something with a small stakeholder and deliver a 25% increase in their sales (or a cost reduction). If the big stakeholders can get a similar percentage, that translates to big numbers, and it feels like a big number to both groups.
I have a few ideas, in fact I gave a talk about that a long time ago [0], but I think my friend Marijn gave a good suggestion on Twitter [1]: don't sell DDD, but fix the problem your boss has.
I think the question was more how to sell not-doing-DDD.
Reasonable people (including DDD advocates) clearly understand that tactical patterns can easily be misapplied, and that the strategic part of it is more important, but inexperienced programmers jump in on the "new" fad (it's not even that new, which is most perplexing to me), and misapply all the patterns they possibly can.
I see there's a lot of similar sentiment here, so I think the question really is: how to convince inexperienced "converts" where the right boundary of applying tactical DDD solutions is?
One cause of misapplying tactical patterns is learning. When people start learning something, they do it badly and in inappropriate contexts. The solutions to this are:
A. Don't learn things.
B. Dedicate some amount of time to learn something in a sandbox before using it on the job.
C. Once you've learned something well enough to see the error of your ways, dedicate some amount of time to clean up your old work.
I know A is tongue-in-cheek, but I think you underestimate humans' ability to apply things correctly by learning from good learning materials.
B and C are a single thing, and unfortunately, C does not happen, which is why any particular methodology gets a bad rep. And it's obviously already happening with DDD (judging by the polarized sentiments around here).
And finally, while I do believe abstraction is the ultimate tool of the human mind (and mathematics is the purest form of abstraction we are capable of), I do not think it suits all brains equally, and not everybody will be equally capable of ever getting the right understanding. Basically, your architecture can be _too smart_ if you are looking to hire actual, real-world developers and software engineers, and have them be efficient.
> Your story sounds like they implemented a "technical DDD top-level architecture" (TM), whatever that may be. (I'd assume layers of abstractions coupled with logic spread all over the place, without any added benefit.)
Think about for example a planning component for hospital beds... There are a lot of parts that are really straightforward to implement, but for these planning components it might make more sense to develop an in-house component. (Assuming existing constraint solvers and/or rule engines are not a viable solution for you in this particular scenario.)
If your business is talking about updating/deleting/inserting data, you are not describing the actual reasoning behind the change. For DDD, it makes sense to figure out why exactly you are doing the things you do, and model these explicitly in your systems for those parts that matter.
The stereotypical example of this is an address change: if you model it as an "AddressUpdated" event, you might as well use CRUD, as it does not specify the intent of the change.
You could change an address because it contained a typo, but you can also change it because someone moved. These might lead to different outcomes, so in DDD you would typically model these as "AddressTypoCorrected" and "ContactMoved".
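A minimal sketch of what that looks like in code (the event names come from the example above; the handler and its rules are invented for illustration):

```python
from dataclasses import dataclass

# Two events carrying the same data but expressing different intent.
@dataclass
class AddressTypoCorrected:
    contact_id: str
    new_address: str

@dataclass
class ContactMoved:
    contact_id: str
    new_address: str

def apply_event(record: dict, event, notifications: list) -> dict:
    """Same payload, different intent, different outcome."""
    if isinstance(event, AddressTypoCorrected):
        # Silent fix: the contact never actually moved, nobody is notified.
        return {**record, "address": event.new_address}
    if isinstance(event, ContactMoved):
        # A real-world move: downstream contexts (shipping, ...) may care.
        notifications.append(f"contact {event.contact_id} moved")
        return {**record, "address": event.new_address}
    raise TypeError(f"unknown event {event!r}")
```

With a single "AddressUpdated" event, the decision of whether to notify anyone would have to be guessed after the fact; here it falls out of the model.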
There are other fine-grained aspects, for example the need of an identity for an object: physical money is considered a value object (so it has no specific ID per instance) in almost all contexts, unless you are the national bank: all of a sudden the identity (serial number of the bill) does matter, so the same "thing" might have different "models" within different parts of the business.
Other examples might be value objects for specific areas, for example weight. Typically weight starts out as a number, and all of a sudden there might be a need to add a precision, mark it as an estimate, or have it "unknown". In order to avoid if-statements all over the code, you construct a "weight value object" that properly manages all of these peculiarities in a single place (i.e. what's the result of an estimate+undefined etc.)
You clearly have some positive experience with DDD, and I'm definitely not trying to say that DDD is broken by design or anything like that.
I'm sure there are successful and maintainable projects that utilize this design approach.
Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edge cases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.
It might just be a difference of character at the end of the day, because I do agree that what you write sounds great. I just see it more like the triangle in the CAP theorem, where the corners are speed, abstraction/extensibility, and stability/consistency.
> Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edge cases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.
This example clarifies the intent behind DDD: make the implicit explicit and make sure there is awareness about all the edge cases.
It might be as simple as contacting a user in case you have an uncovered edge case, but at least you'd be aware that your system is unable to handle edge case X. (In non-DDD scenarios this would just be a bug that emerges - implicit behavior.)
Yes! When the project becomes large enough, a lot of value loss can be prevented by discovering edge cases before implementation, and DDD practically forces that to happen. Fewer things discovered by devs means fewer design cycles, which means less effort lost in design and implementation, as long as proper grooming is done so that things are implemented in order of customer priority.
> Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edge cases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.
This line of reasoning cuts both ways: how many bugs and inconsistent behaviors often pop up because developers rushed to write code without gathering enough requirements on the domain, how many productivity problems are caused by growing the system by accretion where it can, and how many rewrites were required just to fit the system's domain to the problem and shed technical debt from the accretion.
I'm sorry you're having that experience. DDD is specifically aimed at tackling complexity, as it says on the cover. Part of the problem is that complexity is relative to the observer, how experienced they are in that particular domain, etc. Good abstractions make complexity manageable, bad ones create more complexity. And that's another problem: a domain might be quite straightforward but bad explanations, missing information, bad abstractions, etc can make it seem more complex.
Your colleagues need to remember that DDD is supposed to be applied pragmatically. If the structure causes more navigation work than needed, simplify it. If the problem could be solved with a simple CRUD system, do that. If most of the problem is CRUD, but there's one particularly complex bit that changes a lot and requires a lot of flexibility, isolate that part, so that the simple and complex parts can have a simple integration, don't leak into each other, and can evolve at their own speeds.
I think the main problem is that the Blue Book (Domain-Driven Design) contains mostly technical advice. If my memory is correct, only one of the last chapters is about the organizational aspect.
Implementing Domain-Driven Design, on the other hand, is a lot better about this, surely because it was written 10 years later. Most people should start with it.
But anyway, when people say they're using DDD, if they can't point you to some domain experts, a dictionary of the ubiquitous language, or a mapping of what they do, they're not using DDD.
> I'm sorry you're having that experience. DDD is specifically aimed at tackling complexity, as it says on the cover. Part of the problem is that complexity is relative to the observer, how experienced they are in that particular domain, etc.
I'd say part of the problem is that DDD critics conflate DDD with overly complex, enterprisey models that don't match their personal preferences on the acceptable tradeoffs between complexity and correctness.
When DDD comes across as too much work implementing too much complexity for too little value, they flag it as a concern.
What I believe is missing from this discussion is the scenario where DDD practices are not followed and consequently teams are forced to iterate and reimplement projects, or parts of them, just to fit requirements that emerged because some aspects of the domain model weren't looked into. Design by accretion is largely accepted, as is technical debt, but they do have a cost.
You make quite a point here. That is the advantage I see in DDD: you have a dependency on a service that may change, and DDD will make your life easier when switching; otherwise there will be weeks of rewriting code. And what happens during transition times (thinking about a CRM, for example) when you have to stay connected to both systems?
This said, in my situation it's like trying to kill flies with a muon cannon (FYI, this gun is fictional and exaggerated to drive the point home): it's cool'n's*it, but the same could have been done with a newspaper or your hand. To maintain the muon cannon you need the entire Fermilab team; your hand... well, it's your hand.
>I'd say part of the problem is that DDD critics conflate DDD with overly complex, enterprisey models
Every text I've ever read on DDD has made it pretty clear that these patterns are at the very heart of what it is. I've never seen one that says "look, all this stuff is optional, write your software however, here's how to really get to grips with the domain model".
I don't think it's the critics conflating. It is what it is.
I read a bit about DDD but never really went in-depth with it, like I never read the book or anything.
Instead I just try to absorb the major takeaways that I got from what I've read:
1. Bring in people with domain knowledge to help you understand the expected behavior of the system.
2. Try to establish a consistent language that is used in both verbal conversations as well as code.
I feel like those are good, easy-to-understand principles, and I've never understood why the whole DDD space is taken up by this insanely complicated terminology and theory. It's so off-putting.
This is exactly spot on and my go-to approach as well. I always thought of myself as a huge fan of DDD, simply because I think it makes so much sense to discuss the architecture directly with the stakeholders until everyone, technical and non-technical alike, really agrees about the existence, relations and constraints of the entities: using a common, ubiquitous language, giving the same things the same consistent, common-sense names, and giving different things different names.
It's a godsend for the top-down inside-out approach of requirements engineering that I like and teach.
Then I met some people at the actual local DDD meetup group and was shocked that that part made up effectively zero percent of their discussion; the rest was taken up by talk about adapters, hexagonal architecture, and all kinds of artificial design patterns that cultivate complexity and self-importance. I've been careful about calling myself a DDD advocate ever since.
IMHO DDD went through the same unfortunate descent that Agile did: An originally really great idea and common-sense approach that went on to get bastardized into a cargo cult by coaches who like to produce sheets for BS bingo.
I also had the unfortunate experience of working with some die-hards who believe DDD dictates that "the code and only the code should reflect the business requirements 1:1", with emphasis on the "only". This means that things that are often configurable in other systems are instead hardcoded and scattered throughout the codebase. In early-stage startups this is a death knell.
I signed for a course on DDD thinking it will be about Data-Driven Design. It was about Domain-Driven Design and it's like the exact opposite.
Data-DD focuses on data and keeping it simple. Domain-DD is another one of these architecture astronauts fads where you introduce new abstractions and then spend most of the time wondering whether a penguin is a bird or a fish for the purposes of your application.
> Domain-DD is another one of these architecture astronauts fads where you introduce new abstractions and then spend most of the time wondering whether a penguin is a bird or a fish for the purposes of your application.
That doesn't sound right. Domain-driven design is just a technique to design a data model that fits your application domain, and to design the whole application around that data model.
To put it differently, with DDD you first define the data structures, and afterwards the application is just operators over said data structures.
The only reason why in DDD you would care about whether "a penguin is a fish or a bird" is if your businesses required handling fishes or birds in fundamentally different and incompatible ways, which would require you to write special purpose code that crossed all layers.
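As a minimal sketch of that ordering, with invented names: the structure first, then the operators over it:

```python
from dataclasses import dataclass

# Hypothetical domain model: the data structure comes first...
@dataclass(frozen=True)
class Invoice:
    number: str
    line_totals_cents: tuple[int, ...]

# ...and the application is just operators over that structure.
def total_cents(invoice: Invoice) -> int:
    """Sum of all line totals, in cents."""
    return sum(invoice.line_totals_cents)

def is_empty(invoice: Invoice) -> bool:
    return not invoice.line_totals_cents
```

Nothing here requires repositories, factories, or any other tactical pattern; the model/operator split is the whole idea.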
It sounds like you've been experiencing a kind of analysis paralysis with the DDD tag.
> Domain-DD is another one of these architecture astronauts fads where you introduce new abstractions
Reading this, I guess you followed the wrong course: one that shoved Tactical Patterns down your throat without explaining well enough why you should use them. DDD at its core is not about architecture, but about *common understanding* of the domain, *before* you start coding and hacking on a particular architecture. The domain understanding should help you make appropriate architecture decisions, but in no way dictate which architecture designs to use.
> it has been divided into more than 25 files that hold 3 or 4 lines of code at most, with so many abstraction layers that it's impossible for the best of us to follow in one go.
When you put engineers in charge you get overengineering and when you put managers you get underengineering.
This is so true, and has been forever. In the early 90s I worked on a system where you couldn't just write structs, rather you had to submit their definition to a guy who entered the details into a database, and there was a daily run to generate the C header files from that database. To this day I'm convinced the only reason it was done this way was that it could be done this way.
Get experienced senior engineers; they have probably seen both, and in the best case they have worked on a middle-ground project, so they've experienced both how to do it and how not to do it.
I think you need to have a few of those "I've used this pattern and had to stay up 24 hours to meet a deadline because shit was way too complicated for the delivered value" moments to instil a healthy fear of overcomplicating solutions.
I've worked with developers with 5+ years of experience who haven't gone through that (either corporate culture allowed them to deliver minimum value in 5 days, or they jumped projects before things got to the WTF stage of complexity). It's hard to learn if you never get burned.
At the moment, there's no way out; planning stipulates that all our microservices must be rewritten to accommodate this super-abstract DDD design (and you're right, it was an engineer who created our current layout).
> When you put engineers in charge you get overengineering and when you put managers you get underengineering.

Is there a way out?
The whole idea is to bring them together "in the same room" and allow common understanding of the domain so they stay on the same page throughout the project. Then they'll do each what they are best at (managers whip up glossy slides, and devs crank out reams of code ;)
I noticed something similar. A minimal PR to introduce DDD into our codebase ballooned it by something like 1,000 lines, scattered all over the place. I think it would have ballooned it by about 12-15,000 lines in total if we'd used it everywhere. That would have been fertile breeding ground for bugs.
The ideas made sense for logically complex code that required frequent refactoring, but the strict separation between all the different layers simply led to a lot of code in most instances. Far too much.
We also couldn't agree on where the limits of the bounded contexts really lay. Most documentation on this issue is a mere handwave saying "you figure it out" or "it'll become clear when you do these exercises with the business" (it didn't), which is odd given how vitally important it is and how damaging it is to bound the wrong things.
> We also couldn't agree on where the limits of the bounded contexts really lay. Most documentation on this issue is a mere handwave saying "you figure it out" or "it'll become clear when you do these exercises with the business" (it didn't), which is odd given how vitally important it is and how damaging it is to bound the wrong things.
This is the hardest part of software design. No wonder there are no clear cut rules on how to do it. You have to be both a domain and implementation expert to get the boundaries right on the first try.
Yeah, it is tricky. I've rewritten many a code base because I drew the original boundaries in the wrong place and their logical locations only became clear in retrospect.
I'm not so convinced that it's something you can get just by better communicating with "the business" either.
This is partly what infuriated me so about the extreme amount of code required to follow DDD patterns - 3-4x the amount of code means 3-4x the cost of that rewrite if you get the boundary wrong.
I find the original tactical DDD patterns as useful as the gang-of-four OOP patterns these days. Modern languages made the latter irrelevant. Modern DDD practice emphasizes getting the strategic aspects of DDD right: language and boundaries.
Doesn't matter if your code has a type named `Aggregate` in it. Matters if you get your consistency boundaries right.
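To make "consistency boundary" concrete, here's a minimal sketch (the names and the credit-limit invariant are hypothetical, not from any of the books): an aggregate root guards an invariant so that every change to its contents is checked in one place, rather than scattering the rule across callers.

```python
from dataclasses import dataclass, field


@dataclass
class OrderLine:
    sku: str
    qty: int
    unit_price: int  # cents


@dataclass
class Order:
    """Aggregate root: all changes to lines go through it, so the
    invariant (total never exceeds the credit limit) holds atomically."""
    credit_limit: int
    lines: list[OrderLine] = field(default_factory=list)

    @property
    def total(self) -> int:
        return sum(line.qty * line.unit_price for line in self.lines)

    def add_line(self, line: OrderLine) -> None:
        # The invariant is enforced here, at the boundary, and nowhere else.
        if self.total + line.qty * line.unit_price > self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.lines.append(line)
```

Whether the class is literally named `Aggregate` is beside the point; what matters is that no code path can mutate the lines without going through the check.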
> I'm not so convinced that it's something you can get just by better communicating with "the business" either.
I don't have a good answer for this. I personally try to keep my modules small so that there's not more than a ~week's worth of stuff to redo if my understanding of the business (or the business itself) changes. I often fail too.
> Doesn't matter if your code has a type named `Aggregate` in it. Matters if you get your consistency boundaries right.
I tended to find a lot written about the former and very little written about the latter.
In fact I found essentially zero practical or actionable advice about getting the boundaries right beyond just "it's important to get it right". DDD doesn't appear to have a coherent opinion beyond that.
Not in the books nor in blog posts written about the topic.
It reminded me a bit of how scrum had a lot to say about standups (the color of that shed was well defined) but very little to say about refactoring.
I've found that separating around "abstraction levels", to use Clean Code lingo, really helps in defining good boundaries. It's heavily emphasised in Clean Code, and I find it the most important concept in the book.
Examples:
Are you writing to disk? Printing to the screen? Saving to the database? This is a different "abstraction level" than your data processing, those things should be separated by a boundary.
Are you doing hardcore math to procedurally generate a terrain? It shouldn't be on the same layer you push the triangles to the screen.
It's also about isolation: you want to control the Framebuffers in your 3D renderer? Don't let the Framebuffer class with OpenGL calls ever leak outside the renderer, even though there's encapsulation. Just use a data class to communicate between renderer/non-renderer code.
--
However, the part that is not really discussed in these books is that we should be vigilant not to "over-abstract". Sometimes you have a certain level of abstraction distributed over two or more classes only to mirror some kind of external structure or mental model, when in reality you want the code for those things to live together in the same class/method.
One example I run across a lot in 3D renderers is wrapping internal 3D library concepts like "vertex buffer", "context" and "framebuffer" in separate abstract objects, even when they really don't need to be abstract.
For example: "Open GL renderer" will only ever be able to call "OpenGL vertex buffer", "OpenGL context" and "OpenGL framebuffer", while Vulkan will only call the Vulkan equivalents. This means you don't need a "framebuffer" abstraction, you can have it on the same layer as the renderer. You might need a data-only, non-abstract "framebuffer" class to control it from the outside, though.
>However, the part that is not really discussed on these books is that we should be vigilant not to "over-abstract".
To be fair, I've never really seen any process/paradigm address this. I've mostly done it based upon gut feel - sometimes abstractions seem like too much of an imposition. Other times it feels critical.
DDD just says "do way too much. all the time".
I imagine one day there will be an abstraction calculus but software engineering aint there yet.
This is a bit different. You are talking about a domain where the developer is usually the domain expert. You know very well what "procedurally generated terrain", "3D renderer", "database", "screen" and "framebuffer" are. Even if you don't - good and unambiguous definitions are usually just a couple internet searches away.
Now imagine you need to encode behaviours for a system where domain experts use terms and jargon you've never heard before. Even worse, users of the same system coming from different departments use the same terms to mean different things. How do you draw the boundaries there? That's what the GP finds disappointing - there is no single guide or reliable process to jump into a new domain and get the boundaries right.
That's not how lines of code work - more lines can be clearer and easier to write. There are extremely concise mathematical proofs that require an extreme amount of time to produce, whereas a more verbose proof may take a simpler approach, one that involves less thinking.
If there were a way to produce bounded context boundaries by following a general pattern / algorithm, what would that be? I fail to see how they can be created usefully without lots of conversations with domain experts plus elbow grease. It's the part of software that remains entirely art and not science.
* Try to minimize the overlap between bounded contexts - keeping domain models loosely coupled.
* Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts.
I tend to find any process that defaults to "have more conversations/interactions" defaults to wheel spinning without some sort of specific plan about what those interactions would entail.
> Try to minimize the overlap between bounded contexts - keeping domain models loosely coupled.
What if this doesn't lead to a more accurate representation of the domain? What if two contexts are just coupled in the business, for good reasons?
> Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts
This is exactly the ambiguous advice that you are railing against. How do you know when to break down a context?
It's like this: building any system involving many actors and actions is hard; that has nothing to do with software. We're just digitizing the same patterns and behaviors that people have used to run companies for hundreds of years.
People want a playbook to be followed to arrive at a "perfect" domain model or architecture. I'm sorry, that sounds pretty far-fetched to me. It reminds me of how we first started thinking about computability theory, when David Hilbert proposed that we should be able to devise an algorithm that could decide the truth or falsity of any logical statement (the Entscheidungsproblem). Hilbert was one of the smartest mathematicians to ever live, and he was very confident that this could be done.
Well, Alan Turing, Kurt Gödel, and Alonzo Church (not slouches in their own right) all smashed that idea with various proofs. The truth can often be counterintuitive. I am sorry that the world is complex, I also wish it weren't so.
I think you misunderstand. I said that I thought that DDD would prescribe something along these lines. I am not endorsing this as a fully complete, usable process, I am saying that something like this is both possible and necessary for the paradigm to function.
It's a critical topic that is right at the heart of DDD. I researched up and down on this topic, and unlike "where to use a factory" the DDD community refuses to go into even as much detail on it as I just did with my half-baked comment.
I am not trying to "complete" DDD here. I think DDD should largely be consigned to the trash heap.
> This is exactly the ambiguous advice that you are railing against. How do you know when to break down a context?
This is a very important question. I will bite and give some guidance, because no one else is willing to give advice here. If your development team grows beyond 10 engineers, you need to split it, so that each bounded context has a complete team on it, becoming experts in the subject as well as in the software implementation. Each team should be able to work independently of other teams.
I've been replying to you in other threads. There is no answer to this question, it is an art. I understand that it's frustrating, but that doesn't make it any less true.
As for what the goal of the art is: the goal is to avoid linguistic and semantic ambiguities in the ubiquitous language. There is even a section in the book entitled "Recognizing Splinters Within a Bounded Context" where specific examples are given:
* duplicate concepts
* false cognates
If you have truly duplicate concepts across contexts, this is a symptom of the lines not properly being drawn, and perhaps a new, shared context is missing.
False cognates are the bread and butter of bounded contexts though - these occur when two areas of a business use the exact same word for something, but they mean slightly different things _depending on the business context_. The example given in the book is the notion of a "Charge," which the customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge, so if one "Charge" model were created, it would be more complicated because it would have to worry about all the different ways the teams use it. And even worse, sometimes they are used in _conflicting ways_. That is a semantic collision, creating ambiguity in the model.
This is what a bounded context is meant to address. Each department gets its own model, each with its own version of Charge. The code and data is fit for the specific business purpose it's serving, instead of having a one-size-fits all model that gets the job done, but is more complicated to use in all contexts.
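A rough sketch of what that looks like in code (the field names are invented for illustration, not taken from the book): each context keeps its own model of "Charge", with an explicit translation function at the boundary instead of a shared one-size-fits-all class.

```python
from dataclasses import dataclass
from datetime import date


# Invoicing context: cares about what was billed and when.
@dataclass(frozen=True)
class InvoicingCharge:
    invoice_id: str
    amount_cents: int
    billed_on: date


# Payments context: the "same" word, but a different shape entirely.
@dataclass(frozen=True)
class PaymentCharge:
    payment_id: str
    amount_cents: int
    settled: bool


def to_payment_charge(charge: InvoicingCharge, payment_id: str) -> PaymentCharge:
    """Explicit translation at the context boundary: only the fields
    the payments context actually cares about cross over."""
    return PaymentCharge(payment_id=payment_id,
                         amount_cents=charge.amount_cents,
                         settled=False)
```

The translation function is where the two vocabularies meet; everything on either side stays fit for its own department's purpose.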
Honestly curious, have you read the book? I still don't think it will give you what you're looking for in terms of a prescriptive formula for "doing DDD right," but there's quite a bit of guidance in there.
I'm familiar with the idea that "duplicate concepts" indicate that you should have a separate bounded context, from, I think Martin Fowler's blog? This is actually partly what I was referring to when I said hand waving.
It's conceptually similar to answering the question "How do I know where the borders of Germany lie?" by saying "ask the first person you see if they speak German".
It also conflicted with a process I followed, which was to essentially create a team glossary and agree to semantically disambiguate terms which had multiple different meanings (e.g. linux user/website user instead of just user) and even just "ban" the usage of terms which got overloaded too much.
(I discovered that semantic collisions didn't just present problems in code, it often prevented you and your team from having coherent conversations).
This could, of course, then put everything we touched as a team into the same bounded context. Or not...?
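For what it's worth, that glossary can also be enforced in code (a small sketch with hypothetical names): give each disambiguated term its own type, and the type checker stops one kind of "user" from standing in for the other.

```python
from dataclasses import dataclass


# Distinct types for terms the glossary disambiguated; a type checker
# then refuses to let a WebsiteUser flow into Linux-account code.
@dataclass(frozen=True)
class LinuxUser:
    username: str
    uid: int


@dataclass(frozen=True)
class WebsiteUser:
    email: str
    account_id: int


def grant_shell_access(user: LinuxUser) -> str:
    # Accepts only LinuxUser; passing a WebsiteUser is a type error
    # under mypy/pyright rather than a runtime surprise.
    return f"usermod -aG shell {user.username}"
```

This doesn't answer where the bounded context lies, but it does make the semantic collisions from the glossary impossible to reintroduce silently.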
>The example in the book given is the notion of a "Charge," which customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge
It sounds like they're essentially saying (not explicitly, but via assumption) that your software should follow Conway's law.
Nonetheless, this example screams "bug alert" to me, since assumptions made by departments (and, as a consequence software systems) about what they should care about are where the really nasty bugs lie - frequently driven by misunderstanding between departments about terms (e.g. what counts as a user).
In a typical pattern of buzzword adoption, your horrible architecture isn't DDD just because someone calls it so; it's just bad design.
In particular, pulverized source files and excessive abstraction layers are characteristic symptoms of dogmatic, value-oblivious impractical design: quite the opposite of thinking hard about a meaningful domain model in order to use it as a shared language.
> Now, you could make the argument "You Are Doing It Wrong(tm)"
I always hate these arguments - for me, whether a particular programming paradigm is 'good' or 'bad' for an organisation comes down to: "what will my least senior developer do with this?". If it tends to produce tangled nightmares, then it's not a good paradigm. It's about how the weakest link will use it, not the strongest ones.
I agree no paradigm is particularly great for beginners - OOP leads to junior devs separating their code prematurely/inaccurately, writing horrendous inheritance trees etc., functional programming can lead to some really opaque code that feels like you're trying to solve a puzzle when reading it. I do think good principles exist however:
- try to write code that can be easily unit tested
Well, I don't think anything changed. We already knew there was no Silver Bullet.
To paraphrase Brooks, no pattern or methodology is gonna make development "more productive, more reliable or simpler". All patterns and methodologies require a skilled practitioner, but someone skilled enough can even eschew them and still make good software, so in the end they don't really matter.
The point of patterns/methodologies is purely to facilitate communication between experts, not to guide.
In the end, good software is not about following recipes, especially complex recipes. It's a craft.
As for where do we go, what we need is the foundation for personally re-discovering and internalising those same methodologies and patterns, rather than blindly following them. Sweezyjeezy already enumerated some ideas above, and I have done the same in the past: https://news.ycombinator.com/item?id=27987498
Wait until the market stabilizes and new devs don't outnumber more senior ones 10:1.
Until then, we have to stick to patterns that are very simple, easy to teach and have wide pits of success. We also have to lean heavily on systems that are easy to replace and easy to refactor, and that can be managed by the few very experienced devs. That's why platforms like Ruby on Rails did so well, even though they can be divisive.
> for me, whether a particular programming paradigm is 'good' or 'bad' for an organisation comes down to: "what will my least senior developer do with this?"
I wish ideas like these were more prevalent or that, at least, people considered that in many cases the quality of a method or practice is not an intrinsic attribute, but a matter of suitability to a particular environment.
That's the problem I have with agile enthusiasts in general. I've seen agile methods work wonders on many projects, but I've seen them fail on many more, and it's completely OK to assume that the method has its assumptions/conditions/limitations. Enthusiasts, instead, blame the company and the practitioners for not having understood and applied it appropriately.
That's not a good heuristic IMO; it boils down to evaluating every item in terms of what a toddler could do with it. There are situations in which it makes sense, but you can't build a civilization like that. The correct way is to keep toddlers / "least senior developers" away from powerful/dangerous things, until they grow up to the point they can be taught to use them responsibly.
DDD has some great ideas about how to model things and talk about things. But it is not a concrete architecture prescribing a particular number of layers or lines of code.
I read a preprint of Eric Evans book before it came out, and then bought a copy when it got published.
As someone who had been in the industry 6-7 years at that point, it really resonated - he was describing modes of success and failure I had seen but didn’t really have names for. Much of the usefulness of the book was just to put names on these things.
What has happened to 'DDD' in the meantime surprised me. It never occurred to me from the original book that a methodology of strict practices could emerge from it. To me that wasn't the sense of it at all.
DDD doesn't prescribe any structure really, other than saying you should separate out different contexts and use unified terminology throughout the business, which I don't think anyone would argue against.
Are you talking about design patterns by any chance? Maybe things like repositories, adapters, small and focused service classes and the like?
Any engineering methodology that requires 500+ pages to explain should be hard avoided. The size of those DDD books always made me run away. Good engineering should always come back to KISS. Abstract only when it helps reduce complexity.
Don't get me wrong, I'm no DDD expert by any stretch.
I think the main topic and real difficulty of DDD is figuring out specifically where and how to separate out different contexts (the so-called 'bounded context' in DDD parlance).
Creating microservices is a good example. Where should the responsibility for a single microservice start and finish? What might the implications for scalability and extensibility be? What does this mean for data storage? What data will be shared or replicated between microservices and how will this be done?
Answering these kinds of questions is hard and has big implications for your teams and for your business.
DDD provides insights into how a business works for people who are not domain experts themselves and are tasked with translating business requirements to code. This insight helps make appropriate decisions about those "most critical aspects of software development", nothing more, nothing less. Whether you use factories or not is a much lower-level technical decision, far removed from the essence and rationale of 'doing DDD'.
That SO article is from '09, when 'big OOP' was all the hype. Hyping things to bigger proportions than they deserve is a problem in IT. We just saw it with 'Microservices', for instance. These hypes serve to overpromise what you'll get, and sow confusion for years to come. In that regard I hope that DDD will not climb the hype cycle again, and we'll stay calm and just use what works.
I think most important to realize that DDD is just another tool in your toolbox, and can be used alongside all / most of the other tools you already use. Event storming can be a nice way to quickly kick off the elaboration process, should the method appeal to you.
It predates Brazil, Office Space, Dilbert, etc. After reading this book, and observing management, it's hard to imagine it's not all deliberate. There's just something inherently evil in bureaucracy.
> ...you could make the argument "You Are Doing It Wrong™"
Ages ago my company's study group tackled Applying Use Cases: A Practical Guide. https://www.amazon.com/Applying-Use-Cases-Practical-Guide/dp... After all the monkey motion with UML, schemes, design patterns, etc, this book was like a clarion blasting away ignorance and ambiguity.
It was so clear. Do the use cases. Then directly derive the architecture from those use cases. Voila! Impossible to fuck up.
However. Young me learned a very valuable lesson.
Nothing is so obvious and virtuous and good that some whackadoodles cannot, will not comprehend it.
But why?
Obstinacy? Actual confusion? Inability to suspend disbelief? A white-knuckled, desperate grasp on prior beliefs? Fear? Moral and philosophical opposition? Refusal to concede control (power)?
I have no idea why.
Whatever the root cause, I've experienced these impasses so many times, I've simply given up.
I eventually learned to do whatever it takes to publicly appease the tyrannical gods of confusion, then do any actual work as able on the down low.
Much like most things engineers bitch about, the tool isn't to blame here (as you say yourself) - it's the wrong tool for the wrong job.
There's nothing about DDD that says you can't make a simple CRUD API, if that's all that is required.
DDD's principle value is one of ubiquity (sold as "Ubiquitous Language" but I posit that "Ubiquity" is more accurate) - does your code do what the organisation does, and vice versa?
Not just using the same terminology, but using the same workflow.
Now if what the organisation does is basic stuff, then your code should be basic, too.
If there is an asymmetry between what the org and code do.. there's pain.
DDD principles should only be applied to reduce complexity. If the problem space is really just CRUD applications, DDD modeling or engineering is a waste of time and money.
But here’s where you have to be careful. It’s still useful to go through things like Event Storming and modeling domains to understand the problem space before deciding that basic crud is good enough.
I've been coding for over 15 years now, and every time someone says "Nah, we only need CRUD", they end up eating their words a couple of months down the line. The net result: domain logic all over the place.
People tend to think that applying DDD implies a tremendous overhead in the code. That doesn't have to be the case. However, some tend to go overboard, and that's when I can understand the frustration.
> instead of pondering the question, do you really need to rewrite everything following DDD?
Anecdotal: I chatted with Eric Evans, author of the DDD book, at a conference once, and he stressed that DDD was only appropriate for certain parts of the system that called for it. I think he'd be as frustrated as you are by the situation you describe.
Parent poster also mentions in another post that they are also using Microservices.
I can see how excessively fine-grained division into Microservices can create a nightmare. I have seen it over and over. We need better guidance on how to break down services, and how much. Eric Evans touches on this https://www.youtube.com/watch?v=sFCgXH7DwxM but I still think he is too shy about giving advice. Microservices are valuable because they allow your teams to work autonomously, so you should roughly have one Microservice per team. It is not a hard rule at all, but it can help you see if your microservices are too granular: the number of developers should be about 5 to 7 times the number of Microservices. If you have fewer than 2 developers per Microservice, you are probably creating a maintenance nightmare.
This is exactly the state of a project I'm working on, but without DDD and with microservices instead. It is probably more of a general problem of complexity growth that happens when anything goes wrong with the architecture and isn't addressed properly, not just a consequence of bad DDD.
I'll bite: your team may say they are doing "domain driven design", but from your description they are not; you could just as well claim to be training alligators and be as correct.
However, you _are_ correct to say that DDD has a very limited use - as does the domain model pattern itself. Martin Fowler even calls this out in Patterns of Enterprise Application Architecture, noting that it is necessary only when there are "complex and ever-changing business rules". Most business systems are effectively multi-player Access databases with "a few sums and not-null checks", and thus should not use a domain model, and thus should not use a technique aimed at designing and validating a domain model.
Python, FastAPI... so you can imagine how something as simple as a CRUD quickly becomes impossible to manage as soon as you ignore all that and start creating repositories, queries, use cases, etc. for a single endpoint.
I agree with you that a minimal implementation should handle it but... someone had wet dreams with DDD and everything has to be DDD now :)