
Let's say there's a class Book that is modelled differently at two endpoints:

Endpoint A (consumer):

class Book {
  Map<string, string> chapterName_content
}

Endpoint B (provider):

class Book {
  string[] chapterNames
  string[] content
}

Let's also assume that these are the structures that make the most sense within each endpoint, so changing them is not recommended.

Since the conversion is a direct one, I do not see the need for Mapper classes or anything of the kind. To me the answer would be to create a DTO on Endpoint A that does the mapping itself, like so:

class BookDTO {
  string[] chapterNames
  string[] content

  public Book toBook()
  public string toJson()

  static BookDTO fromBook(Book book)
  static BookDTO fromJson(string jsonRepresentation)
}
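
Since the question hinges on that conversion being trivial, here is roughly what those two methods would amount to in Java (a sketch only; it assumes chapterName_content is an order-preserving map such as a LinkedHashMap, and that Book initialises it):

public Book toBook() {
  Book book = new Book();
  // Rebuild the map from the two parallel arrays.
  for (int i = 0; i < chapterNames.length; i++) {
    book.chapterName_content.put(chapterNames[i], content[i]);
  }
  return book;
}

static BookDTO fromBook(Book book) {
  BookDTO dto = new BookDTO();
  // Flatten the map into parallel arrays, relying on its iteration order.
  dto.chapterNames = book.chapterName_content.keySet().toArray(new String[0]);
  dto.content = book.chapterName_content.values().toArray(new String[0]);
  return dto;
}

toJson() and fromJson() would typically delegate to a serialisation library (e.g. Jackson or Gson in Java) rather than being hand-written.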
  

Is not using an external class for the mapping a bad practice?

  • @Laiv I can't quite understand what you're trying to say. I asked if **not** using mappers is a bad practice. Not the other way round. – Carlos Coelho Sep 16 '20 at 18:53
  • Damn, my bad. Got it wrong. Removed the previous comment. Regarding the question, as usual the answer is "it depends". Out of curiosity, how does `fromJson` turn the string into an object? – Laiv Sep 16 '20 at 19:08
  • I've been using Python and JavaScript lately, and both have native support for this conversion. In a language where types must be specified, like Java, I would probably have looked for frameworks that allow that conversion by adding annotations. – Carlos Coelho Sep 16 '20 at 19:14
  • For the second case, rather than asking myself if it is bad practice or not, I would ask "Is it convenient?". Your code is a possible solution. Implementing mappers is another solution. Neither is a bad practice, but both can be bad solutions. Ask yourself about "adequacy" and "convenience" and you will find the answer to this sort of "dilemma". – Laiv Sep 16 '20 at 19:41
  • _"Since the conversion is a direct one, I do not see the need to use Mapper classes or anything of the kind."_ Deciding to use a mapper isn't done based on a single DTO/domain object; it's an architectural decision. If _some_ DTOs warrant a mapper, it's generally advisable to run _all_ DTOs through a mapper as a matter of code consistency. – Flater Sep 17 '20 at 10:54

3 Answers


This depends heavily on the context of the system or project at stake:

  • If the DTO and the conversion/parsing code are expected to be maintained by the same programmer (at least for a longer period), and

  • if the number of different conversions is not expected to grow beyond the two shown in the example, and

  • if the number of DTOs is expected to stay so few that writing such conversion methods manually for each DTO is more efficient than generating them from metadata/reflection, or than using some automapping tool,

then go ahead and leave the methods inside the DTO, as shown in the example. However, if one of the conditions above is violated, I would either introduce separate mapping classes, or go a step further and look for a more generic solution like the ones sketched in the third bullet point.
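
For illustration, a separate mapping class along those lines could look like this (a sketch only, reusing Book and BookDTO from the question; the class name is made up):

class BookMapper {
  // All conversion knowledge lives here; BookDTO stays a plain data holder.
  static BookDTO toDto(Book book) {
    BookDTO dto = new BookDTO();
    dto.chapterNames = book.chapterName_content.keySet().toArray(new String[0]);
    dto.content = book.chapterName_content.values().toArray(new String[0]);
    return dto;
  }

  static Book toBook(BookDTO dto) {
    Book book = new Book();
    for (int i = 0; i < dto.chapterNames.length; i++) {
      book.chapterName_content.put(dto.chapterNames[i], dto.content[i]);
    }
    return book;
  }
}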

So in short, know your context - there is no such thing as a "best" or a "bad" practice without any context.

– Doc Brown

  • What happens when you add a new member to the DTO?
  • What happens when you delete a member from the DTO?
  • What happens when you re-organise the DTO?

Will the object be responsible for reading both the old and the new formats?

  • How will it handle data that it can no longer hold?
  • How will it fill in the blanks from older formats?

What you are breaking is the Single Responsibility Principle.

Now, I am not evangelical about this. Having a DTO mixed with serialisation capabilities will work; it is a solution, and for small projects, or early in a project, it can make sense.

However as a long-term solution, particularly as the project matures and the DTO changes, it is a bad solution.

SRP points out that the main problem is one of usage and reading. E.g. you go to a book to deal with book-related actions such as chapters. This assists usage by keeping the components tightly focused (aka cohesive). It also pays off in legibility, as it reduces the amount of noise that has to be dealt with when understanding a piece of code.

Another principle being broken is that of coupling and Shearing Layers. This focuses on what changes and when. When you change a piece of behaviour in the DTO, it doesn't necessarily change the serialisation. Similarly new/updated serialisations have little effect on the DTO itself. Thus they are not highly correlated, and as such tend to change at different rates.

By coupling these two different shearing layers together you maximise how brittle the code is, increase the likelihood of merge conflicts (due to multiple parallel commits), and make investigating the change history that much more difficult.
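
To make that concrete, one possible split (the names and the wire format below are made up purely for illustration) keeps one reader per format version outside the DTO, so old payloads can still be read, and their blanks filled in, without the DTO accumulating that history:

interface BookReader {
  BookDTO read(String payload);
}

// Hypothetical v1 wire format: "chapterName|content;chapterName|content"
class BookReaderV1 implements BookReader {
  public BookDTO read(String payload) {
    String[] chapters = payload.split(";");
    BookDTO dto = new BookDTO();
    dto.chapterNames = new String[chapters.length];
    dto.content = new String[chapters.length];
    for (int i = 0; i < chapters.length; i++) {
      String[] parts = chapters[i].split("\\|", 2);
      dto.chapterNames[i] = parts[0];
      // v1 payloads may lack content: the blank is filled here, not in the DTO.
      dto.content[i] = parts.length > 1 ? parts[1] : "";
    }
    return dto;
  }
}

// A BookReaderV2 for the newer format evolves independently of both
// BookReaderV1 and the DTO itself.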

– Kain0_0

  • I disagree with your interpretation of SRP and objects. A `Book` is whatever your project defines a book as. If a book can be read *and* serialized to some external format according to requirements, then that is what a book is. Also, presumably these things are coupled, i.e. they use similar data and logic, so you can't put them in different classes. It depends on the exact case of course, but more often than not, separating tightly coupled things *out* of a class leads to less maintainable code. – Robert Bräutigam Sep 16 '20 at 07:57
  • @RobertBräutigam I appreciate that these things come down in large part to taste/dictation. I already pointed out that I'm no stickler for putting them together or apart; both can be made to work, and indeed there are situations where it might be desirable. My experience tells me, though, that serialisation rarely ages well with a DTO. The choice of repository is generally orthogonal to the choice of data model, ergo two decisions, and not necessarily tightly coupled. What if the repository is instead a library, or a school bag? Why should a book care about sandwiches? – Kain0_0 Sep 16 '20 at 09:25
  • The "repository", json formats and some other things are **not** orthogonal to a data model. In fact both are (usually) *heavily* relying on the data model, with intricate, detailed knowledge about what data there is and how it all relates. Sure, the *format* itself might be independently created, but the *implementation* to create a message of that format must rely on the data that is available. I.e. it is tightly coupled to it. I.e. if you find yourself changing the data (which is supposed to be implementation detail btw.) you have to change all those who use it. – Robert Bräutigam Sep 16 '20 at 10:54
  • @RobertBräutigam I'm not contending that a mapper does not require knowledge of a DTO to perform a mapping to/from a serialisation format. In my experience, though, when a change does happen, those older serialised instances still need to be dealt with: migrated, read on demand, etc. Some formats are also beyond your own control, and still need to map to internal DTOs. The mapping may not be exhaustive, and may be convoluted. This knowledge is about how to decode/encode a given model from that format. That is different to what a given model means/is used for in the program. – Kain0_0 Sep 16 '20 at 11:51
  • For all of those questions, the answer would be the same as when using a mapper - you update the mapping method. If we consider the purpose of the DTO to be *facilitating* the process of **T**ransferring **D**ata, can't we consider parsing *from* and *to* the Model part of its responsibility? – Carlos Coelho Sep 16 '20 at 19:25
  • @CarlosCoelho In very focused cases, yes, I see that as reasonable, but that is where our opinions diverge. I consider a Mapper as responsible for transcoding one representation to another. A DTO, in my eyes, is the pseudo-model POD type that exists as a high-level concept across both formats, with a slight lean toward the specific representation in the programming language being used when they are different models. It is difficult to transcode behaviour, and when it is shared between many serialisation formats, it's often not a good idea to carry that baggage around between communicating projects. – Kain0_0 Sep 16 '20 at 23:36

Adding relevant behavior to objects is usually a good idea. However, let's take it a step further: why have the DTO in the first place? You could just as well add toJson() to the actual Book.

With that, you eliminate a class, you localize any potential changes to Book in a single class, you don't need the toBook() method, etc. You get a lot of potential benefits, depending on the exact context, of course.

I would argue that using an external class for the conversion is the bad practice. That external class would need to know everything about this one, so the two would be really tightly coupled, and any change would likely impact both. That's usually not what we want.
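
A sketch of that idea (the JSON is hand-rolled purely for illustration and skips escaping; real code would more likely delegate to a library such as Jackson or Gson):

import java.util.LinkedHashMap;
import java.util.Map;

class Book {
  Map<String, String> chapterName_content = new LinkedHashMap<>();

  // The book renders itself: no DTO, no external mapper.
  public String toJson() {
    StringBuilder sb = new StringBuilder("{");
    boolean first = true;
    for (Map.Entry<String, String> e : chapterName_content.entrySet()) {
      if (!first) sb.append(',');
      // Naive: assumes names and content need no JSON escaping.
      sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
      first = false;
    }
    return sb.append('}').toString();
  }
}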

– Robert Bräutigam

  • The reason I don't tend to consider having the class Book itself parse from and to JSON is that it is not really a Book behaviour as a domain object; I like to keep those responsibilities separated. With that said, the DTO's whole purpose is to facilitate **T**ransferring **D**ata, hence parsing can be considered part of its responsibilities. I must also say that I mostly agree that mappers often just add extra unnecessary and confusing steps to linear solutions. – Carlos Coelho Sep 16 '20 at 19:02