My notes on The DCI Architecture:
- The first DCI example just looks like an ordinary function that accesses two objects.
- So, specifically, all methods that relate to other objects may be extracted into a function / method that represents the interaction between the objects.
- They call these methods a methodful role. How confusing is that? I thought the participants were the roles. Let's see if I understand that more clearly later.
- They really mean that with a role: Roles embody generic, abstract algorithms. Roles, really? Ok, maybe it makes sense later.
The fundamental problem solved by DCI is that people have two different models in their heads of a single, unified thing called an object. They have the what-the-system-is data model that supports thinking about a bank with its accounts, and the what-the-system-does algorithm model for transferring funds between accounts. Users recognize individual objects and their domain existence, but each object must also implement behaviors that come from the user's model of the interactions that tie it together with other objects through the roles it plays in a given Use Case.
- Well, obviously they should have named these algorithms a play instead of methodful roles.
... the mapping between the role view and data view—is also part of the user cognitive model. We call it the Context of the execution of a Use Case scenario.
- Ok, it's pretty clear what's meant by a context: the thing that needs to capture and name the roles of the objects that take part in the play ... or what they now call a use case scenario?
So for now, my "how to DCI" would consist of:
- Create a class / object that contains the roles.
- Implement the method that runs the play.
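The two steps above can be sketched in C#, using the article's money-transfer scenario. Role injection is approximated here with extension methods, which is only one of several possible techniques; all identifiers below (`TransferContext`, `TransferSourceRole`, and so on) are my own invention, not taken from the article.

```csharp
using System;

// Plain domain class: pure data, no use-case logic (the "what-the-system-is" model).
public class Account
{
    public decimal Balance { get; set; }
}

// Role logic approximated via extension methods — one way to imitate
// DCI's "methodful roles" in C#.
public static class TransferSourceRole
{
    public static void Withdraw(this Account self, decimal amount)
    {
        if (self.Balance < amount)
            throw new InvalidOperationException("insufficient funds");
        self.Balance -= amount;
    }
}

public static class TransferSinkRole
{
    public static void Deposit(this Account self, decimal amount) =>
        self.Balance += amount;
}

// The Context captures and names the roles and runs the use-case scenario.
public class TransferContext
{
    private readonly Account source;
    private readonly Account sink;

    public TransferContext(Account source, Account sink)
    {
        this.source = source;
        this.sink = sink;
    }

    // The "play": the generic transfer algorithm, expressed in role terms.
    public void Execute(decimal amount)
    {
        source.Withdraw(amount);
        sink.Deposit(amount);
    }
}
```

The play then reads as `new TransferContext(from, to).Execute(100m);` — the transfer algorithm is written against the role methods, while `Account` itself stays a plain data class.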
I even think this can be further reduced to a function. So for the next few paragraphs I will check whether there is a difference from a bare function.
The more dynamic operations related to the Use Case scenario come from the roles that the object plays. The collections of operations snipped from the Use Case scenario are called roles. We want to capture them in closed form (source code) at compile time, but ensure that the object can support them when the corresponding Use Case comes around at run time.
... an object of a class supports not only the member functions of its class, but also can execute the member functions of the role it is playing at any given time as though they were its own. That is, we want to inject the roles' logic into the objects so that they are as much part of the object as the methods that the object receives from its class at instantiation time.
Ah, there it goes. Method injection it is. But why should an object have a role-specific method at all? Probably just because of data hiding?
The software exhibits the open-closed principle. Whereas the open-closed principle based on inheritance alone led to poor information hiding, the DCI style maintains the integrity of both the domain classes and the roles. Classes are closed to modification but are opened to extension through injection of roles.
Uh, that's why? Ok.
So instead of doing DCI, my suggestion is to open the data, screw the language-integrated enforcement of the open-closed principle (and solve that by convention and tests), and implement simple play functions that receive data structures as arguments and name their parameters according to the roles. Probably we need some additional functions that are role-specific (the ones DCI needs to inject), and that's it.
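That suggestion can be sketched like this — open data, a bare play function, and the role names carried by the parameters. Again, all names are mine:

```csharp
using System;

// Open data: a plain record of state, no behavior, no data hiding.
public class AccountData
{
    public decimal Balance { get; set; }
}

public static class Plays
{
    // The whole "context" collapses into one function; the parameter
    // names (source, sink) do the job of naming the roles.
    public static void Transfer(AccountData source, AccountData sink, decimal amount)
    {
        if (source.Balance < amount)
            throw new InvalidOperationException("insufficient funds");
        source.Balance -= amount;
        sink.Balance += amount;
    }
}
```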
My conclusion is that DCI may be a solution for a missing bit in the OO world but has no use when you are flexible enough to let go of data-hiding.
This is just an idea; try to get on an imaginative road with me.
How about making programming languages immutable?
What if the basic requirement of a project is that you cannot change code anymore as soon as you've added new functionality or fixed a bug?
Think of a log structured filesystem, for example. All changes you can make to your code base are put on top of what is already there.
How would such a programming language work? What features would programmers need from an IDE that makes immutable programming feasible?
Is this just a crazy idea? Was it exercised in all its details before computers evolved from punch card readers to terminals? Or are we now capable of creating something useful, maybe something better, that builds on this approach?
What I need in one of my projects is a serialization library for C# objects that is independent of the encoding / decoding mechanism and offers quite a bit of flexibility beyond that.
This library should cover everything from offset-based, fixed-length byte records in network byte order to simple unordered name/value lines in which the names can differ from the property / field name.
So what's needed is a solution that supports a number of different configuration aspects:
Every public field or property of a type should be encoded / decoded by default.
The attributes that can be put on the types, fields, and properties need to be extensible, but at the same time easily accessible. So some Reflection magic is needed here. Performance comes later.
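A sketch of what I have in mind for the attribute side — a hypothetical `EncodedNameAttribute` plus the bit of Reflection needed to enumerate the public members and read it; none of these names are final:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical attribute: overrides the encoded name of a member.
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class EncodedNameAttribute : Attribute
{
    public string Name { get; }
    public EncodedNameAttribute(string name) => Name = name;
}

public class Login
{
    [EncodedName("user")]
    public string UserName { get; set; }

    public int Port { get; set; } // no attribute: encoded under its own name by default
}

public static class MemberScanner
{
    // Walk all public instance properties; fall back to the member
    // name when no attribute is present.
    public static IEnumerable<(string name, object value)> Scan(object obj)
    {
        foreach (var p in obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var attr = p.GetCustomAttribute<EncodedNameAttribute>();
            yield return (attr?.Name ?? p.Name, p.GetValue(obj));
        }
    }
}
```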
Type-declaration-independent configuration
Usually, attributes put quite a cognitive burden on these small DTO classes. So a fluent interface should be provided that allows per-type and per-member configuration.
And because the Attribute classes already exist, they could be reused for the DSL-based configuration. To help the caller associate these Attribute instances with their members, the fluent interface should be simplified with extension methods.
All fields and properties should be referenced by lambda expressions, so that renaming a member does not affect the configuration.
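A sketch of how such a fluent, lambda-based configuration could look; `TypeConfig<T>` and its `Member` method are hypothetical names, not a settled API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public class Login
{
    public string UserName { get; set; }
    public int Port { get; set; }
}

// Hypothetical fluent configuration: maps members to wire names
// without touching the type declaration itself.
public class TypeConfig<T>
{
    private readonly Dictionary<string, string> names = new Dictionary<string, string>();

    // The lambda survives a rename refactoring; a string literal would not.
    public TypeConfig<T> Member<TMember>(Expression<Func<T, TMember>> member, string encodedName)
    {
        var memberName = ((MemberExpression)member.Body).Member.Name;
        names[memberName] = encodedName;
        return this; // returning this is what makes the interface fluent
    }

    public string NameFor(string memberName) =>
        names.TryGetValue(memberName, out var n) ? n : memberName;
}
```

Configuration then reads like `new TypeConfig<Login>().Member(l => l.UserName, "user").Member(l => l.Port, "p");`.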
Encoder/Decoder configuration, per type / primitive type
The encoding mechanism needs to be completely independent of the core library. So we need type encoders / decoders that can be injected.
Composite encoders should drive the operation. They should be capable of iterating over the fields and properties and deciding for each what to do.
Element encoders and decoders can be specified individually, so that for simple formats that require only a few of the numerous C# types, only the encoders for the types that are actually used need to be specified.
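A sketch of the injection idea, with a hypothetical `CompositeEncoder` that drives the iteration and only encodes properties whose type has a registered element encoder:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;

public class Endpoint
{
    public string Host { get; set; }
    public int Port { get; set; }
}

// Hypothetical composite encoder: one injected delegate per primitive type.
public class CompositeEncoder
{
    private readonly Dictionary<Type, Func<object, string>> encoders =
        new Dictionary<Type, Func<object, string>>();

    public void Register<T>(Func<T, string> encoder) =>
        encoders[typeof(T)] = o => encoder((T)o);

    // Iterates over the public properties and decides for each what to do:
    // encode it if an element encoder was registered for its type, skip it otherwise.
    public string Encode(object obj)
    {
        var sb = new StringBuilder();
        foreach (var p in obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (encoders.TryGetValue(p.PropertyType, out var enc))
                sb.Append(p.Name).Append('=').Append(enc(p.GetValue(obj))).Append('\n');
        }
        return sb.ToString();
    }
}
```

The decoder side would mirror this shape; for a format that only ever carries strings and ints, two `Register` calls are all the setup that is needed.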
I wrote this post a few days ago and have since had time to start implementing a library based on these requirements. If you have some time left, check it out on GitHub and tell me what you think. I will make an announcement as soon as it looks more promising.