Armin's Journal My notes on the DCI architecture Mon, 04 Mar 2013 17:34:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201303041734-my-notes-on-the-dci-architecture&amp;name=My+notes+on+the+DCI+architecture&amp;date=March+04%2c+2013"></a></p> <p>My notes on <a href="">The DCI Architecture</a>:</p> <ul> <li>The first DCI example just looks like an ordinary function that accesses two objects.</li> <li>So, specifically, all methods that relate to other objects may be extracted into a function / method that represents the interaction between the objects.</li> <li>They call these methods a <em>methodful role.</em> How confusing is that? I thought the participants are the roles. Let's see if I understand this more clearly later.</li> <li>They really mean it with these roles: <em>Roles embody generic, abstract algorithms.</em> Roles, really? Ok, maybe it makes sense later.</li> </ul> <blockquote> <p>The fundamental problem solved by DCI is that people have two different models in their heads of a single, unified thing called an object. They have the what-the-system-is data model that supports thinking about a bank with its accounts, and the what-the-system-does algorithm model for transferring funds between accounts. Users recognize individual objects and their domain existence, but each object must also implement behaviors that come from the user's model of the interactions that tie it together with other objects through the roles it plays in a given Use Case.</p> </blockquote> <ul> <li>Well, obviously they should have named these algorithms <em>a play</em> instead of <em>methodful roles</em>.</li> </ul> <blockquote> <p>... the mapping between the role view and data view—is also part of the user cognitive model. We call it the Context of the execution of a Use Case scenario.</p> </blockquote> <ul> <li>Ok, it's pretty clear what's meant by a context: the thing that needs to capture and name the roles of the objects that take part in the <em>play</em> ... 
what they now call a <em>use case scenario</em>?</li> </ul> <p>So for now, my "how to DCI" would consist of:</p> <ul> <li>Create a class / object that contains the roles.</li> <li>Implement the method that runs the <em>play</em>.</li> </ul> <p>I even think this can be further reduced to a function. So for the next few paragraphs I will check whether there is a difference from a bare function.</p> <blockquote> <p>The more dynamic operations related to the Use Case scenario come from the roles that the object plays. The collections of operations snipped from the Use Case scenario are called roles. We want to capture them in closed form (source code) at compile time, but ensure that the object can support them when the corresponding Use Case comes around at run time.</p> <p>... an object of a class supports not only the member functions of its class, but also can execute the member functions of the role it is playing at any given time as though they were its own. That is, we want to inject the roles' logic into the objects so that they are as much part of the object as the methods that the object receives from its class at instantiation time.</p> </blockquote> <p>Ah, there it goes. Method injection it is. But why should an object have a role-specific method at all? Probably just because of data-hiding?</p> <blockquote> <p>The software exhibits the open-closed principle. Whereas the open-closed principle based on inheritance alone led to poor information hiding, the DCI style maintains the integrity of both the domain classes and the roles. Classes are closed to modification but are opened to extension through injection of roles.</p> </blockquote> <p>Uh, that's why? Ok. 
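</p> <p>To make this concrete for myself, here is a minimal C# sketch of my own (all names invented, and extension methods only approximate real method injection): the Account class stays closed for modification, the role logic lives outside of it, and the context binds the objects to their roles and runs the <em>play</em>.</p>

```csharp
using System;

// Plain domain class: closed for modification, knows nothing about transfers.
public class Account
{
    public decimal Balance { get; set; }
}

// "Methodful roles", approximated as extension methods bolted on from outside.
public static class TransferRoles
{
    public static void Withdraw(this Account source, decimal amount)
    {
        if (source.Balance < amount)
            throw new InvalidOperationException("insufficient funds");
        source.Balance -= amount;
    }

    public static void Deposit(this Account sink, decimal amount)
    {
        sink.Balance += amount;
    }
}

// The context binds objects to roles and runs the use case scenario.
public class MoneyTransfer
{
    private readonly Account source, sink;

    public MoneyTransfer(Account source, Account sink)
    {
        this.source = source;
        this.sink = sink;
    }

    public void Execute(decimal amount)
    {
        source.Withdraw(amount);
        sink.Deposit(amount);
    }
}
```

<p>Since C# extension methods are dispatched statically, the roles are not really injected into the objects; but the role logic is cleanly separated from the data class, which seems to be the point.</p> <p>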
</p> <p>So instead of doing DCI, my suggestion is to open the data, screw the language-integrated enforcement of the open-closed principle (and solve that by convention and tests), and implement simple <em>play</em> functions that receive data structures as arguments and name the parameters according to their roles. Probably we need some additional functions that are role specific (the ones DCI needs to inject), and that's it.</p> <p>My conclusion is that DCI may be a solution for a missing bit in the OO world but has no use when you are flexible enough to let go of data-hiding.</p> Immutable and Incremental Programming Mon, 17 Dec 2012 11:39:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212171139-immutable-and-incremental-programming&amp;name=Immutable+and+Incremental+Programming&amp;date=December+17%2c+2012"></a></p> <p>This is just an idea; try to get on an imaginative road with me.</p> <p>How about making programming languages immutable?</p> <p>What if the basic requirement of a project is that you cannot change code anymore as soon as you've added new functionality or fixed a bug?</p> <p>Think of a <a href="">log structured filesystem</a>, for example. All changes you can make to your code base are put on top of what is already there.</p> <p>How would such a programming language work? What features would programmers need from an IDE that makes immutable programming feasible? </p> <p>Is this just a crazy idea? Was it exercised in all details before computers evolved from punch card readers to terminals? 
Or are we now capable of creating something useful, maybe something better, that builds on this approach?</p> A Flexible Serialization Library Mon, 17 Dec 2012 10:02:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212171002-a-flexible-serialization-library&amp;name=A+Flexible+Serialization+Library&amp;date=December+17%2c+2012"></a></p> <p>What I need in one of my projects is a serialization library for C# objects that is independent of the encoding / decoding mechanism and offers quite a bit of flexibility beyond that.</p> <p>This library should cover everything from offset / byte-oriented fixed-length byte records in network byte order to simple unordered name/value lines in which the names can differ from the property / field name.</p> <p>So what's needed is a solution that supports a number of different configuration aspects:</p> <h3>Attributes</h3> <p>Every public field or property of a type should be encoded / decoded by default.</p> <p>The attributes that can be put on the types, fields, and properties need to be extensible, but at the same time easily accessible. So there is some Reflection magic needed here. Performance comes later.</p> <h3>Type declaration independent configuration</h3> <p>Usually, attributes put quite a cognitive burden on these small DTO objects. So a fluent interface should be provided that allows per-type and per-member configuration.</p> <p>And because the Attribute classes already exist, they could be reused for the DSL-based configuration. To help the caller associate these Attribute instances with their members, the fluent interface should be simplified by extension methods.</p> <p>All fields and properties should be referred to by lambda expressions, so that renaming a member does not affect the configuration.</p> <h3>Encoder/Decoder configuration, per type / primitive type</h3> <p>The encoding mechanism needs to be completely independent of the library. 
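</p> <p>To sketch what I mean by injectable encoders (this is an invented example, not the library's actual API), an element encoder for a single primitive type could look like this:</p>

```csharp
using System;

// Hypothetical interface: one encoder per primitive type, injected
// into the serialization library from the outside.
public interface IElementEncoder<T>
{
    byte[] Encode(T value);
    T Decode(byte[] bytes);
}

// Example: a 32-bit integer in network byte order (big endian).
public sealed class NetworkOrderInt32Encoder : IElementEncoder<int>
{
    public byte[] Encode(int value)
    {
        var bytes = BitConverter.GetBytes(value);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes);
        return bytes;
    }

    public int Decode(byte[] bytes)
    {
        var copy = (byte[])bytes.Clone();
        if (BitConverter.IsLittleEndian)
            Array.Reverse(copy);
        return BitConverter.ToInt32(copy, 0);
    }
}
```

<p>A composite encoder would then look up the matching element encoder for each field or property it iterates over.</p> <p>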
So we need type encoders / decoders that can be injected.</p> <p>Composite encoders should drive the operation. They should be capable of iterating over the fields and properties and deciding for each what to do.</p> <p>Element encoders and decoders can be specified individually, so that for simple formats that require only a few of the numerous C# types, only the encoders for the types that are actually used need to be specified.</p> <h3>Wrapping Up</h3> <p>I wrote this post a few days ago and have since had time to start implementing a library based on these requirements. If you have some time left, <a href="">check it out on Github</a> and tell me what you think. I will make an announcement as soon as it looks more promising.</p> A Separation Wed, 12 Dec 2012 12:12:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212121212-a-separation&amp;name=A+Separation&amp;date=December+12%2c+2012"></a></p> <h3>TL;DR</h3> <p>Whenever I create a new module or component, I will first try to separate it into two parts:</p> <ul> <li>A descriptive, declarative part, in the form of a graph, or internal DSL.</li> <li>An executive part, in the form of a generator or an interpreter.</li> </ul> <p>And I think you should try that, too.</p> <h3>Why? 
Here is The Long Story</h3> <p>Programming, in the sense of telling a CPU what to do, independent of the methods involved, be it functional, logical, object-oriented, or actor-based, usually tells us not so much about our program as about the CPU, the operating system, or the libraries involved.</p> <p>So if programs tell our CPUs and operating systems what to do, who tells our programs what to do?</p> <p>We do, but that information is usually gone as soon as we type the first line of code.</p> <p>So for any reasonably complex program, programming will lead to the original intentions getting lost, because a program is focused more towards the hardware and frameworks that execute it.</p> <p>So a program always does two things:</p> <ul> <li>It implements an idea or a requirement that is bound to a domain.</li> <li>It talks to the operating system and the CPU to implement that requirement.</li> </ul> <p>No matter how we decorate a program with meaningful variable names, it will always be obscured by its very own structure: by system calls, additional operators, symbols, lists, lambdas, which are unrelated to the original requirement. These are tools that are optimized to talk to the CPU, to the compiler, to APIs, but barely to humans.</p> <p>So now that we know there are two separate things, we can give the two parts names: the specification and the implementation. </p> <p>That is nothing new. Classic programming is about taking a specification, implementing it, and then forgetting it. </p> <p>But do modern (agile?) programmers use specifications anymore? From what I know, they talk to their customers at regular intervals and then repeatedly prune the code into shape so that it iterates towards their imagination of what they think the client needs. And often it is the programmer's imagination that needs these iterations, not the code. This "process" is the most effective way to develop software as far as I know. 
So frankly, here, the specification never existed. It cannot get lost in translation. The problem seems to be solved.</p> <p>But making changes to an existing code base requires a very deep understanding of what is going on in the implementation, and also, more importantly, of what was originally defined by the specification or imagined in the head of the client.</p> <p>So one might wonder why we don't write specifications anymore. Or why, back when we did, they always felt pretty useless as soon as the first lines of code were written.</p> <p>The most basic problem with the specification is that it is never as detailed as the implementation, because a proper implementation also needs to take a lot of additional variables into account that usually cannot be foreseen by anyone who is not the programmer.</p> <p>And because programmers don't like to write or change specifications, all these important decisions and switches just appear in the code and never make their way back into the specification.</p> <p><em>... more precisely, programmers don't like to do anything. Programmers are - by their very nature - very lazy people, because if they weren't, they would not be good at programming, which requires a basic motivation to avoid and automate boring and repetitive labor, which then may lead to a world where only programmers and robots are required. The mad realization here is that lazy programmers create a society in which everyone but programmers can be lazy. And although power and wealth are probably a good compensation for that, I doubt that we can survive being run by programmers who just want to be lazy but have to do all the manual work that's left. The only solution to that problem is to replace programmers with artificial intelligence. Fortunately, we only need lazy programmers to do that.</em></p> <p>So we need to accept that a written or imagined specification can - by definition - never be as detailed as the code that runs it. 
Accepting that, we could either throw the idea of a "living" specification away and rebuild the specification (or intentional map) in our heads by reading a lot of code right before we want to make small changes, or we could finally accept that the code is the specification.</p> <p>And this is no news either, but compared to all the other progress we have made, we have been struggling heavily with that challenge for a long time now.</p> <p>TDD and BDD are excellent examples of rudimentary attempts to bring specifications back into our programs by writing code that observes and verifies the behavior of programs. But even though these practices reduce bugs by a fair amount, they introduce yet another liability by adding a lot more code.</p> <p>What we want is less code, not more. And should testing really be complected with the specification? </p> <p>For one, we should not forget that a specification is pure in the sense that it defines what should happen. So whatever we test, it can never be the specification that is under test. That's one reason you never need to build a test case for a test: the test ultimately defines what should happen, and so does a specification.</p> <p>Consequently, we need to become aware that it is important to separate specification and implementation right in our code. </p> <p>One way to separate the specification from the implementation is to think about the specification as a simple data graph that is static and fixed once it has been built. A specification of a program should be an immutable graph that completely defines the dynamic behavior of a program.</p> <p>Compare that to markup or the source code that we compile. It has the same properties. 
It is a complete, immutable blueprint that specifies to some interpreter or the CPU how our program is to be executed.</p> <p>Most of the software projects I see today mix the specification together with the implementation, so that everything looks like a complected mashup of domain-specific terms and executable code.</p> <p>We need the discipline to separate the specification from the program that runs it. We need to create languages (preferably internal DSLs) together with the appropriate domain-specific data types that allow us to create a domain-specific specification, which can then be run by an interpreter.</p> <p>The language builds the data types that form the specification, which is then interpreted.</p> <p>Now, even that concept is not new either. Compilers and browsers all work this way. They take a specification in, and interpret or translate it.</p> <p>But if we know that this concept leads to the most sophisticated programs (namely the compiler and the browser), and probably the most complex and stable software besides the operating system, why don't we use this model to create our programs?</p> <p>One explanation could be that we are not smart enough. Abstractions like markup or programming languages take a long time to develop, and even then it is not guaranteed that they foster change and can be extended easily.</p> <p>Also, there is another scary element that eventually comes up in any fairly complex system: executable parts, like Turing-complete languages that compensate for abstractions we are not yet able to see. 
JavaScript, originally built to extend HTML, is a prominent example that is taking over the whole web right now.</p> <p>So we need to be aware that sometimes a specification needs complex executable parts, but these should be small and separate from the time and context the interpreter runs in.</p> <p>Instead of creating ever more powerful computer languages, we may need to craft libraries that enable us to create specifications and interpreters for the programs we want to build.</p> <p>This separation would have some positive consequences:</p> <ul> <li>We could understand our code again, which would result in much faster development and modification times.</li> <li>Everything we build would be portable, because we would have introduced a natural porting boundary: only the interpreter part needs to be ported to another platform.</li> <li>This separation is scalable in the sense that once the interpreter has been implemented and new abstractions are found, the interpreter itself could be separated again into a specification part and an implementation part.</li> </ul> <p>Admittedly, so far this is a rather linear view of the relation between a specification and the interpreter. In reality it would be more like a number of specifications and interpreters working together. But as long as the boundaries are clear and we are aware of them, I can imagine that such a basic separation principle could lead to better, more maintainable programs. Programs that don't hide their business logic between layers of functions or classes.</p> <p>So how to start? My best guess is to first think about how a specific domain can be modeled, and whether the problem can be clearly separated into a specification and an interpreter. 
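</p> <p>As a toy illustration of such a separation (all names here are invented): the specification is a dumb, immutable data graph, and the interpreter is the only part that behaves and talks to the platform.</p>

```csharp
using System;
using System.Collections.Generic;

// The specification: an immutable data graph that defines behavior
// but does not behave.
public sealed class Notification
{
    public string Recipient { get; }
    public string Template { get; }

    public Notification(string recipient, string template)
    {
        Recipient = recipient;
        Template = template;
    }
}

public sealed class NotificationSpec
{
    public IReadOnlyList<Notification> Notifications { get; }

    public NotificationSpec(IReadOnlyList<Notification> notifications)
    {
        Notifications = notifications;
    }
}

// The interpreter: the only part that touches the platform. Porting the
// program to another platform means porting this class, not the spec.
public sealed class NotificationInterpreter
{
    private readonly Action<string, string> send;

    public NotificationInterpreter(Action<string, string> send)
    {
        this.send = send;
    }

    public void Run(NotificationSpec spec)
    {
        foreach (var n in spec.Notifications)
            send(n.Recipient, n.Template);
    }
}
```

<p>Porting such a program means porting the interpreter; the specification graph stays untouched.</p> <p>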
If it can't, the domain needs to be untangled first or new abstractions need to be found.</p> <p>This idea is growing on me now, and I am thinking a lot about the declarative nature of specifications, and how they can stay separate from their execution.</p> <p>To summarize, I want to share my current ideas about program code that is suited to serve as a specification:</p> <ul> <li><p>Except when a particular order is part of the specification, individual elements in the same set are commutative; and whenever duplicated elements do not make sense, idempotent. Ordering and duplication should be explicitly specified.</p> <p>It is very important to stress here that the order in which the specification is built should have a negligible effect on the resulting specification graph. </p> <p>Programmers who are used to functional programming languages have a clear advantage here.</p></li> <li><p>The resulting specification graph is immutable and cannot be changed (it may be extended inside the interpreter, though).</p> <p>There might even be translators, acting like interpreters, that translate one specification into another.</p></li> <li><p>The specification graph is completely built before the actual program runs and is separated from the interpreter.</p> <p>A specification never contains parts that adapt as the program runs. This is an intended limitation that strengthens the boundary between the specification and the data that changes while the program runs. In other words: a specification defines behavior, it does not behave.</p> <p>Of course the specification graph is known to the interpreter, but the DSL that is used to build the graph is not.</p></li> <li><p>One specification is for one domain only, but may refer to other specifications of other domains.</p> <p>This is clearly a bottom-up process in which abstractions may appear that belong to another or a new domain. 
</p> <p>The actual fun starts when two or more interpreters need to run in parallel to interpret specifications that define systems that affect each other. </p> <p>For such scenarios we may need to consider unification options to find a specification that is more generic and can be generated out of several other ones. This is obviously a hard problem, if not the hardest.</p></li> <li><p>Links to other specifications, data, or files are "by name" and not by a language construct.</p> <p>This is open for discussion, because interfaces (preferably memberless, generic interfaces that use types as tags) may be a good enough abstraction mechanism for referring to other specifications.</p> <p>But this is an implementation detail, and may differ from language to language.</p></li> <li><p>A specification should completely cover all aspects of the program behavior.</p> <p>Of course this is the ultimate goal, but we should never block development if we can't find another abstraction yet. For example, reasonable defaults are fine at the beginning and may later find their way into the specification so that they can be overridden.</p> <p>Additionally, we surely need to inject code into certain parts of the specification as long as we are not yet able to specify what that code does in an abstract way. This can be considered a last resort to fill abstraction gaps.</p></li> </ul> <p>So while I cannot really grasp how a complex program could be <em>specified</em> instead of <em>programmed</em>, I can try to summarize what it would be like:</p> <ul> <li><p>Programs could be much more portable; most of the specification could be ported without changes.</p> <p>Of course, every platform is different, so parts of the low-level specification would need to be extended to cover these differences.</p></li> <li><p>Specifications usually don't need to be tested anymore.</p> <p>The proper implementation of the interpreter needs to be tested. 
So interpreters do have their test cases.</p> <p>Once an interpreter is fully tested, it can be guaranteed that the specification results exactly in what was specified. If it does not, the error is in the interpreter.</p> <p>So higher abstractions rely on the specification to fully cover all the details. If assertions in the sense of invariants are required, they need to be set up in the specification.</p> <p>But I am not sure whether acceptance tests can be avoided. I do hope that they are a dual that can be extracted from the specification, or that acceptance tests are part of the specification. And if this gets really messy, the specification may need to be extended to include another perspective.</p></li> <li><p>The program may be completely decoupled from the target language.</p> <p>With a bit of luck, there are only a limited number of abstractions and interpreters we need to cover. These interpreters can then all be ported to multiple languages.</p> <p>We may get a problem with functions or expressions that are embedded in a specification. They may need to be convertible to a widely available language, like JavaScript.</p></li> <li><p>The program could be decoupled from the type of the implementation, be it a functional, class-based, or actor language.</p> <p>Depending on the nature of the specification, it might be better suited to run in an interpreter that makes use of the actor model, for example. In an optimistic scenario, the type of implementation language could be chosen by the interpreter depending on the requirements of the specification.</p></li> <li><p>The program could run on multiple computers.</p> <p>Decoupling the implementation from the specification could enable single programs to run on multiple computers in a massively parallel setup while strongly adhering to the original specification.</p> <p>This is what scares me about actor-based implementations when there is no central control of change. 
I don't mean central control in the sense of a god actor, more in the sense of a controlled distribution and setup of the individual components. When multiple individual components run together and are changed independently, indeterminism happens. But when they are fabricated from the same specification, they can run with the same properties and <em>think</em> independently, yet at the same time deterministically comply with the semantics of the original specification. </p> <p>There is probably a strong evolutionary reason for having the same DNA in each human cell. But right now, actors and software components in general are programmed with the DNA existing only in the minds of the humans who created them.</p></li> </ul> <p>That said, I think we should start small by setting up a <em>seed constraint</em>:</p> <ul> <li>We should try to create or favor code that can be split up into a descriptive part and an executive part. </li> </ul> <p>And I will try to set up a page with some of the C#/.NET libraries that are great candidates for building software that does not <em>forget</em> its specification. But I need <em>your</em> help for that. </p> <p>If you really made it down to here and you are a .NET developer, please send me all the libraries and frameworks you would like to see on that list. Comments or <a href="">Twitter</a> preferred.</p> Share To Folder Tue, 04 Dec 2012 14:15:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212041415-share-to-folder&amp;name=Share+To+Folder&amp;date=December+04%2c+2012"></a></p> <p>Today we got another app published to the Windows 8 store. It's a tool similar to our previously released <a href="">Share To Desktop</a>.</p> <p><a href="">Share To Folder</a> also registers itself as a share target for all applications that can share files. But instead of opening the file with a desktop application, Share To Folder copies the shared files to a file system folder. 
</p> <p>And because you most probably will need the folder again, Share To Folder remembers all the destination folders for you.</p> <p>I hope you like it. Download it from <a href="">here</a>.</p> Share To Desktop Sat, 01 Dec 2012 14:02:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212011402-share-to-desktop&amp;name=Share+To+Desktop&amp;date=December+01%2c+2012"></a></p> <p>My first Windows 8 application is now in the store. </p> <p>It's a simple but useful app. It allows you to share files from new Windows 8 style applications to your desktop applications.</p> <p>The trial edition runs for 15 days. <a href="">Install it now</a>.</p> <p>I hope you find it useful.</p> SharedSafe Supports FTP Sat, 01 Dec 2012 13:57:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201212011357-sharedsafe-supports-ftp&amp;name=SharedSafe+Supports+FTP&amp;date=December+01%2c+2012"></a></p> <p>SharedSafe, a Windows file sharing tool I have been developing for quite some time, now supports FTP server storage.</p> <p>SharedSafe is completely free as long as only one live synchronization folder is used.</p> <p>Download the Windows client at <a href=""></a>.</p> Windows Store Registration - A Timeline Tue, 27 Nov 2012 11:50:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201211271150-windows-store-registration---a-timeline&amp;name=Windows+Store+Registration+-+A+Timeline&amp;date=November+27%2c+2012"></a></p> <p>As mentioned before, we made a bunch of small tools that fill some gaps in the Windows RT ecosystem. We built them in 5 days and wanted them published as fast as possible to the Windows Store to make some easy money.</p> <p>The only thing that blocked our submissions until today was the imposed process to register me and my company as a Windows Store apps developer.</p> <p>Even though my company is registered as a BizSpark startup, Microsoft needed two more verification procedures. 
One was a verification of the credit card, the other a verification of an official telephone book entry and, if that is available, a verification call to talk to me personally.</p> <p>A timeline after the credit card verification finally went through:</p> <p>Nov 7</p> <ul> <li>Received an email from Symantec stating that they need an online telephone book entry or a letter signed by a notary.</li> </ul> <p>Nov 9</p> <ul> <li>Called "Deutsche Telekom" to make that entry. Apart from spelling the company name, they asked no further questions, they did not verify whether the company existed, and they told me that the entry would be available by the 16th of November.</li> <li>Sent the <em>founding document</em> of my company to Symantec, in the hope that they could make an exception and would not require the online phonebook entry.</li> </ul> <p>Nov 14</p> <ul> <li>Asked Symantec by email whether there is another option to speed this up a little and whether they received the founding document of our company. Additionally I begged them to respond to my emails.</li> <li>Asked Microsoft support whether there is any progress.</li> </ul> <p>Nov 15</p> <ul> <li>Microsoft said I should contact Symantec. </li> <li>Answer from Symantec to the email from Nov 14th: The listing does not show up, no further options.</li> <li>Asked Symantec four questions which never got answered: <ul> <li>How exactly are you retrieving the listing entry from the Deutsche Telekom?</li> <li>How can this be called a "validation" at all, when both the telephone number <em>and</em> the entry could have been faked?</li> <li>Microsoft already verified the company credit card, why is another verification required?</li> <li>... 
and a question to verify whether they really got the right telephone number.</li> </ul></li> <li>Additionally, I sent them a scan of a confirmation letter I received from "Deutsche Telekom" showing that my entry is listed (but not saying where).</li> </ul> <p>Nov 16</p> <ul> <li>No entry on the site <a href="">Das Telefonbuch</a> yet. I called Telekom directly to be sure that it would be published there. They could not tell me for sure and told me to wait a day.</li> <li>In a chat with Symantec, tried to speed things up, but they needed the entry online. There was no other option.</li> </ul> <p>Nov 17 </p> <ul> <li>Microsoft support asked if there was any progress (a very nice gesture!).</li> <li>I answered that I was in the live chat with Symantec, but the entry did not show up, and sent Microsoft the confirmation letter from "Deutsche Telekom".</li> <li>Received an answer from Symantec stating that the document from "Deutsche Telekom" cannot be accepted; the entry has to show up online.</li> </ul> <p>Nov 19</p> <ul> <li>Email from Microsoft that their email server rejected the PDF attachment of the confirmation letter.</li> <li>Sent an email to "DasTelefonbuch" asking if and when the entry might show up. Never received an answer.</li> </ul> <p>Nov 20</p> <ul> <li>The entry finally showed up.</li> <li>Sent an email to Symantec that the entry showed up.</li> <li>Sent an email to Microsoft with the exact query for the entry in "DasTelefonbuch". In addition I resent the scan of the entry confirmation.</li> <li>Live chat with Symantec, and a callback. Finally. 
</li> <li>One issue with my email addresses, which I could not resolve; sent Microsoft an email about that issue.</li> <li>Answer from Microsoft: For some reason the attachments did not go through again, and they could not read the entries, because they are in German.</li> <li>Answer from Microsoft: Suggestion for an alternative solution to the email problem.</li> </ul> <p>Nov 21</p> <ul> <li>Solved the email problem.</li> <li>Told Microsoft that the emails are fixed now and sent them a shortened link to "Das Telefonbuch" which shows only the company address and telephone number as a result, and a screenshot thereof. Disabled my OpenPGP signature, because I suspected that it caused the attachments to be dropped on Microsoft's side.</li> </ul> <p>Nov 22</p> <ul> <li>Got an email from Microsoft stating that they had received everything they need <em>from Symantec</em> to "finish processing my account". It should be authorized in 2-3 business days. </li> </ul> <p>Nov 23-26</p> <ul> <li>Waiting for my account to be activated.</li> </ul> <p>Nov 27</p> <ul> <li>Checked online, our app is in the submission process.</li> <li>Received a confirmation letter from Symantec that the identity verification is complete and Microsoft will now complete the activation of my account.</li> </ul> <p>Now, all together, we lost 20 days here, but I wonder for what? This process seems completely unsafe and redundant to me. I really do wonder why it exists at all. It took me two days to even recognize that Symantec was doing this identity verification on behalf of Microsoft.</p> <p>And why were 20 days lost? Where did the time go? Most of the time was lost in processes that I think could be automated and done in seconds rather than days. Is there so much friction and bureaucracy involved? Are there actually people involved transferring phonebook entries, or doing the authorization process that took Microsoft another 5 days? 
And what exactly was the role of Symantec here?</p> <p>I must thank the support staff of Microsoft and Symantec for their patience with me, because in hindsight, none of my emails seem to have actually mattered. ... but who knows, maybe it would have taken another week or two without them.</p> <p>I'm glad that we can now submit our apps, which will take another ~10 days until they appear in the store, ... well, but only if they don't get rejected.</p> Three Tablets Sun, 25 Nov 2012 12:12:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201211251212-three-tablets&amp;name=Three+Tablets&amp;date=November+25%2c+2012"></a></p> <p>Yesterday I took a closer look at the new Microsoft Surface that a friend bought. And it was exactly what I had expected. A bit too thick and heavy, with a fixed stand that always sets the display at the wrong angle. But on the other hand, a well-manufactured device, with a superb low-resolution display (the kind you can see pixels on), and an operating system that works just fine.</p> <p>It feels more like a low-priced ultraportable touchscreen notebook.</p> <p>I also took another look at the Vivo tab, which seems to run slightly slower and also has a low-resolution display. But compared to the Surface, it is really light and thin, and actually feels like a tablet and not like a picture frame.</p> <p>I'd prefer a Vivo tab, even though it is more expensive.</p> <p>What I don't like about both tablets is the wide-screen format. Holding it in landscape feels unnatural for typing text. You can't reach the center of the screen with your thumbs, and the split on-screen keyboard takes time to get used to. And in portrait mode it also feels unnatural to read book pages on such a tall screen.</p> <p>I may wait for a Windows 8 tablet that has the iPad's screen aspect ratio.</p> <p>Speaking of the iPad, I also played with the iPad Mini, and must say that Apple just did it again. </p> <p>For some reason this thing is magical. 
I instantly felt that it's something beyond all other tablets. </p> <p>It does have a lower resolution compared to the Nexus 7, but you just cannot compare the two. The iPad Mini's display is much more vivid, the device is lighter, and the first time I took it from the stand it did not feel like a tablet; it felt more like an ultralight e-book reader, and that's probably what's so magical about it.</p> <p>For me, the iPad Mini is below the weight at which I would care about the energy it takes to hold it for a very long time. My body instantly recognized this fact and I suspect that this caused my "wow it's magic" response. </p> <p>Though I don't like the two small bezels, I'm thinking of buying one for reading books on the couch.</p> Overspecification Fri, 16 Nov 2012 14:00:00 +0100 <p><a href="module:BlogEntryHeader?entry=journal%2f201211161400-overspecification&amp;name=Overspecification&amp;date=November+16%2c+2012"></a></p> <p>So yesterday I tried to integrate a comment system into my journal posts, and what I found was a nice script from <a href="">IntenseDebate</a> that can be included where the comments should appear.</p> <p>Everything worked fine, until I needed to customize that thing to fit the current style of my website. </p> <p>IntenseDebate supports customizing their CSS, so I went down into Chrome's DOM and looked at every small detail that did not match the style and look of the website.</p> <p>Deeper and deeper down the rabbit hole, a pattern emerged. To completely adjust their CSS rules to the design I wanted, most of my changes were actually reverting or invalidating the CSS definitions they had made:</p> <p>Lots of the lines I added were similar to:</p> <pre><code>font: inherit; font-size: inherit; </code></pre> <p>Now I can imagine a few reasons for setting the text to a fixed font family and a fixed size on their side: when larger fonts might break the layout, for example. 
But except for their styled buttons, all their markup lays out just fine in the browsers I tested with. So I don't have any idea why they used fixed font sizes and fonts all over the place.</p> <p>I assume that this is just one example of overspecification. </p> <p>In the case of CSS it is quite simple to use a relative <code>font-size</code>, for example. There are relative values like "smaller", or percentage values; but in other markup or programming languages, you need to get very creative to avoid overspecifying code.</p> <p>Another example of overspecification I found today - and that was why I decided to write about it - was in a plugin for Joomla:</p> <p>I noticed that our Disqus comments were not working on one of my product sites when it was accessed via HTTPS. Most likely such problems are caused by mixed-content restrictions, which are related to the <a href="">same origin policy</a>. And indeed, a quick look into Chrome's console showed that some scripts were blocked because of an attempt to load them via the unencrypted HTTP protocol.</p> <p>The fix was simple. I just removed the <code>http:</code> from the URLs that loaded the scripts. <a href="">Protocol Relative URLs</a> should work correctly in every current browser, so there is no reason to ignore them for loading scripts or CSS anymore.</p> <p>Now what can we learn from this? Is overspecification really that bad? Sometimes maybe ... and how can it be detected, and how can we become aware of code or markup that is overspecified? </p> <p>What I know for sure is that avoiding overspecification, even though it introduces an operational dependency on its relative counterpart, simplifies the (re)usability of a component that needs to be embedded in another system.</p> <p>That brings me to my point: whenever you encounter anything absolute in the markup or code you are building, I suggest thinking about how that piece relates to its environment and whether it could be specified in a way that is relative to what already exists.</p>
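<p>To make both points concrete, here is a small sketch in CSS. The <code>.widget</code> selector and the URL are made up for illustration; the idea is simply to contrast fixed, overspecified values with their relative counterparts:</p> <pre><code>/* Overspecified: forces its own font onto the host page. */
.widget { font-family: Arial; font-size: 13px; }

/* Relative: inherits the host page's font and scales with it. */
.widget { font: inherit; }
.widget small { font-size: 85%; } /* or the keyword "smaller" */

/* Protocol-relative URL: loads via HTTP or HTTPS,
   matching whatever protocol the page itself uses. */
.widget { background-image: url(//example.com/bg.png); }
</code></pre> <p>The relative variants are exactly what I kept writing to undo IntenseDebate's fixed values, and the protocol-relative URL is the same trick that fixed the Disqus scripts under HTTPS.</p>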