
Test as a Design Spec

One of the frequently asked questions in enterprise software development is where the handoff happens from the architect (who designs the software) to the developer (who implements the software).

Usually the architect designs the software at a higher level of abstraction and then hands her design off to the developers, who work out the more concrete, detailed design before turning it into implementation-level code. The handoff typically happens through an architecture spec written by the architect, consisting of UML class diagrams, sequence diagrams, state transition diagrams, etc. Based on their understanding of these diagrams, the developers go on to write the code.

However, things do not go as smoothly as we expect. There is a gap between the architecture diagrams and the code, and a translation is required. During that translation there is room for misinterpretation and wrong assumptions. Quite often the system ends up being implemented incorrectly because of miscommunication between the developer and the architect. Such miscommunication can arise either because the architect did not describe her design clearly enough in the spec, or because the developer does not have enough experience to fill in details that the architect left out as obvious.

One way to mitigate this problem is to hold more design review meetings or code review sessions to make sure that what is implemented correctly reflects the design. Unfortunately, I have found that such review sessions usually don't happen, either because the architect is too busy with other tasks or because she is reluctant to read the developer's code. The result is that the implementation doesn't match the design. Quite often this discrepancy is discovered at a very late stage, leaving no time to fix it. While the developers start patching the current implementation to fix bugs or add new features, the architect loses control over the evolution of the architecture.

Is there a way for the architect to enforce her design at an early stage, given the following common constraints?

  1. The architect cannot afford frequent progress/checkpoint review meetings.
  2. While making sure the implementation complies with the design at a higher level, the architect doesn't want to dictate the low-level implementation details.

Test As A Spec

The solution is to have the architect write the unit tests (e.g. JUnit test classes in Java), which act as the "spec" of her design.

In this model, the architect focuses on the "interface" aspect: how the system interacts with external parties, both its clients (how the system will be used) and its collaborators (how the system uses other systems).

 

The system exposes a set of "Facade" classes that fully encapsulate the system's external behavior and act as the entry point for its clients. By writing unit tests against these "Facades", the architect fully specifies the external behavior of the system.

A set of "Collaborator" classes is also defined to explicitly capture how the system interacts with other supporting systems. These "Collaborators" are specified in terms of mock objects, so the behavior required of each supporting system is fully pinned down; the interaction sequence with the supporting systems, in turn, is specified via the "expectations" set on those mocks.

The behavior of the "Facade" and the "Collaborators" is thus captured in a set of xUnit test cases, which act as the design spec of the system. In this way the architect fully specifies, at a detailed level, the external behavior of the system while leaving the developers enough freedom to decide on the internal implementation structure. Typically there are many "Impl Detail" classes to which the Facade delegates, and in some cases these "Impl Detail" classes invoke the "Collaborator" interfaces to get things done.
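
To make this split concrete, here is a minimal sketch. All names (OrderFacade, PaymentGateway, OrderValidator) are hypothetical and unrelated to the example later in this article: the facade is the only class the architecture-level tests exercise, the collaborator is an interface the architect mocks, and the impl-detail class stays invisible to the spec.

// Facade: its external behavior is pinned down by the architect's tests.
public class OrderFacade {

    private final PaymentGateway paymentGateway; // collaborator (mocked in the architect's spec tests)
    private final OrderValidator validator;      // impl detail (never seen by the spec tests)

    public OrderFacade(PaymentGateway paymentGateway) {
        this.paymentGateway = paymentGateway;
        this.validator = new OrderValidator();
    }

    public boolean placeOrder(String itemId, String cardNumber, int amountInCents) {
        // The facade delegates the detailed work to impl-detail classes...
        if (!validator.isSellable(itemId)) {
            return false;
        }
        // ...and in some cases the work goes out through a collaborator interface.
        return paymentGateway.charge(cardNumber, amountInCents);
    }
}

// Collaborator: the architect fixes the required interactions with it via mock expectations.
interface PaymentGateway {
    boolean charge(String cardNumber, int amountInCents);
}

// Impl detail: owned by the developers and covered by their own impl-level tests;
// it can be split, renamed or rewritten without breaking the architecture-level spec.
class OrderValidator {
    boolean isSellable(String itemId) {
        return itemId != null && !itemId.isEmpty();
    }
}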

Note that the architect does not write ALL the test cases. The architecture-level unit tests are just a small subset of the overall test cases, focused specifically on the architecture-level abstractions. They are deliberately written to ignore implementation details so that their stability is not affected by changes in the implementation logic.

On the other hand, the developers who code the "Impl Detail" classes also provide a different set of test cases covering those classes. This set of "impl-level" test cases usually changes whenever the developers change the internal implementation.

By keeping these two sets of test cases in separate categories, they can evolve independently as different aspects of the system change over its life cycle, resulting in a more maintainable system as it evolves.
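
How the two categories are kept apart is a tooling choice. As one possible sketch, JUnit 4's @Category annotation can mark each test class and let a suite run only the architecture-level spec. The marker interfaces, test classes and suite below are hypothetical, and in real code each public class would live in its own file.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interfaces used purely as category labels
public interface ArchitectureSpec {}
public interface ImplDetail {}

// Architect-owned: pinned to the facade's external behavior, expected to stay stable
@Category(ArchitectureSpec.class)
public class UserAuthSystemSpecTest {
    @Test public void loginWithWrongPasswordIsRejected() { /* ... */ }
}

// Developer-owned: free to change whenever the internals change
@Category(ImplDetail.class)
public class PasswordHasherTest {
    @Test public void hashedPasswordIsNotThePlaintext() { /* ... */ }
}

// Runs only the architecture-level spec, e.g. as a quick
// "does the implementation still honour the design?" check
@RunWith(Categories.class)
@IncludeCategory(ArchitectureSpec.class)
@SuiteClasses({ UserAuthSystemSpecTest.class, PasswordHasherTest.class })
public class ArchitectureSpecSuite {}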

Let's look at an example …

 

Example: User Account Management

To illustrate, let's go through an example using a User Account Management system. There may be 40 classes implementing the whole UserMgmtSystem, but the architecture-level test cases focus only on the Facade classes and specify only the "external behavior" the system should provide. They don't touch any of the underlying implementation classes, because those are the implementor's choices, which the architect doesn't want to constrain.

** User Account Management System Spec starts here **

Responsibility:

  1. Register User — register a new user
  2. Remove User — delete a registered user
  3. Process User Login — authenticate a user login and activate a user session
  4. Process User Logout — deactivate an existing user session

Collaborators:

  • Credit Card Verifier — Tells whether the user name matches the card holder
  • User Database — Stores the user's login name, password and personal information
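
Read as code, the responsibilities and collaborators above imply a facade plus two collaborator interfaces roughly like the sketch below. The exact signatures (and the LoginSession type) are assumptions inferred from the unit tests that follow; the facade's method bodies, and the many classes behind them, are deliberately left to the developers. In real code each top-level type would live in its own file.

// Collaborator: the user database the system stores account data in
interface UserDB {
    boolean checkPassword(String userName, String password);
    void addUser(String userName, String password);
    void removeUser(String userName);
}

// Collaborator: tells whether the user name matches the credit card holder
interface CreditCardVerifier {
    boolean checkCard(String userName, String creditCardNumber);
}

// Hypothetical marker for an active session; the real type is up to the developers
interface LoginSession {}

// Facade: the single entry point whose external behavior the spec tests pin down.
// Its internals (and the ~40 classes behind it) are the developers' choice.
public class UserAuthSystem {

    public UserAuthSystem(UserDB userDB, CreditCardVerifier cardVerifier) {
        // wiring of the internal impl-detail classes goes here
    }

    public void registerUser(String userName, String creditCard, String password) { /* ... */ }
    public void unregisterUser(String userName) { /* ... */ }
    public boolean login(String userName, String password) { /* ... */ return false; }
    public void logout(String userName) { /* ... */ }
    public LoginSession getLoginSession(String userName) { /* ... */ return null; }
}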

 

Unit Tests Code

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNull;

import org.junit.Before;
import org.junit.Test;

public class UserAuthSystemTest {

    UserDB mockedUserDB;
    CreditCardVerifier mockedCardVerifier;
    UserAuthSystem uas;

    @Before
    public void setUp() {
        // Set up the mock collaborators and wire them into the facade under test
        mockedUserDB = createMock(UserDB.class);
        mockedCardVerifier = createMock(CreditCardVerifier.class);
        uas = new UserAuthSystem(mockedUserDB, mockedCardVerifier);
    }

    @Test
    public void testUserLogin_withIncorrectPassword() {
        String userName = "ricky";
        String password = "test1234";

        // Define the expected interactions with the collaborators
        expect(mockedUserDB.checkPassword(userName, password))
                .andReturn(false);
        replay(mockedUserDB, mockedCardVerifier);

        // Check that the external behavior is correct
        assertFalse(uas.login(userName, password));
        assertNull(uas.getLoginSession(userName));

        // Check the collaboration with the collaborators
        verify(mockedUserDB, mockedCardVerifier);
    }

    @Test
    public void testRegistration_withGoodCreditCard() {
        String userName = "Ricky TAM";
        String password = "testp";
        String creditCard = "123456781234";

        // The card must be verified and the new user stored in the user database
        expect(mockedCardVerifier.checkCard(userName, creditCard))
                .andReturn(true);
        mockedUserDB.addUser(userName, password);
        replay(mockedUserDB, mockedCardVerifier);

        uas.registerUser(userName, creditCard, password);

        verify(mockedUserDB, mockedCardVerifier);
    }

    @Test
    public void testUserLogin_withCorrectPassword() { .... }

    @Test
    public void testRegistration_withBadCreditCard() { .... }

    @Test
    public void testUserLogout() { .... }

    @Test
    public void testUnregisterUser() { .... }
}

** User Account Management System Spec ends here **
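
For contrast, an implementation-level test written by a developer might look like the sketch below (a fuller version of the hypothetical PasswordHasherTest mentioned earlier). PasswordHasher is an invented impl-detail class that appears nowhere in the architect's spec, and tests like these are expected to change, or disappear, whenever the developers rework the internals.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;

import org.junit.Test;

public class PasswordHasherTest {

    @Test
    public void hashDoesNotLeakThePlaintextPassword() {
        PasswordHasher hasher = new PasswordHasher();
        assertNotEquals("test1234", hasher.hash("test1234"));
    }

    @Test
    public void hashingIsDeterministic() {
        PasswordHasher hasher = new PasswordHasher();
        assertEquals(hasher.hash("test1234"), hasher.hash("test1234"));
    }

    // Minimal stand-in so the sketch is self-contained; the real class would live
    // in the production code and is entirely the developers' choice.
    static class PasswordHasher {
        String hash(String password) {
            return Integer.toHexString(password.hashCode());
        }
    }
}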

 

Summary

This approach ("test" as a "spec") has a number of advantages:

  • There is no ambiguity about the system's external behavior, and hence no room for miscommunication, since the intended behavior of the system is communicated clearly in code.
  • The architect can write the test cases at whatever level of abstraction she chooses. She has full control over what she wants to constrain and where she wants to leave freedom.
  • By elevating the architect-level test cases to the spec of the system's external behavior, they become more stable and independent of changes in implementation details.
  • This approach forces the architect to think carefully about what the "interface" of the subsystem is and who its collaborators are, so the system design is forced to have a clean boundary.