Author Archives: Marco

The Dependency Inversion Principle states that:

  • High-level modules should not depend upon low-level modules. Both should depend upon abstractions.
  • Abstractions should not depend upon details. Details should depend upon abstractions.

Achieving this increases the reusability of the system's modules by loosely coupling the dependencies among them. It also makes the system flexible enough to accept change without harmful redesign or restructuring effort.

Note: high-level modules are the ones that contain the system's complex business logic and flows, while low-level modules are the ones that contain basic, low-level operations such as hardware access, network protocols, etc.

Let's consider the following class diagram that violates the Dependency Inversion Principle:

At first glance, we can see that this system only reads characters from the keyboard and writes them to the printer, nothing more. We can also see that CopyManager (the high-level class) is tightly coupled to KeyboardReader and PrinterWriter (the low-level classes). Now suppose we want to direct the output to a file or to the screen, or to support reading the input characters from the screen or from a file; CopyManager will certainly have to change to support these updates. Imagine further that CopyManager contains complex business logic and flows and is hard to test. It is clearly bad practice to modify a core class like that for every single update or newly added feature.

Applying the Dependency Inversion Principle resolves this, and makes the system more flexible, extensible, modifiable and testable.

A good way to apply the Dependency Inversion Principle correctly is to invert the module dependencies while we think and design. First, identify the high-level modules that contain the complex business logic we do not want to change. Then define an abstraction layer based on the needs of those high-level modules, and make them depend on that abstraction layer.

To make sure we are applying the principle correctly, we should also package the high-level modules together with their abstraction layer in the same package (library), and the low-level modules in different package(s) (libraries). This guarantees total dependency isolation between the high-level and low-level modules. Now we no longer have to worry about the actual input sources from which we read the characters, or the actual output sources to which we write them. See the diagram below, which fulfills the Dependency Inversion Principle:
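The inverted structure can also be sketched in code. This is a minimal sketch, not the original diagram's exact design: the interface names (CharReader, CharWriter) and the concrete sources (a string reader, an in-memory writer) are illustrative assumptions; the point is only that CopyManager depends on abstractions it owns.

```java
// Abstractions defined around the high-level module's needs.
interface CharReader {
    int read();            // next character, or -1 when the input is exhausted
}

interface CharWriter {
    void write(int c);
}

// High-level module: depends only on the abstractions above,
// never on a concrete keyboard, printer, file, or screen.
class CopyManager {
    private final CharReader reader;
    private final CharWriter writer;

    CopyManager(CharReader reader, CharWriter writer) {
        this.reader = reader;
        this.writer = writer;
    }

    void copy() {
        int c;
        while ((c = reader.read()) != -1) {
            writer.write(c);
        }
    }
}

// Low-level modules: they depend on the abstractions, not the other way around.
class StringReaderSource implements CharReader {
    private final String data;
    private int pos = 0;

    StringReaderSource(String data) { this.data = data; }

    public int read() { return pos < data.length() ? data.charAt(pos++) : -1; }
}

class MemoryWriter implements CharWriter {
    private final StringBuilder sb = new StringBuilder();

    public void write(int c) { sb.append((char) c); }

    @Override
    public String toString() { return sb.toString(); }
}
```

Supporting a new source or destination (keyboard, printer, file, screen) now means adding one new class implementing CharReader or CharWriter; CopyManager itself never changes.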

The Single Responsibility Principle states that a class should have one, and only one, reason to change. That means that if we have two reasons for a class to change, we should split it into two different classes instead of one, each concerned with only one responsibility.

To explain this more, let’s have a look at the following class diagram:

[Class diagram: SRP-1.jpg]

From the first look at the above class diagram, you can tell it is a real-world bank account representation, but with two extra responsibilities (besides modeling): saving and printing the bank account. The BankAccount class now has three reasons to change:

  • Updates to the BankAccount class structure itself (properties and methods).
  • Updates to the way the BankAccount is saved (to a database, to files, etc.).
  • Updates to the way the BankAccount is printed (to the screen, to a printer, etc.).

Now, to maintain the Single Responsibility Principle in the above example, we will divide the BankAccount class into three different classes, each with only one responsibility to take care of, and accordingly only one reason to change.

See the class diagram below after applying the Single Responsibility Principle to it.

[Class diagram: SRP-2.jpg]
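The split can be sketched in code as follows. This is a minimal sketch: the original diagram's exact fields are not reproduced here, so the owner/balance properties and the repository and printer class names are illustrative assumptions.

```java
// Responsibility 1: model the bank account.
class BankAccount {
    private final String owner;
    private double balance;

    BankAccount(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    String getOwner()   { return owner; }
    double getBalance() { return balance; }

    void deposit(double amount) { balance += amount; }
}

// Responsibility 2: persist the bank account.
// Changes to how accounts are saved (database, files, ...) land here only.
class BankAccountRepository {
    void save(BankAccount account) {
        // persistence details go here
    }
}

// Responsibility 3: present the bank account.
// Changes to how accounts are printed (screen, printer, ...) land here only.
class BankAccountPrinter {
    String format(BankAccount account) {
        return account.getOwner() + ": " + account.getBalance();
    }
}
```

Each class now has exactly one reason to change, matching the three reasons listed above.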

The Liskov Substitution Principle (LSP) states that any object of a supertype T can be replaced with an object of one of its subtypes S without affecting the correctness of the program. In other words, subtypes should not alter the default behavior of their supertype.

To explain this more; let’s have a look at the following code that violates LSP:

public class Rectangle {

    protected int width;
    protected int height;

    public int getWidth() {
        return width;
    }

    public void setWidth(int width) {
        this.width = width;
    }

    public int getHeight() {
        return height;
    }

    public void setHeight(int height) {
        this.height = height;
    }

    public int calculateArea() {
        return width * height;
    }
}

public class Square extends Rectangle {

    @Override
    public void setHeight(int height) {
        this.height = height;
        this.width = height;
    }

    @Override
    public void setWidth(int width) {
        this.height = width;
        this.width = width;
    }
}

Note: the Square class alters its parent class's default behavior by overriding both the setWidth and setHeight methods, in order to guarantee that the Square's width and height are always equal.

Now, we can create a new Rectangle object and set its width and height, then calculate its area, and we can do the same with the Square.

At first sight, everything looks OK, but actually it is not. Let's look at the following code:

// Test 1
Rectangle r1 = new Rectangle();
r1.setHeight(2);
r1.setWidth(3);
System.out.println("R1 area is: " + r1.calculateArea()); // R1 area is: 6

// Test 2
Rectangle r2 = new Square();
r2.setHeight(2);
r2.setWidth(3);
System.out.println("R2 area is: " + r2.calculateArea()); // R2 area is: 9

The second test shows that the code above violates the Liskov Substitution Principle: when we substituted the Rectangle object r2 with a Square subtype, it produced an undesired area of 9 instead of 6.

OK then, knowing that a Square is an equal-sided Rectangle, what are we going to do to model this correctly without violating LSP?

The key here is to modify the supertype (base class) slightly, by removing both the width and height setter methods and providing both values when instantiating (constructing) a new Rectangle object. See the following code:

public class Rectangle {

    private final int width;
    private final int height;

    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public int getWidth() {
        return width;
    }

    public int getHeight() {
        return height;
    }

    public int calculateArea() {
        return width * height;
    }
}

public class Square extends Rectangle {

    public Square(int width) {
        super(width, width);
    }
}

// Test 1
Rectangle r1 = new Rectangle(2, 3);
System.out.println("R1 area is: " + r1.calculateArea()); // R1 area is: 6

// Test 2
Rectangle r2 = new Square(2);
System.out.println("R2 area is: " + r2.calculateArea()); // R2 area is: 4

Now, with this simple modification, the area is calculated correctly in both tests, whether the actual object is a Rectangle or a Square, without violating the Liskov Substitution Principle.

In software design, using interfaces to provide a level of abstraction can be tricky.

Suppose we have an interface with many methods, and a set of classes implementing that interface. If we now want to create a new class that is interested in only a few methods of that interface, we still have to implement all of the interface's methods, even the ones we do not want in the new class. One developer may leave the bodies of the unwanted methods empty, and another may throw a method-not-implemented exception; either way, this causes inconsistent and unexpected behavior at runtime.

In software design, such an interface is called a polluted or fat interface, because it has too many methods that not all of its implementing classes need to know about. See the following diagram:

The Interface Segregation Principle solves this issue by dividing the polluted interface into smaller ones; each class then decides which interface(s) to implement according to its needs. See the following diagram:
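Since the diagrams themselves are not reproduced here, the idea can be sketched in code. The interface and class names below (Printer, Scanner, MultiFunctionMachine, SimplePrinter) are illustrative assumptions, not taken from the original diagrams.

```java
// After segregation: two small, focused interfaces instead of one fat one.
interface Printer {
    String print(String document);
}

interface Scanner {
    String scan();
}

// A class that genuinely needs both capabilities implements both interfaces.
class MultiFunctionMachine implements Printer, Scanner {
    public String print(String document) { return "printed: " + document; }
    public String scan()                 { return "scanned page"; }
}

// A class that only prints is no longer forced to provide an empty or
// exception-throwing scan() method.
class SimplePrinter implements Printer {
    public String print(String document) { return "printed: " + document; }
}
```

With a single fat interface, SimplePrinter would have had to implement scan() anyway; with the segregated interfaces, it simply does not know scanning exists.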

The Open/Closed Principle is a software design concept that aims to make a design open for extension yet closed for modification. In other words, adding new features to existing source code is done by adding new classes rather than changing the core source code and structure.

Let's consider a SoundProducer that outputs the sound of a Guitar in a way that violates the Open/Closed Principle (see the next diagram). To support a new musical instrument (a Piano), we will have to change the main SoundProducer class to play the new instrument's sound. Then after a short while comes the Flute, then the Clarinet, then the Oboe, and so on. The more instruments we support, the more often the SoundProducer class will change.

A good design makes the SoundProducer closed for modification yet open for extension (see the next diagram), by making SoundProducer depend on an abstraction (Instrument). Adding a new instrument (feature) is then done only by adding a new class that extends the abstract Instrument class and implements its abstract playSound method, without any changes to the main SoundProducer class.
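The design described above can be sketched as follows. The Instrument, SoundProducer, Guitar, and Piano names and the playSound method come from the article; the concrete sound strings and the produceSound method name are illustrative assumptions.

```java
// The abstraction SoundProducer depends on.
abstract class Instrument {
    abstract String playSound();
}

// Concrete instruments: each new instrument is a new subclass.
class Guitar extends Instrument {
    String playSound() { return "strum"; }
}

class Piano extends Instrument {
    String playSound() { return "plink"; }
}

// Closed for modification: supporting a Flute, Clarinet, or Oboe means
// adding one new Instrument subclass; this class never changes.
class SoundProducer {
    String produceSound(Instrument instrument) {
        return instrument.playSound();
    }
}
```

For example, adding a Flute later is just `class Flute extends Instrument { String playSound() { return "toot"; } }`; SoundProducer accepts it unchanged.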