- Strategy is about trade-offs. So says Michael Porter.
Architecture, especially enterprise architecture, is also about trade-offs. In fact, you could say architecture is a technological implementation of strategy, or perhaps a form of technology strategy. This, of course, differs from strategy for a technology firm, which is something else altogether.
To understand strategy, you first have to ask the question: why? The whys lead on to the other questions and their answers.
This post looks at the criteria of good software architecture. I will explain each term, together with why it is an important component of a good architectural system.
Functionality asks whether the system does what it’s supposed to do. Is it fit for purpose? Does it meet the user requirements outlined earlier? No amount of technical wizardry can save a project or architecture if the system built does not meet the requirements captured earlier. Hence it is always better to spend more time capturing the requirements: talking with the project sponsor(s) and the people who will be using the system, maybe even building prototypes, so that ambiguity is reduced or eliminated. Perhaps you, as a SME (Subject Matter Expert), have a better idea of the system the customer requires. “Since you’re not clear about what you want, may I try and show you?” In this case, the customer may not always be right! But on the other hand, do not treat your customer as a Big Moose! Duh! Good requirements capture is a prerequisite for good design, and finally good implementation. Functionality is meeting the user requirements; quality is exceeding them!
When it comes to evaluating a solution, you may be asked: “That’s fine; the system does what’s expected of it. But does it scale?”
A scalable solution can accommodate an increase in load, i.e. an increase in the number of users of the system, without a significant degradation in performance. In other words, can the system serve a larger number of users than envisaged when it was built? When we talk about an architecture being scalable, we encounter two types of scalability.
The first is vertical scaling, or scaling up. This is the easiest because it involves moving the existing system to better, faster hardware, with faster processors and more memory, or increasing the number of processes supporting the application.
The second is horizontal scaling, or scaling out. Scaling out makes the system available on multiple machines and is usually accomplished by sharing the increased load across multiple homogeneous (or, in some cases, heterogeneous) machines. A load balancer is a critical component of such an architecture. The load balancer can be implemented in software, but in most cases it is a specialized piece of hardware.
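The simplest policy a load balancer can apply is round-robin: hand each incoming request to the next backend in turn. A minimal sketch in Python (the server names are hypothetical; a real balancer would forward the request over the network rather than return a tuple):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of backend servers."""

    def __init__(self, servers):
        self._pool = cycle(servers)  # endless round-robin iterator

    def route(self, request):
        server = next(self._pool)    # pick the next backend in turn
        return server, request       # a real balancer would forward here

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assignments = [balancer.route(f"req-{i}")[0] for i in range(6)]
print(assignments)  # each backend receives every third request
```

Production balancers layer health checks, sticky sessions, and weighted policies on top of this basic idea.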
Scaling out can be applied across layered systems as well. Here, you can have multiple web servers, multiple application servers, and finally multiple database partitions and/or master-slave databases.
Scalability is usually looked at from the perspective of load balancing, but there are other dimensions of scalable architectures.
Is the system geographically scalable? Can it serve users across different geographical locations? Is it a true 24x7 system? How available is it? How much downtime will the system have? What availability percentage can it guarantee?
Is the system administratively scalable? Can you have the same distributed system serve multiple organizations? An application like SalesForce.com is an example of an administratively scaled system.
Is the system functionally scalable? How easy is it to add new functionality without breaking the existing architecture/system?
It is a principle of engineering that a bridge is built not for the average load or traffic but for the maximum, or the maximum times a safety factor greater than 1. The same applies here!
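That sizing rule is easy to state as code. A tiny sketch, with the 1.5 safety factor chosen purely for illustration:

```python
def capacity_target(peak_load, safety_factor=1.5):
    """Size a system for peak load times a safety factor, not for the average."""
    if safety_factor <= 1:
        raise ValueError("safety factor must be greater than 1")
    return peak_load * safety_factor

# If peak observed traffic is 4000 requests/second, plan for 6000.
print(capacity_target(4000))  # 6000.0
```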
Performance is one of the more important non-functional requirements expected of a large system, especially a transactional one. Here it is important that certain transactions, such as those in an ATM system, complete within a reliable time scale. Anything less is usually unacceptable to the user and, consequently, to the customer/sponsor paying for the system.
Performance can be measured at different levels. It may be measured as the mean time for a transaction, or as the maximum time expected for a transaction. Why is this important?
For example, consider a customer service system. The CSRs have to respond to customers; they are usually allocated a certain quota of calls to attend to in an hour. But if the system they work with is sluggish, it becomes a drawback: they may resort to gaming it or finding workarounds, and leave the customer with a feeling of poor service through no fault of their own. Hence, performance targets are very much a part of the requirements to be signed off on delivery of a system. Most contracts include an SLA that focuses on the performance aspects of the system. It’s not just “is it fit for purpose?” but “is it fit for use?” Can the customer use it over and over again without any significant degradation in performance and response time? A software system, unlike humans, is built for those heavy loads. It is a machine, after all!
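Collecting both the mean and the worst-case transaction time is straightforward. A minimal sketch, with `sum(range(n))` standing in for a real transaction:

```python
import statistics
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Collect per-transaction timings, then report mean and worst case.
durations = []
for n in range(1000):
    _, elapsed = timed(sum, range(n))  # stand-in for a real transaction
    durations.append(elapsed)

print(f"mean: {statistics.mean(durations):.6f}s  max: {max(durations):.6f}s")
```

The mean tells you the typical experience; the maximum tells you what the unluckiest user saw, which is often what an SLA is written against.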
To quote Wikipedia.org:
“Computer performance metrics include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, Instruction path length and speed up.”
Performance tuning strategies include code optimization, load balancing, caching, distributed computing, and self-tuning.
Code optimization usually means improvements to the algorithms used to perform a certain task. It could also mean devising a new algorithm. It might even involve some sort of hand-tuning or assembly language programming to deliver optimized performance. This is a highly specialized activity and requires a high degree of skill and knowledge.
A caching strategy is used to reduce latency: frequently used items are stored in high-speed memory so that they are brought closer to the point of use.
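In application code, this often amounts to memoizing an expensive lookup. A minimal sketch using Python’s standard-library LRU cache (the profile-lookup function is hypothetical):

```python
from functools import lru_cache

@lru_cache(maxsize=256)  # keep the 256 most recently used results
def lookup_profile(user_id):
    """Stand-in for an expensive fetch (database call, remote API, ...)."""
    return {"id": user_id, "name": f"user-{user_id}"}

lookup_profile(42)                  # first call: computed and cached
lookup_profile(42)                  # repeat call: served from the cache
print(lookup_profile.cache_info())  # hits=1, misses=1
```

The same idea scales up to dedicated caching tiers (e.g. an in-memory store in front of the database), where eviction policy and invalidation become the hard problems.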
Load balancing is another strategy: it spreads the load over multiple distributed systems so that no single layer or system becomes a bottleneck.
Distributed computing spreads the load over multiple processors or systems. It usually relies on dividing a job into discrete tasks that can be performed independently, then aggregating the results to be delivered to the user.
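This divide-then-aggregate pattern can be sketched with Python’s standard concurrency tools. Threads are used here purely for a portable, self-contained example; a real deployment would fan the chunks out to separate processes or machines:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Independent task: sum one slice of the data."""
    return sum(chunk)

def scatter_gather_sum(data, workers=4):
    """Split the job into chunks, run them concurrently, then aggregate."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_sum, chunks))  # scatter
    return sum(partials)                                # gather

print(scatter_gather_sum(list(range(1001))))  # 500500
```

The key property is that the tasks are independent, so they can run anywhere; only the final aggregation needs to see all the partial results.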
Finally, self-tuning systems can take care of themselves. These are also known as autonomic systems, i.e. they are self-diagnosing and exhibit adaptive behavior.
When we talk about availability with reference to computer systems or services, what we are referring to is high availability. As the term suggests, high availability is the expectation that the system/application/service is available at all times, or almost all the time. Thus, we expect the system to be down very rarely, i.e. downtime is minimized or negligible. High availability implies a highly reliable system.
There are two kinds of downtime: planned and unplanned. Planned downtime allows us to keep the system available by using redundancy where applicable, or phased, rolling maintenance and system updates. Highly available systems try to reduce single points of failure (SPOFs) by adding redundancy.
A service level agreement (SLA) specifies the expected availability, usually as a percentage of uptime, which translates into a number of hours of allowed downtime over a period of time.
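The translation from an availability percentage to allowed downtime is simple arithmetic, and worth doing before signing an SLA. A small sketch:

```python
def annual_downtime_hours(availability_pct):
    """Convert an SLA availability percentage into allowed downtime per year."""
    hours_per_year = 365 * 24  # 8760, ignoring leap years
    return hours_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> {annual_downtime_hours(pct):.2f} hours/year down")
```

The jump from “two nines” to “four nines” shrinks the downtime budget from roughly 87.6 hours a year to under an hour, which is why each extra nine costs disproportionately more to deliver.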
ITIL V3 Handbook says this about availability: “requires a design that considers the elimination of Single Points of Failure (SPOFs) and/or the provision of alternative components to provide minimal disruption to the business operation should an IT component failure occur. High availability solutions make use of techniques such as Fault Tolerance, resilience and fast recovery to reduce the number of incidents, and the impact of incidents”
To be continued…