Governance – Corporate, IT & SOA


Corporate governance

Corporate, or enterprise, governance establishes the rules and the manner in which an enterprise conducts business, based upon its strategy, marketplace, and principles of doing business. It defines for employees and for business associates the processes that are used to conduct operations and the manner in which people interact.

Beginning with the board of directors and extending throughout the organization, there are many aspects and levels of corporate governance. All aspects of the business are touched in some manner. Governance is applied to the major functional areas of an organization: organizations govern their financial assets, human resources, customer relations, intellectual property portfolios, and their Information Technology.

Quote of the day:
It is the mark of an educated mind to be able to entertain a thought without accepting it. – Aristotle


Availability


Availability is a measure of the accessibility of a system or application, excluding scheduled downtime. It can be expressed as the ratio of expected system uptime to total system time, where total time includes both the uptime and the recovery time when the system is down.
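As a rough illustration (a minimal sketch with assumed figures, not drawn from any particular system), availability is often derived from the mean time between failures (MTBF) and the mean time to repair (MTTR):

```python
# Availability expressed as uptime relative to total time,
# using MTBF (mean time between failures) and MTTR (mean time to repair).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of total time the system is expected to be up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Assumed example figures: the system runs 500 hours between failures
# and takes 2 hours to recover.
if __name__ == "__main__":
    a = availability(mtbf_hours=500, mttr_hours=2)
    print(f"Availability: {a:.4%}")  # -> Availability: 99.6016%
```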

Even if you’re on the right track, you’ll get run over if you just sit there.

Will Rogers

Source: http://quotes4all.net/quote_1235.html



Web-services, SOA, BPM & Cloud Computing – X


No series on cloud computing would be complete without alluding to Google’s audacious attempt at building an OS around the cloud computing paradigm.

Yes, I’m referring to the Google Chrome OS, a spin-off of the Google Chrome browser. The open-source versions of the OS and the browser are the Chromium OS and the Chromium browser, respectively.

The Google Chrome OS

The Google Chrome OS is targeted specifically at netbooks: not the primary device of use, but a secondary, portable, lightweight device. The OS is small enough to be loaded onto a USB drive and booted from that very same device. Applications on local storage are few and far between; most useful user applications are based in the cloud. The user interface is minimalist, much like that of the Chrome browser. Boot time is very quick, with Google software engineer Martin Bligh demonstrating a bootup time of four seconds.


Web Services, SOA, BPM, and Cloud Computing – IX


A discussion on web services, SOA, BPM and Cloud Computing would be incomplete without a post on grid computing.

Wikipedia starts its article on grid computing by saying that “Grid computing is the combination of computer resources from multiple administrative domains for a common goal.”

So what does this mean?

In the first place, computing is about achieving a piece of work; what the work consists of is irrelevant to the definition.

Grid computing is about achieving or completing a humongous piece of work which, if given to a single computer, would take an inordinately large amount of time and would also, in all probability, lock up the CPU cycles of the machine, leading to that notorious reaction: “My computer froze.”

For people who are perhaps not technically minded, consider SETI@home (the Search for Extra-Terrestrial Intelligence). This is a volunteer computing project that utilizes the unused CPU cycles of volunteer home and work PCs to analyze radio signals emanating from space for signs of some sort of intelligent life out there. It seeks to answer that philosophical question: “Are we here on Planet Earth the only ones? It cannot be – there must be someone out there in the vast reaches of the universe.”

What this implies is that each volunteer machine downloads a set of radio-signal data, analyses it and sends the results back to the SETI project server. The SETI@home application is packaged as a screen-saver to be loaded onto the client machine.

This, in technical terms, is known as CPU scavenging, or volunteer computing.
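As a minimal sketch of the volunteer-computing loop (an in-process simulation with made-up “radio sample” data; the real SETI@home client talks to a remote project server):

```python
import queue

# Simulated master: a queue of work units (here, lists of "radio samples").
work_units = queue.Queue()
for unit_id in range(3):
    work_units.put((unit_id, [unit_id * 10 + k for k in range(5)]))

results = {}  # master-side aggregation of reported results

def volunteer_loop():
    """Each volunteer repeatedly: fetch a work unit, analyze it, report back."""
    while not work_units.empty():
        unit_id, samples = work_units.get()            # 1. download a work unit
        signal_strength = sum(samples) / len(samples)  # 2. analyze it locally
        results[unit_id] = signal_strength             # 3. report to the master

volunteer_loop()
print(results)  # master aggregates: {0: 2.0, 1: 12.0, 2: 22.0}
```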


ITIL – Service Strategy


The ITIL Service Lifecycle consists of 5 phases, the first of which is Service Strategy, i.e. the process of designing, developing and implementing service management as a strategic resource.

Service Strategy is at the core of the Service Lifecycle; the phases Service Design, Service Transition and Service Operation implement this strategy.

What is service strategy?

Strategy can have many definitions, but its main goal is to identify the competition and to compete against it by differentiating oneself from the rest and delivering superior performance.

ITIL looks at the 4 Ps of strategy:

Perspective – clear vision & focus

Position – a stance that differentiates us from the competition

Plan – a notion or idea of how the organization should develop its competencies

Pattern – maintaining consistency in decisions and actions

A strategic perspective provides direction. A directionless strategy leads to a rudderless organization. Strategy needs to set a direction, a horizon to cross.

Positioning defines the organization; it is the defining characteristic; you cannot be all things to all people. Positioning narrows focus; it zones in on the factors that set the organization apart.

Positioning is the result of 3 broad inputs: market analysis, internal corporate analysis and competitor analysis.

Positioning is not static; it evolves and changes over time.

Strategy as plan focuses on the steps to be taken to implement strategy.

Strategy as pattern is the set of procedures followed that lead to recurring successes.

Service Strategy is about answering hard questions: What do we specialize in? What are our strategic assets? What are our competencies? What kind of services can we offer? How are we different?

Development and application of service strategy requires constant finessing; service strategy has to be forward-looking.

Strategy is the cornerstone of organizational success.

Have a great day!

Source: Service Strategy based on ITIL V3 – A Management Guide

____________________________

Quote of the day:
The most dangerous strategy is to jump a chasm in two leaps. – Benjamin Disraeli


Web Services, SOA, BPM, and Cloud Computing – VIII


A series on cloud computing would not be complete without a post on virtualization.

Now, what is virtualization?

A definition of the term virtualization would go as follows:

Virtualization is technology that allows application workloads to be managed independent of host hardware. Multiple applications can share a single, physical server. Workloads can be moved from one host to another without downtime. IT infrastructure can be managed as a pool of resources, rather than a collection of physical devices.

The cornerstone of virtualization technology is the ability of a single hardware system to support multiple virtual machines, thus optimizing the use of the hardware and providing more bang for the buck.

Virtualization is about multi-tenancy, i.e. the ability to have multiple applications residing on the same infrastructure.

Virtualization is most often implemented on x86 servers as either operating system (OS) virtualization or hypervisor-based virtual machines. OS virtualization uses a single instance of an operating system (such as Microsoft® Windows® or Linux®), with the help of virtualization software, to host a large number of individual workloads.

The hypervisor approach is completely different. A hypervisor is code shared among the guest operating systems and the hardware. The guest operating systems can be various versions of Windows and Linux, and can be mixed and matched on the same system. (For example, Windows 2000, Windows Vista, SLES 9 with Xen, and RHEL 5 without Xen can all operate simultaneously, including standard and enterprise varieties of each, as well as both 32-bit and 64-bit implementations.)

The hypervisor ensures that each operating system instance gets its proper share of hardware resources and also that activity in one virtual machine (VM), or partition, does not impact any other partition or the overall system.
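To make this concrete, here is a minimal sketch (an assumption on my part: it presumes the libvirt-python bindings and a local QEMU/KVM hypervisor with libvirtd running, none of which the white paper mandates) that lists the guest VMs sharing one physical host:

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Read-only connection to the local QEMU/KVM hypervisor
# (assumes libvirt-python is installed and libvirtd is running).
conn = libvirt.openReadOnly("qemu:///system")

# Each domain is a guest VM sharing this physical host.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}")

conn.close()
```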

Virtualization is about the intelligent sharing of computing and storage resources. It is about being dynamic with your allocation of those resources. It is juggling multiple balls, or applications, transparently, without the complexities becoming evident to the user of the applications.

Virtualization allows you to be flexible with your allocation of resources. It allows for failover, load-balancing, disaster recovery and real-time server maintenance.

The complexity of virtualization demands a single interface from which this infrastructure can be managed.

Virtualization lends itself to a reduction in cost, i.e. in the spending on hardware, and at the same time an increase in the productivity of the hardware installed. However, it is not a silver bullet, and it brings with it complexities that would not arise in a non-virtual infrastructure. The need to balance performance needs against maximizing workload is what virtualizing organizations grapple with.

Virtualization can help you maximize the value of your IT dollars:

  • Business agility in changing markets
  • Computing resources to meet both current and future needs within the existing power envelope
  • An IT infrastructure that is flexible and can scale with business growth
  • Performance that can handle the most demanding applications
  • An industry-standard platform architecture
  • Intelligent management tools
  • Servers with enterprise attributes, regardless of their size or form factor

 

Virtualization can help you improve IT services:

  • Rapidly provision new application workloads, cutting setup time from days or weeks to minutes
  • Improve IT responsiveness to business needs
  • Eliminate planned downtime by moving workloads before hardware is serviced
  • Greatly reduce, or even eliminate, unplanned downtime

Virtualization strategy is the backbone of cloud computing solutions. Without virtualization, cloud computing would have a mountain to climb; with virtualization, it’s a case of going up a hill but appearing to come down a mountain!

Have a great day!

Source: IBM white paper, “Virtualization strategy for mid-sized businesses: IBM and VMware virtualization benefits for mid-sized businesses”, April 2009.

Quote of the day:
In a time of universal deceit, telling the truth is a revolutionary act. – George Orwell

 


Web Services, SOA, BPM, and Cloud Computing – VII


What, in heaven, is cloud computing? If you think you already know what cloud computing is about, then this post is not for you. But you can choose to read on, if you like.

The understanding of cloud computing can be as hazy as the term itself.

It seems as though there’s this cloud into which your input disappears and from which, again, you receive your output – or that is what all those diagrams depicting cloud computing seem to imply.

Cloud computing seems to be a decidedly cloudy term to define the ability to access your applications wherever you go. Cloud computing harnesses or leverages the power of the internet to give you distributed applications that can be accessed from multiple devices (note that it is devices and not multiple computers; multiple devices include multiple computers – sorry if I sound pedantic!).

Cloud computing definitions include “wherever you go, your applications are” and “the big rental station in the sky.”

The latter because, in a multi-tenant cloud computing system, you are in effect sharing resources with other entities or enterprises, all transparent to you and to each other. Hey, what am I saying? Cloud computing is inherently multi-tenant – ask any blogger! But maybe we’re just referring to virtualization, eh? I am getting ahead of myself here; let’s just start with the definition of cloud computing.

I have defined Cloud Computing elsewhere as:

Cloud computing is outsourcing your computing requirements on demand, allowing an agile response to ever-changing business needs.

Cloud computing is a service. It is usually classified into 3 kinds:

Software As A Service (SAAS)

Platform As A Service  (PAAS)

and Infrastructure As A Service (IAAS).

Wow, you might say (sarcastically), that’s just fine; you’ve simplified it further for me. Now I’m even more confused!

Software As A Service is exactly that; it is a service that fulfils a certain application need, not locally but in the cloud. To give you an example, webmail services such as Yahoo! Mail, GMail & Windows Live Mail are the simplest form of software as a service. Yes, webmail has been around for quite some time, you may say. But then it’s the definitions that are new, not the service itself. You may not remember the term ASP (Application Service Provider); well, SAAS is just a new term for ASP. At your workplace, you may encounter CRM services such as SalesForce.com and Zoho CRM. These are examples of SAAS applications offered as cloud offerings. They are a boon to non-profits and SMEs, allowing them to ramp up quickly without any major up-front capital expenditure. Other relevant examples of SAAS are QuestionPro.com and SurveyMonkey, internet-based market research tools for individuals and corporates.

Cloud application services or "Software as a Service (SaaS)" deliver software as a service over the Internet, eliminating the need to install and run the application on the customer’s own computers and simplifying maintenance and support. (Sounds suspiciously like ASP!)

  • Network-based access to, and management of, commercially available (i.e., not custom) software
  • Activities that are managed from central locations rather than at each customer’s site, enabling customers to access applications remotely via the Web
  • Application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics
  • Centralized feature updating, which obviates the need for downloadable patches and upgrades.

At the next level is Platform As A Service. If you are a blogger and have your blog hosted via a blogging service such as WordPress.com, then you are using a Platform As A Service. WordPress.com, in this case, is the platform provider whose blogging service you use to create and post content.

Cloud platform services, or "Platform as a Service (PaaS)", deliver a computing platform and/or solution stack as a service. PaaS facilitates the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers.

Finally, there’s IAAS or Infrastructure As A Service. If you decide to host your own web-site or move your blog to a hosting service such as VSNL or Yahoo! Small Business, then you are accessing Infrastructure As A Service. You have access to the infrastructure provided by the hosting service provider, and you can install your applications within the constraints of the supported programming languages, the supported database and the storage space provided. Of course, there are other providers that you may be more familiar with if you are technically minded, such as Amazon EC2 (IAAS) and Google App Engine (strictly speaking, a PAAS offering). I chose to give you examples that we are all familiar with in our everyday use of the internet.

Cloud infrastructure services, or "Infrastructure as a Service (IaaS)", deliver computer infrastructure, typically a platform virtualization environment, as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service. The service is typically billed on a utility computing basis, and the amount of resources consumed (and therefore the cost) will typically reflect the level of activity. It is an evolution of web hosting and virtual private server offerings.

Other, not so well-known, cloud offerings include Network As A Service (NAAS), Storage As A Service and Security As A Service, though the third may be considered a subset of Software As A Service.

When cloud computing is mentioned, the related words we hear are cost savings, the ability to provide for dynamic computing needs (via hybrid clouds and/or public clouds) and the efficiencies gained by being able to reallocate vital resources to more productive uses. Cloud computing is also referred to as utility computing, since resources in the cloud can now be turned on or off as dictated by our requirements. IT has become a commodity. So how elastic are its demand & supply curves? And I’m not being laconic!

But besides big dollar savings for large firms, it is also about how small firms can gain a competitive edge by being able to focus on delivering value rather than worrying about large infrastructural investments; applications can be sourced from cloud computing providers – leased may be the term more familiar to cloud computing advocates. The option to bring these applications in-house, to private or internal clouds, resides with the enterprise, depending upon how its funding and, by extension, its ramp-up progresses. The economics of cloud computing, for SMEs and non-profits, is very compelling indeed.
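To make those economics concrete, here is a back-of-the-envelope sketch (all prices and utilization figures are assumed, not vendor quotes) comparing buying for peak load with renting on demand:

```python
# Back-of-the-envelope capex vs. on-demand comparison (all figures assumed).

servers_needed_at_peak = 10       # sized for peak load
server_purchase_price = 3000.0    # dollars per server, up front
instance_hour_rate = 0.40         # dollars per instance-hour in the cloud
avg_utilization = 0.25            # fraction of peak capacity actually used
hours_per_year = 24 * 365

# Owning: you must buy for the peak, even if it sits idle most of the time.
capex = servers_needed_at_peak * server_purchase_price

# Renting: you pay only for the capacity you actually use.
instance_hours = servers_needed_at_peak * avg_utilization * hours_per_year
opex = instance_hours * instance_hour_rate

print(f"Up-front purchase: ${capex:,.0f}")   # $30,000
print(f"One year on demand: ${opex:,.0f}")   # $8,760
```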

That’s all for now! You can keep your head in the clouds! And don’t sport a clouded countenance! Just kidding!

Have a great day!

Architecture – Understanding The Criteria – V


The final post in this series on Architecture – Understanding The Criteria.

This post consists of the few remaining terms that I overlooked in my previous posts.

Distributed:

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.

Distributed computing is closely related to what is known as parallel computing. Of late, one form of it has come into vogue and is referred to as grid computing.

Distributed systems are systems that communicate with each other via the network; this implies different systems residing on different physical hardware. A web architecture is by nature distributed: the client is the browser that resides on the user’s machine; the presentation tier may or may not be co-located with the business objects on a single machine; and finally the database resides on a database server.

Also, SOA (Service-Oriented Architecture), which implies web services, is by its very nature distributed. Message-Oriented Middleware adds another layer of abstraction and decoupling to distributed systems.

Grid computing involves the asynchronous distribution of the tasks of one or several jobs to slave systems; the slaves perform the tasks and send their results back to the master, where the results are aggregated and presented to the job submitter. A very relevant example would be Google search indexing, a grid computing exercise in which several software agents, called robots, index the web and update the Google cache and index database.

 

Variability:

Variability is how well the architecture can be expanded or modified to produce new architectures that differ in specific, preplanned ways. Variability mechanisms may be run-time, compile-time, build-time or code-time. Variability is important in a product setting where the architecture is the underlying architecture behind a whole set of related products, or product line.

Conceptual integrity:

Conceptual integrity is the unifying theme underlying an architecture. Simply put, the architecture does similar things in a similar fashion. The architecture should exemplify consistency, have few data and control mechanisms and use a small number of patterns. This makes it easy for developers to work on the system, and nasty surprises are avoided.

Elasticity:

Though this seems specific to cloud computing, I will include it here. Elasticity is the ability of the supply of computing resources to react dynamically, in a preplanned manner, to changes in demand – either increases or decreases. This is a feature of cloud computing offerings, as per the Service Level Agreements (SLAs) the enterprise may have with its cloud computing suppliers. The outsourcing can take various forms: in-sourcing for normal, anticipated computing demand with outsourcing for spikes in demand; complete outsourcing to a single provider (single sourcing); outsourcing to a single provider with surges in demand outsourced to another cloud computing provider; or outsourcing to multiple providers to reduce downtime (multi-sourcing).
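A minimal sketch of the reactive scaling rule at the heart of elasticity (the thresholds, bounds and doubling policy are assumptions for illustration, not taken from any particular SLA):

```python
# Reactive autoscaling rule: adjust capacity to demand within preplanned bounds.
# All thresholds and limits are assumed values for illustration.

MIN_INSTANCES, MAX_INSTANCES = 2, 20
SCALE_UP_AT, SCALE_DOWN_AT = 0.80, 0.30   # utilization thresholds

def desired_capacity(current: int, utilization: float) -> int:
    """Return the new instance count given current count and utilization."""
    if utilization > SCALE_UP_AT:
        return min(current * 2, MAX_INSTANCES)   # surge: double capacity
    if utilization < SCALE_DOWN_AT:
        return max(current // 2, MIN_INSTANCES)  # lull: halve capacity
    return current                               # within bounds: hold steady

for load in (0.85, 0.95, 0.50, 0.10):
    print(load, "->", desired_capacity(current=4, utilization=load))
```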

Reference: Clements, Kazman & Klein – Evaluating Software Architectures: Methods and Case Studies.


That’s all!

Have a great week!

An intellectual is a person who has discovered something more interesting than sex.

Aldous Huxley

Source: http://quotes4all.net/quote_1650.html



Architecture – Understanding The Criteria – III


This is the 3rd post in the series Architecture – Understanding The Criteria.

Here we will look at the following aspects of an enterprise architecture:

Interoperability, Configurability, Portability, Resilience and Fault-Tolerance.

Interoperability:

Wikipedia defines interoperability as a property referring to the ability of diverse systems and organizations to work together (inter-operate). When we talk about software and interoperability, what we refer to is the ability of different systems to talk to, or communicate with, each other in a meaningful manner, thus allowing businesses to unlock value from their existing systems. Interoperability could be within the enterprise, allowing departmental systems to integrate their processes, or it could be collaboration with suppliers’ and/or customers’ systems, thus integrating the information systems in the extended supply chain.

Interoperability has everything to do with standards; it is about executing software projects using defined communication protocols and file formats.

I have also talked about interoperability specifically with reference to web services and SOA, in my series on Web Services, SOA, Cloud Computing and BPM here.

You can read more about achieving software interoperability here.

Configurability or configurable systems:

Is your system configurable? Can it be configured to meet my specific requirements? That is the question from the customer. What do you answer?

Most modern systems are configurable. That is, you can either turn on or turn off certain features of the software system using parameter values that are stored in a system file or a database. The configuration features are usually accessible to the system administrator or a manager role in an enterprise system. [Before you get too confused: for the layperson, the simplest configurable system to think of is a software program that you install on your machine. The program might be a shareware program with some limited features; after purchasing the system for a small fee, you can unlock all its features by keying in, or cutting and pasting, the license key.]

For more savvy users, especially those who are familiar with web browsers, the ability to configure your privacy levels for browsing is another example of a configurable software system.

This extends to large enterprise solutions built using decoupled modules. The modularity allows certain modules to be turned on or off depending on what the user’s requirements are. Configurable systems are more prevalent in packaged software, but any good software architect/engineer worth his salt will include these practices when building custom software as well.
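As a minimal sketch of such parameter-driven toggling (the file format and feature names are made up for illustration):

```python
import configparser

# A configuration file controls which features are switched on.
# (In a real system this would live on disk or in a database;
# the feature names here are made up for illustration.)
CONFIG_TEXT = """
[features]
reporting = yes
audit_trail = no
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

def feature_enabled(name: str) -> bool:
    """Look up a feature toggle; default to off if it is not configured."""
    return config.getboolean("features", name, fallback=False)

if feature_enabled("reporting"):
    print("Reporting module loaded")
if not feature_enabled("audit_trail"):
    print("Audit trail disabled by configuration")
```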

Configurable systems are not limited to the domain of software. When you buy a car or a computer, there too you specify the configuration or requirements you have in mind, as per your needs and/or what you can afford.

Configurability, like portability, provides flexibility.

Portability:

Portability is the art of writing software so that it can run on different operating systems and/or hardware architectures.

Most technical personnel understand portability in terms of porting an application to a different operating system and/or system architecture. This is a task to be performed by versatile programmers who are familiar with both the architecture being ported from and the architecture being ported to. There are many reasons why this might happen: one of the principal causes is the upgrading of the hardware the software runs on; another could be migration from one operating system to another where the enterprise would like to retain the applications in its IT portfolio, if possible.

Web applications, Java technology applications and now .NET applications avoid this dilemma. In the case of web applications, all you need is a browser; the operating system you are running on is irrelevant. As for Java and .NET, as long as the operating system you are running on has a JVM or CLR, your system should be able to function without any modifications.

In brief, portability is about being able to move your applications to another platform with ease.

More about data portability here.
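Portability also shows up on a small scale inside the code itself; here is a minimal sketch of platform-agnostic file handling in Python (the directory and file names are illustrative only):

```python
import os
from pathlib import Path

# Platform-agnostic path handling: pathlib composes paths with the
# correct separator on Windows, Linux or macOS alike.
config_dir = Path.home() / ".myapp"        # illustrative directory name
config_file = config_dir / "settings.ini"

os.makedirs(config_dir, exist_ok=True)     # works identically on every OS
config_file.write_text("[features]\nreporting = yes\n")

print(f"Wrote {config_file} on {os.name}")
```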

Resilience:

The dictionary defines resilience as the power or ability to return to the original form or position after being bent, compressed, or stretched (elasticity), or the ability to recover readily from illness, depression, adversity, or the like (buoyancy).

When we talk about resilient systems, what we refer to is the ability of software systems to recover gracefully, or degrade gracefully, without a sudden loss of functionality. “Resilient software system” is by itself a generic term; it includes the ability to be fault-tolerant and reliable. In today’s world, an enterprise is only as resilient as its IT systems. This especially holds true for financial institutions, where IT forms the backbone of daily business and transactions.

Fault-tolerant:

When we refer to fault-tolerance, we are referring to software systems that are able to degrade gracefully. At no point are we referring to people, though there is definitely scope for fault-tolerance in human beings as well, especially project managers!

The ability to recover from errors and continue functioning, albeit at a decreased level of functionality or performance, characterizes a fault-tolerant system. Fault-tolerant systems originated with software written for the NASA space program, where it was critical that software continue to function even if a critical error occurred.

Fault-tolerant systems strive to catch errors where they occur, but, like human beings, fault-tolerant systems may not be infallible, and thus a certain amount of redundancy may be built in to help cope with unanticipated failure.

The basic characteristics of fault tolerance require:

  1. No single point of failure
  2. No single point of repair
  3. Fault isolation to the failing component
  4. Fault containment to prevent propagation of the failure
  5. Availability of reversion modes

 

Anticipating failure is key to building a good fault-tolerant system. In this case, failure truly is an option, and providing for it is key to building a fault-tolerant architecture.
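As a minimal sketch of this mindset (the replicas here are simulated stand-ins, not a real cluster), redundancy with failover tries each replica in turn and degrades gracefully rather than crashing:

```python
# Failover across redundant replicas: anticipate failure and degrade
# gracefully instead of crashing. The replicas here are simulated.

def replica_a():
    raise ConnectionError("replica A is down")

def replica_b():
    return "result from replica B"

REPLICAS = [replica_a, replica_b]  # no single point of failure

def call_with_failover():
    for replica in REPLICAS:
        try:
            return replica()               # first healthy replica wins
        except ConnectionError as err:
            print(f"failed over: {err}")   # contain the fault, try the next
    return "degraded: serving stale cached result"  # graceful degradation

print(call_with_failover())  # fails over from A, then answers from B
```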

Other key features of fault-tolerant systems are replication and modularity. The separation of concerns helps to simplify system design and to decouple and localize system failures.

To be continued….

Have a great day!


Architecture – Understanding the criteria – II


Continuing with Understanding the criteria….

Security:

When we refer to IT security, we usually look at access management, i.e. authentication and authorization.

Authentication simply means establishing that you are who you say you are. It is also referred to as identity management.

Authorization asks: are you authorized to use the given service / application / system, i.e. are you allowed access? Do you have the rights to use the resource? Authorization is usually a group- or role-specific policy; rarely is authorization set at the individual level. Authorization can also be implemented, in a charging system, as: do you have credits to be allowed to use the resource? This, of course, would be at the level of the individual or an entity such as an organization. Examples of this are encountered in a utility computing model, say cloud computing, or even in mobile phone services; in the latter, the services are degraded once the credit limit is reached and are restored once the customer tops up his account with the required minimum amount. Authorization is also referred to as access management.

A robust access management system includes verifying identity and entitlement, granting access to services, logging and tracking access, and removing or modifying rights when status or roles change.
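As a minimal sketch of role-based authorization (the users, roles and permissions are made-up examples, not from ITIL):

```python
# Role-based authorization: rights attach to roles, not individuals.
# Roles, users and permissions here are made-up examples.

ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "configure"},
    "manager": {"read", "write"},
    "user":    {"read"},
}

USER_ROLES = {"alice": "admin", "bob": "user"}

def is_authorized(user: str, action: str) -> bool:
    """Authorization check: does the user's role grant the action?"""
    role = USER_ROLES.get(user)            # authentication happens earlier
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("alice", "configure"))  # True  (admin role)
print(is_authorized("bob", "write"))        # False (user role: read only)
```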

ITIL talks about information security as being effectively managed if:

  • information is available and usable when required (availability)
  • information is observed by or disclosed to only those who have a right to know (confidentiality)
  • information is complete, accurate and protected against unauthorized modification (integrity)
  • business transactions, as well as information exchanges, can be trusted (authenticity and non-repudiation).

In cases where information is to be protected, cryptography comes into play, with methods such as symmetric encryption, Public Key Infrastructure (PKI, built on asymmetric encryption algorithms) and digital signatures (which ensure non-repudiation). For more, read http://en.wikipedia.org/wiki/Public_key_encryption
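For a flavour of digital signatures and non-repudiation, here is a minimal sketch using the third-party Python cryptography package (my choice of toolkit for illustration; any PKI library would do):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# The private key signs; anyone holding the public key can verify.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Transfer 100 credits to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: the sender cannot repudiate the message")
except InvalidSignature:
    print("Signature invalid: message altered or key mismatch")
```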

A strategy referred to as ‘defense in depth’ is used to secure computer systems from outside attack. Here, the premise is that even if the outer wall is breached, the inner sanctum is still secure; the attack is also time-consuming for the attacker, by which time the breach may be detected and flagged by a good audit-trail system.

You may be more familiar with this when building systems that access the internet and are accessible from it. Here, a Demilitarized Zone (DMZ) adds another layer of security to the firm’s LAN. For more, see http://en.wikipedia.org/wiki/DMZ_(computing)

Usability:

This is the most overlooked aspect of a solution / application. However clever your system may be, however ingenious the engineers developing it, if the user does not find the application easy to use, then you have hit a brick wall. Resistance from the users can sound the death knell of any application. A good application should be intuitive to use and should leverage the existing habits of its users. Forcing users to change their ingrained habits is always difficult. This is especially true of transactional systems and customer-facing applications, where responsiveness is key; a non-intuitive interface coupled with inadequate training on a new system can lead to frustrated users. In my experience at British Telecom, when a GUI replacing the old mainframe UI was introduced to the customer service representatives, the sluggish responsiveness of the new UI led experienced users to switch back to the old system so that they could finish their quota of calls. CSRs are very stressed individuals, and you do not want a system to add to their discomfort.

These, in my opinion, are the most relevant criteria for evaluating an architecture. Their importance may vary from system to system. But a good and simple way of evaluating a software architecture is to assign a weight to each criterion and a score in the range 1 – 10 for each criterion. This will give you a rough and ready estimate of how well your architecture stands up to scrutiny.
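Here is a minimal sketch of that weighted-scoring approach (the criteria, weights and scores are made-up examples):

```python
# Weighted scoring of an architecture: weight each criterion, score it
# from 1-10, and combine. Weights and scores are made-up examples.

weights = {"security": 0.30, "usability": 0.25,
           "portability": 0.20, "fault_tolerance": 0.25}

scores = {"security": 8, "usability": 6,
          "portability": 7, "fault_tolerance": 9}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should sum to 1

overall = sum(weights[c] * scores[c] for c in weights)
print(f"Weighted architecture score: {overall:.2f} / 10")  # -> 7.55
```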

Have a good day!

To be continued ……

