
Friday, April 20, 2007

Contract-First or Code-First???

Just stumbled upon this blog entry this morning. Quite interesting. Give it a read here...

Thursday, December 16, 2004

Booch on Software Factories Vs MDA/UML

A Software Factory is a development environment configured to support the rapid development of a specific type of application. While Software Factories are really just the logical next step in the continuing evolution of software development methods and practices, they promise to change the character of the software industry by introducing patterns of industrialization. The methodology was developed at Microsoft.

More on Software Factories at:

Grady Booch fires back on Software Factories in an article on IBM's developerWorks, responding to many of the claims put forth about the advantages of Microsoft's Software Factories over MDA using UML. Citing factual inaccuracies and a confusion between the use of tools and language definition, he points out several statements that he considers false. Read the complete article here.

Information Via: ServerSide.NET

Thursday, October 28, 2004

Smart Client Architecture and Design Guide Released

MSDN has just released a new architecture and design guide for Smart Clients which provides information on several topics for those creating smart client applications. Issues addressed include data handling, connection state management, security, and threading.

The definition of "smart client" depends on requirements and implementation details, but all smart clients share the following characteristics:
  • Make use of local resources
  • Make use of network resources
  • Support occasionally connected users
  • Provide intelligent installation and update
  • Provide client device flexibility

To learn more about Smart Clients, this article by David Hill would be helpful.

Access the Design Guide here...

Thursday, September 30, 2004

Find Memory Leaks and Optimize Memory Usage in Programs Written in C#, VB.NET or Any Other .NET Language

This post is an update to my previous post on Circular References / Memory Leaks / other baddies.

Having a garbage-collected runtime removes one of the biggest sources of program errors: memory allocation errors. Unfortunately, memory leaks are still a reality. A memory leak can occur if an instance is unintentionally referenced from some other long-lived instance, or from a static field; in that case the instance cannot be garbage collected. A very common unintentional reference is an event handler that is never removed; a sketch of that case follows below.
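Here is a small sketch of the event-handler case; it is only an illustration (the Publisher and Subscriber classes are invented for this post). The publisher's event holds a delegate that references the subscriber, so the subscriber stays reachable, and therefore uncollectable, for as long as the publisher lives:

using System;

public class Publisher
{
    public event EventHandler DataChanged;

    public void RaiseDataChanged()
    {
        if (DataChanged != null)
            DataChanged(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    private Publisher publisher;

    public Subscriber(Publisher p)
    {
        publisher = p;
        // Subscribing stores a delegate (which references this instance)
        // inside the publisher's event.
        publisher.DataChanged += new EventHandler(OnDataChanged);
    }

    private void OnDataChanged(object sender, EventArgs e) { /* ... */ }

    // Until this is called, a long-lived publisher keeps this subscriber
    // alive and it can never be garbage collected.
    public void Detach()
    {
        publisher.DataChanged -= new EventHandler(OnDataChanged);
    }
}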

Here is a .NET Memory Profiler that, as its vendors claim, helps locate instances that are being referenced unintentionally and tells you why an instance has not been garbage collected.

Tuesday, September 14, 2004

Model Driven Architecture (MDA)

This post intends to give some information on MDA, following up on my post on Formal Methods, UML & OCL.

MDA stands for Model Driven Architecture. It is a framework for software development defined by the OMG. It is an approach to creating designs that can cope with multiple technology deployments of a software system, and it is based on widely used standards like the Unified Modeling Language (UML). The intention of MDA is to create machine-readable models that can be understood by automated tools that generate schemas, code skeletons, testing models, test packs, and integration code for multiple platforms and technologies.

The central idea of the MDA is to develop and maintain an abstract design of a system that can be automatically transformed into multiple platform designs and finally transformed into the code that will realize those deployments. The core of the MDA depends on three models that are created as part of the software development process:
  1. Platform Independent Model - The PIM is a highly abstracted model that is independent of any implementation technology. It describes a software system that supports a part, or the whole, of a business. The PIM may include generic functions, scenarios and class descriptions.
  2. Platform Specific Model - Using the PIM as a foundation, it is transformed into one or more platform specific models, each of which describes in detail how the PIM is implemented on a specific platform or technology. PSMs are created depending on the platforms across which the software system is going to be deployed - one per platform or technology. It is common to have many PSMs per PIM.
  3. Code - The detailed designs defined in the PSMs are transformed into code in the final step of the MDA software development process.
MDA is fundamentally a tools-based approach. While one or two hardy pioneers have implemented MDA using only a UML modelling tool, a text editor and a steely determination to keep their code and models synchronised, most of the benefits of MDA come from having generators that create code, test scripts, database schemas and other development artifacts directly from models. Some researchers and consultants have built their own model transformation tools, and there is some interesting work on using XSLT to transform models expressed in OMG's XMI (XML Metadata Interchange) exchange format. For most people, however, implementing MDA means buying a vendor's MDA tool. Fortunately, because MDA is based on standards like UML, XMI and MOF (Meta Object Facility), buying a tool doesn't mean that you're permanently locked into one vendor's product.
When the visions of MDA are realized, it would bring a number of benefits to the software development community. The two main benefits are:
  1. Productivity - The developer focuses on developing a PIM. From the PIM, the PSMs and the code are created automatically via transformations. Because the focus is on the PIM, many of the technical details of the underlying technologies and platforms need not be considered. The majority of the code is produced by the automated transformation process, so relatively little code needs to be written by hand (yes, coding will still happen). With less focus on coding and detailed design for specific platforms, developers can spend more time on the business problems at hand. This should ensure a better business fit and, hopefully, a happier user community.
  2. Portability - Portability is achieved via the PIM, which is transformed into PSMs for the multiple platforms on which deployment will take place. With the transformation from PIM to PSM automated, the PIM becomes totally portable.

There are a number of downsides to MDA as it exists today:

  1. Current tools (where they exist) for automatic transformation from PIM to PSM are not yet sophisticated enough. These automated transformation tools rely heavily on transformation definitions and rules.
  2. PIMs, if defined loosely, might not deliver the systems required. To ensure that the PIMs, and the subsequent PSMs and code, align with business requirements, the PIMs need to be defined precisely. Imprecise definitions will lead to faulty and incomplete systems that may create a huge maintenance overhead.
  3. Portability through transformation from PIM to PSM will probably be catered for on the popular platforms, but may remain an issue for less popular ones. Emerging technologies may also be plagued by a lack of automated transformation tools in the early stages of their release.
Early adopters are already using MDA very effectively on real applications, and this will increase over the next few years as major suppliers like Sun, IBM and Microsoft ship MDA tools to their customers. Sun and IBM already provide some MDA support via NetBeans and Eclipse, and there are strong hints, including this speech by Bill Gates, that Microsoft will soon ship model-driven tools.

Monday, September 13, 2004

Object Spaces and NHibernate

Database interaction via the FCL centers on retrieving a static snapshot of some portion of the database and manipulating it via the dataset, which mimics the RDBMS in almost every way. The problem with the dataset is that it doesn't fit particularly well with modern object-oriented application design. Whereas datasets hold tabular data, we tend to code using objects. Datasets have foreign key relationships; our domain objects use references. Where we want to use only methods, datasets require a certain amount of SQL code. Of course, some of these problems can be solved through the use of "strongly typed" datasets, but the fact remains that you are changing modes as you move from your domain model to your data access and back again. Depending on how you choose to layer that data access code into the application, changes to the data store can have enormous ripple effects on your codebase.
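To make that mode switch concrete, here is a small sketch of dataset-style access. It is only an illustration, not code from the post; the table, column and connection-string details are invented.

using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    public static string FirstCustomerName()
    {
        // SQL text plus table and column names leak directly into the
        // application code - this is the "mode switch" described above.
        using (SqlConnection connection = new SqlConnection(
            "server=(local);database=Shop;Integrated Security=SSPI"))
        {
            DataSet ds = new DataSet();
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT Id, Name FROM Customers", connection);
            adapter.Fill(ds, "Customers");  // Fill opens/closes the connection as needed
            return (string) ds.Tables["Customers"].Rows[0]["Name"];
        }
    }
}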

The key to any enterprise application today is the domain model, and it needs to be transparent. It is in these classes that your customers' problems are addressed; everything else is just a service to support the domain - things like data storage, message transport, transactional control, etc. Transparency means that your model benefits from those services without being modified by them. It shouldn't require special code in your domain to utilize those services, and it shouldn't require specific containers or interfaces to implement. This means that your domain architecture can be 100% focused on the business problem at hand, not technical problems outside the business. A side effect of achieving transparency is that you can replace services with alternate providers, or add new services, without changing your domain. Coding directly against the dataset breaks this transparency: it is obvious inside your code what storage mechanism you use, and it affects the way your code is written. Another approach to storage is the use of an object-relational mapping tool. Microsoft is in the process of building such a framework, called ObjectSpaces, but it recently announced that ObjectSpaces would be delayed, possibly until 2006.

NHibernate, an open source solution, is available today and solves the same set of problems. With NHibernate, your code and your data schema remain decoupled, and the only visible indicator of the existence of the O/R layer is the mapping files. You'll see that these consist of configuration settings for the O/R framework itself (connecting to a data source, identifying the data language, etc.) and mappings from your domain objects to the data tables. A short usage sketch follows.
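For contrast with the dataset sketch above, here is a minimal sketch of what calling code can look like with NHibernate. Again, this is an illustration rather than code from the article: the Customer class, the "MyDomain" assembly name and the mapping file are invented, and the NHibernate calls follow its standard Configuration/ISession usage.

using NHibernate;
using NHibernate.Cfg;

// A plain domain class: no SQL, no framework base class. Its mapping to a
// table lives in an external, embedded Customer.hbm.xml file (not shown).
public class Customer
{
    private int id;
    private string name;

    public int Id { get { return id; } set { id = value; } }
    public string Name { get { return name; } set { name = value; } }
}

public class CustomerRepository
{
    public static void SaveNewCustomer()
    {
        // In real code the session factory is expensive and built only once.
        Configuration cfg = new Configuration();
        cfg.AddAssembly("MyDomain");  // scans the assembly for *.hbm.xml mappings
        ISessionFactory factory = cfg.BuildSessionFactory();

        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Customer customer = new Customer();
            customer.Name = "Acme Corp.";
            session.Save(customer);  // persistence without any SQL in the domain code
            tx.Commit();
        }
    }
}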

NHibernate Article on TheServerSide.NET

Read more on Object Spaces here...
Download the Source from SourceForge.NET

Wednesday, September 08, 2004

Throwing Exceptions

A quick tip on catching and throwing exceptions.

General way of throwing exceptions:

try
{
    // ... your code ...
}
catch (Exception ex)
{
    // ... any clean-up activities ...

    throw ex;  // NOTE: this resets the exception's stack trace to this point
}

Recommended way of throwing exceptions:

1. If you just want to do some cleanup when an exception occurs, you should re-throw the caught exception using this code instead:

catch (Exception)
{
    // ... clean-up activities ...
    throw;  // re-throws the original exception with its stack trace intact
}


This preserves the original calling stack. Nobody knows you were involved, and the exception can be traced back to its true origin without being diverted into your cleanup code. The "general way" example above falls into this category and should use throw; rather than throw ex;.

2. If you want to be part of the exception chain, then you should re-package the exception with your own, and assign the old one as the inner exception:

catch (Exception exception)
{
    // ... clean-up activities ...
    throw new MyException(exception);
}


You turn the general exception into a specific one, while preserving the original inner exception so that it can continue to be traced back to the origin.
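As a minimal sketch, the hypothetical MyException above might look like this; passing the caught exception into the base constructor is what records it as the InnerException:

using System;

public class MyException : Exception
{
    // The wrapped exception becomes InnerException on the base class,
    // so the original stack trace and origin remain reachable.
    public MyException(Exception inner)
        : base("An error occurred; see InnerException for details.", inner)
    {
    }
}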

Friday, September 03, 2004

XML style guidelines for leveraging schema validators

Used correctly, XML Schema validation can dramatically reduce the effort necessary to perform basic data validation tasks. Additionally, validation rules that are centrally located in an XML schema can help users to better understand your system. It takes the right XML structure, however, to leverage a schema validator. This article discusses proper XML structure as well as best and worst practices for defining data validation rules in XML Schema.

How do you keep invalid data from getting into your system? Should you hand-code validation routines that perform bounds checking? For the XML entry points into your system, schema validators can save you an incredible amount of time in this area. This goes for DTD validators as well as those for XML Schema.
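As a quick illustration (a sketch, not code from the article; the order.xml and order.xsd file names are invented), hooking a schema validator into .NET's XmlReader looks roughly like this:

using System;
using System.Xml;
using System.Xml.Schema;

public class SchemaValidationDemo
{
    public static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "order.xsd");  // null: take the target namespace from the schema
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            Console.WriteLine("Validation problem: " + e.Message);
        };

        // Simply reading the document end-to-end drives the validation.
        using (XmlReader reader = XmlReader.Create("order.xml", settings))
        {
            while (reader.Read()) { }
        }
    }
}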

Read more here...

Formal Methods, UML and OCL

I am now studying formal methods, and I have come across something interesting that I thought I would share.

Formal methods apply logic to the development of "correct" systems. They provide mathematical foundations for many technologies and practices that software engineers use. Joseph Goguen says that formal methods are "syntactic in essence but semantic in purpose." A narrower definition could be: "A formal method in software development is a method that provides a formal language for describing a software artifact (e.g. specifications, designs, source code) such that formal proofs are possible, in principle, about properties of the artifact so expressed."

UML is one of the tools engineers use to design more formal systems. This language fits Goguen's description of a formal method. It is syntactic in essence, offering a well-defined way to construct a model. It is also semantic in purpose - that is, it is designed to convey meaning. Much information can be encoded in a UML model. But it is not always easy to construct syntactically correct and semantically rich models of software using just UML diagrams. The rules for which type of arrowhead and which type of connector to use for which purpose can be just as confusing as the syntax of a programming language such as Java. And even if you can construct a correct UML diagram, there is much information that it will not convey.
However, OCL, a formal specification language that is part of the UML specification, enables you to annotate models with expressions that clarify their meaning. In UML 1.1, the main purpose of OCL was to express constraints on model elements.
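As a small taste of what such an annotation looks like, here is an OCL invariant (the Account class is invented for illustration) stating that an account's balance may never go negative:

context Account
inv: balance >= 0

More on this...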

Tuesday, August 31, 2004

Locality of Reference & Performance ??

Was just surfing the net and came across this blog entry by Rico:

http://blogs.gotdotnet.com/ricom/permalink.aspx/c5e117b6-8f8c-4e07-b941-c6fa4d3413d8

That was a good article for understanding Locality of Reference and performance!