
Tuesday, July 5, 2011

What is ClientIDMode in ASP.NET 4.0

In ASP.NET 2.0/3.5, each server control generates a unique client-side id attribute for the rendered page. These IDs were long and auto-generated, so developers doing client-side programming with JavaScript, jQuery, or AJAX spent a considerable amount of time whenever they needed to reference those ClientIDs in their scripts.
The good news is that ASP.NET 4.0 comes with a new ClientIDMode property, which gives the developer full control over the client IDs generated by ASP.NET controls.
ClientIDMode can take one of the following four values:
  • AutoID / Legacy - ASP.NET generates IDs as it did in v3.5 and earlier versions.
  • Static - ASP.NET uses exactly the ID given to the server control as its client-side ID.
  • Predictable - ASP.NET tries to generate IDs that are guessable from the structure of the page.
  • Inherit - ASP.NET generates a client-side ID for the page or control using the same ClientIDMode as its parent, i.e. the client ID mode is inherited from the parent control.
You can set this property at three levels:
1. Control level
2. Page level
3. Application level

Setting ClientIDMode at Control Level
Every server control in ASP.NET 4.0 has this property, and the default value is Inherit.
<asp:Panel ID="pnl" runat="server" CssClass="newStyle1"
    ClientIDMode="Static"></asp:Panel>
Setting ClientIDMode at Page Level
<%@ Page Language="C#" ClientIDMode="Inherit"
    AutoEventWireup="true"
    CodeBehind="Category.aspx.cs"
    Inherits="WebApplication3.Cat" %>
Setting ClientIDMode at Application Level
You need to set it in the system.web section of Web.config:
<system.web>
  <pages clientIDMode="Predictable">
  </pages>
</system.web>
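
To see why this matters for client-side script, here is a minimal sketch (the panel and the script below are illustrative and not part of the original example). With ClientIDMode="Static" the rendered id attribute is exactly the server-side ID, so JavaScript can reference it directly:

<asp:Panel ID="pnl" runat="server" ClientIDMode="Static">
    Panel content
</asp:Panel>

<script type="text/javascript">
    // With Static mode the rendered id is simply "pnl".
    // Under AutoID/Legacy it could render as something like "ctl00_ContentPlaceHolder1_pnl",
    // which is why scripts traditionally had to use '<%= pnl.ClientID %>' instead.
    var panel = document.getElementById("pnl");
    if (panel) {
        panel.style.backgroundColor = "yellow";
    }
</script>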


Monday, June 20, 2011

SQL SERVER – 2005 – Difference Between INTERSECT and INNER JOIN – INTERSECT vs. INNER JOIN

The INTERSECT operator in SQL Server 2005 retrieves the records that are common to the result sets of the left and right queries of the operator. In many cases the INTERSECT operator returns almost the same results as an INNER JOIN.
When using the INTERSECT operator, the number and the order of the columns must be the same in all queries, and the data types must be compatible.
Let us understand how INTERSECT and INNER JOIN are related. We will use the AdventureWorks database to demonstrate the examples.
Example 1: Simple Example of INTERSECT
SELECT *
FROM HumanResources.EmployeeDepartmentHistory
WHERE EmployeeID IN (1,2,3)
INTERSECT
SELECT *
FROM HumanResources.EmployeeDepartmentHistory
WHERE EmployeeID IN (3,2,5)
ResultSet:
Explanation:
The ResultSet shows the EmployeeIDs that are common to both queries, i.e. 2 and 3.
Example 2: Using a simple INTERSECT between two tables.
SELECT VendorID,ModifiedDate
FROM Purchasing.VendorContact
INTERSECT
SELECT VendorID,ModifiedDate
FROM Purchasing.VendorAddress
ResultSet:


Explanation:

The ResultSet shows the records that are common to both tables; there are 104 such common records.
Example 3: Using INNER JOIN.
SELECT va.VendorID,va.ModifiedDate
FROM Purchasing.VendorContact vc
INNER JOIN Purchasing.VendorAddress va ON vc.VendorID = va.VendorID
AND vc.ModifiedDate = va.ModifiedDate
ResultSet:

Explanation:
The ResultSet displays all the records that are common to both tables. Additionally, in the example above, INNER JOIN retrieves the matching records from both the left table and the right table. On careful observation we can notice many duplicate records. INNER JOIN can return duplicate records, which is not the case with the INTERSECT operator.
Example 4: Using INNER JOIN with DISTINCT.
SELECT DISTINCT va.VendorID,va.ModifiedDate
FROM Purchasing.VendorContact vc
INNER JOIN Purchasing.VendorAddress va ON vc.VendorID = va.VendorID
AND vc.ModifiedDate = va.ModifiedDate
ResultSet:
Explanation:
The ResultSet in this example does not contain any duplicate records because the DISTINCT clause is used in the SELECT statement. DISTINCT removes the duplicate rows, and the final result is exactly the same as in Example 2 above. In this way, INNER JOIN with DISTINCT can simulate INTERSECT.
Summary:
INNER JOIN can simulate INTERSECT when used with DISTINCT.

How to reset the autoincrement (identity) column in SQL Server after a DELETE operation

Run the following command after the DELETE operation:

DBCC CHECKIDENT ("TABLE NAME", RESEED, 0); --next id will be 1
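
For example, here is a hedged sketch using a hypothetical OrdersTable whose OrderID column is an identity column (the table and column names are illustrative only):

DELETE FROM OrdersTable;                        -- remove all rows

DBCC CHECKIDENT ('OrdersTable', RESEED, 0);     -- reset the identity seed to 0

INSERT INTO OrdersTable (OrderName) VALUES ('First order after reseed');
SELECT OrderID, OrderName FROM OrdersTable;     -- OrderID starts again at 1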

Monday, May 9, 2011

GridView and DetailsView Master/Detail page using SqlDataSource control

The GridView and DetailsView controls are commonly used in a Master/Detail page. A Master/Detail page is a page that displays a list of items from a database along with the details of a selected item in the list. The DetailsView control is often used to display the details portion in a Master/Detail page.

The list of items can be displayed in a Master/Detail page using a DropDownList or a GridView control. In this example, we have used a GridView control to display the list of items and a DetailsView control to display the details of the selected item in the list.

In the example, we have used two data source controls: one to bind the GridView control and the other to bind the DetailsView control. The data source control connected to the GridView control retrieves all the items from the AccountsTable. The data source control connected to the DetailsView control retrieves the details of the single account selected by the user in the GridView control. The data source control SqlDataSource1 is connected to the GridView control, and the data source control SqlDataSource2 is connected to the DetailsView control.


<asp:GridView ID="GridView1" DataSourceID="SqlDataSource1" 
DataKeyNames="AccountCode"  AutoGenerateSelectButton="true" 
runat="server" />

<asp:DetailsView ID="DetailsView1" 
DefaultMode= "Edit"  AutoGenerateEditButton="true" AutoGenerateInsertButton="true" 
AutoGenerateDeleteButton="true"
DataSourceID="SqlDataSource2"  runat="server">
</asp:DetailsView>
   
<asp:SqlDataSource ID="SqlDataSource1" runat="server"    
ConnectionString="<%$ ConnectionStrings:ERPConnectionString %>"
SelectCommand="SELECT AccountCode, AccountName FROM AccountsTable">
</asp:SqlDataSource> 

<asp:SqlDataSource ID="SqlDataSource2"       
ConnectionString="<%$ ConnectionStrings:ERPConnectionString %>"
SelectCommand="SELECT * FROM AccountsTable Where AccountCode=@Code" 
UpdateCommand="Update AccountsTable 
SET AccountName=@AccountName,AccountDescription=@AccountDescription,
AccountPGroup=@AccountPgroup Where AccountCode=@AccountCode" 
runat="server">
<SelectParameters>
<asp:ControlParameter Name="Code" ControlID="GridView1" PropertyName="SelectedValue"/>
</SelectParameters>
</asp:SqlDataSource>




The GridView control displays AccountCode and AccountName columns. The DetailsView control displays details of the single account which is selected by the user.

We need to enable the user to select a particular row in the GridView control when we want to build a single-page Master/Detail page. To do so, we set the GridView control's AutoGenerateSelectButton property to 'true'. This renders a Select button, as a LinkButton, for each row of the GridView.

We also set the DefaultMode property of the DetailsView control to 'Edit', which causes the DetailsView control to appear in edit mode. This is useful for enabling the user to edit the data in a master/detail relationship. At runtime, when the user selects a record in the GridView, the selected record is displayed by the DetailsView control and its data can be edited.

How to Display the Selected row Details in a DetailsView in a Master/Detail page

To display the selected row, the Code parameter's value is first retrieved from the GridView control's SelectedValue property.

The GridView control's SelectedValue property contains the first data key value of the selected row. For this to work, the GridView's DataKeyNames property is set to AccountCode. When the user clicks the Select button, a postback occurs, the GridView's SelectedValue property returns the AccountCode of the selected row, and the DetailsView shows the details of the selected account.

For the above to happen, the ControlParameter object is added to the SelectParameters collection of the SqlDataSource control (SqlDataSource2). Note that datasource control SqlDataSource2 is bound to the DetailsView control.

When using a ControlParameter object, we must always set the ControlID property to point to a control on the page; here it points to the GridView control (GridView1). The ControlParameter then represents the AccountCode of the account selected in the GridView control.

DetailsView vs FormView Control

The DetailsView and FormView controls enable us to display a single data item, that is, a single database record at a time. Both controls support displaying, editing, inserting, and deleting data items such as database records, but only one data item at a time. Both controls also support paging forward and backward, which allows us to move through the records one at a time in either direction.

The major difference between the two controls is how each renders its user interface: the FormView control uses a template to display a single database record at a time, whereas the DetailsView control displays a single database record as an HTML table.

DetailsView control

The DetailsView control is typically used for updating and inserting new records often in a master/detail scenario. In such a scenario, the selected record of the master control (GridView or ListBox control) determines the record to be displayed in the DetailsView control. For instance, in an ERP application, we use the GridView control to display pending sales order details and the DetailsView control to display the selected single sales order details. We use the <BoundField> elements or <TemplateField> elements to render the DetailsView control. The DetailsView control displays each field of a record as a table row.


The FormView control

The FormView control is designed to display a single data item (a single database record) from a data source such as a SQL Server database. Unlike the DetailsView control, the FormView control must use templates to display data. The FormView control renders all fields of a single record in a single table row. Within templates we can place any control, such as a DropDownList or CheckBox, and even tables or rich controls like the GridView. Compared to the DetailsView control, the FormView control gives more flexibility over the rendering and layout of the fields, but it is also more complex to use. Note that we cannot use <BoundField> elements to render the FormView control.
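
As an illustration (a minimal sketch, not part of the original example), a FormView bound to the same SqlDataSource2 could render the AccountsTable columns used earlier with an ItemTemplate like this:

<asp:FormView ID="FormView1" DataSourceID="SqlDataSource2" runat="server">
    <ItemTemplate>
        <b><%# Eval("AccountName") %></b> (<%# Eval("AccountCode") %>)<br />
        <%# Eval("AccountDescription") %>
    </ItemTemplate>
</asp:FormView>

Every field has to be placed explicitly inside the template; nothing is generated automatically the way <BoundField> rows are in a DetailsView.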



Summary
  • The DetailsView control is easier to work with.
  • The FormView control provides more control over the layout.
  • The FormView control uses only the templates with databinding expressions to display data. The DetailsView control uses <BoundField> elements or <TemplateField> elements.
  • The FormView control renders all fields in a single table row whereas the DetailsView control displays each field as a table row.

Wednesday, April 27, 2011

Software Development Life Cycle (SDLC)

The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project from an initial feasibility study through maintenance of the completed application. 
Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model and the spiral model. Often, several models are combined into some sort of hybrid methodology. Documentation is crucial regardless of the type of model chosen or devised for any application, and is usually done in parallel with the development process. Some methods work better for specific types of projects, but in the final analysis, the most important factor for the success of a project may be how closely the particular plan was followed.



The different types of SDLC models are:

1. Waterfall model
2. Iterative model
3. Spiral model
4. Prototype model
5. RAD model (Rapid Application Development)
6. COCOMO model (Constructive Cost Model)
7. V-model
8. Fish model
9. Component Assembly Model
 
1. Waterfall Model:

This is also known as the Classic Life Cycle Model or the Linear Sequential Model.
This model has the following activities.

          
A. System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity. So if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the existing system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

B. Software Requirement Analysis

This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes the personnel assignments, costs, project schedule, target dates, etc. The requirement-gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "Analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.



C. System Analysis and Design

In this phase of the software development process, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design, etc. are all defined in this phase. A software development model is thus created. Analysis and design are very crucial in the whole development cycle. Any glitch in the design phase could be very expensive to solve at a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

D. Code Generation

The design must be translated into a machine-readable form. The code generation step performs this task. If the design is done in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding, and the right programming language is chosen with respect to the type of application.

E. Testing

Once the code is generated, testing of the software program begins. Different testing methodologies are available to uncover the bugs that were introduced during the previous phases. Different testing tools and methodologies are already available, and some companies build their own testing tools that are tailor-made for their own development operations.

F. Maintenance

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change to occur. Change could happen because of some unexpected input values into the system. In addition, the changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could happen during the post implementation period.
 
2. Iterative model:

An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

- A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.

- A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.

- An Implementation and Test phase, when the software is coded, integrated and tested.

- A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution.

3.Spiral model:

This model, proposed by Barry Boehm in 1988, attempts to combine the strengths of various models. It incorporates the elements of the prototype-driven approach along with the classic software life cycle. It also takes into account risk assessment, whose outcome determines whether to take up the next phase of the designing activity.

Unlike all other models which view designing as a linear process, this model views it as a spiral process. This is done by representing iterative designing cycles as an expanding spiral.
Typically the inner cycles represent the early phase of requirement analysis along with prototyping to refine the requirement definition, and the outer spirals are progressively representative of the classic software designing life cycle.
At every spiral there is a risk assessment phase to evaluate the designing efforts and the associated risk involved for that particular iteration. At the end of each spiral there is a review phase so that the current spiral can be reviewed and the next phase can be planned.
The six major activities of each design spiral are represented by six major tasks:
1. Customer Communication
2. Planning
3. Risk Analysis
4. Software Designing Engineering
5. Construction and Release
6. Customer Evaluation

Advantages
1. It facilitates a high amount of risk analysis.
2. This software designing model is more suitable for designing and managing large software projects.
3. The software is produced early in the software life cycle.

Disadvantages
1. Risk analysis requires high expertise.
2. It is a costly model to use.
3. Not suitable for smaller projects.
4. There is a lack of explicit process guidance in determining objectives, constraints and alternatives.
5. This model is relatively new. It does not have many practitioners unlike the waterfall model or prototyping model.


4. Prototype model:
Prototyping is a technique that provides a reduced functionality or limited performance version of the eventual software to be delivered to the user in the early stages of the software development process. If used judiciously, this approach helps to solidify user requirements earlier, thereby making the waterfall approach more effective.

Before proceeding with design and coding, a throwaway prototype is built to give the user a feel of the system. The development of the software prototype also involves design and coding, but this is not done in a formal manner. The user interacts with the prototype as he would with the eventual system and is therefore in a better position to specify his requirements in a more detailed manner. Iterations occur to refine the prototype to satisfy the needs of the user, while at the same time enabling the developer to better understand what needs to be done.

Disadvantages
1. In prototyping, as the prototype has to be discarded, one might argue that the cost involved is higher.
2. At times, while designing a prototype, the approach adopted is “quick and dirty” with the focus on quick development rather than quality.
3. The developer often makes implementation compromises in order to get a prototype working quickly.
5. RAD model (Rapid Application Development):

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. The RAD model is a "high-speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:


A. Business modeling

The information flow among business functions is modeled in a way that answers the following questions:

What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?

B. Data modeling

The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.

C. Process modeling

The data objects defined in the data-modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

D. Application generation

The RAD model assumes the use of RAD tools like VB, VC++, Delphi, etc. rather than creating software using conventional third-generation programming languages. The RAD model works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.
 
E. Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time. 

6. COCOMO model (Constructive Cost Model):


 The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited because it has no factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.
a. Basic COCOMO:
Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC).
COCOMO applies to three classes of software projects:
  • Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
  • Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid requirements
  • Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, etc.)
The basic COCOMO equations take the form
Effort Applied (E) = a_b * (KLOC)^(b_b)  [man-months]
Development Time (D) = c_b * (Effort Applied)^(d_b)  [months]
People Required (P) = Effort Applied / Development Time  [count]
where KLOC is the estimated number of delivered lines of code for the project (expressed in thousands). The coefficients a_b, b_b, c_b and d_b are given in the following table.
Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

Basic COCOMO is good for a quick estimate of software costs. However, it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.
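
As a rough illustration (a minimal C# sketch with an assumed project size, not part of the original article), the Basic COCOMO equations for an organic project can be evaluated like this:

// Basic COCOMO for an organic project; the 32 KLOC size is an assumed example value.
using System;

class BasicCocomo
{
    static void Main()
    {
        // Coefficients for an organic project, taken from the table above.
        double ab = 2.4, bb = 1.05, cb = 2.5, db = 0.38;
        double kloc = 32.0;                               // estimated size in thousands of lines of code

        double effort = ab * Math.Pow(kloc, bb);          // Effort Applied (E), man-months
        double devTime = cb * Math.Pow(effort, db);       // Development Time (D), months
        double people = effort / devTime;                 // People Required (P)

        Console.WriteLine($"Effort  : {effort:F1} man-months");
        Console.WriteLine($"Schedule: {devTime:F1} months");
        Console.WriteLine($"People  : {people:F1}");
    }
}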

b. Intermediate COCOMO:

Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project attributes. This extension considers a set of four "cost drivers", each with a number of subsidiary attributes:
  • Product attributes
    • Required software reliability
    • Size of application database
    • Complexity of the product
  • Hardware attributes
    • Run-time performance constraints
    • Memory constraints
    • Volatility of the virtual machine environment
    • Required turnabout time
  • Personnel attributes
    • Analyst capability
    • Software engineering capability
    • Applications experience
    • Virtual machine experience
    • Programming language experience
  • Project attributes
    • Use of software tools
    • Application of software engineering methods
    • Required development schedule

The Intermediate COCOMO formula takes the form:
E = a_i * (KLoC)^(b_i) * EAF
where E is the effort applied in person-months, KLoC is the estimated number of thousands of delivered lines of code for the project, and EAF is the Effort Adjustment Factor derived from the cost drivers above. The coefficient a_i and the exponent b_i are given in the following table.

Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20

The development time D is calculated from E in the same way as in Basic COCOMO.
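
Continuing the earlier sketch (again illustrative only; the cost-driver multiplier values below are made-up example ratings, not Boehm's published tables), the Intermediate COCOMO effort is the size-based estimate scaled by the EAF:

// Intermediate COCOMO for a semi-detached project with an assumed 32 KLOC size.
using System;

class IntermediateCocomo
{
    static void Main()
    {
        double ai = 3.0, bi = 1.12;                       // semi-detached coefficients from the table above
        double kloc = 32.0;

        // EAF is the product of the selected effort multipliers (values here are illustrative).
        double[] multipliers = { 1.15, 0.94, 1.08 };
        double eaf = 1.0;
        foreach (double m in multipliers)
            eaf *= m;

        double effort = ai * Math.Pow(kloc, bi) * eaf;    // E = a_i * (KLoC)^(b_i) * EAF, person-months
        Console.WriteLine($"EAF    : {eaf:F2}");
        Console.WriteLine($"Effort : {effort:F1} person-months");
    }
}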

c. Detailed COCOMO:
Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software engineering process.
The detailed model uses different effort multipliers for each cost driver attribute. These phase-sensitive effort multipliers are used to determine the amount of effort required to complete each phase.
In Detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software life cycle.
The five phases of Detailed COCOMO are:

  • Plan and requirement
  • System design
  • Detailed design
  • Module code and test
  • Integration and test


7. V-model:

The V-model is a software development model designed to simplify the understanding of the complexity associated with developing systems.
 

 

The V-model consists of a number of phases. The Verification Phases are on the left hand side of the V, the Coding Phase is at the bottom of the V and the Validation Phases are on the right hand side of the V.
Requirements analysis:
In the requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.
The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.
System Design:
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue. A resolution is found and the user requirement document is edited accordingly.
The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation like entity diagrams and the data dictionary will also be produced in this phase. The documents for system testing are prepared in this phase.
Architecture Design:
The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration testing design is carried out in this phase.
Module Design:
The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document or program specifications will contain a detailed functional logic of the module in pseudo code, and the database tables with all elements, including their type and size. The unit test design is developed in this stage.

Advantages of V-model

  • It saves an ample amount of time, and since the testing team is involved early on, they develop a very good understanding of the project at the very beginning.
  • It reduces the cost of fixing defects, since defects are found in the early stages.
  • It is a fast method.

Disadvantages of V-model

  • The biggest disadvantage of the V-model is that it is very rigid and the least flexible. If any changes happen midway, not only the requirements documents but also the test documentation needs to be updated.
  • It can be implemented only by some big companies.
  • It needs an established process to implement.

8. Fish model:
This is a process-oriented development model. Even though it is a time-consuming and expensive model, one can rest assured that both verification and validation are done in parallel by separate teams in each phase of the model. So two reports are generated by the end of each phase: one for verification and one for validation. Because all the stages except the last delivery and maintenance phase are covered by the two parallel processes, the structure of this model looks like a skeleton between two parallel lines, hence the name fish model.

Advantages:

This strict process results in products of exceptional quality, so one of the important objectives is achieved.
 


Disadvantages:

Time-consuming and expensive.
 
9. Component Assembly Model:
Object technologies provide the technical framework for a component-based process model for software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms used to manipulate that data. If properly designed and implemented, object-oriented classes are reusable across different applications and computer-based system architectures. The Component Assembly Model leads to software reusability. The integration/assembly of already existing software components accelerates the development process. Nowadays many component libraries are available on the Internet, and if the right components are chosen, the integration aspect is made much simpler.