Wednesday, August 4, 2010

Creation of a Singleton Object with ColdFusion


Singletons are perhaps one of the simplest design patterns. For those who don't know, a singleton is a class that can have only one instance. It can be thought of as a glorified global variable, but it is a lot more useful. Most ColdFusion classes, or rather instances of CF components, can be turned into a singleton by placing the following code in your Application.cfm:

<cfif not structKeyExists(application, "instanceName")>
    <cfset application.instanceName = createObject("component", "path.to.component")>
</cfif>

or in the OnApplicationStart method of your Application.cfc:

<cfset application.instanceName = createObject("component", "path.to.component")>

The above code places an instance of the component in the application scope; you can then access the properties and methods of the component via the application variable. Singletons can also be placed in other ColdFusion scopes, such as the server, session, or even request scope. Which scope you choose depends on what your code does. Another way to create a singleton is to add a getInstance method to your component and use that to return the instance, like so:

<cffunction name="getInstance" access="public" output="false">
    <cfif not structKeyExists(application, "instanceName")>
        <cfset application["instanceName"] = this>
    </cfif>
    <cfreturn application["instanceName"]>
</cffunction>

Rather than hard-coding the instance name, we can base it on the displayname of the component:

<cffunction name="getInstance" access="public" output="false">
    <cfset var displayname = getMetaData(this).displayname>
    <cfif not structKeyExists(application, displayname)>
        <cfset application[displayname] = this>
    </cfif>
    <cfreturn application[displayname]>
</cffunction>

While this is an improvement on the original code, this method would need to be added to every component you wanted to turn into a singleton. A better solution is to create a singleton component and have any component that needs to be a singleton extend it. The singleton component (singleton.cfc):

<cfcomponent displayname="singleton">

    <cffunction name="getInstance" access="public" output="false">
        <cfset var displayname = getMetaData(this).displayname>
        <cfif not structKeyExists(application, displayname)>
            <cfset application[displayname] = this>
        </cfif>
        <cfreturn application[displayname]>
    </cffunction>

</cfcomponent>

The component we want to use as a singleton (dsn.cfc):

<cfcomponent displayname="DSN" extends="singleton">
    <cfset variables.DSN = "">

    <cffunction name="getDSN" access="public" returntype="string" output="false">
        <cfreturn variables.DSN>
    </cffunction>

    <cffunction name="setDSN" access="public" output="false">
        <cfargument name="DSN" type="string" required="yes">
        <cfset variables.DSN = arguments.DSN>
    </cffunction>

</cfcomponent>

Using the component (in Application.cfm):

<cfscript>
    if (not structKeyExists(application, "dsn")) {
        application.dsn = createObject("component", "dsn").getInstance();
        application.dsn.setDSN("mydsn");
    }
</cfscript>

or in the OnApplicationStart method of Application.cfc:

<cfscript>
    application.dsn = createObject("component", "dsn").getInstance();
    application.dsn.setDSN("mydsn");
</cfscript>

In the page:

<cfquery name="myquery" datasource="#application.dsn.getDSN()#">

</cfquery>

Here is the example code, singleton.cfc:

<cfcomponent displayname="singleton">

    <cffunction name="init" access="public" output="false">
        <cfset var displayname = getMetaData(this).displayname>

        <cfif not isDefined("application._singletons")>
            <cfset application._singletons = structNew()>
        </cfif>
        <cfif not isDefined("application._singletons.#displayname#")>
            <cfset application._singletons[displayname] = this>
        </cfif>

        <cfreturn application._singletons[displayname]>
    </cffunction>

    <cffunction name="remove" access="public" output="false">
        <cfset var displayname = getMetaData(this).displayname>

        <cfif isDefined("application._singletons.#displayname#")>
            <cfset structDelete(application._singletons, displayname)>
        </cfif>
    </cffunction>

</cfcomponent>

And here is how it's set up in Application.cfm (or .cfc):

<cfscript>
    // function to get an instance of a singleton
    function getInstance(name) {
        var instance = "";
        if (not isDefined("application._singletons.#name#")) {
            instance = createObject("component", "com.classsoftware.utils.#name#").init();
        }
        return application._singletons[name];
    }

    // function to remove a singleton
    function removeInstance(name) {
        if (isDefined("application._singletons.#name#")) {
            application._singletons[name].remove();
        }
    }

    // remove the instance if asked
    if (isDefined("url.init")) {
        removeInstance("dsn");
    }
</cfscript>

And here is how it's used on the page:

<cfset dsn = getInstance("dsn")>

<cfquery name="myquery1" datasource="#dsn.getDSN()#">
    select ...
</cfquery>

<cfquery name="myquery2" datasource="#dsn.getDSN()#">
    select ...
</cfquery>

The functions getInstance and removeInstance could be placed inside a component that creates/removes singletons (a singleton factory?). However, that component would itself need to be a singleton, or you'd need to create it (via createObject) on every page. I feel it's best just to leave them as user-defined functions for the sake of simplicity and performance. Another issue that came up is that you can still use createObject (or cfinvoke) to create other instances of the component, and there seems to be no way of stopping this. Well, there is one way I can think of. I'm not sure I'd actually use it in a production system, but it may be of interest to someone, so here's how to do it. ColdFusion methods can be set at run time; you can add or replace methods by assigning them to new functions, like so:

<!--- from this point on, calls to "method" will invoke newmethod instead --->
<cfset instance.method = newmethod>

Methods can also be removed, like so:

<!--- remove the method "method" from instance --->
<cfset structDelete(instance, "method")>

So you can create a component that has a method that throws an error (via cfabort) and have all methods of that component call that method. You can still create an instance of the component, but if you call any of its methods you will get an error. Applying this to our singleton component, we get:

<cfcomponent displayname="singleton">

    <cffunction name="init" access="public" output="false">
        <cfscript>
            var displayname = getMetaData(this).displayname;

            // guard: aborts unless this method has been removed by the helper function
            this.invalid();

            if (not isDefined("application._singletons")) {
                application._singletons = structNew();
            }
            if (not isDefined("application._singletons.#displayname#")) {
                application._singletons[displayname] = this;
            }

            return application._singletons[displayname];
        </cfscript>
    </cffunction>

    <cffunction name="remove" access="public" output="false">
        <cfscript>
            var displayname = getMetaData(this).displayname;

            this.invalid();

            if (isDefined("application._singletons.#displayname#")) {
                structDelete(application._singletons, displayname);
            }
        </cfscript>
    </cffunction>

    <cffunction name="invalid" access="public" output="false">
        <cfabort showerror="Singletons must be created via the helper functions, not via createObject!">
    </cffunction>

</cfcomponent>

The this.invalid() call would also need to be added to every method of classes that extend singleton.cfc, e.g. dsn.cfc above. If you then remove the method that generates the error (via structDelete) before any methods are called, the methods of the instance can be called normally. Applying this to our getInstance function, we get:

<cfscript>
    // function to get an instance of a singleton
    function getInstance(name) {
        var instance = "";
        if (not isDefined("application._singletons.#name#")) {
            instance = createObject("component", "com.classsoftware.utils.#name#");
            // remove the guard method so the instance becomes usable
            structDelete(instance, "invalid");
            instance.init();
        }
        return application._singletons[name];
    }
</cfscript>

That way, only instances returned from our getInstance function can be used; any other instance created via createObject (or any other way) will throw an error when one of its methods is called.
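To illustrate the effect, here is a minimal sketch (the component path follows the earlier examples, and it assumes dsn.cfc's methods also call this.invalid() as described above):

<!--- created directly: the guard method is still attached, so any method call aborts --->
<cfset direct = createObject("component", "com.classsoftware.utils.dsn")>
<!--- direct.getDSN() would abort with the "Singletons must be created via the helper functions" error --->

<!--- created via the helper: the guard has been removed, so the instance works --->
<cfset dsn = getInstance("dsn")>
<cfset dsn.setDSN("mydsn")>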


How ColdFusion Works and Its Key Features (Compilation and Precompilation)


Where Does the Compilation Go?

CFMX compiles .cfm (and .cfc) templates into .class files, which are Java bytecode files. The files are written to (and executed from) the cfclasses subdirectory of the [cfusionmx]\wwwroot\WEB-INF\ directory where CFMX is installed. This occurs whether you are using another web server or have located your file outside the default wwwroot location. CFMX compiles and runs the code from this cfclasses directory, regardless of the location of the source file.

The file names for these class files may not be at all apparent. A CF template named Setsession.cfm might lead to a class file named cfsetsession2ecfm1011928410.class. All templates from all directories end up in this one cfclasses subdirectory. They're not stored there in any subdirectories related to their original location. Instead, CF includes a hash of the directory name in that set of numbers after the file name. Keep that in mind when trying to associate a given class file with its original .cfm template.

The hashing process is a bit convoluted. Perhaps the easiest way to detect which class file goes with which source file is to simply edit the file and then execute (or precompile) it, then look in the cfclasses directory for the most recently created class file. Assuming your server is not too busy with many compilations taking place, it should be pretty easy to associate the class name with the CF source code name.



Saving Java Source Code Produced by CFMX (earlier ColdFusion versions)

So that's where the compiled code goes. But what about seeing the actual uncompiled Java source code that your CF template is converted into? Normally it's of no concern to CF developers what CFMX is doing under the covers in converting our CFML to Java. But for the ardently curious among you, did you know that you can ask CFMX to save the Java code it creates, in source form? You can. It's an undocumented feature, though I've had no trouble using it.

The setting can only be enabled by someone with administrative control of the server, and it is server-wide. It adds a slight additional time to the compile process, so it's not something you'd want to turn on in production, and it probably ought not be left on in development either.

You need to edit the file web.xml in the [cfusionmx]\wwwroot\WEB-INF directory. There, if you're familiar with XML files, you'll find a parameter called "coldfusion.compiler.saveJava". Change its value from false to true, save the file, and restart the server. Now, whenever a new or recently edited file is compiled (whether automatically by CFMX or by the precompile.bat file), CFMX will also create a ".java" file along with the ".class" file. This ".java" file will be found in the same [cfusionmx]\wwwroot\WEB-INF\cfclasses\ directory as the ".class" files (and is subject to the same curious file naming mentioned above).
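For reference, the entry looks something like this once enabled (a sketch; whether the parameter sits in a context-param or a servlet init-param block can vary by CFMX version, so search web.xml for the param-name and just flip the value):

<param-name>coldfusion.compiler.saveJava</param-name>
<param-value>true</param-value>  <!-- change from false -->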

The Idea of Deleting the Generated Class Files

Some have proposed that instead of precompiling their code, they'd just as soon delete the underlying Java class file that was created when it was last compiled. That may seem like overkill, but there are times when it might be worth trying. Just note that, as the previous sections discussed, finding the class file associated with a given source template can be challenging. While some may simply delete all the class files, that's certainly overkill. There is a -f directive you can pass to the compile process (by modifying the precompile.bat file, now cfcompile.bat) that will force a recompile of a file even if CF doesn't think it's necessary. Sometimes that solves the same problem that deleting the class file would solve.


Precompiling ColdFusion pages

You can use the cfcompile utility to precompile ColdFusion pages (CFM, CFC, and CFR files). This can enhance initial page loading time at runtime.

Use the following command to compile ColdFusion pages into Java classes:

cfcompile webroot [directory-to-compile]

Sourceless distribution
You can use the cfcompile utility with the -deploy option to create ColdFusion pages (CFM, CFC, and CFR files) that contain Java bytecode. You can then deploy the bytecode versions of the ColdFusion pages instead of the original CFML source code.

Use the following command to compile CFML files into bytecode format that you can deploy instead of CFML source code:

cfcompile -deploy webroot directory-to-compile output-directory
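For instance, on a hypothetical default Windows install, the invocation might look like this (both paths are assumptions; substitute your own webroot and application directory):

cfcompile -deploy C:\ColdFusion8\wwwroot C:\ColdFusion8\wwwroot\myapp C:\compiled\myapp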

After you run the cfcompile utility, perform the following steps:

  1. Back up your original CFML files.
  2. Copy the generated bytecode CFML files to the original directory.
  3. Deploy the application.

Thursday, June 24, 2010

How to get the primary key of a record added to a table with ColdFusion

Many people use database tables with autonumber primary keys. These are columns (typically named "id") that the database provides a value for by simply adding one to the last highest value. So if the last record inserted had an ID value of 5, the next will be 6. (Note that this isn't always true; you can't assume the next value will be one over the last highest value.) If you need to find out what value was used for the primary key, ColdFusion provides a simple way to do that.

To use this feature, you first must provide the result attribute to your cfquery tag. This tells ColdFusion to save information about the query to the variable named by the result attribute (the tag's opening was partially lost in publishing; the datasource name "foo" is from the original):

<cfquery datasource="foo" result="result">
    insert into people(name, email)
    values('Paris Hilton', 'trash@celebs.com')
</cfquery>

After running this query, a structure named result will be created. Most of the keys of this structure are set, including the SQL of the query, the record count, and other values; however, there is a special key that stores the value of the primary key assigned by the insertion. Unfortunately, this key name varies by database:

  • SQL Server: IDENTITYCOL
  • Oracle: ROWID
  • Sybase: SYB_IDENTITY
  • Informix: SERIAL_COL
  • MySQL: GENERATED_KEY

Using the above query as an example and assuming MySQL, you can display the primary key value like so:

<cfoutput>The ID of the row I just inserted was #result.generated_key#.</cfoutput>
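A common follow-up is to use the new key right away, for example to insert a related child record (a sketch; the phones table and its columns are illustrative):

<cfquery datasource="foo">
    insert into phones(personID, phone)
    values(#result.generated_key#, '555-1234')
</cfquery>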

Logical architecture of a Fusebox application


The logical architecture of a Fusebox app resembles a hub-and-spoke system, with all actions returning to the hub (the Fusebox). This sort of structure is also known as a circuit application.

Figure 1: Fusebox App Structure

A circuit application is usually a single directory of files and generally does a few related tasks, such as search. The overall application, called the home application, is made up of many circuit applications. This is where Fusebox gets its name: just like an electrical fusebox, it is set up as a group of circuits ("fuses") that are ready to send the user to whichever part of the application his or her next click requires. Each of these fuses has a name, called a fuseaction. Fuseactions are used to turn on the appropriate switches to cause the required action. So the fuseaction is the key to the application: without a fuseaction, the application will only do the default fuseaction.

INDEX.CFM and the Fuseaction

The home application is ALWAYS centered on a file called INDEX.CFM, which is placed in the root directory of your application. Every link on the website will always be to this file! When creating the user interface for the application, each URL link or form will point to INDEX.CFM and then contain the name of the fuseaction that will do the work necessary if it is activated. For a URL, the fuseaction will be contained in the query string, for instance: http://localhost/INDEX.CFM?fuseaction=search. For a form, the usual method of placing the fuseaction is to call the INDEX.CFM file in the form's action field and include a hidden form field with the fuseaction, as in this sketch (the original snippet was lost; the field names and fuseaction value are illustrative):
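<form action="index.cfm" method="post">
    <input type="hidden" name="fuseaction" value="search">
    <!--- ...the rest of the form's fields... --->
    <input type="submit" value="Search">
</form>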

Now, the question is about the internal workings of the INDEX.CFM file. It uses CFINCLUDEs to combine files together to create a working application. But how does ColdFusion use the fuseaction to know which files to combine? This is done using CFSWITCH/CFCASE. The CFSWITCH/CFCASE tags perform a similar function to a CFIF statement with a bunch of CFELSEIFs, but CFSWITCH/CFCASE runs much faster than a similar series of CFIF/CFELSEIF tags, and when there are many ELSEIFs the logic is exactly the same. For example, here is a logical statement using CFIF/CFELSEIF (reconstructed; the variable name animal is illustrative):

<cfif animal eq "dog">
    Wooof
<cfelseif animal eq "cat">
    Meeooow
<cfelseif animal eq "cow">
    Moooooo
<cfelse>
    [[Silence]]
</cfif>

Using CFSWITCH/CFCASE, the same statement would be:

<cfswitch expression="#animal#">
    <cfcase value="dog">Wooof</cfcase>
    <cfcase value="cat">Meeooow</cfcase>
    <cfcase value="cow">Moooooo</cfcase>
    <cfdefaultcase>[[Silence]]</cfdefaultcase>
</cfswitch>

Since CFSWITCH/CFCASE is faster, it is the method used in the Fusebox architecture. INDEX.CFM contains a CFSWITCH/CFCASE block to determine what the user wants to do. Think of INDEX.CFM as basically one big switch statement in which each CFCASE contains the information on what to do for a particular fuseaction. Therefore, the EXPRESSION= parameter of the CFSWITCH will be your fuseaction variable. An example: fuseaction=search might run a search and then display the results. In this case, the opening CFSWITCH statement should use #FUSEACTION# as its expression, as in this reconstructed sketch:
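<cfswitch expression="#fuseaction#">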

This means that, basically, a fuseaction is the equivalent of a single CFCASE statement in your INDEX.CFM. Have a look at this code (reconstructed from the description below; the template names are assumptions):
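<cfcase value="registration">
    <cfinclude template="header.cfm">
    <cfinclude template="registration_form.cfm">
    <cfinclude template="footer.cfm">
</cfcase>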

Suppose a user clicks on the following link:

<a href="index.cfm?fuseaction=registration">registration</a>

Using the example code above, when ColdFusion executes our CFSWITCH with the "registration" fuseaction, it will first display our HTML header block, then include the registration form itself, and finally the HTML footer. The rest of the CFCASEs will be ignored.

Article inspired by: fusionauthority

ColdFusion and Its Advantages

ColdFusion is a rapid scripting environment server for creating dynamic Internet Applications. ColdFusion Markup Language (CFML) is an easy-to-learn tag-based scripting language, with connectivity to enterprise data and powerful built-in search and charting capabilities. ColdFusion enables developers to easily build and deploy dynamic websites, content publishing systems, self-service applications, commerce sites, and more.


• Develop and manage applications quickly and easily: ColdFusion lets you condense complex and powerful business logic into fewer lines of code that can be reused, helping you to save time and reduce errors. It provides insight into applications across servers and helps you maintain a consistent configuration across clusters to more efficiently manage your environment. Using ColdFusion, you can improve application performance with more granular control over code, templates, and applications, and speed up application development with the tight integration between ColdFusion and Adobe ColdFusion Builder™ software, the new Eclipse™ based IDE.

• Rapidly build rich interfaces for new and existing ColdFusion applications: By leveraging the unique integration between ColdFusion and the products in the Adobe Flash Platform, you can accelerate the development of RIAs and interfaces, from client to server. Built-in support for Ajax controls enables you to easily create rich interfaces using Ajax and to build more compelling and intuitive applications. New controls include mapping, multimedia player, multifile upload, accordion navigation, progress indicator, confirmations, alerts, buttons, and sliders. In addition, you can now leverage the power of ColdFusion enterprise services via AMF or SOAP without writing a single line of CFML.

• Integrate ColdFusion applications with enterprise technologies: Using the enterprise services in ColdFusion, you can easily access data from an existing infrastructure. It's also easy to build a hub application for enterprise personnel by including Microsoft Exchange enterprise messaging, calendaring, a contact list, and task management. You can expose data from Microsoft Office SharePoint web services to a ColdFusion application and dynamically generate office documents for reporting, decision making, and presentations. Leverage .NET objects from other applications to build a hub application for multiple enterprise resources, and also integrate with Java™ objects, IMAP, and more.


Sources:
  • Sun Developer Network overview of Java SE security
  • MSDN, "How To: Use Regular Expressions to Constrain Input in ASP.NET"
  • PHP.NET Manual, "Security" section
  • Adobe white paper, "Rapid application development for J2EE using Adobe ColdFusion 8"
  • Adobe white paper, "ColdFusion 8 developer security guidelines"


ColdFusion Magic: JOIN between Oracle and SQL Server using result sets

Yes, it is possible to JOIN result sets from different datasources

One unique use for Query of Queries is to JOIN recordsets from separate queries. By extension, this means you can JOIN recordsets from different datasources as well.

Let’s assume that my datasource for customers is an Oracle database, but the database for customer orders is SQL Server. I realize this is a bit contrived, but we all know how strange the corporate operating environment can be. Using Query of Queries, you can run a JOIN on the two recordsets.

In this example, I'm grabbing orders for a specific customer. First, we'll look at the getCustomerOrders query, which provides the second recordset that we'll use in our JOIN along with the getCustomers recordset (the cfquery tags were lost in publishing; the query below is a reconstruction, and the datasource name is an assumption):


<cfquery name="getCustomerOrders" datasource="sqlserver_dsn">
    select
        orderID,
        customerID,
        orderAmount
    from
        customerOrders
</cfquery>

This would produce a recordset like the one shown in Figure 1.

Figure 1: JOIN query

Now, let's JOIN these separate result sets with a Query of Queries (the query name below is illustrative):


<cfquery name="getCustomerOrderInfo" dbtype="query">
    select
        getCustomers.customerID,
        getCustomers.customerName,
        getCustomerOrders.orderID,
        getCustomerOrders.orderAmount
    from
        getCustomers, getCustomerOrders
    where
        getCustomerOrders.customerID = getCustomers.customerID
        and getCustomerOrders.customerID = 91
</cfquery>

The resulting recordset can be seen in Figure 2.

Figure 2: JOIN Query of Queries
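To render the joined recordset on a page, you could loop over it with cfoutput (a sketch using the query name assumed above):

<cfoutput query="getCustomerOrderInfo">
    Customer #customerName#: order #orderID#, amount #orderAmount#<br>
</cfoutput>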

This ability to relate records in separate queries can be a useful approach to certain programming challenges.

Vulnerability affects ColdFusion MX 7 and ColdFusion 8

A vulnerability has been reported in Adobe ColdFusion, which potentially can be exploited by malicious people to hijack user sessions.

The vulnerability is caused due to an unspecified error when using CFID or CFTOKEN and can be exploited to e.g. hijack a user’s session on an application built using ColdFusion.

NOTE: This vulnerability does not affect customers using J2EE session management.

The vulnerability affects ColdFusion MX 7 and ColdFusion 8.

Issue


ColdFusion manages sessions by keying on cookie values for CFID and CFTOKEN, by default. It has been found that ColdFusion will accept empty string values for either or both of these variables. If an application accidentally stored empty values to CFID and CFTOKEN, all users could share the same session data.

Solution


This update will cause ColdFusion to create a new session if CFID and/or CFTOKEN values are empty strings.
ColdFusion 8

You use the ColdFusion 8 Administrator to install hot fixes. The installation process is the same for all platforms and installation choices.

  1. Download hf800-70523.zip (6.25K) and extract the hf800-70523.jar file.
  2. Open the ColdFusion 8 Administrator and select the System Information page.
  3. Next to the Update File field, select the Browse Button and browse to the extracted file. Select the file and click Submit.
  4. Restart ColdFusion.

The ColdFusion 8.0 hot fix JAR file does not need to be retained after installing it with the ColdFusion Administrator. The file has been copied into the correct location.

The ColdFusion 8.0 hot fix JAR file will appear as a new entry in the System Information list.

Hot fixes are installed in the cf_root\lib\updates directory. To uninstall a hot fix, delete the JAR file from the updates directory after stopping the ColdFusion 8 application server.

ColdFusion MX 7

You use the ColdFusionMX 7 Administrator to install hot fixes. The installation process is the same for all platforms and installation choices.

  1. Download hf702-70523.zip (106K) and extract the hf702-70523.jar file.
  2. Open the ColdFusionMX 7 Administrator and select the System Information page.
  3. Next to the Update File field, select the Browse Button and browse to the extracted file. Select the file and click Submit.
  4. Restart ColdFusion.

The ColdFusionMX 7.02 hot fix JAR file does not need to be retained after installing it with the ColdFusion Administrator. The file has been copied into the correct location.

The ColdFusionMX 7.02 hot fix JAR file will appear as a new entry in the System Information list.

Hot fixes are installed in the cf_root\lib\updates directory. To uninstall a hot fix, delete the JAR file from the updates directory, after stopping the ColdFusionMX 7.02 application server.

For More information use following link

http://kb2.adobe.com/cps/402/kb402805.html

Use Single Sign-On to access ColdFusion applications via SharePoint

SharePoint custom Web Parts let you access multiple ColdFusion applications from the SharePoint server using Single Sign-On (SSO). After signing in, users can access multiple secure ColdFusion applications by accessing ColdFusion services from multiple Web Parts.

To make a ColdFusion application available from SharePoint, use the CFSharepoint SSO WebPart template. This template is a customized version of PageViewer WebPart. It enables you to pass SSO credentials to the ColdFusion application. Download this template from the Adobe website or copy it from the ColdFusion 9 DVD.

Remember these points:

  • Web Parts support only the native single sign-on solution; other pluggable single sign-on services are not supported.
  • Only single sign-on credentials are passed to the ColdFusion application. The ColdFusion application must have the necessary logic to retrieve the credentials and log in to the application, as in the sketch after this list.
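A minimal sketch of that receiving logic (the form field names userID and password are assumptions; they must match the field names you configure in the Web Part):

<cfif structKeyExists(form, "userID") and structKeyExists(form, "password")>
    <cflogin>
        <!--- validate the submitted credentials against your user store before this call --->
        <cfloginuser name="#form.userID#" password="#form.password#" roles="user">
    </cflogin>
</cfif>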

Deploy the CF9SSOWebPart.wsp Web Part for SharePoint Portal Server 2007

To configure single sign-on for SharePoint Server 2007, deploy the CF9SSOWebPart.wsp file to the SharePoint server.

  1. Copy the CF9SSOWebPart.wsp file to the BIN folder within the Web Server extensions. It is normally located at Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN in the SharePoint server.
  2. To deploy the solution to SharePoint, use the command prompt to navigate to Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN and enter the following commands, as required.

    To delete the solution if it is already present:

    STSADM.EXE -o deletesolution -name CF9SSOWebPart.wsp -override

    To add the solution to SharePoint:

    STSADM.EXE -o addsolution -f CF9SSOWebPart.wsp

    To deploy the solution to the configured website by specifying the URL:

    STSADM.EXE -o deploysolution -name CF9SSOWebPart.wsp -url <site URL> -local -allowGacDeployment

    To deploy the solution to all the configured websites:

    STSADM.EXE -o deploysolution -name CF9SSOWebPart.wsp -local -allowGacDeployment

Import the CF9SSOWebPart.wsp Web Part into a Web Part Page

  1. Navigate to the web page on the SharePoint server where you want the Web Part to be accessible.
  2. In the Web Part page, click Site Actions > Site Settings.
  3. In the Site Settings page, click Galleries > Web Parts.
  4. In the Web Part gallery, click Upload in the toolbar pane.
  5. Select the CF9SSOWebPart.wsp Web Part.
  6. Enter the following details in the toolbar pane.
    • URL of the ColdFusion application to access
    • The form field name as the User ID
    • The form field name as the password
    • Name of the SSO application where the credentials are configured

Once the Web Part is deployed, it takes the credentials from the SharePoint Single Sign-On database (based on the application name entered in the Tools Pane). These credentials are transferred to the ColdFusion application through the URL (provided in the Tools Pane) in a FORM containing the specified form fields.

Using cfsharepoint

SharePoint integration with ColdFusion helps you dynamically manage user lists, views, and groups; work with images and document workspaces; and use search effectively. The cfsharepoint tag lets you create new lists, retrieve list items, and update list items on the SharePoint server.

The following example shows how to create a picture library list called "getpics":

[The original code listing was garbled beyond recovery. The surviving fragments show a cfsharepoint call (action="create new folder", login="#login#", name="collection1", with strListName and strParentFolder params) followed by fileReadBinary(expandPath("Bird.jpg")) calls that convert the image into a byte array to pass as input for the "upload" action.]

To check and ensure that all the updates are made, you can retrieve the list items using code like the following:

[The original code listing was garbled beyond recovery; only its "SUCCESS" output remains.]

Access ColdFusion from SharePoint using custom Web Parts

You can access ColdFusion applications from within SharePoint using custom Web Parts. You can create a custom Web Part using the Page Viewer Web Part template that is shipped, by default, with SharePoint services 2.0 and 3.0, and Microsoft Office SharePoint Portal Server 2003 or 2007.

  1. From the SharePoint Server page, click Modify Shared Page.
  2. Select Add Web Parts > Browse.

Ref : www.adobe.com

Saturday, April 24, 2010

How can I automate cached queries to update at an exact time each day?

While ColdFusion gives you the ability to choose how long query data is cached, and even a couple of options to clear a cached query by hand, you may still find a situation that requires more precise control over your cached query updates.

For example, say you have a query that generates a list of the newest recipes submitted to your recipe site. Because this query is used so often, you choose to cache it. However, you would like to update the cached query at exactly noon each day to reflect a daily cutoff for new recipe entries. How could you do this?

The solution lies in using the scheduling engine of your choice (the ColdFusion server has one built in) to run a ColdFusion template that will refresh your cached query. If you were using the following cached query in your pages:

<cfquery name="qWithinTest"
         datasource="myDs"
         cachedWithin="#createTimeSpan(1, 0, 0, 0)#">
    select name
    from recipes
</cfquery>

You could create a template that flushes the query cache using the following code:

<cfquery name="qWithinTest"
         datasource="myDs"
         cachedWithin="#createTimeSpan(0, 0, 0, -1)#">
    select name
    from recipes
</cfquery>

It is then a simple matter of using your scheduling engine to run this query-flush template at noon each day.
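For instance, ColdFusion's built-in scheduler could request the flush template daily at noon with a task like the following (the task name, URL, and start date are assumptions):

<cfschedule action="update"
    task="FlushRecipeCache"
    operation="HTTPRequest"
    url="http://localhost/flushRecipeCache.cfm"
    startDate="08/05/2010"
    startTime="12:00 PM"
    interval="daily">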

Wednesday, April 21, 2010

Create a 6-D Array with ColdFusion 8

ColdFusion 8 supports only one- to three-dimensional arrays. If you try to create more than a 3-D array, the server will give you an error. So what is a possible solution, or is it impossible?

What I prefer to do is the following:

<cfscript>
    // ColdFusion 8 arrays support at most three dimensions,
    // so nest one 3-D array inside another.
    arr = arrayNew(3);
    arr[1][1][1] = arrayNew(3);

    // arr can now be indexed six levels deep:
    arr[1][1][1][1][1][1] = "some value";
</cfscript>

Here arr behaves as a 6-D array.

Some more interesting solutions are coming...

Urgent opening in ColdFusion at CTS (Cognizant) Kolkata

There is a very urgent opening for ColdFusion developers at Cognizant (CTS) Kolkata. Interested people can apply. Please send me your updated resume at kalyan.cse.jis@gmail.com

Friday, April 2, 2010

An Introduction to SQL Server 2005 Integration Services

This paper discusses the challenges that face businesses that rely on data integration technologies to provide meaningful, reliable information to maintain a competitive advantage in today’s business world. It discusses how SQL Server 2005 Integration Services (SSIS) can help Information Technology departments meet data integration requirements in their companies. Real-world scenarios are included.

On This Page

Introduction
Challenges of Data Integration
SQL Server 2005 Integration Services
Making Data Integration Approachable

Introduction

The ability to transform corporate data into meaningful and actionable information is the single most important source of competitive advantage in today’s business world. Harnessing the data explosion to better understand the past and get direction for the future has turned out to be one of the most challenging ventures for enterprise Information Technology departments in global organizations. There are three broad categories of issues associated with data integration:

  • Technology challenges

  • Organizational issues

  • Economic challenges

In this paper, we will explore these challenges in detail and discuss how to address them with Microsoft® SQL Server™ 2005 Integration Services (SSIS). First, let’s view them in the context of a real-world scenario.

A Real-World Scenario

A major global transportation company uses its data warehouse to both analyze the performance of its operations and to predict variances in its scheduled deliveries.

Data Sources

The major sources of data in this company include order data from its DB2-based order entry system, customer data from its SQL Server-based customer relationship management (CRM) system, and vendor data from its Oracle-based ERP system. In addition to data from these major systems, data from spreadsheets tracking “extraordinary” events, which have been entered by hand by shipping supervisors, is incorporated into the data warehouse. Currently, external data such as weather information, traffic status, and vendor details (for subcontracted deliveries) are incorporated on a delayed basis from text files from various sources.

Data Consumption

Not only are the sources for these data diverse, but the consumers are also diverse both in their requirements and their geographic locations. This diversity has led to a proliferation of local systems. One of the major efforts for the Information Technology department is to establish a “single version of the truth,” at least for its customer data.

Data Integration Requirements

In view of this diversity of data, business needs, and user requirements, the Information Technology department has specified the following set of data integration requirements:

  • They must provide reliable and consistent historical and current data integrated from a variety of internal and external sources.

  • To reduce lags in data acquisition, data from providers and vendors must be available via Web services or some other direct mechanism such as FTP.

  • They need to cleanse and remove duplicate data and otherwise enforce data quality.

  • Increasing global regulatory demands require that the company maintain clear audit trails. It is not enough to maintain reliable data; the data needs to be tracked and certified.

Challenges of Data Integration

At one level, the problem of data integration in our real-world scenario is extraordinarily simple. Get data from multiple sources, cleanse and transform the data, and load the data into appropriate data stores for analysis and reporting. Unfortunately, in a typical data warehouse or business intelligence project, enterprises spend 60–80% of the available resources in the data integration stage. Why is it so difficult?

Technology Challenges

Technology challenges start with source systems. We are moving from collecting data on transactions (where customers commit to getting, buying, or otherwise acquiring something) to collecting data on pre-transactions (where customer intentions are tracked via mechanisms such as Web clicks or RFID). Data is now not only acquired via traditional sources and formats, such as databases and text files, but is increasingly available in a variety of different formats (ranging from proprietary files to Microsoft Office documents to XML-based files) and from Internet-based sources such as Web services and RSS (Really Simple Syndication) streams. The most pertinent challenges are:

  • Multiple sources with different formats.

  • Structured, semi-structured, and unstructured data.

  • Data feeds from source systems arriving at different times.

  • Huge data volumes.

In an ideal world, even if we somehow manage to get all the data we need in one place, new challenges start to surface, including:

  • Data quality.

  • Making sense of different data formats.

  • Transforming the data into a format that is meaningful to business analysts.

Suppose that we can magically get all the data we need and that we can cleanse, transform, and map the data into a useful format. There is still another shift away from traditional data movement and integration. That is the shift from fixed long batch-oriented processes to fluid and shorter on-demand processes. Batch-oriented processes are usually performed during “downtimes” when users do not place heavy demands on the system. This usually is at night during a predefined batch window of 6-8 hours, when no one is supposed to be in the office. With the increasing globalization of businesses of every size and type, this is no longer true. There is very little (if any) downtime and someone is always in the office somewhere in the world. The sun really doesn’t set on the global business.

As a result we have:

  • Increasing pressure to load the data as quickly as possible.

  • The need to load multiple destinations at the same time.

  • Diverse destinations.

Not only do we need to do all these things, but we need to do them as fast as possible. In extreme cases, such as online businesses, data needs to be integrated on a continuous basis. There are no real batch windows, and latencies cannot exceed minutes. In many of these scenarios, the decision-making process is automated with continuously running software.

Scalability and performance become more and more important as we face business needs that can’t tolerate any downtime.

Without the right technology, systems require staging at almost every step of the warehousing and integration process. As different (especially nonstandard) data sources need to be included in the ETL (Extract, Transform, and Load) process and as more complex operations (such as data and text mining) need to be performed on the data, the need to stage the data increases. As illustrated in Figure 1, with increased staging, the time taken to "close the loop" (i.e., to analyze and take action on new data) increases as well. These traditional ELT architectures (as opposed to value-added ETL processes that occur prior to loading) impose severe restrictions on the ability of systems to respond to emerging business needs.

Figure 1

Finally, the question of how data integration ties into the overall integration architecture of the organization is becoming more important when both the real-time transactional technology of application integration and the batch-oriented high-volume world of data integration technology are needed to solve the business problems of the enterprise.

Organizational Challenges

There are two broad issues with data integration in a large organization; these are the “power” problem, and the “comfort zone” problem.

Power Challenge: Data is power, and it is usually very hard to make people think of data as a truly valuable shared asset of the company. For enterprise data integration to be successful, all the owners of the multiple data sources have to wholeheartedly buy into the purpose and direction of the project. Lack of cooperation from the relevant parties is one of the major reasons for the failure of data integration projects. Executive sponsorship, consensus building, and a strong data integration team with multiple stakeholders are a few of the critical success factors that can help resolve the issues.

Comfort Zone Challenge: Problems of data integration, when analyzed in the context of an isolated need, can be solved in multiple ways. About 60% of data integration is solved by hand-coding. The technologies used to solve similar problems range from replication and ETL to SQL and EAI. People gravitate towards the technology they are familiar with. Although these approaches have overlapping capabilities and can perhaps do the job in isolated cases, these technologies are optimized to solve different sets of problems. When trying to solve the problem of enterprise data integration, the lack of a sound architecture with appropriate technology choices can turn out to be a recipe for failure.

Economic Challenges

The organizational and technology related issues previously outlined conspire together to make data integration the most expensive part of any data warehouse/business intelligence project. The major factors that add to the cost of data integration are:

  • Getting the data out in the format that is necessary for data integration ends up being a slow and torturous process fraught with organizational power games.

  • Cleansing the data and mapping the data from multiple sources into one coherent and meaningful format is extraordinarily difficult.

  • More often than not, standard data integration tools don’t offer enough functionality or extensibility to satisfy the data transformation requirements for the project. This can result in the expenditure of large sums of money in consulting costs to develop special ETL code to get the job done.

  • Different parts of the organization focus on the data integration problem in silos.

When there is a need to put them all together, additional costs are incurred to integrate these efforts into an enterprise-wide data integration architecture.

As the data warehousing and business intelligence needs of the organization evolve, faulty data integration architecture becomes more and more difficult to maintain and the total cost of ownership skyrockets.

SQL Server 2005 Integration Services

The traditional ETL-centric data integration from standard data sources continues to be at the heart of most data warehouses. However, demands to include more diverse data sources, regulatory requirements, and global and online operations are quickly transforming the traditional requirements for data integration. In this fast growing and changing landscape, the need to extract value from data and the need to be able to rely on it is more important than ever before. Effective data integration has become the basis of effective decision making. SQL Server Integration Services provides a flexible, fast, and scalable architecture that enables effective data integration in current business environments.

In this paper we will examine how SQL Server Integration Services (SSIS) is an effective toolset for both the traditional demands of ETL operations, as well as for the evolving needs of general purpose data integration. We will also discuss how SSIS is fundamentally different from the tools and solutions provided by major ETL vendors so it is ideally suited to address the changing demands of global business from the largest enterprise to the smallest business.

SSIS Architecture

Task flow and data flow engine

SSIS consists of both an operations-oriented task-flow engine as well as a scalable and fast data-flow engine. The data flow exists in the context of an overall task flow. It is the task-flow engine that provides the runtime resource and operational support for the data-flow engine. This combination of task flow and data flow enables SSIS to be effective in traditional ETL or data warehouse (DW) scenarios as well as in many other extended scenarios such as data center operations. In this paper we will mainly focus on the data-flow related scenarios. The use of SSIS for data center oriented workflow is a separate topic by itself.

Pipeline architecture

At the core of SSIS is the data transformation pipeline. This pipeline has a buffer-oriented architecture that is extremely fast at manipulating rowsets of data once they have been loaded into memory. The approach is to perform all data transformation steps of the ETL process in a single operation without staging data, although specific transformation or operational requirements, or indeed hardware may be a hindrance. Nevertheless, for maximum performance, the architecture avoids staging. Even copying the data in memory is avoided as far as possible. This is in contrast to traditional ETL tools, which often require staging at almost every step of the warehousing and integration process. The ability to manipulate data without staging extends beyond traditional relational and flat file data and beyond traditional ETL transformation capabilities. With SSIS, all types of data (structured, unstructured, XML, etc.) are converted to a tabular (columns and rows) structure before being loaded into its buffers. Any data operation that can be applied to tabular data can be applied to the data at any step in the data-flow pipeline. This means that a single data-flow pipeline can integrate diverse sources of data and perform arbitrarily complex operations on these data without having to stage the data.

It should also be noted though, that if staging is required for business or operational reasons, SSIS has good support for these implementations as well.

This architecture allows SSIS to be used in a variety of data integration scenarios, ranging from traditional DW-oriented ETL to nontraditional information integration technologies.

Integration Scenarios

SSIS for Traditional DW Loading

At its core, SSIS is a comprehensive, fully functional ETL tool. Its functionality, scale, and performance compare very favorably with high-end competitors in the market at a fraction of their cost. The data integration pipeline architecture allows it to consume data from multiple simultaneous sources, perform multiple complex transformations, and then land the data to multiple simultaneous destinations. This architecture allows SSIS to be used not only for large datasets, but also for complex data flows. As the data flows from source(s) to destination(s), the stream of data can be split, merged, combined with other data streams, and otherwise manipulated. Figure 2 shows an example of such a flow:


Figure 2

SSIS can consume data from (and land data into) a variety of sources including OLE DB, managed (ADO.NET), ODBC, flat file, Excel, and XML using a specialized set of components called adapters. SSIS can even consume data from custom data adapters (developed in-house or by third parties). This allows the wrapping of legacy data loading logic into a data source that can be seamlessly consumed in the SSIS data flow. SSIS includes a set of powerful data transformation components that allow data manipulations that are essential for building data warehouses. These transformation components include:

  • Aggregate: performs multiple aggregates in a single pass.

  • Sort: sorts data in the flow.

  • Lookup: performs flexible cached lookup operations to reference datasets.

  • Pivot and UnPivot: two separate transformations that do exactly what their names suggest.

  • Merge, Merge Join, and UnionAll: perform join and union operations.

  • Derived Column: performs column-level manipulations such as string, numeric, and date/time operations, as well as code page translations. This one component actually wraps what other vendors might break up into many different transformations.

  • Data Conversion: converts data between various types (numeric, string, etc.).

  • Audit: adds columns with lineage metadata and other operational audit data.

In addition to these core data warehousing transformations, SSIS includes support for advanced data warehousing needs such as Slowly Changing Dimensions (SCD). The SCD Wizard in SSIS guides users through specifying their requirements for managing slowly changing dimensions and, based upon their input, generates a complete data flow with multiple transformations to implement the slowly changing dimension load. Support for standard Type 1 and 2 SCD along with 2 new SCD types (Fixed Attributes and Inferred Members) is provided. Figure 3 shows a page from the SCD Wizard.


Figure 3

Figure 4 shows the data flow generated by this Wizard.


Figure 4

SSIS also can be used to load Analysis Services multidimensional OLAP (MOLAP) caches directly from the data-flow pipeline. This means that SSIS can not only be used to create relational data warehouses, but also to load multidimensional cubes for analytical applications.

SSIS and Data Quality

One of the key features of SSIS is its ability to not only integrate data, but also to integrate different technologies to manipulate the data. This has allowed SSIS to include cutting edge “fuzzy logic” based data cleansing components. These components were developed by the Microsoft Research labs and represent the latest research in this area. The approach taken is a domain independent one and doesn’t depend upon any specific domain data, such as address/zip reference data. This allows these transformations to be used for cleansing most types of data, not just address data.

SSIS is deeply integrated with the data mining functionality in Analysis Services. Data mining abstracts out the patterns in a dataset and encapsulates them in a mining model. Among other things, this mining model can then be used to make predictions about which data belongs to a dataset and which data may be anomalous, allowing data mining to be used as a tool for implementing data quality. Support for complex data routing in SSIS allows anomalous data not only to be identified, but also to be automatically corrected and replaced with better values. This enables “closed loop” cleansing scenarios. Figure 5 shows an example of such a closed-loop cleansing data flow.

Figure 5
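
For client code outside the pipeline, the same kind of prediction can be issued against a mining model through ADOMD.NET with a DMX query; within a package, the Data Mining Query transformation plays this role. A minimal sketch, in which the server, catalog, model, and column names are all hypothetical:

using System;
using Microsoft.AnalysisServices.AdomdClient;   // ADOMD.NET client library

// Sketch of asking a mining model whether a row looks anomalous.
// Server, catalog, model, and column names are hypothetical.
class AnomalyCheck
{
    static void Main()
    {
        using var conn = new AdomdConnection("Data Source=localhost;Catalog=DataQuality");
        conn.Open();

        // DMX singleton prediction query against a hypothetical model.
        var cmd = new AdomdCommand(
            "SELECT PredictProbability([Is Anomaly]) " +
            "FROM [CustomerModel] NATURAL PREDICTION JOIN " +
            "(SELECT 'WA' AS [State], 42 AS [Age]) AS t", conn);

        double probability = Convert.ToDouble(cmd.ExecuteScalar());
        Console.WriteLine($"Anomaly probability: {probability:P1}");
    }
}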

In addition to its built-in data quality features, SSIS can be extended to work closely with third-party data-cleansing solutions.

Application of SSIS Beyond Traditional ETL

The ability of the data-flow pipeline to manipulate almost any kind of data, the deep integration with Analysis Services, the support for extending it with a large variety of data manipulation technologies, and the inclusion of a rich work-flow engine allow SSIS to be used in many scenarios that are not traditionally thought of as ETL.

Service Oriented Architecture

SSIS includes support for sourcing XML data in the data-flow pipeline, including data both from files on disk and from URLs over HTTP. XML data is “shredded” into tabular data, which can then be easily manipulated in the data flow. This XML support works hand in hand with the support for Web services: SSIS can interact with Web services in the control flow to capture XML data.

XML can also be captured from files, from Microsoft Message Queuing (MSMQ), and over the Web via HTTP. SSIS enables manipulation of the XML with XSLT, XPath, diff/merge, etc., and can also stream the XML into the data flow.

This support enables SSIS to participate in flexible Service Oriented Architectures (SOA).
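
A rough C# sketch of the shredding step, turning each repeating element of a feed into a row; the URL and element names are hypothetical, and the SSIS XML source does this declaratively from a schema rather than in code:

using System;
using System.Xml;

// Sketch of "shredding" XML into tabular rows. The feed location and
// element names are hypothetical.
class XmlShred
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("https://example.com/feed.xml");    // a file path works here too

        // Each repeating element becomes a row; its children become columns.
        foreach (XmlNode item in doc.SelectNodes("//item"))
        {
            string title = item.SelectSingleNode("title")?.InnerText ?? "";
            string date  = item.SelectSingleNode("pubDate")?.InnerText ?? "";
            Console.WriteLine($"{date}\t{title}");
        }
    }
}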

Data and text mining

SSIS not only has deep integration with the data mining features of Analysis Services, but also has text mining components. Text mining (also referred to as text classification) involves identifying the relationships between business categories and text data (words and phrases). This allows key terms to be discovered in text data and, based upon them, text that is “interesting” to be identified automatically. This in turn can drive “closed-loop” actions to achieve business goals such as increasing customer satisfaction and enhancing the quality of products and services.
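
As a toy illustration of term-based classification, the sketch below scores a piece of text against made-up category keyword lists; in SSIS, the Term Extraction and Term Lookup transformations do the real work:

using System;
using System.Collections.Generic;
using System.Linq;

// Toy term-based classifier. The category keyword lists are made up.
class TextScore
{
    static readonly Dictionary<string, string[]> Categories = new Dictionary<string, string[]>
    {
        ["Support"] = new[] { "refund", "broken", "complaint" },
        ["Sales"]   = new[] { "pricing", "quote", "upgrade" }
    };

    static string Classify(string text)
    {
        var words = text.ToLowerInvariant().Split(' ', ',', '.', '!');
        // Pick the category whose key terms occur most often in the text.
        return Categories
            .OrderByDescending(c => words.Count(w => c.Value.Contains(w)))
            .First().Key;
    }

    static void Main() =>
        Console.WriteLine(Classify("Customer asked for a refund on a broken unit"));
}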

On-demand data source

One of the most distinctive features of SSIS is the DataReader destination, which lands data into an ADO.NET DataReader. When this component is included in a data-flow pipeline, the package containing the DataReader destination can itself be used as a data source, exposed as an ADO.NET DataReader. This allows SSIS to be used not only as a traditional ETL tool to load data warehouses, but also as a data source that can deliver integrated, reconciled, and cleansed data from multiple sources on demand. For example, this might be used to allow Reporting Services to consume data from multiple diverse data sources using an SSIS package as its data source.
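
Consuming such a package from client code looks roughly like the following sketch, which uses the DtsClient ADO.NET provider that ships with SSIS. The package path and destination name are hypothetical, and the exact connection-string syntax should be checked against the DtsClient documentation:

using System;
using System.Data;
using Microsoft.SqlServer.Dts.DtsClient;   // ADO.NET provider shipped with SSIS

// Sketch of consuming a package's DataReader destination from client code.
class PackageAsDataSource
{
    static void Main()
    {
        using var conn = new DtsConnection();
        conn.ConnectionString = @"-f C:\Packages\CleanseAndMerge.dtsx";  // hypothetical path
        conn.Open();

        var cmd = new DtsCommand(conn);
        cmd.CommandText = "DataReaderDest";   // name of the DataReader destination

        // Executing the command runs the package; rows stream out of the
        // data flow as they are produced.
        using IDataReader reader = cmd.ExecuteReader(CommandBehavior.Default);
        while (reader.Read())
            Console.WriteLine(reader.GetValue(0));
    }
}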

A possible scenario that integrates all of these consists of identifying and delivering interesting articles from RSS feeds as part of a regular report. Figure 6 shows an SSIS package that sources data from RSS feeds over the Internet, integrates it with data from a Web service, performs text mining to find interesting articles in the RSS feeds, and then lands the interesting articles into a DataReader destination to be finally consumed by a Reporting Services report.

Figure 6

Figure 7 shows the use of the SSIS package as a data source in the Report Wizard.

Figure 7

From an ETL tool perspective, this scenario is very unusual because there really isn’t any data extraction, transformation, or loading in the traditional sense.

SSIS, the Integration Platform

SSIS goes beyond being an ETL tool not only in terms of enabling nontraditional scenarios, but also in being a true platform for data integration. SSIS is part of the SQL Server Business Intelligence (BI) platform, which enables the development of end-to-end BI applications.

Integrated development platform

SQL Server Integration Services, Analysis Services, and Reporting Services all use a common Visual Studio® based development environment called the SQL Server Business Intelligence (BI) Development Studio. BI Development Studio provides an integrated development environment (IDE) for BI application development. This shared infrastructure enables metadata-level integration between various development projects (integration, analysis, and reporting). An example of such a shared construct is the Data Source View (DSV), which is an offline schema/view definition of data sources, and is used by all three BI project types.

This IDE provides facilities such as integration with version control software (e.g., VSS), along with support for team-based features such as “check-in/check-out,” and as such fulfills the need for an enterprise-class, team-oriented development environment for business intelligence applications. Figure 8 shows a BI Development Studio solution that consists of Integration, Analysis, and Reporting projects.

Figure 8

Not only does this provide a single place to develop BI applications, but it can also be used to develop other Visual Studio projects (using Visual C#®, Visual Basic® .NET, etc.), and so it provides developers with a true end-to-end development experience.

Besides an integrated BI development environment, BI Development Studio has features for true run-time debugging of SSIS packages. These include the ability to set breakpoints and support for standard development constructs such as watching variables. A truly unique feature is the Data Viewer, which provides the ability to view rows of data as they are processed in the data-flow pipeline. This visualization of data can be in the form of a regular text grid or a graphical presentation such as a scatter plot or bar graph. In fact, it is possible to have multiple connected viewers that can display the data simultaneously in multiple formats. Figure 9 shows an example of geographic data visualized using a scatter plot and a text grid.

Figure 9

Programmability

In addition to providing a professional development environment, SSIS exposes all its functionality via a set of rich APIs. These APIs are both managed (.NET Framework) and native (Win32) and allow developers to extend the functionality of SSIS by developing custom components in any language supported by the .NET Framework (such as Visual C#, Visual Basic .NET, etc.) and C++. These custom components can be work-flow tasks and data-flow transformations (including source and destination adapters). This allows legacy data and functionality to be easily included in SSIS integration processes, allowing the past investments in legacy technologies to be effectively leveraged. It also allows easy inclusion of third-party components.
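
As a sketch of what a managed extension looks like, the skeleton below outlines a custom data-flow transformation that uppercases a string column. It omits the metadata plumbing (ProvideComponentProperties and related overrides) that a complete component requires, and the display name, column index, and logic are placeholders:

using Microsoft.SqlServer.Dts.Pipeline;

// Skeleton of a managed custom data-flow transformation. A real
// component also overrides the design-time metadata methods.
[DtsPipelineComponent(
    DisplayName = "Uppercase Transform",        // placeholder name
    ComponentType = ComponentType.Transform)]
public class UppercaseTransform : PipelineComponent
{
    public override void ProcessInput(int inputID, PipelineBuffer buffer)
    {
        // Visit each row in the buffer and rewrite column 0 in place.
        while (buffer.NextRow())
        {
            string value = buffer.GetString(0);
            buffer.SetString(0, value.ToUpperInvariant());
        }
    }
}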

Scripting

The extensibility previously mentioned is not limited to re-usable custom components; it also includes script-based extensibility. SSIS has script components both for task flow and for data flow. These allow users to write scripts in Visual Basic .NET to add ad hoc functionality (including data sources and destinations) and to re-use any preexisting functionality packaged as .NET Framework assemblies.
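
The per-row hook of a data-flow script component looks roughly like the sketch below. The ScriptMain base class (UserComponent) and the Input0Buffer type are generated by the designer from the component's metadata, and the FirstName/LastName/FullName columns are hypothetical. The 2005 release scripts in Visual Basic .NET; later releases also accept C#, which is used here for consistency with the other sketches:

using System;

// Sketch of a script component's per-row hook. UserComponent and
// Input0Buffer are designer-generated types, not shown here; the
// column names are hypothetical.
public class ScriptMain : UserComponent
{
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Derive a new column value for every row flowing through.
        Row.FullName = Row.FirstName + " " + Row.LastName;
    }
}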

Figure 10 shows an example of a script that manipulates rows of data inside a data flow.

Figure 10

This extensibility model makes SSIS not only a data integration tool, but also an integration bus into which technologies like data mining, text mining, and UDM can easily be plugged in to enable complex integration scenarios involving virtually arbitrary data manipulations and structures.

Making Data Integration Approachable

The flexible and extensible architecture of SSIS allows it to address most of the technology challenges to data integration outlined earlier in this paper. As shown in Figure 11, SSIS eliminates (or at least minimizes) unnecessary staging. Because it performs complex data manipulation in a single pipeline operation, it is now possible to react to changes and patterns in the data fairly quickly, in a time frame that is actually meaningful for closing the loop and taking action. This is in contrast to traditional architectures that rely on data staging and that become impractical for closing the loop and taking meaningful action on data.

Figure 11

The extensible nature of SSIS makes it possible for organizations to leverage their existing investments in custom code for data integration by wrapping that code as re-usable extensions to SSIS, thereby taking full advantage of features such as logging, debugging, and BI integration. This greatly helps to overcome some of the organizational challenges outlined earlier in this paper.

The inclusion of SSIS in the SQL Server product makes the cost of acquisition extremely reasonable compared to other high-end data integration tools. Not only is the initial acquisition cost lower, but through tight integration with Visual Studio and the rest of the SQL Server BI tools, the cost of application development and maintenance is also significantly lower than with other similar tools. The extremely reasonable total cost of ownership (TCO) of SSIS (and the rest of SQL Server) makes enterprise-class data integration approachable to all segments of the market, taking it out of the exclusive domain of the largest (and richest) companies.

At the same time, the architecture of SSIS is tuned to take advantage of modern hardware and to deliver performance and scale at the highest end of customer requirements. SSIS enables rich, scalable data integration for all customers, from the largest enterprise to the small and medium business. In conjunction with the rest of the features in SQL Server, the Microsoft customer support infrastructure (ranging from broad, lengthy beta testing, to rich online communities, to premier support contracts), and the consistency and integration with the rest of the Microsoft product offerings, SSIS is truly a unique toolset that opens up new frontiers in data integration.

Taken from microsoft.com