Thursday, April 30, 2009

Creating a sample Windows Communication Foundation application using .NET

This article describes how to create and run a sample Windows Communication Foundation (WCF) application with .NET Framework 3.0 and above. It walks step by step through a sample .NET project that creates a WCF service and hosts it in a console application.


To develop a WCF application, we need

  • A WCF service library, which implements the service using the ServiceContract, OperationContract, and DataContract attributes. This holds the core logic of the service

  • A service host, built with ServiceHost. The service can be hosted in a console application, a Windows service, IIS, or Windows Process Activation Service (WAS)

  • A client that consumes the service


Step by step: creating a sample WCF application and consuming it

1. Open Visual Studio and create a WCF Service Library project as shown.

[WCF1.JPG]

2. Once the project is created, Visual Studio by default creates a ServiceContract named IService1, along with an OperationContract and a DataContract.

[WCF2.JPG]

For this sample, we use the default contracts created by Visual Studio.
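For reference, the generated service contract looks roughly like this (a trimmed sketch of the Visual Studio template, showing only the operation used in this sample):

using System.ServiceModel;

[ServiceContract]
public interface IService1
{
    [OperationContract]
    string GetData(int value);
}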

3. Implement the operation contract. This operation is used by WCF clients to consume the service:
public class Service1 : IService1
{
    public string GetData(int value)
    {
        // TODO: custom code here
        return string.Format("Sample WCF application. You have entered {0}", value);
    }
}

4. Configure the endpoints and base address in the app.config file, as below.

[WCF3.JPG]
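For reference, the relevant system.serviceModel section follows this pattern (a sketch; the file Visual Studio generates uses a different default base address, shown here with the address used later in this article):

<system.serviceModel>
  <services>
    <service name="WcfServiceLibrarySample.Service1"
             behaviorConfiguration="Service1Behavior">
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8731/WcfServiceLibrarySample/SampleApp/"/>
        </baseAddresses>
      </host>
      <endpoint address="" binding="wsHttpBinding"
                contract="WcfServiceLibrarySample.IService1"/>
      <endpoint address="mex" binding="mexHttpBinding"
                contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="Service1Behavior">
        <serviceMetadata httpGetEnabled="true"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>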

5. Create a new Windows console application (WCFHostSample) to host the WCF service (WcfServiceLibrarySample) as shown below.


[WCF4.JPG]

Add a reference to the WcfServiceLibrarySample library as shown below.

[WCF5.JPG]

6. Include the following namespaces in the WCF host sample and add the following code to the console application to host the service:

using System.ServiceModel;
using System.ServiceModel.Description;

static void Main(string[] args)
{
    // Host the WcfServiceLibrarySample service
    ServiceHost servicehost = new ServiceHost(typeof(WcfServiceLibrarySample.Service1));
    servicehost.Open();
    Console.WriteLine("Service started. Press <Enter> to terminate the service.");
    Console.ReadLine();
    servicehost.Close();
}

Note: create an app.config for this host console application (WCFHostSample). Copy the app.config (with the endpoint and behavior nodes) created in the service library into the console application, because the service library is loaded by this console application and therefore uses the host console application's configuration.

An alternate way to host the WCF service without a configuration file (app.config), using ServiceMetadataBehavior, is described below.

static void Main(string[] args)
{
    Uri baseAddress = new Uri("http://localhost:8731/WcfServiceLibrarySample/SampleApp");
    ServiceHost servicehost = new ServiceHost(typeof(WcfServiceLibrarySample.Service1), baseAddress);

    // With no app.config, the endpoint must be added in code
    servicehost.AddServiceEndpoint(typeof(WcfServiceLibrarySample.IService1),
        new WSHttpBinding(), baseAddress);

    // Describe the service metadata so clients can discover it
    ServiceMetadataBehavior metadataBehavior =
        servicehost.Description.Behaviors.Find<ServiceMetadataBehavior>();
    if (metadataBehavior == null)
    {
        metadataBehavior = new ServiceMetadataBehavior();

        // Configure the metadata URL
        metadataBehavior.HttpGetUrl = baseAddress;
        metadataBehavior.HttpGetEnabled = true;

        // Add the behavior to the host
        servicehost.Description.Behaviors.Add(metadataBehavior);
    }
    servicehost.Open();
    Console.WriteLine("Press <Enter> to terminate the service.");
    Console.ReadLine();
    servicehost.Close();
}

7. Start the WCF host console application as shown below. It must be running for a client to consume the service.

[WCF5a.JPG]

Note: once the service host is closed, the WCF service can no longer be consumed from a client.


8. Create a Windows console application (WCFClientSample) to consume the hosted WCF service through its endpoint.


[WCF6.JPG]

9. Add a service reference using the URL configured in the application configuration file, and name the reference (WCFSampleRef) as shown below.

[WCF7.JPG]

10. Write the following code in the WCF client project to consume the WCF service operations:

static void Main(string[] args)
{
    // Create an instance of the WCF proxy
    WCFSampleRef.Service1Client serviceproxy = new WCFSampleRef.Service1Client();
    // Call the service GetData method
    Console.WriteLine(serviceproxy.GetData(100));
    Console.ReadLine();
}

11. Run the client; it produces the following output.

[WCF8.JPG]

Tuesday, April 28, 2009


SQL Server 2005 - Tips for Optimizing SQL Server Query Performance

The following are some tips to improve the performance of SQL Server queries.

1. Turn on the execution plan and statistics, then analyze the query performance
The first step is to use the tools that help you determine whether one form of a query is better than another. Comparing the original query against a rewritten query is the best way to evaluate the benefit of any change.

To do this, go into SQL Server Management Studio, open the Query menu, and select “Include Actual Execution Plan.”

[SqlServer1.JPG]

Next, turn on statistics. Type the following statement:

SET STATISTICS IO ON;

To complement the above, you should also compare actual execution times. To do this, execute the following statement:

SET STATISTICS TIME ON;

After running a simple query, analyze the query execution plan for optimizing the query as below.

[sqlserver2.JPG]

After getting the execution plan, do the following:

  • Analyze the execution plan
  • Check the estimated cost of execution

2. Use clustered indexes on appropriate columns
Having the clustered index on the primary key is sometimes not the most efficient place for it to be, even though a clustered index is the most performant type of index.
Consider the following:

  • Generally, the whole table is sorted according to the clustered index, which is created by default when the primary key is created.
  • If the table is involved in lots of joins based on the primary key, that is probably the right place for the clustered index.
  • But if you are continually filtering or grouping on other columns in the table, you should consider changing the primary key index to nonclustered and putting the clustered index on those filtered or grouped columns.


In such cases, drop the clustered index from the primary key and create a new clustered index on the other column:

ALTER TABLE testtable
DROP CONSTRAINT PK_testtable
GO
ALTER TABLE testtable
ADD CONSTRAINT PK_testtable
PRIMARY KEY NONCLUSTERED (testcolumn);

Then the following statement adds a new clustered index on the required non-primary-key column that is involved in many joins:

CREATE CLUSTERED INDEX MyClusteredIndex
ON testtable (testcolumnID)

3. Use Indexed views

Indexed views have been around for a while; they improve performance through the SCHEMABINDING option and an index created on the view.
Imagine that you have created a view without the SCHEMABINDING option and then altered the schema of an underlying table (deleted one column). The next time you run the view, it will fail. This is where SCHEMABINDING comes into the picture: creating a view with the SCHEMABINDING option locks the underlying tables and prevents any changes that would alter the table schema.

CREATE VIEW testview
WITH SCHEMABINDING
AS
SELECT a.testcolumn1, b.testcolumn2
FROM dbo.testtable a INNER JOIN dbo.testtable2 b
ON (a.column1 = b.column1)

Note the use of the schema binding attribute. This prevents you from changing underlying tables while this view exists, and is necessary if you want to add an index.

CREATE UNIQUE CLUSTERED INDEX Idxtestview
ON testview (
testcolumn1, testcolumn2
)

4. Use Covering Indexes

Covering indexes got a boost in SQL Server 2005 with the new INCLUDE clause. Basically, you create an index optimized for the query itself based on joins, filters, and grouping, and then add additional columns that can be retrieved directly from the index for use in SELECT statements, as follows:

CREATE NONCLUSTERED INDEX TestIndex
ON salestable (OrderId)
INCLUDE (Quantity, UnitPrice)

If queries are executed on the OrderId column, the index will be used, and if the only other columns being retrieved are Quantity and UnitPrice, then the query optimizer doesn’t need to retrieve any extra columns from the underlying table. It can just use the index. Because the query optimizer doesn’t need to query the original table, performance is improved.

5. Keep your clustered index small in the table

One thing to consider when deciding where to put your clustered index is how big its key will be. Every nonclustered index on the table stores the clustered key with each of its rows, so a large clustered key on a table with a decent number of rows can blow out the size of every other index significantly. A table with no clustered index can be just as bad, because each row in a nonclustered index then carries the row pointer (RID), which is 8 bytes per row.

6. Avoid cursors

Cursors are less performant because every FETCH statement executed is equivalent to another SELECT statement execution that returns a single row. The optimizer cannot optimize a CURSOR statement as a whole; it only optimizes the queries within each iteration of the cursor loop, which is undesirable. Given that most CURSOR statements can be rewritten using set logic, they should generally be avoided.
Each time a FETCH is executed, it causes a round trip to the server, further degrading performance.
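As an illustration (with a hypothetical orders table and an arbitrary 10% increase), here is a row-by-row cursor and its set-based equivalent:

-- Cursor version: one FETCH (one round trip) per row
DECLARE @id int;
DECLARE c CURSOR FOR SELECT OrderId FROM orders WHERE Status = 'Open';
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE orders SET Amount = Amount * 1.1 WHERE OrderId = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based version: one statement the optimizer can plan as a whole
UPDATE orders SET Amount = Amount * 1.1 WHERE Status = 'Open';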

7. Archive old data using Replication or Backup
If you want to improve query performance, give the optimizer less work to do. If you can cut down the number of rows the query has to deal with, performance will improve.

This can be done in the following ways

  • Create audit triggers that move historical data into separate archive tables.
  • If you don't need data after a certain period of time, back up your database and remove the old data.
  • SQL Server replication can also be used to keep a copy of the data before removing it.


8. Partition your data correctly
These days, you don’t actually have to move old data out of a table to improve query performance. You can partition your table into a number of data segments based on a partition function. The query optimizer can use the partition function to look at rows only on the most appropriate file group. To create partitions, you need a partition function and a partition scheme.

CREATE PARTITION FUNCTION TestPartitionFunction (int)
AS RANGE RIGHT FOR VALUES (1, 100, 1000)

Once the partition function is created, you can then apply the function to a partition scheme for a table.
CREATE PARTITION SCHEME TestPartitionScheme
AS PARTITION TestPartitionFunction
TO (filegrp1, filegrp2, filegrp3, filegrp4)

Then it’s just a matter of creating the table to use the partition scheme on the column you decided to partition on:
CREATE TABLE myPartitionTable (Col1 int, Col2 varchar(100))
ON TestPartitionScheme (Col1)

9. Remove user-defined inline scalar functions
Inline scalar functions are convenient when you want to return a single value, but they come at the expense of performance. They look somewhat like stored procedures, and they can be used in SQL statements. The problem is that they are not expanded, and therefore not optimized, into the query plan by the query optimizer. Such queries may appear cheap in the execution plan and the IO statistics, but when you run them they can perform very badly.


CREATE FUNCTION dbo.fnGetPostalCode (@Suburb varchar(100), @State varchar(10))
RETURNS int
AS
BEGIN
    RETURN ISNULL((SELECT PostalCode FROM dbo.PostalCode
                   WHERE Suburb = @Suburb AND State = @State), -1);
END

The following statement will only perform a clustered index scan, not a seek, and on a big table this could seriously affect performance.

SELECT s.SalesPersonID, s.SuburbName, s.State,
       dbo.fnGetPostalCode(s.SuburbName, s.State) AS PostalCode
FROM dbo.SalesPerson s

You can see the details by opening SQL Server Management Studio's Query menu and selecting “Include Actual Execution Plan.”
One way to get around this is to simply inline the underlying query from the function, as follows:

SELECT s.SalesPersonID, s.SuburbName, s.State,
       ISNULL((SELECT PostalCode FROM dbo.PostalCode
               WHERE Suburb = s.SuburbName AND State = s.State), -1) AS PostalCode
FROM dbo.SalesPerson s

Inlining the SQL statement performs significantly better than the inline function.

10. Use APPLY in SQL queries
The APPLY operator was created for situations where you would otherwise put multiple inline nested queries in one statement. For example, take the following statement:

SELECT a.SalesOrderID,
       Quantity  = (SELECT TOP 1 Quantity  FROM SalesOrderDetail WHERE SalesOrderID = a.SalesOrderID),
       UnitPrice = (SELECT TOP 1 UnitPrice FROM SalesOrderDetail WHERE SalesOrderID = a.SalesOrderID)
FROM SalesOrderHeader a

This performs an extra query per column, retrieving data from another table using the same criterion. It can be replaced with the following:

SELECT b.SalesOrderID, b.OrderDate, a.*
FROM SalesOrderHeader b
CROSS APPLY (SELECT TOP (1) c.UnitPrice AS UnitPrice, c.Quantity AS Quantity
             FROM SalesOrderDetail c
             WHERE c.SalesOrderId = b.SalesOrderId
             ORDER BY c.Quantity DESC) AS a

The a.* in the above query returns Quantity and UnitPrice; they can also be referenced individually as a.UnitPrice and a.Quantity.

11. Use computed columns

Computed columns are derived from other columns in a table. By creating and indexing a computed column, you can turn what would otherwise be a scan into a seek.
For example, if you needed to calculate SalesPrice and you had a Quantity and UnitPrice column, multiplying them in the SQL inline would cause a table scan as it multiplied the two columns together for every single row. Create a computed column called SalesPrice, then index it and the query optimizer will no longer need to retrieve the UnitPrice and Quantity data and do a calculation.
The syntax for creating a computed column:

CREATE TABLE sales (
    SALESORDERID int,
    UNITPRICE int,
    QUANTITY int,
    SALESPRICE AS (UNITPRICE * QUANTITY)
)

where SALESPRICE is a computed column

Inserting values into the other columns automatically populates SALESPRICE, the computed column:
INSERT INTO sales (SALESORDERID, UNITPRICE, QUANTITY) VALUES (1, 100, 2)
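The computed column can then be indexed so that filters on it become seeks (a sketch; the expression must be deterministic, as this one is):

CREATE INDEX IX_sales_salesprice ON sales (SALESPRICE)

-- This predicate can now seek on the index instead of scanning and recomputing
SELECT SALESORDERID FROM sales WHERE SALESPRICE > 150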

12. Use the correct transaction isolation level

If there are a lot of rows in your table, multiple concurrent requests to that table could cause contention if the correct transaction isolation level is not set. If requests are repeatedly blocked, it could be time to consider changing it.

For example, READ UNCOMMITTED is equivalent to dirty reads, or NOLOCK. That is, if a transaction is in the middle of processing and you read a row, the data may not be valid, especially if multiple inserts/updates that require atomicity are occurring. This is the most performant level since it ignores locking altogether, but it is generally not allowed by good design and is a special case.

With READ_COMMITTED_SNAPSHOT, any data read by the transaction is the transactionally consistent version of the data that existed at the start of the transaction. Internally, a versioned copy of the data is placed in tempdb until the transaction has completed. Except when the database is being recovered, snapshot transactions do not request locks when reading data, and therefore do not block other transactions from writing data; transactions writing data likewise do not block transactions reading data.

There are various other isolation levels, including REPEATABLE READ and SERIALIZABLE among others, that you can examine to determine whether they are appropriate for your needs.
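For illustration, the isolation level can be set per session, or the snapshot option can be enabled at the database level (testdb is a placeholder name):

-- Per-session isolation level
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Database-level option enabling READ_COMMITTED_SNAPSHOT
ALTER DATABASE testdb SET READ_COMMITTED_SNAPSHOT ON;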

A Microsoft checklist of suggestions for SQL Server performance is available here.

.NET Framework frequently asked interview questions

Frequently asked interview questions, with answers, are explained below. They help in understanding .NET Framework basics.

1. Difference between mutable and immutable objects
An object qualifies as immutable if its value cannot be modified once it has been created. For example, methods that appear to modify a String actually return a new String containing the modification. Developers modify strings all the time in their code, which may make strings appear mutable, but they are not: the string variable is simply changed to reference a new string containing the result. For this very reason .NET has the System.Text.StringBuilder class. If you need to modify the actual contents of a string-like object heavily, such as in a for or foreach loop, use System.Text.StringBuilder; System.String is immutable.

System.Text.StringBuilder was designed with the purpose of providing a mutable string on which a variety of operations can be performed.
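A minimal sketch contrasting the two:

using System.Text;

// Each += allocates a brand-new string; the original is never modified.
string s = string.Empty;
for (int i = 0; i < 1000; i++)
{
    s += i;          // copies the whole string on every iteration
}

// StringBuilder appends into an internal buffer instead.
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000; i++)
{
    sb.Append(i);    // no new string per iteration
}
string result = sb.ToString();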

2. Difference between an interface and an abstract class
In an interface, all methods are abstract; there is no implementation. In an abstract class, some methods can be concrete.
In an interface, no accessibility modifiers are allowed. An abstract class may have accessibility modifiers.
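A small sketch of the difference:

using System;

public interface IShape
{
    double Area();                    // implicitly public and abstract; no body allowed
}

public abstract class Shape
{
    public abstract double Area();    // abstract member, like the interface method

    protected void Log()              // concrete member with an accessibility modifier
    {
        Console.WriteLine("Calculating area...");
    }
}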

3. Difference between a struct and a class
Structs are value types and are thus stored on the stack, which avoids heap-allocation overhead and gives faster retrieval. Another difference is that structs cannot inherit.

4. Difference between the Debug class and the Trace class
Their documented APIs look the same. Use the Debug class for debug builds; use the Trace class for both debug and release builds.
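For example, Debug calls are compiled away in release builds (they depend on the DEBUG symbol), while Trace calls remain (the TRACE symbol is defined for both configurations by default):

using System.Diagnostics;

Debug.WriteLine("Written in debug builds only");       // compiled only when DEBUG is defined
Trace.WriteLine("Written in debug and release builds"); // compiled when TRACE is defined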

5. The Assert() method in the Debug and Trace classes
In a debug compilation, Assert takes a Boolean condition as a parameter and shows the error dialog if the condition is false. The program proceeds without interruption if the condition is true.
For release mode, you need to use:
Trace.Assert(false, "Message to be displayed");

6. How can a .NET Framework program call the Win32 API?
Using platform invoke (P/Invoke), .NET Framework programs can access native code libraries by means of static DLL entry points.

Here is an example of C# calling the Win32 MessageBox function:

using System;
using System.Runtime.InteropServices;
class MainApp
{
    [DllImport("user32.dll", EntryPoint = "MessageBox")]
    public static extern int MessageBox(int hWnd, String strMessage, String strCaption, uint uiType);

    public static void Main()
    {
        MessageBox(0, "Hello, this is PInvoke in operation!", ".NET", 0);
    }
}

7. Difference between early binding and late binding
Early binding occurs at compile time, whereas late binding occurs at run time.
With early binding, Visual Basic .NET uses type information that is available about the Office application in question to bind directly to the methods or properties that it needs to use.

This occurs at compile time
objApp = New Excel.Application()

In contrast to early binding, late binding waits until run time to bind property and method calls to their objects.

This occurs at run time, and .NET reflection is used to invoke the objects and methods dynamically:
objApp = CreateObject("Excel.Application")
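For comparison, the C# equivalent of late binding uses reflection explicitly (a sketch; it assumes Excel is installed so the ProgID resolves):

using System;
using System.Reflection;

class LateBindingDemo
{
    static void Main()
    {
        // No compile-time reference to the Excel interop assembly is needed
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        object excelApp = Activator.CreateInstance(excelType);

        // The property is resolved by name at run time through reflection
        excelType.InvokeMember("Visible", BindingFlags.SetProperty, null,
                               excelApp, new object[] { true });
    }
}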

8. The supportedRuntime element in the .NET Framework
It specifies which versions of the common language runtime the application supports. This element should be used by all applications built with version 1.1 or later of the .NET Framework, and should be specified in the exe.config or web.config file.


[Config1.JPG]
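The configuration shown in the image follows this pattern (the version values here are illustrative):

<configuration>
   <startup>
      <!-- Listed in order of preference -->
      <supportedRuntime version="v2.0.50727"/>
      <supportedRuntime version="v1.1.4322"/>
   </startup>
</configuration>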

9. Delay Signing in .Net

During the development process, strong-name keys would need to be exposed to developers, which is not good practice from a security point of view. In such situations you can assign the key later and use delay signing during development:

[assembly:AssemblyKeyFileAttribute("myKey.snk")]

[assembly:AssemblyDelaySignAttribute(true)]

The compiler inserts the public key into the assembly manifest and reserves space in the PE file for the full strong name signature. The real public key must be stored while the assembly is built so that other assemblies that reference this assembly can obtain the key to store in their own assembly reference.

Because the assembly does not have a valid strong name signature, verification of that signature must be turned off. You can do this by using the -Vr option with the Strong Name tool. The following example turns off verification for an assembly called myAssembly.dll:

sn -Vr myAssembly.dll

Just before shipping, you submit the assembly to your organization's signing authority for the actual strong name signing, using the -R option with the Strong Name tool:

sn -R myAssembly.dll sgKey.snk

The above example signs an assembly called myAssembly.dll with a strong name using the sgKey.snk key pair.

10. Native Image Generator (Ngen.exe)

The Native Image Generator utility (Ngen.exe) lets you run the JIT compiler on your assembly's MSIL ahead of time and generate native machine code that is cached on disk. After the image is created, the .NET runtime uses the image to run the code rather than JIT-compiling it at load time. Running Ngen.exe on an assembly potentially allows the assembly to load and execute faster:

ngen.exe install assemblyname.dll

Friday, April 17, 2009

Creating a Sequential Workflow Example with Windows Workflow Foundation (WF)

This article describes how to create and run a console sequential workflow in Windows Workflow Foundation with .NET Framework 3.0 and above. It shows, step by step, how to create a sequential workflow in a sample .NET project.

Prerequisites


  • .NET Framework 3.5
  • Visual Studio Team System 2008
    or
  • .NET Framework 3.0
  • Visual Studio 2005 Professional Edition
  • Windows Workflow Foundation extensions for Visual Studio 2005

Steps for creating a sample workflow project

1. Open Visual Studio and create a Sequential Workflow Console Application project as shown.


[WFImage1.JPG]

2. After creating the application, your designer window should look like this

[WFImage2.JPG]

3. From the Visual Studio Toolbox, add a Code activity as shown below.

[WFImage3.JPG]

4. After the Code activity is added, double-click it. This creates the codeActivity1_ExecuteCode method.

5. Add the code snippets below.

Initialize the global variables:


public sealed partial class Workflow1 : SequentialWorkflowActivity
{
    // Global variables used by the activities
    private int inputval = 0;
    private string status = string.Empty;
}

Add the following code for the code activity:

private void codeActivity1_ExecuteCode(object sender, EventArgs e)
{
    // Get the input from the user
    Console.WriteLine("Enter a number");
    // Store the input in the global variable
    inputval = int.Parse(Console.ReadLine());
    Console.WriteLine("Workflow started....");
}

6. Go back to the design view and click ifElseBranchActivity1. In the Properties window you will see the Condition row; click it and select Declarative Rule Condition. Expand the row, set the Condition Name, add a new rule, and enter the condition code for the if branch like this.

Note: inputval is a global variable for the class

this.inputval < 100

Do the same for the else branch with the condition:

this.inputval >= 100

7. After the conditions are added to the IfElse activity, the designer will look like this

[Wfimage4.JPG]

8. Also add codeActivity3 and codeActivity4 in the if and else branches to set the status. The final workflow diagram looks like this


[WFImage5.JPG]


9. Add the code for all the code activities:
private void codeActivity2_ExecuteCode(object sender, EventArgs e)
{
    // status is the global variable
    Console.WriteLine(status);
    Console.WriteLine("Workflow completed...");
    Console.ReadLine();
}

private void codeActivity3_ExecuteCode(object sender, EventArgs e)
{
    // Set the global variable; also add custom code if any
    status = "If Activity Executed....";
}

private void codeActivity4_ExecuteCode(object sender, EventArgs e)
{
    // Set the global variable; also add custom code if any
    status = "Else Activity Executed....";
}

10. Now press F5 or click Run to run the project. If the user input is 10, the output will look like this

[WFImage6.JPG]

After completing this exercise, you should have a good understanding of how to create a sequential workflow and put it to use.

Wednesday, April 15, 2009

Web.config Hierarchy in ASP.NET

ASP.NET integrates the settings in configuration files (the Machine.config and Web.config files) into a single inheritance hierarchy. With a few exceptions, you can place a Web.config file wherever you need to override the configuration settings that are inherited from a configuration file located at a higher level in the hierarchy.

We can have a Web.config file in each folder or sub-folder of the web application, overriding the higher-level configuration.

System-wide configuration settings for the .NET Framework are defined in Machine.config. The Machine.config file is located in the C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG folder. By including Web.config files in sub-folders, we can override the settings defined in the Web.config file in the application's root folder.

The following is a sample section declaration from a Machine.config file:

<section name="processModel"
    type="System.Web.Configuration.ProcessModelSection, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    allowDefinition="MachineOnly" allowLocation="false"/>

The allowDefinition attribute and its significance in the Machine.config file:
  • If allowDefinition is MachineOnly, the section cannot be overridden at either the application level or the folder level. The only section declared in the Machine.config file with this setting is processModel. If you try to override it in a Web.config file, the error "Parser Error Message: It is an error to use a section registered as allowDefinition='MachineOnly' beyond machine.config" is thrown.
  • If allowDefinition is MachineToApplication, the section can be overridden by the root directory Web.config. Sections with this setting in Machine.config are authentication, machineKey, sessionState, trust, and securityPolicy.
  • If the allowDefinition attribute is omitted in a section declaration of the Machine.config file, the section can be overridden at any level.
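For example (a sketch with a made-up PageSize setting), a root Web.config value can be overridden by a sub-folder's Web.config:

<!-- Root Web.config -->
<configuration>
  <appSettings>
    <add key="PageSize" value="20"/>
  </appSettings>
</configuration>

<!-- /Reports/Web.config: overrides the inherited value for that folder only -->
<configuration>
  <appSettings>
    <remove key="PageSize"/>
    <add key="PageSize" value="100"/>
  </appSettings>
</configuration>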

Tuesday, April 14, 2009

Entity Framework in .Net

The Entity Framework aims to eliminate the impedance mismatch, both between data models and between languages, that application developers would otherwise have to deal with, using new features available in the upcoming version of ADO.NET.

Key Points
  • This introduces a new model called Entity Data Model (EDM)
  • It uses a new provider called MapProvider.
  • The mapping provider is given the EDM schema and the mapping information, so it can internally use the mapping infrastructure to translate between the logical and conceptual schemas.
  • Integration with LINQ (Language Integrated Query): a query written using LINQ is processed by the compiler, which means you get compile-time validation just like the rest of the application code.
  • Visual Studio tools can generate EDM Schema from logical and conceptual schema.
Consider the following query against the logical (store) schema:

SELECT sp.FirstName, sp.LastName, sp.HireDate
FROM SalesPerson sp
INNER JOIN Employee e ON sp.SalesPersonID = e.EmployeeID
INNER JOIN Contact c ON e.EmployeeID = c.ContactID
WHERE e.SalariedFlag = 1 AND e.HireDate >= '2006-01-01'

In the above query we have made unnecessary joins to the Contact and Employee tables even though we care only about the salesperson data. Now that we have a higher-level EDM model, we could write the same query against a set of SalesPeople entities as:

SELECT sp.FirstName, sp.LastName, sp.HireDate
FROM AdventureWorks.AdventureWorksDB.SalesPeople AS sp
WHERE sp.HireDate >= '2006-01-01'

So, for example, to execute a query against the EDM model that finds the names and hire dates of sales people hired after a given date, the ADO.NET code would be:

using (MapConnection con = new MapConnection(connectionName))
{
    con.Open();
    MapCommand cmd = con.CreateCommand();
    cmd.CommandText = "SELECT sp.FirstName, sp.LastName, sp.HireDate " +
                      "FROM AdventureWorks.AdventureWorksDB.SalesPeople AS sp " +
                      "WHERE sp.HireDate > @date";
    cmd.Parameters.AddWithValue("date", hireDate);
    DbDataReader r = cmd.ExecuteReader();
    while (r.Read())
    {
        Console.WriteLine("{0}\t{1}", r["FirstName"], r["LastName"]);
    }
}

Steps to create a project using the .NET Entity Framework

1. Create an Entity Data Model using the ADO.NET Entity Data Model template (.csdl file).

2. Use EntityClient:

using (EntityConnection cn = new EntityConnection("name=NorthwindEntities"))
{
    cn.Open();
    Console.WriteLine(cn.State.ToString());
}

NorthwindEntities is the connection string created automatically when the data model is created.

3. Work with objects using ObjectQuery(Of T).

4. Use LINQ to Entities to query your database.

Creating a sample project is described here.