Object disposal order in .NET
Posted by jtabernik on Tuesday, September 29, 2009
Labels: .net, ADO.NET, IAccessor.ReleaseAccessor, RCW exception
Comments: (0)
I got a crazy error yesterday while refactoring my data access layer for a client. I wanted to provide a mechanism to get the output parameters from a stored procedure call. I typically use delegates or events to give the developer the option to get the output parameters if they want them.
So I made a change to the order of events to dispose of the connection earlier (rather than hold onto it longer than necessary--"acquire resources late and release them early," as Microsoft says). My pseudocode looked something like this:
...
Close and dispose of connection
Call output parameter access callback
Dispose of command object
...
My data was coming back fine, but I started getting an unusual error:
System.Runtime.InteropServices.InvalidComObjectException: COM object that has been separated from its underlying RCW cannot be used.
The error referenced the following:
System.Data.Common.UnsafeNativeMethods.IAccessor.ReleaseAccessor(IntPtr hAccessor, Int32& pcRefCount)
As you might guess, I did not suspect that the order of object disposal was causing this, so I spent a lot of time looking at other potential issues before nailing it down. I believe there is probably a solution that involves closing the command's underlying connection rather than destroying it out from under the command. Unfortunately, I have not had time to tinker with this yet.
If you run across this error, try the simple solution: close and dispose of your connection after the command object has been disposed.
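Here is a minimal sketch of the working order, assuming a plain SqlClient-based layer; the stored procedure name and the OutputParameterCallback delegate are illustrative stand-ins for whatever your own layer uses:
Imports System.Data
Imports System.Data.SqlClient

Module DisposalOrderSketch
    ' Hypothetical callback type for handing output parameters back to the caller.
    Public Delegate Sub OutputParameterCallback(ByVal parameters As SqlParameterCollection)

    Public Sub ExecuteProc(ByVal connectionString As String, ByVal callback As OutputParameterCallback)
        Using cn As New SqlConnection(connectionString)
            Using cmd As New SqlCommand("MyStoredProc", cn)
                cmd.CommandType = CommandType.StoredProcedure
                ' ... add input and output parameters here ...
                cn.Open()
                cmd.ExecuteNonQuery()
                ' Read the output parameters while the command is still alive.
                If callback IsNot Nothing Then callback(cmd.Parameters)
            End Using ' the command is disposed first...
        End Using ' ...and the connection after it
    End Sub
End Module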
Good luck out there!
Use a Dynamic Data Web Site for simple database table administration
Posted by jtabernik on Friday, September 18, 2009
Labels: asp.net, asp.net 3.5 extensions, asp.net SP1, dynamic data web site
Comments: (0)
Since its release last year as part of the ASP.NET Extensions (and its subsequent inclusion in the 3.5 framework with SP1), the Dynamic Data Web Site has been a fantastic way to develop administrative pages for database tables quickly and easily, literally without writing a line of code. But recently, a client wanted to take one of these sites I generated and change it. They wanted the look and feel to match their standard apps--I assumed this would be pretty easy. But they also wanted to limit and modify some functionality--for example, taking away the edit and delete functions unless the user explicitly had access to maintain data. I was reluctant to commit to a cost for this because I had no idea how hard or easy it would be.
To my surprise, the site generated by the Dynamic Data Web Site is amazingly flexible, clear, and easy to configure. A master page and a CSS stylesheet are included with the site. By replacing the site's master page with the client's standard master page, and by changing the colors and styles in the CSS, I had the dynamic site matching the client's standard look and feel in minutes. To do this, look in the DynamicData/PageTemplates folder. There are five aspx pages you will need to modify: change each one's master page to point to the standard one, rename the content section to match the master page's content body, and maybe (as I did) move some content into new content sections to rearrange the page a bit. It was that easy.
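As a rough sketch, the edit to each page amounts to something like this (the master page path and placeholder ID here are illustrative, not the actual names from my client's site):
<%@ Page MasterPageFile="~/ClientStandard.master" ... %>
<asp:Content ID="MainContent" ContentPlaceHolderID="MainBody" runat="server">
    ...
</asp:Content>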
But what about the security functionality? My client uses a well-designed page inheritance hierarchy that exposes security information. I was able to go into the code-behind for the aspx pages mentioned above and change them to inherit from the client's standard base page. This let me use logic in the aspx pages to filter out functionality I didn't always want to display. For example, I wrapped the Edit buttons on the list page in an If block like this:
If CanPerformAdminFunctions() Then
...
End If
In short, I was able to hide any and all functionality from users who should be able to see the data but not edit it.
On a more detailed note, we also had an issue with bit fields in the database: some of ours were nullable. That means the standard Yes/No check box display for a bit is not sufficient; you need a tristate control. The easiest solution is either a drop-down or a radio button list. Either way, the change was easy to make--just swap out the control in FieldTemplates/BooleanEdit for a new control, as sketched below, and add some logic to set the field value to Nothing if it is not set.
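Here is a rough sketch of the swap, assuming a drop-down with blank, "True", and "False" items; the control name, class name, and items are illustrative, not from the stock template:
Imports System.Collections.Specialized

Partial Class TriStateBooleanEditField
    Inherits System.Web.DynamicData.FieldTemplateUserControl

    ' Assumes the .ascx markup declares DropDownList1 with items "", "True", and "False".
    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        If Not Page.IsPostBack Then
            ' FieldValue is Nothing for a NULL bit; map that to the blank item.
            If FieldValue Is Nothing Then
                DropDownList1.SelectedValue = ""
            Else
                DropDownList1.SelectedValue = CBool(FieldValue).ToString()
            End If
        End If
    End Sub

    Protected Overrides Sub ExtractValues(ByVal dictionary As IOrderedDictionary)
        ' A blank selection writes Nothing (NULL) back to the nullable bit column.
        If DropDownList1.SelectedValue = "" Then
            dictionary(Column.Name) = Nothing
        Else
            dictionary(Column.Name) = Boolean.Parse(DropDownList1.SelectedValue)
        End If
    End Sub
End Class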
All in all, I have been extremely impressed with the ability to change, edit, and extend the functionality of the Dynamic Data Web Site. This is a great tool to have in your toolbox, whether you need a quick-and-dirty admin site or you want to start with a simple site and extend it to meet your needs.
Good luck out there!
Using the UITableView, Part 1
Posted by jtabernik on Wednesday, September 9, 2009
Labels: delegate, iPhone development, Navigation, UITableView
Comments: (0)
I approached the UITableView for iPhone development with some trepidation. It looked pretty complicated to get up and running, and I assumed I would spend a significant amount of time getting it working. To my surprise, adding it to my app, configuring it, and adding functionality to it has been remarkably easy.
If you are a novice, you may still be learning how to trigger a view transition when a table row is tapped. Here is the code you need in the tableView:didSelectRowAtIndexPath: method. It assumes you have a NIB named "PageTwo" whose class is named pageTwo.
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    // Load the destination view controller from its NIB.
    pageTwo *anotherViewController = [[pageTwo alloc] initWithNibName:@"PageTwo" bundle:nil];
    // Push it onto the navigation stack; animated:YES gives the standard slide effect.
    [self.navigationController pushViewController:anotherViewController animated:YES];
    [anotherViewController release]; // the navigation controller retains it
}
Now you can transition to a new page, any new page, when a record in the UITableView is selected. Note that this can be animated, so you immediately get the great sliding animation effects standard in all iPhone apps.
Next, consider adding buttons to the navigation bar at the top. I found some complicated examples of this, but Apple provides good direction for keeping it simple. This is all I needed to do to put a button on the right side of the navigation bar; I put this code in my viewDidLoad method:
// Create a bordered button that calls myFunction: (a method you define) when tapped.
UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithTitle:@"Button" style:UIBarButtonItemStyleBordered target:self action:@selector(myFunction:)];
self.navigationItem.rightBarButtonItem = addButton;
[addButton release]; // the navigation item retains it
For my next trick, I will demonstrate how to use your controller class as a delegate for the UITableView, so your class can respond to calls from the UITableView and populate the table on the fly. See you next time!
John
Load and Parse a File on the iPhone
Posted by jtabernik on Thursday, September 3, 2009
Labels: componentsSeparatedByString, iPhone development, load file, parse file, stringWithContentsOfFile
Comments: (0)
Right now, I am hard at work on my next game. It is a puzzle-style game, and I am using a variety of tricks to pull it together. One of them is using my .NET skills to generate the actual puzzles on the Windows platform, then transferring them to the Mac for use there. I had to do some computations that I knew how to do easily on the PC, so I decided to use those skills (read: take a shortcut).
Anyway, this means I had to be able to read in the file once I got it over to Xcode. Luckily, like most things, there are an impressive number of examples for doing the load...assuming you don't need to parse the results and pass them into a custom object.
The format of my file is pretty simple: an identifier for a node, which is a string, followed by three floating-point values for the x-, y-, and z-coordinates of a point in 3-dimensional space. So I need to read in the rows from the file and parse out values I can use to create these custom nodes.
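For illustration, a line of the file might look like this (the identifier and values are made up):
NodeA,12,-4,7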
It was pretty tedious getting there, because the main method for splitting delimited strings, componentsSeparatedByString:, returns types that you might not expect. Coming from a non-C background, I was slowed down by having to convert each token into something I could use. I am a .NET guy...I expect some implicit conversion!
Hopefully this bit of code helps you. It loads all the lines from the string, parses out the values, and creates a custom object for each one. Some of the supporting code is left out so you can see the meat of the example; I also converted it to loading ints instead of floats because that made the code a little simpler. Enjoy!
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"nodeOutputFile" ofType:@"txt"];
NSString *fileContents = [NSString stringWithContentsOfFile:filePath encoding:NSASCIIStringEncoding error:NULL];
if (fileContents) {
    // One line in the file per node.
    NSArray *lines = [fileContents componentsSeparatedByString:@"\n"];
    for (int i = 0; i < [lines count]; i++) {
        JTNode *newNode = [JTNode alloc];
        // Each line is "identifier,x,y,z".
        NSArray *tokens = [[lines objectAtIndex:i] componentsSeparatedByString:@","];
        NSString *identifier = [tokens objectAtIndex:0];
        NSNumber *x = [NSNumber numberWithInt:[[tokens objectAtIndex:1] integerValue]];
        NSNumber *y = [NSNumber numberWithInt:[[tokens objectAtIndex:2] integerValue]];
        NSNumber *z = [NSNumber numberWithInt:[[tokens objectAtIndex:3] integerValue]];
        [newNode createWithX:[x integerValue]
                           Y:[y integerValue]
                           Z:[z integerValue]
                  Identifier:identifier];
        // do something here with newNode
    }
}
Good luck to you!
John Tabernik
Load SQL Server from .NET Much Faster Than With Insert Statements
Posted by jtabernik on Tuesday, September 1, 2009
Labels: .net, BCP, insert, sql server, sqlbulkcopy, sqlclient
Comments: (0)
SQL Server is a developer's friend--it is everywhere, it is easy to use, there are tons of resources available for it, and you can distribute a free version with your apps. Odds are SQL Server is either your primary database platform, or you have some data in it and work with it fairly regularly.
One of the most common questions I hear is: "How can I get data into SQL Server?"
Most of the time, the answer lies somewhere between creating insert statements for a few records and developing a DTS package to handle a file import. But every client has different needs, and sometimes you run across a situation where you need a different tool in the toolbox.
A client I work for needed a robust solution to get a lot of data into SQL Server while providing detailed reporting on the process, high availability of the database, and a good deal of fault tolerance. Due to some environmental issues, DTS was not feasible. As a result, I needed a fast way to get a lot of data (hundreds of thousands of rows per load) into the database with essentially just .NET.
Luckily, I happened upon the SqlBulkCopy class in System.Data.SqlClient. SqlBulkCopy wraps the BCP facility that has been in SQL Server since version 7. The best part: via BCP, this class can bypass standard SQL Server overhead like logging, triggers, and constraint checks. When loading the data without enforcing constraints is acceptable--i.e., when you really just want to bulk load the table--SqlBulkCopy is unbeatable.
The process couldn't be simpler: load a DataTable with data, create a SqlBulkCopy instance, set up the column mappings, and execute the copy. Starting from a DataTable is a great approach because there are many, many ways in .NET to get data into one natively, and you can populate it by hand if necessary (manually adding rows).
Here is some example code. Pass this method a DataTable, set up the connection string, and you will get great performance!
Private Sub BcpRecordsToSqlServer(ByRef MyTable As DataTable)
    Dim s As New SqlBulkCopy(SQLConnectionString, SqlBulkCopyOptions.Default)
    s.DestinationTableName = "MyDestinationTableName"
    s.BulkCopyTimeout = 36000 ' time-out in seconds
    ' Map each source column to the identically named column in the target table.
    For Each field As DataColumn In MyTable.Columns
        s.ColumnMappings.Add(field.ColumnName, field.ColumnName)
    Next
    s.WriteToServer(MyTable)
    s.Close()
End Sub
A few notes about this code. First, you can pass options when constructing the class to control whether triggers fire, whether constraints are checked, and so on. Second, you can set a time-out on the copy. Third, I am simply looping through the columns in the DataTable and mapping each one to the identically named column in the target table--assuming you named the input columns after the target table's columns. This is nice because you only have to populate the fields you want in the destination table.
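For instance, here is a sketch of the first point, using option flags that ship with the class (combine them with Or):
Dim bulkCopy As New SqlBulkCopy(SQLConnectionString, _
    SqlBulkCopyOptions.FireTriggers Or SqlBulkCopyOptions.CheckConstraints)
' Fire insert triggers and check constraints during the load, trading away some speed.
bulkCopy.BulkCopyTimeout = 600 ' time-out in seconds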
This process is great for reading chunks of data and uploading them to SQL Server. I originally ran a process that pulled data and uploaded it via individual insert statements; I stopped it after 6 hours. In contrast, using BCP I was able to upload 100,000 rows in under 10 minutes!
Clearly, insert statements versus BCP is an apples-to-oranges comparison--if data integrity is not certain from your data source, you may find BCP is the wrong choice because constraints are bypassed. These statements are not logged, so you cannot recover them in a partial backup or log-shipping situation. Also, you cannot run the BCP load in a transaction.
But if you are able to scrub the data first (or even as you load the DataTable), or if you are certain the data is acceptable, you cannot beat the raw speed of this process for inserting into SQL Server from .NET.
Under certain circumstances, this is an invaluable tool to have in the toolbox!
Good luck!
John Tabernik