|
I'm glad I could be of service. 
|
|
|
|
|
I'm using ADO.NET in a Windows service application to perform a process on SQL Server 2000. This process runs very quickly when run through Query Analyser or Enterprise Manager, but takes an excessively long time when run through my application. To be more precise, executing the stored procedures and views through Query Analyser takes between 10 and 20 seconds to complete. The same exact stored procedures and views, run in the same exact order through my program, take anywhere from 30 minutes to 2 hours to complete, and the system that runs SQL Server (a 4-CPU Xeon system with 2 GB of physical RAM) is pegged at 25% CPU usage (the query uses 100% of a single CPU's worth of processing power). I am at a complete loss as to why such a vast difference in execution time would occur, but here are some details.
The Windows service runs on a workstation.
SQL Server 2000 runs on a separate server, reached over a 100 Mbps Ethernet network.
Query Analyser/Enterprise Manager run on the same workstation as the Windows service.
The process is as follows:
1) Run a stored procedure to clear temp tables.
2) Import raw text data into a SQL Server table (Reconciliation).
3) Import data from a Microsoft Access database into 3 SQL Server tables (Accounts, HistoricalPayments, CurrentPayments). (This takes about 10-15 minutes to import 70,000-100,000 records from an Access database housed on a network share on a different server.)
4) "Bucketize" the imported data. This step gathers data from the 4 tables named so far (Reconciliation, Accounts, HistoricalPayments, CurrentPayments), places records into another table (Buckets), and assigns a primary category number to each record through a stored procedure.
5) Sort the bucketed data into subcategories, updating each record in Buckets with a subcategory number, through another stored procedure.
6) Retrieve a summary of the data in Buckets (a count of rows and a summation of monetary values), grouped by the primary category number. This is a view.
7) Retrieve a summary of the data in Buckets, grouped by both the primary and subcategory numbers. This is also a view.
When I execute these steps manually through Query Analyser (save step 3), each query takes anywhere from 1 second to 20 seconds. The views, surprisingly, take more time than the fairly complex stored procedures of steps 4 and 5.
When I execute these steps automatically using my Windows service (written in C# using ADO.NET), the simple stored procedures like clearing tables execute quickly, but the stored procedures and views from steps 4-7 take an extremely long time. The stored procedures take at a minimum 30 minutes to complete, and sometimes nearly an hour. The views are the worst of all, taking no less than 1 hour to run, and often two hours (probably longer, actually, since my CommandTimeout is set to 7200 seconds, or two hours). I have never seen such a drastic difference in execution time between Query Analyser and an application. There should be little or no difference at all, considering that everything is a stored procedure (even the views... I wrap each view in a simple stored procedure that calls it with a SELECT), and as such executes on the server. Not only that, but Query Analyser is running on the same exact box as the application, and is connecting to the same SQL Server.
I doubt this is a network bandwidth issue: after calling a stored procedure from code there is no network activity except MSSQL keep-alive messages until the procedure completes and returns its result set or return value (if any), and then it's only a momentary blip as the data is sent across.
I've followed proper practice when using views and stored procedures. When I select, I always explicitly name the columns I wish to retrieve. I have appropriate indexes on the columns in the 4 data tables. The queries that execute in the stored procedures are fairly complex, involving summations, count(), group by, and order by. I can understand a moderate difference in performance between Query Analyser and an ADO.NET application due to ADO.NET's extra overhead, but a difference between 20 seconds and 1 hour is more than can be attributed to .NET overhead.
I would greatly appreciate help from anyone who might have some insight into this. I've scanned the net looking for similar situations, but searching for them is somewhat difficult, considering the number and variety of factors involved. Thanks.
-- Jon Rista
|
|
|
|
|
Some thoughts/questions, in random order:
1) Are you executing the stored procedures synchronously?
2) Are you using ADO.NET transactions? (don't)
3) Is your service running at a low priority? (use Task Manager to view the base priority)
4) Have you installed the latest service pack for whatever version of .NET you are using?
5) What network library are you using to connect?
6) In the SQL Server Client Network Utility, is the "Enable Shared Memory Protocol" checkbox checked? (Try unchecking it)
7) Which method are you using to execute? (best would be ExecuteNonQuery)
8) Try placing SET NOCOUNT ON as the first line of your stored procedures (see the sketch below)
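For (7) and (8) together, here is a minimal sketch of what that looks like; the procedure name, parameter, and connection string are placeholders, not taken from your post:

// Minimal sketch: procedure name, parameter and connection string are hypothetical.
using System;
using System.Data;
using System.Data.SqlClient;

class SprocExample
{
    static void RunBucketize(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("usp_BucketizeData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure; // call the proc directly, not an ad-hoc batch
            cmd.CommandTimeout = 300;                      // seconds
            cmd.Parameters.Add("@BatchId", SqlDbType.Int).Value = 1;

            conn.Open();
            cmd.ExecuteNonQuery();                         // synchronous; returns when the proc finishes
        }
        // Inside the procedure itself, put SET NOCOUNT ON as the first statement so
        // SQL Server doesn't stream a "rows affected" message after every statement.
    }
}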
See also this MSDN checklist.
my blog
|
|
|
|
|
(8) is probably your best bet (SET NOCOUNT ON)
my blog
|
|
|
|
|
Thanks for the questions. First, let me answer them:
1) I'm not sure what you mean by "synchronously". I just call sqlCommand.ExecuteNonQuery() after setting up the parameters.
2) Yes, I was. I eliminated them after adding a temp stored procedure, run from Query Analyser, that wraps a SQL transaction around the calls to all my other stored procedures; processing time jumped a ton. I let it run for just over half an hour and then killed it. I think transactions may have been the problem, or at least a large part of it. I don't know why, but my queries, which are few but complex, may just be too much to transact.
3) SQL Server is running at a normal priority (priority level 7).
4) All of .NET is fully updated to the latest. We actually have a few versions, but the code has a .exe.config specifying which version to use.
5) I'm connecting to SQL Server using TCP/IP.
6) Shared memory protocol is disabled.
7) I'm executing with ExecuteNonQuery().
8) I have not been using SET NOCOUNT ON. I forgot all about that, but I'll try adding it in a moment and see what happens.
Now, to my questions. What exactly happens when you run queries through a transaction? Do the changes made get marked in a certain way, or is there some more extensive processing done to properly transact a set of queries/procedure calls? When I ran each of these stored procedures one by one, manually in QA, each one took between 1 and 20 seconds to run. The whole process couldn't have even taken a minute, including the time for me to type the commands. Is it really possible that enabling transactions could induce a performance hit of at least 240%, possibly more? What kind of difference is there in performance between using true SQL Server transactions and an ADO.NET SqlTransaction?
I've used transactions, and ADO.NET SqlTransactions, in the past. I've never run into such performance issues, but then again, I've never transacted a set of queries as complex as these. There aren't many of them, maybe 20 total, but they are all fairly complicated queries with lots of summations, a couple of subqueries, and an average of 3 joins per query (except for DELETEs). Maybe some resources on SQL Server transactions - tips on improving their performance, proper ordering of queries, and proper query structuring to maximize performance - might help. I've been searching the net but haven't found anything truly helpful yet.
Thanks for the tips so far.
|
|
|
|
|
Usually, if anything, transactions will speed things up. Everything in SQL Server is a transaction, so it is only the size that differs when you make "large" transactions.
The reason I asked about ADO.NET transactions is that it is sometimes best to stick to one type. So, if you are using SQL transactions, use just those; if you are using ADO.NET transactions, try to avoid SQL transactions.
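For illustration, a minimal sketch of sticking to a single ADO.NET transaction (the procedure names and connection string are placeholders): every command is given the same SqlTransaction, and the procedures themselves do not start their own BEGIN TRAN.

// Minimal sketch of one ADO.NET transaction spanning several procedure calls;
// procedure names and the connection string are hypothetical.
using System;
using System.Data;
using System.Data.SqlClient;

class TransactionExample
{
    static void RunProcess(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                foreach (string proc in new string[] { "usp_ClearTempTables", "usp_Bucketize", "usp_SortBuckets" })
                {
                    using (SqlCommand cmd = new SqlCommand(proc, conn, tx))  // every command must be given the transaction
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}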
Deadlocks can also be a problem when using transactions. Usually, though, these will come back as errors. While your queries are running, you can run the command sp_who2 in Query Analyzer to see if any queries are being blocked. That could be a deadlock situation, or just some other process that is also trying to do work.
I don't know how much it would help in this scenario, but for optimizing queries my best tip is to use optimizer hints. For example, DELETE MyTable WITH (TABLOCKX) WHERE X = 1. The TABLOCKX hint tells the query to take a single exclusive lock on the whole table, rather than relying on the default finer-grained (row/page) locking. In certain queries this can improve performance tremendously. (It applies to all types of queries, not just deletes.)
my blog
|
|
|
|
|
Hmm... I think I am more confused about the situation now than before. Why would adding a transaction to the process in QA increase the processing time by so much? Originally the whole thing took about a minute; after adding the transaction it took much longer.
Anyway, I'm adding SET NOCOUNT ON to each sp. I'll play with table locking and see how it helps. I don't think I've run into any deadlock situations, since the queries are all run sequentially, one after the other, and there are currently no other processes hitting the server.
|
|
|
|
|
I am trying to make an address book and am having trouble with certain steps.
Does anyone know where I could find a good example of how to add, delete, find, edit, save, and undo changes to an Access database? This is my first time creating a project with a database, and I am having trouble with the concept. For example, once a user fills in all the text boxes with their info and clicks the save button, do I have to add the row to my DataSet first and then update the database? That is the confusing part for me. Even just a small example would be great, so I can get the overall concept and build on it from there. I am using VB.NET.
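To make the concept concrete, here is a minimal sketch of that save flow (shown in C# against a hypothetical Contacts table; a VB.NET version uses the same calls): the row is added to the DataSet first, and then the data adapter pushes the change to the database.

// Minimal sketch of the add-then-update flow; table, columns and the
// connection string are hypothetical.
using System.Data;
using System.Data.OleDb;

class AddressBookExample
{
    static void SaveContact(string name, string phone)
    {
        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=AddressBook.mdb";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT Name, Phone FROM Contacts", conn);
            OleDbCommandBuilder builder = new OleDbCommandBuilder(adapter); // generates the INSERT command

            DataSet ds = new DataSet();
            adapter.Fill(ds, "Contacts");          // 1) load the current rows into the DataSet

            DataRow row = ds.Tables["Contacts"].NewRow();
            row["Name"] = name;                    // 2) change the DataSet in memory first
            row["Phone"] = phone;
            ds.Tables["Contacts"].Rows.Add(row);

            adapter.Update(ds, "Contacts");        // 3) then push the changes back to the database
        }
    }
}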
Thanks
Beginner in ASP.Net and VB.Net
|
|
|
|
|
How do I display a message when the process cannot find a record in the database?
I'm using a SQL database.
Please help.
Thanks.
|
|
|
|
|
DotNet wrote:
How do I display a message when the process cannot find a record in the database?
That depends on what technology you are using. What is accessing your SQL Server? Or do you just want a message to appear in the Query Analyser?
"If a man empties his purse into his head, no man can take it away from him, for an investment in knowledge pays the best interest." -- Joseph E. O'Donnell
Not getting the response you want from a question asked in an online forum: How to Ask Questions the Smart Way!
|
|
|
|
|
Just a simple DataGrid, a TextBox, and a search button.
When you enter an ID in the textbox, if the ID is not in the table, then label1.Visible = true should be set, or a prompt message should pop up.
How do I write the condition for the case where the DataGrid is blank (no data)?
|
|
|
|
|
label1.Visible = (myGrid.Items.Count == 0); , called sometime after the DataBind() - i.e. show the label only when the grid came back with no rows.
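A fuller sketch of the same idea (the control names, query, and connection string are assumptions, not from this thread): fill the grid from the search, then toggle the grid and the label based on the row count.

// Minimal sketch of the search-button handler; control names, the Accounts table
// and the connection string are hypothetical.
using System;
using System.Data;
using System.Data.SqlClient;

public class SearchPage : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.TextBox txtId;
    protected System.Web.UI.WebControls.DataGrid myGrid;
    protected System.Web.UI.WebControls.Label label1;

    protected void btnSearch_Click(object sender, EventArgs e)
    {
        string connStr = "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT Id, Name FROM Accounts WHERE Id = @Id", conn))
        {
            adapter.SelectCommand.Parameters.Add("@Id", SqlDbType.VarChar, 10).Value = txtId.Text;

            DataTable table = new DataTable();
            adapter.Fill(table);

            myGrid.DataSource = table;
            myGrid.DataBind();

            bool found = table.Rows.Count > 0;
            myGrid.Visible = found;      // hide the empty grid
            label1.Visible = !found;     // "record not found" message
        }
    }
}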
my blog
|
|
|
|
|
How can I check for the case where the ID I enter, say P0002, is not in the database, so that the DataGrid becomes invisible and then label1.Visible = true?
The initial part is what I don't understand.
Please help.
|
|
|
|
|
When importing an Access database to SQL Server, all the date fields get offset by several days (it's uniform, but irritating). The formats were changed to long and everything else looks great. Any changes made to the dates after the import continue to reflect the offset. Any suggestions?
Thanks in advance...
Eagles may soar but weasels don't get sucked into jet engines.
|
|
|
|
|
Hi,
I'm new to ASP.NET.
I have module01 to create and fill a data adapter, and another, module02, to add a new data row to the DataSet.
(module02…)
Dim dRow As DataRow = xSet.Tables(0).NewRow
dRow("Login_name") = "Baste"
dRow("pws") = "sindhu"
xSet.Tables(0).Rows.Add(dRow)
Since I put this code inside a command button, the button_click submits the page to IIS, which I want to avoid.
Is it possible to execute module02 without submitting the page? If the answer is no, does the client computer need to maintain a constant connection with IIS while adding a new row to a DataSet? And what is the disconnected model, then? Please explain.
Thanks in advance.
|
|
|
|
|
When you write ASP or ASP.NET, you are writing server-side code, i.e. the code that you write will execute on the server. If you want to run client-side code, then you have to write that in JavaScript.
In terms of ADO.NET, the disconnected nature of datasets refers to the idea that the data is available after the connection to the database has been closed. It has nothing to do with IIS.
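A minimal sketch of that disconnected pattern (the table, columns, and connection string are hypothetical): the connection is only open while filling and while updating; in between, the DataSet is edited entirely in memory.

// Minimal sketch of the disconnected DataSet pattern; table, columns and the
// connection string are hypothetical.
using System.Data;
using System.Data.SqlClient;

class DisconnectedExample
{
    static void AddUser(string loginName, string password)
    {
        string connStr = "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT Login_name, pws FROM Users", connStr);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter); // supplies the INSERT command

        DataSet xSet = new DataSet();
        adapter.Fill(xSet);                 // opens the connection, fills, then closes it

        // The connection is closed here; the DataSet is edited purely in memory.
        DataRow dRow = xSet.Tables[0].NewRow();
        dRow["Login_name"] = loginName;
        dRow["pws"] = password;
        xSet.Tables[0].Rows.Add(dRow);

        adapter.Update(xSet);               // reopens the connection just long enough to push the new row
    }
}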
my blog
|
|
|
|
|
I am designing a new application and I don't want to confine it to a specific database system. So when I want to choose a data adapter in C#.NET, my options are OdbcDataAdapter or OleDbDataAdapter. Can anybody explain the difference between the two, or point me to a link where I can learn about it?
|
|
|
|
|
|
No, I am sorry. The information on the MSDN page is not sufficient. I have already gone through that.
It just tells me the following difference:
- The OleDbDataAdapter object is suitable for use with any data source exposed by an OLE DB provider.
- The OdbcDataAdapter object is optimized for accessing ODBC data sources.
But it does not explain, for scenarios where both OLE DB and ODBC providers are available (e.g. MS SQL Server and Oracle), which one I should go for, or what the merits/demerits of each are.
|
|
|
|
|
Supposedly OleDb is faster than Odbc because it has to go through fewer layers between your application and the database.
In my experience, however, this hasn't been the case - they appear to be roughly equal, with Odbc giving better support for Oracle (although Oracle now provides ODP.NET, which should change things).
If you google for OLEDB vs ODBC, you'll see several discussions of this sort.
http://www.google.co.uk/search?q=OleDB+vs+ODBC&hl=en&lr=&ie=UTF-8&start=0&sa=N
store your internet favourites online - www.my-faves.co.uk
|
|
|
|
|
I'm pretty sure the original OLE DB used to call the ODBC drivers.
Now I think both are developed in parallel - neither depends on the other.
I had always heard/thought the opposite: ODBC is a simpler C API design with fewer layers. I tend to believe this, as I've seen OLE DB's internal COM calling diagrams - I can't believe it is the thinner API.
I still consider ODBC the cleaner/faster interface, but OLE DB is more abstract - it allows connections to Excel spreadsheets, text files, and so on.
I tend to still use ODBC for most projects. The only place I'm actually using OLE DB right now is for WinCE development - and that's because WinCE doesn't support ODBC.
...cmk
Save the whales - collect the whole set
|
|
|
|
|
Hey thanks Davey... This will help!
|
|
|
|
|
Thanks guys. I got some information, but it mainly described the concepts behind the OLEDB and ODBC technologies, like:
ODBC is Open Database Connectivity, which is a connection method to data sources and other things. It requires that you set up a data source, or what's called a DSN, using a SQL driver (or another driver if connecting to other database types). Most database systems support ODBC.
OLE is Object Linking and Embedding. OLEDB is partly distinguished from OLE itself, which is now called "automation".
OLEDB is the successor to ODBC: a set of software components that allow a "front end" such as a GUI based on VB, C++, Access or whatever to connect with a back end such as SQL Server, Oracle, DB2, mySQL et al. In many cases the OLEDB components offer much better performance than the older ODBC.
OLEDB is a different type of data provider that came about with MS's Universal Data Access in 1996 and does not require that you set up a DSN. It is commonly used when building VB apps and is closely tied to ADO. It works with COM, and with DCOM as of SQL 7.0.
However, my question is more about the .NET classes provided for these. If I don't want to get tied to a specific database, what is the better choice, OdbcDataAdapter or OleDbDataAdapter, in terms of performance and scalability?
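One way to defer the choice, whatever the performance answer turns out to be, is to code against the provider-independent ADO.NET interfaces and construct the concrete OleDb/Odbc objects in a single place. A minimal sketch (the query and connection strings are hypothetical):

// Minimal sketch of keeping the concrete provider swappable; connection strings
// and the query are hypothetical.
using System.Data;
using System.Data.Odbc;
using System.Data.OleDb;

class ProviderNeutralExample
{
    // The rest of the application only sees IDbDataAdapter / DataSet.
    static IDbDataAdapter CreateAdapter(bool useOleDb, string query, string connStr)
    {
        if (useOleDb)
            return new OleDbDataAdapter(query, connStr);
        else
            return new OdbcDataAdapter(query, connStr);
    }

    static DataSet Load(bool useOleDb)
    {
        string query = "SELECT ID, Name FROM Customers";
        string connStr = useOleDb
            ? "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;"
            : "Driver={SQL Server};Server=myServer;Database=myDb;Trusted_Connection=Yes;";

        IDbDataAdapter adapter = CreateAdapter(useOleDb, query, connStr);
        DataSet ds = new DataSet();
        adapter.Fill(ds);       // both adapters implement the same Fill/Update contract
        return ds;
    }
}

Swapping providers then only touches CreateAdapter and the connection string, which keeps the performance question from leaking into the rest of the code.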
|
|
|
|
|
Hi,
I am using OleDbCommand to insert/update data in the database (SQL 2000).
Is it possible to handle the RowUpdated event without using OleDbDataAdapter?
Are there any other methods for handling the updated-row event?
I need this to get the value of an auto-increment field.
Any help will be greatly appreciated.
Thanx
saleem
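One alternative that avoids the data adapter entirely is to read the new identity value back yourself, in the same batch as the INSERT. A minimal sketch, assuming the SQLOLEDB provider (which allows two statements in one command) and a hypothetical Orders table:

// Minimal sketch; the Orders table, its columns and the connection string are
// hypothetical. The INSERT and the identity read happen in one batch, on one
// connection, so SCOPE_IDENTITY() refers to the row just inserted.
using System;
using System.Data;
using System.Data.OleDb;

class IdentityExample
{
    static int InsertOrder(string customer)
    {
        string connStr = "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        using (OleDbCommand cmd = new OleDbCommand(
            "INSERT INTO Orders (Customer) VALUES (?); SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
        {
            cmd.Parameters.Add("@p1", OleDbType.VarChar, 100).Value = customer;  // OleDb uses positional ? markers
            conn.Open();
            // The INSERT returns no result set, so ExecuteScalar returns the SELECT value.
            return (int)cmd.ExecuteScalar();   // the new auto-increment value
        }
    }
}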
|
|
|
|
|
Hello.
I am trying to Open a CDaoDatabase object on Windows XP (the user has Excel 2000), using the "Excel 8.0" connect string. It returns the error "Couldn't find installable ISAM". BUT the same code works on Windows 2000. I tried "Excel 9.0" and "Excel 10.0" as well, but without success...
CDaoDatabase* p_daoDB = NULL;
p_daoDB = new CDaoDatabase;
p_daoDB->Open(_T("SCCardOS.XLS"),  // file name
              FALSE,               // exclusive
              FALSE,               // read only
              _T("Excel 8.0;"));   // connect string
I found a post with the same problem, but without a solution ;(
I need some tips on solving this...
Thank you in advance.
|
|
|
|
|