"Multi-threading and parallelism to their logical limits"
Then think (literally) outside the box. Taking something "to its limit" is a REALLY poor design goal...
The newest and greatest parallel processing architecture is distributed processing: agents on multiple machines (boxes), with a master delegating the work. That way you really can have the best of both worlds, including throughput of millions of processed data items per second. SETI@home has been doing this since the early 2000s, and the concept is catching on. Either you build a processing farm and never push any one machine to its limit, or you create some kind of public distributed processing system and let people with free idle time (and, even better, free electricity and equipment) do your processing for you.
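To make the master/worker idea concrete, here is a minimal in-process sketch (my illustration, in Java rather than anything from the thread): a master pushes work items onto a shared queue and a pool of workers drains it. In a real processing farm the queue would be a network channel and the workers would be agents on other boxes, but the delegation pattern is the same.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// In-process sketch of the master/worker pattern described above:
// a master delegates work onto a shared queue; workers drain it.
public class MasterWorkerDemo {
    static final int POISON = -1; // shutdown sentinel, never a real work item here

    static int run(int workerCount, int items) throws InterruptedException {
        BlockingQueue<Integer> work = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        ExecutorService workers = Executors.newFixedThreadPool(workerCount);

        for (int w = 0; w < workerCount; w++) {
            workers.submit(() -> {
                try {
                    int item;
                    while ((item = work.take()) != POISON) {
                        processed.incrementAndGet(); // stand-in for real record processing
                    }
                    work.put(POISON); // hand the sentinel on so the next worker also stops
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // The "master": delegate the items, then signal shutdown.
        for (int i = 0; i < items; i++) work.put(i);
        work.put(POISON);

        workers.shutdown();
        workers.awaitTermination(30, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(4, 10_000)); // prints 10000
    }
}
```

Every item is processed exactly once regardless of how the workers interleave, which is the property a delegating master relies on.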
First, what makes you think that the number of threads you launch is going to do the job? Without the cores to support that many, you're just wasting resources and killing throughput, not improving it.
You have to find out what the domain of the problem is first. Why are you launching threads? What causes one thread to take so long processing a single record? Is it a compute-bound problem? Or is it an I/O problem where the thread is stalled, waiting for an I/O operation to complete?
Without knowing the exact causes of the delays in processing a record, throwing threads around will get you nowhere fast. You can throw threads at a stack of records, but if there are not enough cores or enough I/O throughput to run those threads, you'll get no benefit. You may have to add hardware to solve the problem, not threads.
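The compute-bound versus I/O-bound distinction translates directly into how big a thread pool should be. A rule-of-thumb sketch (my addition, not from the post, and the I/O formula is a crude estimate, not a law):

```java
// Sketch: sizing a pool by workload type.
// Compute-bound: cap at the hardware core count; extra threads just
// take turns on the same cores and add scheduling overhead.
// I/O-bound: the pool can be larger, because blocked threads use
// almost no CPU while they wait; a rough estimate is
//   threads ~= cores * (1 + waitTime / computeTime).
public class PoolSizing {
    static int computeBoundSize() {
        return Runtime.getRuntime().availableProcessors();
    }

    static int ioBoundSize(double waitMs, double computeMs) {
        int cores = Runtime.getRuntime().availableProcessors();
        return (int) Math.max(1, cores * (1 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        // e.g. a task that waits 90 ms on I/O for every 10 ms of CPU work
        System.out.println(computeBoundSize() + " " + ioBoundSize(90, 10));
    }
}
```

The point is that "how many threads?" has no answer until you know where the time goes, which is exactly the research the post is talking about.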
But, this is going to take a ton of research to figure out.
The problem I am trying to address, to start with, is that some of our developers continue to use SQL and Oracle as an application platform rather than for data storage, as intended. I've seen the immensely complicated SQL statements they have written, which is horrible programming practice, and I was looking for solutions using modular programming and parallelism. The problem is processing 4 million records in 45 different ways, quickly and efficiently. There are no real problems yet, and I stress yet. So far their solution to slow processing is to throw more hardware at it rather than increase the efficiency of the actual application.
No amount of hardware or threading or anything else, short of going back and redesigning and reworking that pile of crap, is going to solve the problem. You cannot fix bad design with anything other than redesign.
You can throw all the threads you want at the problem, but they'll all just end up sitting idle, waiting for the SQL server to process. Sure, your application will be starting hundreds of threads, but the SQL server will not be matching you. It'll spin up only what it can work with and will queue up any work it can't readily get to.
Having 30 threads querying remote networks is significantly different from having 30 threads working on matrix computations over in-memory data.
Both of those are impacted if you are attempting to run the 'application' on a server that is already running other 'applications'.
And all of that depends on whether you do everything correctly. Mess up a single sync and you will suddenly have an application that runs slower than a single-threaded app. Or you may land on an "optimal" solution that completely ignores the bandwidth limits of the network.
What I am working toward is more of an assembly-line approach to some complex financial transactions. Some of my replies to other posts under this topic will give a better idea of what I am working toward. This project is testing the feasibility of creating truly modular programming: not creating new modules for an existing program and recompiling, but passing code changes into an already running program, where they are applied without the program missing a beat. In a sense, it would end up a process that never has to be restarted and processes continuously.
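One in-process way to approximate that idea (my sketch, not the poster's design) is to route each pipeline stage through an atomically swappable reference. Real hot deployment (OSGi, per-module class loaders, etc.) is far more involved, but the principle is the same: callers always dereference the current implementation, so a swap takes effect on the very next record without a restart.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.IntUnaryOperator;

// Sketch: a processing stage whose implementation can be replaced
// while records keep flowing. HotSwapStage and its operators are
// hypothetical names for illustration only.
public class HotSwapStage {
    private final AtomicReference<IntUnaryOperator> step;

    HotSwapStage(IntUnaryOperator initial) {
        step = new AtomicReference<>(initial);
    }

    int process(int record) { return step.get().applyAsInt(record); }

    void swap(IntUnaryOperator replacement) { step.set(replacement); }

    public static void main(String[] args) {
        HotSwapStage stage = new HotSwapStage(r -> r + 1); // "version 1" of the step
        int before = stage.process(10);                    // 11
        stage.swap(r -> r * 2);                            // "deploy" version 2, no restart
        int after = stage.process(10);                     // 20
        System.out.println(before + " " + after);          // prints "11 20"
    }
}
```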
I am modifying an SSRS 2008 R2 report so users can export the data to Excel and sort and filter it there. The only way to accomplish this task is to remove the report headers.
The users will click a button that says "hide headers" and then click the view button. Then the users will export to Excel.
Problem: When the SSRS 2008 R2 report is exported to Excel, row 1 in Excel is blank; the column headers and data start on row 2. I want to keep row 1 from being blank.
To solve this problem, for the tablix that I want to keep, I want to set its location value to 0,0. I do not want to come up with a new report that looks like the original report but with the data shifted. I would prefer to write C# code to solve this problem, since I cannot find a way to set the tablix.Location property within SSRS itself.
So, can you show and/or tell me the following:
1. What C# code can be set up to accomplish my goal?
2. How would you attach the C# code to the SSRS 2008 R2 report for the code to work?