|
Do some research into ROW_NUMBER and OVER (PARTITION BY ...); these may be the keywords you are looking for. Oh no, I'm channelling POH
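For illustration, a minimal sketch of the idea (the PageVisits table and its columns are made up): numbering each user's page visits in time order, so the earliest visit per user is row 1.

select UserId,
       PageUrl,
       VisitTime,
       -- restart numbering for each user, ordered by visit time
       row_number() over (partition by UserId order by VisitTime) as VisitNumber
from PageVisits

You could then filter on VisitNumber = 1 in an outer query or CTE to pick one row per user.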
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Channelling POH? :P I am already using DENSE_RANK(), but I didn't think of PARTITION BY. Thank you!
|
|
|
|
|
Mycroft and Richard are completely correct in that you should never store duplicated data.
But there's a way to automate it: on SQL Server it's called a Computed Column[^] and if you're using Oracle it's a Virtual Column[^].
The best part is that you can put an index on them.
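For illustration, a minimal sketch on SQL Server (the Person table and its columns are made up): the FullName column is derived from the other two, so nothing is stored twice by hand, and it can still be indexed.

-- store the parts once; derive the whole (hypothetical table)
create table Person
(
    FirstName nvarchar(50) not null,
    LastName  nvarchar(50) not null,
    FullName  as (FirstName + N' ' + LastName) persisted
);

-- a persisted computed column can be indexed like any other column
create index IX_Person_FullName on Person (FullName);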
|
|
|
|
|
Haha, I realize that they are correct. And it is not going to store duplicate data; it will always change. You see, this is to analyse user actions and the time they spend lingering on certain pages. I also know about computed columns; I am a bit more intermediate with SQL. But thanks anyway!
|
|
|
|
|
Hi,
I want to know if there is a simple and efficient way to select the value from the row with the minimum creation time for the given parameters.
Let us take a sample table named Patient
-----------------------------------------------------------
PatientId   Observation   Time                  Value
-----------------------------------------------------------
1           Temp          2014-02-19 03:55:00   35
1           Temp          2014-02-19 03:45:00   37
1           Weight        2014-02-19 03:40:00   60
If I am given the PatientId and Observation, I need to retrieve the Value field from the row with the minimum creation time.
For patient id 1 and observation Temp this would be the Value 37,
since it has 2014-02-19 03:45:00 as the minimum creation time.
|
|
|
|
|
This should work:
select top 1 Value
from Patient
where PatientId = 1
  and Observation = 'Temp'
  and Time in
  (
      select min(Time)
      from Patient
      where PatientId = 1
        and Observation = 'Temp'
  )
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Thanks. That seems simple. If I have numerous entries for the specific patient id and observation, will the use of TOP be efficient?
|
|
|
|
|
TOP ensures that only one row is returned.
You can remove the top 1; however, if more than one row is returned, you will have to make a choice at some point if the values are different.
Simply replace the top 1 with distinct if you know there will only be one value returned.
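For what it's worth, a simpler alternative sketch (same Patient table as above) that avoids the subquery entirely: sort by Time and take the first row. With an index on (PatientId, Observation, Time) this becomes a single seek, so TOP stays cheap even with numerous entries.

select top 1 Value
from Patient
where PatientId = 1
  and Observation = 'Temp'
order by Time asc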
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
So... it seemed like a good idea at the time. I have a C# application that is distributed among multiple users (about a dozen). I set each user up with their own database (MS SQL 2005) so there is no interaction between users, and it works great.
Now, I am making a change to the program which requires a change to the database. In the past, with just a few users, I simply went into each database and altered the tables one at a time... it takes a few minutes for each one. But as the number of users grows and I keep adding features to the application more frequently... well, you can see where this is going.
My question is: what type of command can I use to alter the databases (a script of some kind?) to add or remove columns from tables, add tables, etc., so that I can alter multiple identical database structures at the same time while maintaining the data in them? Although I have been making databases for my applications for about 15 years, this is an area that I have never had to deal with; something I would assume would normally be the job of a dedicated DBA. If the applications end up being sold on a larger scale (and it is heading that way), then I would hire such a person immediately.
I considered having everyone in a single large database at the start and simply using proper queries to pull the data, but I decided that the sheer size and loss of performance would not make it as nice as it is, especially if I were to end up with several hundred users. There are about 30 tables in the database. I never really understood the limits and capabilities of these databases; only to say I don't really trust them to do the job when they get too large (not very scientific; more of a gut feeling). Am I way underestimating the ability of the database?
Until then, can you give me a lead-in on how to do this? I love it here at Code Project and I thank you for your time.... Pat
|
|
|
|
|
Whew, lots of questions here.
Let me suggest one item.
You should create a script that methodically alters the necessary tables, columns, etc., and use that script to update your various clients.
Here is a reference to the ALTER TABLE command ...
http://msdn.microsoft.com/en-us/library/ms190273.aspx[^]
There are tools out there which compare 2 schemas and generate scripts which make the necessary changes. The idea is that you point it at database1 ver 1.0 and at database2 ver 2.0 and the tool will detect that there are new tables, new columns, etc.
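To make the script idea concrete, here is a minimal sketch of such an upgrade script (the Orders table and the ShippedDate column are hypothetical). Guarding the ALTER with an existence check makes the script safe to run more than once against the same client database:

-- add the new column only if it is not already there (hypothetical names)
if not exists
(
    select 1
    from INFORMATION_SCHEMA.COLUMNS
    where TABLE_NAME = 'Orders'
      and COLUMN_NAME = 'ShippedDate'
)
begin
    alter table Orders add ShippedDate datetime null;
end

You can then run the same script against every client database as part of the program update, without worrying about which ones have already been upgraded.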
|
|
|
|
|
Thank you, David. I appreciate your suggestion, and I have found a slightly better link (solution) than Microsoft's to describe the ALTER command's use. I will practice with it on a sample database and then include that script as part of the program update. If you get this and you specifically know of some better comparative software for database modification, please let me know, and THANKS again....
PS: sorry about all the questions. I am methodical and just want to get it right the first time if I can. I appreciate your response! Pat
|
|
|
|
|
I am wondering about your concerns about consolidating to a single large database. What are the largest row counts across your 30 tables for the biggest user? Database consolidation would seem to be a very attractive option, unless the numbers are extremely large. I suspect you may actually be losing performance by having separate databases. A DB instance has a considerable base overhead, and you are decreasing the available resources by that amount for each user you have. Further, the simplification in administration would be considerable.
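If it helps, a quick sketch for eyeballing per-table row counts on SQL Server (run it in each user database; the counts from sys.partitions are approximate, but fine for sizing):

select t.name as TableName,
       sum(p.rows) as ApproxRows
from sys.tables t
join sys.partitions p
    on p.object_id = t.object_id
where p.index_id in (0, 1)  -- heap or clustered index only, to avoid double-counting
group by t.name
order by ApproxRows desc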
Regards
Nicholas Swandel
|
|
|
|
|
Thank you for your thoughts, Nicholas. Truthfully, I am confused due to my lack of experience with "big" databases. Big is a relative term. What is big?
Perhaps some numbers might help. Let's say that there is a table called 'Orders' and it has 30 columns. Now let's say that it creates about 10 new rows every day per client. It must be able to retrieve 2 years' worth of data in this table before it can be purged, or about 500 days. So that would be 5,000 rows per client. Now, if it was a single database being shared by potentially 1,000 users, that would make 5 million rows with 30 columns each for this table. Assuming that about 8 of the tables have this same amount of data, and the others are much smaller, the question is: is this big? The problem is that I do not know. It seems big to me, but then again, I never understood how a major search engine can look through billions of entries in a matter of seconds either.
To your statement that separate databases decrease performance, I would disagree with that in its entirety. On the other hand, I totally agree that administration, which is what triggered the request for information in the first place, is definitely the issue of concern for me with individual databases. I suppose I could write an 'update' script to modify the database each time I update the program (a possible solution).
I am at the point that I must go one way or the other. You raise some excellent points. Looking it over, perhaps a hybrid choice is possible. Nothing says I cannot have 10 databases instead of 1,000, which would limit each database to 100 users and decrease the data stored in each by 90 percent. Perhaps? The question is... is that enough? At how many users do I stop and make another database: 10, 50, 100?
I will keep researching and considering expertise from developers like yourself, for which I am grateful. If you have any follow-up thoughts, or can shed some light for me on the numbers game, please feel free to write me either here or privately. I appreciate your time and expertise. Thank you again. Best regards, Pat
|
|
|
|
|
5 million is not very big. Typically tables are indexed to make retrieval quick. Further, there are other DB features, such as partitioning, that would allow the table to appear as one large table while the DB actually stores the table and its indexes in separate partitions based on some attribute; likely, in your case, a user id or, more likely, a set of user ids. Retrieval would then only look at the index and table partitions for the individual user or set of users.
Keep an open mind; I think your strong view that separate databases increase performance is misplaced. We have done an exercise of consolidating 50+ databases, each with 30,000+ tables and tens of millions of rows in some tables. Same hardware, and the performance gain was about 3 times, with the administration effort cut 3-fold. The memory and CPU requirements just to start a single DB are considerable; consolidating allows these resources to be better utilized and boosts performance.
There is also the idea that each user could be contained in a schema within a single DB. This is a half measure though: while you would gain performance, administration would still be high.
I would encourage you to try a proof of concept with a single table on 10-30 DBs, shut down those DBs, and compare it to the combined version of the table on one DB with all the resources from the 10-30 DBs allocated to the single DB. The single DB is typically dramatically faster.
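For illustration, a minimal sketch of table partitioning on SQL Server (the function, scheme, and table names are made up, and real boundaries would depend on your user id ranges; note that partitioning is an Enterprise-edition feature on older SQL Server versions):

-- split rows into partitions by user id range (hypothetical boundaries)
create partition function pfUserRange (int)
as range left for values (100, 200, 300);

-- map every partition to a filegroup (all to PRIMARY here for simplicity)
create partition scheme psUserRange
as partition pfUserRange all to ([PRIMARY]);

-- the table looks like one table, but is stored one partition per range
create table Orders
(
    UserId    int      not null,
    OrderDate datetime not null
) on psUserRange (UserId);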
|
|
|
|
|
Hello All,
I am performing an update statement on about 100,000 rows.
Just before my update statement I had "begin tran". There was an issue with the update statement, and it errored out. Do I have to issue a "rollback" statement to make sure there is not a partial update, or does using begin tran make sure a partial update does not occur?
Thanking you!
|
|
|
|
|
Just restore from the backup of the rows that you took before you ran the update statement.
You did make a backup of the rows.. didn't you?
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
This question is more for my future understanding.
Do I have to issue a rollback statement? Or, since the update is part of a begin tran and the update failed, would nothing be updated, or could it result in a partial update?
Thanks.
|
|
|
|
|
Those rows are probably going to be locked. When you drop the connection, a rollback will automatically take place, as long as you did not issue a commit in your statement.
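To make the rollback explicit rather than relying on the dropped connection, a common pattern on SQL Server 2005 and later is TRY/CATCH around the transaction (a sketch; the update statement here is a stand-in for yours):

begin try
    begin tran;
    -- stand-in for your 100,000-row update
    update SomeTable set SomeColumn = 0 where SomeFilter = 1;
    commit tran;
end try
begin catch
    -- any runtime error lands here; undo whatever the update did
    if @@trancount > 0
        rollback tran;
end catch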
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Nope, I did not issue a commit. I usually have it commented out and, if all goes well, highlight the commit and execute it!
Thanks a bunch!
|
|
|
|
|
Glad it helps.
For what it's worth, and I am sure others will disagree with me: don't use the transaction begin/commit/rollback system for admin work on the database.
Write a select query to test the update, create a backup table of the rows you are about to update, run the update and check the changes.
If there is a problem, restore the rows using the backup table you created.
Using the transaction system to get you out of trouble is bad practice and will one day land you in trouble.
If you use your current system you are going to lock the rows, and there is a real danger that you will forget the commit or you will accidentally run a commit when you did not mean to.
How do I know this?
I used to do what you are now doing, years ago, and through a considerable experience of embarrassment learnt to use the method I illustrate above.
In summary:
Don't run an update without testing it and knowing what the results will be.
Take a backup of the rows being updated, to restore the rows in case something goes wrong; which it won't, as you fully tested the update with a select statement first.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Creating a backup table every time an update is executed does not make sense. The database ends up with all these _bak tables and begins to get messy really soon.
|
|
|
|
|
I don't think you read my post fully.
Before the update, run a query like this:
select * into temp_20140219 from cash_withdrawals where userid = 1234
You then run your update:
update cash_withdrawals set withdrawals = -1000000 where userid = 1234
You then check that the update worked correctly:
select * from cash_withdrawals where userid = 1234
If everything looks correct, you then drop the backup table:
drop table temp_20140219
If there is a problem, you restore the data:
delete from cash_withdrawals where userid = 1234
insert into cash_withdrawals
select * from temp_20140219
If you carry on with things the way you currently do them, you will get burnt; maybe not this week, but it will happen eventually...
[EDIT]SQL syntax error corrected[/EDIT]
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
GuyThiebaut wrote: Using the transaction system to get you out of trouble is bad practice and will one day land you in trouble. Transactions are great if you need multiple statements to work or fail as if they are one single statement; and yes, one should always take a five-second pause to reflect before doing a commit. To make a short story long...
We do a lot of things on autopilot. I've seen other people drive cars and look around when turning into a bend. They all peek in a similar fashion, starting at the left shoulder, checking each angle, unto the far right. That routine has been drilled in to protect one from their own autopilot. Check each angle, then turn.
Programmers get the same tendencies, and most of us remember each crash and the tendency; still fanatically pressing Ctrl-Shift-S in the days of autorecover? Selecting your entire post and copying it to the clipboard in case of a time-out? We're very familiar with crashes, but there are few resources that explain where there be dragons.
So, before committing: check whether you got the correct server (left shoulder), correct database (left screen), correct credentials (mirror), explain to an imaginary friend what your query should do (look ahead), then explain what each statement actually does (back mirror), then find someone to blame when it goes wrong (right mirror).
Now you can commit. Just don't do it without the five-second delay; it works as well as Gmail's "undo send" button. It'll save you often, but not often enough to live without backups.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
begin tran
-- insert, update, or delete query
commit tran   -- if the insert/update/delete was correct
-- or
rollback tran -- if the insert/update/delete was incorrect
|
|
|
|
|
Have a read of my posts: I am saying that, yes, this can be done; however, there are too many risks involved for me to consider it a suitable pattern for administration work on a database.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|