|
I got your point, sir. I know I am an inexperienced newbie; this is the first time I am doing this. It would be of great help if you could help me with this.
I am using MySQL Server.
|
|
|
|
|
Member 11840363 wrote: It would be of great help if you can help me with this.
I did.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
10 is TWO in binary.
Can you please help me frame the correct query? Please.
|
|
|
|
|
Did you look up how to use IFNULL() or COALESCE()?
It's very simple:
SELECT COALESCE(field1, 0)
or (since you are on MySQL, where the two-argument form of SQL Server's ISNULL is called IFNULL)
SELECT IFNULL(field1, 0)
If field1 is NULL you will get 0 instead. So, if you are adding the values up, the NULL rows contribute 0 rather than turning the whole sum NULL.
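The difference COALESCE() makes when adding nullable columns can be seen with a quick sketch (using SQLite from Python here; MySQL's COALESCE() behaves the same way, and field1/field2 are hypothetical column names):

```python
import sqlite3

# In-memory database with two nullable columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 INTEGER, field2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(10, 5), (None, 5), (20, None)])

# Adding a NULL directly makes the whole expression NULL for that row.
raw = conn.execute("SELECT field1 + field2 FROM t").fetchall()

# COALESCE substitutes 0 for NULL, so every row produces a number.
fixed = conn.execute(
    "SELECT COALESCE(field1, 0) + COALESCE(field2, 0) FROM t").fetchall()

print(raw)    # [(15,), (None,), (None,)]
print(fixed)  # [(15,), (5,), (20,)]
```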
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
We're looking to get away from the high cost of ownership and licensing fees for transaction-based processing. We've looked at a few providers, but to be honest we don't want to switch to open source.
|
|
|
|
|
There is a Postgres-based product called EnterpriseDB you may want to look into; it has a lot of compatibility with Oracle's PL/SQL, which may help reduce your conversion costs. They charge for support, but it should be less than what you would pay Oracle.
|
|
|
|
|
|
William Ivanski wrote: We are looking forward to hear feedback
My feedback is that this is not a question.
There's a "Free Tools" section where you can point out free tools. It's in the "Product Lifecycle" category.
-- taking a peek at the website;
William Ivanski wrote: OmniDB is cross-platform
Where is the download for Ubuntu?
...the bottom row of "features" does not look very appealing: having tabs in an editor is not something to brag about; it is like highlighting that your car has wheels.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Hi, Eddy! I really appreciate your feedback.
Eddy Vluggen wrote: There's a "Free Tools" section where you can point out free tools. It's in the "Product Lifecycle" category.
I'll post that, thanks for the advice.
Eddy Vluggen wrote: having tabs in an editor is not something to brag about
It may sound crazy, but not all database management tools have tabs in their editors. Note that OmniDB is a web-based database management tool with tabs in its editor.
Eddy Vluggen wrote: Where is the download for Ubuntu?
OmniDB is a web application, so it needs to be served by a webserver. For that purpose, you can use Mono XSP (which runs on any Linux distribution as well as on Mac OS X) or Cassini (Windows).
|
|
|
|
|
William Ivanski wrote: It may sound crazy, but not all database management tool have tabs in their editors.
Just as there still exist text editors without tabs.
William Ivanski wrote: Note that OmniDB is a web database management tool with tabs in its editor.
Nope, still doesn't work for me. The TabControl is over 16 years old, and I haven't seen many products that tout their MDI interface as a core feature.
William Ivanski wrote: OmniDB is a web application, so it needs to be served by a webserver.
Wouldn't that make the .NET webserver the platform?
I'm using SQL Management Studio at work, which is also free. How would you compare the two?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
There are plenty of cross-platform webservers that run ASP.NET applications.
Also, SQL Management Studio is a desktop application that manages SQL Server databases only; the two applications have different purposes.
|
|
|
|
|
I want to create a database using the C++ language. What am I supposed to do before starting?
|
|
|
|
|
Read the documentation and look for examples and tutorials.
In managed C++, you'd open a connection, issue a command, and read the results.
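The open/issue/read flow looks the same in any language. Here is a minimal sketch of the pattern using Python's built-in sqlite3 module, purely to illustrate the three steps (in managed C++ you would use a connection, command, and data reader from ADO.NET in the same way):

```python
import sqlite3

# 1. Open a connection (here to an in-memory SQLite database).
conn = sqlite3.connect(":memory:")

# 2. Issue commands: create a table and insert a row.
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.execute("INSERT INTO people VALUES (?, ?)", ("Alice", 30))

# 3. Read the results back.
rows = conn.execute("SELECT name, age FROM people").fetchall()
print(rows)  # [('Alice', 30)]

conn.close()
```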
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Learn C++ and database concepts. You are taking on a huge task.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
Hi,
I have a table which has many columns, and two of those columns hold the Image datatype. When we update the table contents, CDC captures the change fine, but it is not capturing the before value for those columns, although it is capturing the after value.
I am not sure whether it is happening because of Entity Framework, because of CDC, or because my team is implementing the image update in a wrong way.
Any kind of help (a code snippet, a link, or even a suggestion) helps me a lot. Thanks in advance, buddies.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
By default, the maximum size that can be added to a captured column in a single INSERT, UPDATE, WRITETEXT, or UPDATETEXT statement is 65,536 bytes or 64 KB. What is the size of your blob? Does it work with nvarchar(max)?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
So my program is almost complete, and I'm trying to enable migrations in my project.
I ran the package command Enable-Migrations -EnableAutomaticMigrations.
Here's the part I don't get.
I wrote an initializer that creates the database and seeds it with data, which works fine.
Here I call the class I made, shown below, in the first class:
public indigoDBContext() : base("DefaultConnection")
{
    Database.SetInitializer(new indigoIntializer());
}
Then I ran Enable-Migrations -EnableAutomaticMigrations in the Package Manager console.
And my initializer file changed: it added a class called Configuration. What am I supposed to do with this class? OK, so I changed the commented code to my seed code.
class indigoIntializer : CreateDatabaseIfNotExists<indigoDBContext>
{
    protected override void Seed(indigoDBContext context)
    {
        base.Seed(context);
        siteAdministrators.seed(context);
        countries.seed(context);
        states_Provinces.seed(context);
        salesTax.seed(context);
        avatars.seed(context);
        themes.seed(context);
        paymentGateways.seed(context);
        paymentBrands.seed(context);
    }
}
class Configuration : DbMigrationsConfiguration<indigoDBContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
        ContextKey = "Indigo.DataAccessLayer.indigoDBContext";
    }

    protected override void Seed(Indigo.DataAccessLayer.indigoDBContext context)
    {
        base.Seed(context);
        // Seed the Website Administrator table
        siteAdministrators.seed(context);
        // Seed the Website Countries and States
        countries.seed(context);
        states_Provinces.seed(context);
        salesTax.seed(context);
        avatars.seed(context);
        themes.seed(context);
        paymentGateways.seed(context);
        paymentBrands.seed(context);
    }
}
Then it added a file called Configuration.cs in a Migrations folder, so I added my seed code to that as well.
internal sealed class Configuration : DbMigrationsConfiguration<Indigo.DataAccessLayer.indigoDBContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(Indigo.DataAccessLayer.indigoDBContext context)
    {
        base.Seed(context);
        siteAdministrators.seed(context);
        countries.seed(context);
        states_Provinces.seed(context);
        salesTax.seed(context);
        avatars.seed(context);
        themes.seed(context);
        paymentGateways.seed(context);
        paymentBrands.seed(context);
    }
}
Questions:
Am I supposed to have the database created when I run the package commands?
Or do I create the database first and then run the package commands?
I don't understand what I'm supposed to initialize with: my original code or the new code. I tried the new code in the first example, but could not get the call right.
And then there's Add-Migration. I sort of get it: I write code to change the database and call it with Add-Migration "Function".
And with Update-Database, I always get the "multiple something" error.
I guess I need help fixing my plan or design for migrations.
And I need help fixing my post here; I just started writing in C# and I'm missing something when wrapping my code examples here.
|
|
|
|
|
Hi All,
I am using dynamic SQL to load values from audit tables into a flat table. I take all the tables, loop through their columns, insert the values into a temp table, and then move the values from that temp table into the flat table.
Below is one of the inserts into a temp table; there are 4 more dynamic SQL statements like it executed in that SP. But I have seen that these dynamic SQL statements take a lot of time to execute. Can anybody please tell me whether I can make the dynamic SQL execute as fast as normal SQL?
Any suggestion is appreciated; thanks in advance. I am searching as well.
SET @sql = N'INSERT INTO #TrackUpdatedColumnNamesFlatTable_ForTemp (TableName, ColumnName, PrimaryKeyColumnName, PrimaryKeyValue, ChangeSetId
    , TransactionId, TransactionType, CreatedBy, ModifiedBy, CreatedDate, ModifiedDate, IsSoftDelete, IsCmsActive, ColumnValueAfter)
SELECT ''' + @TblName + ''', ''' + @ColName + ''', ''' + @PKColName + '''
    , CASE WHEN ' + @PKColName + ' IS NULL THEN '''' ELSE ' + @PKColName + ' END, ChangeSetId, __$start_lsn
    , CASE WHEN __$operation=1 THEN ''Delete''
           WHEN __$operation=2 THEN ''Insert''
           WHEN __$operation=4 THEN ''Update''
      END TransactionType, ' + @CreatedBy + ', ' + @ModifiedBy + ', ' + @CreatedDate + ', ' + @ModifiedDate + '
    , ' + CAST(@IsDeleted AS varchar(max)) + '
    , ' + CAST(@IsCmsActive AS varchar(max)) + '
    , CASE WHEN __$operation=1 THEN NULL ELSE CAST(' + @ColName + ' AS nvarchar(max)) END
FROM [cdc].' + @cdcinstancename
    + ' WHERE CAST(sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS DATE) = CAST(''' + @DateToTrack + ''' AS DATE)
    AND sys.fn_cdc_is_bit_set(sys.fn_cdc_get_column_ordinal(''' + @captureinstance + ''', ''' + @ColName + '''), __$update_mask) = 1
    AND __$operation IN (1, 2, 4) OPTION (OPTIMIZE FOR UNKNOWN)';
EXEC sp_executesql @sql, N'@IsColumnModified int OUT', @IsColumnModified OUT;
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Try reducing the number of CASTs in the query. For example, declare @DateToTrack with type date to avoid
CAST(''' + @DateToTrack + ''' AS DATE)
Similarly, if ColumnValueAfter is declared as nvarchar(max) and @ColName refers to an nvarchar column, I don't think you need the
CAST(' + @ColName + ' AS nvarchar(max))
I also doubt very much that you need to loop over the columns in this way, but I'd need to see some sample table schemas. And I'm not sure what you mean by a "flat table".
|
|
|
|
|
Yeah, I did, man; thank you.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Maybe you should try OPTION (RECOMPILE), so that a fresh execution plan is generated every time.
|
|
|
|
|
Hi, I have written a stored procedure which fills data into a flat table, but it also needs to check whether a particular row is already there, and insert the record only if it is not, to avoid duplicate values.
I am making this check against the same table that I am inserting into. I am not sure what went wrong, but after adding this check the SP's execution time has drastically increased from 4 minutes to 20 minutes. Can anybody please help me improve the performance of the SP in this situation?
Here is the check that I am doing on the table. All the columns that I am checking against are required; I added this check to avoid duplicate data in the table.
IF (@ChangeSetId IS NOT NULL)
BEGIN
IF NOT EXISTS (select top 1 * from TrackUpdatedColumnNamesFlatTable where
ChangeSetId=@ChangeSetId AND TransactionType=@TransactionType and TableName=@TableName
AND ColumnName=@ColumnName and PrimaryKeyColumnName=@PrimaryKeyColumnName and
PrimaryKeyValue=@PrimaryKeyValue)
begin
INSERT INTO TrackUpdatedColumnNamesFlatTable
(
ApplicationId
,ApplicationComponentId
,ApplicationName
,ApplicationFriendlyName
,SubApplicationName
,SubApplicationFriendlyName
,UserId
,SamAccountName
,FullName
,DefaultDisplayValue
,ChangeSetId
,TransactionType
,TableName
,ColumnName
,ColumnNameForDisplayValue
,FriendlyColumnName
,ColumnValueBefore
,ColumnDisplayValueBefore
,ColumnValueAfter
,ColumnDisplayValueAfter
,PrimaryKeyColumnName
,PrimaryKeyValue
,ImageBefore
,ImageAfter
,ChangedBy
,ChangedDate
,IsSoftDelete
)
select
@ApplicationId,
@ApplicationComponentId,
@ApplicationName,
@ApplicationFriendlyName,
@SubApplicationName,
@SubApplicationFriendlyName,
@UserId,
@SamAccountName,
@FullName,
@DefaultDisplayValue,
@ChangeSetId,
TransactionType,
TableName,
ColumnName,
@ColumnNameForDisplayValue,
@FriendlyColumnName,
ColumnValueBefore,
@ColumnDisplayValueBefore,
ColumnValueAfter,
@ColumnDisplayValueAfter,
PrimaryKeyColumnName,
PrimaryKeyValue,
ImageBefore,
ImageAfter,
@FullName,
@AuditDate,
IsSoftDelete
from #TrackUpdatedColumnNamesFlatTable where ID=@MinId
end
END
Unfortunately the SP is using dynamic SQL, so anything that increases the performance of the application helps a lot.
Any help (a suggestion, a code snippet, or a link) is appreciated. Thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
modified 18-Apr-16 20:32pm.
|
|
|
|
|
You should be able to convert this to a single insert query with a correlated WHERE NOT EXISTS subquery, removing the loop entirely.
Something like this:
INSERT INTO destination_table (
id
,data_col_1
,data_col_2
)
SELECT id
,data_col_1
,data_col_2
FROM #tempdata TD
WHERE NOT EXISTS (
SELECT 1
FROM destination_table
WHERE id = TD.id
)
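The same set-based dedup insert can be exercised end to end. Here is a sketch using SQLite from Python; the table and column names match the hypothetical ones in the snippet above (SQLite has no `#temp` tables, so a plain `tempdata` table stands in for the temp table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE destination_table (id INTEGER, data_col_1 TEXT, data_col_2 TEXT)")
conn.execute("CREATE TABLE tempdata (id INTEGER, data_col_1 TEXT, data_col_2 TEXT)")

# One row already present in the destination, two candidates in the staging table.
conn.execute("INSERT INTO destination_table VALUES (1, 'a', 'b')")
conn.executemany("INSERT INTO tempdata VALUES (?, ?, ?)",
                 [(1, 'a', 'b'), (2, 'c', 'd')])

# Single set-based insert: only rows whose id is not yet present are copied.
conn.execute("""
    INSERT INTO destination_table (id, data_col_1, data_col_2)
    SELECT id, data_col_1, data_col_2
    FROM tempdata TD
    WHERE NOT EXISTS (
        SELECT 1 FROM destination_table WHERE id = TD.id
    )
""")

rows = conn.execute("SELECT id FROM destination_table ORDER BY id").fetchall()
print(rows)  # [(1,), (2,)] - the duplicate id 1 was skipped
```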
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|
Hi all.
I have a table in a Microsoft SQL database like this:
date, UserID, number_of_product, number_of_sale
Every day each user sells a number of products.
Now I want to count, for each user, each product, and each day, the total number of sales over a consecutive number of days (for example, 7 days).
Thanks
|
|
|
|
|
|