Rajesh, do you know of any good implementations of BigNum that are not Gnu-based, i.e. that use MS tools like MASM? I Googled "multi-precision math" and went through all of the hits (there were 97); the results were all about BigNum, or proprietary for-sale products, or for Java, or were white papers. Nothing there about a PC-based MS implementation. I even searched the CP articles and found no hits.
I downloaded the GMP version and started looking at the ASM source, but without the GCC compiler and its tools, the code is not complete. It needs to be expanded by the Gnu M4 macros and assembled by GAS (the Gnu Assembler), and even then I don't know whether it produces anything like a .LST file that would indicate exactly which instructions execute at which locations to gain their reported speed by taking advantage of caching, etc. I really don't want to go this route; I want to stick with MS tools and don't even want the C++ front end, strictly MASM.
I do have an integer implementation of a multi-precision math library and was thinking of expanding it to a floating point version, so I was looking to see how it stacked up against "the best". As far as I could easily determine, the BigNum algorithms matched mine. Mine were home-brew - whatever worked fastest - while theirs were based on the experts like Knuth. I went to my library and cracked open Knuth, Vol. 2, for the first time to see what the expert had to say. Enlightening. I have also done a thorough read of the AMD specs on my Athlon and have used many of their suggestions.
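For reference, the classical schoolbook addition that Knuth covers (Vol. 2, Algorithm 4.3.1A) can be sketched in portable C++; the limb width and function name here are my own illustrative choices, and a MASM version would map the carry propagation directly onto the ADC instruction:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: add two little-endian limb vectors (least
// significant limb first), propagating the carry across limbs.
std::vector<std::uint32_t> mp_add(const std::vector<std::uint32_t>& a,
                                  const std::vector<std::uint32_t>& b) {
    std::vector<std::uint32_t> sum;
    std::uint64_t carry = 0;
    std::size_t n = a.size() > b.size() ? a.size() : b.size();
    for (std::size_t i = 0; i < n; ++i) {
        std::uint64_t t = carry;             // carry from previous limb
        if (i < a.size()) t += a[i];
        if (i < b.size()) t += b[i];
        sum.push_back(static_cast<std::uint32_t>(t)); // low 32 bits
        carry = t >> 32;                     // high bit becomes next carry
    }
    if (carry) sum.push_back(static_cast<std::uint32_t>(carry));
    return sum;
}
```

In assembly the 64-bit temporary disappears: ADC consumes the carry flag left by the previous ADD, which is one reason hand-tuned MASM loops beat compiled code here.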
I am not able to readily suggest something that might suit you. Further to that, I have not worked much on this front. I have forwarded your query to a few people who might possibly give a fruitful reply.
I'll write to you if I hear something from them.
It is a crappy thing, but it's life -- Carlo Pallini
A number with 1000000000 digits is the largest number? Cool,
that simplifies my life considerably!
Is there any data type in which I can store the 1-billion-digit number?
You can easily make your own type.
Assuming you only need to work with digits 0-9, you only need
four bits per digit. That means two digits will fit in a byte.
Write a class to wrap a 500,000,000-byte array and cross your fingers
that you are actually able to allocate that much in one chunk.
A backing store on the hard drive may help here - perhaps a memory-mapped
file. In the class, provide whatever methods you need to access the digits.
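A minimal sketch of such a wrapper, assuming an in-memory std::vector rather than a memory-mapped file, and with names of my own choosing; it packs two decimal digits per byte exactly as described:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: stores n decimal digits in n/2 bytes (rounded up),
// one digit per nibble. Digit i lives in the low nibble of byte i/2 when
// i is even, and in the high nibble when i is odd.
class PackedDigits {
public:
    explicit PackedDigits(std::size_t digits)
        : bytes_((digits + 1) / 2, 0), digits_(digits) {}

    unsigned get(std::size_t i) const {
        unsigned char b = bytes_[i / 2];
        return (i % 2 == 0) ? (b & 0x0F) : (b >> 4);
    }

    void set(std::size_t i, unsigned d) {
        unsigned char& b = bytes_[i / 2];
        if (i % 2 == 0)
            b = static_cast<unsigned char>((b & 0xF0) | (d & 0x0F));
        else
            b = static_cast<unsigned char>((b & 0x0F) | ((d & 0x0F) << 4));
    }

    std::size_t size() const { return digits_; }

private:
    std::vector<unsigned char> bytes_; // half a byte per digit
    std::size_t digits_;
};
```

For the billion-digit case you would swap the vector for a memory-mapped view of a file, but the nibble arithmetic stays the same.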
do you mean I have to replace BSTR by _bstr_t ? I tried it in the following way but now I get a compile error that tells me that the _bstr_t data type is not defined. Which header do I have to include in my C++ project ?
const char* str = "This is a test string...";
return SysAllocStringByteLen(str, static_cast<UINT>(strlen(str)));
It would be great if you could show me an example.
Maybe important: the DLL I created is a plain Win32 DLL, not an MFC one or similar.
In your client C++ application you can use _bstr_t as a wrapper for the BSTR return value. You know, BSTR stands for OLECHAR*, i.e. it is essentially a wide-character string, so you may also handle it directly in your client app.
With the VS6 compiler you need to include comdef.h to use the _bstr_t class.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke