Ok, so it appears the rules of this game are:
- If a number is a perfect square, take its square root
- If a number is divisible by 3, divide it by 3
- If a number is divisible by 2, divide it by 2
- Otherwise, subtract 1.
So, given an input of 32, we'd go 32 -> 16 -> 4 -> 2 -> 1 -> 0 for 5 steps.
And given 75, we'd go 75 -> 25 -> 5 -> 4 -> 2 -> 1 -> 0 for 6 steps.
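For concreteness, here is a minimal sketch of that rule as I read it (I haven't seen your code, so the names are just mine; note that 1 has to fall through to the subtract-1 case, because sqrt(1) == 1 would loop forever):

    #include <cmath>

    // True if n is a perfect square (n >= 0).
    bool isPerfectSquare(long n)
    {
        long r = static_cast<long>(std::sqrt(static_cast<double>(n)));
        return r * r == n;
    }

    // Counts the steps needed to reduce num to 0 with the rules above.
    int countSteps(long num)
    {
        int steps = 0;
        while (num > 0)
        {
            if (num > 1 && isPerfectSquare(num))   // skip 1, since sqrt(1) == 1
                num = static_cast<long>(std::sqrt(static_cast<double>(num)));
            else if (num % 3 == 0)
                num /= 3;
            else if (num % 2 == 0)
                num /= 2;
            else
                --num;
            ++steps;
        }
        return steps;
    }

With that, countSteps(32) gives 5 and countSteps(75) gives 6, matching the sequences above.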
Now, what problem are you having? A quick glance over the code didn't show any obvious errors.
Your checkSquareNumber() and squareNumber() functions are quite inefficient. You'd be better off googling a better square root algorithm (or just using the library sqrt() function). And since they are doing the same thing, they could be combined: return -1 if it is not a perfect square, and you can replace the two calls with one.
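As a sketch of what I mean by combining them (the function name is just illustrative):

    #include <cmath>

    // Returns the integer square root of n if n is a perfect square,
    // otherwise -1, so a single call both checks and computes.
    long perfectSquareRoot(long n)
    {
        if (n < 0) return -1;
        long r = static_cast<long>(std::sqrt(static_cast<double>(n)));
        // Correct for possible floating-point rounding near the boundary.
        while (r > 0 && r * r > n) --r;
        while ((r + 1) * (r + 1) <= n) ++r;
        return (r * r == n) ? r : -1;
    }

Then the step logic only needs one call: long r = perfectSquareRoot(num); if (r > 1) num = r; else fall through to the other rules.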
I am having a problem with my code, where the first condition checks if the original number is a perfect square. Can you point out where my error is, or point me in a better direction for this problem? I am trying to count the minimum number of steps to convert any number to 0.
If you are not happy with the output produced by your code, then you should post an example of input data here (i.e. the starting value of num), together with the expected result. Otherwise, how could we possibly help?
"In testa che avete, Signor di Ceprano?"
Fair enough, but that is only the very beginning. A precursor to parsing.
Tokenization is chopping the input text into atomic pieces, with no concern for how they are put together. All the tokenizer knows is how to delimit a symbol (token): that a word symbol starts with an alphabetic character, continues through alphanumerics, and ends at the first non-alphanumeric - the tokenizer doesn't know or care whether the word is a variable name, a reserved word or something else. If it finds a digit, it devours digits. If the first non-digit is a math operator or a space, it has found an integer token. If it is a decimal point or an E (and the language permits exponents in literals), the token is a (yet incomplete) float value, and so on. The only language-specific thing that the tokenizer needs to know is how to identify the end of a token. Once it has chopped the source code into pieces, its job is done.
Parsing is identifying the structures formed by the tokens: blocks, loops, conditional statements and so on.
The borderline isn't necessarily razor sharp. Some would say that when the tokenizer finds an integer literal token, it might as well take on the task of converting it to a binary numeric token value to be handed to the parser. That might be unsuitable in untyped languages where a numeric literal may be treated as a string. After identifying a word symbol, it might search a table of reserved words, possibly delivering it to the parser as a reserved-word token. Again, in some languages this is unsuitable (and lots of people would say it goes far beyond a tokenizer's responsibility).
If you want to analyze some input, doing an initial tokenization before starting the actual parsing is a good idea. Most compilers do that.
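To make that concrete, here is a bare-bones sketch of such a tokenizer (the token categories and names are my own choices, and it ignores floats, strings and comments):

    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <vector>

    // A token is the raw text plus a coarse category; the tokenizer does not
    // care whether a word is a keyword or a variable name.
    enum class TokKind { Word, Number, Symbol };
    struct Token { TokKind kind; std::string text; };

    std::vector<Token> tokenize(const std::string& src)
    {
        std::vector<Token> out;
        std::size_t i = 0;
        while (i < src.size())
        {
            unsigned char c = static_cast<unsigned char>(src[i]);
            if (std::isspace(c)) { ++i; continue; }
            std::size_t start = i;
            if (std::isalpha(c))
            {
                // Word: starts alphabetic, continues through alphanumerics.
                while (i < src.size() && std::isalnum(static_cast<unsigned char>(src[i]))) ++i;
                out.push_back({TokKind::Word, src.substr(start, i - start)});
            }
            else if (std::isdigit(c))
            {
                // Integer: devour digits until the first non-digit.
                while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i]))) ++i;
                out.push_back({TokKind::Number, src.substr(start, i - start)});
            }
            else
            {
                // Everything else becomes a one-character symbol token here.
                out.push_back({TokKind::Symbol, std::string(1, src[i])});
                ++i;
            }
        }
        return out;
    }

    // tokenize("x1 = 42 + y;") yields: Word "x1", Symbol "=", Number "42",
    // Symbol "+", Word "y", Symbol ";". A parser then decides what those
    // tokens mean when put together.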
One of my fellow students was, in his first job after graduation, set to identify bacteria in microscope photos. That was done by parsing: they had BNF grammars for different kinds of bacteria, and the image information was parsed according to the various grammars. If the number of parsing errors was too high, the verdict was 'Nope - it surely isn't that kind of bacteria, let me try another one!' Those grammars with a low error count were handed over to a human expert for confirmation, or possibly for a choice between viable alternatives if two or more grammars gave a low error count. This mechanism took a lot of trivial work off the medical personnel, and the computer could scan far more images for possibly dangerous bacteria than there were human resources to do. The university lecturer in the Compilers and Compilation course certainly hadn't prepared us for compiling bacteria!
I want to create some DLL files for specific calculations and import them into my C# project. Can those calculations (C++ code in DLL files) be done in the C# application as fast as in a native C++ environment?
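In general, yes: once execution is inside the DLL it runs as native machine code; the extra cost is the managed-to-native transition and argument marshalling on each call, which only matters for very small, very frequent calls. The usual pattern is to export plain C-style functions from the C++ DLL and call them from C# via P/Invoke. A minimal sketch of the C++ side (the names are placeholders, not from any real project):

    // mathlib.cpp - compiled into mathlib.dll (illustrative name)
    extern "C" __declspec(dllexport)
    double heavy_calculation(double x, int iterations)
    {
        double acc = x;
        for (int i = 0; i < iterations; ++i)
            acc = acc * 0.5 + 1.0;   // stand-in for the real work
        return acc;
    }

    // On the C# side this would typically be declared as
    //   [DllImport("mathlib.dll", CallingConvention = CallingConvention.Cdecl)]
    //   static extern double heavy_calculation(double x, int iterations);
    // Keep the exported interface coarse-grained (do a lot of work per call)
    // so the per-call marshalling overhead stays negligible.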
I was reading this tutorial on how to use Java in your C++ project, and at one step it says to add the location of jvm.dll to PATH. Well, that is fine for development purposes, but not for a released project. So instead of that I tried the second part, to add it manually to the Debug/Release folder and remove the location from PATH, but unfortunately I'm getting the following error:
Error occurred during initialization of VM
Failed setting boot class path.
What am I doing wrong, and how do I fix the problem?
Yeah, that was the first thing I tried. I even tried its Release version (copying the .exe and the required .class file to another location), and that was working fine as long as I had the location in PATH, but I got the same problem as soon as I removed it from there and added the DLL file.
You should not do this; Java uses other items in its run-time library. Any client wishing to run your application will need to install Java before they can use it. And it is quite possible that if you install the DLL yourself you will be breaching Oracle's licensing conditions.
I thought that was the problem, but then what is the solution? Having the user install Java isn't a problem, but even if it is installed, I still need to add something in Visual Studio so it knows where to look for jvm.dll, as is the case with <JDK-DIR>/include and <JDK-DIR>/include/win32, or else it will give me a "jvm.dll not found" error.
You have to read the documentation about what gets written, and where, when Java is installed.
Perhaps you will also need to check the registry to find out where the Java installer stores the path you need for your application to work properly.
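For illustration, here is a sketch of that approach combined with loading jvm.dll dynamically, so nothing needs to be on PATH. The registry key and value names below are the ones the Oracle/Sun JRE installer has traditionally written under HKLM\SOFTWARE\JavaSoft; verify them on your target machines, and treat the class path as a placeholder:

    #include <windows.h>
    #include <jni.h>
    #include <string>

    typedef jint (JNICALL *CreateJavaVM_t)(JavaVM**, void**, void*);

    int main()
    {
        // 1. Ask the registry which JRE is installed and where its jvm.dll is.
        char version[64] = {0};
        DWORD size = sizeof(version);
        if (RegGetValueA(HKEY_LOCAL_MACHINE,
                         "SOFTWARE\\JavaSoft\\Java Runtime Environment",
                         "CurrentVersion", RRF_RT_REG_SZ, nullptr, version, &size) != ERROR_SUCCESS)
            return 1;

        std::string subkey = std::string("SOFTWARE\\JavaSoft\\Java Runtime Environment\\") + version;
        char jvmPath[MAX_PATH] = {0};
        size = sizeof(jvmPath);
        if (RegGetValueA(HKEY_LOCAL_MACHINE, subkey.c_str(),
                         "RuntimeLib", RRF_RT_REG_SZ, nullptr, jvmPath, &size) != ERROR_SUCCESS)
            return 1;

        // 2. Load jvm.dll from wherever it is installed; no PATH entry needed.
        HMODULE jvmDll = LoadLibraryA(jvmPath);
        if (!jvmDll) return 1;
        CreateJavaVM_t createJavaVM =
            reinterpret_cast<CreateJavaVM_t>(GetProcAddress(jvmDll, "JNI_CreateJavaVM"));
        if (!createJavaVM) return 1;

        // 3. Create the VM, passing the application class path as an option.
        JavaVMOption options[1];
        options[0].optionString = const_cast<char*>("-Djava.class.path=.");  // placeholder
        JavaVMInitArgs vmArgs;
        vmArgs.version = JNI_VERSION_1_6;
        vmArgs.nOptions = 1;
        vmArgs.options = options;
        vmArgs.ignoreUnrecognized = JNI_FALSE;

        JavaVM* jvm = nullptr;
        JNIEnv* env = nullptr;
        if (createJavaVM(&jvm, reinterpret_cast<void**>(&env), &vmArgs) != JNI_OK)
            return 1;

        // ... env->FindClass(...) and method calls go here ...

        jvm->DestroyJavaVM();
        return 0;
    }

(RegGetValueA needs Advapi32.lib.) Done this way, the project no longer links against jvm.lib directly, so Visual Studio only needs the JDK include directories at compile time.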
If the customer correctly installs the Java runtime then it will set the PATH variable with the correct details. Your code should then run correctly. I have done a test on my system and that is all that is needed as far as I can tell.