Well, aside from asking this exact question again (OK, you might ask it again to assert that VN and/or K5 is/are NOT robots, I guess), search for keywords from your program by typing them into the "search" box here at CP. The Discussions are titled and the Q&A is tagged, so keep those two things in mind as well.
C++: can someone point me in the right direction for this problem? There is an undirected star graph consisting of n nodes labeled 1 to n. A star graph is a graph in which there is one central node and exactly n - 1 edges connecting the central node to every other node.
You are given a 2D integer array edges where each edge edges[i] = [ui, vi] indicates that there is an edge between nodes ui and vi. Return the center of the given star graph.
Input: edges =
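For anyone looking for the direction asked about above: since every edge of a star graph contains the center, the node shared by the first two edges must be it. A minimal sketch of that observation (the function name is illustrative, not from any particular solution):

```cpp
#include <vector>

// The center appears in every edge of a star graph, so it must be the
// node common to the first two edges. Assumes at least two edges
// (i.e. n >= 3).
int findCenter(const std::vector<std::vector<int>>& edges) {
    int u = edges[0][0], v = edges[0][1];
    // Whichever endpoint of the first edge also occurs in the second
    // edge is the center.
    return (u == edges[1][0] || u == edges[1][1]) ? u : v;
}
```

This runs in constant time, since no edge beyond the second ever needs to be inspected.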
Ok, so it appears the rules of this game are:
- If the number is a perfect square, take its square root.
- If the number is divisible by 3, divide it by 3.
- If the number is divisible by 2, divide it by 2.
- Otherwise, subtract 1.
So, given an input of 32, we'd go 32 -> 16 -> 4 -> 2 -> 1 -> 0 for 5 steps.
and given 75, we'd go 75 -> 25 -> 5 -> 4 -> 2 -> 1 -> 0 for 6 steps.
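The rules above can be sketched as a single loop; this is one possible reading (it assumes the rules are tried in the listed order, and that 1 falls through to the subtract-1 case so the loop terminates, which matches both worked examples):

```cpp
#include <cmath>

// Count the steps to reduce num to 0 under the rules described above:
// perfect square -> square root, else /3 if divisible, else /2 if
// divisible, else subtract 1. The num > 1 guard keeps 1 from looping
// on its own square root.
int countSteps(int num) {
    int steps = 0;
    while (num > 0) {
        int r = (int)std::lround(std::sqrt((double)num));
        if (num > 1 && r * r == num) num = r;   // perfect square
        else if (num % 3 == 0)       num /= 3;  // divisible by 3
        else if (num % 2 == 0)       num /= 2;  // divisible by 2
        else                         num -= 1;  // otherwise
        ++steps;
    }
    return steps;
}
```

With this ordering, countSteps(32) gives 5 and countSteps(75) gives 6, matching the two examples.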
Now, what problem are you having? A quick glance over the code didn't show any obvious errors.
Your checkSquareNumber() and squareNumber() functions are quite inefficient. You'd be better off googling a better square root algorithm (or just using the library sqrt() function). And since they do the same work, they could be combined into one function (return -1 if the number is not a perfect square), so you can replace the two calls with one.
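One way to combine the check and the computation into a single call, using the library sqrt() as suggested (the function name here is just illustrative):

```cpp
#include <cmath>

// Returns the integer square root of num if num is a perfect square,
// or -1 if it is not. One call replaces the separate
// checkSquareNumber() + squareNumber() pair.
int squareRootIfPerfect(int num) {
    int root = (int)std::lround(std::sqrt((double)num));
    return (root * root == num) ? root : -1;
}
```

Rounding the double result and squaring it back avoids the off-by-one errors that truncating std::sqrt can produce near exact squares.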
I am having a problem with my code, where the first condition checks whether the original number is a perfect square. Can you point out where my error is, or point me in a better direction for this problem? I am trying to count the minimum number of steps to convert a number to 0.
If you are not happy with the output produced by your code, then you should post here an example of input data (i.e. the starting value of num), together with the expected result. Otherwise, how could we possibly help?
"What have you got in your head, Signor di Ceprano?"
Fair enough, but that is only the very beginning. A precursor to parsing.
Tokenization is chopping the input text into atomic pieces, with no concern for how they are put together. All the tokenizer knows is how to delimit a symbol (token): a word token starts with an alphabetic character and continues through alphanumerics, but ends at the first non-alphanumeric; the tokenizer doesn't know or care whether the word is a variable name, a reserved word, or something else. If it finds a digit, it devours digits. If the first non-digit is a math operator or a space, it has found an integer token. If it is a decimal point or an E (and the language permits exponents in literals), the token is a (yet incomplete) float value, and so on. The only language-specific thing the tokenizer needs to know is how to identify the end of a token. Once it has chopped the source code into pieces, its job is done.
Parsing is identifying the structures formed by the tokens: blocks, loops, conditional statements, etc.
The borderline isn't necessarily razor sharp. Some would say that when the tokenizer finds an integer literal token, it might as well take on the task of converting it to a binary numeric value, to be handed to the parser. That might be unsuitable in untyped languages where a numeric literal may be treated as a string. After identifying a word symbol, the tokenizer might search a table of reserved words, possibly delivering it to the parser as a reserved-word token. Again, in some languages this is unsuitable (and lots of people would say it goes far beyond a tokenizer's responsibility).
If you want to analyze some input, doing an initial tokenization before starting the actual parsing is a good idea. Most compilers do that.
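The delimiting rules described above (words start with a letter and run through alphanumerics, digit runs become integers, everything else is a one-character symbol) can be sketched in a few lines; the token kinds and names here are illustrative, not from any real lexer:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Minimal tokenizer sketch: classifies each token only by how it is
// delimited, with no knowledge of reserved words or grammar.
struct Token {
    enum Kind { Word, Integer, Symbol } kind;
    std::string text;
};

std::vector<Token> tokenize(const std::string& src) {
    std::vector<Token> tokens;
    std::size_t i = 0;
    while (i < src.size()) {
        unsigned char c = src[i];
        if (std::isspace(c)) { ++i; continue; }
        std::size_t start = i;
        if (std::isalpha(c)) {
            // Word: alphabetic start, continues through alphanumerics.
            while (i < src.size() && std::isalnum((unsigned char)src[i])) ++i;
            tokens.push_back({Token::Word, src.substr(start, i - start)});
        } else if (std::isdigit(c)) {
            // Integer: devour digits until the first non-digit.
            while (i < src.size() && std::isdigit((unsigned char)src[i])) ++i;
            tokens.push_back({Token::Integer, src.substr(start, i - start)});
        } else {
            // Anything else is a single-character symbol.
            ++i;
            tokens.push_back({Token::Symbol, src.substr(start, 1)});
        }
    }
    return tokens;
}
```

For example, tokenize("x1 = 42;") produces the word "x1", the symbol "=", the integer "42", and the symbol ";" — the parser then decides what those tokens mean.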
One of my fellow students, in his first job after graduation, was set to identify bacteria in microscope photos. That was done by parsing: they had BNF grammars for different kinds of bacteria, and the image information was parsed according to the various grammars. If the number of parsing errors was too high, the verdict was 'Nope - it surely isn't that kind of bacteria, let me try another one!' Those grammars with a low error count were handed over to a human expert for confirmation, or possibly for making a choice between viable alternatives if two or more grammars gave a low error count. This mechanism took a lot of trivial work off the medical personnel, and the computer could scan far more images for possibly dangerous bacteria than there were human resources for. The university lecturer in the Compilers and Compilation course certainly hadn't prepared us for compiling bacteria!
Thank you very much for such an extensive reply.
Very unexpected, considering the other "clown contributions". I hope they, the other replies, are not an indicator of this site turning into social media...
I have started my coding, and it looks as if I have to parse out non-ASCII alphanumeric characters first.
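A pre-pass like that could look roughly like this, assuming "parse out" means dropping every byte outside the ASCII range before tokenizing (a sketch, not a definitive implementation):

```cpp
#include <string>

// Keep only ASCII bytes (values < 128), dropping anything outside
// that range so the tokenizer only ever sees plain ASCII input.
std::string stripNonAscii(const std::string& in) {
    std::string out;
    for (unsigned char c : in)
        if (c < 128) out += (char)c;
    return out;
}
```

Note this operates on raw bytes, so multi-byte UTF-8 sequences are removed wholesale; if accented characters should instead be transliterated (e.g. é -> e), a mapping table would be needed rather than a filter.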
I want to create some DLL files for specific calculations and import them into my C# project. Can those calculations (C++ code in DLL files) run in the C# application as fast as in a native C++ environment?
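For reference, the native side of such a DLL usually exports plain C-style functions so C# can call them via P/Invoke; a minimal sketch of the export (the macro and function names are illustrative, and __declspec(dllexport) applies only when building on Windows):

```cpp
// Export a calculation with C linkage so the name is not mangled and
// can be resolved from C# with [DllImport]. The #ifdef keeps the file
// portable: dllexport is a Windows-only attribute.
#if defined(_WIN32)
#define CALC_API extern "C" __declspec(dllexport)
#else
#define CALC_API extern "C"
#endif

CALC_API double AddNumbers(double a, double b) {
    return a + b;
}
```

The calculation itself runs at native speed; the measurable cost is the marshalling overhead at each managed-to-native call boundary, so it pays to make each exported call do a substantial chunk of work rather than crossing the boundary in a tight loop.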
I was reading this tutorial on how to use Java in your C++ project, and at one step it says to add the location of jvm.dll to PATH. Well, that is fine for development purposes, but not for a released project. So instead of that I tried the second part, adding it manually to the Debug/Release folder and removing the location from PATH, but unfortunately I'm getting the following error:
Error occurred during initialization of VM
Failed setting boot class path.
What am I doing wrong, and how can I fix the problem?
Yeah, that was the first thing I tried. I even tried its Release version (copying the .exe and the required .class file to another location), and that worked fine while the location was in PATH, but I got the same problem as soon as I removed it from there and added the DLL file.