
I'm trying to create/solve a puzzle in which you are given Tetris-like cages in which digits will be placed, but a cage cannot have two identical digits in the same row or the same column. The goal is to determine the number of possible doubles and triples, [doubles, triples], for a given cage.
For example, with the cage
[[0,1]
[1,1]]
in which 1's indicate cells in the cage, up to 1 double ([1,0]) is allowed.
The restrictions on the cage input are: a max width of 8, a max height of 3, and a max size (number of 1's) of 8.
Here are some more examples along with their possible outputs:
[[1,1],
[1,0],
[1,0]]
[[1,1],
[1,1]]
[[1,1],
[1,0],
[1,1]]
[[1,1,1],
[1,0,0],
[1,0,0]]
[[1,1,0],
[0,1,1]]
[[1,1,0],
[0,1,0],
[0,1,1]]
[[1,0,0],
[1,1,0],
[1,1,1]]
[[1,1,1],
[1,0,1],
[1,0,1]]
[[1,1,1],
[1,0,1],
[1,1,1]]
Please reach out with any questions. I appreciate the help figuring out how to do this programmatically!





It's far enough removed from reality that no one seems to care.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food





Figured it out. The key was recursion.
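In case it helps anyone else, here is a minimal recursive sketch under one reading of the problem (my interpretation, not necessarily the final solution): a "double" is a pair of cage cells in different rows and different columns (so they could legally hold the same digit), and a "triple" is three mutually non-attacking cells. The class and method names are made up.

```java
import java.util.ArrayList;
import java.util.List;

public class CageCounter {
    // Returns [doubles, triples] for a 0/1 cage grid, under the
    // interpretation described above.
    public static int[] countDoublesTriples(int[][] cage) {
        List<int[]> cells = new ArrayList<>();
        for (int r = 0; r < cage.length; r++)
            for (int c = 0; c < cage[r].length; c++)
                if (cage[r][c] == 1) cells.add(new int[]{r, c});
        int[] counts = new int[2]; // counts[0] = doubles, counts[1] = triples
        count(cells, 0, new ArrayList<>(), counts);
        return counts;
    }

    // Recursively extend a set of mutually non-attacking cells
    // (no two share a row or a column), counting sets of size 2 and 3.
    private static void count(List<int[]> cells, int start,
                              List<int[]> chosen, int[] counts) {
        if (chosen.size() == 2) counts[0]++;
        if (chosen.size() == 3) { counts[1]++; return; }
        for (int i = start; i < cells.size(); i++) {
            int[] cand = cells.get(i);
            boolean ok = true;
            for (int[] c : chosen)
                if (c[0] == cand[0] || c[1] == cand[1]) { ok = false; break; }
            if (ok) {
                chosen.add(cand);
                count(cells, i + 1, chosen, counts);
                chosen.remove(chosen.size() - 1);
            }
        }
    }

    public static void main(String[] args) {
        int[][] cage = {{0, 1}, {1, 1}};
        int[] dt = countDoublesTriples(cage);
        System.out.println(dt[0] + "," + dt[1]); // prints 1,0 for the L-tromino
    }
}
```

With the cage sizes given (at most 8 cells), this brute force is instant.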





Program1:
for (i = 0; i < m; i++) {
    for (j = 0; j < n; j++) {
        if (a == b) {
            arr[i][j] = 1;
        }
        else {
            arr[i][j] = 2;
        }
    }
}
for (i = 0; i < m; i++) {
    for (j = 0; j < n; j++) {
        rowSum[i] += arr[i][j];   // sum each row into a separate array
    }
}
for (i = 0; i < m; i++) {
    if (a == b) {
        p[i] = i;
        q[i] = i;
    }
}
Program2:
for (i = 0; i < m; i++) {
    sum = 0;
    for (j = 0; j < n; j++) {
        if (a == b) {
            arr[i][j] = 1;
            sum += 1;
        }
        else {
            arr[i][j] = 2;
        }
    }
    if (c == d) {
        rowSum[i] = sum;
        p[i] = i;
        q[i] = i;
    }
}
I tried to write a time-complexity equation.
For Program1:
m*n*c1 + m*n*c2 + m*c3
For Program2:
m*n*c1 + m*c2
Which one is better, and by how much? Could you please help with that?







You need to account for the number of for loops as well; the total iterations might be equivalent, but not the setting up and tearing down of each loop.
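As an illustration (a hypothetical instrumentation sketch, with the "do something" bodies replaced by a counter), you can count loop-body executions directly instead of timing, which makes the constant factors visible:

```java
// Count loop-body executions for the two programs above.
// Both are O(m*n), but Program1 makes two passes over the array
// plus a third loop, while Program2 does everything in one pass.
public class OpCount {
    public static long program1(int m, int n) {
        long ops = 0;
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) ops++;   // fill pass
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) ops++;   // row-sum pass
        for (int i = 0; i < m; i++) ops++;       // index pass
        return ops;                              // 2*m*n + m
    }

    public static long program2(int m, int n) {
        long ops = 0;
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) ops++;   // fill + sum in one pass
            ops++;                               // index step
        }
        return ops;                              // m*n + m
    }

    public static void main(String[] args) {
        System.out.println(program1(100, 100)); // prints 20100
        System.out.println(program2(100, 100)); // prints 10100
    }
}
```

Same O(m*n), but roughly half the work and one loop setup instead of three.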





I need a routine that will generate random names for files and folders. As long as they are valid file/folder names, it doesn't matter what they are.
I have been thinking of several different ways to do this, but I'm wondering what the community thinks is the best way.
I thought about the Windows API that generates a unique temporary file name, but technically that file is unique only within the temp directory.
I thought about generating Guids, but that seems to be frying an egg with a nuclear reactor.
What do you think?
The difficult we do right away...
...the impossible takes slightly longer.





Richard Andrew x64 wrote: I thought about generating Guids, but that seems to be frying an egg with a nuclear reactor.
What do you think?
I think you are wrong. Using Guids is simple enough and has nothing to do with something like "a nuclear reactor".
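For example, a minimal Java sketch (randomFileName is a made-up helper name):

```java
import java.util.UUID;

public class RandomName {
    // A UUID string contains only hex digits and hyphens, which are
    // valid file-name characters on both Windows and Unix.
    public static String randomFileName(String extension) {
        return UUID.randomUUID().toString() + extension;
    }

    public static void main(String[] args) {
        System.out.println(randomFileName(".tmp"));
    }
}
```

No "nuclear reactor" required: one line, and collisions are negligible in practice.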





Yes I guess you are right.
Thank you for your response.





I want to calculate the time complexity and show the equation for both algorithms. Also, I need a comparison between the two algorithms to show which one is better and why. Could you please help with that? It would be very helpful and much appreciated.
Algorithm1

1. for 1 to m
     for 1 to n
       if (some condition)
         do something
       else
         do something
     end for
   end for
2. for 1 to m
     for 1 to n
       do something
     end for
   end for
3. for 1 to m
     if (some condition)
       do something
   end for
4. do something
5. for 1 to m
     for 1 to m
       if (some condition)
         do something
     end for
   end for
6. for x to y
     for 1 to x
       if (some condition)
         do something
     end for
   end for
Algorithm2

1. for 1 to m
     for 1 to n
       if (condition)
         do something
       else
         do something
     end for
     if (condition)
       do something
   end for
2. do something
3. for 1 to m
     for 1 to m
       if (condition)
         do something
         for 1 to x
           if (condition)
             do something
             break
         end for
         if (condition)
           do something
     end for
   end for
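One hedged way to set up the equations being asked for, assuming every condition test and every "do something" costs one unit and counting only iterations (the c_i are per-step constants; step 3 of Algorithm2 treats the inner x-loop as always taken, i.e. a worst case):

```latex
T_1(m,n,x,y) \approx c_1 mn + c_2 mn + c_3 m + c_4 + c_5 m^2 + c_6\, x (y - x + 1)
T_2(m,n,x)   \approx c_1 m(n + 1) + c_2 + c_3\, m^2 (1 + x)
```

Keeping only the dominant terms, T_1 = O(mn + m^2 + xy) and T_2 = O(mn + m^2 x); which is "better" then depends on the relative sizes of m, n, x and y.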
Thanks in advance.





Help how? You could answer the question yourself by coding the two algorithms and running some tests.





Thank you, Richard MacCutchan, for your response. I have to prove by equation that one is better than the other, and by how much. I need help with that.





Benchmarking is not a way to determine algorithmic complexity.
It can be used to confirm that your theoretically determined O() is correct, but a few timing values are not an O().
To the OP: In my studies, algorithmic complexity was the sole subject of a quarter-long course. The textbook was a few hundred pages long. I don't remember if I took it during our junior or senior year; in any case, we were quite experienced programmers by then, and had completed courses in related subjects like statistics and queueing theory. The foundation was in place.
If you want to learn how to determine algorithmic complexity in general, you may have a long path to walk. There may be quick and easy answers for specific, simple algorithms (an experienced guy can pop an O() right out of his head for the examples you present), but you would benefit from learning more general methods of complexity estimation.
And: If you are doing timing benchmarks, make sure to test over a broad range of input, here in terms of problem size. With small problem sets, initial setup, reporting etc. can make up a large part of the time consumed. Also, be very much aware of compiler optimizations. Some compilers are clever about short-cutting loops etc. in a way that may severely affect timings. It may be safer to turn all optimization off. (Actually, for verifying a theoretical O(), you might come out better by adding counters to your code in appropriate places, rather than measuring execution times!)





Why are you telling me this? You need to reply to the OP.





Because you seem to suggest to the OP that "coding the two algorithms and running some tests" can serve as a replacement for an analytic complexity determination.
Besides: What is posted in a public forum is not directed at you alone, but to all readers of the forum.





All I said was, "You could answer the question yourself by coding the two algorithms and running some tests." I left it up to the OP to decide if that was what he wanted. But either way, it is unlikely that the OP will read your message, and so will miss some useful information.





Member 11512486 wrote: Could you please help with that? It would be beneficial for me and very much appreciable.
Sure! I hope the below helps.
Start by learning about complexity: Understanding Time Complexity with Simple Examples (GeeksforGeeks)
Then have a look at something like: How to analyze time complexity: Count your steps (YourBasic)
There is much more material online in case you need more info to understand what it means.
Overall:
1. code a sample solving the same problem X using algorithms 1 & 2
2. compare the time taken for both to solve that problem with multiple sets of inputs (of increasing size)
3. you can also compare the space used by both as another comparison parameter
Try it out!





Thank you for your response. I have executed both algorithms, and the second algorithm takes less time. But I need to prove it by equation. You are an expert and I am not, so your help would be very valuable to me.





The second one has an "if ... for".
As for the rest, they have a deterministic number of iterations. Count iterations, then assume the second can be faster because of the "if ... for" (depending on the counts).





How can you state as a general conclusion that "the second algorithm takes less time"?
Execution times depend, for algorithm 1, on four different input variables (m, n, x, y) and the outcomes of four different if-tests, so you may consider p(if1) .. p(if4) four additional input variables. Total execution time depends heavily on the resources (mainly CPU) required for evaluating the four different conditions and for the seven different actions. The execution counts for six of the actions depend on two or three input variables.
Algorithm 2 has three input variables (m, n, x) and five tests, so p(if1) .. p(if5) are input variables. Timings depend on five different evaluation times and seven different action execution times. Here, too, the execution counts for six of the seven actions, and two of the conditions, depend on two or three input variables.
An algorithm, as such, is independent of both implementation and the context where it is applied; each step may have an arbitrary interpretation. (The pseudocode description reflects this.) So, e.g., performing another loop iteration may involve a lot more than just incrementing an integer variable and performing a conditional jump back to the top; it could take a lot of work to get hold of the next element. The loop management itself might be the dominating cost, and measured time might be practically independent of all the 'if()'s and 'do something's.
Total execution times depend heavily on the relative cost of evaluating an 'if()' vs. an action ('do something'). Say an 'if()' searches a candidate photo for similar ones in a photo archive (a very heavy test) and the action simply logs the outcome (very lightweight); then the number of tests greatly dominates the execution time, almost independently of the p(if) probabilities. Or the reverse: if a test is super-cheap, like looking up the photo's serial number in a list, but the action is like compressing the image, then test outcomes (determined by p(if)) strongly affect execution times.
All of this affects benchmark timings, but only them. No matter how dramatic the effects on the timings when you apply the same algorithm in another context, or even port the same program code to another platform with significantly different relative costs: O() is unaffected by the relative costs of loop management, tests and actions.
Given these considerations, how can we make any reliable statements about algorithmic complexity based on execution time measurements with a (usually very small) set of combinations of eight explicit input values and a bunch of hidden ones (the relative costs of test/action execution)? Can you claim that algorithm 2 is faster than algorithm 1 for any combination of input variables? Even when considering that y affects algorithm 1 but not algorithm 2? Even when algorithm 1 depends on four different p(if) values, and algorithm 2 on five?
If you give me the freedom to set arbitrary cost values (in terms of CPU execution times) for all actions and if() tests, and to freely set values for m, n, x and y and the p(if) values, I am quite certain that I could make algorithm 2 significantly slower for some given values, but faster for others.
Execution times "all depend"! If you set up an O() expression in the input variables, you can, for a given set of input values, estimate the (relative) expected execution time of the alternatives.
Usually, you simplify the O() expression, leaving out e.g. constant parts (like step 4 / step 2, independent of input values). If you know that e.g. the if-tests will always be super-simple, you may leave them out, but you should make that explicit in your presentation of the complexity. It is customary to give O() only by the highest-order sub-expression(s) (such as m*m*x in algorithm 2, disregarding all p(if)), but as a main rule, that restricts your complexity analysis to non-extreme values of the parts you have shaved off. If you know the application area, you will usually know the expected range of variation of each value, so you e.g. chop off the x if it varies from 1 to 3, but not if it varies from zero to a billion.
In any case: benchmarks generally cover only a tiny selection of the possible explicit inputs (in particular in an 8-dimensional input space!), and they are affected by a large number of hidden (usually unidentified) factors. For a specific application of the algorithm, in a specific implementation environment, and with a limited value range for all inputs, benchmarks may give an indication of expected execution time. But they cannot provide anything comparable to an analytically derived expression for the complexity.





You're overthinking it: it's a thought experiment.
One assumes all the "if's" and "do somethings" are equivalent. It's the "complexity" (looping and branching) that one is interested in; not a bunch of "yeah buts".





Hi all, I tried to implement the Sieve of Eratosthenes for finding all primes under a given number n.
I came up with this solution, and I think it's kind of neat. But when I run it, it is many times slower than the standard solution I found on the net afterwards.
Can someone explain to me why my solution is slower?
public static List<Integer> sieveEratosthenes(List<Integer> numbers) {
    int baseNum = 0;
    while (baseNum < Math.sqrt(numbers.size())) {
        for (int i = 1; i < numbers.size() - baseNum; i++) {
            if (numbers.get(baseNum + i) % numbers.get(baseNum) == 0) {
                numbers.remove(baseNum + i);
            }
        }
        baseNum++;
    }
    return numbers;
}





while (baseNum < Math.sqrt(numbers.size())) {
You are calculating the square root on each pass round the loop. Change it to:
int root = (int) Math.sqrt(numbers.size());
while (baseNum < root) {
Also, make sure your list only contains odd numbers. Or, even faster, use a fixed-size array.
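For reference, here is a sketch of the usual array-based formulation (not necessarily the exact "standard solution" found on the net). The big cost difference: marking a composite in a boolean array is O(1), while List.remove() shifts every later element, so each removal is O(n).

```java
import java.util.ArrayList;
import java.util.List;

public class Sieve {
    // Classic Sieve of Eratosthenes over a fixed-size boolean array.
    public static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        List<Integer> primes = new ArrayList<>();
        for (int p = 2; p <= n; p++) {
            if (!composite[p]) {
                primes.add(p);
                // Start at p*p: smaller multiples were marked by smaller primes.
                for (long q = (long) p * p; q <= n; q += p)
                    composite[(int) q] = true;
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```

This also avoids the shrinking-list problem, since the array size never changes.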





I suspect it's also going to be incorrect, since numbers.size() will potentially change on each iteration.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
 Homer




